Ceph Object Storage: S3
Ceph object storage
Unlike a disk with a filesystem on it, object storage cannot be accessed directly by the operating system; it is reached only through application-level APIs. Ceph is a distributed object storage system that exposes an object storage interface through the Ceph Object Gateway, also known as the RADOS Gateway (RGW), which is built on top of the Ceph RADOS layer. RGW uses librgw (the RADOS Gateway library) and librados, allowing applications to establish a connection to Ceph object storage. RGW gives applications a RESTful, S3- and Swift-compatible interface for storing data in the Ceph cluster as objects. Ceph also supports multi-tenant object storage, accessible through the RESTful API. In addition, RGW supports the Ceph Admin API, so the Ceph storage cluster can be managed with native API calls.
The librados library is very flexible and allows user applications to access the Ceph storage cluster directly through C, C++, Java, Python, and PHP bindings. Ceph object storage also offers multi-site capability, which provides a disaster-recovery solution.
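As a minimal sketch of this direct librados path, assuming the python-rados binding is installed and that /etc/ceph/ceph.conf and the client.admin keyring are readable on the node (both are assumptions about the environment, not steps from this tutorial):

import rados

# Open a handle to the cluster using the local ceph.conf; the default client.admin keyring is assumed.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
print(cluster.get_fsid())      # cluster id, should match the id shown by "ceph -s"
print(cluster.list_pools())    # pool names, including the RGW pools once they exist
cluster.shutdown()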
Deploying object storage
Install ceph-radosgw
[ceph-admin@ceph-node1 my-cluster]$ sudo yum install ceph-radosgw
Deploy RGW
[ceph-admin@ceph-node1 my-cluster]$ ceph-deploy rgw create ceph-node1 ceph-node2 ceph-node3
[ceph-admin@ceph-node1 my-cluster]$ sudo netstat -tnlp |grep 7480
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 15418/radosgw
To change the listening port to 80, edit the configuration file and restart the RGW service:
vim /etc/ceph/ceph.conf
[client.rgw.ceph-node1]
rgw_frontends = "civetweb port=80" sudo systemctl restart ceph-radosgw@rgw.ceph-node1.service
Accessing Ceph object storage with the S3 API
Create a radosgw user
[ceph-admin@ceph-node1 my-cluster]$ radosgw-admin user create --uid=radosgw --display-name="radosgw"
{
"user_id": "radosgw",
"display_name": "radosgw",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "radosgw",
"access_key": "DKOORDOMS6YHR2OW5M23",
"secret_key": "OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
Install the s3cmd client
[root@localhost ~]# yum install -y s3cmd
[root@localhost ~]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: DKOORDOMS6YHR2OW5M23
Secret Key: OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
Default Region [US]: ZH

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]:

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: DKOORDOMS6YHR2OW5M23
  Secret Key: OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
  Default Region: ZH
  S3 Endpoint: s3.amazonaws.com
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
Edit the s3cmd configuration file so that host_base and host_bucket point at the RGW endpoint:
[root@localhost ~]# cat .s3cfg
[default]
access_key = DKOORDOMS6YHR2OW5M23
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
content_disposition =
content_type =
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = ceph-node1:7480
host_bucket = %(bucket)s.ceph-node1:7480
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
throttle_max = 100
upload_id =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
Create a bucket and upload a file
[root@localhost ~]# s3cmd mb s3://first-bucket
Bucket 's3://first-bucket/' created
[root@localhost ~]# s3cmd ls
2019-02-15 07:45 s3://first-bucket
[root@localhost ~]# s3cmd put /etc/hosts s3://first-bucket
upload: '/etc/hosts' -> 's3://first-bucket/hosts' [1 of 1]
239 of 239 100% in 1s 175.80 B/s done
[root@localhost ~]# s3cmd ls s3://first-bucket
2019-02-15 07:47 239 s3://first-bucket/hosts
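Any standard S3 SDK can use the same endpoint and keys that s3cmd uses. Below is a rough equivalent of the steps above, sketched with the Python boto3 package; boto3 is not part of this tutorial's setup, so treat the snippet as an illustration and adjust the endpoint to your own RGW node:

import boto3

# Endpoint and credentials are the RGW node and the radosgw user created earlier.
s3 = boto3.client(
    's3',
    endpoint_url='http://ceph-node1:7480',
    aws_access_key_id='DKOORDOMS6YHR2OW5M23',
    aws_secret_access_key='OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4',
)

s3.create_bucket(Bucket='first-bucket')   # may raise an error if the bucket already exists
s3.upload_file('/etc/hosts', 'first-bucket', 'hosts')

# List what is in the bucket, like "s3cmd ls s3://first-bucket".
for obj in s3.list_objects_v2(Bucket='first-bucket').get('Contents', []):
    print(obj['Key'], obj['Size'])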
Accessing Ceph object storage with the Swift API
Create a Swift API subuser
[ceph-admin@ceph-node1 my-cluster]$ radosgw-admin subuser create --uid=radosgw --subuser=radosgw:swift --access=full
{
"user_id": "radosgw",
"display_name": "radosgw",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{
"id": "radosgw:swift",
"permissions": "full-control"
}
],
"keys": [
{
"user": "radosgw",
"access_key": "DKOORDOMS6YHR2OW5M23",
"secret_key": "OOBNCO0d03oiBaLCtYePPQ7gIeUR2Y7UuB24pBW4"
}
],
"swift_keys": [
{
"user": "radosgw:swift",
"secret_key": "bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD"
}
],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}
Install the Swift API client
[root@localhost ~]# yum install python-pip -y
[root@localhost ~]# pip install --upgrade python-swiftclient
Test
[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD list
first-bucket
[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD post second-bucket
[root@localhost ~]# swift -A http://ceph-node1:7480/auth/1.0 -U radosgw:swift -K bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD list
first-bucket
second-bucket
[root@localhost ~]# s3cmd ls
2019-02-15 07:45 s3://first-bucket
2019-02-15 08:18 s3://second-bucket
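The python-swiftclient package installed above can also be used as a library rather than through the swift CLI. A small sketch of the same operations, using the subuser and secret key created earlier (the container name third-bucket is only an example):

import swiftclient

# v1 auth against RGW, with the radosgw:swift subuser and its swift secret key.
conn = swiftclient.Connection(
    authurl='http://ceph-node1:7480/auth/1.0',
    user='radosgw:swift',
    key='bAL11KzCYE1GThPWY70tUo6dVIhvuIbSFEBP06yD',
)

conn.put_container('third-bucket')        # equivalent to "swift post third-bucket"

# List all containers for this account, equivalent to "swift list".
headers, containers = conn.get_account()
for container in containers:
    print(container['name'])

conn.close()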