Interacting with the Ceph Distributed Storage System from a C Project
I saw someone online asking how to call the API that Ceph exposes from a C project in order to implement distributed storage.
I found the relevant information online, but since I am not a member of that site I could not post an answer there, so I am sharing it here.
May it find whoever needs it :)
————————————————————————————————————
The Ceph Storage Cluster provides the basic storage service that allows Ceph to uniquely deliver object, block, and file storage in one unified system. However, you are not limited to using the RESTful, block, or POSIX interfaces. Based upon RADOS, the librados API enables you to create your own interface to the Ceph Storage Cluster.
The librados API enables you to interact with the two types of daemons in the Ceph Storage Cluster:
- The Ceph Monitor, which maintains a master copy of the cluster map.
- The Ceph OSD Daemon (OSD), which stores data as objects on a storage node.
This guide provides a high-level introduction to using librados. Refer to Architecture for additional details on the Ceph Storage Cluster. To use the API, you need a running Ceph Storage Cluster. See Installation (Quick) for details.
GETTING LIBRADOS
Your client application must link against librados in order to connect to the Ceph Storage Cluster. Before writing an application that uses librados, install librados and its dependencies. The librados API itself is implemented in C++; bindings are also available for C, Python, Java and PHP.
GETTING LIBRADOS FOR C/C++
To install librados development support files for C/C++ on Debian/Ubuntu distributions, execute the following:
sudo apt-get install librados-dev
CONFIGURING A CLUSTER HANDLE
A Ceph Client, via librados, interacts directly with OSDs to store and retrieve data. To interact with OSDs, the client app must invoke librados and connect to a Ceph Monitor. Once connected, librados retrieves the Cluster Map from the Ceph Monitor. When the client app wants to read or write data, it creates an I/O context and binds to a pool. The pool has an associated ruleset that defines how it will place data in the storage cluster. Via the I/O context, the client provides the object name to librados, which takes the object name and the cluster map (i.e., the topology of the cluster) and computes the placement group and OSD for locating the data. Then the client application can read or write data. The client app doesn’t need to learn about the topology of the cluster directly.
The Ceph Storage Cluster handle encapsulates the client configuration, including:
- The user ID for rados_create() or user name for rados_create2() (preferred).
- The cephx authentication key
- The monitor ID and IP address
- Logging levels
- Debugging levels
Thus, the first steps in using the cluster from your app are to 1) create a cluster handle that your app will use to connect to the storage cluster, and then 2) use that handle to connect. To connect to the cluster, the app must supply a monitor address, a username and an authentication key (cephx is enabled by default).
Tip
Talking to different Ceph Storage Clusters – or to the same cluster with different users – requires different cluster handles.
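For example, here is a minimal sketch of holding two handles at once for two different users of the same cluster (the client.guest user name is just a placeholder; error handling is omitted for brevity):
rados_t admin_cluster, guest_cluster;

/* Two separate handles: one per user, even though the cluster is the same. */
rados_create2(&admin_cluster, "ceph", "client.admin", 0);
rados_create2(&guest_cluster, "ceph", "client.guest", 0);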
RADOS provides a number of ways for you to set the required values. For the monitor and encryption key settings, an easy way to handle them is to ensure that your Ceph configuration file contains a keyring path to a keyring file and at least one monitor address (e.g., mon host). For example:
[global]
mon host = 192.168.1.1
keyring = /etc/ceph/ceph.client.admin.keyring
Once you create the handle, you can read a Ceph configuration file to configure the handle. You can also pass arguments to your app and parse them with the function for parsing command line arguments (e.g., rados_conf_parse_argv()), or parse Ceph environment variables (e.g., rados_conf_parse_env()). Some wrappers may not implement convenience methods, so you may need to implement these capabilities. The upstream Ceph documentation includes a diagram of the high-level flow for the initial connection (not reproduced here).
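As a sketch of those alternatives (assuming a handle created with rados_create2() as described above; the monitor address and keyring path are placeholders), the same settings can also be applied programmatically:
/* Set individual options directly on the handle. */
rados_conf_set(cluster, "mon_host", "192.168.1.1");
rados_conf_set(cluster, "keyring", "/etc/ceph/ceph.client.admin.keyring");

/* Pick up settings from the environment (NULL means the default CEPH_ARGS variable). */
rados_conf_parse_env(cluster, NULL);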
Once connected, your app can invoke functions that affect the whole cluster with only the cluster handle. For example, once you have a cluster handle, you can do the following (a short sketch follows the list):
- Get cluster statistics
- Use Pool Operation (exists, create, list, delete)
- Get and set the configuration
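For instance, a minimal sketch of fetching cluster-wide statistics with rados_cluster_stat() (assuming a connected cluster handle from the examples below):
struct rados_cluster_stat_t stats;
if (rados_cluster_stat(cluster, &stats) == 0) {
        /* Sizes are reported in kilobytes. */
        printf("used %llu KB of %llu KB, %llu objects\n",
               (unsigned long long)stats.kb_used,
               (unsigned long long)stats.kb,
               (unsigned long long)stats.num_objects);
}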
One of the powerful features of Ceph is the ability to bind to different pools. Each pool may have a different number of placement groups, object replicas and replication strategies. For example, a pool could be set up as a “hot” pool that uses SSDs for frequently used objects or a “cold” pool that uses erasure coding.
The main difference in the various librados bindings is between C and the object-oriented bindings for C++, Java and Python. The object-oriented bindings use objects to represent cluster handles, IO Contexts, iterators, exceptions, etc.
EXAMPLE (must be linked against librados.so)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rados/librados.h>

int main (int argc, const char **argv)
{
        /* Declare the cluster handle and required arguments. */
        rados_t cluster;
        char cluster_name[] = "ceph";
        char user_name[] = "client.admin";
        uint64_t flags = 0;

        /* Initialize the cluster handle with the "ceph" cluster name and the "client.admin" user */
        int err;
        err = rados_create2(&cluster, cluster_name, user_name, flags);
        if (err < 0) {
                fprintf(stderr, "%s: Couldn't create the cluster handle! %s\n", argv[0], strerror(-err));
                exit(EXIT_FAILURE);
        } else {
                printf("\nCreated a cluster handle.\n");
        }

        /* Read a Ceph configuration file to configure the cluster handle. */
        err = rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        if (err < 0) {
                fprintf(stderr, "%s: cannot read config file: %s\n", argv[0], strerror(-err));
                exit(EXIT_FAILURE);
        } else {
                printf("\nRead the config file.\n");
        }

        /* Read command line arguments */
        err = rados_conf_parse_argv(cluster, argc, argv);
        if (err < 0) {
                fprintf(stderr, "%s: cannot parse command line arguments: %s\n", argv[0], strerror(-err));
                exit(EXIT_FAILURE);
        } else {
                printf("\nRead the command line arguments.\n");
        }

        /* Connect to the cluster */
        err = rados_connect(cluster);
        if (err < 0) {
                fprintf(stderr, "%s: cannot connect to cluster: %s\n", argv[0], strerror(-err));
                exit(EXIT_FAILURE);
        } else {
                printf("\nConnected to the cluster.\n");
        }

        /* ... continued in the I/O context example below ... */
CREATING AN I/O CONTEXT
Once your app has a cluster handle and a connection to a Ceph Storage Cluster, you may create an I/O Context and begin reading and writing data. An I/O Context binds the connection to a specific pool. The user must have appropriate CAPS permissions to access the specified pool. For example, a user with read access but not write access will only be able to read data. I/O Context functionality includes:
- Write/read data and extended attributes
- List and iterate over objects and extended attributes
- Snapshot pools, list snapshots, etc.
RADOS enables you to interact both synchronously and asynchronously. Once your app has an I/O Context, read/write operations only require you to know the object/xattr name. The CRUSH algorithm encapsulated in librados uses the cluster map to identify the appropriate OSD. OSD daemons handle the replication, as described in Smart Daemons Enable Hyperscale. The librados library also maps objects to placement groups, as described in Calculating PG IDs.
The following examples use the default data pool. However, you may also use the API to list pools, ensure they exist, or create and delete pools. For the write operations, the examples illustrate how to use synchronous mode. For the read operations, the examples illustrate how to use asynchronous mode.
Important
Use caution when deleting pools with this API. If you delete a pool, the pool and ALL DATA in the pool will be lost.
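As a sketch of the pool operations mentioned above (assuming a connected cluster handle; the pool name mypool is a placeholder), pools can be looked up, created, and (with great care) deleted:
/* rados_pool_lookup() returns the pool id, or a negative error if the pool does not exist. */
int64_t pool_id = rados_pool_lookup(cluster, "mypool");
if (pool_id < 0) {
        if (rados_pool_create(cluster, "mypool") < 0)
                fprintf(stderr, "cannot create pool\n");
}

/* Deleting a pool destroys ALL of its data; keep this commented out unless you really mean it. */
/* rados_pool_delete(cluster, "mypool"); */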
EXAMPLE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rados/librados.h>

int main (int argc, const char **argv)
{
        /*
         * Continued from previous C example, where the cluster handle and
         * connection are established. First declare an I/O Context.
         */
        rados_ioctx_t io;
        const char *poolname = "data";

        err = rados_ioctx_create(cluster, poolname, &io);
        if (err < 0) {
                fprintf(stderr, "%s: cannot open rados pool %s: %s\n", argv[0], poolname, strerror(-err));
                rados_shutdown(cluster);
                exit(EXIT_FAILURE);
        } else {
                printf("\nCreated I/O context.\n");
        }

        /* Write data to the cluster synchronously. */
        err = rados_write(io, "hw", "Hello World!", 12, 0);
        if (err < 0) {
                fprintf(stderr, "%s: Cannot write object \"hw\" to pool %s: %s\n", argv[0], poolname, strerror(-err));
                rados_ioctx_destroy(io);
                rados_shutdown(cluster);
                exit(EXIT_FAILURE);
        } else {
                printf("\nWrote \"Hello World\" to object \"hw\".\n");
        }

        char xattr[] = "en_US";
        err = rados_setxattr(io, "hw", "lang", xattr, 5);
        if (err < 0) {
                fprintf(stderr, "%s: Cannot write xattr to pool %s: %s\n", argv[0], poolname, strerror(-err));
                rados_ioctx_destroy(io);
                rados_shutdown(cluster);
                exit(EXIT_FAILURE);
        } else {
                printf("\nWrote \"en_US\" to xattr \"lang\" for object \"hw\".\n");
        }

        /*
         * Read data from the cluster asynchronously.
         * First, set up asynchronous I/O completion.
         */
        rados_completion_t comp;
        err = rados_aio_create_completion(NULL, NULL, NULL, &comp);
        if (err < 0) {
                fprintf(stderr, "%s: Could not create aio completion: %s\n", argv[0], strerror(-err));
                rados_ioctx_destroy(io);
                rados_shutdown(cluster);
                exit(EXIT_FAILURE);
        } else {
                printf("\nCreated AIO completion.\n");
        }

        /* Next, read data using rados_aio_read. */
        char read_res[100] = {0};   /* zero-filled so the result is null-terminated for printf */
        err = rados_aio_read(io, "hw", comp, read_res, 12, 0);
        if (err < 0) {
                fprintf(stderr, "%s: Cannot read object. %s %s\n", argv[0], poolname, strerror(-err));
                rados_ioctx_destroy(io);
                rados_shutdown(cluster);
                exit(EXIT_FAILURE);
        }

        /* Wait for the operation to complete before touching the read buffer. */
        rados_aio_wait_for_complete(comp);
        printf("\nRead object \"hw\". The contents are:\n %s \n", read_res);

        /* Release the asynchronous I/O completion handle to avoid memory leaks. */
        rados_aio_release(comp);

        char xattr_res[100] = {0};
        err = rados_getxattr(io, "hw", "lang", xattr_res, 5);
        if (err < 0) {
                fprintf(stderr, "%s: Cannot read xattr. %s %s\n", argv[0], poolname, strerror(-err));
                rados_ioctx_destroy(io);
                rados_shutdown(cluster);
                exit(EXIT_FAILURE);
        } else {
                printf("\nRead xattr \"lang\" for object \"hw\". The contents are:\n %s \n", xattr_res);
        }

        err = rados_rmxattr(io, "hw", "lang");
        if (err < 0) {
                fprintf(stderr, "%s: Cannot remove xattr. %s %s\n", argv[0], poolname, strerror(-err));
                rados_ioctx_destroy(io);
                rados_shutdown(cluster);
                exit(EXIT_FAILURE);
        } else {
                printf("\nRemoved xattr \"lang\" for object \"hw\".\n");
        }

        err = rados_remove(io, "hw");
        if (err < 0) {
                fprintf(stderr, "%s: Cannot remove object. %s %s\n", argv[0], poolname, strerror(-err));
                rados_ioctx_destroy(io);
                rados_shutdown(cluster);
                exit(EXIT_FAILURE);
        } else {
                printf("\nRemoved object \"hw\".\n");
        }
}
CLOSING SESSIONS
Once your app has finished with the I/O Context and the cluster handle, destroy the I/O Context and shut down the connection to release the associated resources.
EXAMPLE
rados_ioctx_destroy(io);
rados_shutdown(cluster);
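To compile the examples, link against librados; for example (the source file name is just a placeholder):
gcc ceph_client.c -lrados -o ceph_client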
Finally:
Most of the articles online are written by research staff at big companies: they lay out a pile of theory but never explain how to actually use it in real development.
This article is meant to be a useful supplement. I hope it helps those who need it!