Data Replication in a Multi-Cloud Environment using Hadoop & Peer-to-Peer technologies
A few years ago, I started working on a project named Jxtadoop, which provides Hadoop Distributed File System capabilities on top of a peer-to-peer network. The initial goal was simply to load a file once into a Data Cloud, which then takes care of replication wherever the peers (datanodes) are deployed... I also wanted to avoid putting my data outside of my private network, to ensure complete data privacy.
After some time, it became apparent that this solution is also a very good fit for data replication in a Multi-Cloud Environment. Any file (small or big) can be loaded into one cloud and then gets automatically replicated to the other clouds. This makes multi-cloud Data Brokering very easy and straightforward.
Hadoop is a very good candidate to provide these capabilities at the datacenter level. However, when moving to a multi-cloud environment, it is no longer viable unless a Virtual Private Data Network (VPDN) is built. This VPDN is created on top of a peer-to-peer network, which provides redundancy, multi-path routing, privacy and encryption across all the clouds...
Concept
Let's assume we are in a true Multi-Cloud Broker environment. For this blog, I assume I actually have 3 clouds, each hosting multiple workloads (i.e. virtual servers). The picture below depicts a classical configuration which will appear in the coming years, where businesses will source IT from different Cloud providers and really consume IT in a true Service Broker model.
ACME Corp. has its headquarters based out of France and subsidiaries all over the world.
The new strategy is to source IT infrastructure from local service providers to deliver IT services directly to the local branches. There is no intention to set up local IT anymore.
Data replication, propagation and protection are real issues in such a configuration, and reversibility also has to be addressed.
Setting up this Virtual Private Data Network will support this Service Broker Strategy.
Conceptually, the solution is very simple. A master node (called the Rendez-vous Namenode) is located in the HQ and is the brain of the VPDN. That's where all the logic is handled, such as data availability, multi-path data transfer and data placement... In each Cloud, there is a Relay Datanode which acts as the entry point for that Cloud. It plays a routing role, communicating directly with the other Cloud relays, and also a buffering role for data transmission. To avoid any SPOF, all these peers can be deployed in a multi-instance mode.
Finally each workload instance (physical servers, virtual machines, containers...) hosts a Peer Datanode which is the actual endpoint for data storage and consumption.
Virtual Private Data Network Architecture
As explained in the previous section, the overall architecture relies on three main components.
- The Namenode Rendez-Vous provides the data transport logic as well as data placement and replication. It holds the overall understanding of the peer-to-peer network topology as well as the data cloud metadata. No data traffic goes through this peer.
- The Datanode Relay provides data storage as well as data transport. The local peers, which can communicate with each other through multicast, rely on the relays to communicate with remote peers located in other data clouds. It can also store data as a temporary buffer.
- The Datanode Peer provides data storage, holding data chunks on each server peer and even on remote desktop peers.
All the peers can be made redundant (multiple Rendez-Vous peers, multiple Relays...) to increase the multi-path routing capability and avoid any SPOF. The data is split into chunks of a pre-defined size and dispatched across the Data Cloud. Data locality can be set to ensure there is one replica per Cloud, or that replicas are confined to a single Cloud (for example for data which must stay in a specific country).
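To make the two locality modes concrete, here is a minimal sketch of how such a placement policy could be expressed. The function name, the policy names and the cloud labels are all illustrative assumptions, not Jxtadoop's actual API.

```python
def place_replicas(clouds, replication, policy, pinned_cloud=None):
    """Return the list of clouds that should hold a replica of a chunk.

    Hypothetical policies:
      - "spread": one replica per cloud first, wrapping around for extras;
      - "pinned": all replicas confined to a single cloud (data residency).
    """
    if policy == "pinned":
        # Data must not leave the designated cloud.
        return [pinned_cloud] * replication
    if policy == "spread":
        # Round-robin across clouds so each cloud gets a replica first.
        return [clouds[i % len(clouds)] for i in range(replication)]
    raise ValueError("unknown policy: %s" % policy)
```

For example, spreading 3 replicas over the three clouds of the test set-up yields one replica in each, while pinning keeps all of them in one country.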
All communications are multi-path, authenticated and encrypted... There is no need to set up VPNs between the Clouds, which could introduce contention points. Here the communication is either direct through multicast or goes through the best (shortest) routing path at the peer-to-peer layer.
There are two kinds of traffic flows: the RPC flow and the DATA flow.
- The RPC flow handles all the signaling required to operate the VPDN, such as routing, heartbeats, placement requests and updates... There is no business data on this flow, hence it is possible to have a set-up where data traffic is limited to a cloud or even a country while the commands are centrally managed.
- The DATA flow is the actual business data transferred over the wire. This flow can be local to a datacentre, using multicast wherever possible. It can also stay local but transit through the Cloud relay across multiple domains. Finally, this flow can go through multiple relays. In the example below, a data block located on the Windows PC gets replicated to the APAC Cloud by going through 2 relays (the DC one and the APAC Cloud one).
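The "best (shortest) routing path" above can be sketched as a plain breadth-first search over the peer graph. The topology and peer names below are invented to mirror the example (a desktop PC reaching the APAC Cloud through two relays); the real peer-to-peer layer's routing is more involved.

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search returning the shortest hop path, or None.

    `links` maps each peer to the peers it can reach directly
    (multicast neighbours, or a relay it is attached to).
    """
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical topology matching the blog's example.
topology = {
    "windows-pc": ["dc-relay"],
    "dc-relay": ["windows-pc", "apac-relay"],
    "apac-relay": ["dc-relay", "apac-peer"],
    "apac-peer": ["apac-relay"],
}
```

Running `shortest_path(topology, "windows-pc", "apac-peer")` yields the two-relay route described above.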
Benefits
This new approach brings many benefits for a multi-Cloud environment and for companies willing to operate their IT with an IT Service Broker model.
- Redundancy : the data is automatically replicated in the Clouds wherever needed ;
- Availability : the data is always available with the use of multiple replicas (3, 5, 7...) ;
- Efficiency : quick deployment, quick capacity expansion ;
- Simplicity : load once on a peer and automated replication ;
- Future-proof : leverage big data technologies ;
- Portable : can run on any server and desktop platforms supporting Java 7 ;
- Confidentiality : all data transfers are encrypted and authenticated ... ;
- Locality : data can be located in a specific Cloud and not leak outside ;
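The availability benefit of odd replica counts can be put in numbers with a back-of-envelope model: if each replica is down independently with probability p, the data is unreachable only when all replicas are down at once. This simplistic independence assumption is mine, not a claim from the project.

```python
def availability(p_down, replicas):
    """Probability that at least one replica is reachable, assuming
    each replica fails independently with probability p_down."""
    return 1.0 - p_down ** replicas
```

With a 1% per-replica outage probability, 3 replicas already give roughly six nines of availability, and each extra pair of replicas multiplies the unavailability by another factor of p squared.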
Setting up your own environment
The technology used to create this Virtual Private Data Network can be found here. The testing described above was done using a physical environment from OVH in France to simulate the HQ. Three clouds were consumed: Numergy (EMEA - France), Rackspace (U.S. - Virginia) and Amazon (APAC - Australia).
The testing leveraged Docker to create multiple Datanode peers on a single VM with a complex network topology (see 1 & 2). The associated containers can be found on the Docker main repository:
- Namenode Rendez-Vous (jxtadoop/namenode)
- Datanode Relay (jxtadoop/relay)
- Datanode Peer (jxtadoop/datanode)
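A launch sequence for the three images could look like the sketch below. The image names come from the list above, but the flags are generic Docker options and any ports or environment variables the images actually require are not documented here, so treat this as an assumption and check each image's documentation.

```shell
# 1. Rendez-vous Namenode (the VPDN brain), typically in the HQ:
docker run -d --name namenode jxtadoop/namenode

# 2. One Relay Datanode per cloud, acting as that cloud's entry point:
docker run -d --name relay jxtadoop/relay

# 3. A Peer Datanode on each workload instance:
docker run -d --name datanode jxtadoop/datanode
```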
Desktop clients have been installed on Mac OS, Windows and Linux. Just ensure you use Windows 7.
Conclusion
This concludes my Jxtadoop project, which will be released as version 1.0.0 later this month. I'll provide a SaaS set-up with a Rendez-vous Namenode and a Relay Peer for quick testing.
Next ideas :
- Roll-out StandaloneHDFSUI for Jxtadoop ;
- Release FileSharing capability based on Jxtadoop ;
- PaaS/SaaS set-up for Jxtadoop with Docker and CloudFoundry ;
- Think about the magic combination of App Virtualization (Docker), Network Virtualization (Open vSwitch) and Data Virtualization (Jxtadoop) ;
Links