Building an iSCSI-Based Shared Disk on Azure File Service
Azure already offers shared storage based on the SMB (Samba) protocol.
However, Azure does not yet allow a Disk to be attached as a shared disk, and in real-world deployments a shared disk is one of the key building blocks of a cluster, for example a quorum disk or a shared data disk.
This article describes how to provide an HA shared Disk by combining an SMB-based file share with the Linux tools target, iscsid, and multipath.
The figure above shows the overall architecture:
Two CentOS 7.2 VMs both mount the same Azure File share, on which a single file, disk.img, is created. Using the iSCSI server software target, both VMs publish this same disk.img as an iSCSI disk. A CentOS 7.2 server running iscsid logs in to both iSCSI disks, and multipath then merges the two into one.
With this architecture, the iSCSI client sees a single iSCSI disk, and that disk is a highly available network disk.
The implementation steps are as follows:
I. Create the File Service
1. Create the File Service in the Azure Portal
In Storage Accounts, click Add:
Fill in the required information and click Create.
Once it has been created, select the new Storage Account:
Select Files:
Click +File Share, enter a name for the File Share, and click Create.
After creation, the portal shows the mount command to use on Linux:
Copy the key from Access Keys.
2. Mount the File Share on both iSCSI servers
First check the OS version:
[root@hwis01 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
Both servers run CentOS 7.0 or later, which supports SMB 3.0.
Using the File Share information from the previous step, run the following commands:
[root@hwis01 ~]# mkdir /file
[root@hwis01 ~]# sudo mount -t cifs //hwiscsi.file.core.chinacloudapi.cn/hwfile /file -o vers=3.0,username=hwiscsi,password=xxxxxxxx==,dir_mode=0777,file_mode=0777
[root@hwis01 ~]# df -h
Filesystem                                   Size  Used Avail Use% Mounted on
/dev/sda1                                     30G  1.1G   29G   4% /
devtmpfs                                     829M     0  829M   0% /dev
tmpfs                                        839M     0  839M   0% /dev/shm
tmpfs                                        839M  8.3M  831M   1% /run
tmpfs                                        839M     0  839M   0% /sys/fs/cgroup
/dev/sdb1                                     69G   53M   66G   1% /mnt/resource
tmpfs                                        168M     0  168M   0% /run/user/0
//hwiscsi.file.core.chinacloudapi.cn/hwfile 5.0T 0 5.0T 0% /file
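The mount above does not persist across reboots. A minimal sketch of making it persistent with a root-only credentials file plus an /etc/fstab entry (written to a temporary directory here instead of /etc; the `_netdev` option and the file names are assumptions, not from the original, and the key is a placeholder):

```shell
# Sketch only: writes to a temp dir instead of /etc. On a real server the
# credentials file would live at e.g. /etc/azurefiles.cred and the line
# would be appended to /etc/fstab.
DEMO=$(mktemp -d)

# Keep the storage key out of fstab: a root-only credentials file.
cat > "$DEMO/azurefiles.cred" <<'EOF'
username=hwiscsi
password=xxxxxxxx==
EOF
chmod 600 "$DEMO/azurefiles.cred"

# _netdev delays mounting until the network is up (assumption: the
# original article does not show a persistent mount).
echo "//hwiscsi.file.core.chinacloudapi.cn/hwfile /file cifs vers=3.0,credentials=$DEMO/azurefiles.cred,dir_mode=0777,file_mode=0777,_netdev 0 0" >> "$DEMO/fstab"
cat "$DEMO/fstab"
```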
II. Create the iSCSI disk on the iSCSI servers
1. Create disk.img in the shared directory
[root@hwis01 ~]# dd if=/dev/zero of=/file/disk.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 20.8512 s, 51.5 MB/s
Check the file locally:
[root@hwis01 ~]# cd /file
[root@hwis01 file]# ll
total 1048576
-rwxrwxrwx. 1 root root 1073741824 Nov disk.img
Check from the other server:
[root@hwis02 ~]# cd /file
[root@hwis02 file]# ll
total 1048576
-rwxrwxrwx. 1 root root 1073741824 Nov disk.img
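The same create-and-check flow can be sketched with a smaller image so it runs in seconds (16 MiB instead of 1 GiB, in a temp directory instead of /file; apart from the size, these are the commands used above):

```shell
# Create a small backing image the same way as /file/disk.img above,
# then verify its exact size in bytes.
IMG_DIR=$(mktemp -d)
dd if=/dev/zero of="$IMG_DIR/disk.img" bs=1M count=16 status=none
IMG_SIZE=$(stat -c %s "$IMG_DIR/disk.img")   # 16 * 1048576 = 16777216
echo "disk.img size: $IMG_SIZE bytes"
```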
2. Install the required software
Install targetcli on the iSCSI servers:
[root@hwis01 file]# yum install -y targetcli
Install iscsi-initiator-utils on the iSCSI client:
[root@hwic01 ~]# yum install iscsi-initiator-utils -y
After installation, check the IQN on the iSCSI client:
[root@hwic01 /]# cd /etc/iscsi/
[root@hwic01 iscsi]# more initiatorname.iscsi
InitiatorName=iqn.2016-10.hw.ic01:client
3. Create the iSCSI disk with targetcli
[root@hwis01 file]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> ls
o- / ....................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 0]
| o- fileio ................................................................................................. [Storage Objects: 0]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 0]
o- loopback ......................................................................................................... [Targets: 0]
/> cd backstores/
/backstores> cd fileio
/backstores/fileio> create disk01 /file/disk.img 1G
/file/disk.img exists, using its size (1073741824 bytes) instead
Created fileio disk01 with size 1073741824
/backstores/fileio> cd /iscsi/
/iscsi> create iqn.2016-10.hw.is01:disk01.lun0
Created target iqn.2016-10.hw.is01:disk01.lun0.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2016-10.hw.is01:disk01.lun0/tpg1/luns/
/iscsi/iqn....un0/tpg1/luns> create /backstores/fileio/disk01
Created LUN 0.
/iscsi/iqn....un0/tpg1/luns> cd ../acls/
/iscsi/iqn....un0/tpg1/acls> create iqn.2016-10.hw.ic01:client
Created Node ACL for iqn.2016-10.hw.ic01:client
Created mapped LUN 0.
/iscsi/iqn....un0/tpg1/acls> ls
o- acls ................................................................................................................ [ACLs: 1]
o- iqn.2016-10.hw.ic01:client ............................................................................... [Mapped LUNs: 1]
o- mapped_lun0 ......................................................................................... [lun0 fileio/disk01 (rw)]
/iscsi/iqn....un0/tpg1/acls> cd /
/> ls
o- / ....................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 0]
| o- fileio ................................................................................................. [Storage Objects: 1]
| | o- disk01 .................................................................... [/file/disk.img (1.0GiB) write-back activated]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 1]
| o- iqn.2016-10.hw.is01:disk01.lun0 ............................................................................... [TPGs: 1]
| o- tpg1 ................................................................................................. [no-gen-acls, no-auth]
| o- acls .............................................................................................................. [ACLs: 1]
| | o- iqn.2016-10.hw.ic01:client ............................................................................. [Mapped LUNs: 1]
| | o- mapped_lun0 ..................................................................................... [lun0 fileio/disk01 (rw)]
| o- luns .............................................................................................................. [LUNs: 1]
| | o- lun0 ..................................................................................... [fileio/disk01 (/file/disk.img)]
| o- portals ........................................................................................................ [Portals: 1]
| o- 0.0.0.0:3260 ....................................................................................................... [OK]
o- loopback ......................................................................................................... [Targets: 0]
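The interactive session above can also be replayed non-interactively, since targetcli accepts a path and a command on one invocation. A sketch that only assembles and prints the command sequence for review (to apply it for real, run each printed line as root on the server):

```shell
# Build the non-interactive equivalent of the targetcli session above.
TARGET_IQN="iqn.2016-10.hw.is01:disk01.lun0"
CLIENT_IQN="iqn.2016-10.hw.ic01:client"
CMDS="targetcli /backstores/fileio create disk01 /file/disk.img 1G
targetcli /iscsi create $TARGET_IQN
targetcli /iscsi/$TARGET_IQN/tpg1/luns create /backstores/fileio/disk01
targetcli /iscsi/$TARGET_IQN/tpg1/acls create $CLIENT_IQN
targetcli saveconfig"
echo "$CMDS"
```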
Check the wwn values in the saved configuration:
[root@hwis01 file]# cd /etc/target
[root@hwis01 target]# ls
backup saveconfig.json
[root@hwis01 target]# vim saveconfig.json
[root@hwis01 target]# grep wwn saveconfig.json
"wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27"
"node_wwn": "iqn.2016-10.hw.ic01:client"
"wwn": "iqn.2016-10.hw.is01:disk01.lun0"
Note the first disk's "wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27"
and copy it into the corresponding configuration on iSCSI server2, so that both servers publish the backing file under the same wwn.
Check the result on server2:
[root@hwis02 target]# grep wwn saveconfig.json
"wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27"
"node_wwn": "iqn.2016-10.hw.ic01:client"
"wwn": "iqn.2016-10.hw.is02:disk01.lun0"
4. Enable and start the target service
[root@hwis01 target]# systemctl enable target
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@hwis01 target]# systemctl start target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: active (exited) since Fri; 7s ago
Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: (code=exited, status=0/SUCCESS)
Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.
5. Allow TCP 3260
Configure the firewall and/or NSG to allow access on TCP port 3260, the default iSCSI port that the portal above was created on.
That is not covered in detail here.
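For reference, on the servers this would typically be a firewalld rule plus a matching inbound NSG rule in the portal. A sketch that only prints the firewalld commands (run them as root to apply; firewalld itself is an assumption, since the original does not show which firewall is in use):

```shell
# Print (not run) the firewalld commands that open the iSCSI port.
ISCSI_PORT=3260
echo "firewall-cmd --permanent --add-port=${ISCSI_PORT}/tcp"
echo "firewall-cmd --reload"
```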
III. Configure iSCSI on the iSCSI client
1. Discover and log in to the iSCSI disks
Discovery:
[root@hwic01 iscsi]# iscsiadm -m discovery -t sendtargets -p 10.1.1.5
10.1.1.5:3260,1 iqn.2016-10.hw.is02:disk01.lun0
Log in:
[root@hwic01 iscsi]# iscsiadm --mode node --targetname iqn.2016-10.hw.is02:disk01.lun0 --portal 10.1.1.5 --login
Logging in to [iface: default, target: iqn.2016-10.hw.is02:disk01.lun0, portal: 10.1.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2016-10.hw.is02:disk01.lun0, portal: 10.1.1.5,3260] successful.
The same discovery and login are repeated against server1's portal. Afterwards, the IQN information appears in the following directory:
[root@hwic01 /]# ls /var/lib/iscsi/nodes
iqn.2016-10.hw.is01:disk01.lun0  iqn.2016-10.hw.is02:disk01.lun0
Two newly added disks, sdc and sdd, now exist under /dev.
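For both paths to appear, discovery and login must be run against each server's portal. A sketch of that loop, printed rather than executed (10.1.1.5 is server2 from the transcript above; 10.1.1.4 is a hypothetical address for server1, which the original does not show):

```shell
# Print the discovery commands for both portals, then a single
# login-all. Remove the echoes to execute on a real client.
for PORTAL in 10.1.1.4 10.1.1.5; do
    echo "iscsiadm -m discovery -t sendtargets -p $PORTAL"
done
echo "iscsiadm -m node --loginall=all"
```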
2. Install the multipath software
[root@hwic01 dev]# yum install device-mapper-multipath -y
Copy the sample configuration file:
cd /etc
cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf .
Edit the configuration file:
vim multipath.conf
blacklist {
    devnode "^sda$"
    devnode "^sdb$"
}
defaults {
find_multipaths yes
user_friendly_names yes
path_grouping_policy multibus
failback immediate
no_path_retry fail
}
Start the service:
[root@hwic01 etc]# systemctl enable multipathd
[root@hwic01 etc]# systemctl start multipathd
[root@hwic01 etc]# systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri; 11s ago
Process: ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
Process: ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
Process: ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
Main PID: (multipathd)
CGroup: /system.slice/multipathd.service
└─ /sbin/multipathd
Nov hwic01 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Nov hwic01 systemd[1]: PID file /run/multipathd/multipathd.pid not readable (yet?) after start.
Nov hwic01 systemd[1]: Started Device-Mapper Multipath Device Controller.
Nov hwic01 multipathd[]: mpatha: load table [0 2097152 multipath service-time ...]
Nov hwic01 multipathd[]: mpatha: event checker started
Nov hwic01 multipathd[]: path checkers start up
Refresh multipath (flush the existing maps):
[root@hwic01 etc]# multipath -F
Check:
[root@hwic01 etc]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 2:0:0:0 sdc 8:32 active undef running
`- 3:0:0:0 sdd 8:48 active undef running
The two iSCSI disks have been merged into a single disk.
[root@hwic01 /]# cd /dev/mapper/
[root@hwic01 mapper]# ll
total 0
crw-------. 1 root root 10, 236 Nov control
lrwxrwxrwx. 1 root root       7 Nov mpatha -> ../dm-0
Partition dm-0:
[root@hwic01 dev]# fdisk /dev/mapper/mpatha
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x44f032cb.
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set
Command (m for help): p
Disk /dev/dm-0: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x44f032cb
Device Boot      Start         End      Blocks   Id  System
mpatha1           2048     2097151     1047552   83  Linux
Command (m for help): w
The partition table has been altered!
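The same partitioning can be done non-interactively, which is easier to script. A sketch using parted, printed for review rather than executed (parted here is an alternative to the fdisk dialogue above, not what the original used):

```shell
# Print the parted equivalent of the interactive fdisk session.
DEV=/dev/mapper/mpatha
echo "parted -s $DEV mklabel msdos"
echo "parted -s $DEV mkpart primary ext4 1MiB 100%"
```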
Format it:
[root@hwic01 mapper]# mkfs.ext4 /dev/mapper/mpatha1
Mount it and check:
[root@hwic01 mapper]# mkdir /iscsi
[root@hwic01 mapper]# mount /dev/mapper/mpatha1 /iscsi
[root@hwic01 mapper]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda1             30G  1.2G   29G   5% /
devtmpfs             829M     0  829M   0% /dev
tmpfs                839M     0  839M   0% /dev/shm
tmpfs                839M  8.3M  831M   1% /run
tmpfs                839M     0  839M   0% /sys/fs/cgroup
/dev/sdb1             69G   53M   66G   1% /mnt/resource
tmpfs                168M     0  168M   0% /run/user/0
/dev/mapper/mpatha1  985M  2.5M  915M   1% /iscsi
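To mount this filesystem automatically at boot, the client's /etc/fstab entry needs network-aware options, for example (a sketch, not shown in the original: `_netdev` waits for the network and iSCSI login, and `nofail` lets boot continue if all paths are down):

```
/dev/mapper/mpatha1  /iscsi  ext4  _netdev,nofail  0 0
```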
IV. Check the HA and other properties of the iSCSI disk
1. HA test
Stop the target service on iSCSI server1:
[root@hwis01 target]# systemctl stop target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Fri; 7s ago
Process: ExecStop=/usr/bin/targetctl clear (code=exited, status=0/SUCCESS)
Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: (code=exited, status=0/SUCCESS)
Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.
Nov hwis01 systemd[1]: Stopping Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Stopped Restore LIO kernel target configuration.
Check on the iSCSI client:
[root@hwic01 iscsi]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 2:0:0:0 sdc 8:32 failed faulty running
`- 3:0:0:0 sdd 8:48 active undef running
One path has failed, but the disk continues to work normally:
[root@hwic01 iscsi]# ll
total 16
-rw-r--r--. 1 root root     0 Nov a
drwx------. 2 root root 16384 Nov lost+found
[root@hwic01 iscsi]# touch b
[root@hwic01 iscsi]# ll
total 16
-rw-r--r--. 1 root root     0 Nov a
-rw-r--r--. 1 root root     0 Nov b
drwx------. 2 root root 16384 Nov lost+found
Now restore the service:
[root@hwis01 target]# systemctl start target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: active (exited) since Fri; 3s ago
Process: ExecStop=/usr/bin/targetctl clear (code=exited, status=0/SUCCESS)
Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: (code=exited, status=0/SUCCESS)
Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.
multipath on the client is back to normal:
[root@hwic01 iscsi]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
|- 2:0:0:0 sdc 8:32 active undef running
`- 3:0:0:0 sdd 8:48 active undef running
2. IOPS
Run an IOPS test using the method from the earlier article on IOPS testing:
[root@hwic01 ~]# ./iops.py /dev/dm-0
/dev/dm-0, 1.07 G, sectorsize=512B, #threads=32, pattern=random:
 512  B blocks: 383.1 IO/s, 196.1 kB/s (  1.6 Mbit/s)
   1 kB blocks: 548.5 IO/s, 561.6 kB/s (  4.5 Mbit/s)
   2 kB blocks: 495.8 IO/s,   1.0 MB/s (  8.1 Mbit/s)
   4 kB blocks: 414.1 IO/s,   1.7 MB/s ( 13.6 Mbit/s)
   8 kB blocks: 376.2 IO/s,   3.1 MB/s ( 24.7 Mbit/s)
  16 kB blocks: 357.5 IO/s,   5.9 MB/s ( 46.9 Mbit/s)
  32 kB blocks: 271.0 IO/s,   8.9 MB/s ( 71.0 Mbit/s)
  65 kB blocks: 223.0 IO/s,  14.6 MB/s (116.9 Mbit/s)
 131 kB blocks: 181.2 IO/s,  23.7 MB/s (190.0 Mbit/s)
 262 kB blocks: 137.7 IO/s,  36.1 MB/s (288.9 Mbit/s)
 524 kB blocks:  95.0 IO/s,  49.8 MB/s (398.6 Mbit/s)
   1 MB blocks:  55.4 IO/s,  58.1 MB/s (465.0 Mbit/s)
   2 MB blocks:  37.5 IO/s,  78.7 MB/s (629.9 Mbit/s)
   4 MB blocks:  24.8 IO/s, 103.8 MB/s (830.6 Mbit/s)
   8 MB blocks:  16.6 IO/s, 139.2 MB/s (  1.1 Gbit/s)
  16 MB blocks:  11.2 IO/s, 188.7 MB/s (  1.5 Gbit/s)
  33 MB blocks:   5.7 IO/s, 190.0 MB/s (  1.5 Gbit/s)
The disk delivers roughly 500 IOPS, with bandwidth topping out around 1.5 Gbps.
Summary:
With Azure File Service, a file on the share can be published as an iSCSI disk, which iSCSI clients can then mount. With multiple iSCSI servers providing the target service, the multipath software turns this into an HA iSCSI disk solution.
This approach is a good fit for clusters that need shared disks or quorum disks.