1-15-2-RAID5 Building an Enterprise RAID Array (RAID1, RAID5, RAID10)
Building RAID5
Step 1: Add four disks to the machine, boot, and confirm they are detected (details omitted)
Step 2: Partition each of the four disks with fdisk (the original screenshot of the result is not reproduced here)
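The interactive fdisk session itself is not shown. As a rough non-interactive sketch of the same preparation (an assumption: the original used fdisk by hand; `parted` is substituted here purely for scriptability, and the function name is illustrative), each disk gets one primary partition flagged for RAID:

```shell
# Hypothetical non-interactive equivalent of the manual fdisk session:
# one primary partition spanning the disk, flagged for Linux RAID.
partition_for_raid() {
    local disk="$1"                      # e.g. sdb
    parted -s "/dev/$disk" mklabel msdos \
        mkpart primary 1MiB 100% \
        set 1 raid on
}
# Would be run as root against blank lab disks:
# for d in sdb sdc sdd sde; do partition_for_raid "$d"; done
```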
Step 3: Create the RAID5 array with mdadm
[root@localhost ~]# mdadm -Cv /dev/md5 -l 5 -n 3 -x 1 /dev/sd[b-e]1
Note: -n specifies 3 active member disks (RAID5 needs at least 3); -x specifies 1 hot spare.
[root@localhost ~]# watch cat /proc/mdstat    # monitor the array status
[root@localhost ~]# mdadm -Cv /dev/md5 -l 5 -n 3 -x 1 /dev/sd[b-e]1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[======>..............] recovery = 31.9% (6700756/20954112) finish=2.7min speed=85916K/sec
unused devices: <none>
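The reported array size follows directly from RAID5's capacity rule: usable space = (N − 1) × per-device size, since one disk's worth of space is consumed by parity. With the figures from the mdadm output above:

```shell
# RAID5 usable capacity: (members - 1) * per-device size
members=3
dev_size_kib=20954112    # "size set to 20954112K" from the mdadm -Cv output
echo $(( (members - 1) * dev_size_kib ))    # prints 41908224, matching the blocks count in /proc/mdstat
```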
[root@localhost ~]# watch cat !$
watch cat /proc/mdstat
[root@localhost ~]# mdadm -Dsv >> /etc/mdadm.conf
[root@localhost ~]# cat !$
cat /etc/mdadm.conf
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=1.2 spares=2 name=localhost.localdomain:5 UUID=5f73c597:481d2f9e:df90d7de:1392d60a
devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:09:54 2016
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 4
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 68% complete
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 11
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 spare rebuilding /dev/sdd1
3 8 65 - spare /dev/sde1
[root@localhost ~]#
Step 4: Format the array and mount it
[root@localhost ~]# ls /dev/md5
[root@localhost ~]# mkdir /disk
[root@localhost ~]# mount /dev/md5 /disk    # fails: the device has not been formatted yet
[root@localhost ~]# mkfs.xfs /dev/md5
[root@localhost ~]# mount /dev/md5 /disk
[root@localhost ~]# df -h | tail -1
Step 5: Configure automatic mounting at boot
Same procedure as for RAID1.
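Persistence takes two pieces: the array definition already saved to /etc/mdadm.conf above, plus an fstab entry. A sketch of the entry (the UUID is a placeholder; substitute the value reported by `blkid /dev/md5`):

```
# /etc/fstab — placeholder UUID, take the real one from blkid /dev/md5
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /disk  xfs  defaults  0 0
```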
Step 6: Simulating failures and recovering
Recovering from a single failed disk:
[root@localhost ~]# echo "hello world! " >> /disk/a.txt    # create a test file to check later
[root@localhost ~]# cat !$
cat /disk/a.txt
hello world!
[root@localhost ~]# mdadm /dev/md5 -f /dev/sdb1    # simulate a disk failure
mdadm: set /dev/sdb1 faulty in /dev/md5
[root@localhost ~]# mdadm -D /dev/md5    # check the array status
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:17:34 2016
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 2% complete
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 21
Number Major Minor RaidDevice State
3 8 65 0 spare rebuilding /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
0 8 17 - faulty /dev/sdb1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=>...................] recovery = 5.9% (1236864/20954112) finish=7.1min speed=45809K/sec
unused devices: <none>
[root@localhost ~]# cat /disk/a.txt    # verify the data on the array is still intact
hello world!
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[=========>...........] recovery = 48.8% (10229276/20954112) finish=4.4min speed=40192K/sec
unused devices: <none>
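Rather than eyeballing the progress bar, the recovery percentage can be pulled out of /proc/mdstat with standard text tools. A small sketch, using a sample progress line from the output above so it runs without a live array:

```shell
# Extract the recovery percentage from an mdstat progress line
# (sample text embedded for illustration)
line='[=>...................]  recovery =  5.9% (1236864/20954112) finish=7.1min speed=45809K/sec'
echo "$line" | grep -oE '[0-9]+\.[0-9]+%'    # prints 5.9%
```

Against a live array, the same pattern works on `grep recovery /proc/mdstat`.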
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:26:30 2016
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 48
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
0 8 17 - faulty /dev/sdb1
[root@localhost ~]# df | grep disk
/dev/md5 41881600 33316 41848284 1% /disk
[root@localhost ~]# umount /disk
[root@localhost ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5
[root@localhost ~]# mdadm -zero-superblock /dev/sdb1
mdadm: -z does not set the mode, and so cannot be the first option.
[root@localhost ~]# mdadm --zero-superblock /dev/sdb1
Note: the first attempt fails because the option takes two dashes (--zero-superblock); with one dash, mdadm parses it as the unrelated -z option.
[root@localhost ~]# mdadm -As /dev/md5
mdadm: /dev/md5 has been started with 3 drives.
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:30:34 2016
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 48
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
[root@localhost ~]# mdadm /dev/md5 -a /dev/sdb1    # a disk can be added back only after the resync has completed
mdadm: added /dev/sdb1
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:31:40 2016
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 49
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
5 8 17 - spare /dev/sdb1
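The recovery sequence just performed (unmount, stop, zero the stale superblock, reassemble, re-add) can be condensed into one sketch. This is illustrative, not a hardened script; it assumes the array and mount point used in this walkthrough:

```shell
# Replace one failed member of /dev/md5 and re-add it as a spare.
# Mirrors the manual sequence shown above; must be run as root against a real array.
replace_member() {
    local part="$1"                    # e.g. /dev/sdb1
    umount /disk
    mdadm -S /dev/md5                  # stop the array
    mdadm --zero-superblock "$part"    # wipe the stale RAID superblock (two dashes)
    mdadm -As /dev/md5                 # reassemble from /etc/mdadm.conf
    mdadm /dev/md5 -a "$part"          # re-add; the disk joins as a spare
    mount /dev/md5 /disk
}
```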
Simulating two failed disks:
[root@localhost ~]# mdadm /dev/md5 -f /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md5
[root@localhost ~]# mount -a
[root@localhost ~]# cat /disk/a.txt
hello world!
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:32:59 2016
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 4% complete
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 53
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
5 8 17 1 spare rebuilding /dev/sdb1
4 8 49 2 active sync /dev/sdd1
1 8 33 - faulty /dev/sdc1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdb1[5] sde1[3] sdd1[4] sdc1[1](F)
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
[=>...................] recovery = 5.2% (1091216/20954112) finish=10.8min speed=30344K/sec
unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdb1[5] sde1[3] sdd1[4] sdc1[1](F)
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
[=============>.......] recovery = 66.8% (13998080/20954112) finish=3.9min speed=29308K/sec
unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdb1[5] sde1[3] sdd1[4] sdc1[1](F)
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@localhost ~]# mdadm /dev/md5 -f /dev/sde1
mdadm: set /dev/sde1 faulty in /dev/md5
[root@localhost ~]# cat /disk/a.txt
hello world!
[root@localhost ~]# clear
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:45:57 2016
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 2
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 72
Number Major Minor RaidDevice State
0 0 0 0 removed
5 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
1 8 33 - faulty /dev/sdc1
3 8 65 - faulty /dev/sde1
[root@localhost ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/rhel-root 10475520 3307104 7168416 32% /
devtmpfs 485144 0 485144 0% /dev
tmpfs 500680 88 500592 1% /dev/shm
tmpfs 500680 13528 487152 3% /run
tmpfs 500680 0 500680 0% /sys/fs/cgroup
/dev/sr0 3947824 3947824 0 100% /mnt
/dev/sda1 201388 127728 73660 64% /boot
tmpfs 100136 16 100120 1% /run/user/0
/dev/md5 41881600 33316 41848284 1% /disk
[root@localhost ~]# umount /disk
[root@localhost ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5
[root@localhost ~]# mdadm --zero-superblock /dev/sdc1 /dev/sde1
[root@localhost ~]# mdadm -As /dev/md5
mdadm: /dev/md5 has been started with 2 drives (out of 3).
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:46:45 2016
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 74
Number Major Minor RaidDevice State
0 0 0 0 removed
5 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
[root@localhost ~]# mdadm /dev/md5 -a /dev/sdc1
mdadm: added /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:47:47 2016
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 0% complete
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 76
Number Major Minor RaidDevice State
3 8 33 0 spare rebuilding /dev/sdc1
5 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
[root@localhost ~]# mdadm /dev/md5 -a /dev/sde1
mdadm: /dev/md5 has failed so using --add cannot work and might destroy
mdadm: data on /dev/sde1. You should stop the array and re-assemble it.
Adding /dev/sde1 is refused while the array is still degraded and mid-rebuild; the add succeeds once the resync has finished, as shown below.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdc1[3] sdb1[5] sdd1[4]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[>....................] recovery = 2.7% (571648/20954112) finish=10.8min speed=31424K/sec
unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdc1[3] sdb1[5] sdd1[4]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[========>............] recovery = 43.0% (9018880/20954112) finish=6.1min speed=32130K/sec
unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdc1[3] sdb1[5] sdd1[4]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[================>....] recovery = 80.7% (16911360/20954112) finish=1.8min speed=35542K/sec
unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdc1[3] sdb1[5] sdd1[4]
41908224 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@localhost ~]# mdadm /dev/md5 -a /dev/sde1
mdadm: added /dev/sde1
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:59:26 2016
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 95
Number Major Minor RaidDevice State
3 8 33 0 active sync /dev/sdc1
5 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
6 8 65 - spare /dev/sde1
[root@localhost ~]#
Simulating three failed disks:
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 02:59:26 2016
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 95
Number Major Minor RaidDevice State
3 8 33 0 active sync /dev/sdc1
5 8 17 1 active sync /dev/sdb1
4 8 49 2 active sync /dev/sdd1
6 8 65 - spare /dev/sde1
[root@localhost ~]# mdadm -f /dev/sdc1    # wrong: the array device must come before -f
mdadm: /dev/sdc1 does not appear to be an md device
[root@localhost ~]# mdadm /dev/md5 -f /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md5
[root@localhost ~]# mdadm /dev/md5 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md5
[root@localhost ~]# mdadm /dev/md5 -f /dev/sde1
mdadm: set /dev/sde1 faulty in /dev/md5
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Sun Aug 21 02:07:09 2016
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Aug 21 03:05:40 2016
State : clean, FAILED
Active Devices : 1
Working Devices : 1
Failed Devices : 3
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 5f73c597:481d2f9e:df90d7de:1392d60a
Events : 4245
Number Major Minor RaidDevice State
0 0 0 0 removed
2 0 0 2 removed
4 8 49 2 active sync /dev/sdd1
3 8 33 - faulty /dev/sdc1
5 8 17 - faulty /dev/sdb1
6 8 65 - faulty /dev/sde1
[root@localhost ~]# mount /dev/md5 /disk    # with three disks failed, the data can no longer be read!
mount: /dev/md5: can't read superblock
[root@localhost ~]#