
Previous Articles

Manually Deploying OpenStack Rocky on Two Nodes

Scaling Out with a Bare Metal Management Service Node

Current Cloud Infrastructure Service Inventory

[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2019-05-08T09:51:29.000000 |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-05-08T09:51:22.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2019-05-08T09:51:27.000000 |
| 6 | nova-compute | controller | nova | enabled | up | 2019-05-08T09:51:24.000000 |
| 7 | nova-compute | compute | nova | enabled | up | 2019-05-08T09:51:24.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

[root@controller ~]# openstack volume service list
+------------------+-----------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2019-05-08T09:51:40.000000 |
| cinder-volume | controller@lvm | nova | enabled | up | 2019-05-08T09:51:43.000000 |
| cinder-volume | controller@ceph | nova | enabled | up | 2019-05-08T09:51:39.000000 |
| cinder-backup | controller | nova | enabled | up | 2019-05-08T09:51:42.000000 |
+------------------+-----------------+------+---------+-------+----------------------------+

[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 41925586-9119-4709-bc23-4668433bd413 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| 43281ac1-7699-4a81-a5b6-d4818f8cf8f9 | Open vSwitch agent | controller | None | :-) | UP | neutron-openvswitch-agent |
| b815e569-c85d-4a37-84ea-7bdc5fe5653c | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| d1ef7214-d26c-42c8-ba0b-2a1580a44446 | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| f55311fc-635c-4985-ae6b-162f3fa8f886 | Open vSwitch agent | compute | None | :-) | UP | neutron-openvswitch-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

[root@controller ~]# openstack catalog list
+-----------+-----------+------------------------------------------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+------------------------------------------------------------------------+
| nova | compute | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
| cinderv2 | volumev2 | RegionOne |
| | | admin: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | public: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | internal: http://controller:8776/v2/a2b55e37121042a1862275a9bc9b0223 |
| | | |
| neutron | network | RegionOne |
| | | internal: http://controller:9696 |
| | | RegionOne |
| | | admin: http://controller:9696 |
| | | RegionOne |
| | | public: http://controller:9696 |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | |
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | |
| cinderv3 | volumev3 | RegionOne |
| | | internal: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | admin: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | RegionOne |
| | | public: http://controller:8776/v3/a2b55e37121042a1862275a9bc9b0223 |
| | | |
+-----------+-----------+------------------------------------------------------------------------+

BareMetal Node

  • ens160: 172.18.22.233/24
  • [Optional] ens192: NIC for the OvS provider bridge on the Provisioning Network

Configuring the Basic Infrastructure

  • DNS resolution
[root@localhost ~]# cat /etc/hosts
... 172.18.22.231 controller
172.18.22.232 compute
172.18.22.233 baremetal
  • NTP time synchronization
[root@baremetal ~]# cat /etc/chrony.conf | grep -v ^# | grep -v ^$
server controller iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

[root@baremetal ~]# systemctl enable chronyd.service
[root@baremetal ~]# systemctl start chronyd.service
[root@baremetal ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? controller 3 6 3 0 -9548us[-9548us] +/- 37ms
  • YUM repositories
yum install centos-release-openstack-rocky -y
yum upgrade -y
yum install python-openstackclient -y
yum install openstack-selinux -y

Installing Ironic (BareMetal node)

NOTE: pay attention to which node each step is performed on.

  • Create the ironic user, service, and endpoints
openstack service create --name ironic --description "Ironic baremetal provisioning service" baremetal

openstack user create --domain default --password-prompt ironic
openstack role add --project service --user ironic admin

openstack endpoint create --region RegionOne baremetal admin http://baremetal:6385
openstack endpoint create --region RegionOne baremetal public http://baremetal:6385
openstack endpoint create --region RegionOne baremetal internal http://baremetal:6385

openstack catalog list
  • Install packages
yum install openstack-ironic-api openstack-ironic-conductor python-ironicclient -y
  • Configure ironic-api & ironic-conductor
# /etc/ironic/ironic.conf

[DEFAULT]
my_ip = 172.18.22.233
transport_url = rabbit://openstack:fanguiju@controller
auth_strategy = keystone
state_path = /var/lib/ironic
debug = True

[api]
port = 6385

[conductor]
automated_clean = false
clean_callback_timeout = 1800
rescue_callback_timeout = 1800
soft_power_off_timeout = 600
power_state_change_timeout = 30
power_failure_recovery_interval = 300

[database]
connection = mysql+pymysql://ironic:fanguiju@controller/ironic?charset=utf8

[dhcp]
dhcp_provider = neutron

[neutron]
auth_type = password
auth_url = http://controller:5000
username = ironic
password = fanguiju
project_name = service
project_domain_id = default
user_domain_id = default
region_name = RegionOne
valid_interfaces = public

[glance]
url = http://controller:9292
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = glance
password = fanguiju

[cinder]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = fanguiju
username = ironic
auth_url = http://controller:5000
auth_type = password

[service_catalog]
region_name = RegionOne
project_domain_id = default
user_domain_id = default
project_name = service
password = fanguiju
username = ironic
auth_url = http://controller:5000
auth_type = password

[keystone_authtoken]
auth_type = password
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
username = ironic
password = fanguiju
project_name = service
project_domain_name = default
user_domain_name = default

NOTE: in this article, ironic-api and ironic-conductor run on the same node.

  • Create the database (Controller)
CREATE DATABASE ironic CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' IDENTIFIED BY 'fanguiju';
GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' IDENTIFIED BY 'fanguiju';
  • Initialize the database
ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
  • Start the services
systemctl enable openstack-ironic-api openstack-ironic-conductor
systemctl start openstack-ironic-api openstack-ironic-conductor
systemctl status openstack-ironic-api openstack-ironic-conductor
  • Verify
[root@controller ~]# openstack baremetal driver list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| ipmi | baremetal |
+---------------------+----------------+

Installing Nova Compute (BareMetal node)

NOTE: the nova-compute service on this node serves only as the management and scheduling layer for bare metal, so nested virtualization support is not required. e.g.

[root@baremetal ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
  • Packages
yum install openstack-nova-compute -y
  • Configuration
# /etc/nova/nova.conf

[DEFAULT]
my_ip = 172.18.22.233
transport_url = rabbit://openstack:fanguiju@controller
debug = True
use_neutron = true
compute_driver = ironic.IronicDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
reserved_host_cpus = 0
reserved_host_memory_mb = 0
reserved_host_disk_mb = 0
update_resources_interval = 10
cpu_allocation_ratio = 1.0
ram_allocation_ratio = 1.0
disk_allocation_ratio = 1.0
bandwidth_poll_interval = -1

[ironic]
api_retry_interval = 5
api_max_retries = 300
auth_type = password
auth_url = http://controller:5000/v3
project_name = service
username = ironic
password = fanguiju
project_domain_name = default
user_domain_name = default

[glance]
api_servers = http://controller:9292

[cinder]
os_region_name = RegionOne

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = fanguiju

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = fanguiju
  • Start the service
systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service
systemctl status openstack-nova-compute.service
  • Register the BareMetal node's nova-compute into a cell
[root@controller ~]# nova-manage cell_v2 discover_hosts --by-service

[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False |
| cell1 | 51e0592e-b622-4365-814c-98ce96bcce7b | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova | False |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+----------+

[root@controller ~]# nova-manage cell_v2 list_hosts
+-----------+--------------------------------------+------------+
| Cell Name | Cell UUID | Hostname |
+-----------+--------------------------------------+------------+
| cell1 | 51e0592e-b622-4365-814c-98ce96bcce7b | baremetal |
| cell1 | 51e0592e-b622-4365-814c-98ce96bcce7b | compute |
| cell1 | 51e0592e-b622-4365-814c-98ce96bcce7b | controller |
+-----------+--------------------------------------+------------+
  • Verify
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2019-05-08T11:28:57.000000 |
| 2 | nova-consoleauth | controller | internal | enabled | up | 2019-05-08T11:28:55.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2019-05-08T11:28:58.000000 |
| 6 | nova-compute | controller | nova | enabled | up | 2019-05-08T11:28:54.000000 |
| 7 | nova-compute | compute | nova | enabled | up | 2019-05-08T11:28:56.000000 |
| 8 | nova-compute | baremetal | nova | enabled | up | 2019-05-08T11:28:58.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
  • Because this is a bare metal environment, disable instance-change tracking in the Nova scheduler
[filter_scheduler]
track_instance_changes=False
  • Restart the Nova scheduler
systemctl restart openstack-nova-scheduler

Configuring Neutron to Provide the Provisioning Network

Abstract Network Model

  • The Provisioning Network is the network on which bare metal instances are enrolled and deployed, so it must reach IPMI, DHCP, PXE, and the bare metal servers. DHCP is provided by the Neutron DHCP agent, and PXE by the Ironic Conductor. Flat, VLAN, SDN, and other designs are possible; this article uses the Flat type.
  • The Cleaning Network is the network on which bare metal nodes are initialized, mainly for disk wiping and resetting hardware configuration, so it must reach IPMI and the bare metal servers.
  • The Tenant Network is a regular Neutron tenant network. After a bare metal server is deployed, the node's deploy port (the PXE NIC) is switched from the Provisioning Network to the Tenant Network. On top of it, an L3 router enables cross-network communication between bare metal and virtual machines.

Flat Network Model

In the Flat network model, all physical servers (bare metal nodes and OpenStack nodes) sit on the same flat network, with no switch required, or only a transparent switch. The physical network is pre-configured by the operators; in the Flat model, Neutron is only responsible for providing DHCP.

  • The IPMI OOB Network and the OpenStack MGMT Network are L2-connected: the Ironic Conductor node can manage BM nodes via ipmitool.
  • The Provisioning Network reuses the External Network and is L2-connected to the BM nodes: a BM node can obtain an IP address and the TFTP server endpoint from the Provisioning Network's DHCP.
  • The OpenStack MGMT Network and the External Network are L2-connected: BM nodes reach the TFTP server over the OpenStack MGMT Network, and the IPA running on a BM node talks to the Ironic Conductor.

VLAN Network Model

In the VLAN network model, Neutron can take over the physical switches through Networking Generic Switch. Both the Provisioning Network and the Tenant Networks then attach to the physical switches as VLAN segments of a physical network, and Neutron controls the switch-port configuration changes. For example: during deployment, a BM node's uplink port sits in the Provisioning Network VLAN; once deployment finishes, the port is moved into the Tenant Network VLAN.

  • The Provisioning Network and Tenant Networks reuse one VLAN physical network, with different VLAN IDs.
  • The Provisioning Network, Tenant Networks, and BM nodes all attach to one L2 switching domain; a BM node is switched between networks by dynamically reconfiguring the VLAN ID on its uplink port.
    • During deployment, the BM node's VLAN attaches it to the Provisioning Network: the BM node obtains an IP address via DHCP.
    • After deployment, the BM node's VLAN attaches it to a Tenant Network: the BM node can reach the other nodes in the same tenant network.
  • The OpenStack MGMT Network must reach the IPMI OOB Network, the Provisioning Network, and the switches' management IPs; these may be deployed as a single network.
    • Reaching the IPMI OOB Network: the Ironic Conductor takes over IPMI via ipmitool.
    • Reaching the Provisioning Network: BM nodes access the TFTP server, and the IPA running on a BM node talks to the Ironic Conductor.
    • Reaching the switch management IPs: Neutron takes over the switches.
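For context, the Networking Generic Switch takeover mentioned above is configured on the Neutron server by declaring each switch in the ML2 plugin configuration. A minimal sketch follows — the switch name, device type, address, and credentials are hypothetical, and this article's Flat environment does not use this plugin:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (hypothetical example)
[genericswitch:access-switch-1]
device_type = netmiko_cisco_ios
ip = 172.18.22.254
username = admin
password = secret
```

The `genericswitch` mechanism driver must also be appended to `mechanism_drivers` in the `[ml2]` section for the declaration to take effect.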

Configuring the Open vSwitch Agent (BareMetal node)

NOTE: this environment already has a basic OpenStack deployment in place, including an already-configured flat physical network. The essential goal here is a base network setup that can provide the Provisioning Network, so decide for yourself whether the following configuration is needed in your situation.

  • Install the OvS agent
yum install openstack-neutron-openvswitch ipset -y
  • Configure the OvS agent
# /etc/neutron/plugins/ml2/openvswitch_agent.ini

[ovs]
datapath_type = system
bridge_mappings = provider:br-provider

[agent]
l2_population = True

[securitygroup]
firewall_driver = openvswitch
  • Start the vswitchd daemon
systemctl enable openvswitch
systemctl start openvswitch
systemctl status openvswitch
  • Manually create the OvS provider bridge
ovs-vsctl add-br br-provider
ovs-vsctl add-port br-provider ens192
  • Start the OvS agent
systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl status neutron-openvswitch-agent.service
  • Verify
[root@baremetal ~]# ovs-vsctl show
52fd6a40-ed6b-460c-8af9-8b13239a9ad5
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-provider
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-provider
Interface br-provider
type: internal
Port "ens192"
Interface "ens192"
Port phy-br-provider
Interface phy-br-provider
type: patch
options: {peer=int-br-provider}
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port int-br-provider
Interface int-br-provider
type: patch
options: {peer=phy-br-provider}
Port br-int
Interface br-int
type: internal
ovs_version: "2.10.1"

[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 02ac17a4-9a27-4dd6-b11f-a6eada895432 | Open vSwitch agent | baremetal | None | :-) | UP | neutron-openvswitch-agent |
...
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
  • Configure a flat physical network in Neutron ML2, used to create the Provisioning Network
# /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2_type_flat]
flat_networks = provider,external

[ml2_type_vlan]
network_vlan_ranges = provider1:1:1000

[ml2_type_vxlan]
vni_ranges = 1:1000
  • Restart the Neutron server
systemctl restart neutron-server

Configuring the Networking-baremetal ML2 Mechanism Driver (Controller) [Optional]

Previously, a bare metal Port created on the flat network interface would stay in DOWN state forever, even though deployment of the bare metal OS still succeeded and worked normally. The Networking-baremetal project sets out to fix this abnormal port state: it integrates the networking service and the bare metal service more deeply, and beyond transitioning bare metal port states it also provides Routed Networks support.

PS: Routed Networks & Multi-Segments

The Networking-baremetal ML2 mechanism driver is a Neutron ML2 mechanism driver mainly used to fake the Neutron port attach, so that the port state stays healthy. It is nevertheless optional, because the Ironic driver in Nova allows port binding to fail. By analogy:

VM: the Neutron port is bound to a tap device on the compute node

BM: the Neutron port is fake-attached

  • Install the Networking-baremetal ML2 mechanism driver
yum install python2-networking-baremetal -y
  • Configure Neutron to use the Networking-baremetal ML2 mechanism driver
# /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = local,flat,vlan,vxlan
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,l2population,baremetal
  • Restart the service
systemctl restart neutron-server

Configuring the Ironic Neutron Agent (Controller) [Optional]

The Ironic Neutron Agent is used together with the Networking-baremetal ML2 mechanism driver.

  • Install the Ironic Neutron Agent
yum install -y python2-ironic-neutron-agent
  • Configure the Ironic Neutron Agent
# /etc/neutron/plugins/ml2/ironic_neutron_agent.ini

[DEFAULT]
debug = true

[agent]
log_agent_heartbeats = true

[ironic]
project_domain_name = default
project_name = service
user_domain_name = default
password = fanguiju
username = ironic
auth_url = http://controller:5000/v3
auth_type = password
region_name = RegionOne
  • Start the Ironic Neutron Agent
systemctl enable ironic-neutron-agent
systemctl start ironic-neutron-agent
systemctl status ironic-neutron-agent

Creating the Provisioning Network

As described above, we create a Flat-typed Provisioning Network. This network is essentially a provider network: it reaches the bare metal servers through the provider's physical switches, so that a bare metal server can use the network's DHCP service to obtain an IP address and the PXE server information. The subnet must therefore have DHCP enabled.

openstack network create --project admin provisioning-net-1 --share --provider-network-type flat --provider-physical-network provider

openstack subnet create provisioning-subnet-1 --network provisioning-net-1 \
--subnet-range 172.18.22.0/24 --ip-version 4 --gateway 172.18.22.1 \
--allocation-pool start=172.18.22.237,end=172.18.22.240 --dhcp

Configuring Neutron to Provide the Cleaning Network (BareMetal node)

In this environment, the Provisioning Network and the Cleaning Network are merged into one.

  • Get the Provisioning Network UUID
[root@controller ~]# openstack network list
+--------------------------------------+--------------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------------------+--------------------------------------+
| 3e8d84ab-9d6e-4194-b8c0-4a14807cf8ed | ext_net | 8792cf1d-51e8-49b7-80ae-656226c440e6 |
| b90fce07-0f32-4ba5-a1fd-a8e5e00f9c65 | provisioning-net-1 | 67327a38-4dd1-41bb-99cc-2be0bd2de00a |
| be8ca1f5-f243-4640-b7e1-4107fe16dd70 | vxlan-net-1000 | 85c68fdd-85f7-4f19-9538-ff82b5c8c5f0 |
+--------------------------------------+--------------------+--------------------------------------+
  • Configure ironic-conductor
# /etc/ironic/ironic.conf

[neutron]
cleaning_network = b90fce07-0f32-4ba5-a1fd-a8e5e00f9c65
  • Restart the service
systemctl restart openstack-ironic-conductor

配置 Ironic 使用 Neutron Networking(BareMetal)

  • Make the flat network interface Ironic's default, and fill in the Provisioning/Cleaning networks
# /etc/ironic/ironic.conf

[DEFAULT]
...
enabled_network_interfaces=noop,flat,neutron
default_network_interface=flat

[neutron]
...
cleaning_network = b90fce07-0f32-4ba5-a1fd-a8e5e00f9c65
cleaning_network_security_groups = b9ce73bb-58c1-44f6-91cf-f66d5f55f57f
provisioning_network = b90fce07-0f32-4ba5-a1fd-a8e5e00f9c65
provisioning_network_security_groups = b9ce73bb-58c1-44f6-91cf-f66d5f55f57f

NOTE: The “provisioning” and “cleaning” networks may be the same network or distinct networks. To ensure that communication between the Bare Metal service and the deploy ramdisk works, it is important to ensure that security groups are disabled for these networks, or that the default security groups allow:

  • DHCP

  • TFTP

  • egress port used for the Bare Metal service (6385 by default)

  • ingress port used for ironic-python-agent (9999 by default)

  • if using iSCSI deploy, the ingress port used for iSCSI (3260 by default)

  • if using Direct deploy, the egress port used for the Object Storage service (typically 80 or 443)

  • if using iPXE, the egress port used for the HTTP server running on the ironic-conductor nodes (typically 80).
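As a sketch, the checklist above maps onto security group rules like the following for a PXE + iSCSI deploy. The group name `provisioning-sg` is hypothetical, directionality and remote IP ranges are simplified for brevity, and in this article's environment the referenced security group is simply left permissive:

```shell
openstack security group create provisioning-sg

# DHCP and TFTP for the PXE boot stage
openstack security group rule create --protocol udp --dst-port 67:68 provisioning-sg
openstack security group rule create --protocol udp --dst-port 69 provisioning-sg

# ironic-api (6385) and ironic-python-agent (9999)
openstack security group rule create --protocol tcp --dst-port 6385 provisioning-sg
openstack security group rule create --protocol tcp --dst-port 9999 provisioning-sg

# iSCSI deploy only: the iSCSI target exposed by the deploy ramdisk
openstack security group rule create --protocol tcp --dst-port 3260 provisioning-sg
```

The resulting group UUID is what goes into `cleaning_network_security_groups` / `provisioning_network_security_groups` above.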

  • Restart the services

systemctl restart openstack-ironic-api
systemctl restart openstack-ironic-conductor

Building Images

  • Install Disk Image Builder
$ virtualenv dib
$ source dib/bin/activate
(dib) $ pip install diskimage-builder

Building the Deploy Images

Official documentation: Building or downloading a deploy ramdisk image

Official documentation: Installing Ironic Python Agent

NOTE: we have no customization needs here, so we download the prebuilt images directly.

wget https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe.vmlinuz
wget https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz
  • Upload to Glance
glance image-create --name deploy-vmlinuz --visibility public --disk-format aki --container-format aki < coreos_production_pxe.vmlinuz
glance image-create --name deploy-initrd --visibility public --disk-format ari --container-format ari < coreos_production_pxe_image-oem.cpio.gz
  • Verify
[root@baremetal deploy_images]# openstack image list
+--------------------------------------+----------------+--------+
| ID | Name | Status |
+--------------------------------------+----------------+--------+
| d18923bd-86fc-4f77-b5e8-976d3b1c367c | cirros_raw | active |
| 6000a17f-0ab7-418a-990c-2009a59c3392 | deploy-initrd | active |
| e650d33b-8fad-42f7-948c-5c12526bcd07 | deploy-vmlinuz | active |
+--------------------------------------+----------------+--------+

Building the User Images

  • Partition images
# Enable Cloud-Init
# Set up a login account (the DIB_DEV_USER_* variables are consumed by the devuser element)
$ DIB_CLOUD_INIT_DATASOURCES="ConfigDrive, OpenStack" \
DIB_DEV_USER_USERNAME=root \
DIB_DEV_USER_PWDLESS_SUDO=yes \
DIB_DEV_USER_PASSWORD=fanguiju \
disk-image-create \
centos7 \
dhcp-all-interfaces \
devuser \
baremetal \
grub2 \
-o my-image

$ ls
my-image.d my-image.initrd my-image.qcow2 my-image.vmlinuz

The partition image command creates my-image.qcow2, my-image.vmlinuz and my-image.initrd files. The grub2 element in the partition image creation command is only needed if local boot will be used to deploy my-image.qcow2, otherwise the images my-image.vmlinuz and my-image.initrd will be used for PXE booting after deploying the bare metal with my-image.qcow2.
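For example, if local boot is wanted for my-image.qcow2, the documented approach is a matching `boot_option` capability on both the node and the flavor. This is a sketch: `<node-uuid>` and `baremetal-flavor` are placeholders for your own node UUID and bare metal flavor:

```shell
openstack baremetal node set <node-uuid> --property capabilities='boot_option:local'
openstack flavor set baremetal-flavor --property capabilities:boot_option=local
```

Without this, the node defaults to PXE (net) booting the deployed image via my-image.vmlinuz and my-image.initrd.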

  • Upload to Glance
glance image-create --name my-image.vmlinuz --visibility public --disk-format aki --container-format aki < my-image.vmlinuz
glance image-create --name my-image.initrd --visibility public --disk-format ari --container-format ari < my-image.initrd

export MY_VMLINUZ_UUID=$(openstack image list | awk '/my-image.vmlinuz/ { print $2 }')
export MY_INITRD_UUID=$(openstack image list | awk '/my-image.initrd/ { print $2 }')
glance image-create --name my-image --visibility public --disk-format qcow2 --container-format bare --property kernel_id=$MY_VMLINUZ_UUID --property ramdisk_id=$MY_INITRD_UUID < my-image.qcow2
  • Verify
(dib) [root@baremetal user_images]# openstack image list
+--------------------------------------+------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------+--------+
| d18923bd-86fc-4f77-b5e8-976d3b1c367c | cirros_raw | active |
| 6000a17f-0ab7-418a-990c-2009a59c3392 | deploy-initrd | active |
| e650d33b-8fad-42f7-948c-5c12526bcd07 | deploy-vmlinuz | active |
| 5e756d4d-b4e9-43a9-9e49-d530c72a7674 | my-image | active |
| 24c9d142-3589-420a-b59c-f70e04575dbe | my-image.initrd | active |
| 3bf6aaa0-58b6-4037-803a-43ee6d8937c4 | my-image.vmlinuz | active |
+--------------------------------------+------------------+--------+

Configuring the Bare Metal Provisioning Drivers

Configure the Bare Metal provisioning drivers according to your bare metal cluster's specific vendor and hardware. Ironic supports a great many driver types; see the official documentation for details.

Official documentation: Set up the drivers for the Bare Metal service

Common combinations

  • pxe + ipmi: control the hardware via IPMI, deploy via PXE

  • pxe + drac: control the hardware via DRAC, deploy via PXE

  • pxe + ilo: control the hardware via iLO, deploy via PXE

  • pxe + iboot: control the hardware via iBoot, deploy via PXE

  • pxe + ssh: control the hardware via SSH, deploy via PXE

  • Configuration

# /etc/ironic/ironic.conf

[DEFAULT]
...
enabled_hardware_types = ipmi,redfish
# boot
enabled_boot_interfaces = pxe
# console
enabled_console_interfaces = ipmitool-socat,no-console
# deploy
enabled_deploy_interfaces = direct,iscsi
# inspect
enabled_inspect_interfaces = inspector
# management
enabled_management_interfaces = ipmitool,redfish
# power
enabled_power_interfaces = ipmitool,redfish
# raid
enabled_raid_interfaces = agent
# vendor
enabled_vendor_interfaces = ipmitool, no-vendor
# storage
enabled_storage_interfaces = cinder, noop
# network
enabled_network_interfaces = flat,neutron
  • Restart
systemctl restart openstack-ironic-conductor
  • Verify
[root@controller ~]# openstack baremetal driver list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| ipmi | baremetal |
| redfish | baremetal |
+---------------------+----------------+

Configuring the PXE Server

  • Edit
# /etc/ironic/ironic.conf

[ipmi]
retry_timeout=60

[pxe]
ipxe_enabled = False
pxe_append_params = nofb nomodeset vga=normal console=ttyS0 systemd.journald.forward_to_console=yes
tftp_root=/tftpboot
tftp_server=172.18.22.233
  • Restart the service
systemctl restart openstack-ironic-conductor

Configuring IPMI Tool

Install IPMI Tool on the Ironic Conductor node.

  • Install
yum install ipmitool -y
  • Verify
# ipmitool -I lanplus -H <ip-address> -U <username> -P <password> chassis power status

[root@baremetal ~]# ipmitool -I lanplus -H 172.18.22.106 -U admin -P admin chassis power status
Chassis Power is on

Configuring the TFTP Server

Configure TFTP on the Ironic Conductor node.

  • Install
sudo mkdir -p /tftpboot
sudo chown -R ironic /tftpboot
sudo yum install tftp-server syslinux-tftpboot xinetd -y
  • Configure
# /etc/xinetd.d/tftp

service tftp
{
protocol = udp
port = 69
socket_type = dgram
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -v -v -v -v -v --map-file /tftpboot/map-file /tftpboot
disable = no
flags = IPv4
}
  • Prepare the pxelinux.0 and chain.c32 files
sudo cp /usr/share/syslinux/pxelinux.0 /tftpboot

# If whole disk images need to be deployed via PXE-netboot, copy the chain.c32 image to /tftpboot to support it
sudo cp /usr/share/syslinux/chain.c32 /tftpboot/
  • If the version of syslinux is greater than 4, also copy the library modules (e.g. ldlinux.c32) into the /tftpboot directory, then create a map file in the TFTP boot directory
echo 're ^(/tftpboot/) /tftpboot/\2' > /tftpboot/map-file
echo 're ^/tftpboot/ /tftpboot/' >> /tftpboot/map-file
echo 're ^(^/) /tftpboot/\1' >> /tftpboot/map-file
echo 're ^([^/]) /tftpboot/\1' >> /tftpboot/map-file
  • Start
sudo systemctl enable xinetd
sudo systemctl restart xinetd
sudo systemctl status xinetd
  • Verify
# server side
[root@baremetal ~]# echo 'test tftp' > /tftpboot/aaa

# client side
[root@controller ~]# tftp baremetal -c get aaa
[root@controller ~]# cat aaa
test tftp

Configuring PXE UEFI Support

NOTE: Make sure that the bare metal node is configured to boot in UEFI boot mode and the boot device is set to network/pxe.

  • Install Grub2 and shim packages
sudo yum install grub2-efi shim -y
  • Copy grub and shim boot loader images to /tftpboot directory
sudo cp /boot/efi/EFI/centos/shim.efi /tftpboot/bootx64.efi
sudo cp /boot/efi/EFI/centos/grubx64.efi /tftpboot/grubx64.efi
  • Create master grub.cfg
$ GRUB_DIR=/tftpboot/EFI/centos

$ sudo mkdir -p $GRUB_DIR

$ cat $GRUB_DIR/grub.cfg
set default=master
set timeout=5
set hidden_timeout_quiet=false

menuentry "master" {
configfile /tftpboot/$net_default_mac.conf
}

$ sudo chmod 644 $GRUB_DIR/grub.cfg
  • Update the bare metal node with boot_mode capability in node’s properties field
openstack baremetal node set <node-uuid> --property capabilities='boot_mode:uefi'
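For the scheduler to actually honor this capability, the flavor used for bare metal instances typically carries the matching extra spec. A sketch, where `baremetal-flavor` is a placeholder for your bare metal flavor name:

```shell
openstack flavor set baremetal-flavor --property capabilities:boot_mode=uefi
```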

Configuring Support for the iSCSI-based Driver

With the iSCSI deploy interface, the Ironic Conductor node acts as the iSCSI client and performs the image injection itself, so the qemu-img and iscsiadm command-line tools must be installed on it.

  • Install the QEMU image tools
yum install qemu-img
  • Install the iSCSI client
yum -y install iscsi-initiator-utils
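To see why these two tools are needed: during an iSCSI deploy, the conductor logs in to the iSCSI target exported by the deploy ramdisk and writes the user image onto it. A hand-written sketch of the equivalent steps follows — the portal address and target name are hypothetical, and in practice Ironic drives these steps itself rather than you running them manually:

```shell
# discover and attach the target exported by the IPA deploy ramdisk
iscsiadm -m discovery -t sendtargets -p 172.18.22.240:3260
iscsiadm -m node -T iqn.2008-10.org.openstack:deploy -p 172.18.22.240:3260 --login

# convert and write the user image onto the attached block device
qemu-img convert -O raw my-image.qcow2 /dev/disk/by-path/<iscsi-target-device>

iscsiadm -m node -T iqn.2008-10.org.openstack:deploy --logout
```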

