Nova follows a three-layer service -> manager -> driver architecture.

It consists of four core components (processes): nova-api, nova-conductor, nova-scheduler, and nova-compute, which communicate with one another over RPC.
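
The same layering shows up in every component. Below is a minimal sketch of the pattern with purely illustrative names (none of them are Nova's own):

# Illustrative sketch of the service -> manager -> driver layering.

class FakeHypervisorDriver(object):
    """Driver layer: talks to the actual backend (libvirt, ironic, ...)."""
    def spawn(self, instance):
        print('spawning %s' % instance)


class FakeComputeManager(object):
    """Manager layer: holds orchestration logic and serves as RPC endpoint."""
    def __init__(self):
        self.driver = FakeHypervisorDriver()

    def build_and_run_instance(self, ctxt, instance):
        self.driver.spawn(instance)


# Service layer: wraps the manager and plugs it into the message bus;
# in Nova this is nova.service.Service plus oslo.messaging.
FakeComputeManager().build_and_run_instance(ctxt=None, instance='vm-1')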

RESTful entry point: nova-api

# etc/nova/api-paste.ini

[app:osapi_compute_app_v21]
paste.app_factory = nova.api.openstack.compute:APIRouterV21.factory
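
Paste Deploy assembles the nova-api WSGI application from this file. A hedged sketch of how such a factory gets loaded, assuming the usual composite name osapi_compute (check your own api-paste.ini):

from paste.deploy import loadapp

# paste.app_factory entries like the one above tell Paste Deploy which
# factory builds each app; loadapp wires the whole pipeline together.
app = loadapp('config:/etc/nova/api-paste.ini', name='osapi_compute')
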
# nova/api/openstack/compute/routes.py

ROUTE_LIST = (
    ('/servers', {
        'GET': [server_controller, 'index'],
        'POST': [server_controller, 'create']
    }),
    ('/servers/{id}', {
        'GET': [server_controller, 'show'],
        'PUT': [server_controller, 'update'],
        'DELETE': [server_controller, 'delete']
    }),
    ('/servers/{server_id}/os-volume_attachments', {
        'GET': [server_volume_attachments_controller, 'index'],
        'POST': [server_volume_attachments_controller, 'create'],
    }),
    ('/servers/{server_id}/os-volume_attachments/{id}', {
        'GET': [server_volume_attachments_controller, 'show'],
        'PUT': [server_volume_attachments_controller, 'update'],
        'DELETE': [server_volume_attachments_controller, 'delete']
    }),
)


class APIRouterV21(base_wsgi.Router):
    def __init__(self, custom_routes=None):
        if custom_routes is None:
            custom_routes = tuple()
        for path, methods in ROUTE_LIST + custom_routes:
            for method, controller_info in methods.items():
                # controller_info is [controller_factory, action_name]
                controller = controller_info[0]()
                action = controller_info[1]
                self.map.create_route(path, method, controller, action)
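
Under the hood self.map is a routes.Mapper subclass. A standalone sketch of the matching mechanism, using the routes library directly (routes and actions here are illustrative):

import routes

mapper = routes.Mapper()
# Equivalent in spirit to one ROUTE_LIST entry: method + path -> action.
mapper.connect('/servers', action='index',
               conditions={'method': ['GET']})
mapper.connect('/servers', action='create',
               conditions={'method': ['POST']})

print(mapper.match('/servers', environ={'REQUEST_METHOD': 'POST'}))
# {'action': 'create'}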

Instance creation

# nova/api/openstack/compute/servers.py

class ServersController(wsgi.Controller):
    def create(self, req, body):
        self.compute_api.create
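
The chain is kicked off by a plain POST /servers request. A minimal body, with illustrative IDs, looks like this:

# POST /servers -- minimal request body (flavor/image/network IDs illustrative)
body = {
    "server": {
        "name": "test-vm",
        "flavorRef": "1",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "networks": [{"uuid": "ff608d40-75e9-48cb-b745-77bb55b5eaf2"}],
    }
}
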
# nova/compute/api.py

class API(base.Base):
    def create
        self._create_instance
            self.compute_task_api.schedule_and_build_instances

The api module is just a thin wrapper around the rpcapi:

# nova/conductor/api.py

class ComputeTaskAPI(object):
    def schedule_and_build_instances
        self.conductor_compute_rpcapi.schedule_and_build_instances

# nova/conductor/rpcapi.py

class ComputeTaskAPI(object):
    def schedule_and_build_instances
        cctxt.cast(context, 'schedule_and_build_instances', **kw)

The RPC endpoints themselves are implemented in the manager:

# nova/conductor/manager.py

class ComputeTaskManager(base.Base):
    def schedule_and_build_instances
        self._schedule_instances
            self.query_client.select_destinations
                self.scheduler_rpcapi.select_destinations
        self.compute_rpcapi.build_and_run_instance
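
All the rpcapi/manager pairs above ride on oslo.messaging. A self-contained sketch of the mechanism, with an illustrative topic and endpoint (not Nova's actual wiring):

import oslo_messaging as messaging
from oslo_config import cfg

transport = messaging.get_rpc_transport(cfg.CONF)  # e.g. rabbit:// from config


class ConductorEndpoint(object):
    """Server side: public methods become RPC endpoints."""
    def schedule_and_build_instances(self, ctxt, **kwargs):
        print('scheduling', kwargs)


server = messaging.get_rpc_server(
    transport, messaging.Target(topic='conductor', server='host1'),
    [ConductorEndpoint()])
server.start()

# Client side: roughly what cctxt.cast(...) in the rpcapi modules boils
# down to; cast is fire-and-forget, call would wait for a return value.
client = messaging.RPCClient(transport, messaging.Target(topic='conductor'))
client.cast({}, 'schedule_and_build_instances', request_spec=None)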

From nova-api the request takes a detour through nova-conductor and nova-scheduler, only to land back on nova-compute (note the cast: the RPC is asynchronous):

# nova/compute/rpcapi.py

class ComputeAPI(object):
    def build_and_run_instance
        cctxt.cast(ctxt, 'build_and_run_instance', **kwargs)

The corresponding block devices are created along with the instance:

# nova/compute/manager.py

class ComputeManager(manager.Manager):
    def build_and_run_instance
        self._do_build_and_run_instance
            self._build_and_run_instance
                self._build_resources
                    self._prep_block_device
                        driver.block_device_info_get_mapping
                        driver_block_device.attach_block_devices
                            bdm.attach
                self.driver.spawn
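
Which block devices get created is driven by the request's block device mappings. For example, booting from a new volume built out of an image (a hedged example; IDs and sizes illustrative):

# POST /servers with block_device_mapping_v2: boot from a new 10 GB volume
# created from an image, deleted together with the server.
body = {
    "server": {
        "name": "bfv-vm",
        "flavorRef": "1",
        "networks": [{"uuid": "ff608d40-75e9-48cb-b745-77bb55b5eaf2"}],
        "block_device_mapping_v2": [{
            "boot_index": 0,
            "source_type": "image",
            "uuid": "70a599e0-31e7-49b7-b260-868f441e862b",
            "destination_type": "volume",
            "volume_size": 10,
            "delete_on_termination": True,
        }],
    }
}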

The implementation of bdm.attach is covered below as part of the volume attachment flow.

Volume attachment

# nova/api/openstack/compute/volumes.py

class VolumeAttachmentController(wsgi.Controller):
    def create
        self.compute_api.attach_volume
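
This controller serves POST /servers/{server_id}/os-volume_attachments; the request body is tiny (volume ID illustrative):

# POST /servers/{server_id}/os-volume_attachments
body = {
    "volumeAttachment": {
        "volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
        # "device": "/dev/vdb",  # optional; Nova picks one if omitted
    }
}
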
# nova/compute/api.py

class API(base.Base):
    def attach_volume
        self._attach_volume
            self._create_volume_bdm
                self.compute_rpcapi.reserve_block_device_name
            self.compute_rpcapi.attach_volume

# nova/compute/rpcapi.py

class ComputeAPI(object):
    def attach_volume
        cctxt.cast(ctxt, 'attach_volume', instance=instance, bdm=bdm)

# nova/compute/manager.py

class ComputeManager(manager.Manager):
    def attach_volume
        self._attach_volume
            bdm.attach

# nova/virt/block_device.py

class DriverVolumeBlockDevice(DriverBlockDevice):
    def attach
        self._do_attach
            virt_driver.get_volume_connector(instance)
            self._volume_attach
                # Cinder calls the volume driver's initialize_connection here
                # to get the info the hypervisor node needs to access the volume
                volume_api.attachment_update
                virt_driver.attach_volume

# nova/volume/cinder.py

class API(object):
    def attachment_update
        cinderclient(context, '3.44', skip_version_check=True).attachments.update
            # the update call lands in python-cinderclient
            # (cinderclient/v3/attachments.py):
            body = {'attachment': {'connector': connector}}
            resp = self._update('/attachments/%s' % id, body)
            return self.resource_class(self, resp['attachment'], loaded=True,
                                       resp=resp)
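
The connector argument is the property bag that os-brick gathered on the compute node (see get_volume_connector below). For an iSCSI-capable Linux host it looks roughly like this (every value illustrative):

# Typical output of os-brick's get_connector_properties on a Linux host
# with an iSCSI initiator (all values illustrative):
connector = {
    'platform': 'x86_64',
    'os_type': 'linux',
    'ip': '192.168.0.10',
    'host': 'compute-1',
    'initiator': 'iqn.1993-08.org.debian:01:abcdef123456',
    'multipath': False,
}
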
# nova/virt/libvirt/driver.py

class LibvirtDriver(driver.ComputeDriver):
    def get_volume_connector
        # Dispatches into the os_brick module and returns all of the
        # connector's properties; the iSCSI connector, for example,
        # contributes the initiator's IQN
        connector.get_connector_properties

    def attach_volume
        self._connect_volume
            # For iSCSI the volume driver is LibvirtISCSIVolumeDriver, which
            # performs an iSCSI login to create a local block device on the
            # hypervisor node.
            # For RBD the volume driver is LibvirtNetVolumeDriver, which keeps
            # the parent class implementation, i.e. does nothing.
            vol_driver.connect_volume
                # iSCSI
                self.connector.connect_volume
        self._get_volume_config
            vol_driver.get_config
        guest.attach_device
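
For reference, the guest config that vol_driver.get_config produces serializes to XML along these lines. A hedged sketch for the iSCSI case (device path and target name illustrative); for RBD it would instead be a <disk type="network"> with protocol="rbd":

# Roughly what guest.attach_device receives for an iSCSI volume: the local
# block device created by the login, attached as a virtio disk.
disk_xml = """
<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source dev="/dev/disk/by-path/ip-192.168.0.20:3260-iscsi-iqn.2010-10.org.openstack:volume-XXXX-lun-1"/>
  <target dev="vdb" bus="virtio"/>
</disk>
"""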

os-brick

os-brick's get_connector_properties interface collects all of the information about the current client node; that information is then passed as input to the Cinder volume driver's initialize_connection interface.

initialize_connection returns the information the client needs to attach the volume (strictly speaking, to "use" it, since with librbd an RBD volume is never attached on the client node at all).

os-brick calls the InitiatorConnector.factory interface to construct the InitiatorConnector subclass instance matching the volume type (e.g. initiator.ISCSI, initiator.RBD).

At attach time os-brick only knows the volume's ID, so how does it learn the volume type? The driver_volume_type field in the JSON returned by the volume driver's initialize_connection interface identifies it.

The InitiatorConnector subclass instance's connect_volume method then attaches the volume on the client node (for iSCSI that means an iscsiadm login that produces a local block device; for RBD it does nothing).
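
Putting the pieces together, a minimal sketch of driving os-brick by hand, assuming an iSCSI volume and root privileges (every literal value below is illustrative):

from os_brick.initiator import connector

# Steps 1-2: collect this node's properties; Cinder's initialize_connection
# gets these as input (the iSCSI connector contributes the initiator IQN).
props = connector.get_connector_properties(
    root_helper='sudo', my_ip='192.168.0.10',
    multipath=False, enforce_multipath=False)

# Steps 3-4: driver_volume_type from initialize_connection ('iscsi' here)
# selects the InitiatorConnector subclass.
conn = connector.InitiatorConnector.factory('iscsi', root_helper='sudo')

# Step 5: log in and surface a local block device.
device = conn.connect_volume({
    'target_portal': '192.168.0.20:3260',
    'target_iqn': 'iqn.2010-10.org.openstack:volume-XXXX',
    'target_lun': 1,
})
print(device)  # e.g. {'type': 'block', 'path': '/dev/sdb', ...}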

Cinder

Cinder handles the control plane for volumes (create, delete, update, query); it is not responsible for how clients consume them.

It does, however, tell the client every parameter needed to use a volume, via the initialize_connection interface implemented by each volume driver.

Although Cinder supports a great many drivers, the broad driver categories are actually few; the common ones boil down to RBD, NFS, iSCSI, and FC.

On the os-brick side we attach via iSCSI, so on the Cinder driver side our initialize_connection returns a driver_volume_type of iscsi, and everything else it returns is what the iSCSI client needs in order to log in.

The ACL relationship has to be established during initialize_connection, and target discovery happens at this stage too; only then can the call return the target portal, IQN, and the rest of the information the iSCSI initiator needs to log in.
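
Concretely, an iSCSI driver's initialize_connection returns something along these lines (a hedged example; every value is illustrative), and os-brick hands the data portion straight to connect_volume:

# Typical return value of an iSCSI Cinder driver's initialize_connection
# (CHAP fields appear only when auth is enabled):
connection_info = {
    'driver_volume_type': 'iscsi',
    'data': {
        'target_portal': '192.168.0.20:3260',
        'target_iqn': 'iqn.2010-10.org.openstack:volume-XXXX',
        'target_lun': 1,
        'auth_method': 'CHAP',
        'auth_username': 'user',
        'auth_password': 'secret',
    },
}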

References

First step for reading OpenStack Nova source code

https://medium.com/uckey/first-step-for-reading-openstack-nova-source-code-280758ff77c9

Block Device Mapping in Nova

https://docs.openstack.org/nova/stein/user/block-device-mapping.html

OpenStack Note

https://gtcsq.readthedocs.io/en/latest/openstack/index.html

nova-api: api-paste流程说明 (api-paste flow walkthrough, in Chinese)

https://bbs.huaweicloud.com/blogs/3635f96d993111e7b8317ca23e93a891