2022-07-19
OpenStack-Pike Deployment: Cinder (Part 7)
Overview

The OpenStack Block Storage service (cinder) adds persistent storage to virtual machines. Block Storage provides an infrastructure for managing volumes and interacts with the Compute service to provide volumes for instances. The service also enables management of volume snapshots and volume types.

The Block Storage service typically consists of the following components:

- cinder-api: accepts API requests and routes them to cinder-volume for action.
- cinder-volume: interacts directly with the Block Storage service, and with processes such as cinder-scheduler through a message queue. It responds to read and write requests sent to the Block Storage service to maintain state, and can interact with a variety of storage providers through a driver architecture.
- cinder-scheduler daemon: selects the optimal storage provider node on which to create a volume; similar to the nova-scheduler component.
- cinder-backup daemon: provides backup of volumes of any type to a backup storage provider. Like cinder-volume, it can interact with a variety of storage providers through a driver architecture.
- Message queue: routes information between Block Storage processes.

Install and configure the controller node

Prerequisites

1. Create the database. Connect to the database server as root:

```
mysql -u root -p000000
```

Create the cinder database and grant the cinder user access to it:

```
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000';
```

2. Source the admin credentials:

```
. admin-openrc
```

3. Create the service credentials. Create a cinder user:

```
openstack user create --domain default --password 000000 cinder
```

Add the admin role to the cinder user:

```
openstack role add --project service --user cinder admin
```

Create the service entities:

```
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
```

Create the Block Storage service API endpoints:

```
openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
  volumev3 admin http://controller:8776/v3/%\(project_id\)s
```

{collapse}
{collapse-item label="CMD"}

```
[root@openstack ~]# mysql -u root -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 379
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye
[root@openstack ~]# . admin-openrc
[root@openstack ~]# openstack user create --domain default --password 000000 cinder
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 15c1c62c21f543d984563abe5c063726 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@openstack ~]# openstack role add --project service --user cinder admin
[root@openstack ~]# openstack service create --name cinderv2 \
> --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 052ab471e8ed4e5ca687bd73537935b5 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
[root@openstack ~]# openstack service create --name cinderv3 \
> --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | af52c95327614d1e9fb70286fcb552ea |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 2c3e428aa796442696e7f5919175c1e2         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 052ab471e8ed4e5ca687bd73537935b5         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | c25695b19d1b4e33a9ea2d4c023b8732         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 052ab471e8ed4e5ca687bd73537935b5         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 2928643fb2ac4bd99aa8cc795b55f7e1         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 052ab471e8ed4e5ca687bd73537935b5         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | a63b291e2ed4450e91ee574e4f9b4a7a         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | af52c95327614d1e9fb70286fcb552ea         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | f47bd12c58af41298cdc7c5fe76cb8d4         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | af52c95327614d1e9fb70286fcb552ea         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@openstack ~]# openstack endpoint create --region RegionOne \
> volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 6dd9f8c300754f32abecf2b77e241d5d         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | af52c95327614d1e9fb70286fcb552ea         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
```

{/collapse-item}
{/collapse}

Install and configure components

1. Install the packages:

```
yum install -y openstack-cinder
```

2. Configure cinder.conf:

```
# sed -i.bak '/^#/d;/^$/d' /etc/cinder/cinder.conf
# vim /etc/cinder/cinder.conf
[database]
# Database access
connection = mysql+pymysql://cinder:000000@controller/cinder

[DEFAULT]
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone
# Management interface IP address of the controller node
my_ip = 178.120.2.100

[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 000000

[oslo_concurrency]
# Lock path
lock_path = /var/lib/cinder/tmp
```

3. Populate the Block Storage database:

```
# su -s /bin/sh -c "cinder-manage db sync" cinder
```

Configure the compute node to use Block Storage

Edit nova.conf:

```
# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
```

Finalize installation

1. Restart the Compute API service:

```
systemctl restart openstack-nova-api.service
```

2. Start the Block Storage services and enable them at boot:

```
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
```

Install and configure the storage node

Prerequisites

1. Install the supporting utility packages:

```
yum install -y lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad.service && systemctl start lvm2-lvmetad.service
```

2. Create the LVM physical volume (the data disk on this node is /dev/vdb, as the lsblk output below shows):

```
pvcreate /dev/vdb
```

3. Create the LVM volume group:

```
vgcreate cinder-volumes /dev/vdb
```

4. Add a filter so that LVM scans only the cinder disk:

```
# vim /etc/lvm/lvm.conf
filter = [ "a/vdb/", "r/.*/"]
```

{collapse}
{collapse-item label="CMD"}

```
[root@storage_node ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0              11:0    1 1024M  0 rom
vda             252:0    0   30G  0 disk
├─vda1          252:1    0    1G  0 part /boot
└─vda2          252:2    0   29G  0 part
  └─centos-root 253:0    0   29G  0 lvm  /
vdb             252:16   0  100G  0 disk
vdc             252:32   0  100G  0 disk
[root@storage_node ~]# pvcreate /dev/vdb
  Physical volume "/dev/vdb" successfully created.
[root@storage_node ~]# vgcreate cinder-volumes /dev/vdb
  Volume group "cinder-volumes" successfully created
```

{/collapse-item}
{/collapse}

Install and configure components

1. Install the packages:

```
yum install -y openstack-cinder targetcli python-keystone
```

2. Configure cinder.conf:

```
# sed -i.bak '/^#/d;/^$/d' /etc/cinder/cinder.conf
# vim /etc/cinder/cinder.conf
[database]
# Database access
connection = mysql+pymysql://cinder:000000@controller/cinder

[DEFAULT]
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone
# Management network interface IP address of the storage node
my_ip = 178.120.2.192
# Enable the LVM back end
enabled_backends = lvm
# Location of the Image service API
glance_api_servers = http://controller:9292

[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 000000

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
# Lock path
lock_path = /var/lib/cinder/tmp
```

Finalize installation

Start the volume service and the iSCSI target service, and enable them at boot:

```
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
```
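The storage-node cinder.conf edits above can also be generated programmatically instead of being typed into vim. A minimal sketch using Python's stdlib configparser, with the exact values from this guide (password 000000, volume group cinder-volumes); illustrative only, not how the installer itself works:

```python
# Sketch: render the storage-node cinder.conf fragments shown above.
# interpolation=None so '%' characters (if any) are written literally.
import configparser
import io

def render_cinder_conf() -> str:
    cfg = configparser.ConfigParser(interpolation=None)
    # [DEFAULT] holds the options from the guide's DEFAULT section
    cfg["DEFAULT"] = {
        "transport_url": "rabbit://openstack:000000@controller",
        "auth_strategy": "keystone",
        "my_ip": "178.120.2.192",
        "enabled_backends": "lvm",
        "glance_api_servers": "http://controller:9292",
    }
    cfg["database"] = {
        "connection": "mysql+pymysql://cinder:000000@controller/cinder",
    }
    cfg["lvm"] = {
        "volume_driver": "cinder.volume.drivers.lvm.LVMVolumeDriver",
        "volume_group": "cinder-volumes",
        "iscsi_protocol": "iscsi",
        "iscsi_helper": "lioadm",
    }
    buf = io.StringIO()
    cfg.write(buf)          # emits INI text, one "key = value" per line
    return buf.getvalue()
```

In a real deployment you would write this string to /etc/cinder/cinder.conf (or use a tool such as crudini); the sketch only shows that the file is plain INI that round-trips through configparser.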
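The `sed -i.bak '/^#/d;/^$/d' FILE` idiom used throughout these posts strips whole-line comments and blank lines before editing, leaving only the effective settings. For readers without sed, an equivalent sketch in Python:

```python
# Equivalent of: sed '/^#/d;/^$/d'
# Deletes lines that start with '#' and lines that are completely empty,
# keeping everything else (including whitespace-only lines, as sed does).
def strip_comments_and_blanks(text: str) -> str:
    kept = []
    for line in text.splitlines():
        if line.startswith("#"):   # /^#/d
            continue
        if line == "":             # /^$/d
            continue
        kept.append(line)
    return "\n".join(kept) + ("\n" if kept else "")
```

The `-i.bak` flag in the original command additionally saves the untouched file as FILE.bak, which this sketch does not replicate.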
2022-07-14
OpenStack-Pike Deployment: Dashboard (Part 6)
Dashboard

Install and configure components

1. Install the packages:

```
yum install -y openstack-dashboard
```

2. Configure local_settings (back up the original first):

```
cp -a /etc/openstack-dashboard/local_settings /root/local_settings
vim /etc/openstack-dashboard/local_settings
```

```
# The dashboard runs on the controller node
OPENSTACK_HOST = "controller"
# Allow all hosts to access the dashboard
ALLOWED_HOSTS = ['*']
# Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
# Enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
# Default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# Default role for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# With networking option 1, disable support for layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
```

{collapse}
{collapse-item label="Execution log"}

The resulting local_settings:

[root@controller ~]# cat /etc/openstack-dashboard/local_settings
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
WEBROOT = '/dashboard/'
ALLOWED_HOSTS = ['*']
LOCAL_PATH = '/tmp'
SECRET_KEY='04f3ac91f6f48932c88a'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': 'controller:11211', } }
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = { "identity": 3, "image": 2, "volume": 2, }
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" OPENSTACK_KEYSTONE_BACKEND = { 'name': 'native', 'can_edit_user': True, 'can_edit_group': True, 'can_edit_project': True, 'can_edit_domain': True, 'can_edit_role': True, } OPENSTACK_HYPERVISOR_FEATURES = { 'can_set_mount_point': False, 'can_set_password': False, 'requires_keypair': False, 'enable_quotas': True } OPENSTACK_CINDER_FEATURES = { 'enable_backup': False, } OPENSTACK_NEUTRON_NETWORK = { 'enable_router': False, 'enable_quotas': False, 'enable_distributed_router': False, 'enable_ha_router': False, 'enable_lb': False, 'enable_firewall': False, 'enable_vpn': False, 'enable_fip_topology_check': False, 'supported_vnic_types': ['*'], 'physical_networks': [], } OPENSTACK_HEAT_STACK = { 'enable_user_pass': True, } IMAGE_CUSTOM_PROPERTY_TITLES = { "architecture": _("Architecture"), "kernel_id": _("Kernel ID"), "ramdisk_id": _("Ramdisk ID"), "image_state": _("Euca2ools state"), "project_id": _("Project ID"), "image_type": _("Image Type"), } IMAGE_RESERVED_CUSTOM_PROPERTIES = [] API_RESULT_LIMIT = 1000 API_RESULT_PAGE_SIZE = 20 SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024 INSTANCE_LOG_LENGTH = 35 DROPDOWN_MAX_ITEMS = 30 TIME_ZONE = "UTC" POLICY_FILES_PATH = '/etc/openstack-dashboard' LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'console': { 'format': '%(levelname)s %(name)s %(message)s' }, 'operation': { 'format': '%(message)s' }, }, 'handlers': { 'null': { 'level': 'DEBUG', 'class': 'logging.NullHandler', }, 'console': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'console', }, 'operation': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'operation', }, }, 'loggers': { 'django.db.backends': { 'handlers': ['null'], 'propagate': False, }, 'requests': { 'handlers': ['null'], 'propagate': False, }, 'horizon': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'horizon.operation_log': { 'handlers': ['operation'], 'level': 'INFO', 'propagate': 
False, }, 'openstack_dashboard': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'novaclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'cinderclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'keystoneclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'glanceclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'neutronclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'heatclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'swiftclient': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'openstack_auth': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'nose.plugins.manager': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'django': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': False, }, 'iso8601': { 'handlers': ['null'], 'propagate': False, }, 'scss': { 'handlers': ['null'], 'propagate': False, }, }, } SECURITY_GROUP_RULES = { 'all_tcp': { 'name': _('All TCP'), 'ip_protocol': 'tcp', 'from_port': '1', 'to_port': '65535', }, 'all_udp': { 'name': _('All UDP'), 'ip_protocol': 'udp', 'from_port': '1', 'to_port': '65535', }, 'all_icmp': { 'name': _('All ICMP'), 'ip_protocol': 'icmp', 'from_port': '-1', 'to_port': '-1', }, 'ssh': { 'name': 'SSH', 'ip_protocol': 'tcp', 'from_port': '22', 'to_port': '22', }, 'smtp': { 'name': 'SMTP', 'ip_protocol': 'tcp', 'from_port': '25', 'to_port': '25', }, 'dns': { 'name': 'DNS', 'ip_protocol': 'tcp', 'from_port': '53', 'to_port': '53', }, 'http': { 'name': 'HTTP', 'ip_protocol': 'tcp', 'from_port': '80', 'to_port': '80', }, 'pop3': { 'name': 'POP3', 'ip_protocol': 'tcp', 'from_port': '110', 'to_port': '110', }, 'imap': { 'name': 'IMAP', 'ip_protocol': 'tcp', 'from_port': '143', 'to_port': '143', }, 'ldap': { 'name': 'LDAP', 'ip_protocol': 'tcp', 'from_port': '389', 'to_port': '389', }, 
'https': { 'name': 'HTTPS', 'ip_protocol': 'tcp', 'from_port': '443', 'to_port': '443', }, 'smtps': { 'name': 'SMTPS', 'ip_protocol': 'tcp', 'from_port': '465', 'to_port': '465', }, 'imaps': { 'name': 'IMAPS', 'ip_protocol': 'tcp', 'from_port': '993', 'to_port': '993', }, 'pop3s': { 'name': 'POP3S', 'ip_protocol': 'tcp', 'from_port': '995', 'to_port': '995', }, 'ms_sql': { 'name': 'MS SQL', 'ip_protocol': 'tcp', 'from_port': '1433', 'to_port': '1433', }, 'mysql': { 'name': 'MYSQL', 'ip_protocol': 'tcp', 'from_port': '3306', 'to_port': '3306', }, 'rdp': { 'name': 'RDP', 'ip_protocol': 'tcp', 'from_port': '3389', 'to_port': '3389', }, }
REST_API_REQUIRED_SETTINGS = ['OPENSTACK_HYPERVISOR_FEATURES', 'LAUNCH_INSTANCE_DEFAULTS', 'OPENSTACK_IMAGE_FORMATS', 'OPENSTACK_KEYSTONE_DEFAULT_DOMAIN', 'CREATE_IMAGE_DEFAULTS']
ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': [], 'ipv6': []}

{/collapse-item}
{/collapse}

Finalize installation

Restart the web server and the session storage service:

```
systemctl restart httpd.service memcached.service
```
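Several values in local_settings are derived from `OPENSTACK_HOST` by plain `%` string formatting rather than hard-coded, which is why changing that one variable repoints the whole dashboard. A trivial sketch of the derivation used above:

```python
# How local_settings builds the Keystone URL from OPENSTACK_HOST:
#   OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_HOST = "controller"

def keystone_url(host: str) -> str:
    # Identity API v3 endpoint, matching the setting shown above
    return "http://%s:5000/v3" % host

def memcached_location(host: str) -> str:
    # CACHES['default']['LOCATION'] uses the same host with port 11211
    return "%s:11211" % host
```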
2022-07-14
OpenStack-Pike Deployment: Neutron (Part 5)
Neutron

Install and configure the controller node

Prerequisites

1. Create the database and grant privileges. Log in to the database server as root:

```
mysql -u root -p000000
```

Create the neutron database and grant the neutron user full privileges on it:

```
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY '000000';
```

2. Source the admin credentials:

```
. admin-openrc
```

3. Create the service credentials. Create the neutron user:

```
openstack user create --domain default --password 000000 neutron
```

Give the neutron user the admin role in the service project:

```
openstack role add --project service --user neutron admin
```

Create the neutron service entity:

```
openstack service create --name neutron --description "OpenStack Networking" network
```

4. Create the Networking service API endpoints:

```
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
```

{collapse}
{collapse-item label="Execution log"}

Prerequisites:

[root@controller ~]# mysql -u root -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 68
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye
[root@controller ~]# . 
admin-openrc [root@controller ~]# openstack user create --domain default --password-prompt neutron User Password: Repeat User Password: +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | bd11a70055634b8996bdd7096ea91a60 | | name | neutron | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ [root@controller ~]# openstack role add --project service --user neutron admin [root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Networking | | enabled | True | | id | 3f33133eae714fa492723f3617e8705f | | name | neutron | | type | network | +-------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 4df23df9efe547ea88b5ec0e01201c4a | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 3f33133eae714fa492723f3617e8705f | | service_name | neutron | | service_type | network | | url | http://controller:9696 | +--------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 0412574e5e3f4b3ca5e7e18f753d7e80 | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 3f33133eae714fa492723f3617e8705f | | service_name | neutron | | service_type | network | | url | 
http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2251b821a7484bb0a5eb65697af351f6 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 3f33133eae714fa492723f3617e8705f |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

{/collapse-item}
{/collapse}

Configure the networking option (flat network)

Configuration reference: https://docs.openstack.org/neutron/latest/configuration/config.html

Install the components:

```
yum install -y openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
```

Configure the server component

Configure neutron.conf:

```
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf
# vim /etc/neutron/neutron.conf
[database]
# Database access
connection = mysql+pymysql://neutron:000000@controller/neutron

[DEFAULT]
# Enable the ML2 plug-in and disable additional plug-ins
core_plugin = ml2
service_plugins =
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Identity service access
auth_strategy = keystone
# Notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000

[nova]
# Notify Compute of network topology changes
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000

[oslo_concurrency]
# Lock path
lock_path = /var/lib/neutron/tmp
```

Configure the ML2 plug-in

Configure ml2_conf.ini:

```
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# Enable flat and VLAN networks
type_drivers = flat,vlan
# Disable self-service networks
tenant_network_types =
# Enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
# Enable the port security extension driver
extension_drivers = port_security

[securitygroup]
# Enable ipset to improve the efficiency of security group rules
enable_ipset = true
```

Configure the Linux bridge agent

Configure linuxbridge_agent.ini:

```
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
# Map the flat (provider) network to the physical network interface
physical_interface_mappings = provider:eth0

[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false

[securitygroup]
# Enable security groups and the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Configure the DHCP agent

Configure dhcp_agent.ini:

```
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/dhcp_agent.ini
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
# Linux bridge interface driver, Dnsmasq DHCP driver, isolated metadata
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```

{collapse}
{collapse-item label="Execution log"}

Configure the server component:

```
[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 \
> openstack-neutron-linuxbridge ebtables
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:000000@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = true
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth0
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = false
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/dhcp_agent.ini
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[root@controller ~]# cat /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]
```

{/collapse-item}
{/collapse}

Configure the metadata agent

Configure metadata_agent.ini:

```
sed -i.bak '/^#/d;/^$/d' /etc/neutron/metadata_agent.ini
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
# Metadata host and shared secret
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
```

{collapse}
{collapse-item label="Execution log"}

Configure the metadata agent:

```
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/metadata_agent.ini
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[root@controller ~]# cat /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
[agent]
[cache]
```

{/collapse-item}
{/collapse}

Configure Compute to use the Networking service

Configure nova.conf:

```
vim /etc/nova/nova.conf
[neutron]
# Access parameters, metadata proxy, and shared secret
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000
```

{collapse}
{collapse-item label="Execution log"}

Configure Compute to use the Networking service:

```
[root@controller ~]# vim /etc/nova/nova.conf
```

{/collapse-item}
{/collapse}

Finalize installation

1. Create the plugin.ini symlink:

```
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```

2. Populate the neutron database:

```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```

3. Restart the nova-api service:

```
systemctl restart openstack-nova-api.service
```

4. Start the Networking services and enable them at boot:

```
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
```

5. Apply the required kernel networking parameters:

```
[root@controller ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
```

{collapse}
{collapse-item label="Execution log"}

Finalize installation:

[root@controller ~]# ln -s
/etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini [root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ > --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. Running upgrade for neutron ... INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. INFO [alembic.runtime.migration] Running upgrade -> kilo, kilo_initial INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, Add availability zone INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, add is_default to subnetpool INFO 
[alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, Add standard attribute table INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, Add network availability zone INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, Add router availability zone INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, Add ip_version to AddressScope INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664, Add tables and attributes to support external DNS integration INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5, add_unique_ha_router_agent_port_bindings INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f, Auto Allocated Topology - aka Get-Me-A-Network INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821, add dynamic routing model data INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4, add_bgp_dragent_model_data INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81, rbac_qos_policy INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6, Add resource_versions row to agent table INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532, tag support INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f, add_timestamp_to_base_resources INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a, Add desc to standard attr table INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b, qos dscp db addition INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73, Add support for VLAN trunking INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule. 
INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39, standardattributes migration INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b, DVR sheduling refactoring INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050, Drop NEC plugin tables INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9, rbac_qos_policy INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada, network_rbac_external INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc, standard_desc INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53, device_owner_ha_replicate_int INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70, Rename ml2_network_segments table INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502, Add device_id index to Port INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee, provisioning_blocks.py INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048, add revisions table INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4, add dns name to portdnses INFO [alembic.runtime.migration] Running 
upgrade 89ab9a816d70 -> c879c5e1ee90, Add segment_id to subnet INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4, Add segment_host_mapping table. INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426, Rename ml2_dvr_port_bindings INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524, Remove mtu column from networks. INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37, Add flavor_id to Router INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa, uniq_routerports0port_id INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf, Add support for Subnet Service Types INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4, add_qos_minimum_bandwidth_rules INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e, add standardattr to qos policies INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc, uniq_floatingips0floating_network_id0fixed_port_id0fixed_ip_addr INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d, Add ip_allocation to port INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70, add_pk_version_table INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c, extend_pk_with_host_and_add_status_to_ml2_port_binding INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c, Add data_plane_status to Port INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da, qos add direction to bw_limit_rule table INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192, add is default to qos policies INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9, logging api INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6, Add dns_domain to portdnses INFO [alembic.runtime.migration] 
Running upgrade 349b6fd605a6 -> 7d32f979895f, add mtu for networks INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a, migrate dns name from port INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad, rename tenant to project INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab, Add routerport bindings for L3 HA INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0, migrate to pluggable ipam INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62, add standardattr to qos policies INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353, Add Name and Description to the networksegments table INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586, Add binding index to RouterL3AgentBinding INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d, Remove availability ranges. OK [root@controller ~]# systemctl restart openstack-nova-api.service [root@controller ~]# systemctl enable neutron-server.service \ > neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ > neutron-metadata-agent.service Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service. Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service. Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service. Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service. 
[root@controller ~]# systemctl start neutron-server.service \
> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
> neutron-metadata-agent.service
{/collapse-item}
{/collapse}
Install and configure the compute node
Install the components
yum install -y openstack-neutron-linuxbridge ebtables ipset
Configure the common components
Configure neutron.conf
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf
# vim /etc/neutron/neutron.conf
[DEFAULT]
# Configure RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Configure Identity service access
auth_strategy = keystone
[keystone_authtoken]
# Configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[oslo_concurrency]
# Configure the lock path
lock_path = /var/lib/neutron/tmp
{collapse}
{collapse-item label="View execution output"}
Configure the common components
[root@compute ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/neutron.conf
[root@compute ~]# vim /etc/neutron/neutron.conf
[root@compute ~]# cat /etc/neutron/neutron.conf
[DEFAULT]
# Configure RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Configure Identity service access
auth_strategy = keystone
[agent]
[cors]
[database]
[keystone_authtoken]
# Configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000
[matchmaker_redis]
[nova]
[oslo_concurrency]
# Configure the lock path
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
{/collapse-item}
{/collapse}
Configure the networking options (flat network)
Configure linuxbridge_agent.ini
# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
# Map the flat provider network to the physical network interface
physical_interface_mappings = provider:eth0
[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false
[securitygroup]
# Enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
{collapse}
{collapse-item label="View execution output"}
Configure the networking options (flat network)
[root@compute ~]# sed -i.bak '/^#/d;/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@compute ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
# Map the flat provider network to the physical network interface
physical_interface_mappings = provider:eth0
[securitygroup]
# Enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
# Disable VXLAN overlay networks
enable_vxlan = false
{/collapse-item}
{/collapse}
Configure the Compute service to use the Networking service
Configure nova.conf
# vim /etc/nova/nova.conf
[neutron]
# Configure access parameters
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
{collapse}
{collapse-item label="View execution output"}
Configure the Compute service to use the Networking service
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
# Configure access parameters
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
{/collapse-item}
{/collapse}
Finalize the installation
1. Restart the Compute service
systemctl restart openstack-nova-compute.service
2. Start the Linux bridge agent and enable it at boot
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
3. Apply the kernel bridge-filtering settings
[root@compute ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
{collapse}
{collapse-item label="View execution output"}
Finalize the installation
[root@compute ~]# systemctl restart openstack-nova-compute.service
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
{/collapse-item}
{/collapse}
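The `sed -i.bak '/^#/d;/^$/d'` one-liner used before editing each config file deletes comment lines and blank lines in place while keeping a `.bak` backup of the original. A minimal reproducible sketch of the idiom, run against a throwaway file (`/tmp/sample.conf` is an illustrative path, not part of the deployment):

```shell
# Create a sample INI-style config containing a comment and a blank line
cat > /tmp/sample.conf <<'EOF'
# this comment will be removed

[DEFAULT]
transport_url = rabbit://openstack:000000@controller
EOF

# Delete comment lines (/^#/d) and blank lines (/^$/d) in place,
# keeping the original file as /tmp/sample.conf.bak
sed -i.bak '/^#/d;/^$/d' /tmp/sample.conf

cat /tmp/sample.conf     # only the two real config lines remain
```

The backup copy makes the edit reversible, which is why the guide prefers this over a plain `sed -i`.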
July 14, 2022
222 reads
0 comments
0 likes
2022-07-13
OpenStack-Pike Deployment: Nova (Part 4)
Nova Overview
Use OpenStack Compute to host and manage cloud computing systems. OpenStack Compute is a major part of an Infrastructure-as-a-Service (IaaS) system. The main modules are implemented in Python.
OpenStack Compute interacts with OpenStack Identity for authentication; OpenStack Image service for disk and server images; and OpenStack Dashboard for the user and administrative interface. Image access is limited by projects and by users; quotas are limited per project (the number of instances, for example). OpenStack Compute can scale horizontally on standard hardware, and download images to launch instances.
OpenStack Compute consists of the following areas and their components:
nova-api service
Accepts and responds to end user compute API calls. The service supports the OpenStack Compute API, the Amazon EC2 API, and a special Admin API for privileged users to perform administrative actions. It enforces some policies and initiates most orchestration activities, such as running an instance.
nova-api-metadata service
Accepts metadata requests from instances. The nova-api-metadata service is generally used when you run in multi-host mode with nova-network installations. For details, see Metadata service in the Compute Administrator Guide.
nova-compute service
A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example: XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware. Processing is fairly complex. Basically, the daemon accepts actions from the queue and performs a series of system commands such as launching a KVM instance and updating its state in the database.
nova-placement-api service
Tracks the inventory and usage of each provider. For details, see Placement API.
nova-scheduler service
Takes a virtual machine instance request from the queue and determines on which compute server host it runs.
nova-conductor module
Mediates interactions between the nova-compute service and the database. It eliminates direct accesses to the cloud database made by the nova-compute service. The nova-conductor module scales horizontally. However, do not deploy it on nodes where the nova-compute service runs. For more information, see the conductor section in the Configuration Options.
nova-consoleauth daemon
Authorizes tokens for users that console proxies provide. See nova-novncproxy and nova-xvpvncproxy. This service must be running for console proxies to work. You can run proxies of either type against a single nova-consoleauth service in a cluster configuration. For information, see About nova-consoleauth.
nova-novncproxy daemon
Provides a proxy for accessing running instances through a VNC connection. Supports browser-based novnc clients.
nova-spicehtml5proxy daemon
Provides a proxy for accessing running instances through a SPICE connection. Supports browser-based HTML5 clients.
nova-xvpvncproxy daemon
Provides a proxy for accessing running instances through a VNC connection. Supports an OpenStack-specific Java client.
The queue
A central hub for passing messages between daemons. Usually implemented with RabbitMQ, but it can also be implemented with another AMQP message queue, such as ZeroMQ.
SQL database
Stores most build-time and run-time states for a cloud infrastructure, including:
Available instance types
Instances in use
Available networks
Projects
Theoretically, OpenStack Compute can support any database that SQLAlchemy supports. Common databases are SQLite3 for test and development work, MySQL, MariaDB, and PostgreSQL.
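The prerequisites that follow create three Nova databases and grant the same access twice per database (for `localhost` and for remote `'%'` hosts). Rather than typing six GRANT statements by hand, the pattern can be sketched as a small loop that only prints the SQL (a sketch using this guide's `000000` password convention; the output could then be piped into `mysql -u root -p000000`):

```shell
# Build the six GRANT statements for the nova_api, nova, and nova_cell0
# databases, covering both local and remote access for the 'nova' user.
sql=""
for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    sql="${sql}GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY '000000';
"
  done
done

# Print the generated SQL; nothing is executed against the database here.
printf '%s' "$sql"
```

This is only a convenience for generating the statements; the manual steps below show the same grants typed out in full.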
Install and configure the controller node
Prerequisites
1. Create the databases and grant access
Log in to the database server as the root user:
mysql -u root -p000000
Create the nova_api, nova, and nova_cell0 databases:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
Grant proper access to the nova user:
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY '000000';
2. Source the admin credentials:
. admin-openrc
3. Create the Compute service credentials
Create the nova user:
openstack user create --domain default --password 000000 nova
Add the admin role to the nova user in the service project:
openstack role add --project service --user nova admin
Create the nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute
4. Create the Compute API service endpoints:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
5. Create the Placement service credentials
Create the placement user:
openstack user create --domain default --password 000000 placement
Add the admin role to the placement user in the service project:
openstack role add --project service --user placement admin
Create the placement service entity:
openstack service create --name placement --description "Placement API" placement
6. Create the Placement API endpoints:
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
{collapse}
{collapse-item label="View execution output"}
Prerequisites
[root@controller ~]# mysql -u root -p000000
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 37 Server version: 10.1.20-MariaDB MariaDB Server Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> CREATE DATABASE nova_api; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> CREATE DATABASE nova; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> CREATE DATABASE nova_cell0; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ -> IDENTIFIED BY '000000'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> exit Bye [root@controller ~]# . 
admin-openrc [root@controller ~]# openstack user create --domain default --password 000000 nova +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 8d9a97f85a7845deb20d54bc468bb549 | | name | nova | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ [root@controller ~]# openstack role add --project service --user nova admin [root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | OpenStack Compute | | enabled | True | | id | 4fce66abb9794a1796874dd4a5d8bf34 | | name | nova | | type | compute | +-------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 938784c1725e497b933016403c535c10 | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 4fce66abb9794a1796874dd4a5d8bf34 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | +--------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 9d357f7717134d228e51c484837104ac | | interface | internal | | region | RegionOne | | region_id | RegionOne | | service_id | 4fce66abb9794a1796874dd4a5d8bf34 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | 
+--------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 08ba2de0acfd45d3bd4892f1d3f17287 | | interface | admin | | region | RegionOne | | region_id | RegionOne | | service_id | 4fce66abb9794a1796874dd4a5d8bf34 | | service_name | nova | | service_type | compute | | url | http://controller:8774/v2.1 | +--------------+----------------------------------+ [root@controller ~]# openstack user create --domain default --password 000000 placement +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | 458722588d03402eb1ceab933c9d4045 | | name | placement | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+ [root@controller ~]# openstack role add --project service --user placement admin [root@controller ~]# openstack service create --name placement --description "Placement API" placement +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | Placement API | | enabled | True | | id | 3aa8d5df8bff4099831826e202972ab6 | | name | placement | | type | placement | +-------------+----------------------------------+ [root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778 +--------------+----------------------------------+ | Field | Value | +--------------+----------------------------------+ | enabled | True | | id | 2f9f6ac1ad5f4bd38eb1ac2607cb0b80 | | interface | public | | region | RegionOne | | region_id | RegionOne | | service_id | 3aa8d5df8bff4099831826e202972ab6 | | service_name | placement | | service_type | placement 
| | url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 58613208c94d41c89787783d25812098 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3aa8d5df8bff4099831826e202972ab6 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 009e3cc34fb342e3bdde0e5c8d3d8c80 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3aa8d5df8bff4099831826e202972ab6 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
{/collapse-item}
{/collapse}
Install and configure the components
1. Install the packages
yum install -y openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api
2. Configure nova.conf
sed -i.bak '/^#/d;/^$/d' /etc/nova/nova.conf
vim /etc/nova/nova.conf
[DEFAULT]
# Enable only the compute and metadata APIs
enabled_apis = osapi_compute,metadata
# Configure RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Management IP address of the controller node
my_ip = 178.120.2.10
# Enable support for the Networking service
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
# Configure database access
connection = mysql+pymysql://nova:000000@controller/nova_api
[database]
# Configure database access
connection = mysql+pymysql://nova:000000@controller/nova
[api]
# Configure Identity service access
auth_strategy = keystone
[keystone_authtoken]
# Configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000
[vnc]
enabled = true
# Configure the VNC proxy to use the management interface IP address of the controller node
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
# Configure the location of the Image service API
api_servers = http://controller:9292
[oslo_concurrency]
# Configure the lock path
lock_path = /var/lib/nova/tmp
[placement]
# Configure the Placement API
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 000000
3. Configure 00-nova-placement-api.conf
vim /etc/httpd/conf.d/00-nova-placement-api.conf
# Enable access to the Placement API
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
4. Populate the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
5. Populate the nova databases
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
6. Verify that nova, cell0, and cell1 are registered correctly
nova-manage cell_v2 list_cells
{collapse}
{collapse-item label="View execution output"}
Install and configure the components
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor \
> openstack-nova-console openstack-nova-novncproxy \
> openstack-nova-scheduler openstack-nova-placement-api
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package openstack-nova-api.noarch 1:16.1.6-1.el7 will be installed
--> Processing Dependency: openstack-nova-common = 1:16.1.6-1.el7 for package: 1:openstack-nova-api-16.1.6-1.el7.noarch
---> Package openstack-nova-conductor.noarch 1:16.1.6-1.el7 will be installed
---> Package openstack-nova-console.noarch 1:16.1.6-1.el7 will
be installed --> Processing Dependency: python-websockify >= 0.8.0 for package: 1:openstack-nova-console-16.1.6-1.el7.noarch ---> Package openstack-nova-novncproxy.noarch 1:16.1.6-1.el7 will be installed --> Processing Dependency: novnc for package: 1:openstack-nova-novncproxy-16.1.6-1.el7.noarch ---> Package openstack-nova-placement-api.noarch 1:16.1.6-1.el7 will be installed ---> Package openstack-nova-scheduler.noarch 1:16.1.6-1.el7 will be installed --> Running transaction check ---> Package novnc.noarch 0:0.5.1-2.el7 will be installed ---> Package openstack-nova-common.noarch 1:16.1.6-1.el7 will be installed --> Processing Dependency: python-nova = 1:16.1.6-1.el7 for package: 1:openstack-nova-common-16.1.6-1.el7.noarch ---> Package python-websockify.noarch 0:0.8.0-1.el7 will be installed --> Running transaction check ---> Package python-nova.noarch 1:16.1.6-1.el7 will be installed --> Processing Dependency: python-tooz >= 1.58.0 for package: 1:python-nova-16.1.6-1.el7.noarch --> Processing Dependency: python-paramiko >= 2.0 for package: 1:python-nova-16.1.6-1.el7.noarch --> Processing Dependency: python-oslo-versionedobjects >= 1.17.0 for package: 1:python-nova-16.1.6-1.el7.noarch --> Processing Dependency: python-oslo-reports >= 0.6.0 for package: 1:python-nova-16.1.6-1.el7.noarch --> Processing Dependency: python-os-vif >= 1.7.0 for package: 1:python-nova-16.1.6-1.el7.noarch --> Processing Dependency: python-microversion-parse >= 0.1.2 for package: 1:python-nova-16.1.6-1.el7.noarch --> Processing Dependency: python-psutil for package: 1:python-nova-16.1.6-1.el7.noarch --> Processing Dependency: python-os-traits for package: 1:python-nova-16.1.6-1.el7.noarch --> Running transaction check ---> Package python-paramiko.noarch 0:2.1.1-9.el7 will be installed ---> Package python-tooz.noarch 0:1.58.0-1.el7 will be installed --> Processing Dependency: python-voluptuous >= 0.8.9 for package: python-tooz-1.58.0-1.el7.noarch --> Processing Dependency: python-zake for 
package: python-tooz-1.58.0-1.el7.noarch --> Processing Dependency: python-redis for package: python-tooz-1.58.0-1.el7.noarch ---> Package python2-microversion-parse.noarch 0:0.1.4-2.el7 will be installed ---> Package python2-os-traits.noarch 0:0.3.3-1.el7 will be installed ---> Package python2-os-vif.noarch 0:1.7.0-1.el7 will be installed ---> Package python2-oslo-reports.noarch 0:1.22.1-1.el7 will be installed ---> Package python2-oslo-versionedobjects.noarch 0:1.26.2-1.el7 will be installed --> Processing Dependency: python-oslo-versionedobjects-lang = 1.26.2-1.el7 for package: python2-oslo-versionedobjects-1.26.2-1.el7.noarch --> Processing Dependency: python-mock for package: python2-oslo-versionedobjects-1.26.2-1.el7.noarch ---> Package python2-psutil.x86_64 0:5.2.2-2.el7 will be installed --> Running transaction check ---> Package python-oslo-versionedobjects-lang.noarch 0:1.26.2-1.el7 will be installed ---> Package python-redis.noarch 0:2.10.3-1.el7 will be installed ---> Package python-voluptuous.noarch 0:0.8.9-1.el7 will be installed ---> Package python2-mock.noarch 0:2.0.0-1.el7 will be installed ---> Package python2-zake.noarch 0:0.2.2-2.el7 will be installed --> Processing Dependency: python-kazoo for package: python2-zake-0.2.2-2.el7.noarch --> Running transaction check ---> Package python-kazoo.noarch 0:2.2.1-1.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================================ Package Arch Version Repository Size ================================================================================================ Installing: openstack-nova-api noarch 1:16.1.6-1.el7 OpenStack-Pike-tuna 8.2 k openstack-nova-conductor noarch 1:16.1.6-1.el7 OpenStack-Pike-tuna 5.8 k openstack-nova-console noarch 1:16.1.6-1.el7 OpenStack-Pike-tuna 6.8 k openstack-nova-novncproxy noarch 1:16.1.6-1.el7 OpenStack-Pike-tuna 6.2 k openstack-nova-placement-api noarch 
1:16.1.6-1.el7 OpenStack-Pike-tuna 6.0 k openstack-nova-scheduler noarch 1:16.1.6-1.el7 OpenStack-Pike-tuna 5.8 k Installing for dependencies: novnc noarch 0.5.1-2.el7 OpenStack-Pike-tuna 176 k openstack-nova-common noarch 1:16.1.6-1.el7 OpenStack-Pike-tuna 371 k python-kazoo noarch 2.2.1-1.el7 OpenStack-Pike-tuna 130 k python-nova noarch 1:16.1.6-1.el7 OpenStack-Pike-tuna 3.3 M python-oslo-versionedobjects-lang noarch 1.26.2-1.el7 OpenStack-Pike-tuna 8.0 k python-paramiko noarch 2.1.1-9.el7 base 269 k python-redis noarch 2.10.3-1.el7 OpenStack-Pike-tuna 94 k python-tooz noarch 1.58.0-1.el7 OpenStack-Pike-tuna 94 k python-voluptuous noarch 0.8.9-1.el7 OpenStack-Pike-tuna 36 k python-websockify noarch 0.8.0-1.el7 OpenStack-Pike-tuna 69 k python2-microversion-parse noarch 0.1.4-2.el7 OpenStack-Pike-tuna 16 k python2-mock noarch 2.0.0-1.el7 OpenStack-Pike-tuna 120 k python2-os-traits noarch 0.3.3-1.el7 OpenStack-Pike-tuna 22 k python2-os-vif noarch 1.7.0-1.el7 OpenStack-Pike-tuna 59 k python2-oslo-reports noarch 1.22.1-1.el7 OpenStack-Pike-tuna 53 k python2-oslo-versionedobjects noarch 1.26.2-1.el7 OpenStack-Pike-tuna 72 k python2-psutil x86_64 5.2.2-2.el7 OpenStack-Pike-tuna 310 k python2-zake noarch 0.2.2-2.el7 OpenStack-Pike-tuna 39 k Transaction Summary ================================================================================================ Install 6 Packages (+18 Dependent packages) Total download size: 5.2 M Installed size: 23 M Downloading packages: (1/24): openstack-nova-api-16.1.6-1.el7.noarch.rpm | 8.2 kB 00:00:01 (2/24): novnc-0.5.1-2.el7.noarch.rpm | 176 kB 00:00:01 (3/24): openstack-nova-common-16.1.6-1.el7.noarch.rpm | 371 kB 00:00:00 (4/24): openstack-nova-console-16.1.6-1.el7.noarch.rpm | 6.8 kB 00:00:00 (5/24): openstack-nova-conductor-16.1.6-1.el7.noarch.rpm | 5.8 kB 00:00:00 (6/24): openstack-nova-novncproxy-16.1.6-1.el7.noarch.rpm | 6.2 kB 00:00:00 (7/24): openstack-nova-placement-api-16.1.6-1.el7.noarch.rpm | 6.0 kB 00:00:00 (8/24): 
openstack-nova-scheduler-16.1.6-1.el7.noarch.rpm | 5.8 kB 00:00:00 (9/24): python-kazoo-2.2.1-1.el7.noarch.rpm | 130 kB 00:00:00 (10/24): python-oslo-versionedobjects-lang-1.26.2-1.el7.noarch.rpm | 8.0 kB 00:00:00 (11/24): python-redis-2.10.3-1.el7.noarch.rpm | 94 kB 00:00:00 (12/24): python-paramiko-2.1.1-9.el7.noarch.rpm | 269 kB 00:00:00 (13/24): python-tooz-1.58.0-1.el7.noarch.rpm | 94 kB 00:00:00 (14/24): python-voluptuous-0.8.9-1.el7.noarch.rpm | 36 kB 00:00:00 (15/24): python-websockify-0.8.0-1.el7.noarch.rpm | 69 kB 00:00:01 (16/24): python2-microversion-parse-0.1.4-2.el7.noarch.rpm | 16 kB 00:00:00 (17/24): python2-mock-2.0.0-1.el7.noarch.rpm | 120 kB 00:00:00 (18/24): python-nova-16.1.6-1.el7.noarch.rpm | 3.3 MB 00:00:03 (19/24): python2-os-traits-0.3.3-1.el7.noarch.rpm | 22 kB 00:00:00 (20/24): python2-os-vif-1.7.0-1.el7.noarch.rpm | 59 kB 00:00:00 (21/24): python2-oslo-reports-1.22.1-1.el7.noarch.rpm | 53 kB 00:00:00 (22/24): python2-oslo-versionedobjects-1.26.2-1.el7.noarch.rpm | 72 kB 00:00:00 (23/24): python2-zake-0.2.2-2.el7.noarch.rpm | 39 kB 00:00:00 (24/24): python2-psutil-5.2.2-2.el7.x86_64.rpm | 310 kB 00:00:00 ------------------------------------------------------------------------------------------------ Total 738 kB/s | 5.2 MB 00:00:07 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : python-websockify-0.8.0-1.el7.noarch 1/24 Installing : python2-psutil-5.2.2-2.el7.x86_64 2/24 Installing : python2-oslo-reports-1.22.1-1.el7.noarch 3/24 Installing : novnc-0.5.1-2.el7.noarch 4/24 Installing : python2-os-traits-0.3.3-1.el7.noarch 5/24 Installing : python-voluptuous-0.8.9-1.el7.noarch 6/24 Installing : python2-mock-2.0.0-1.el7.noarch 7/24 Installing : python-paramiko-2.1.1-9.el7.noarch 8/24 Installing : python-kazoo-2.2.1-1.el7.noarch 9/24 Installing : python2-zake-0.2.2-2.el7.noarch 10/24 Installing : python2-microversion-parse-0.1.4-2.el7.noarch 11/24 Installing : 
python-redis-2.10.3-1.el7.noarch 12/24 Installing : python-tooz-1.58.0-1.el7.noarch 13/24 Installing : python-oslo-versionedobjects-lang-1.26.2-1.el7.noarch 14/24 Installing : python2-oslo-versionedobjects-1.26.2-1.el7.noarch 15/24 Installing : python2-os-vif-1.7.0-1.el7.noarch 16/24 Installing : 1:python-nova-16.1.6-1.el7.noarch 17/24 Installing : 1:openstack-nova-common-16.1.6-1.el7.noarch 18/24 Installing : 1:openstack-nova-conductor-16.1.6-1.el7.noarch 19/24 Installing : 1:openstack-nova-console-16.1.6-1.el7.noarch 20/24 Installing : 1:openstack-nova-scheduler-16.1.6-1.el7.noarch 21/24 Installing : 1:openstack-nova-api-16.1.6-1.el7.noarch 22/24 Installing : 1:openstack-nova-placement-api-16.1.6-1.el7.noarch 23/24 Installing : 1:openstack-nova-novncproxy-16.1.6-1.el7.noarch 24/24 Verifying : 1:openstack-nova-conductor-16.1.6-1.el7.noarch 1/24 Verifying : python2-zake-0.2.2-2.el7.noarch 2/24 Verifying : python2-oslo-reports-1.22.1-1.el7.noarch 3/24 Verifying : 1:openstack-nova-console-16.1.6-1.el7.noarch 4/24 Verifying : 1:openstack-nova-scheduler-16.1.6-1.el7.noarch 5/24 Verifying : 1:openstack-nova-common-16.1.6-1.el7.noarch 6/24 Verifying : python-oslo-versionedobjects-lang-1.26.2-1.el7.noarch 7/24 Verifying : 1:python-nova-16.1.6-1.el7.noarch 8/24 Verifying : python-redis-2.10.3-1.el7.noarch 9/24 Verifying : python2-microversion-parse-0.1.4-2.el7.noarch 10/24 Verifying : python2-oslo-versionedobjects-1.26.2-1.el7.noarch 11/24 Verifying : python-kazoo-2.2.1-1.el7.noarch 12/24 Verifying : python-paramiko-2.1.1-9.el7.noarch 13/24 Verifying : python2-mock-2.0.0-1.el7.noarch 14/24 Verifying : python-tooz-1.58.0-1.el7.noarch 15/24 Verifying : python-voluptuous-0.8.9-1.el7.noarch 16/24 Verifying : novnc-0.5.1-2.el7.noarch 17/24 Verifying : 1:openstack-nova-api-16.1.6-1.el7.noarch 18/24 Verifying : python2-psutil-5.2.2-2.el7.x86_64 19/24 Verifying : 1:openstack-nova-placement-api-16.1.6-1.el7.noarch 20/24 Verifying : python2-os-vif-1.7.0-1.el7.noarch 21/24 Verifying 
: python2-os-traits-0.3.3-1.el7.noarch 22/24 Verifying : python-websockify-0.8.0-1.el7.noarch 23/24 Verifying : 1:openstack-nova-novncproxy-16.1.6-1.el7.noarch 24/24 Installed: openstack-nova-api.noarch 1:16.1.6-1.el7 openstack-nova-conductor.noarch 1:16.1.6-1.el7 openstack-nova-console.noarch 1:16.1.6-1.el7 openstack-nova-novncproxy.noarch 1:16.1.6-1.el7 openstack-nova-placement-api.noarch 1:16.1.6-1.el7 openstack-nova-scheduler.noarch 1:16.1.6-1.el7 Dependency Installed: novnc.noarch 0:0.5.1-2.el7 openstack-nova-common.noarch 1:16.1.6-1.el7 python-kazoo.noarch 0:2.2.1-1.el7 python-nova.noarch 1:16.1.6-1.el7 python-oslo-versionedobjects-lang.noarch 0:1.26.2-1.el7 python-paramiko.noarch 0:2.1.1-9.el7 python-redis.noarch 0:2.10.3-1.el7 python-tooz.noarch 0:1.58.0-1.el7 python-voluptuous.noarch 0:0.8.9-1.el7 python-websockify.noarch 0:0.8.0-1.el7 python2-microversion-parse.noarch 0:0.1.4-2.el7 python2-mock.noarch 0:2.0.0-1.el7 python2-os-traits.noarch 0:0.3.3-1.el7 python2-os-vif.noarch 0:1.7.0-1.el7 python2-oslo-reports.noarch 0:1.22.1-1.el7 python2-oslo-versionedobjects.noarch 0:1.26.2-1.el7 python2-psutil.x86_64 0:5.2.2-2.el7 python2-zake.noarch 0:0.2.2-2.el7 Complete! 
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/nova/nova.conf
[root@controller ~]# vim /etc/nova/nova.conf
[root@controller ~]# cat /etc/nova/nova.conf
[DEFAULT]
# Enable only the compute and metadata APIs
enabled_apis = osapi_compute,metadata
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Management IP of the controller node
my_ip = 178.120.2.10
# Enable support for the Networking service
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
# Identity service authentication strategy
auth_strategy = keystone
[api_database]
# Database access
connection = mysql+pymysql://nova:000000@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
# Database access
connection = mysql+pymysql://nova:000000@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
# Location of the Image service API
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
# Lock path
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
# Placement API access
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 000000
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
# Point the VNC proxy at the controller node's management interface IP
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
[root@controller ~]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
[root@controller ~]# cat /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778
<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>
Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
# Grant access to the Placement API
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
21614893-248b-41df-9668-73d056ddda1e
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.') result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`.
This is deprecated and will be disallowed in a future release.') result = self._query(query)
[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| Name  | UUID                                 | Transport URL                      | Database Connection                             |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                             | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | 21614893-248b-41df-9668-73d056ddda1e | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova       |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
{/collapse-item}
{/collapse}
Finalize the installation
Start the Compute services and enable them to start at boot:
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
{collapse}
{collapse-item label="查看执行过程"}
Finalize the installation
[root@controller ~]# systemctl enable openstack-nova-api.service \
> openstack-nova-consoleauth.service openstack-nova-scheduler.service \
> openstack-nova-conductor.service openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@controller ~]# systemctl start openstack-nova-api.service \
> openstack-nova-consoleauth.service openstack-nova-scheduler.service \
> openstack-nova-conductor.service openstack-nova-novncproxy.service
{/collapse-item}
{/collapse}
Install and configure the compute node
Install and configure the components
1. Install the packages
yum install -y openstack-nova-compute
> Tip: if the installation fails with a dependency error such as
> Error: Package: 1:openstack-nova-compute-16.1.6-1.el7.noarch (OpenStack-Pike-tuna)
> install the CentOS virt/qemu-ev release packages first, then retry:
rpm -ivh http://mirrors.163.com/centos/7/extras/x86_64/Packages/centos-release-virt-common-1-1.el7.centos.noarch.rpm --replacepkgs
rpm -ivh http://mirrors.163.com/centos/7/extras/x86_64/Packages/centos-release-qemu-ev-1.0-4.el7.centos.noarch.rpm --replacepkgs
2. Configure nova.conf
sed -i.bak '/^#/d;/^$/d' /etc/nova/nova.conf
vim /etc/nova/nova.conf
[DEFAULT]
# Enable only the compute and metadata APIs
enabled_apis = osapi_compute,metadata
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Management-network IP of the compute node
my_ip = 178.120.2.20
# Enable support for the Networking service
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
# Identity service authentication strategy
auth_strategy = keystone
[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000
[vnc]
# Enable and configure remote console access
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://178.120.2.10:6080/vnc_auto.html
[glance]
# Location of the Image service API
api_servers = http://controller:9292
[oslo_concurrency]
# Lock path
lock_path = /var/lib/nova/tmp
[placement]
# Placement API access
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 000000
Finalize the installation
1. Check whether the compute node supports hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
If the command returns 0, the node does not support hardware acceleration and libvirt must use QEMU instead of the default KVM:
vim /etc/nova/nova.conf
[libvirt]
# Virtualization driver (defaults to kvm)
virt_type = qemu
2. Start the Compute service together with libvirt and enable them to start at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
{collapse}
{collapse-item label="查看执行过程"}
Install and configure the components
[root@compute nova]# vim nova.conf
[root@compute nova]# cat nova.conf
[DEFAULT]
# Enable only the compute and metadata APIs
enabled_apis = osapi_compute,metadata
# RabbitMQ message queue access
transport_url = rabbit://openstack:000000@controller
# Management-network IP of the compute node
my_ip = 178.120.2.20
# Enable support for the Networking service
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
# Identity service authentication strategy
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
# Location of the Image service API
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
# Lock path
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
# Placement API access
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 000000
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
# Enable and configure remote console access
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://178.120.2.10:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[root@compute nova]# egrep -c '(vmx|svm)' /proc/cpuinfo
8
[root@compute nova]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@compute nova]# systemctl start libvirtd.service openstack-nova-compute.service
{/collapse-item}
{/collapse}
Add the compute node
Run these steps on the controller node.
1. Confirm that the compute node is present in the database
. admin-openrc
openstack compute service list --service nova-compute
2. Register the compute node
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Automatic registration (optional): set the discovery interval in /etc/nova/nova.conf
[scheduler]
# Interval, in seconds, for automatically registering new hosts
discover_hosts_in_cells_interval = 300
{collapse}
{collapse-item label="查看执行过程"}
Add the compute node
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary       | Host    | Zone | Status  | State | Updated At                 |
+----+--------------+---------+------+---------+-------+----------------------------+
| 7  | nova-compute | compute | nova | enabled | up    | 2022-07-13T07:02:24.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 21614893-248b-41df-9668-73d056ddda1e
Checking host mapping for compute host 'compute': 6e85d3a5-24a5-417b-8735-5edb7859ad03
Creating host mapping for compute host 'compute': 6e85d3a5-24a5-417b-8735-5edb7859ad03
Found 1 unmapped computes in cell: 21614893-248b-41df-9668-73d056ddda1e
{/collapse-item}
{/collapse}
Verification
1. Source the admin credentials
. admin-openrc
2. List the Compute service components
openstack compute service list
3. List the API endpoints registered in Keystone
openstack catalog list
4. Check that the Placement API and cells are working
nova-status upgrade check
{collapse}
{collapse-item label="查看执行过程"}
Verification
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | 2022-07-13T07:10:56.000000 |
| 2  | nova-conductor   | controller | internal | enabled | up    | 2022-07-13T07:10:55.000000 |
| 6  | nova-scheduler   | controller | internal | enabled | up    | 2022-07-13T07:10:56.000000 |
| 7  | nova-compute     | compute    | nova     | enabled | up    | 2022-07-13T07:10:55.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| keystone  | identity  | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/    |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+
[root@controller ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
{/collapse-item}
{/collapse}
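When scripting the verification steps above, it helps to turn the ASCII tables printed by `openstack compute service list` into structured data and assert that every service is enabled and up. A minimal sketch in plain Python; the `parse_table` helper is illustrative (a real deployment would more likely use `openstack ... -f json` or the SDK), and the sample text is an abridged copy of the output shown above:

```python
# Parse the ASCII table printed by `openstack compute service list`
# and confirm that every Nova service is enabled and up.
SAMPLE = """\
+----+------------------+------------+----------+---------+-------+
| ID | Binary           | Host       | Zone     | Status  | State |
+----+------------------+------------+----------+---------+-------+
| 1  | nova-consoleauth | controller | internal | enabled | up    |
| 7  | nova-compute     | compute    | nova     | enabled | up    |
+----+------------------+------------+----------+---------+-------+
"""

def parse_table(text):
    """Turn an OpenStack CLI ASCII table into a list of row dicts."""
    header, rows = None, []
    for line in text.splitlines():
        if not line.startswith("|"):
            continue  # skip the +----+ border lines
        cells = [c.strip() for c in line.strip("|").split("|")]
        if header is None:
            header = cells  # first | line is the column header
        else:
            rows.append(dict(zip(header, cells)))
    return rows

services = parse_table(SAMPLE)
# Fail loudly if any service is disabled or down.
assert all(s["Status"] == "enabled" and s["State"] == "up" for s in services)
print([s["Binary"] for s in services])  # → ['nova-consoleauth', 'nova-compute']
```

The same parser works on the `openstack catalog list` and `nova-status upgrade check` tables, since all OpenStack CLI tools share this table format.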
2022-07-13
2022-07-13
OpenStack-Pike Setup: Glance (Part 3)
Glance overview
The OpenStack Image service includes the following components:
glance-api
  Accepts Image API calls for image discovery, retrieval, and storage.
glance-registry
  Stores, processes, and retrieves metadata about images. Metadata includes items such as size and type.
Database
  Stores image metadata; you can choose your database depending on your preference. Most deployments use MySQL or SQLite.
Storage repository for image files
  Various repository types are supported, including normal file systems (or any filesystem mounted on the glance-api controller node), Object Storage, RADOS block devices, VMware datastore, and HTTP. Note that some repositories will only support read-only usage.
Metadata definition service
  A common API for vendors, admins, services, and users to meaningfully define their own custom metadata. This metadata can be used on different types of resources like images, artifacts, volumes, flavors, and aggregates. A definition includes the new property's key, description, constraints, and the resource types it can be associated with.
Prerequisites
Create the database and grant privileges
1. Log in to the database server as root
mysql -u root -p000000
2. Create the glance database
CREATE DATABASE glance;
3. Grant the glance user full privileges on the glance database
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY '000000';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY '000000';
{collapse}
{collapse-item label="查看执行过程"}
Prerequisites
[root@controller ~]# mysql -u root -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 27
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    -> IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
{/collapse-item}
{/collapse}
Create the service credentials and API endpoints
1. Source the admin credentials
. admin-openrc
2. Create the service credentials
Create the glance user:
openstack user create --domain default --password 000000 glance
Add the admin role to the glance user in the service project:
openstack role add --project service --user glance admin
Create the glance service entity:
openstack service create --name glance \
  --description "OpenStack Image" image
3. Create the Image service API endpoints
openstack endpoint create --region RegionOne \
  image public http://controller:9292
openstack endpoint create --region RegionOne \
  image internal http://controller:9292
openstack endpoint create --region RegionOne \
  image admin http://controller:9292
{collapse}
{collapse-item label="查看执行过程"}
Create the service credentials and API endpoints
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password 000000 glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | f66e07e3922147f99dd60b01aa68d1c0 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@controller ~]# openstack role add --project service --user glance admin
[root@controller ~]# openstack service create --name glance \
> --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 1109e2bc82474c078171ed3640272493 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8eca34e46a144eaeaf790b601b9f8c88 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1109e2bc82474c078171ed3640272493 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ce24756b281e406ea069f2c656485001 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1109e2bc82474c078171ed3640272493 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
> image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 43ac650380b9456ea268edaac326908b |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 1109e2bc82474c078171ed3640272493 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
{/collapse-item}
{/collapse}
Install and configure the components
1. Install the packages
yum install -y openstack-glance
2. Configure glance-api.conf
# sed -i.bak '/^#/d;/^$/d' /etc/glance/glance-api.conf
# vim /etc/glance/glance-api.conf
[database]
# Database access
connection = mysql+pymysql://glance:000000@controller/glance
[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 000000
[paste_deploy]
# Use the Identity service for authentication
flavor = keystone
[glance_store]
# Local file system store and the location of image files
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
3. Configure glance-registry.conf
# sed -i.bak '/^#/d;/^$/d' /etc/glance/glance-registry.conf
# vim /etc/glance/glance-registry.conf
[database]
# Database access
connection = mysql+pymysql://glance:000000@controller/glance
[keystone_authtoken]
# Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 000000
[paste_deploy]
# Use the Identity service for authentication
flavor = keystone
4. Sync the glance database
su -s /bin/sh -c "glance-manage db_sync" glance
{collapse}
{collapse-item label="查看执行过程"}
Install and configure the components
[root@controller ~]# yum install -y
openstack-glance Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile Resolving Dependencies --> Running transaction check ---> Package openstack-glance.noarch 1:15.0.1-1.el7 will be installed --> Processing Dependency: python-glance = 1:15.0.1-1.el7 for package: 1:openstack-glance-15.0.1-1.el7.noarch --> Running transaction check ---> Package python-glance.noarch 1:15.0.1-1.el7 will be installed --> Processing Dependency: python-wsme >= 0.8 for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-taskflow >= 2.7.0 for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-swiftclient >= 2.2.0 for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-oslo-vmware >= 0.11.1 for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-os-brick >= 1.8.0 for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-glance-store >= 0.21.0 for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-retrying for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-httplib2 for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-cursive for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: python-boto for package: 1:python-glance-15.0.1-1.el7.noarch --> Processing Dependency: pysendfile for package: 1:python-glance-15.0.1-1.el7.noarch --> Running transaction check ---> Package pysendfile.x86_64 0:2.0.0-5.el7 will be installed ---> Package python-boto.noarch 0:2.34.0-4.el7 will be installed --> Processing Dependency: python-rsa for package: python-boto-2.34.0-4.el7.noarch ---> Package python-httplib2.noarch 0:0.9.2-1.el7 will be installed ---> Package python-retrying.noarch 0:1.2.3-4.el7 will be installed ---> Package python2-cursive.noarch 0:0.1.2-1.el7 will be installed --> Processing Dependency: python-lxml >= 2.3 for package: 
python2-cursive-0.1.2-1.el7.noarch --> Processing Dependency: python-castellan >= 0.4.0 for package: python2-cursive-0.1.2-1.el7.noarch ---> Package python2-glance-store.noarch 0:0.22.0-1.el7 will be installed --> Processing Dependency: python-oslo-privsep >= 1.9.0 for package: python2-glance-store-0.22.0-1.el7.noarch --> Processing Dependency: python-oslo-rootwrap for package: python2-glance-store-0.22.0-1.el7.noarch ---> Package python2-os-brick.noarch 0:1.15.6-1.el7 will be installed --> Processing Dependency: python-os-win >= 2.0.0 for package: python2-os-brick-1.15.6-1.el7.noarch ---> Package python2-oslo-vmware.noarch 0:2.23.1-1.el7 will be installed --> Processing Dependency: python-oslo-vmware-lang = 2.23.1-1.el7 for package: python2-oslo-vmware-2.23.1-1.el7.noarch --> Processing Dependency: python-suds >= 0.6 for package: python2-oslo-vmware-2.23.1-1.el7.noarch ---> Package python2-swiftclient.noarch 0:3.4.0-1.el7 will be installed ---> Package python2-taskflow.noarch 0:2.14.1-1.el7 will be installed --> Processing Dependency: python-networkx >= 1.10 for package: python2-taskflow-2.14.1-1.el7.noarch --> Processing Dependency: python-automaton >= 0.5.0 for package: python2-taskflow-2.14.1-1.el7.noarch --> Processing Dependency: python-networkx-core for package: python2-taskflow-2.14.1-1.el7.noarch ---> Package python2-wsme.noarch 0:0.9.2-1.el7 will be installed --> Processing Dependency: python-simplegeneric for package: python2-wsme-0.9.2-1.el7.noarch --> Running transaction check ---> Package python-lxml.x86_64 0:3.2.1-4.el7 will be installed --> Processing Dependency: libxslt.so.1(LIBXML2_1.1.9)(64bit) for package: python-lxml-3.2.1-4.el7.x86_64 --> Processing Dependency: libxslt.so.1(LIBXML2_1.1.26)(64bit) for package: python-lxml-3.2.1-4.el7.x86_64 --> Processing Dependency: libxslt.so.1(LIBXML2_1.1.2)(64bit) for package: python-lxml-3.2.1-4.el7.x86_64 --> Processing Dependency: libxslt.so.1(LIBXML2_1.0.24)(64bit) for package: 
python-lxml-3.2.1-4.el7.x86_64 --> Processing Dependency: libxslt.so.1(LIBXML2_1.0.22)(64bit) for package: python-lxml-3.2.1-4.el7.x86_64 --> Processing Dependency: libxslt.so.1(LIBXML2_1.0.18)(64bit) for package: python-lxml-3.2.1-4.el7.x86_64 --> Processing Dependency: libxslt.so.1(LIBXML2_1.0.11)(64bit) for package: python-lxml-3.2.1-4.el7.x86_64 --> Processing Dependency: libxslt.so.1()(64bit) for package: python-lxml-3.2.1-4.el7.x86_64 --> Processing Dependency: libexslt.so.0()(64bit) for package: python-lxml-3.2.1-4.el7.x86_64 ---> Package python-networkx.noarch 0:1.10-1.el7 will be installed ---> Package python-networkx-core.noarch 0:1.10-1.el7 will be installed --> Processing Dependency: scipy for package: python-networkx-core-1.10-1.el7.noarch ---> Package python-oslo-vmware-lang.noarch 0:2.23.1-1.el7 will be installed ---> Package python-simplegeneric.noarch 0:0.8-7.el7 will be installed ---> Package python2-automaton.noarch 0:1.12.1-1.el7 will be installed ---> Package python2-castellan.noarch 0:0.12.2-1.el7 will be installed ---> Package python2-os-win.noarch 0:2.2.0-1.el7 will be installed ---> Package python2-oslo-privsep.noarch 0:1.22.1-1.el7 will be installed --> Processing Dependency: python-oslo-privsep-lang = 1.22.1-1.el7 for package: python2-oslo-privsep-1.22.1-1.el7.noarch ---> Package python2-oslo-rootwrap.noarch 0:5.9.1-1.el7 will be installed ---> Package python2-rsa.noarch 0:3.3-2.el7 will be installed ---> Package python2-suds.noarch 0:0.7-0.4.94664ddd46a6.el7 will be installed --> Running transaction check ---> Package libxslt.x86_64 0:1.1.28-6.el7 will be installed ---> Package python-oslo-privsep-lang.noarch 0:1.22.1-1.el7 will be installed ---> Package python2-scipy.x86_64 0:0.18.0-3.el7 will be installed --> Processing Dependency: numpy for package: python2-scipy-0.18.0-3.el7.x86_64 --> Processing Dependency: libgfortran.so.3(GFORTRAN_1.4)(64bit) for package: python2-scipy-0.18.0-3.el7.x86_64 --> Processing Dependency: 
libgfortran.so.3(GFORTRAN_1.0)(64bit) for package: python2-scipy-0.18.0-3.el7.x86_64 --> Processing Dependency: libtatlas.so.3()(64bit) for package: python2-scipy-0.18.0-3.el7.x86_64 --> Processing Dependency: libquadmath.so.0()(64bit) for package: python2-scipy-0.18.0-3.el7.x86_64 --> Processing Dependency: libgfortran.so.3()(64bit) for package: python2-scipy-0.18.0-3.el7.x86_64 --> Running transaction check ---> Package atlas.x86_64 0:3.10.1-12.el7 will be installed ---> Package libgfortran.x86_64 0:4.8.5-44.el7 will be installed ---> Package libquadmath.x86_64 0:4.8.5-44.el7 will be installed ---> Package python2-numpy.x86_64 1:1.11.2-2.el7 will be installed --> Processing Dependency: python-nose for package: 1:python2-numpy-1.11.2-2.el7.x86_64 --> Running transaction check ---> Package python-nose.noarch 0:1.3.7-7.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ========================================================================================================================================================================= Package Arch Version Repository Size ========================================================================================================================================================================= Installing: openstack-glance noarch 1:15.0.1-1.el7 OpenStack-Pike-tuna 75 k Installing for dependencies: atlas x86_64 3.10.1-12.el7 base 4.5 M libgfortran x86_64 4.8.5-44.el7 base 301 k libquadmath x86_64 4.8.5-44.el7 base 190 k libxslt x86_64 1.1.28-6.el7 base 242 k pysendfile x86_64 2.0.0-5.el7 OpenStack-Pike-tuna 10 k python-boto noarch 2.34.0-4.el7 OpenStack-Pike-tuna 1.6 M python-glance noarch 1:15.0.1-1.el7 OpenStack-Pike-tuna 779 k python-httplib2 noarch 0.9.2-1.el7 OpenStack-Pike-tuna 115 k python-lxml x86_64 3.2.1-4.el7 base 758 k python-networkx noarch 1.10-1.el7 OpenStack-Pike-tuna 7.8 k python-networkx-core noarch 1.10-1.el7 OpenStack-Pike-tuna 1.6 M python-nose noarch 1.3.7-7.el7 
OpenStack-Pike-tuna 276 k python-oslo-privsep-lang noarch 1.22.1-1.el7 OpenStack-Pike-tuna 8.1 k python-oslo-vmware-lang noarch 2.23.1-1.el7 OpenStack-Pike-tuna 9.3 k python-retrying noarch 1.2.3-4.el7 OpenStack-Pike-tuna 16 k python-simplegeneric noarch 0.8-7.el7 OpenStack-Pike-tuna 12 k python2-automaton noarch 1.12.1-1.el7 OpenStack-Pike-tuna 37 k python2-castellan noarch 0.12.2-1.el7 OpenStack-Pike-tuna 94 k python2-cursive noarch 0.1.2-1.el7 OpenStack-Pike-tuna 26 k python2-glance-store noarch 0.22.0-1.el7 OpenStack-Pike-tuna 215 k python2-numpy x86_64 1:1.11.2-2.el7 OpenStack-Pike-tuna 3.2 M python2-os-brick noarch 1.15.6-1.el7 OpenStack-Pike-tuna 333 k python2-os-win noarch 2.2.0-1.el7 OpenStack-Pike-tuna 396 k python2-oslo-privsep noarch 1.22.1-1.el7 OpenStack-Pike-tuna 30 k python2-oslo-rootwrap noarch 5.9.1-1.el7 OpenStack-Pike-tuna 38 k python2-oslo-vmware noarch 2.23.1-1.el7 OpenStack-Pike-tuna 188 k python2-rsa noarch 3.3-2.el7 OpenStack-Pike-tuna 63 k python2-scipy x86_64 0.18.0-3.el7 OpenStack-Pike-tuna 12 M python2-suds noarch 0.7-0.4.94664ddd46a6.el7 OpenStack-Pike-tuna 234 k python2-swiftclient noarch 3.4.0-1.el7 OpenStack-Pike-tuna 156 k python2-taskflow noarch 2.14.1-1.el7 OpenStack-Pike-tuna 678 k python2-wsme noarch 0.9.2-1.el7 OpenStack-Pike-tuna 193 k Transaction Summary ========================================================================================================================================================================= Install 1 Package (+32 Dependent packages) Total download size: 28 M Installed size: 121 M Downloading packages: (1/33): atlas-3.10.1-12.el7.x86_64.rpm | 4.5 MB 00:00:03 (2/33): libquadmath-4.8.5-44.el7.x86_64.rpm | 190 kB 00:00:00 (3/33): libxslt-1.1.28-6.el7.x86_64.rpm | 242 kB 00:00:00 (4/33): openstack-glance-15.0.1-1.el7.noarch.rpm | 75 kB 00:00:00 (5/33): python-boto-2.34.0-4.el7.noarch.rpm | 1.6 MB 00:00:01 (6/33): libgfortran-4.8.5-44.el7.x86_64.rpm | 301 kB 00:00:06 (7/33): 
python-glance-15.0.1-1.el7.noarch.rpm | 779 kB 00:00:01 (8/33): python-httplib2-0.9.2-1.el7.noarch.rpm | 115 kB 00:00:00 (9/33): python-networkx-1.10-1.el7.noarch.rpm | 7.8 kB 00:00:00 (10/33): python-networkx-core-1.10-1.el7.noarch.rpm | 1.6 MB 00:00:01 (11/33): python-nose-1.3.7-7.el7.noarch.rpm | 276 kB 00:00:00 (12/33): python-oslo-privsep-lang-1.22.1-1.el7.noarch.rpm | 8.1 kB 00:00:00 (13/33): python-oslo-vmware-lang-2.23.1-1.el7.noarch.rpm | 9.3 kB 00:00:00 (14/33): python-retrying-1.2.3-4.el7.noarch.rpm | 16 kB 00:00:00 (15/33): python-simplegeneric-0.8-7.el7.noarch.rpm | 12 kB 00:00:00 (16/33): pysendfile-2.0.0-5.el7.x86_64.rpm | 10 kB 00:00:06 (17/33): python2-automaton-1.12.1-1.el7.noarch.rpm | 37 kB 00:00:00 (18/33): python2-castellan-0.12.2-1.el7.noarch.rpm | 94 kB 00:00:00 (19/33): python2-cursive-0.1.2-1.el7.noarch.rpm | 26 kB 00:00:00 (20/33): python2-glance-store-0.22.0-1.el7.noarch.rpm | 215 kB 00:00:00 (21/33): python2-os-brick-1.15.6-1.el7.noarch.rpm | 333 kB 00:00:00 (22/33): python2-os-win-2.2.0-1.el7.noarch.rpm | 396 kB 00:00:01 (23/33): python2-oslo-privsep-1.22.1-1.el7.noarch.rpm | 30 kB 00:00:00 (24/33): python2-oslo-rootwrap-5.9.1-1.el7.noarch.rpm | 38 kB 00:00:00 (25/33): python2-oslo-vmware-2.23.1-1.el7.noarch.rpm | 188 kB 00:00:00 (26/33): python2-rsa-3.3-2.el7.noarch.rpm | 63 kB 00:00:00 (27/33): python-lxml-3.2.1-4.el7.x86_64.rpm | 758 kB 00:00:07 (28/33): python2-numpy-1.11.2-2.el7.x86_64.rpm | 3.2 MB 00:00:11 (29/33): python2-suds-0.7-0.4.94664ddd46a6.el7.noarch.rpm | 234 kB 00:00:00 (30/33): python2-swiftclient-3.4.0-1.el7.noarch.rpm | 156 kB 00:00:00 (31/33): python2-taskflow-2.14.1-1.el7.noarch.rpm | 678 kB 00:00:01 (32/33): python2-wsme-0.9.2-1.el7.noarch.rpm | 193 kB 00:00:00 (33/33): python2-scipy-0.18.0-3.el7.x86_64.rpm | 12 MB 00:00:16 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 947 kB/s | 28 MB 
00:00:30 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : libquadmath-4.8.5-44.el7.x86_64 1/33 Installing : libgfortran-4.8.5-44.el7.x86_64 2/33 Installing : atlas-3.10.1-12.el7.x86_64 3/33 Installing : python-retrying-1.2.3-4.el7.noarch 4/33 Installing : python-httplib2-0.9.2-1.el7.noarch 5/33 Installing : libxslt-1.1.28-6.el7.x86_64 6/33 Installing : python-lxml-3.2.1-4.el7.x86_64 7/33 Installing : python2-suds-0.7-0.4.94664ddd46a6.el7.noarch 8/33 Installing : python2-os-win-2.2.0-1.el7.noarch 9/33 Installing : python-oslo-privsep-lang-1.22.1-1.el7.noarch 10/33 Installing : python2-oslo-privsep-1.22.1-1.el7.noarch 11/33 Installing : python2-os-brick-1.15.6-1.el7.noarch 12/33 Installing : python-oslo-vmware-lang-2.23.1-1.el7.noarch 13/33 Installing : python2-oslo-vmware-2.23.1-1.el7.noarch 14/33 Installing : python2-oslo-rootwrap-5.9.1-1.el7.noarch 15/33 Installing : python2-glance-store-0.22.0-1.el7.noarch 16/33 Installing : pysendfile-2.0.0-5.el7.x86_64 17/33 Installing : python2-castellan-0.12.2-1.el7.noarch 18/33 Installing : python2-cursive-0.1.2-1.el7.noarch 19/33 Installing : python-nose-1.3.7-7.el7.noarch 20/33 Installing : 1:python2-numpy-1.11.2-2.el7.x86_64 21/33 Installing : python2-scipy-0.18.0-3.el7.x86_64 22/33 Installing : python-networkx-core-1.10-1.el7.noarch 23/33 Installing : python-networkx-1.10-1.el7.noarch 24/33 Installing : python2-rsa-3.3-2.el7.noarch 25/33 Installing : python-boto-2.34.0-4.el7.noarch 26/33 Installing : python2-automaton-1.12.1-1.el7.noarch 27/33 Installing : python2-taskflow-2.14.1-1.el7.noarch 28/33 Installing : python-simplegeneric-0.8-7.el7.noarch 29/33 Installing : python2-wsme-0.9.2-1.el7.noarch 30/33 Installing : python2-swiftclient-3.4.0-1.el7.noarch 31/33 Installing : 1:python-glance-15.0.1-1.el7.noarch 32/33 Installing : 1:openstack-glance-15.0.1-1.el7.noarch 33/33 Verifying : python2-swiftclient-3.4.0-1.el7.noarch 1/33 Verifying : 
python-simplegeneric-0.8-7.el7.noarch 2/33 Verifying : python2-wsme-0.9.2-1.el7.noarch 3/33 Verifying : python-lxml-3.2.1-4.el7.x86_64 4/33 Verifying : python2-os-brick-1.15.6-1.el7.noarch 5/33 Verifying : python2-scipy-0.18.0-3.el7.x86_64 6/33 Verifying : atlas-3.10.1-12.el7.x86_64 7/33 Verifying : python-networkx-core-1.10-1.el7.noarch 8/33 Verifying : python2-automaton-1.12.1-1.el7.noarch 9/33 Verifying : python2-rsa-3.3-2.el7.noarch 10/33 Verifying : python2-glance-store-0.22.0-1.el7.noarch 11/33 Verifying : python-retrying-1.2.3-4.el7.noarch 12/33 Verifying : libquadmath-4.8.5-44.el7.x86_64 13/33 Verifying : python-nose-1.3.7-7.el7.noarch 14/33 Verifying : python2-castellan-0.12.2-1.el7.noarch 15/33 Verifying : python2-taskflow-2.14.1-1.el7.noarch 16/33 Verifying : 1:python-glance-15.0.1-1.el7.noarch 17/33 Verifying : pysendfile-2.0.0-5.el7.x86_64 18/33 Verifying : libgfortran-4.8.5-44.el7.x86_64 19/33 Verifying : python2-oslo-rootwrap-5.9.1-1.el7.noarch 20/33 Verifying : python-oslo-vmware-lang-2.23.1-1.el7.noarch 21/33 Verifying : python-networkx-1.10-1.el7.noarch 22/33 Verifying : python-oslo-privsep-lang-1.22.1-1.el7.noarch 23/33 Verifying : python2-os-win-2.2.0-1.el7.noarch 24/33 Verifying : 1:python2-numpy-1.11.2-2.el7.x86_64 25/33 Verifying : python2-cursive-0.1.2-1.el7.noarch 26/33 Verifying : python2-suds-0.7-0.4.94664ddd46a6.el7.noarch 27/33 Verifying : libxslt-1.1.28-6.el7.x86_64 28/33 Verifying : python-httplib2-0.9.2-1.el7.noarch 29/33 Verifying : python2-oslo-vmware-2.23.1-1.el7.noarch 30/33 Verifying : python2-oslo-privsep-1.22.1-1.el7.noarch 31/33 Verifying : 1:openstack-glance-15.0.1-1.el7.noarch 32/33 Verifying : python-boto-2.34.0-4.el7.noarch 33/33 Installed: openstack-glance.noarch 1:15.0.1-1.el7 Dependency Installed: atlas.x86_64 0:3.10.1-12.el7 libgfortran.x86_64 0:4.8.5-44.el7 libquadmath.x86_64 0:4.8.5-44.el7 libxslt.x86_64 0:1.1.28-6.el7 pysendfile.x86_64 0:2.0.0-5.el7 python-boto.noarch 0:2.34.0-4.el7 python-glance.noarch 
1:15.0.1-1.el7 python-httplib2.noarch 0:0.9.2-1.el7 python-lxml.x86_64 0:3.2.1-4.el7 python-networkx.noarch 0:1.10-1.el7 python-networkx-core.noarch 0:1.10-1.el7 python-nose.noarch 0:1.3.7-7.el7 python-oslo-privsep-lang.noarch 0:1.22.1-1.el7 python-oslo-vmware-lang.noarch 0:2.23.1-1.el7 python-retrying.noarch 0:1.2.3-4.el7 python-simplegeneric.noarch 0:0.8-7.el7 python2-automaton.noarch 0:1.12.1-1.el7 python2-castellan.noarch 0:0.12.2-1.el7 python2-cursive.noarch 0:0.1.2-1.el7 python2-glance-store.noarch 0:0.22.0-1.el7 python2-numpy.x86_64 1:1.11.2-2.el7 python2-os-brick.noarch 0:1.15.6-1.el7 python2-os-win.noarch 0:2.2.0-1.el7 python2-oslo-privsep.noarch 0:1.22.1-1.el7 python2-oslo-rootwrap.noarch 0:5.9.1-1.el7 python2-oslo-vmware.noarch 0:2.23.1-1.el7 python2-rsa.noarch 0:3.3-2.el7 python2-scipy.x86_64 0:0.18.0-3.el7 python2-suds.noarch 0:0.7-0.4.94664ddd46a6.el7 python2-swiftclient.noarch 0:3.4.0-1.el7 python2-taskflow.noarch 0:2.14.1-1.el7 python2-wsme.noarch 0:0.9.2-1.el7 Complete! 
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/glance/glance-api.conf
[root@controller ~]# vim /etc/glance/glance-api.conf
[root@controller ~]# cat /etc/glance/glance-api.conf
[DEFAULT]
[cors]
[database]
# Configure database access
connection = mysql+pymysql://glance:000000@controller/glance
[glance_store]
# Configure the local file system store and the image file location
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
# Configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 000000
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
# Configure Identity service access
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
[root@controller ~]# sed -i.bak '/^#/d;/^$/d' /etc/glance/glance-registry.conf
[root@controller ~]# vim /etc/glance/glance-registry.conf
[root@controller ~]# cat /etc/glance/glance-registry.conf
[DEFAULT]
[database]
# Configure database access
connection = mysql+pymysql://glance:000000@controller/glance
[keystone_authtoken]
# Configure Identity service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 000000
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[paste_deploy]
# Configure Identity service access
flavor = keystone
[profiler]
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1328: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
  expire_on_commit=expire_on_commit, _conf=conf)
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> liberty, liberty initial
INFO  [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of 'images' table
INFO  [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata01, add visibility to and remove is_public from images
INFO  [alembic.runtime.migration] Running upgrade ocata01 -> pike01, drop glare artifacts tables
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: pike01, current revision(s): pike01
{/collapse-item}
{/collapse}
Complete the installation
Start the Image service and configure it to start at system boot:
systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
{collapse}
{collapse-item label="Execution log"}
Complete the installation
[root@controller ~]# systemctl enable openstack-glance-api.service \
> openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@controller ~]# systemctl start openstack-glance-api.service \
> openstack-glance-registry.service
{/collapse-item}
{/collapse}
Verification
1、Obtain the admin credentials
. admin-openrc
2、Download the test image
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
(In the log below the image was instead transferred to the controller with rz, and the 0.4.0 build was used.)
3、Upload the test image
openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
4、List the images
openstack image list
{collapse}
{collapse-item label="Execution log"}
Verification
[root@controller ~]# . admin-openrc
[root@controller ~]# rz
rz waiting to receive.
开始 zmodem 传输。按 Ctrl+C 取消。
正在传输 cirros-0.4.0-x86_64-disk.img...
  100%   12418 KB   2483 KB/s 00:00:05       0 错误
[root@controller ~]# openstack image create "cirros" \
> --file cirros-0.4.0-x86_64-disk.img \
> --disk-format qcow2 --container-format bare \
> --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                     |
| container_format | bare                                                 |
| created_at       | 2022-07-13T04:49:09Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/db8bad86-e1cb-47b4-8a8e-93f045d5e000/file |
| id               | db8bad86-e1cb-47b4-8a8e-93f045d5e000                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | cecafb35ed3649819247ea27a77871aa                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 12716032                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2022-07-13T04:49:09Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| db8bad86-e1cb-47b4-8a8e-93f045d5e000 | cirros | active |
+--------------------------------------+--------+--------+
{/collapse-item}
{/collapse}
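A stray character in a section header (for example `EFAULT]` instead of `[DEFAULT]`) silently breaks the options under it, so it is worth grepping the edited files for the keys set above before starting the services. A minimal sketch; the `check_glance_conf` helper is made up for illustration and is not part of the deployment:

```shell
# Hypothetical helper: verify that the options configured above are
# actually present in a glance config file before starting the services.
check_glance_conf() {
  conf=$1; missing=0
  shift
  for key in "$@"; do
    # Options were written as "key = value" in the files above.
    grep -Eq "^${key}[[:space:]]*=" "$conf" || { echo "missing: $key"; missing=1; }
  done
  return $missing
}

# Usage on the controller, with the paths from the steps above:
# check_glance_conf /etc/glance/glance-api.conf connection default_store filesystem_store_datadir flavor
# check_glance_conf /etc/glance/glance-registry.conf connection flavor
```

A non-zero exit status names each missing option, which is usually faster than re-reading the whole file by eye.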
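After `systemctl start`, one can also confirm that glance-api is answering on port 9292: a healthy glance-api returns HTTP 300 (Multiple Choices, the API version list) for the bare root URL. A sketch, assuming `curl` is installed and `controller` resolves as configured earlier; the `glance_api_up` function name is hypothetical:

```shell
# Hypothetical smoke test: ask glance-api for its version document.
# A running glance-api answers "GET /" on port 9292 with HTTP 300.
glance_api_up() {
  host=${1:-controller}
  # curl prints the HTTP status code; "000" means no connection at all.
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://${host}:9292/" || true)
  [ "$code" = "300" ]
}

# glance_api_up && echo "glance-api is answering"
```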
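The `checksum` field in the `openstack image create` output above is the MD5 of the uploaded file, so the upload can be double-checked against the local copy. A sketch; the `image_checksum_matches` helper is invented for illustration:

```shell
# Hypothetical helper: compare a local image file's md5 with the
# "checksum" value printed by `openstack image create`.
image_checksum_matches() {
  file=$1; expected=$2
  # md5sum prints "<hash>  <filename>"; keep only the hash.
  actual=$(md5sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ]
}

# Usage with the values from the log above:
# image_checksum_matches cirros-0.4.0-x86_64-disk.img 443b7623e27ecf03dc9e01ee93f67afe
```

If the checksums differ, the image was corrupted in transfer and should be re-uploaded before booting instances from it.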
2022年07月13日