OpenStack Queens Deployment Manual

1.1 Test Server Hardware Configuration

  • Count: 3
  • Type: KVM virtual machines
  • vCPU: 8
  • RAM: 16GB
  • Disk: 40GB

The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:

  • Controller Node: 1 processor, 4 GB memory, and 5 GB storage
  • Compute Node: 1 processor, 2 GB memory, and 10 GB storage

1.2 Component Layout

  • Controller node *1
    • OpenStack components
      • keystone
      • glance
      • neutron
      • Dashboard
    • Other base components
      • MySQL
      • rabbitmq
      • NTP
  • Compute node *2
  • Block Storage

1.3 Passwords

For convenience, this deployment uses the following passwords:

Password name     Description
ADMIN_PASS        Password of user admin
CINDER_DBPASS     Database password for the Block Storage service
CINDER_PASS       Password of Block Storage service user cinder
DASH_DBPASS       Database password for the Dashboard
DEMO_PASS         Password of user demo
GLANCE_DBPASS     Database password for the Image service
GLANCE_PASS       Password of Image service user glance
KEYSTONE_DBPASS   Database password of the Identity service
METADATA_SECRET   Secret for the metadata proxy
NEUTRON_DBPASS    Database password for the Networking service
NEUTRON_PASS      Password of Networking service user neutron
NOVA_DBPASS       Database password for the Compute service
NOVA_PASS         Password of Compute service user nova
PLACEMENT_PASS    Password of the Placement service user placement
RABBIT_PASS       Password of RabbitMQ user openstack
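In a real deployment, replace these placeholders with random values; a minimal sketch using openssl (the variable name is illustrative):

```shell
# Generate a random 20-hex-character password (sketch; any strong generator works)
ADMIN_PASS=$(openssl rand -hex 10)
echo "$ADMIN_PASS"
```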

@ALL nodes:

  1. Configure host name resolution in /etc/hosts
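As a sketch, assuming the management addresses used later in this guide (172.17.1.9 for the controller, 172.17.1.2 for the first compute node; add one entry per additional node):

```shell
# Append name resolution entries on every node (addresses from this guide)
cat >> /etc/hosts <<'EOF'
172.17.1.9   controller
172.17.1.2   compute1
EOF
```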

https://docs.openstack.org/install-guide/environment-packages-rdo.html

Disable automatic system updates to prevent them from interfering with OpenStack;

@ALL nodes:

yum install yum-plugin-priorities -y
yum install centos-release-openstack-queens -y
# If the kernel is updated, reboot the system afterwards
yum upgrade -y
yum install python-openstackclient openstack-utils -y

4.1 MariaDB

https://docs.openstack.org/install-guide/environment-sql-database-rdo.html

@controller node:

yum install mariadb mariadb-server python2-PyMySQL -y
 
cat > /etc/my.cnf.d/openstack.cnf << EOF
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
 
systemctl enable mariadb
systemctl start mariadb
mysql_secure_installation
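After securing the installation, a quick check that the settings took effect (a sketch; the root password 123123 matches the one used later in this guide):

```shell
# Confirm the charset and connection-limit settings from openstack.cnf are active
mysql -u root -p123123 -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'max_connections';"
```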

4.2 RabbitMQ

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ.

https://docs.openstack.org/install-guide/environment-messaging-rdo.html

@controller node:

yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
# Create RabbitMQ user openstack with password RABBIT_PASS
rabbitmqctl add_user openstack RABBIT_PASS
# Grant the openstack user configure, write, and read permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
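To confirm the user and its permissions were created (a sketch; exact output varies by RabbitMQ version):

```shell
# List users and the permissions on the default vhost
rabbitmqctl list_users
rabbitmqctl list_permissions
```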

4.3 Memcached

The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node.

https://docs.openstack.org/install-guide/environment-memcached-rdo.html

@controller node:

yum install memcached python-memcached -y
 
cat > /etc/sysconfig/memcached<<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="4096"
CACHESIZE="256"
OPTIONS="-l 127.0.0.1,::1,controller"
EOF
 
systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service
netstat -antlp|grep 11211

4.4 Etcd

OpenStack services may use Etcd, a distributed reliable key-value store for distributed key locking, storing configuration, keeping track of service live-ness and other scenarios.

https://docs.openstack.org/install-guide/environment-etcd-rdo.html

@controller node:

yum install etcd -y
vim /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.17.1.9:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.17.1.9:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.17.1.9:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.17.1.9:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.17.1.9:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
systemctl enable etcd
systemctl start etcd
netstat -antlp | egrep '2379|2380'
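Beyond the port check, a quick functional test (a sketch using the v2 etcdctl shipped with the CentOS etcd package; adjust the endpoint to your controller address):

```shell
# Verify cluster health, then do a round-trip write/read
etcdctl --endpoints http://172.17.1.9:2379 cluster-health
etcdctl --endpoints http://172.17.1.9:2379 set /test ok
etcdctl --endpoints http://172.17.1.9:2379 get /test
```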

This section describes how to install and configure the OpenStack Identity service, code-named keystone, on the controller node. For scalability purposes, this configuration deploys Fernet tokens and the Apache HTTP server to handle requests.

5.1 Configure the database

mysql -u root -p123123
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
flush privileges;

5.2 Install

yum install openstack-keystone httpd mod_wsgi -y

5.3 Configure

cp /etc/keystone/keystone.conf{,.ori}
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
openstack-config --set /etc/keystone/keystone.conf token expiration 10800
# Populate the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
# Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# Bootstrap the Identity service
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

5.4 Configure httpd

vim /etc/httpd/conf/httpd.conf

ServerName controller
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service

5.5 Create OpenStack client environment scripts

https://docs.openstack.org/keystone/queens/install/keystone-openrc-rdo.html

admin-openrc:

cat >admin-openrc<<'EOF'
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

demo-openrc:

cat >demo-openrc<<'EOF'
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
. admin-openrc

5.6 Create a domain, projects, users, and roles

The Identity service provides authentication for every OpenStack service, using a combination of domains, projects, users, and roles.

https://docs.openstack.org/keystone/queens/install/keystone-users-rdo.html

# Create an example domain
openstack domain create --description "An Example Domain" example
# Create the service project
openstack project create --domain default --description "Service Project" service
# Create the demo project
openstack project create --domain default --description "Demo Project" demo
# Create the demo user with password DEMO_PASS
openstack user create --domain default --password DEMO_PASS demo
# Create the user role
openstack role create user
# Add the user role to the demo project and user
openstack role add --project demo --user demo user
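A quick sanity check of what was just created (a sketch; run with admin credentials loaded):

```shell
# List the projects and users, and confirm demo's role assignment
openstack project list
openstack user list
openstack role assignment list --user demo --project demo --names
```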

5.7 Verify operation

https://docs.openstack.org/keystone/queens/install/keystone-verify-rdo.html

@controller node:

unset OS_AUTH_URL OS_PASSWORD
# Request an authentication token as the admin user
# password: ADMIN_PASS
openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
# Request an authentication token as the demo user
# password: DEMO_PASS
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue
# Source admin-openrc to authenticate as admin
. admin-openrc
# Request an authentication token
openstack token issue

The Image service (glance) lets users discover, register, and retrieve virtual machine images. It provides a REST API for querying image metadata and retrieving existing images. Images can be stored in a variety of locations, from simple filesystems to object storage systems.

For simplicity, this guide configures the Image service with the file backend: uploaded images are stored in a directory on the controller node hosting the service, /var/lib/glance/images/ by default.

The Glance Registry Service and its APIs have been DEPRECATED in the Queens release and are subject to removal at the beginning of the ‘S’ development cycle.

@controller node:

6.1 Create the database

mysql -u root -p123123
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
FLUSH privileges;

6.2 Create service credentials

. admin-openrc
# Password: GLANCE_PASS
openstack user create --domain default --password GLANCE_PASS glance
# Add the admin role to the glance user and service project
openstack role add --project service --user glance admin
# Create the glance service entity
openstack service create --name glance --description "OpenStack Image" image
# Create the Image service API endpoints
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

6.3 Install

yum install openstack-glance -y

6.4 Configure

Configure /etc/glance/glance-api.conf:

cp /etc/glance/glance-api.conf{,.ori}
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

Configure /etc/glance/glance-registry.conf:

cp /etc/glance/glance-registry.conf{,.ori}
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

6.5 Populate the database

su -s /bin/sh -c "glance-manage db_sync" glance

6.6 Start the services

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

6.7 Verify operation

Create an image:
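Following the official Queens guide, download the CirrOS test image and register it with the Image service (run with admin credentials loaded; the image URL and version are as in the official docs):

```shell
. admin-openrc
# Download the CirrOS test image
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
# Upload it as a public qcow2 image
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
# Confirm the image is listed with status "active"
openstack image list
```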

7.1 controller node

7.1.1 Create the databases

mysql -u root -p123123
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
FLUSH privileges;

7.1.2 Create nova service credentials

. admin-openrc
# Create the nova user with password NOVA_PASS
openstack user create --domain default --password NOVA_PASS nova
# Add the admin role to the nova user
openstack role add --project service --user nova admin
# Create the nova service entity
openstack service create --name nova --description "OpenStack Compute" compute
# Create the Compute API endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
# Create the placement user with password PLACEMENT_PASS
openstack user create --domain default --password PLACEMENT_PASS placement
# Add the admin role to the placement user
openstack role add --project service --user placement admin
# Create the Placement API service entity
openstack service create --name placement --description "Placement API" placement
# Create the Placement API endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

https://docs.openstack.org/nova/queens/install/controller-install-rdo.html

7.1.3 Install

@controller node:

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

7.1.4 Configure

  • Comment out or remove any other options in the [keystone_authtoken] section;

/etc/nova/nova.conf:

cp /etc/nova/nova.conf{,.ori}
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
# management interface IP address of the controller node
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.17.1.9
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS

Amend the httpd configuration; due to a packaging bug, access to /usr/bin must be granted for the Placement API:

cat >> /etc/httpd/conf.d/00-nova-placement-api.conf <<'EOF'
 
 
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
EOF
 
systemctl restart httpd

7.1.5 Populate the databases

Ignore any deprecation warnings:

# Ignore any deprecation messages in this output.
su -s /bin/sh -c "nova-manage api_db sync" nova
# Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# Create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# Populate the nova database
su -s /bin/sh -c "nova-manage db sync" nova
# verify
nova-manage cell_v2 list_cells

7.1.6 Start the services

# autostart
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# start
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

7.2 compute node

If KVM, libvirt, etc. were previously installed, the older versions in EPEL can cause startup failures; uninstall and reinstall them.

7.2.1 Install

yum install openstack-nova-compute -y

7.2.2 Configure

Configure the compute node; remember to comment out or remove any other options in the [keystone_authtoken] section.

cp /etc/nova/nova.conf{,.ori}
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.17.1.2
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS

Choose the virtualization type:

egrep -c '(vmx|svm)' /proc/cpuinfo
# If this prints nothing or 0, the CPU lacks hardware virtualization support; use qemu instead of kvm
openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
# Set the CPU mode; e.g. nested virtualization requires host-passthrough or host-model
openstack-config --set /etc/nova/nova.conf libvirt cpu_mode host-passthrough
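The CPU check above can be folded into the config step; a minimal sketch (the VIRT_TYPE variable is illustrative):

```shell
# Choose kvm when the CPU advertises VT-x/AMD-V, otherwise fall back to qemu
if [ "$(grep -Ec '(vmx|svm)' /proc/cpuinfo)" -gt 0 ]; then
    VIRT_TYPE=kvm
else
    VIRT_TYPE=qemu
fi
echo "virt_type=$VIRT_TYPE"
```

The result can then be passed to openstack-config in place of the hard-coded kvm.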

7.2.3 Start the services

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

7.3 Register the compute node on the controller

@controller node:

. admin-openrc
# The hypervisor list is empty at this point
openstack hypervisor list
openstack compute service list --service nova-compute
# Discover compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# List again; the new host now appears
openstack hypervisor list
# Enable periodic host discovery
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300

7.4 Verify operation

https://docs.openstack.org/nova/queens/install/verify.html

@controller node:

. admin-openrc
openstack compute service list
# List the API endpoints in the service catalog
openstack catalog list
# List images
openstack image list
# Check the cells and placement API are working successfully
nova-status upgrade check

https://docs.openstack.org/neutron/queens/install/install-rdo.html

8.1 controller node

8.1.1 Create the database

mysql -u root -p123123
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
FLUSH privileges;

8.1.2 Create service credentials

. admin-openrc
# Create the neutron user with password NEUTRON_PASS
openstack user create --domain default --password NEUTRON_PASS neutron
# Add the admin role to the neutron user
openstack role add --project service --user neutron admin
# Create the neutron service entity
openstack service create --name neutron --description "OpenStack Networking" network
# Create the Networking service API endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

8.1.3 Configure networking

Pick one of the networking options below, install it following the official guide, then return here to configure the metadata agent:

    • Provider networks: layer-2 networks bridged directly onto the external network; only admin or other privileged users can manage them;
    • Self-service networks: layer-3 networks with private per-project subnets for better isolation and flexibility; instances need floating IPs to be reachable from outside;

8.1.4 Configure the metadata agent

The metadata agent supplies configuration information to instances: at boot an instance queries http://169.254.169.254 for its metadata, which cloud-init then applies.

cp /etc/neutron/metadata_agent.ini{,.ori}
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
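Once networking is up, the metadata agent can be exercised from inside a launched instance (a sketch; the paths follow the standard metadata API):

```shell
# From inside an instance: fetch the OpenStack-format metadata document
curl http://169.254.169.254/openstack/latest/meta_data.json
# EC2-compatible view of the same data
curl http://169.254.169.254/latest/meta-data/
```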

8.1.5 Configure the Compute service to use the Networking service

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET

8.1.6 Finalize installation

# The Networking service initialization scripts expect this symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# Restart the Compute API service
systemctl restart openstack-nova-api.service
# Start the services
systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# For networking option 2, also enable and start the layer-3 service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

8.2 compute node

The compute node handles connectivity and security groups for instances.

8.2.1 Install the components

yum install -y openstack-neutron-linuxbridge ebtables ipset

8.2.2 Configure the common component

cp /etc/neutron/neutron.conf{,.ori}
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Select and configure the networking option matching your deployment:

8.2.3 Configure the Compute service to use the Networking service

cp /etc/nova/nova.conf{,.ori}
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS

8.2.4 Start the services

# Restart the Compute service
systemctl restart openstack-nova-compute.service
# Start the Linux bridge agent
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
# Check status
systemctl status neutron-linuxbridge-agent.service openstack-nova-compute.service

8.3 Verify operation

@controller node:

. admin-openrc
openstack extension list --network
# Note: with networking option 1 the output will lack the L3 agent entries
openstack network agent list

@controller node:

9.1 Install

https://docs.openstack.org/horizon/queens/install/install-rdo.html

yum install openstack-dashboard -y

9.2 Configure

Edit /etc/openstack-dashboard/local_settings; note that True/False must be capitalized, otherwise errors occur!

# Configure the dashboard to use OpenStack services on the controller node
OPENSTACK_HOST = "controller"
# Allow all hosts to access the dashboard
ALLOWED_HOSTS = ['*', ]
 
# Session storage (file backend here; the CACHES block below uses memcached)
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
SESSION_TIMEOUT = 10800
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
 
# Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
# Default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# Default role for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# Time zone
TIME_ZONE = "Asia/Shanghai"
 
####### For networking option 1 #######
OPENSTACK_NEUTRON_NETWORK = {                                                                                                                     
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': False,
    'enable_lb': False,
    'enable_firewall':False,
    'enable_vpn': False,
    ...
}
 
####### For networking option 2 #######
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': True,
...

vim /etc/httpd/conf.d/openstack-dashboard.conf

# Add:
WSGIApplicationGroup %{GLOBAL}

9.3 Start the services

systemctl restart httpd.service memcached.service

9.4 Access

http://controller/dashboard

  • Username: admin / demo
  • Password: ADMIN_PASS / DEMO_PASS

9.5 Further configuration

cinder_install

Launch an instance


  • virtualization/openstack/queens_deploy.txt
  • Last modified: 2019/04/16 18:31
  • (external edit)