This guide walks through building a cloud platform with OpenStack (Queens) on CentOS 7: repository setup, the supporting services (MariaDB, RabbitMQ, Memcached), and the core OpenStack components (Keystone, Glance, Nova, Neutron, the dashboard, and Cinder).
1.1. Configure the yum repository
yum list | grep openstack
centos-release-openstack-newton.noarch 1-2.el7 extras
centos-release-openstack-ocata.noarch 1-2.el7 extras
centos-release-openstack-pike.x86_64 1-1.el7 extras
centos-release-openstack-queens.x86_64 1-1.el7.centos extras
yum install centos-release-openstack-queens -y
This generates the OpenStack yum repository configuration under /etc/yum.repos.d/.
1.2. OpenStack client
yum install python-openstackclient -y
yum install openstack-selinux -y
2. Installation
2.1. Installing the MariaDB database
OpenStack stores its state in a database and supports most SQL databases: MariaDB, MySQL, or PostgreSQL. The database runs on the controller node.
Remove any previously installed MySQL packages:
rpm -qa | grep mysql
rpm -e --nodeps mysql-community-common-5.7.9-1.el7.x86_64
rpm -e --nodeps mysql-community-libs-5.7.9-1.el7.x86_64
rpm -e --nodeps mysql-community-client-5.7.9-1.el7.x86_64
rpm -e --nodeps mysql-community-server-5.7.9-1.el7.x86_64
Install MariaDB:
yum install mariadb mariadb-server python2-PyMySQL -y
Edit the configuration (/etc/my.cnf.d/mariadb-server.cnf):
[mysqld]
bind-address = 10.20.16.229
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
# pre-planned directories
datadir = /data/openstack/mysql/data
socket = /data/openstack/mysql/mysql.sock
log-error = /data/openstack/mysql/log/mariadb.log
pid-file = /data/openstack/mysql/mariadb.pid
Change ownership of the working directory:
chown mysql:mysql -R /data/openstack/mysql
Enable and start the service:
systemctl enable mariadb.service
systemctl start mariadb.service
Run the initial setup:
# account initialization
mysql_secure_installation
# remote access (needed later when other nodes connect)
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'ips';
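Every OpenStack service below repeats this same CREATE DATABASE / GRANT pattern against the MariaDB instance, always with the password ips. As a minimal sketch (the grant_sql helper is this guide's convenience, not an official tool), the statements can be generated once and piped into mysql:

```shell
#!/bin/sh
# Print the CREATE DATABASE / GRANT boilerplate for one service database.
# Usage: grant_sql <dbname> <password>
grant_sql() {
    db=$1
    pass=$2
    cat <<EOF
CREATE DATABASE ${db};
GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${db}.* TO '${db}'@'%' IDENTIFIED BY '${pass}';
FLUSH PRIVILEGES;
EOF
}

# Example: print the statements for keystone; pipe into mysql to apply:
#   grant_sql keystone ips | mysql -u root -p
grant_sql keystone ips
```

The same call with glance, nova, neutron, or cinder reproduces the statements used in the later sections.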
2.2. Installing the RabbitMQ message queue
Remove any old version (omitted). Install:
yum install rabbitmq-server -y
Create the account and set permissions (RABBIT_PASS is set to ips here):
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Create the working directory:
mkdir -p /data/openstack/rabbitmq
chown rabbitmq:rabbitmq -R /data/openstack/rabbitmq
Edit the unit file (/usr/lib/systemd/system/rabbitmq-server.service):
Environment=RABBITMQ_LOG_BASE=/data/openstack/rabbitmq/log
WorkingDirectory=/data/openstack/rabbitmq/data
Enable and start:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
For easier administration, enable the management plugin:
rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server
Log in at http://ip:15672/
Note: the user must have administrator privileges.
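MariaDB, RabbitMQ, and Memcached must all be reachable before the OpenStack services that depend on them start, and a surprising number of the errors in the QA sections below come down to a dependency that was not yet listening. A small hedged helper (the function name is mine; it relies on bash's /dev/tcp pseudo-device and the coreutils timeout command) can wait for a TCP port:

```shell
#!/bin/bash
# Wait until host:port accepts TCP connections, or give up after N tries.
# Usage: wait_for_port <host> <port> [tries]
wait_for_port() {
    host=$1 port=$2 tries=${3:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        # bash's /dev/tcp pseudo-device attempts a TCP connect
        if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example: block until RabbitMQ answers on 5672 (up to 30 s)
# wait_for_port controller 5672 30 && echo "rabbitmq is up"
```

The same check works for 3306 (MariaDB), 11211 (Memcached), and 5000 (Keystone) after each service is started.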
2.3、Memcached的安装
卸载旧版本(省略、) 安装yum install memcached python-memcached -y 修改配置文件(/etc/sysconfig/memcached) PORT="11211"
USER="memcached"
MAXCONN="1024"
缓存大小="64"
#主要添加控制器
OPTIONS="-l 127.0.0.1,1,controller"start systemctl启用memcached.service
systemctl start memcached.service
2.4. Identity service Keystone (controller node)
Create the storage:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'ips';
Install the packages:
yum install openstack-keystone httpd mod_wsgi -y
Configure Keystone (edit /etc/keystone/keystone.conf):
/etc/keystone/keystone.conf
[database]
# ...
connection = mysql+pymysql://keystone:ips@controller/keystone
[token]
# ...
provider = uuid
Initialize the identity service database and Fernet keys:
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the identity service:
# In Queens a single port (5000) serves all interfaces. In earlier releases, 5000 handled the regular interfaces and 35357 was reserved for the admin service. Replace ADMIN_PASS here with ips.
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure the Apache HTTP server (/etc/httpd/conf/httpd.conf):
vim /etc/httpd/conf/httpd.conf
ServerName controller
cp -f /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# mainly change the log paths
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /data/openstack/httpd/keystone-error.log
    CustomLog /data/openstack/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
        ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /data/openstack/httpd/keystone-error.log
    CustomLog /data/openstack/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

Create an admin-rc file with the following content:
export OS_USERNAME=admin
export OS_PASSWORD=ips
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create domains, projects, users, and roles:
# Create a domain. The domain "default" already exists.
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
# Create a project
openstack project create --domain default --description "Demo Project" demo
# Create a user and set the password to ips
openstack user create --domain default --password-prompt demo
# Create the user role
openstack role create user
# Bind the user, role, and project together
openstack role add --project demo --user demo user
Verify:
unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Create the client access configurations. admin-rc was created above; now create demo-rc:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=ips
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
QA for this section
QA1: Error: Package: perl-DBD-MySQL-4.023-5.el7.x86_64 (@base)
rpm -ivh mysql-community-libs-compat-5.7.18-1.el7.x86_64.rpm
QA2: Missing value auth-url required for auth plugin password
source admin-rc
QA3: Invalid command 'WSGIDaemonProcess', perhaps misspelled or defined by a module not included in the server configuration
# mod_wsgi is pulled in during installation, but removing httpd to fix other problems also removes it; reinstall it alongside httpd.
yum install mod_wsgi
QA4: The request you have made requires authentication. (HTTP 401) (Request-ID: req-9a49935d-49a6-4673-ae3b-193d53eb0444)
# Errors are hard to avoid during installation. When you come back to fix one, either the password has since been changed, or the earlier bootstrap never took effect. Re-run the bootstrap:
keystone-manage bootstrap --bootstrap-password ips \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
2.5. Image service Glance (controller node)
Create the storage:
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'ips';
Create the glance user in OpenStack:
# create the user and set the password to ips
openstack user create --domain default --password-prompt glance
# grant glance the service project and the admin role
openstack role add --project service --user glance admin
# create the service and endpoints for the image service
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Install the packages:
yum install openstack-glance -y
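Each service in this guide registers the same three endpoints (public, internal, admin) at one URL, for glance here and for nova, placement, neutron, and cinder later. The repetition can be scripted; the sketch below only echoes the commands (a dry run) — remove the echo to execute them with a sourced admin-rc:

```shell
#!/bin/sh
# Print the three endpoint-create commands for a service (dry run).
# Usage: make_endpoints <service-type> <url>
make_endpoints() {
    svc=$1 url=$2
    for iface in public internal admin; do
        echo openstack endpoint create --region RegionOne "$svc" "$iface" "$url"
    done
}

# Example: the three glance endpoints
make_endpoints image http://controller:9292
```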
Edit the configuration files.
/etc/glance/glance-api.conf:
[database]
connection = mysql+pymysql://glance:ips@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = ips
[paste_deploy]
flavor = keystone
# image storage backend and location
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /data/openstack/glance/images/
/etc/glance/glance-registry.conf:
[database]
connection = mysql+pymysql://glance:ips@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = ips
[paste_deploy]
flavor = keystone
Create the working directories:
mkdir -p /data/openstack/glance/images/
mkdir -p /data/openstack/glance/log/
chown glance:glance -R /data/openstack/glance
Initialize the glance database:
su -s /bin/sh -c "glance-manage db_sync" glance
Edit openstack-glance-api.service and openstack-glance-registry.service to unify log storage, then start:
# mainly re-point the log directory
ExecStart=/usr/bin/glance-api --log-dir /data/openstack/glance/log/
ExecStart=/usr/bin/glance-registry --log-dir /data/openstack/glance/log/
# start
systemctl daemon-reload
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
Verify:
# download a test image
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
# import the image
openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
# list images
openstack image list
# pull a qcow2 image
wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
# import the image
openstack image create "CentOS7" \
  --file CentOS-7-x86_64-GenericCloud.qcow2 \
  --disk-format qcow2 --container-format bare \
  --public
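A corrupt or truncated download imports without complaint but fails at boot, so it is worth checking the image header before running openstack image create. A QCOW2 file starts with the magic bytes 'Q', 'F', 'I', 0xfb; a minimal check (the is_qcow2 helper is this guide's, not an OpenStack command):

```shell
#!/bin/sh
# Return success if the file starts with the QCOW2 magic "QFI\xfb".
is_qcow2() {
    # render the first 4 bytes in hex to handle the non-printable 0xfb byte
    [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "514649fb" ]
}

# Example:
# is_qcow2 CentOS-7-x86_64-GenericCloud.qcow2 && echo "looks like qcow2"
```

`qemu-img info <file>` gives a fuller report when qemu-img is installed; this check only needs coreutils.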
2.6. Compute service (Nova)
2.6.1. Controller node installation
Create the storage:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'ips';
flush privileges;
Create the nova user in OpenStack:
# create the user and set the password to ips
openstack user create --domain default --password-prompt nova
# grant nova the service project and the admin role
openstack role add --project service --user nova admin
# create the service and endpoints
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Create the placement user in OpenStack:
# create the user and set the password to ips
openstack user create --domain default --password-prompt placement
# grant placement the service project and the admin role
openstack role add --project service --user placement admin
# create the service and endpoints
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Install the packages on the controller node:
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
Create the working directory:
mkdir -p /data/openstack/nova/
chown nova:nova -R /data/openstack/nova
Edit the configuration file (/etc/nova/nova.conf):
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:ips@controller
my_ip = 10.20.16.229
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
# ...
connection = mysql+pymysql://nova:ips@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:ips@controller/nova
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = ips
[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /data/openstack/nova/tmp
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = ips
Edit /etc/httpd/conf.d/00-nova-placement-api.conf and restart httpd:
# upstream packaging bug: add this block
<Directory /usr/bin>
    <IfVersion >= 2.4>
        Require all granted
    </IfVersion>
    <IfVersion < 2.4>
        Order allow,deny
        Allow from all
    </IfVersion>
</Directory>
# restart
systemctl restart httpd
Initialize the nova databases, then verify:
# initialize
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
# verify
nova-manage cell_v2 list_cells
Edit the openstack-nova-*.service unit files to unify log storage, then start:
# mainly re-point the log directory
# openstack-nova-api.service
ExecStart=/usr/bin/nova-api --log-dir /data/openstack/nova/log/
# openstack-nova-consoleauth.service
ExecStart=/usr/bin/nova-consoleauth --log-dir /data/openstack/nova/log/
# openstack-nova-scheduler.service
ExecStart=/usr/bin/nova-scheduler --log-dir /data/openstack/nova/log/
# openstack-nova-conductor.service
ExecStart=/usr/bin/nova-conductor --log-dir /data/openstack/nova/log/
# openstack-nova-novncproxy.service
ExecStart=/usr/bin/nova-novncproxy --web /usr/share/novnc/ $OPTIONS --log-dir /data/openstack/nova/log/
# start
systemctl daemon-reload
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
QA for this section
QA1: upstream bug: fix /etc/httpd/conf.d/00-nova-placement-api.conf as shown above.
2.6.2. Compute node installation
Install the packages on the compute node:
yum install openstack-nova-compute -y
Edit the configuration file (/etc/nova/nova.conf):
[DEFAULT]
# ...
verbose = True
# replace with the IP address of the management network interface on this compute node
my_ip = 10.20.16.228
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:ips@controller
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = ips
[vnc]
# ...
enabled = True
# the server component listens on all IP addresses
vncserver_listen = 0.0.0.0
# the proxy component listens only on the compute node's management interface
vncserver_proxyclient_address = $my_ip
# where a web browser reaches the remote console of instances on this node
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
# ...
api_servers = http://controller:9292
# lock path
[oslo_concurrency]
# (optional) for troubleshooting, enable verbose logging in [DEFAULT] (verbose = True)
lock_path = /data/openstack/nova/tmp
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = ips
Check whether the CPU supports hardware acceleration:
egrep -c "(vmx|svm)" /proc/cpuinfo
# If this returns 1 or more, the compute node supports hardware acceleration. If it returns 0, set virt_type to qemu in /etc/nova/nova.conf; otherwise keep kvm.
[libvirt]
...
virt_type = qemu
Change the log directory, then start the compute service:
# openstack-nova-compute.service
ExecStart=/usr/bin/nova-compute --log-dir /data/openstack/nova/compute
# start
systemctl daemon-reload
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Add the new compute node to the cell database:
openstack compute service list --service nova-compute
# run this every time a node is added
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# alternatively, configure periodic discovery in /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
2.7. Networking service Neutron
2.7.1. Controller node
Create the storage:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ips';
flush privileges;
Create the neutron user in OpenStack:
# create the user and set the password to ips
openstack user create --domain default --password-prompt neutron
# grant neutron the service project and the admin role
openstack role add --project service --user neutron admin
# create the service and endpoints
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install the packages (provider networks):
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
Edit /etc/neutron/neutron.conf:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:ips@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = ips
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = ips
[oslo_concurrency]
# create the working directory first:
# mkdir -p /data/openstack/neutron/lock
# chown neutron:neutron -R /data/openstack/neutron
lock_path = /data/openstack/neutron/lock
Modular Layer 2 (ML2) plug-in, /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
# ...
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = provider
[securitygroup]
# ...
enable_ipset = true
Linux bridge agent, /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = false
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
iptables settings in /usr/lib/sysctl.d/00-system.conf (apply with sysctl -p):
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
DHCP agent, /etc/neutron/dhcp_agent.ini:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Metadata agent, /etc/neutron/metadata_agent.ini:
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = ips
/etc/nova/nova.conf (do not change the settings configured earlier):
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = ips
service_metadata_proxy = true
metadata_proxy_shared_secret = ips
Link the plug-in configuration into place:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Initialize the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Since the nova configuration just changed, restart it:
systemctl restart openstack-nova-api.service
Edit the unit files, then start:
# /usr/lib/systemd/system/neutron-server.service
ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file /data/openstack/neutron/log/server.log
# /usr/lib/systemd/system/neutron-linuxbridge-agent.service
ExecStart=/usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent --log-file /data/openstack/neutron/log/linuxbridge-agent.log
# /usr/lib/systemd/system/neutron-dhcp-agent.service
ExecStart=/usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file /data/openstack/neutron/log/dhcp-agent.log
# /usr/lib/systemd/system/neutron-metadata-agent.service
ExecStart=/usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/metadata_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-metadata-agent --log-file /data/openstack/neutron/log/metadata-agent.log
# start
systemctl daemon-reload
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
2.7.2. Compute node
Install the packages on the compute node:
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset -y
Edit /etc/neutron/neutron.conf:
[DEFAULT]
# ...
# RabbitMQ message queue access
transport_url = rabbit://openstack:ips@controller
# identity service access
auth_strategy = keystone
verbose = True
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = ips
# lock path
[oslo_concurrency]
# (optional) for troubleshooting, enable verbose logging in [DEFAULT] (verbose = True)
lock_path = /data/openstack/neutron/tmp
# comment out all ``connection`` options; compute nodes do not access the database directly
[database]
Linux bridge agent, /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:eno1
[vxlan]
enable_vxlan = false
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
iptables settings in /usr/lib/sysctl.d/00-system.conf (apply with sysctl -p):
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
/etc/nova/nova.conf:
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = ips
Since the nova configuration changed, restart the compute service:
systemctl restart openstack-nova-compute.service
Edit the unit file, then enable and start the Linux bridge agent:
# /usr/lib/systemd/system/neutron-linuxbridge-agent.service; create the directory first:
# mkdir -p /data/openstack/neutron/log
# chown neutron:neutron -R /data/openstack/neutron
ExecStart=/usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent --log-file /data/openstack/neutron/log/linuxbridge-agent.log
# start
systemctl daemon-reload
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verify:
openstack extension list --network
2.8. Create an instance
Create a flavor first.
QA for this section
QA1: When creating a server, nova-conductor.log reports:
2018-05-15 11:45:10.816 5547 ERROR oslo_messaging.rpc.server MessageDeliveryFailure: Unable to connect to AMQP server on controller:5672 after None tries: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
Solution (see https://blog.silversky.moe/works/openstack-lanuch-instance-infinite-scheduling):
su -s /bin/sh -c "nova-manage db sync" nova
If the problem persists, check the cell configuration in the database:
SELECT * FROM `nova_api`.`cell_mappings` WHERE `created_at` LIKE BINARY "%openstack%" OR `updated_at` LIKE BINARY "%openstack%" OR `id` LIKE BINARY "%openstack%" OR `uuid` LIKE BINARY "%openstack%" OR `name` LIKE BINARY "%openstack%" OR `transport_url` LIKE BINARY "%openstack%" OR `database_connection` LIKE BINARY "%openstack%";
The same error can also appear when fetching a token through openstack4j even with a correct configuration; re-run:
su -s /bin/sh -c "nova-manage db sync" nova
QA2: When creating a server: {u"message": u"No valid host was found. ", u"code": 500, u"created": u"2018-05-17T02:22:47Z"}
The administrator's quota for this project allows at most 10 instances, 20 vCPUs, and 5 GB of RAM; once any one resource hits its limit, this exception appears. This is quota management.
# raise the defaults
openstack quota set c5ba590cab874f55b1668bad5cd2a6a6 --instances 30 --cores 90 --ram 204800
QA3: Build of instance 00b69820-ef36-447c-82ca-7bdec4c70ed2 was re-scheduled: invalid argument: could not find capabilities for domaintype=kvm
# KVM is disabled in the BIOS
dmesg | grep kvm
Reboot into the BIOS setup and enable it.
2.9. Installing the dashboard
Install the package:
yum install openstack-dashboard -y
Edit the configuration file (/etc/openstack-dashboard/local_settings):
# point the dashboard at the controller node's OpenStack services
OPENSTACK_HOST = "controller"
# allow all hosts to access the dashboard
ALLOWED_HOSTS = ["*", ]
# configure the memcached session storage
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "controller:11211",
    }
}
# default role for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# enable the multi-domain model
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# configure API versions so the dashboard logs in through the Keystone V3 API
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "volume": 2,
    "image": 2,
}
# if you chose networking option 1 (provider networks), disable layer-3 services
OPENSTACK_NEUTRON_NETWORK = {
    ...
    "enable_router": False,
    "enable_quotas": False,
    "enable_distributed_router": False,
    "enable_ha_router": False,
    "enable_lb": False,
    "enable_firewall": False,
    "enable_vpn": False,
    "enable_fip_topology_check": False,
}
# optionally configure the time zone
TIME_ZONE = "Asia/Shanghai"
Enable and restart the web server and session storage:
systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
2.10. Block storage service Cinder (controller and compute nodes)
2.10.1. Controller node
Create the storage:
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'ips';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'ips';
flush privileges;
Create the cinder user in OpenStack:
# create the user and set the password to ips
openstack user create --domain default --password-prompt cinder
# grant cinder the service project and the admin role
openstack role add --project service --user cinder admin
# create the cinderv2 and cinderv3 services
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
# create the endpoints
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%(project_id)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%(project_id)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%(project_id)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%(project_id)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%(project_id)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%(project_id)s
Install cinder:
yum install openstack-cinder -y
Edit the configuration file /etc/cinder/cinder.conf:
[DEFAULT]
# ...
transport_url = rabbit://openstack:ips@controller
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = ips
[database]
# ...
connection = mysql+pymysql://cinder:ips@controller/cinder
# create the directory first:
# mkdir -p /data/openstack/cinder/tmp
# chown cinder:cinder -R /data/openstack/cinder
[oslo_concurrency]
# ...
lock_path = /data/openstack/cinder/tmp
Edit /etc/nova/nova.conf:
[cinder]
os_region_name = RegionOne
Restart nova:
systemctl restart openstack-nova-api.service
Initialize the database schema:
su -s /bin/sh -c "cinder-manage db sync" cinder
Edit the unit files, mainly to relocate the logs:
# openstack-cinder-api.service
ExecStart=/usr/bin/cinder-api --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /data/openstack/cinder/log/api.log
# openstack-cinder-scheduler.service
ExecStart=/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /data/openstack/cinder/log/scheduler.log
Start cinder:
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
2.10.2. Compute node
Add LVM support and install the components:
yum install lvm2 device-mapper-persistent-data openstack-cinder targetcli python-keystone -y
# enable and start
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
Create the physical volume and volume group for the block storage service (cinder creates logical volumes in this group):
# prepare the partition nvme0n1p4 in advance
pvcreate /dev/nvme0n1p4
vgcreate cinder-volumes /dev/nvme0n1p4
Edit /etc/lvm/lvm.conf:
devices {
    ...
    # this filter must be exact, otherwise the cinder-volume State shows as down
    filter = [ "a|^/dev/nvme0n1p4$|", "r|.*/|" ]
}
Edit the configuration file (/etc/cinder/cinder.conf):
[DEFAULT]
# ...
# RabbitMQ message queue access
rpc_backend = rabbit://openstack:ips@controller
# identity service access
auth_strategy = keystone
my_ip = 10.20.16.227
# enable the LVM backend
enabled_backends = lvm
# lock path
lock_path = /data/openstack/cinder/tmp
# verbose logging
verbose = True
# image service location
glance_api_servers = http://controller:9292
# database access
[database]
# ...
connection = mysql://cinder:ips@controller/cinder
# identity service access; comment out or remove other options
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS  # the password chosen for the cinder user
# configure the LVM backend: the LVM driver, the cinder-volumes volume group, the iSCSI protocol and service; enable it in [DEFAULT]
[lvm]
# ...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
Enable and start the volume service and its dependencies:
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
CentOS image
Set a fixed root password:
virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2 --root-password password:root123
Set passwords for other users:
[root@host229 openstack]# guestfish --rw -a CentOS-7-x86_64-GenericCloud.qcow2
> run
> list-filesystems
/dev/sda1: xfs
> mount /dev/sda1 /
> vi /etc/cloud/cloud.cfg
Unlock root in /etc/cloud/cloud.cfg:
disable_root: 0
ssh_pwauth: 1
......
system_info:
  default_user:
    name: centos
    lock_passwd: false
    plain_text_passwd: "root@ips"
Enable SSH login in /etc/ssh/sshd_config:
Port 22
#AddressFamily any
ListenAddress 0.0.0.0
#ListenAddress ::
PermitRootLogin yes
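After editing the image, it is worth confirming that the sshd_config inside it really permits root login before uploading to Glance. A small hedged sketch (the check_root_login function is mine; it inspects a config file copied out of the image, e.g. with virt-copy-out, rather than the image directly):

```shell
#!/bin/sh
# Succeed if an sshd_config file contains an uncommented "PermitRootLogin yes".
check_root_login() {
    grep -Eq '^[[:space:]]*PermitRootLogin[[:space:]]+yes' "$1"
}

# Example: copy the file out of the image first, then:
# check_root_login sshd_config && echo "root login enabled"
```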