- Installation and Configuration Guide
Installing and Configuring OpenStack environments manually
Edition 1
Legal Notice
1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
Abstract
- Preface
- I. Introduction
- II. Deploying OpenStack with Foreman
- III. Installing OpenStack Manually
- 7. Installing the Database Server
- 8. Installing the Message Broker
- 9. Installing the OpenStack Identity Service
- 9.1. Identity Service Requirements
- 9.2. Installing the Packages
- 9.3. Creating the Identity Database
- 9.4. Configuring the Service
- 9.5. Starting the Identity Service
- 9.6. Creating the Identity Service Endpoint
- 9.7. Creating an Administrator Account
- 9.8. Creating a Regular User Account
- 9.9. Creating the Services Tenant
- 9.10. Validating the Identity Service Installation
- 10. Installing the OpenStack Object Storage Service
- 11. Installing the OpenStack Image Service
- 11.1. Image Service Requirements
- 11.2. Installing the Image Service Packages
- 11.3. Creating the Image Service Database
- 11.4. Configuring the Image Service
- 11.4.1. Configuration Overview
- 11.4.2. Creating the Image Identity Records
- 11.4.3. Setting the Database Connection String
- 11.4.4. Configuring the Use of the Identity Service
- 11.4.5. Using the Object Storage Service for Image Storage
- 11.4.6. Configuring the Firewall
- 11.4.7. Populating the Image Service Database
- 11.5. Starting the Image API and Registry Services
- 11.6. Validating the Image Service Installation
- 12. Installing OpenStack Block Storage
- 13. Installing the OpenStack Networking Service
- 13.1. OpenStack Networking Installation Overview
- 13.2. Networking Prerequisite Configuration
- 13.3. Common Networking Configuration
- 13.4. Configuring the Networking Service
- 13.5. Configuring the DHCP Agent
- 13.6. Configuring a Provider Network
- 13.7. Configuring the Plug-in Agent
- 13.8. Configuring the L3 Agent
- 13.9. Validating the OpenStack Networking Installation
- 14. Installing the OpenStack Compute Service
- 15. Installing the Dashboard
- IV. Validating the Installation
- V. Monitoring the OpenStack Environment
- VI. Managing OpenStack Environment Expansion
- A. Installation Checklist
- B. Troubleshooting the OpenStack Environment
- C. Service Log Files
- D. Revision History
Mono-spaced Bold
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
Press Enter to execute the command. Press Ctrl+Alt+F2 to switch to a virtual terminal.
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand). To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.

The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.

To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Publican is a DocBook publishing system.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books Desktop documentation drafts mss photos stuff svn books_tests Desktop1 downloads images notes scripts svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
```c
static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
		struct kvm_assigned_pci_dev *assigned_dev)
{
	int r = 0;
	struct kvm_assigned_dev_kernel *match;

	mutex_lock(&kvm->lock);

	match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
				      assigned_dev->assigned_dev_id);
	if (!match) {
		printk(KERN_INFO "%s: device hasn't been assigned before, "
		       "so cannot be deassigned\n", __func__);
		r = -EINVAL;
		goto out;
	}

	kvm_deassign_device(kvm, match);

	kvm_free_assigned_device(kvm, match);

out:
	mutex_unlock(&kvm->lock);
	return r;
}
```
Note
Important
Warning
- search or browse through a knowledge base of technical support articles about Red Hat products.
- submit a support case to Red Hat Global Support Services (GSS).
- access other product documentation.
Table of Contents
- Fully distributed object storage
- Persistent block-level storage
- Virtual-machine provisioning engine and image storage
- Authentication and authorization mechanism
- Integrated networking
- Web browser-based GUI for both users and administrators.
Table 1.1. Services
Service | Codename | Description
---|---|---
Dashboard | horizon | A web-based dashboard for managing OpenStack services.
Identity | keystone | A centralized identity service that provides authentication and authorization for other services, and manages users, tenants, and roles.
OpenStack Networking | neutron | A networking service that provides connectivity between the interfaces of other OpenStack services.
Block Storage | cinder | A service that manages persistent block storage volumes for virtual machines.
Compute | nova | A service that launches and schedules networks of machines running on nodes.
Image | glance | A registry service for virtual machine images.
Object Storage | swift | A service providing object storage, which allows users to store and retrieve files (arbitrary data).
Metering | ceilometer | A service providing measurements of cloud resources.
Orchestration | heat | A service providing a template-based orchestration engine, which supports the automatic creation of resource stacks.
The following Service Details section provides more detailed information about the OpenStack service components. Each OpenStack service comprises a collection of Linux services, MySQL databases, or other components, which together provide a functional group. For example, the glance-api and glance-registry Linux services, together with a MySQL database, implement the Image service.
- adminURL, the URL for the administrative endpoint for the service. Only the Identity service might use a value here that is different from publicURL; all other services will use the same value.
- internalURL, the URL of an internal-facing endpoint for the service (typically the same as the publicURL).
- publicURL, the URL of the public-facing endpoint for the service.
- region, the region in which the service is located. By default, if a region is not specified, the 'RegionOne' location is used.
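As an illustration of how these four attributes combine into a single catalog entry, consider the following sketch. The host addresses are hypothetical placeholders, not values from this guide; the ports shown are the conventional defaults for the Image and Identity services.

```python
# Hypothetical catalog entry for the Image service. For most services the
# three endpoint URLs are identical.
endpoint = {
    "service": "glance",
    "region": "RegionOne",                    # default region when none is specified
    "publicURL": "http://192.0.2.10:9292",
    "internalURL": "http://192.0.2.10:9292",  # typically the same as publicURL
    "adminURL": "http://192.0.2.10:9292",
}

# The Identity service is the exception: its administrative API conventionally
# listens on a separate port (35357) from the public API (5000).
identity_endpoint = {
    "service": "keystone",
    "region": "RegionOne",
    "publicURL": "http://192.0.2.10:5000/v2.0",
    "internalURL": "http://192.0.2.10:5000/v2.0",
    "adminURL": "http://192.0.2.10:35357/v2.0",
}
```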
- Users, which have associated information (such as a name and password). In addition to custom users, a user must be defined for each cataloged service (for example, the 'glance' user for the Image service).
- Tenants, which are generally the user's group, project, or organization.
- Roles, which determine a user's permissions.
- Users can create networks, control traffic, and connect servers and devices to one or more networks.
- OpenStack offers flexible networking models, so that administrators can change the networking model to adapt to their volume and tenancy.
- IPs can be dedicated or floating; floating IPs allow dynamic traffic rerouting.
Table 1.4. Networking Service components
Component | Description
---|---
neutron-server | A Python daemon that manages user requests (and exposes the API). It is configured with a plugin that implements the OpenStack Networking API operations using a specific set of networking mechanisms. A wide range of plugins is available; for example, the openvswitch and linuxbridge plugins utilize native Linux networking mechanisms, while other plugins interface with external devices or SDN controllers.
neutron-l3-agent | An agent providing L3/NAT forwarding.
neutron-*-agent | A plug-in agent that runs on each node to perform local networking configuration for the node's virtual machines and networking services.
neutron-dhcp-agent | An agent providing DHCP services to tenant networks.
Database | Provides persistent storage.
- Create, list, and delete volumes.
- Create, list, and delete snapshots.
- Attach and detach volumes to running virtual machines.
Table 1.5. Block Storage Service components
Component | Description
---|---
openstack-cinder-volume | Carves out storage for virtual machines on demand. A number of drivers are included for interaction with storage providers.
openstack-cinder-api | Responds to and handles requests, and places them in the message queue.
openstack-cinder-scheduler | Assigns tasks to the queue and determines which volume server will provision the requested volume.
Database | Provides state information.
Table 1.6. Ways to Segregate the Cloud
Concept | Description
---|---
Regions | Each service cataloged in the Identity service is identified by its region, which typically represents a geographical location, and its endpoint. In a cloud with multiple Compute deployments, regions allow for the discrete separation of services, and are a robust way to share some infrastructure between Compute installations, while allowing for a high degree of failure tolerance.
Cells | A cloud's Compute hosts can be partitioned into groups called cells (to handle large deployments or geographically separate installations). Cells are configured in a tree. The top-level cell ('API cell') runs the nova-api service, but no nova-compute services. In contrast, each child cell runs all of the other typical nova-* services found in a regular installation, except for the nova-api service. Each cell has its own message queue and database service, and also runs nova-cells, which manages the communication between the API cell and its child cells.
Host Aggregates and Availability Zones | A single Compute deployment can be partitioned into logical groups (for example, into multiple groups of hosts that share common resources like storage and network, or which have a special property such as trusted computing hardware).
Table 1.7. Compute Service components
Component | Description
---|---
openstack-nova-api | Handles requests and provides access to the Compute services (such as booting an instance).
openstack-nova-cert | Provides the certificate manager.
openstack-nova-compute | Creates and terminates virtual instances. Interacts with the hypervisor to bring up new instances, and ensures that the state is maintained in the Compute database.
openstack-nova-conductor | Provides database-access support for Compute nodes (thereby reducing security risks).
openstack-nova-consoleauth | Handles console authentication.
openstack-nova-network | Handles Compute network traffic (both private and public access). Handles such tasks as assigning an IP address to a new virtual instance, and implementing security group rules.
openstack-nova-novncproxy | Provides a VNC proxy for browsers (enabling VNC consoles to access virtual machines).
openstack-nova-scheduler | Dispatches requests for new virtual machines to the correct node.
Apache Qpid server (qpidd) | Provides the AMQP message queue. This server (also used by Block Storage) handles the OpenStack transaction management, including queuing, distribution, security, management, clustering, and federation. Messaging becomes especially important when an OpenStack deployment is scaled and its services are running on multiple machines.
libvirtd | The driver for the hypervisor. Enables the creation of virtual machines.
KVM Linux hypervisor | Creates virtual machines and enables their live migration from node to node.
Database | Provides build-time and run-time infrastructure state.
- raw (unstructured format)
- aki/ami/ari (Amazon kernel, ramdisk, or machine image)
- iso (archive format for optical discs; for example, CDROM)
- qcow2 (Qemu/KVM, supports Copy on Write)
- vhd (Hyper-V, common for virtual machine monitors from VMware, Xen, Microsoft, VirtualBox, and others)
- vdi (Qemu/VirtualBox)
- vmdk (VMware)
- bare (no metadata is included)
- ovf (OVF format)
- aki/ami/ari (Amazon kernel, ramdisk, or machine image)
Table 1.8. Image Service components
Component | Description
---|---
openstack-glance-api | Handles requests and image delivery (interacts with storage back-ends for retrieval and storage). Uses the registry to retrieve image information (the registry service is never, and should never be, accessed directly).
openstack-glance-registry | Manages all metadata associated with each image. Requires a database.
Database | Stores image metadata.
- Storage replicas, which are used to maintain the state of objects in the case of outage. A minimum of three replicas is recommended.
- Storage zones, which are used to host replicas. Zones ensure that each replica of a given object can be stored separately. A zone might represent an individual disk drive or array, a server, all the servers in a rack, or even an entire data center.
- Storage regions, which are essentially a group of zones sharing a location. Regions can be, for example, groups of servers or server farms, usually located in the same geographical area. Regions have a separate API endpoint per Object Storage service installation, which allows for a discrete separation of services.
Table 1.9. Object Storage Service components
Component | Description
---|---
openstack-swift-proxy | Exposes the public API, and is responsible for handling requests and routing them accordingly. Objects are streamed through the proxy server to the user (not spooled). Objects can also be served out via HTTP.
openstack-swift-object | Stores, retrieves, and deletes objects.
openstack-swift-account | Responsible for listings of containers, using the account database.
openstack-swift-container | Handles listings of objects (what objects are in a specific container), using the container database.
Ring files | Contain details of all the storage devices, and are used to deduce where a particular piece of data is stored (they map the names of stored entities to their physical locations). One file is created for each object, account, and container server.
Account database |
Container database |
ext4 (recommended) or XFS file system | Used for object storage.
Housekeeping processes | Replication and auditors.
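The mapping role of the ring files can be sketched in miniature: hash an object's name, take a fixed number of bits of the hash as a partition number, and look that partition up in a table of devices. The following is a deliberately simplified illustration of that idea, not the actual Object Storage implementation (real rings also handle replicas, zones, and device weights); all device names are hypothetical.

```python
import hashlib

# Simplified stand-in for a ring: 2**PART_POWER partitions, each assigned
# to a storage device in a zone.
PART_POWER = 4  # 16 partitions
DEVICES = [
    {"id": 0, "zone": 1, "device": "sdb1"},
    {"id": 1, "zone": 2, "device": "sdc1"},
    {"id": 2, "zone": 3, "device": "sdd1"},
]
# Assign partitions to devices round-robin (a real ring balances by weight).
PART2DEV = [DEVICES[p % len(DEVICES)] for p in range(2 ** PART_POWER)]

def get_device(account, container, obj):
    """Map a stored entity's name to the device that holds it."""
    key = f"/{account}/{container}/{obj}".encode()
    digest = hashlib.md5(key).digest()
    # Use the top PART_POWER bits of the first 4 bytes as the partition.
    partition = int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)
    return PART2DEV[partition]
```

Because the hash is deterministic, every proxy server that holds the same ring file computes the same device for a given object name, with no central lookup service required.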
Table 1.10. Metering Service components
Component | Description
---|---
ceilometer-agent-compute | An agent that runs on each Compute node to poll for resource utilization statistics.
ceilometer-agent-central | An agent that runs on a central management server to poll for utilization statistics about resources not tied to instances or Compute nodes.
ceilometer-collector | An agent that runs on one or more central management servers to monitor the message queues. Notification messages are processed and turned into metering messages, and sent back out on to the message bus using the appropriate topic. Metering messages are written to the data store without modification.
Mongo database | Stores collected usage sample data.
API Server | Runs on one or more central management servers to provide access to the data store's data. Only the Collector and the API server have access to the data store.
- A single template provides access to all underlying service APIs.
- Templates are modular (resource oriented).
- Templates can be recursively defined, and therefore reusable (nested stacks). This means that the cloud infrastructure can be defined and reused in a modular way.
- Resource implementation is pluggable, which allows for custom resources.
- Autoscaling functionality (automatically adding or removing resources depending upon usage).
- Basic high availability functionality.
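As an illustration of the template model described above, a minimal CloudFormation-compatible template of the kind processed by heat-api-cfn might look like the following sketch. The image and flavor names are placeholders, not values from this guide:

```json
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Minimal illustrative template: a single compute instance",
  "Parameters" : {
    "KeyName" : { "Type" : "String" }
  },
  "Resources" : {
    "MyInstance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "fedora-image",
        "InstanceType" : "m1.small",
        "KeyName" : { "Ref" : "KeyName" }
      }
    }
  }
}
```

Because resources are declared rather than scripted, a template like this can be nested inside a larger stack or parameterized for reuse.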
Table 1.11. Orchestration Service components
Component | Description
---|---
heat | A CLI tool that communicates with the heat-api to execute AWS CloudFormation APIs.
heat-api | An OpenStack-native REST API that processes API requests by sending them to the heat-engine over RPC.
heat-api-cfn | Provides an AWS-Query API that is compatible with AWS CloudFormation and processes API requests by sending them to the heat-engine over RPC.
heat-engine | Orchestrates the launching of templates and provides events back to the API consumer.
heat-api-cloudwatch | Provides monitoring (metrics collection) for the Orchestration service.
heat-cfntools | A package of helper scripts (for example, cfn-hup, which handles updates to metadata and executes custom hooks).
Note
The heat-cfntools package is only installed on images that are launched by heat into Compute servers.
Table 1.12. Red Hat Enterprise Linux OpenStack Platform Documentation
Guide | Description
---|---
Administration User Guide | HowTo procedures for administering Red Hat Enterprise Linux OpenStack Platform environments.
Configuration Reference Guide | Configuration options and sample configuration files for each OpenStack component.
End User Guide | HowTo procedures for using Red Hat Enterprise Linux OpenStack Platform environments.
Getting Started Guide | Packstack deployment procedures for a Red Hat Enterprise Linux OpenStack Platform cloud, as well as brief HowTos for getting your cloud up and running.
Installation and Configuration Guide (this guide) | Deployment procedures for a Red Hat Enterprise Linux OpenStack Platform cloud; procedures for both a manual and Foreman installation are included. Also included are brief procedures for validating and monitoring the installation.
Release Notes | Information about the current release, including notes about technology previews, recommended practices, and known issues.
All steps in this procedure must be performed while logged in as the root user on the system being registered.
Important
- Run the subscription-manager register command to register the system to Red Hat Network.
  # subscription-manager register
- Enter your Red Hat Network user name when prompted.
  Username: admin@example.com
  Important
  Your Red Hat Network account must have Red Hat Enterprise Linux OpenStack Platform entitlements. If your Red Hat Network account does not have Red Hat Enterprise Linux OpenStack Platform entitlements, then you may register for access to the evaluation program at http://www.redhat.com/openstack/.
- Enter your Red Hat Network password when prompted.
  Password:
- When registration completes successfully, the system is assigned a unique identifier.
  The system has been registered with id: IDENTIFIER
All steps in this procedure must be performed while logged in as the root user. Repeat these steps on each system in the OpenStack environment.
- Use the subscription-manager list command to locate the pool identifier of the Red Hat Enterprise Linux subscription.
  # subscription-manager list --available
  +-------------------------------------------+
      Available Subscriptions
  +-------------------------------------------+
  Product Name:       Red Hat Enterprise Linux Server
  Product Id:         69
  Pool Id:            POOLID
  Quantity:           1
  Service Level:      None
  Service Type:       None
  Multi-Entitlement:  No
  Expires:            01/01/2022
  Machine Type:       physical
  ...
  The pool identifier is indicated in the Pool Id field associated with the Red Hat Enterprise Linux Server product. The identifier will be unique to your subscription. Take note of this identifier as it will be required to perform the next step.
  Note
  The output displayed in this step has been truncated to conserve space. All other available subscriptions will also be listed in the output of the command.
- Use the subscription-manager attach command to attach the subscription identified in the previous step.
  # subscription-manager attach --pool=POOLID
  Successfully attached a subscription for Red Hat Enterprise Linux Server.
  Replace POOLID with the unique identifier associated with your Red Hat Enterprise Linux Server subscription. This is the identifier that was located in the previous step.
- Run the yum repolist command. This command ensures that the repository configuration file /etc/yum.repos.d/redhat.repo exists and is up to date.
  # yum repolist
  Once repository metadata has been downloaded and examined, the list of enabled repositories will be displayed, along with the number of available packages.
  repo id              repo name                                   status
  rhel-6-server-rpms   Red Hat Enterprise Linux 6 Server (RPMs)    8,816
  repolist: 8,816
Note
The output displayed in this step may differ from that which appears when you run the yum repolist command on your system. In particular, the number of packages listed will vary if or when additional packages are added to the rhel-6-server-rpms repository.
- Red Hat Cloud Infrastructure
- Red Hat Cloud Infrastructure (without Guest OS)
- Red Hat Enterprise Linux OpenStack Platform
- Red Hat Enterprise Linux OpenStack Platform Preview
- Red Hat Enterprise Linux OpenStack Platform (without Guest OS)
All steps in this procedure must be performed while logged in as the root user. Repeat these steps on each system in the environment.
- Use the subscription-manager list command to locate the pool identifier of the relevant Red Hat Cloud Infrastructure or Red Hat Enterprise Linux OpenStack Platform entitlement.
  # subscription-manager list --available
  +-------------------------------------------+
      Available Subscriptions
  +-------------------------------------------+
  ...
  Product Name:       ENTITLEMENT
  Product Id:         ID_1
  Pool Id:            POOLID_1
  Quantity:           3
  Service Level:      None
  Service Type:       None
  Multi-Entitlement:  No
  Expires:            02/14/2013
  Machine Type:       physical
  Product Name:       ENTITLEMENT
  Product Id:         ID_2
  Pool Id:            POOLID_2
  Quantity:           unlimited
  Service Level:      None
  Service Type:       None
  Multi-Entitlement:  No
  Expires:            02/14/2013
  Machine Type:       virtual
  ...
  Locate the entry in the list where the Product Name matches the name of the entitlement that will be used to access Red Hat Enterprise Linux OpenStack Platform packages. Take note of the pool identifier associated with the entitlement; this value is indicated in the Pool Id field. The pool identifier is unique to your subscription and will be required to complete the next step.
  Note
  The output displayed in this step has been truncated to conserve space. All other available subscriptions will also be listed in the output of the command.
- Use the subscription-manager attach command to attach the subscription identified in the previous step.
  # subscription-manager attach --pool=POOLID
  Successfully attached a subscription for ENTITLEMENT.
  Replace POOLID with the unique identifier associated with your Red Hat Cloud Infrastructure or Red Hat Enterprise Linux OpenStack Platform entitlement. This is the identifier that was located in the previous step.
- Install the yum-utils package. The yum-utils package is provided by the Red Hat Enterprise Linux subscription but provides the yum-config-manager utility required to complete configuration of the Red Hat Enterprise Linux OpenStack Platform software repositories.
  # yum install -y yum-utils
  Note that depending on the options selected during Red Hat Enterprise Linux installation, the yum-utils package may already be installed.
- Use the yum-config-manager command to ensure that the correct software repositories are enabled. Each successful invocation of the command will display the updated repository configuration.
  - Ensure that the repository for Red Hat OpenStack 1.0 (Essex) has been disabled.
    # yum-config-manager --disable rhel-server-ost-6-preview-rpms
    Loaded plugins: product-id
    ==== repo: rhel-server-ost-6-preview-rpms ====
    [rhel-server-ost-6-preview-rpms]
    bandwidth = 0
    base_persistdir = /var/lib/yum/repos/x86_64/6Server
    baseurl = https://cdn.redhat.com/content/beta/rhel/server/6/6Server/x86_64/openstack/essex/os
    cache = 0
    cachedir = /var/cache/yum/x86_64/6Server/rhel-server-ost-6-preview-rpms
    cost = 1000
    enabled = False
    ...
  Note
  Yum treats the values False and 0 as equivalent. As a result, the output on your system may instead contain this string: enabled = 0
  Note
  If you encounter the following message in the output from yum-config-manager, then the system has been registered to Red Hat Network using either RHN Classic or RHN Satellite:
  This system is receiving updates from RHN Classic or RHN Satellite.
  Consult the Red Hat Subscription Management Guide for more information on managing subscriptions using RHN Classic or RHN Satellite.
  - Ensure that the repository for Red Hat OpenStack 2.1 (Folsom) is disabled.
    # yum-config-manager --disable rhel-server-ost-6-folsom-rpms
    Loaded plugins: product-id
    ==== repo: rhel-server-ost-6-folsom-rpms ====
    [rhel-server-ost-6-folsom-rpms]
    bandwidth = 0
    base_persistdir = /var/lib/yum/repos/x86_64/6Server
    baseurl = https://cdn.redhat.com/content/beta/rhel/server/6/6Server/x86_64/openstack/folsom/os
    cache = 0
    cachedir = /var/cache/yum/x86_64/6Server/rhel-server-ost-6-folsom-rpms
    cost = 1000
    enabled = False
    ...
  - Ensure that the repository for Red Hat Enterprise Linux OpenStack Platform 3 (Grizzly) has been disabled.
    # yum-config-manager --disable rhel-server-ost-6-3-rpms
    Loaded plugins: product-id
    ==== repo: rhel-server-ost-6-3-rpms ====
    [rhel-server-ost-6-3-rpms]
    bandwidth = 0
    base_persistdir = /var/lib/yum/repos/x86_64/6Server
    baseurl = https://cdn.redhat.com/content/dist/rhel/server/6/6Server/x86_64/openstack/3/os
    cache = 0
    cachedir = /var/cache/yum/x86_64/6Server/rhel-server-ost-6-3-rpms
    cost = 1000
    enabled = False
    ...
  - Ensure that the repository for Red Hat Enterprise Linux OpenStack Platform 4 (Havana) has been enabled.
    # yum-config-manager --enable rhel-server-ost-6-4-rpms
    Loaded plugins: product-id
    ==== repo: rhel-server-ost-6-4-rpms ====
    [rhel-server-ost-6-4-rpms]
    bandwidth = 0
    base_persistdir = /var/lib/yum/repos/x86_64/6Server
    baseurl = https://cdn.redhat.com/content/dist/rhel/server/6/6Server/x86_64/openstack/4/os
    cache = 0
    cachedir = /var/cache/yum/x86_64/6Server/rhel-server-ost-6-4-rpms
    cost = 1000
    enabled = True
    ...
  Note
  Yum treats the values True and 1 as equivalent. As a result, the output on your system may instead contain this string: enabled = 1
- Run the yum repolist command. This command ensures that the repository configuration file /etc/yum.repos.d/redhat.repo exists and is up to date.
  # yum repolist
  Once repository metadata has been downloaded and examined, the list of enabled repositories will be displayed, along with the number of available packages.
  repo id                    repo name                                   status
  rhel-6-server-rpms         Red Hat Enterprise Linux 6 Server (RPMs)    8,816
  rhel-server-ost-6-4-rpms   Red Hat OpenStack 4 (RPMs)                  144
  repolist: 10,058
  Note
  The output displayed in this step may differ from that which appears when you run the yum repolist command on your system. In particular, the number of packages listed will vary if or when additional packages are added to the repositories.
- Install the yum-plugin-priorities package. The yum-plugin-priorities package provides a yum plug-in allowing configuration of per-repository priorities.
  # yum install -y yum-plugin-priorities
- Use the yum-config-manager command to set the priority of the Red Hat Enterprise Linux OpenStack Platform software repository to 1. This is the highest priority value supported by the yum-plugin-priorities plug-in.
  # yum-config-manager --enable rhel-server-ost-6-4-rpms \
    --setopt="rhel-server-ost-6-4-rpms.priority=1"
  Loaded plugins: product-id
  ==== repo: rhel-server-ost-6-4-rpms ====
  [rhel-server-ost-6-4-rpms]
  ...
  enabled = True
  ...
  priority = 1
  ...
- Run the yum update command and reboot to ensure that the most up-to-date packages, including the kernel, are installed and running.
  # yum update -y
  # reboot
You can run the yum repolist command to confirm the repository configuration again at any time.
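The enabled and priority settings manipulated in this procedure are stored in /etc/yum.repos.d/redhat.repo. After the steps above, the stanza for the Havana repository should resemble the following sketch (field names and values abbreviated and illustrative; your file will contain additional fields):

```ini
[rhel-server-ost-6-4-rpms]
name = Red Hat OpenStack 4 (RPMs)
baseurl = https://cdn.redhat.com/content/dist/rhel/server/6/6Server/$basearch/openstack/4/os
enabled = 1
priority = 1
gpgcheck = 1
```

Because the file is regenerated by subscription-manager, persistent changes should be made with yum-config-manager rather than by editing the file directly.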
Note
Table 2.1. OpenStack Daemons
Component | Code | Reserved UID | Reserved GID
---|---|---|---
Identity | keystone | 163 | 163
OpenStack Networking | neutron | 164 | 164
Block Storage | cinder | 165 | 165
Compute | nova | 162 | 162
Image | glance | 161 | 161
Object Storage | swift | 160 | 160
Metering | ceilometer | 166 | 166
Orchestration | heat | 187 | 187
Table 2.2. Third-party Components
Component | Reserved UID | Reserved GID
---|---|---
MongoDB | 184 | 184
Memcached | 497 | 497
MySQL | 27 | 27
Nagios | 496 | 495
Qpidd | 498 | 499
- Processor
- 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled.
- Memory
- A minimum of 2 GB of RAM is recommended. Add additional RAM to this requirement based on the amount of memory that you intend to make available to virtual machine instances.
- Disk Space
- A minimum of 50 GB of available disk space is recommended. Add additional disk space to this requirement based on the amount of space that you intend to make available to virtual machine instances. This figure varies based on both the size of each disk image you intend to create and whether you intend to share one or more disk images between multiple instances. 1 TB of disk space is recommended for a realistic environment capable of hosting multiple instances of varying sizes.
- Network Interface Cards
- 2 x 1 Gbps Network Interface Cards.
- Processor
- No specific CPU requirements are imposed by the networking services.
- Memory
- A minimum of 2 GB of RAM is recommended.
- Disk Space
- A minimum of 10 GB of available disk space is recommended. No additional disk space is required by the networking services other than that required to install the packages themselves. Some disk space, however, must be available for log and temporary files.
- Network Interface Cards
- 2 x 1 Gbps Network Interface Cards.
The block storage nodes run the volume service (openstack-cinder-volume) and provide volumes for use by virtual machine instances or other cloud users. The block storage API (openstack-cinder-api) and scheduling services (openstack-cinder-scheduler) may run on the same nodes as the volume service or separately. In either case, the primary hardware requirement of the block storage nodes is that there is enough block storage available to serve the needs of the OpenStack environment.
- The number of volumes that will be created in the environment.
- The average size of the volumes that will be created in the environment.
- Whether or not the storage backend will be configured to support redundancy.
- Whether or not the storage backend will be configured to create sparse volumes by default.
VOLUMES * SIZE * REDUNDANCY * UTILIZATION = TOTAL
- Replace VOLUMES with the number of volumes that are expected to exist in the environment at any one time.
- Replace SIZE with the expected average size of the volumes that will exist in the environment at any one time.
- Replace REDUNDANCY with the expected number of redundant copies of each volume that the backend storage will be configured to keep. Use 1, or skip this multiplication operation, if no redundancy will be used.
- Replace UTILIZATION with the expected percentage of each volume that will actually be used. Use 1, indicating 100%, if the use of sparse volumes will not be enabled.
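As a worked example of this formula, assume 100 volumes averaging 20 GB each, with two redundant copies and 50% expected utilization. All of these figures are illustrative, not recommendations from this guide:

```python
def required_storage(volumes, size_gb, redundancy=1, utilization=1.0):
    """Apply the VOLUMES * SIZE * REDUNDANCY * UTILIZATION sizing formula.

    redundancy defaults to 1 (no redundant copies) and utilization to 1.0
    (100%, i.e. sparse volumes not enabled), matching the substitution
    rules above.
    """
    return volumes * size_gb * redundancy * utilization

# 100 volumes x 20 GB x 2 copies x 50% utilization = 2000 GB
total = required_storage(100, 20, redundancy=2, utilization=0.5)
print(total)  # 2000.0
```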
Table of Contents
- OpenStack Controller
- This host group is intended for use on a single host that will act as a controller for the OpenStack deployment. Services that will be deployed to hosts added to this host group include:
- OpenStack Dashboard (Horizon).
- OpenStack Image Storage service (Glance).
- OpenStack Identity service (Keystone).
- MySQL database server.
- Qpid message broker.
The OpenStack API and scheduling services, including those of the Compute service (Nova), also run on the controller.
- OpenStack Nova Compute
- This host group is intended for use on one or more hosts that will act as compute nodes for the OpenStack deployment. These are the systems that virtual machine instances will run on, while accessing the authentication, storage, and messaging infrastructure provided by the controller node. An instance of the Compute service (Nova) runs on each compute node.
Important
The host groups deploy the Compute networking service (openstack-nova-network) instead of OpenStack Networking to provide network services. Additionally, the OpenStack Block Storage service (Cinder) is not currently included in the provided host group definitions.
Procedure 3.1. Installing Packages
- Log in to the system that will host the Foreman installation as the root user.
- Install the openstack-foreman-installer and foreman-selinux packages:
  # yum install -y openstack-foreman-installer foreman-selinux
Procedure 3.2. Configuring the Installer
- Log in to the system that will host the Foreman installation as the root user.
- Open the /usr/share/foreman-installer/bin/foreman_server.sh file in a text editor:
  # vi /usr/share/foreman-installer/bin/foreman_server.sh
- Locate the OpenStack configuration keys within the file. The first configuration key listed is PRIVATE_CONTROLLER_IP.
Example 3.1. OpenStack Configuration
  #PRIVATE_CONTROLLER_IP=10.0.0.10
  #PRIVATE_INTERFACE=eth1
  #PRIVATE_NETMASK=10.0.0.0/23
  #PUBLIC_CONTROLLER_IP=10.9.9.10
  #PUBLIC_INTERFACE=eth2
  #PUBLIC_NETMASK=10.9.9.0/24
  #FOREMAN_GATEWAY=10.0.0.1 (or false for no gateway)
- Remove the comment character (#) from the start of each line, then edit the value of each configuration key. These configuration keys will determine the network topology of the OpenStack environment, once deployed:
PRIVATE_CONTROLLER_IP
- The IP address to assign to the Compute controller node on the private OpenStack network.
PRIVATE_INTERFACE
- The network interface on the controller node to connect to the private OpenStack network.
PRIVATE_NETMASK
- The IP range to associate with the OpenStack private network, defined using Classless Inter-Domain Routing (CIDR) notation.
PUBLIC_CONTROLLER_IP
- The IP address to assign to the Compute controller node on the public OpenStack network.
PUBLIC_INTERFACE
- The network interface on the Compute controller node to connect to the public OpenStack network.
PUBLIC_NETMASK
- The IP range to associate with the OpenStack public network, defined using Classless Inter-Domain Routing (CIDR) notation.
FOREMAN_GATEWAY
- The IP address of the default gateway on the Foreman network. The nodes will use this gateway to access installation media during the provisioning process.
Important
If bare metal provisioning is not required, set the value of the FOREMAN_GATEWAY configuration key to false. The FOREMAN_PROVISIONING configuration key found elsewhere in the file must also be set to false.
- Before:
  if [ "x$FOREMAN_PROVISIONING" = "x" ]; then
      FOREMAN_PROVISIONING=true
  fi
- After:
  if [ "x$FOREMAN_PROVISIONING" = "x" ]; then
      FOREMAN_PROVISIONING=false
  fi
Note
It is possible to use identical network definitions for the OpenStack public and private networks for testing purposes. Such a configuration is not recommended for production environments.
- Save the file and exit the text editor.
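Once the comment characters are removed and site-specific values substituted, the edited block might look like the following. All addresses and interface names here are illustrative assumptions matching Example 3.1, not recommended values:

```shell
# Illustrative, uncommented values in foreman_server.sh (example addresses only).
PRIVATE_CONTROLLER_IP=10.0.0.10
PRIVATE_INTERFACE=eth1
PRIVATE_NETMASK=10.0.0.0/23
PUBLIC_CONTROLLER_IP=10.9.9.10
PUBLIC_INTERFACE=eth2
PUBLIC_NETMASK=10.9.9.0/24
FOREMAN_GATEWAY=10.0.0.1
```

Substitute the addresses, interfaces, and CIDR ranges appropriate to your own network topology.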
Procedure 3.3. Running the Installer
- Log in to the system that will host the Foreman installation as the root user.
- Change to the /usr/share/openstack-foreman-installer/bin/ directory. The installer must be run from this directory:
  # cd /usr/share/openstack-foreman-installer/bin/
- Run the foreman_server.sh script:
  # sh foreman_server.sh
- A message is displayed indicating that the functionality being deployed is provided as a Technology Preview:
  #################### RED HAT OPENSTACK #####################
  Thank you for using the Red Hat OpenStack Foreman Installer!
  Please note that this tool is a Technology Preview
  For more information about Red Hat Technology Previews, see
  https://access.redhat.com/support/offerings/techpreview/
  ############################################################
  Press [Enter] to continue
  Press the Enter key to indicate that you understand the support ramifications of the Technology Preview designation and wish to proceed with installation.
- The installer deploys Foreman using Puppet manifests. This may take a significant amount of time.
- A message is displayed once deployment of the Puppet manifests has been completed:
  Foreman is installed and almost ready for setting up your OpenStack
  First, you need to alter a few parameters in Foreman. Visit:
  https://FQDN/puppetclasses/quickstack::compute/edit
  https://FQDN/puppetclasses/quickstack::controller/edit
  Go to the Smart Class Parameters tab and work though each of the parameters in the left-hand column
  Then copy /tmp/foreman_client.sh to your openstack client nodes
  Run that script and visit the HOSTS tab in foreman. Pick CONTROLLER host group for your controller node and COMPUTE host group for the rest
  Once puppet runs on the machines, OpenStack is ready!
  In the actual output, FQDN will have been replaced with the fully qualified domain name of the system to which Foreman was deployed.
admin and password changeme. It is highly recommended that users change this password immediately following installation.
Procedure 4.1. Changing the Password
- Open a web browser either on the Foreman server itself or on a system with network access to the Foreman server.
- Browse to https://FQDN/. Replace FQDN with the fully qualified domain name of your Foreman server.
- The login screen is displayed. Type admin in the Username field and changeme in the Password field. Click the button to log in.
- The Overview screen is displayed. Select the → option in the top right hand corner of the screen to access account settings.
- The Edit User screen is displayed. Enter a new password in the Password field.
- Enter the new password again in the Verified field.
- Click the button to save the change.
The password of the admin user has been updated.
Procedure 4.2. Configuring Installation Media
- Use a web browser on a system with network access to the Foreman server to open the Foreman web user interface.
- Log in using the admin user and the password that was set in Section 4.1, “Changing the Password”.
- Click → → in the top right hand corner of the page.
- The Installation Media page is displayed. An OpenStack RHEL mirror entry already exists by default, but the associated path is only provided as an example and must be corrected.
- Click the OpenStack RHEL mirror entry.
- The Edit Medium page is displayed.
- Update the Path field to contain the URL of a local installation mirror. These variables can be used in the URL and will be replaced automatically:
  - $arch - The system architecture, for example x86_64.
  - $version - The operating system version, for example 6.5.
  - $major - The operating system major version, for example 6.
  - $minor - The operating system minor version, for example 4.
- Click Submit to save the updated Path.
The OpenStack RHEL mirror installation media configuration has been updated.
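For illustration, a Path value using these variables might look like the following. The mirror host name and directory layout are hypothetical examples; substitute the URL of your own local mirror:

```
http://mirror.example.com/rhel/$major/os/$arch/
```

With the example substitutions above, Foreman would expand this to a URL such as http://mirror.example.com/rhel/6/os/x86_64/.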
Important
Take note of the value assigned to the admin_password configuration key in the OpenStack Controller host group. This configuration key determines the password to be used when logging into the OpenStack dashboard as the admin user.
Procedure 4.3. Editing Host Groups
- Use a web browser on a system with network access to the Foreman server to open the Foreman web user interface.
- Log in using the admin user and the associated password.
- Click → → .
- The Host Groups page is displayed. The available options are:
  - OpenStack Controller
  - OpenStack Nova Compute
  Click the name of the host group to edit.
- The Edit page is displayed. Select the Parameters tab.
- The list of parameters associated with the host group is displayed. For each parameter to be edited, click the associated button. A text field will appear at the bottom of the page. Enter the new value for the parameter in the text field. For more information on the available host group parameters see:
- Repeat the previous step for each parameter in the host group that needs to be edited.
- Click to save the updated parameter values.
Table 4.1. Controller Node Parameters

| Parameter Name | Default Value | Description |
|---|---|---|
| admin_email | admin@DOMAIN | The email address to associate with the OpenStack admin user when it is created using the Identity service. |
| admin_password | Random | The password to associate with the admin Identity user when it is created. This is the password of the user that will be used to administer the cloud. |
| cinder_db_password | Random | The password to associate with the cinder database user, for use by the Block Storage service. |
| cinder_user_password | Random | The password to associate with the cinder Identity service user, for use by the Block Storage service. |
| glance_db_password | Random | The password to associate with the glance database user, for use by the Image Storage service. |
| glance_user_password | Random | The password to associate with the glance Identity service user, for use by the Image Storage service. |
| horizon_secret_key | Random | The unique secret key to be stored in the Dashboard configuration. |
| keystone_admin_token | Random | The unique administrator token to be used by the Identity service. This token can be used by an administrator to access the Identity service when normal user authentication is not working or not yet configured. |
| keystone_db_password | Random | The password to associate with the keystone database user, for use by the Identity service. |
| mysql_root_password | Random | The password to associate with the root database user, for use when administering the database. |
| nova_db_password | Random | The password to associate with the nova database user, for use by the Compute service. |
| nova_user_password | Random | The password to associate with the nova Identity service user, for use by the Compute service. |
| pacemaker_priv_floating_ip | | |
| pacemaker_pub_floating_ip | | |
| verbose | true | Boolean value indicating whether or not verbose logging information must be generated. |
Table 4.2. Compute Node Parameters

| Parameter Name | Default Value | Description |
|---|---|---|
| fixed_network_range | | |
| floating_network_range | | |
| nova_db_password | Random | The password associated with the nova database user. This password must match the value used for the same field in the controller host group. |
| nova_user_password | Random | The password associated with the nova Identity service user. This password must match the value used for the same field in the controller host group. |
| pacemaker_priv_floating_ip | | |
| private_interface | eth1 | The interface to attach to the private OpenStack network. |
| public_interface | eth2 | The interface to attach to the public OpenStack network. |
| verbose | true | Boolean value indicating whether or not verbose logging information must be generated. |
- To add existing Red Hat Enterprise Linux hosts to Foreman, see Section 5.1, “Registering Existing Hosts”.
- To provision bare metal hosts and add them to Foreman, see Section 5.2, “Provisioning New Hosts”.
/tmp/foreman_client.sh file on the Foreman server. The script must be copied to each new host to facilitate registration with the Foreman server. When run on a new host the script performs these actions:
- Installs the augeas and ruby193-puppet packages.
- Configures the Puppet agent to access the Foreman server.
- Starts the Puppet agent.
Procedure 5.1. Registering Hosts
- Log in to the Foreman server.
- Copy the /tmp/foreman_client.sh file from the Foreman server to the new host:
  $ scp /tmp/foreman_client.sh USER@IP:DIR
  Replace USER with the user to use when logging in to the new host, replace IP with the IP address or fully qualified domain name of the new host, and replace DIR with the path to the directory in which the file must be stored on the remote machine. The directory must already exist.
- Log in to the new host as the root user.
- Change into the directory to which the foreman_client.sh script was copied:
  # cd DIR
  Replace DIR with the path to the directory used when copying the file to the host.
- Run the foreman_client.sh script:
  # sh foreman_client.sh
- When the script completes successfully the Puppet agent is started and this message is displayed:
  Starting puppet agent:                                     [  OK  ]
- Use a web browser on a system with network access to the Foreman server to open the Foreman web user interface.
- Log in using the admin user and the password that was defined during the Foreman installation.
- Click the Hosts tab.
- Verify that the newly registered host is listed.
Procedure 5.2. Provisioning New Hosts
- Use a web browser on a system with network access to the Foreman server to open the Foreman web user interface.
- Log in using the admin user and associated password.
- Click the Hosts tab.
- Click the button.
- Enter the desired fully qualified domain name for the new host in the Name field.
- Do not select a Host Group at this time.
- Click the Network tab. The Network tab contains settings that define the networking configuration for the new host.
- Enter the MAC address of the network interface on the physical machine that is connected to the Foreman network in the MAC address field.
  Example 5.2. MAC Address
  In this example the ip link command is used to identify the MAC address of the eth0 network interface.
  # ip link show eth0
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 00:1a:4a:0f:18:bb brd ff:ff:ff:ff:ff:ff
  The MAC address of the eth0 network device is 00:1a:4a:0f:18:bb.
- Ensure that the value in the Domain field matches the domain that Foreman is expected to manage.
- Ensure that a subnet is selected in the Subnet field. The IP address field will be populated automatically based on the subnet selection.
- Click the Operating System tab. The Operating System tab contains options for configuring the operating system installation for the host.
- Select x86_64 in the Architecture field.
- Select RedHat 6.5 in the Operating System field.
- Enter a password for the root user in the Root password field.
  Warning
  It is highly recommended that a secure root password is provided. Strong passwords contain a mix of uppercase, lowercase, numeric, and punctuation characters, are six or more characters long, and do not contain dictionary words. Failure to enter a secure root password in this field will result in the default root password, 123123, being used.
- Click the button. Two templates will appear:
  - OpenStack PXE Template
  - OpenStack Kickstart Template
- Click the Parameters tab. The Parameters tab contains settings that will be used to register the new host to Red Hat Network or a Red Hat Network Satellite server.
- To configure the host to register to Red Hat Network:
  - Click the button next to the satellite_type configuration key. Enter hosted in the Value field.
  - Click the button next to the satellite_host configuration key. Enter xmlrpc.rhn.redhat.com in the associated Value field.
  - Click the button next to the activation_key configuration key. Enter the Red Hat Network activation key to be used when registering the host in the associated Value field.
- To configure the host to register to a Red Hat Network Satellite server:
  - Ensure the satellite_type configuration key is set to the default value, site.
  - Click the button next to the satellite_host configuration key. Enter the IP address or fully qualified domain name of the Red Hat Network Satellite server in the associated Value field.
  - Click the button next to the activation_key configuration key. Enter the Red Hat Network activation key to be used when registering the host in the associated Value field.
  Important
  If the system is intended to be provisioned using packages from a different location instead of Red Hat Network or a Red Hat Network Satellite server, the OpenStack Kickstart Template must be edited. Access the template editor by clicking → → , selecting the OpenStack Kickstart Template entry, and removing this line from the template:
  <%= snippets "redhat_register" %>
  The line must be replaced with environment specific commands suitable for registering the system and adding equivalent software repositories.
- Click the button. Foreman will prepare to add the hosts the next time they boot.
- Ensure network boot (PXE) is enabled on the new host. Instructions for confirming this are in the manufacturer's documentation for the specific system.
- Restart the host. The host will retrieve an installation image from the Foreman PXE/TFTP server and begin the installation process.
- The host will restart again once installation has completed. Once this occurs use the Foreman web user interface to verify that the new host appears by clicking the Hosts tab.
Important
The PXE configuration of the host (stored in /var/lib/tftpboot/pxelinux.cfg/) is updated to ensure that future boots are performed using local boot devices. This ensures that the host does not re-provision itself when restarted. To re-provision a system that has already been registered to Foreman either:
- Use the web user interface to delete and re-add the host; or
- Use the web user interface to view the details of the host and click the Build button.
root user to allow the host to register with a new certificate:
# scl enable ruby193 'puppet cert clean HOST'
Replace HOST with the fully qualified domain name of the host.
Procedure 5.3. Assigning Hosts
- Use a web browser on a system with network access to the Foreman server to open the Foreman web user interface.
- Log in using the admin user and associated password.
- Click the Hosts tab.
- Select the host from the list displayed in the table.
- Click the → option.
- The Change Group window is displayed.
- Use the Select host group field to select a host group. The available options are:
  - OpenStack Controller
  - OpenStack Nova Compute
  If no controller node has been configured yet, select OpenStack Controller; otherwise select OpenStack Nova Compute to provision compute nodes.
  Important
  A controller node must be provisioned before attempting to provision a compute node. Puppet must have completed installation and configuration of the controller node before compute nodes are deployed.
- Click the button to save the host group selection.
Note
Log in to the host as the root user and run this command:
# scl enable ruby193 "puppet agent --test"
Replace HOSTNAME with the host name or IP address of the server acting as the controller node:
- HTTPS: https://HOSTNAME/dashboard/
- HTTP: http://HOSTNAME/dashboard/
admin user. This is the password that was set for the admin_password configuration key of the controller node (Section 4.3.1, “Controller Node”). To begin using the OpenStack deployment refer to Using OpenStack With the Dashboard in the Getting Started Guide.
Table of Contents
- 7. Installing the Database Server
- 8. Installing the Message Broker
- 9. Installing the OpenStack Identity Service
- 9.1. Identity Service Requirements
- 9.2. Installing the Packages
- 9.3. Creating the Identity Database
- 9.4. Configuring the Service
- 9.5. Starting the Identity Service
- 9.6. Creating the Identity Service Endpoint
- 9.7. Creating an Administrator Account
- 9.8. Creating a Regular User Account
- 9.9. Creating the Services Tenant
- 9.10. Validating the Identity Service Installation
- 10. Installing the OpenStack Object Storage Service
- 11. Installing the OpenStack Image Service
- 11.1. Image Service Requirements
- 11.2. Installing the Image Service Packages
- 11.3. Creating the Image Service Database
- 11.4. Configuring the Image Service
- 11.4.1. Configuration Overview
- 11.4.2. Creating the Image Identity Records
- 11.4.3. Setting the Database Connection String
- 11.4.4. Configuring the Use of the Identity Service
- 11.4.5. Using the Object Storage Service for Image Storage
- 11.4.6. Configuring the Firewall
- 11.4.7. Populating the Image Service Database
- 11.5. Starting the Image API and Registry Services
- 11.6. Validating the Image Service Installation
- 12. Installing OpenStack Block Storage
- 13. Installing the OpenStack Networking Service
- 13.1. OpenStack Networking Installation Overview
- 13.2. Networking Prerequisite Configuration
- 13.3. Common Networking Configuration
- 13.4. Configuring the Networking Service
- 13.5. Configuring the DHCP Agent
- 13.6. Configuring a Provider Network
- 13.7. Configuring the Plug-in Agent
- 13.8. Configuring the L3 Agent
- 13.9. Validating the OpenStack Networking Installation
- 14. Installing the OpenStack Compute Service
- 15. Installing the Dashboard
- mysql-server
- Provides the MySQL database server.
- mysql
- Provides the MySQL client tools and libraries. Installed as a dependency of the mysql-server package.
root user.
- Install the required packages using the yum command:
  # yum install -y mysql-server
root user.
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on port 3306 to the file. The new rule must appear before any INPUT rules that REJECT traffic:
  -A INPUT -p tcp -m multiport --dports 3306 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect:
  # service iptables restart
The iptables firewall is now configured to allow incoming connections to the MySQL database service on port 3306.
root user.
- Use the service command to start the mysqld service:
  # service mysqld start
- Use the chkconfig command to ensure that the mysqld service will be started automatically in the future:
  # chkconfig mysqld on
The mysqld service has been started.
root user account. This account acts as the database administrator account. Set the password of the root database user once the database service has been started for the first time.
root user.
- Use the mysqladmin command to set the password for the root database user:
  # /usr/bin/mysqladmin -u root password "PASSWORD"
  Replace PASSWORD with the intended password.
- The mysqladmin command can also be used to change the password of the root database user if required:
  # /usr/bin/mysqladmin -u root -pOLDPASS password NEWPASS
  Replace OLDPASS with the existing password and NEWPASS with the password that is intended to replace it.
The password for the root database user has been set. This password will be required when logging in to create databases and database users.
- qpid-cpp-server
- Provides the Qpid message broker.
- qpid-cpp-server-ssl
- Provides the Qpid plug-in enabling support for SSL as a transport layer for AMQP traffic. This package is optional but recommended to support secure configuration of Qpid.
root user.
- Install the required packages using the yum command:
  # yum install -y qpid-cpp-server qpid-cpp-server-ssl
/etc/sasl2/qpidd.conf on the broker. To narrow the allowed mechanisms to a smaller subset, edit this file and remove mechanisms.
SASL Mechanisms
- ANONYMOUS
- Clients are able to connect anonymously. Note that when the broker is started with auth=no, authentication is disabled. The PLAIN and ANONYMOUS authentication mechanisms are available as identification mechanisms, but they have no authentication value.
- PLAIN
- Passwords are passed in plain text between the client and the broker. This is not a secure mechanism and should be used in development environments only. If PLAIN is used in production, it should only be used over SSL connections, where the SSL encryption of the transport protects the password. Note that when the broker is started with auth=no, authentication is disabled. The PLAIN and ANONYMOUS authentication mechanisms are available as identification mechanisms, but they have no authentication value.
- DIGEST-MD5
- MD5 hashed passwords are exchanged using HTTP headers. This is a medium strength security protocol.
The table below lists the cyrus-sasl-* package(s) that need to be installed on the server for each authentication mechanism to be available.
Table 8.1.

| Method | Package | /etc/sasl2/qpidd.conf entry |
|---|---|---|
| ANONYMOUS | - | - |
| PLAIN | cyrus-sasl-plain | mech_list: PLAIN |
| DIGEST-MD5 | cyrus-sasl-md5 | mech_list: DIGEST-MD5 |
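As an illustrative sketch, an /etc/sasl2/qpidd.conf that restricts the broker to DIGEST-MD5 and reads accounts from the default password database might contain entries like these. The paths reflect the defaults described in this chapter; verify the entries against the file shipped on your system before editing:

```
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /var/lib/qpidd/qpidd.sasldb
mech_list: DIGEST-MD5
```

Removing a mechanism from the mech_list line is the edit that narrows the allowed mechanisms, as described above.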
Procedure 8.1. Configure SASL using a Local Password File
guest, which are included in the database at /var/lib/qpidd/qpidd.sasldb on installation, or add your own accounts.
- Add new users to the database by using the saslpasswd2 command. The User ID for authentication and ACL authorization uses the form user-id@domain.
  Ensure that the correct realm has been set for the broker. This can be done by editing the configuration file or using the -u option. The default realm for the broker is QPID.
  # saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID new_user_name
- Existing user accounts can be listed by using the -f option:
  # sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb
  Note
  The user database at /var/lib/qpidd/qpidd.sasldb is readable only by the qpidd user. If you start the broker from a user other than the qpidd user, you will need to either modify the configuration file or turn authentication off. If you delete and recreate this file, make sure the qpidd user has read permissions, or authentication attempts will fail.
- To switch authentication on or off, add the appropriate line to the /etc/qpidd.conf configuration file:
  auth=no
  auth=yes
The SASL configuration file is in /etc/sasl2/qpidd.conf for Red Hat Enterprise Linux.
Encryption and certificate management for qpidd is provided by Mozilla's Network Security Services Library (NSS).
- You will need a certificate that has been signed by a Certification Authority (CA). This certificate will also need to be trusted by your client. If you require client authentication in addition to server authentication, the client certificate will also need to be signed by a CA and trusted by the broker.
  In the broker, SSL is provided through the ssl.so module. This module is installed and loaded by default in MRG Messaging. To enable the module, you need to specify the location of the database containing the certificate and key to use. This is done using the ssl-cert-db option.
  The certificate database is created and managed by the Mozilla Network Security Services (NSS) certutil tool. Information on this utility can be found on the Mozilla website, including tutorials on setting up and testing SSL connections. The certificate database will generally be password protected. The safest way to specify the password is to place it in a protected file, use the password file when creating the database, and specify the password file with the ssl-cert-password-file option when starting the broker.
  The following script shows how to create a certificate database using certutil:
  mkdir ${CERT_DIR}
  certutil -N -d ${CERT_DIR} -f ${CERT_PW_FILE}
  certutil -S -d ${CERT_DIR} -n ${NICKNAME} -s "CN=${NICKNAME}" -t "CT,," -x -f ${CERT_PW_FILE} -z /usr/bin/certutil
  When starting the broker, set ssl-cert-password-file to the value of ${CERT_PW_FILE}, set ssl-cert-db to the value of ${CERT_DIR}, and set ssl-cert-name to the value of ${NICKNAME}.
. - The following SSL options can be used when starting the broker:
--ssl-use-export-policy
- Use NSS export policy
--ssl-cert-password-file
PATH
- Required. Plain-text file containing password to use for accessing certificate database.
--ssl-cert-db
PATH
- Required. Path to directory containing certificate database.
--ssl-cert-name
NAME
- Name of the certificate to use. Default is
localhost.localdomain
. --ssl-port
NUMBER
- Port on which to listen for SSL connections. If no port is specified, port 5671 is used. If the SSL port chosen is the same as the port for non-SSL connections (i.e. if the --ssl-port and --port options are the same), both SSL encrypted and unencrypted connections can be established to the same port. However, in this configuration there is no support for IPv6.
--ssl-require-client-authentication
- Require SSL client authentication (i.e. verification of a client certificate) during the SSL handshake. This occurs before SASL authentication, and is independent of SASL.
  This option enables the EXTERNAL SASL mechanism for SSL connections. If the client chooses the EXTERNAL mechanism, the client's identity is taken from the validated SSL certificate, using the CN, and appending any DCs to create the domain. For instance, if the certificate contains the properties CN=bob, DC=acme, DC=com, the client's identity is bob@acme.com.
  If the client chooses a different SASL mechanism, the identity taken from the client certificate will be replaced by that negotiated during the SASL handshake.
--ssl-sasl-no-dict
- Do not accept SASL mechanisms that can be compromised by dictionary attacks. This prevents a weaker mechanism being selected instead of EXTERNAL, which is not vulnerable to dictionary attacks.
--require-encryption
- This will cause qpidd to only accept encrypted connections. This means only clients using EXTERNAL SASL on the SSL port, or GSSAPI on the TCP port, can connect.
pk12util -o <p12exportfile> -n <certname> -d <certdir> -w <p12filepwfile>
openssl pkcs12 -in <p12exportfile> -out <clcertname> -nodes -clcerts -passin pass:<p12pw>
For more information see man openssl.
5672.
The firewall on the system hosting the message broker is provided by iptables. You can configure the firewall by editing the iptables configuration file, namely /etc/sysconfig/iptables. To do so:
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing incoming connections on port 5672 to the file. The new rule must appear before any INPUT rules that REJECT traffic:
  -A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service for the firewall changes to take effect:
  # service iptables restart
The active rules can be confirmed with:
  # service iptables status
The qpidd service must be started before the broker can commence sending and receiving messages.
- Use the service command to start the service:
  # service qpidd start
- Use the chkconfig command to enable the service permanently:
  # chkconfig qpidd on
The qpidd service has been started.
- 9.1. Identity Service Requirements
- 9.2. Installing the Packages
- 9.3. Creating the Identity Database
- 9.4. Configuring the Service
- 9.5. Starting the Identity Service
- 9.6. Creating the Identity Service Endpoint
- 9.7. Creating an Administrator Account
- 9.8. Creating a Regular User Account
- 9.9. Creating the Services Tenant
- 9.10. Validating the Identity Service Installation
- Access to Red Hat Network or equivalent service provided by a tool such as Satellite.
- A network interface that is addressable by all other systems that will host OpenStack services.
- Network access to the database server.
- Network access to the directory server if using an LDAP backend.
- openstack-keystone
- Provides the OpenStack Identity service.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
root user.
- Install the required packages using the yum command:
  # yum install -y openstack-keystone \
       openstack-utils \
       openstack-selinux
root user (or at least as a user with the correct permissions: create db, create user, grant permissions).
- Connect to the database service using the mysql command:
  # mysql -u root -p
- Create the keystone database:
  mysql> CREATE DATABASE keystone;
- Create a keystone database user and grant it access to the keystone database:
  mysql> GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'PASSWORD';
  mysql> GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'PASSWORD';
  Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately:
  mysql> FLUSH PRIVILEGES;
- Exit the mysql client command:
  mysql> quit
root user.
- Use OpenSSL to generate an initial service token and save it in the SERVICE_TOKEN environment variable:
  # export SERVICE_TOKEN=$(openssl rand -hex 10)
- Store the value of the administration token in a file for future use:
  # echo $SERVICE_TOKEN > ~/ks_admin_token
- Use the openstack-config tool to set the value of the admin_token configuration key to that of the newly created token:
  # openstack-config --set /etc/keystone/keystone.conf \
     DEFAULT admin_token $SERVICE_TOKEN
A default database connection string is present in the /etc/keystone/keystone.conf file. It must be updated to point to a valid database server before starting the service.
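The token steps above can be sketched end to end. This is a minimal illustration: the openssl command and key name follow the procedure, while the temporary file stands in for ~/ks_admin_token.

```shell
# Generate a random 10-byte (20 hex character) service token,
# persist it, and confirm the round trip.
SERVICE_TOKEN=$(openssl rand -hex 10)
TOKEN_FILE=$(mktemp)            # stands in for ~/ks_admin_token
echo "$SERVICE_TOKEN" > "$TOKEN_FILE"

# The token should be exactly 20 hexadecimal characters.
echo "token length: ${#SERVICE_TOKEN}"

# Reading the file back reproduces the token; this is how later
# procedures recover the value of SERVICE_TOKEN.
RESTORED=$(cat "$TOKEN_FILE")
[ "$RESTORED" = "$SERVICE_TOKEN" ] && echo "round trip ok"
rm -f "$TOKEN_FILE"
```

The same round trip is what `export SERVICE_TOKEN=$(cat ~/ks_admin_token)` performs in later sections.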
Perform the following steps as the root user on the server hosting the identity service.
- Use the
openstack-config
command to set the value of theconnection
configuration key.#
openstack-config --set /etc/keystone/keystone.conf \
sql connection mysql://
USER
:PASS
@IP
/DB
Replace:USER
with the database user name the identity service is to use, usuallykeystone
.PASS
with the password of the chosen database user.IP
with the IP address or host name of the database server.DB
with the name of the database that has been created for use by the identity service, usuallykeystone
.
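The connection string format can be sketched by assembling the placeholders described above. The values below are examples only; substitute your own user, password, address, and database name.

```shell
# Build the SQLAlchemy connection string used by the identity service.
USER=keystone          # database user name, usually keystone
PASS=secret            # example password only
IP=192.0.2.10          # example database server address
DB=keystone            # database name, usually keystone

CONNECTION="mysql://${USER}:${PASS}@${IP}/${DB}"
echo "$CONNECTION"
# mysql://keystone:secret@192.0.2.10/keystone
```

This is the value that openstack-config writes to the `connection` key of the `sql` section.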
The PKI keys and certificates used for token signing can be generated automatically using the keystone-manage pki_setup command. It is, however, possible to manually create and sign the required certificates using a third party certificate authority. If using third party certificates, the identity service configuration must be manually updated to point to the certificates and supporting files.
The configuration keys in the [signing] section of the /etc/keystone/keystone.conf configuration file that are relevant to the PKI setup are:
- ca_certs
- Specifies the location of the certificate for the authority that issued the certificate denoted by the
certfile
configuration key. The default value is/etc/keystone/ssl/certs/ca.pem
. - ca_key
- Specifies the key of the certificate authority that issued the certificate denoted by the
certfile
configuration key. The default value is/etc/keystone/ssl/certs/cakey.pem
. - ca_password
- Specifies the password, if applicable, required to open the certificate authority file. The default action if no value is specified is not to use a password.
- certfile
- Specifies the location of the certificate that must be used to verify tokens. The default value of
/etc/keystone/ssl/certs/signing_cert.pem
is used if no value is specified. - keyfile
- Specifies the location of the private key that must be used when signing tokens. The default value of
/etc/keystone/ssl/private/signing_key.pem
is used if no value is specified. - token_format
- Specifies the algorithm to use when generating tokens. Possible values are
UUID
andPKI
. The default value isPKI
.
Perform the following steps as the root user.
- Run the
keystone-manage pki_setup
command.
# keystone-manage pki_setup \
   --keystone-user keystone \
   --keystone-group keystone
- Ensure that the
keystone
user owns the/var/log/keystone/
and/etc/keystone/ssl/
directories.#
chown -R keystone:keystone /var/log/keystone \
/etc/keystone/ssl/
Important
Using an LDAP backend requires the authlogin_nsswitch_use_ldap SELinux Boolean to be enabled on any client machine accessing the LDAP backend. Run the following command on each client machine as the root user to enable the Boolean and make it persistent across reboots:
# setsebool -P authlogin_nsswitch_use_ldap on
dn: cn=example,cn=org
dc: openstack
objectClass: dcObject
objectClass: organizationalUnit
ou: openstack

dn: ou=Groups,cn=example,cn=org
objectClass: top
objectClass: organizationalUnit
ou: groups

dn: ou=Users,cn=example,cn=org
objectClass: top
objectClass: organizationalUnit
ou: users

dn: ou=Roles,cn=example,cn=org
objectClass: top
objectClass: organizationalUnit
ou: roles
The corresponding entries in /etc/keystone/keystone.conf are:
[ldap]
url = ldap://localhost
user = dc=Manager,dc=openstack,dc=org
password = badpassword
suffix = dc=openstack,dc=org
use_dumb_member = False
allow_subtree_delete = False
user_tree_dn = ou=Users,dc=openstack,dc=com
user_objectclass = inetOrgPerson
tenant_tree_dn = ou=Groups,dc=openstack,dc=com
tenant_objectclass = groupOfNames
role_tree_dn = ou=Roles,dc=example,dc=com
role_objectclass = organizationalRole
The posixAccount objectClass described in RFC 2307 is commonly found in directory server implementations.
If this objectClass is used, the uid field is likely to be named uidNumber and the username field is likely to be named either uid or cn. To change these two fields, the corresponding entries in the identity service configuration file are:
[ldap]
user_id_attribute = uidNumber
user_name_attribute = cn
[ldap]
user_allow_create = False
user_allow_update = False
user_allow_delete = False
tenant_allow_create = True
tenant_allow_update = True
tenant_allow_delete = True
role_allow_create = True
role_allow_update = True
role_allow_delete = True
[ldap]
user_filter = (memberof=CN=openstack-users,OU=workgroups,DC=openstack,DC=com)
tenant_filter =
role_filter =
[ldap]
user_enabled_attribute = userAccountControl
user_enabled_mask = 2
user_enabled_default = 512
In Active Directory, userAccountControl is an integer attribute and the enabled flag is stored in bit 1. The value of the user_enabled_attribute is combined with the user_enabled_mask using a bitwise AND; if the result matches the mask, then the account is disabled.
enabled_nomask
. This is required to allow the restoration of the value when enabling or disabling a user, because the attribute contains more than just the status of the user. Setting the value of the user_enabled_default configuration key is required in order to create a default value on the integer attribute (512 = NORMAL ACCOUNT on Active Directory).
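The mask logic can be checked with shell arithmetic. This sketch assumes the mask is applied with a bitwise AND, using the Active Directory values from the text (512 = normal enabled account; setting the disabled bit gives 514):

```shell
# userAccountControl examples: 512 is a normal enabled account;
# 514 has the disabled bit (mask 2) set in addition.
MASK=2

is_disabled() {
    # An account is disabled when the masked bits of the attribute
    # value match the mask itself.
    [ $(( $1 & MASK )) -eq "$MASK" ]
}

is_disabled 512 && echo "512 disabled" || echo "512 enabled"
is_disabled 514 && echo "514 disabled" || echo "514 enabled"
# 512 enabled
# 514 disabled
```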
[ldap]
user_objectclass = person
user_id_attribute = cn
user_name_attribute = cn
user_mail_attribute = mail
user_enabled_attribute = userAccountControl
user_enabled_mask = 2
user_enabled_default = 512
user_attribute_ignore = tenant_id,tenants
tenant_objectclass = groupOfNames
tenant_id_attribute = cn
tenant_member_attribute = member
tenant_name_attribute = ou
tenant_desc_attribute = description
tenant_enabled_attribute = extensionName
tenant_attribute_ignore =
role_objectclass = organizationalRole
role_id_attribute = cn
role_name_attribute = ou
role_member_attribute = roleOccupant
role_attribute_ignore =
[ldap]
use_tls = True
tls_cacertfile = /etc/keystone/ssl/certs/cacert.pem
tls_cacertdir = /etc/keystone/ssl/certs/
tls_req_cert = demand
If both tls_cacertfile and tls_cacertdir are set, then tls_cacertfile will be used and tls_cacertdir is ignored. Valid options for tls_req_cert are demand, never, and allow. These correspond to the standard options permitted by the OpenLDAP TLS_REQCERT option.
Perform the following steps as the root user.
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add an INPUT rule allowing TCP traffic on ports
5000
and 35357 to the file. The new rule must appear before any INPUT rules that REJECT traffic:
-A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables
file. - Restart the
iptables
service to ensure that the change takes effect.#
service iptables restart
The iptables firewall is now configured to allow incoming connections to the identity service on ports 5000 and 35357.
- Use the
su
command to switch to thekeystone
user and run thekeystone-manage db_sync
command to initialize and populate the database identified in/etc/keystone/keystone.conf
.#
su keystone -s /bin/sh -c "keystone-manage db_sync"
Perform the following steps as the root user.
- Use the
service
command to start theopenstack-keystone
service.#
service openstack-keystone start
- Use the
chkconfig
command to ensure that theopenstack-keystone
service will be started automatically in the future.#
chkconfig openstack-keystone on
The openstack-keystone service has been started.
Perform the following steps as the root user.
Set the SERVICE_TOKEN Environment Variable
Set the SERVICE_TOKEN environment variable to the administration token. This is done by reading the token file created when setting the administration token.
# export SERVICE_TOKEN=`cat ~/ks_admin_token`
Set the SERVICE_ENDPOINT Environment Variable
Set the SERVICE_ENDPOINT environment variable to point to the server hosting the identity service.
# export SERVICE_ENDPOINT="http://IP:35357/v2.0"
Replace IP with the IP address or host name of your identity server.
Create a Service Entry
Create a service entry for the identity service using the keystone service-create command.
# keystone service-create --name=keystone --type=identity \
   --description="Keystone Identity Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    Keystone Identity Service     |
|      id     | a8bff1db381f4751bd8ac126464511ae |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+
Take note of the unique identifier assigned to the entry. This value will be required in subsequent steps.
Create an Endpoint for the API
Create an endpoint entry for the v2.0 API identity service using the keystone endpoint-create command.
# keystone endpoint-create \
   --service_id ID \
   --publicurl 'http://IP:5000/v2.0' \
   --adminurl 'http://IP:35357/v2.0' \
   --internalurl 'http://IP:5000/v2.0'
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |       http://IP:35357/v2.0       |
|      id     | 1295011fdc874a838f702518e95a0e13 |
| internalurl |        http://IP:5000/v2.0       |
|  publicurl  |        http://IP:5000/v2.0       |
|    region   |            regionOne             |
|  service_id |                ID                |
+-------------+----------------------------------+
Replace ID with the service identifier returned in the previous step. Replace IP with the IP address or host name of the identity server.
Important
Ensure that the publicurl, adminurl, and internalurl parameters include the correct IP address for your Keystone identity server.
Note
By default, the endpoint is created in the default region, regionOne. If you need to specify a different region when creating an endpoint, use the --region argument to provide it.
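Capturing the identifier from the service-create output by hand is error prone; it can be pulled out of the table with awk. The table below is sample output in the format shown above, and the parsing approach is an illustration rather than part of the official procedure:

```shell
# Sample 'keystone service-create' output, as shown above.
OUTPUT='+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    Keystone Identity Service     |
|      id     | a8bff1db381f4751bd8ac126464511ae |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+'

# Select the row whose Property column is exactly "id" and print its
# Value column with the padding spaces stripped.
SERVICE_ID=$(printf '%s\n' "$OUTPUT" | \
    awk -F'|' '$2 ~ /^[[:space:]]*id[[:space:]]*$/ { gsub(/ /, "", $3); print $3 }')
echo "$SERVICE_ID"
# a8bff1db381f4751bd8ac126464511ae
```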
- Set the
SERVICE_TOKEN
environment variable to the value of the administration token. This is done by reading the token file created when setting the administration token:#
export SERVICE_TOKEN=`cat ~/ks_admin_token`
- Set the
SERVICE_ENDPOINT
environment variable to point to the server hosting the identity service:#
export SERVICE_ENDPOINT="http://
IP
:35357/v2.0"ReplaceIP
with the IP address or host name of your identity server. - Use the
keystone user-create
command to create anadmin
user:#
keystone user-create --name admin --pass PASSWORD
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 94d659c3c9534095aba5f8475c87091a |
|   name   |              admin               |
| tenantId |                                  |
+----------+----------------------------------+
Replace PASSWORD
with a secure password for the account. Take note of the created user's ID as it will be required in subsequent steps. - Use the
keystone role-create
command to create anadmin
role:#
keystone role-create --name admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 78035c5d3cd94e62812d6d37551ecd6a |
|   name   |              admin               |
+----------+----------------------------------+
Take note of the admin role's ID as it will be required in subsequent steps.
- Use the
keystone tenant-create
command to create anadmin
tenant:#
keystone tenant-create --name admin
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | 6f8e3e36c4194b86b9a9b55d4b722af3 |
|     name    |              admin               |
+-------------+----------------------------------+
Take note of the admin
tenant's ID as it will be required in the next step. - Now that the user account, role, and tenant have been created, the relationship between them must be explicitly defined using the
keystone user-role-add
command:
#
keystone user-role-add --user-id
USERID
--role-idROLEID
--tenant-idTENANTID
Replace the user, role, and tenant IDs with those obtained in the previous steps. - The newly created
admin
account will be used for future management of the identity service. To facilitate authentication, create akeystonerc_admin
file in a secure location such as the home directory of theroot
user.Add these lines to the file to set the environment variables that will be used for authentication:export OS_USERNAME=admin export OS_TENANT_NAME=admin export OS_PASSWORD=
PASSWORD
export OS_AUTH_URL=http://IP
:35357/v2.0/ export PS1='[\u@\h \W(keystone_admin)]\$ 'ReplacePASSWORD
with the password of theadmin
user and replaceIP
with the IP address or host name of the identity server. - Run the
source
command on the file to load the environment variables used for authentication:#
source ~/keystonerc_admin
A keystonerc_admin file has also been created for authenticating as the admin user.
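Writing the keystonerc_admin file can be scripted. In this sketch a temporary path stands in for ~/keystonerc_admin, PASSWORD and the 192.0.2.10 address are placeholders, and the PS1 prompt line from the procedure is omitted:

```shell
# Generate a keystonerc-style credentials file and source it.
RC_FILE=$(mktemp)     # stands in for ~/keystonerc_admin
cat > "$RC_FILE" <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=PASSWORD
export OS_AUTH_URL=http://192.0.2.10:35357/v2.0/
EOF

# Sourcing the file exports the variables into the current shell,
# exactly what 'source ~/keystonerc_admin' does in the procedure.
. "$RC_FILE"
echo "$OS_USERNAME $OS_TENANT_NAME"
# admin admin
rm -f "$RC_FILE"
```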
- Load identity credentials from the
~/keystonerc_admin
file that was generated when the administrative user was created:#
source ~/keystonerc_admin
- Use the
keystone user-create
to create a regular user:#
keystone user-create --name USER --pass PASSWORD
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | b8275d7494dd4c9cb3f69967a11f9765 |
|   name   |               USER               |
| tenantId |                                  |
+----------+----------------------------------+
Replace USER
with the user name that you would like to use for the account. ReplacePASSWORD
with a secure password for the account. Take note of the created user's ID as it will be required in subsequent steps. - Use the
keystone role-create
command to create anMember
role. TheMember
role is the default role required for access to the dashboard:#
keystone role-create --name Member
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 78035c5d3cd94e62812d6d37551ecd6a |
|   name   |              Member              |
+----------+----------------------------------+
Take note of the created role's ID as it will be required in subsequent steps.
- Use the
keystone tenant-create
command to create a tenant:#
keystone tenant-create --name TENANT
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | 6f8e3e36c4194b86b9a9b55d4b722af3 |
|     name    |              TENANT              |
+-------------+----------------------------------+
Replace TENANT
with the name that you wish to give to the tenant. Take note of the created tenant's ID as it will be required in the next step. - Now that the user account, role, and tenant have been created, the relationship between them must be explicitly defined using the
keystone user-role-add
command:
#
keystone user-role-add --user-id
USERID
--role-idROLEID
--tenant-idTENANTID
Replace the user, role, and tenant IDs with those obtained in the previous steps. - To facilitate authentication create a
keystonerc_user
file in a secure location such as the home directory of theroot
user.Set these environment variables that will be used for authentication:export OS_USERNAME=
USER
export OS_TENANT_NAME=TENANT
export OS_PASSWORD=PASSWORD
export OS_AUTH_URL=http://IP
:5000/v2.0/ export PS1='[\u@\h \W(keystone_user)]\$ 'ReplaceUSER
andTENANT
with the name of the new user and tenant respectively. ReplacePASSWORD
with the password of the user and replaceIP
with the IP address or host name of the identity server.
A keystonerc_user file has also been created for authenticating as the created user.
- When OpenStack services are distributed, typically one service tenant is created for each endpoint on which services are running (excepting the Identity and Dashboard services).
- When services are deployed on a single node, only one service tenant is required (though more can of course be created for administrative purposes).
This guide uses a single services tenant.
Note
Perform the following steps to create the services tenant:
- Run the
source
command on the file containing the environment variables used to identify the Identity service administrator.#
source ~/keystonerc_admin
- Create the
services
tenant in the Identity service:#
keystone tenant-create --name services --description "Services Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |         Services Tenant          |
|   enabled   |               True               |
|      id     | 7e193e36c4194b86b9a9b55d4b722af3 |
|     name    |             services             |
+-------------+----------------------------------+
Note
To view the list of existing tenants, run:
# keystone tenant-list
Validating the installation requires the keystonerc_admin and keystonerc_user files containing the environment variables required to authenticate as the administrator user and a regular user respectively.
- Run the
source
command on the file containing the environment variables used to identify the identity service administrator.#
source ~/keystonerc_admin
- Run the
keystone user-list
command to authenticate with the identity service and list the users defined in the system.#
keystone user-list
+----------------------------------+--------+---------+------------------+
|                id                |  name  | enabled |      email       |
+----------------------------------+--------+---------+------------------+
| 94d659c3c9534095aba5f8475c87091a | admin  |   True  |                  |
| b8275d7494dd4c9cb3f69967a11f9765 |  USER  |   True  |                  |
+----------------------------------+--------+---------+------------------+
The list of users defined in the system is displayed. If the list is not displayed then there is an issue with the installation.
- If the message returned indicates a permissions or authorization issue then check that the administrator user account, tenant, and role were created properly. Also ensure that the three objects are linked correctly.
Unable to communicate with identity service: {"error": {"message": "You are not authorized to perform the requested action: admin_required", "code": 403, "title": "Not Authorized"}}. (HTTP 403)
- If the message returned indicates a connectivity issue then verify that the
openstack-keystone
service is running and thatiptables
is configured to allow connections on ports 5000 and 35357.
Authorization Failed: [Errno 111] Connection refused
- Run the
source
command on the file containing the environment variables used to identify the regular identity service user.#
source ~/keystonerc_user
- Run the
keystone user-list
command to authenticate with the identity service and list the users defined in the system.#
keystone user-list
Unable to communicate with identity service: {"error": {"message": "You are not authorized to perform the requested action: admin_required", "code": 403, "title": "Not Authorized"}}. (HTTP 403)
An error message is displayed indicating that the user is Not Authorized
to run the command. If the error message is not displayed but instead the user list appears then the regular user account was incorrectly attached to theadmin
role. - Run the
keystone token-get
command to verify that the regular user account is able to run commands that it is authorized to access.#
keystone token-get
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2013-05-07T13:00:24Z       |
|     id    | 5f6e089b24d94b198c877c58229f2067 |
| tenant_id | f7e8628768f2437587651ab959fbe239 |
|  user_id  | 8109f0e3deaf46d5990674443dcf7db7 |
+-----------+----------------------------------+
- Proxy Service
- The proxy service uses the object ring to decide where to direct newly uploaded objects. It updates the relevant container database to reflect the presence of a new object. If a newly uploaded object goes to a new container, the proxy service updates the relevant account database to reflect the new container.The proxy service also directs get requests to one of the nodes where a replica of the requested object is stored, either randomly, or based on response time from the node.
- Object Service
- The object service is responsible for storing data objects in partitions on disk devices. Each partition is a directory. Each object is held in a subdirectory of its partition directory. An MD5 hash of the path to the object is used to identify the object itself.
- Container Service
- The container service maintains databases of objects in containers. There is one database file for each container, and the database files are replicated across the cluster. Containers are defined when objects are put in them. Containers make finding objects faster by limiting object listings to specific container namespaces.
- Account Service
- The account service maintains databases of all of the containers accessible by any given account. There is one database file for each account, and the database files are replicated across the cluster. Any account has access to a particular group of containers. An account maps to a tenant in the Identity Service.
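The object service's path-hashing scheme described above can be illustrated with md5sum. The path below is an arbitrary example, and real Swift also mixes a cluster-wide hash prefix and suffix into the hashed string:

```shell
# Hash an object path the way the object service identifies objects:
# an MD5 digest of the path string.
OBJECT_PATH="/AUTH_test/c1/data1.file"
HASH=$(printf '%s' "$OBJECT_PATH" | md5sum | awk '{ print $1 }')

# MD5 digests are always 32 hexadecimal characters, and the same
# path always hashes to the same value, which is what makes the
# hash usable as a stable on-disk identifier.
echo "$HASH"
echo "digest length: ${#HASH}"
```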
Common Object Storage Service Deployment Configurations
- All services on all nodes.
- Simplest to set up.
- Dedicated proxy nodes, all other services combined on other nodes.
- The proxy service is CPU and I/O intensive. The other services are disk and I/O intensive. This configuration allows you to optimize your hardware usage.
- Dedicated proxy nodes, dedicated object service nodes, container and account services combined on other nodes.
- The proxy service is CPU and I/O intensive. The container and account services are more disk and I/O intensive than the object service. This configuration allows you to optimize your hardware usage even more.
- Supported Filesystems
- The Object Storage Service stores objects in filesystems. Currently,
XFS
andext4
are supported. Theext4
filesystem is recommended.Your filesystem must be mounted withxattr
enabled. For example, this is from/etc/fstab
:/dev/sdb1 /srv/node/d1 ext4 acl,user_xattr 0 0
- Acceptable Mountpoints
- The Object Storage service expects devices to be mounted at
/srv/node/
.
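Whether a device line carries the required xattr-related mount option can be checked by parsing the fstab entry. The line below is the example from the text; the parsing itself is an illustration:

```shell
# Check that an fstab entry mounts its device with user_xattr enabled.
FSTAB_LINE='/dev/sdb1 /srv/node/d1 ext4 acl,user_xattr 0 0'

# Field 4 of an fstab entry holds the comma-separated mount options.
OPTS=$(echo "$FSTAB_LINE" | awk '{ print $4 }')
case ",$OPTS," in
    *,user_xattr,*) echo "xattr enabled" ;;
    *)              echo "xattr missing" ;;
esac
# xattr enabled
```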
Primary OpenStack Object Storage packages
- openstack-swift-proxy
- Proxies requests for objects.
- openstack-swift-object
- Stores data objects of up to 5GB.
- openstack-swift-container
- Maintains a database that tracks all of the objects in each container.
- openstack-swift-account
- Maintains a database that tracks all of the containers in each account.
OpenStack Object Storage dependencies
- openstack-swift
- Contains code common to the specific services.
- openstack-swift-plugin-swift3
- The swift3 plugin for OpenStack Object Storage.
- memcached
- Soft dependency of the proxy server, caches authenticated clients rather than making them reauthorize at every interaction.
- openstack-utils
- Provides utilities for configuring OpenStack.
Procedure 10.1. Installing the Object Storage Service Packages
- Install the required packages using the
yum
command as the root user:#
yum install -y openstack-swift-proxy \
openstack-swift-object \
openstack-swift-container \
openstack-swift-account \
openstack-utils \
memcached
Prerequisites:
- Create the
swift
user, who has theadmin
role in theservices
tenant. - Create the
swift
service entry and assign it an endpoint.
keystonerc_admin
file (which contains administrator credentials) and the keystone command-line utility is installed.
Procedure 10.2. Configuring the Identity Service to work with the Object Storage Service
- Set up the shell to access Keystone as the admin user:
$
source ~/keystonerc_admin
- Create the
swift
user and set its password by replacingPASSWORD
with your chosen password:$
keystone user-create --name swift --pass
PASSWORD
Note the created user's ID as it will be used in subsequent steps.
- Get the ID of the
admin
role:$
keystone role-list | grep admin
If noadmin
role exists, create one:$
keystone role-create --name admin
- Get the ID of the
services
tenant:$
keystone tenant-list | grep services
If noservices
tenant exists, create one:$
keystone tenant-create --name services --description "Services Tenant"
This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant. - Add the
swift
user to theservices
tenant with theadmin
role:$
keystone user-role-add --role-id
ROLEID
--tenant-idTENANTID
--user-idUSERID
Replace the user, role, and tenant IDs with those obtained in the previous steps. - Create the
swift
Object Storage service entry:$
keystone service-create --name swift --type object-store \
--description "Swift Storage Service"
Note the created service's ID as it will be used in the next step.
- Create the
swift
endpoint entry:$
keystone endpoint-create --service_id
SERVICEID
\--publicurl "http://
IP
:8080/v1/AUTH_\$(tenant_id)s" \--adminurl "http://
IP
:8080/v1" \--internalurl "http://
IP
:8080/v1/AUTH_\$(tenant_id)s"ReplaceSERVICEID
with the identifier returned by thekeystone service-create
command. ReplaceIP
with the IP address or fully qualified domain name of the system hosting the Object Storage Proxy service.
ext4
or XFS
, and mounted under the /srv/node/
directory. All of the services that will run on a given node must be enabled, and their ports opened.
Procedure 10.3. Configuring the Object Storage Service Storage Nodes
- Format your devices using the
ext4
orXFS
filesystem. Make sure thatxattr
s are enabled. - Add your devices to the
/etc/fstab
file to ensure that they are mounted under/srv/node/
at boot time.Use theblkid
command to find your device's unique ID, and mount the device using its unique ID.Note
If usingext4
, ensure that extended attributes are enabled by mounting the filesystem with theuser_xattr
option. (InXFS
, extended attributes are enabled by default.) - Configure the firewall to open the TCP ports used by each service running on each nodeBy default, the account service uses port 6002, the container service uses port 6001, and the object service uses port 6000.
- Open the
/etc/sysconfig/iptables
file in a text editor. - Add an
INPUT
rule allowing TCP traffic on the ports used by the account, container, and object services. The new rule must appear before any reject-with icmp-host-prohibited
rule:
-A INPUT -p tcp -m multiport --dports 6000,6001,6002,873 -j ACCEPT
- Save the changes to the
/etc/sysconfig/iptables
file. - Restart the
iptables
service for the firewall changes to take effect.#
service iptables restart
- Change the owner of the contents of
/srv/node/
toswift:swift
with thechown
command.#
chown -R swift:swift /srv/node/
- Set the
SELinux
context correctly for all directories under/srv/node/
with the restorecon
command.#
restorecon -R /srv
- Use the
openstack-config
command to add a hash prefix (swift_hash_path_prefix) to your /etc/swift/swift.conf
:# openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix \ $(openssl rand -hex 10)
- Use the
openstack-config
command to add a hash suffix (swift_hash_path_suffix) to your /etc/swift/swift.conf
:# openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix \ $(openssl rand -hex 10)
These details are required for finding and placing data on all of your nodes. Back up /etc/swift/swift.conf.
- Use the
openstack-config
command to set the IP address your storage services will listen on. Run these commands for every service on every node in your Object Storage cluster.# openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip
node_ip_address
# openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ipnode_ip_address
# openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ipnode_ip_address
TheDEFAULT
argument specifies the DEFAULT section of the service configuration file. Replacenode_ip_address
with the IP address of the node you are configuring. - Copy
/etc/swift/swift.conf
from the node you are currently configuring to all of your Object Storage Service nodes.
Important
The /etc/swift/swift.conf
file must be identical on all of your Object Storage Service nodes. - Start the services which will run on your node.
#
service openstack-swift-account start
#
service openstack-swift-container start
#
service openstack-swift-object start
- Use the
chkconfig
command to make sure the services automatically start at boot time.#
chkconfig openstack-swift-account on
#
chkconfig openstack-swift-container on
#
chkconfig openstack-swift-object on
Storage devices are now mounted under /srv/node/. Any service running on the node has been enabled, and any ports used by services on the node have been opened.
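The requirement that swift.conf be identical on every node can be checked with a checksum. This sketch builds a sample swift.conf with random hash path credentials in a temporary directory, rather than touching the live /etc/swift/:

```shell
# Build a minimal swift.conf with random hash path credentials, then
# confirm a copied file is byte-identical via md5sum.
WORKDIR=$(mktemp -d)
cat > "$WORKDIR/swift.conf" <<EOF
[swift-hash]
swift_hash_path_prefix = $(openssl rand -hex 10)
swift_hash_path_suffix = $(openssl rand -hex 10)
EOF

# Simulate distributing the file to another node.
cp "$WORKDIR/swift.conf" "$WORKDIR/swift.conf.node2"

SUM1=$(md5sum < "$WORKDIR/swift.conf")
SUM2=$(md5sum < "$WORKDIR/swift.conf.node2")
[ "$SUM1" = "$SUM2" ] && echo "swift.conf identical"
rm -rf "$WORKDIR"
```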
The Object Storage proxy service determines the node to which incoming GET and PUT requests are directed.
Procedure 10.4. Configuring the Object Storage Service Proxy Service
- Update the configuration file for the proxy server with the correct authentication details for the appropriate service user:
#
openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken auth_host
IP
#
openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken admin_tenant_name
services
#
openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken admin_user
swift
#
openstack-config --set /etc/swift/proxy-server.conf \
filter:authtoken admin_password
PASSWORD
Where:IP
- The IP address or host name of the Identity server.services
- The name of the tenant that was created for the use of the Object Storage service (previous examples set this toservices
).swift
- The name of the service user that was created for the Object Storage service (previous examples set this toswift
).PASSWORD
- The password associated with the service user.
- Start the
memcached
andopenstack-swift-proxy
services using theservice
command:#
service memcached start
#
service openstack-swift-proxy start
- Use the
chown
command to change the ownership of the keystone signing directory:# chown swift:swift /tmp/keystone-signing-swift
- Enable the
memcached
andopenstack-swift-proxy
services permanently using thechkconfig
command:#
chkconfig memcached on
#
chkconfig openstack-swift-proxy on
- Allow incoming connections to the Swift proxy server by adding this firewall rule to the
/etc/sysconfig/iptables
configuration file:-A INPUT -p tcp -m multiport --dports 8080 -j ACCEPT
Important
This rule allows communication from all remote hosts to the system hosting the Swift proxy on port8080
. For information regarding the creation of more restrictive firewall rules refer to the Red Hat Enterprise Linux 6 Security Guide. - Use the
service
command to restart theiptables
service for the new rule to take effect:#
service iptables save
#
service iptables restart
Table 10.1. Parameters used when building ring files
Ring File Parameter | Description |
---|---|
Partition power
|
2 ^ partition power = partition count.
The partition count is rounded up after calculation.
|
Replica count
|
The number of times that your data will be replicated in the cluster.
|
min_part_hours
|
Minimum number of hours before a partition can be moved. This parameter increases availability of data by not moving more than one copy of a given data item within that min_part_hours amount of time.
|
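The partition power relationship in the table can be checked with shell arithmetic. A power of 10 is used here purely as an example:

```shell
# partition count = 2 ^ partition power; shifting 1 left by the power
# computes the same value in POSIX shell arithmetic.
PART_POWER=10
PART_COUNT=$(( 1 << PART_POWER ))
echo "$PART_COUNT partitions for a partition power of $PART_POWER"
# 1024 partitions for a partition power of 10
```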
Procedure 10.5. Building Object Storage Service Ring Files
- Use the
swift-ring-builder
command to build one ring for each service. Provide a builder file, apartition power
, areplica count
, and theminimum hours between partition re-assignment
:#
swift-ring-builder /etc/swift/object.builder create part_power replica_count min_part_hours
#
swift-ring-builder /etc/swift/container.builder create part_power replica_count min_part_hours
#
swift-ring-builder /etc/swift/account.builder create part_power replica_count min_part_hours
- When the rings are created, add storage devices to each ring:
- Add devices to the accounts ring. Repeat for each device on each node in the cluster that you want added to the ring.
#
swift-ring-builder /etc/swift/account.builder add zX-127.0.0.1:6002/device_mountpoint partition_count
- Specify a zone with zX, where X
is an integer (for example, z1 for zone one). - By default, all three services (account, container, and object) listen on the 127.0.0.1 address, and the above command matches this default.However, the service's machine IP address can also be used (for example, to handle distributed services). If you do use a real IP, remember to change the service's bind address to the same IP address or to '0.0.0.0' (configured in the
/etc/swift/
file).service
-server.conf - TCP port 6002 is the default port that the account server uses.
- The
device_mountpoint
is the directory under/srv/node/
that your device is mounted at. - The recommended minimum number for
partition_count
is 100, use the partition count you used to calculate your partition power.
- Add devices to the containers ring. Repeat for each device on each node in the cluster that you want added to the ring.
#
swift-ring-builder /etc/swift/container.builder add zX-127.0.0.1:6001/device_mountpoint partition_count
- TCP port 6001 is the default port that the container server uses.
- Add devices to the objects ring. Repeat for each device on each node in the cluster that you want added to the ring.
# swift-ring-builder /etc/swift/object.builder add zX-127.0.0.1:6000/device_mountpoint partition_count
  - TCP port 6000 is the default port that the object server uses.
- Distribute the partitions across the devices in the ring using the swift-ring-builder command's rebalance argument.
# swift-ring-builder /etc/swift/account.builder rebalance
# swift-ring-builder /etc/swift/container.builder rebalance
# swift-ring-builder /etc/swift/object.builder rebalance
- Check that you now have three ring files in the /etc/swift directory. The command:
# ls /etc/swift/*gz
should reveal:
/etc/swift/account.ring.gz  /etc/swift/container.ring.gz  /etc/swift/object.ring.gz
- Ensure that all files in the /etc/swift/ directory, including those that you have just created, are owned by the root user and the swift group.
Important
All mount points must be owned by root; all roots of mounted file systems must be owned by swift. Before running the following command, ensure that all devices are already mounted and owned by root.
# chown -R root:swift /etc/swift
- Copy each ring builder file to each node in the cluster, storing them under /etc/swift/.
# scp /etc/swift/*.gz node_ip_address:/etc/swift
- On your proxy server node, use the openstack-config command to turn on debug level logging:
# openstack-config --set /etc/swift/proxy-server.conf DEFAULT log_level debug
- Set up the shell to access Keystone as a user that has the admin role (the admin user is shown in this example). Use the swift list command to make sure you can connect to your proxy server:
$ swift list
Message from syslogd@thildred-swift-01 at Jun 14 02:46:00 ...
135 proxy-server Server reports support for api versions: v3.0, v2.0
- Use the swift command to upload some files to your Object Storage Service nodes:
$ head -c 1024 /dev/urandom > data1.file ; swift upload c1 data1.file
$ head -c 1024 /dev/urandom > data2.file ; swift upload c1 data2.file
$ head -c 1024 /dev/urandom > data3.file ; swift upload c1 data3.file
- Use the swift command to take a listing of the objects held in your Object Storage Service cluster.
$ swift list
$ swift list c1
data1.file
data2.file
data3.file
- Each object is stored on the object storage nodes as .data files, with the number of copies determined by your replica count. Use the find command to locate them:
$ find /srv/node/ -type f -name "*data"
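The number of .data files the find command should report is simply the number of uploaded objects multiplied by the ring's replica count. A small check sketch, assuming the three objects uploaded above and an assumed replica count of three (match it to your own ring configuration):

```shell
# Expected number of .data files = objects uploaded x replica count.
OBJECTS=3      # data1.file, data2.file, data3.file from the validation steps
REPLICAS=3     # assumed replica_count; use the value your rings were built with
EXPECTED=$((OBJECTS * REPLICAS))
echo "expected .data files: $EXPECTED"
# Compare against: find /srv/node/ -type f -name "*data" | wc -l
```

If the counts disagree, re-check that all devices were added to the rings and that the rings were rebalanced and copied to every node.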
Installing the Image service requires:
- MySQL database server root credentials and IP address
- Identity service administrator credentials and endpoint URL
- openstack-glance
- Provides the OpenStack Image service.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
Install the packages as the root user:
# yum install -y openstack-glance openstack-utils openstack-selinux
Create the database as the root user (or as a user with suitable access to create databases, create users, and grant permissions).
- Connect to the database service using the mysql command.
# mysql -u root -p
- Create the glance database.
mysql> CREATE DATABASE glance;
- Create a glance database user and grant it access to the glance database.
mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately.
mysql> FLUSH PRIVILEGES;
- Exit the mysql client.
mysql> quit
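The database-creation statements above follow the same pattern for each OpenStack service, so they can be generated from shell variables and piped into the mysql client. This is an optional convenience sketch; the password value is a placeholder, and the same three variables can later be set to cinder for the Block Storage database:

```shell
# Generate the SQL for a service database; pipe the output into `mysql -u root -p`
# to apply it. DB/DBUSER/DBPASS are illustrative placeholders.
DB=glance DBUSER=glance DBPASS=PASSWORD
SQL=$(cat <<EOF
CREATE DATABASE ${DB};
GRANT ALL ON ${DB}.* TO '${DBUSER}'@'%' IDENTIFIED BY '${DBPASS}';
GRANT ALL ON ${DB}.* TO '${DBUSER}'@'localhost' IDENTIFIED BY '${DBPASS}';
FLUSH PRIVILEGES;
EOF
)
echo "$SQL"
```

To apply: echo "$SQL" | mysql -u root -p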
- Configure TLS/SSL.
- Configure the Identity service for Image service authentication (create database entries, set connection strings, and update configuration files).
- Configure the disk-image storage backend (this guide uses the Object Storage service).
- Configure the firewall for Image service access.
- Populate the Image service database.
- Create the glance user, who has the admin role in the services tenant.
- Create the glance service entry and assign it an endpoint.
- Authenticate as the administrator of the identity service by running the source command on the keystonerc_admin file containing the required credentials:
# source ~/keystonerc_admin
- Create a user named glance for the Image service to use:
# keystone user-create --name glance --pass PASSWORD
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | 8091eaf121b641bf84ce73c49269d2d1 |
| name     | glance                           |
| tenantId |                                  |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the image storage service when authenticating with the identity service. Take note of the returned user ID (used in subsequent steps).
admin
role:#
keystone role-get admin
If noadmin
role exists, create one:$ keystone role-create --name admin
- Get the ID of the
services
tenant:$
keystone tenant-list | grep services
If noservices
tenant exists, create one:$
keystone tenant-create --name services --description "Services Tenant"
This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant. - Use the
keystone user-role-add
command to link theglance
user and theadmin
role together within the context of theservices
tenant:#
keystone user-role-add --user-id
USERID
--role-idROLEID
--tenant-idTENANTID
- Create the glance service entry:
# keystone service-create --name glance \
    --type image \
    --description "Glance Image Service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | Glance Image Service             |
| id          | 7461b83f96bd497d852fb1b85d7037be |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
Take note of the service's returned ID (used in the next step).
glance
endpoint entry:#
keystone endpoint-create --service-id
SERVICEID
\--publicurl "http://
IP
:9292" \--adminurl "http://
IP
:9292" \--internalurl "http://
IP
:9292"ReplaceSERVICEID
with the identifier returned by thekeystone service-create
command. ReplaceIP
with the IP address or host name of the system hosting the Image service.
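The user, role, tenant, and service IDs noted in the steps above can be captured into shell variables instead of copied by hand. The awk filter below is a sketch; the keystone-style table it parses here is canned sample output (the ID is made up), and in real use you would pipe the live command output through the same filter:

```shell
# Extract the value of the "id" row from a keystone-style output table.
# Real use (sketch): USERID=$(keystone user-create ... | awk '$2 == "id" {print $4}')
sample_table() {
cat <<'EOF'
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 8091eaf121b641bf84ce73c49269d2d1 |
|   name   |              glance              |
+----------+----------------------------------+
EOF
}
# Rows look like "| id | VALUE |": field 2 is the property, field 4 the value.
USERID=$(sample_table | awk '$2 == "id" {print $4}')
echo "$USERID"
```

The same filter works for the role, tenant, and service tables, so the user-role-add and endpoint-create commands can consume the captured variables directly.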
The database connection string used by the Image service is defined in the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files. It must be updated to point to a valid database server before starting the service. Perform the following steps as the root user on the server hosting the Image service.
- Use the openstack-config command to set the value of the sql_connection configuration key in the /etc/glance/glance-api.conf file.
# openstack-config --set /etc/glance/glance-api.conf \
    DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace:
  - USER with the database user name the Image service is to use, usually glance.
  - PASS with the password of the chosen database user.
  - IP with the IP address or host name of the database server.
  - DB with the name of the database that has been created for use by the Image service, usually glance.
- Use the openstack-config command to set the value of the sql_connection configuration key in the /etc/glance/glance-registry.conf file.
# openstack-config --set /etc/glance/glance-registry.conf \
    DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace the placeholder values USER, PASS, IP, and DB with the same values used in the previous step.
Perform the following steps as the root user on each node hosting the Image service:
- Configure the glance-api service:
# openstack-config --set /etc/glance/glance-api.conf \
    paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-api.conf \
    keystone_authtoken auth_host IP
# openstack-config --set /etc/glance/glance-api.conf \
    keystone_authtoken auth_port 35357
# openstack-config --set /etc/glance/glance-api.conf \
    keystone_authtoken auth_protocol http
# openstack-config --set /etc/glance/glance-api.conf \
    keystone_authtoken admin_tenant_name services
# openstack-config --set /etc/glance/glance-api.conf \
    keystone_authtoken admin_user glance
# openstack-config --set /etc/glance/glance-api.conf \
    keystone_authtoken admin_password PASSWORD
- Configure the glance-registry service:
# openstack-config --set /etc/glance/glance-registry.conf \
    paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-registry.conf \
    keystone_authtoken auth_host IP
# openstack-config --set /etc/glance/glance-registry.conf \
    keystone_authtoken auth_port 35357
# openstack-config --set /etc/glance/glance-registry.conf \
    keystone_authtoken auth_protocol http
# openstack-config --set /etc/glance/glance-registry.conf \
    keystone_authtoken admin_tenant_name services
# openstack-config --set /etc/glance/glance-registry.conf \
    keystone_authtoken admin_user glance
# openstack-config --set /etc/glance/glance-registry.conf \
    keystone_authtoken admin_password PASSWORD
Replace:
- IP - The IP address or host name of the Identity server.
- services - The name of the tenant that was created for the use of the Image service (previous examples set this to services).
- glance - The name of the service user that was created for the Image service (previous examples set this to glance).
- PASSWORD - The password associated with the service user.
By default, the Image service uses the local file system (file) for its storage backend. However, either of the following storage backends can be used to store uploaded disk images:
- file - Local file system of the Image server (the /var/lib/glance/images/ directory)
- swift - OpenStack Object Storage service
Note
The following procedure sets configuration values using the openstack-config command. However, the /etc/glance/glance-api.conf file can also be manually updated. If manually updating the file:
- Ensure that the default_store parameter is set to the correct backend (for example, 'default_store=rbd').
- Update the parameters in that backend's section (for example, under 'RBD Store Options').
Perform the following steps as the root user:
- Set the default_store configuration key to swift:
# openstack-config --set /etc/glance/glance-api.conf \
    DEFAULT default_store swift
- Set the swift_store_auth_address configuration key to the public endpoint for the Identity service:
# openstack-config --set /etc/glance/glance-api.conf \
    DEFAULT swift_store_auth_address http://IP:5000/v2.0/
- Add the container for storing images in the Object Storage Service:
# openstack-config --set /etc/glance/glance-api.conf \
    DEFAULT swift_store_create_container_on_put True
- Set the swift_store_user configuration key to contain the tenant and user to use for authentication in the format TENANT:USER:
  - If you followed the instructions in this guide to deploy Object Storage, these values must be replaced with the services tenant and the swift user respectively.
  - If you did not follow the instructions in this guide to deploy Object Storage, these values must be replaced with the appropriate Object Storage tenant and user for your environment.
# openstack-config --set /etc/glance/glance-api.conf \
    DEFAULT swift_store_user services:swift
- Set the swift_store_key configuration key to the password of the user to be used for authentication (that is, the password that was set for the swift user when deploying the Object Storage service).
# openstack-config --set /etc/glance/glance-api.conf \
    DEFAULT swift_store_key PASSWORD
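Taken together, the commands above leave the storage backend settings in the DEFAULT section of /etc/glance/glance-api.conf looking roughly like this. The Identity service address and the password are placeholders:

```ini
[DEFAULT]
default_store = swift
swift_store_auth_address = http://192.0.2.10:5000/v2.0/
swift_store_create_container_on_put = True
swift_store_user = services:swift
swift_store_key = PASSWORD
```

With swift_store_create_container_on_put enabled, the Image service creates its container in Object Storage automatically on first upload, so no manual container setup is needed.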
The Image service accepts incoming connections on port 9292. Configure the firewall as the root user.
- Open the /etc/glance/glance-api.conf file in a text editor, and remove any comment characters from in front of the following parameters:
bind_host = 0.0.0.0
bind_port = 9292
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on port 9292 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
-A INPUT -p tcp -m multiport --dports 9292 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect.
# service iptables restart
The iptables firewall is now configured to allow incoming connections to the image storage service on port 9292.
Perform the following steps while logged in as the root user initially. The database connection string must already be defined in the configuration of the service.
- Use the su command to switch to the glance user.
# su glance -s /bin/sh
- Run the glance-manage db_sync command to initialize and populate the database identified in /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf.
$ glance-manage db_sync
Start the glance-api and glance-registry services, and configure them to start at boot, as the root user:
# service openstack-glance-registry start
# service openstack-glance-api start
# chkconfig openstack-glance-registry on
# chkconfig openstack-glance-api on
Download the Red Hat Enterprise Linux 6 Server installation media. The wget command below uses an example URL.
# mkdir /tmp/images
# cd /tmp/images
# wget -c -O rhel-6-server-x86_64-disc1.iso "https://content-web.rhn.redhat.com/rhn/isos/xxxx/rhel-6-server-x86_64-disc1.isoxxxxxxxx"
- Template Description Language (TDL) Files - Oz accepts input in the form of XML-based TDL files, which describe the operating system being installed, the installation media's source, and any additional packages or customization changes that must be applied to the image.
- virt-sysprep - It is also recommended that the virt-sysprep command is run on Linux-based virtual machine images prior to uploading them to the Image service. The virt-sysprep command re-initializes a disk image in preparation for use in a virtual environment. Default operations include the removal of SSH keys, removal of persistent MAC addresses, and removal of user accounts. The virt-sysprep command is provided by the libguestfs-tools package.
Important
Oz uses the default Libvirt network. It is recommended that you do not build images using Oz on a system that is running either the nova-network service or any of the OpenStack Networking components.
Procedure 11.1. Building Images using Oz
- Use the yum command to install the oz and libguestfs-tools packages.
# yum install -y oz libguestfs-tools
- Download the Red Hat Enterprise Linux 6 Server installation DVD ISO file.Although Oz supports the use of network-based installation media, in this procedure a Red Hat Enterprise Linux 6 DVD ISO will be used.
- Use a text editor to create a TDL file for use with Oz. The following example displays the syntax for a basic TDL file.
Example 11.1. TDL File
The template below can be used to create a Red Hat Enterprise Linux 6 disk image. In particular, note the use of the rootpw element to set the password for the root user and the iso element to set the path to the DVD ISO.
<template>
  <name>rhel65_x86_64</name>
  <description>Red Hat 6.5 x86_64 template</description>
  <os>
    <name>RHEL-6</name>
    <version>4</version>
    <arch>x86_64</arch>
    <rootpw>PASSWORD</rootpw>
    <install type='iso'>
      <iso>file:///home/user/rhel-server-6.5-x86_64-dvd.iso</iso>
    </install>
  </os>
  <commands>
    <command name='console'>
      sed -i 's/ rhgb//g' /boot/grub/grub.conf
      sed -i 's/ quiet//g' /boot/grub/grub.conf
      sed -i 's/ console=tty0 / serial=tty0 console=ttyS0,115200n8 /g' /boot/grub/grub.conf
    </command>
  </commands>
</template>
- Run the oz-install command to build an image:
# oz-install -u -d3 TDL_FILE
Syntax:
  - -u ensures any required customization changes to the image are applied after guest operating system installation.
  - -d3 enables the display of errors, warnings, and informational messages.
  - TDL_FILE provides the path to your TDL file.
By default, Oz stores the resultant image in the /var/lib/libvirt/images/ directory. This location is configurable by editing the /etc/oz/oz.cfg configuration file.
- Run the virt-sysprep command on the image to re-initialize it in preparation for upload to the Image service. Replace FILE with the path to the disk image.
# virt-sysprep --add FILE
Refer to the virt-sysprep manual page by running the man virt-sysprep command for information on enabling and disabling specific operations.
Important
It is recommended that the virt-sysprep command be run on all Linux-based virtual machine images prior to uploading them to the Image service. The virt-sysprep command re-initializes a disk image in preparation for use in a virtual environment. Default operations include the removal of SSH keys, removal of persistent MAC addresses, and removal of user accounts.
The virt-sysprep command is provided by the RHEL libguestfs-tools package. As the root user, execute:
# yum install -y libguestfs-tools
# virt-sysprep --add FILE
For information on enabling and disabling specific operations, refer to the manual page:
# man virt-sysprep
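The keystonerc file sourced in the next step is a small shell fragment exporting the credential environment variables the OpenStack clients read. A typical layout is sketched below; every value shown (user, tenant, password, address) is a placeholder to be replaced with your own:

```shell
# Example keystonerc file contents; all values are placeholders.
export OS_USERNAME=userName
export OS_TENANT_NAME=tenantName
export OS_PASSWORD=PASSWORD
export OS_AUTH_URL=http://192.0.2.10:5000/v2.0/
```

Sourcing the file puts these variables into the current shell, which is why the glance commands that follow need no explicit credential arguments.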
- Set the environment variables used for authenticating with the Identity service by loading them from the keystonerc file associated with your user (an administrative account is not required):
# source ~/keystonerc_userName
- Use the glance image-create command to import your disk image:
# glance image-create --name "NAME" \
    --is-public IS_PUBLIC \
    --disk-format DISK_FORMAT \
    --container-format CONTAINER_FORMAT \
    --file IMAGE
Where:
  - NAME = The name by which users will refer to the disk image.
  - IS_PUBLIC = Either true or false:
    - true - All users will be able to view and use the image.
    - false - Only administrators will be able to view and use the image.
  - DISK_FORMAT = The disk image's format. Valid values include: aki, ami, ari, iso, qcow2, raw, vdi, vhd, and vmdk. If the format of the virtual machine disk image is unknown, use the qemu-img info command to try and identify it.

Example 11.2. Using qemu-img info
In the following example, the qemu-img info command is used to determine the format of a disk image stored in the file ./RHEL65.img.
# qemu-img info ./RHEL65.img
image: ./RHEL65.img
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 136K
cluster_size: 65536

  - CONTAINER_FORMAT = The container format of the image. The container format is bare unless the image is packaged in a file format such as ovf or ami that includes additional metadata related to the image.
  - IMAGE = The local path to the image file (for uploading).
For more information about the glance image-create syntax, execute:
# glance help image-create
Note
If the image being uploaded is not locally accessible but is available using a remote URL, provide the URL using the --location parameter instead of using the --file parameter. However, unless you also specify the --copy-from argument, the Image service will not copy the image into the object store. Instead, the image will be accessed remotely each time it is required.

Example 11.3. Uploading an Image to the Image service
In this example the qcow2 format image in the file named rhel-65.qcow2 is uploaded to the Image service. It is created in the service as a publicly accessible image named RHEL 6.5.
# glance image-create --name "RHEL 6.5" --is-public true --disk-format qcow2 \
    --container-format bare \
    --file rhel-65.qcow2
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 2f81976cae15c16ef0010c51e3a6c163     |
| container_format | bare                                 |
| created_at       | 2013-01-25T14:45:48                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 0ce782c6-0d3e-41df-8fd5-39cd80b31cd9 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | RHEL 6.5                             |
| owner            | b1414433c021436f97e9e1e4c214a710     |
| protected        | False                                |
| size             | 25165824                             |
| status           | active                               |
| updated_at       | 2013-01-25T14:45:50                  |
+------------------+--------------------------------------+
- To verify that your image was successfully uploaded, use the glance image-list command:
# glance image-list
+--------------+----------+-------------+------------------+-----------+--------+
| ID           | Name     | Disk Format | Container Format | Size      | Status |
+--------------+----------+-------------+------------------+-----------+--------+
| 0ce782c6-... | RHEL 6.5 | qcow2       | bare             | 213581824 | active |
+--------------+----------+-------------+------------------+-----------+--------+
To view detailed information about an uploaded image, execute the glance image-show command using the image's identifier:
# glance image-show 0ce782c6-0d3e-41df-8fd5-39cd80b31cd9
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 2f81976cae15c16ef0010c51e3a6c163     |
| container_format | bare                                 |
| created_at       | 2013-01-25T14:45:48                  |
| deleted          | False                                |
| disk_format      | qcow2                                |
| id               | 0ce782c6-0d3e-41df-8fd5-39cd80b31cd9 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | RHEL 6.5                             |
| owner            | b1414433c021436f97e9e1e4c214a710     |
| protected        | False                                |
| size             | 25165824                             |
| status           | active                               |
| updated_at       | 2013-01-25T14:45:50                  |
+------------------+--------------------------------------+
OpenStack Block Storage is implemented by three services, collectively referred to as cinder. The three services are:
- The API service (openstack-cinder-api) - Provides an HTTP endpoint for block storage requests. When an incoming request is received, the API verifies that identity requirements are met and translates the request into a message denoting the required block storage actions. The message is then sent to the message broker for processing by the other block storage services.
- The scheduler service (openstack-cinder-scheduler) - Reads requests from the message queue and determines on which block storage host the request must be actioned. The scheduler then communicates with the volume service on the selected host to process the request.
- The volume service (openstack-cinder-volume) - Manages the interaction with the block storage devices. As requests come in from the scheduler, the volume service creates, modifies, and removes volumes as required.
- Preparing for Block Storage Installation
- Steps that must be performed before installing any of the block storage services. These procedures include the creation of identity records, the database, and a database user.
- Common Block Storage Configuration
- Steps that are common to all of the block storage services and as such must be performed on all block storage nodes in the environment. These procedures include configuring the services to refer to the correct database and message broker. Additionally they include the initialization and population of the database which must only be performed once but can be performed from any of the block storage systems.
- Volume Service Specific Configuration
- Steps that are specific to systems that will be hosting the volume service and as such require direct access to block storage devices.
Create the database while logged in to the database server as the root user.
- Connect to the database service using the mysql command.
# mysql -u root -p
- Create the cinder database.
mysql> CREATE DATABASE cinder;
- Create a cinder database user and grant it access to the cinder database.
mysql> GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT ALL ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately.
mysql> FLUSH PRIVILEGES;
- Exit the mysql client.
mysql> quit
- Create the cinder user, who has the admin role in the services tenant.
- Create the cinder service entry and assign it an endpoint.
- Authenticate as the administrator of the identity service by running the source command on the keystonerc_admin file containing the required credentials.
# source ~/keystonerc_admin
- Create a user named cinder for the block storage service to use.
# keystone user-create --name cinder --pass PASSWORD
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | e1765f70da1b4432b54ced060139b46a |
| name     | cinder                           |
| tenantId |                                  |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the block storage service when authenticating with the identity service. Take note of the created user's returned ID as it will be used in subsequent steps.
admin
role:#
keystone role-get admin
If noadmin
role exists, create one:$ keystone role-create --name admin
- Get the ID of the
services
tenant:$
keystone tenant-list | grep services
If noservices
tenant exists, create one:$
keystone tenant-create --name services --description "Services Tenant"
This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant. - Use the
keystone user-role-add
command to link thecinder
user,admin
role, andservices
tenant together:#
keystone user-role-add --user-id
USERID
--role-idROLEID
--tenant-idTENANTID
Replace the user, role, and tenant IDs with those obtained in the previous steps. - Create the
cinder
service entry:#
keystone service-create --name cinder \
--type volume \
+-------------+----------------------------------+ | Property | Value | +-------------+----------------------------------+ | description | Cinder Volume Service | | id |--description "Cinder Volume Service"
dfde7878671e484c9e581a3eb9b63e66
| | name | cinder | | type | volume | +-------------+----------------------------------+Take note of the created service's returned ID as it will be used in the next step. - Create the
cinder
endpoint entry.#
keystone endpoint-create --service-id
SERVICEID
\--publicurl "http://
IP
:8776/v1/\$(tenant_id)s" \--adminurl "http://
IP
:8776/v1/\$(tenant_id)s" \--internalurl "http://
IP
:8776/v1/\$(tenant_id)s"ReplaceSERVICEID
with the identifier returned by thekeystone service-create
command. ReplaceIP
with the IP address or host name of the system that will be hosting the block storage service API (openstack-cinder-api
).Important
If you intend to install and run multiple instances of the API service then you must repeat this step for the IP address or host name of each instance.
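One detail in the endpoint URLs above is easy to get wrong: the $(tenant_id)s template must reach the Identity service literally, which is why the dollar sign is escaped inside the double quotes. A quick sketch of the effect (the IP address is a placeholder):

```shell
IP=192.0.2.10   # placeholder address for illustration
# Escaped: \$ stops the shell from treating $(tenant_id) as command substitution,
# so the template survives as literal text for the Identity service to expand.
URL="http://${IP}:8776/v1/\$(tenant_id)s"
echo "$URL"    # http://192.0.2.10:8776/v1/$(tenant_id)s
```

Without the backslash, the shell would attempt to run tenant_id as a command and substitute its (empty) output into the URL.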
- openstack-cinder
- Provides the block storage services and associated configuration files.
- openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
Install the packages as the root user:
# yum install -y openstack-cinder openstack-utils openstack-selinux
Perform the following identity configuration steps as the root user.
- Set the authentication strategy (auth_strategy) configuration key to keystone using the openstack-config command.
# openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT auth_strategy keystone
- Set the authentication host (auth_host) configuration key to the IP address or host name of the identity server.
# openstack-config --set /etc/cinder/cinder.conf \
    keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the identity server.
- Set the administration tenant name (admin_tenant_name) configuration key to the name of the tenant that was created for the use of the block storage service. In this guide, examples use services.
# openstack-config --set /etc/cinder/cinder.conf \
    keystone_authtoken admin_tenant_name services
- Set the administration user name (admin_user) configuration key to the name of the user that was created for the use of the block storage service. In this guide, examples use cinder.
# openstack-config --set /etc/cinder/cinder.conf \
    keystone_authtoken admin_user cinder
- Set the administration password (admin_password) configuration key to the password that is associated with the user specified in the previous step.
# openstack-config --set /etc/cinder/cinder.conf \
    keystone_authtoken admin_password PASSWORD
Perform the following message broker configuration steps as the root user.
General Settings
Use the openstack-config utility to set the value of the rpc_backend configuration key to Qpid.
# openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
- Use the openstack-config utility to set the value of the qpid_hostname configuration key to the host name of the Qpid server.
# openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT qpid_hostname IP
Replace IP with the IP address or host name of the message broker.
Authentication Settings
If you have configured Qpid to authenticate incoming connections then you must provide the details of a valid Qpid user in the block storage configuration.
- Use the openstack-config utility to set the value of the qpid_username configuration key to the username of the Qpid user that the block storage services must use when communicating with the message broker.
# openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT qpid_username USERNAME
Replace USERNAME with the required Qpid user name.
- Use the openstack-config utility to set the value of the qpid_password configuration key to the password of the Qpid user that the block storage services must use when communicating with the message broker.
# openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT qpid_password PASSWORD
Replace PASSWORD with the password of the Qpid user.
Encryption Settings
If you configured Qpid to use SSL then you must inform the block storage services of this choice. Use the openstack-config utility to set the value of the qpid_protocol configuration key to ssl.
# openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT qpid_protocol ssl
The value of the qpid_port configuration key must be set to 5671 as Qpid listens on this different port when SSL is in use.
# openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT qpid_port 5671
Important
To communicate with a Qpid message broker that uses SSL the node must also have:
- The nss package installed.
- The certificate of the relevant certificate authority installed in the system NSS database (/etc/pki/nssdb/).
The certtool command is able to import certificates into the NSS database. See the certtool manual page for more information (man certtool).
The database connection string (the sql_connection configuration key) is defined in the /etc/cinder/cinder.conf file. It must be updated to point to a valid database server before starting the service. Run the following command as the root user on each system hosting block storage services:
# openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace:
- USER with the database user name the block storage services are to use, usually cinder.
- PASS with the password of the chosen database user.
- IP with the IP address or host name of the database server.
- DB with the name of the database that has been created for use by the block storage services, usually cinder.
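Once the identity, message broker, and database settings above are in place, the relevant parts of /etc/cinder/cinder.conf look roughly as follows. All addresses and passwords shown are placeholders, and the qpid authentication and SSL keys only appear if your broker requires them:

```ini
[DEFAULT]
auth_strategy = keystone
rpc_backend = cinder.openstack.common.rpc.impl_qpid
qpid_hostname = 192.0.2.20
sql_connection = mysql://cinder:PASSWORD@192.0.2.30/cinder

[keystone_authtoken]
auth_host = 192.0.2.10
admin_tenant_name = services
admin_user = cinder
admin_password = PASSWORD
```

Reviewing the file at this point, before opening the firewall and initializing the database, catches values written to the wrong section.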
Incoming connections to the block storage service use ports 3260 and 8776. Configure the firewall as the root user.
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on ports 3260 and 8776 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
-A INPUT -p tcp -m multiport --dports 3260,8776 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect.
# service iptables restart
The iptables firewall is now configured to allow incoming connections to the block storage service on ports 3260 and 8776.
Important
The following steps must only be performed once, but they can be performed from any of the block storage systems.
- Use the su command to switch to the cinder user.
# su cinder -s /bin/sh
- Run the cinder-manage db sync command to initialize and populate the database identified in /etc/cinder/cinder.conf.
$ cinder-manage db sync
The volume service (openstack-cinder-volume) requires access to suitable block storage. The service includes volume drivers for a number of block storage providers. Supported drivers for LVM, NFS, and Red Hat Storage are included.
Configure LVM backing storage as the root user:
- Use the pvcreate command to create a physical volume.
# pvcreate DEVICE
Physical volume "DEVICE" successfully created
Replace DEVICE with the path to a valid, unused, device. For example:
# pvcreate /dev/sdX
- Use the vgcreate command to create a volume group.
# vgcreate cinder-volumes DEVICE
Volume group "cinder-volumes" successfully created
Replace DEVICE with the path to the device used when creating the physical volume. Optionally replace cinder-volumes with an alternative name for the new volume group.
- Set the volume_group configuration key to the name of the newly created volume group.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT volume_group cinder-volumes
The name provided must match the name of the volume group created in the previous step.
- Ensure that the correct volume driver for accessing LVM storage is in use by setting the volume_driver configuration key to cinder.volume.drivers.lvm.LVMISCSIDriver.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
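After the two openstack-config commands above, the DEFAULT section of /etc/cinder/cinder.conf should contain entries along these lines. The sketch writes the expected result to a throwaway file purely for illustration, so it can be inspected without touching the live configuration:

```shell
# Write the expected DEFAULT-section entries to a temporary file and
# count the volume_* keys; the real file is /etc/cinder/cinder.conf.
cat > /tmp/cinder-default-sketch.ini <<'EOF'
[DEFAULT]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
EOF
grep -c '^volume_' /tmp/cinder-default-sketch.ini
```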
All steps in this procedure must be performed while logged in as the root user.
Important
SELinux requires the virt_use_nfs Boolean enabled on any client machine accessing the instance volumes. This includes all compute nodes. Run the following command on each client machine as the root user to enable the Boolean and make it persistent across reboots:
# setsebool -P virt_use_nfs on
- Create a text file in the /etc/cinder/ directory containing a list of the NFS shares that the volume service is to use for backing storage.
nfs1.example.com:/export
nfs2.example.com:/export
Each line must contain an NFS share in the format HOST:/SHARE where HOST is replaced by the IP address or host name of the NFS server and SHARE is replaced with the particular NFS share to be used.
- Use the chown command to set the file to be owned by the root user and the cinder group.
# chown root:cinder FILE
Replace FILE with the path to the file containing the list of NFS shares.
- Use the chmod command to set the file permissions such that it can be read by members of the cinder group.
# chmod 0640 FILE
Replace FILE with the path to the file containing the list of NFS shares.
- Set the value of the nfs_shares_config configuration key to the path of the file containing the list of NFS shares.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT nfs_shares_config FILE
Replace FILE with the path to the file containing the list of NFS shares.
- The nfs_sparsed_volumes configuration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value is true, which ensures volumes are initially created as sparse files. Setting the nfs_sparsed_volumes configuration key to false will result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT nfs_sparsed_volumes true
- Optionally, provide any additional NFS mount options required in your environment in the nfs_mount_options configuration key. If your NFS shares do not require any additional mount options, or you are unsure, skip this step.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT nfs_mount_options OPTIONS
Replace OPTIONS with the mount options to be used when accessing NFS shares. See the manual page for NFS for more information on available mount options (man nfs).
- Ensure that the correct volume driver for accessing NFS storage is in use by setting the volume_driver configuration key to cinder.volume.drivers.nfs.NfsDriver.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
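The shares file can be sanity-checked for the HOST:/SHARE format before it is handed to the volume service. A minimal sketch, using a temporary file and hypothetical host names:

```shell
# Write a sample shares file (hypothetical hosts) and count the lines
# that match the expected HOST:/SHARE pattern.
cat > /tmp/nfs_shares_sample.txt <<'EOF'
nfs1.example.com:/export
nfs2.example.com:/export
EOF
grep -Ec '^[^:]+:/.+' /tmp/nfs_shares_sample.txt
```

A mismatch between the line count printed here and the number of lines in the file points at a malformed entry.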
All steps in this procedure must be performed while logged in as the root user.
Important
SELinux requires that the virt_use_fusefs Boolean is enabled on any client machine accessing the instance volumes. This includes all compute nodes. Run the following command on each client machine as the root user to enable the Boolean and make it persistent across reboots:
# setsebool -P virt_use_fusefs on
- Create a text file in the /etc/cinder/ directory containing a list of the Red Hat Storage shares that the volume service is to use for backing storage.
HOST:/VOLUME
Each line must contain a Red Hat Storage share in the format HOST:/VOLUME where HOST is replaced by the IP address or host name of the Red Hat Storage server and VOLUME is replaced with the name of a particular volume that exists on that host. If required, additional mount options must also be added in the same way that they would be provided to the mount command line tool:
HOST:/VOLUME -o OPTIONS
Replace OPTIONS with a comma separated list of mount options.
- Use the chown command to set the file to be owned by the root user and the cinder group.
# chown root:cinder FILE
Replace FILE with the path to the file containing the list of Red Hat Storage shares.
- Use the chmod command to set the file permissions such that it can be read by members of the cinder group.
# chmod 0640 FILE
Replace FILE with the path to the file containing the list of Red Hat Storage shares.
- Set the value of the glusterfs_shares_config configuration key to the path of the file containing the list of Red Hat Storage shares.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT glusterfs_shares_config FILE
Replace FILE with the path to the file containing the list of Red Hat Storage shares.
- The glusterfs_sparsed_volumes configuration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value is true, which ensures volumes are initially created as sparse files. Setting the glusterfs_sparsed_volumes configuration key to false will result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT glusterfs_sparsed_volumes true
- Ensure that the correct volume driver for accessing Red Hat Storage is in use by setting the volume_driver configuration key to cinder.volume.drivers.glusterfs.GlusterfsDriver.
# openstack-config --set /etc/cinder/cinder.conf \
   DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
All steps in this procedure must be performed while logged in as the root user. Each system that needs to be able to access multiple storage drivers or volumes must be configured in this way.
- Open the /etc/cinder/cinder.conf configuration file in a text editor.
- Add a configuration block for each storage driver or volume. This configuration block must have a unique name (avoid spaces or special characters) and contain values for at least these configuration keys:
volume_group
- A volume group name. This is the name of the volume group that will be accessed by the driver.
volume_driver
- A volume driver. This is the name of the driver that will be used when accessing the volume group.
volume_backend_name
- A backend name. This is an administrator-defined name for the backend, which groups the drivers so that user requests for storage served from the given backend can be serviced by any driver in the group. It is not related to the name of the configuration group, which must be unique.
Any additional driver specific configuration must also be included in the configuration block.
[NAME]
volume_group=GROUP
volume_driver=DRIVER
volume_backend_name=BACKEND
Replace NAME with a unique name for the backend and replace GROUP with the unique name of the applicable volume group. Replace DRIVER with the driver to use when accessing this storage backend; valid values include:
cinder.volume.drivers.lvm.LVMISCSIDriver for LVM and iSCSI storage.
cinder.volume.drivers.nfs.NfsDriver for NFS storage.
cinder.volume.drivers.glusterfs.GlusterfsDriver for Red Hat Storage.
Finally, replace BACKEND with a name for the storage backend.
- Update the value of the enabled_backends configuration key in the DEFAULT configuration block. This configuration key must contain a comma separated list containing the names of the configuration blocks for each storage driver.
Example 12.1. Multiple Backend Configuration
In this example two logical volume groups, cinder-volumes-1 and cinder-volumes-2, are grouped into the storage backend named LVM. An additional volume, backed by a list of NFS shares, is grouped into a storage backend named NFS.
[DEFAULT]
...
enabled_backends=cinder-volumes-1-driver,cinder-volumes-2-driver,cinder-volumes-3-driver
...
[cinder-volumes-1-driver]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM
[cinder-volumes-2-driver]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM
[cinder-volumes-3-driver]
nfs_shares_config=/etc/cinder/shares.txt
volume_driver=cinder.volume.drivers.nfs.NfsDriver
volume_backend_name=NFS
Important
The default block storage scheduler driver in Red Hat Enterprise Linux OpenStack Platform 3 is the filter scheduler. If you have changed the value of the scheduler_driver configuration key on any of your block storage nodes then you must update the value to cinder.scheduler.filter_scheduler.FilterScheduler for the multiple storage backends feature to function correctly.
- Save the changes to the /etc/cinder/cinder.conf file.
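Each name listed in enabled_backends must match a configuration block heading. A quick way to cross-check a cinder.conf is to extract the list and compare it against the block names; the sketch below works on a cut-down copy of the example written to a temporary file:

```shell
# Write a cut-down copy of the example configuration and list the
# backend configuration blocks named by enabled_backends.
cat > /tmp/cinder-backends-sketch.conf <<'EOF'
[DEFAULT]
enabled_backends=cinder-volumes-1-driver,cinder-volumes-3-driver
[cinder-volumes-1-driver]
volume_backend_name=LVM
[cinder-volumes-3-driver]
volume_backend_name=NFS
EOF
awk -F= '/^enabled_backends=/{print $2}' /tmp/cinder-backends-sketch.conf | tr ',' '\n'
```

Each printed name should correspond to a [NAME] heading further down the file.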
If the volume service (openstack-cinder-volume) has already been started then you must restart it for the changes to take effect.
Note
To allow users to direct volumes to a specific storage backend, create a volume type and associate it with the relevant backend name. Run these commands as a user with access to the keystonerc_admin file:
# source ~/keystonerc_admin
# cinder type-create TYPE
# cinder type-key TYPE set volume_backend_name=BACKEND
Replace TYPE with the name that users must provide to select this specific storage backend and replace BACKEND with the relevant volume_backend_name as set in the /etc/cinder/cinder.conf configuration file.
The volume service uses the SCSI target daemon, tgtd, when mounting storage. To support this the tgtd service must be configured to read additional configuration files. All steps in this procedure must be performed while logged in as the root user.
- Open the /etc/tgt/targets.conf file.
- Add this line to the file:
include /etc/cinder/volumes/*
- Save the changes to the file.
When the tgtd service is started it will be configured to support the volume service.
Block storage functionality is provided by three services:
- The API service (openstack-cinder-api).
- The scheduler service (openstack-cinder-scheduler).
- The volume service (openstack-cinder-volume).
Starting the API Service
Log in to each server that you intend to run the API on as the root user and start the API service.
- Use the service command to start the API service (openstack-cinder-api).
# service openstack-cinder-api start
- Use the chkconfig command to enable the API service permanently (openstack-cinder-api).
# chkconfig openstack-cinder-api on
Starting the Scheduler Service
Log in to each server that you intend to run the scheduler on as the root user and start the scheduler service.
- Use the service command to start the scheduler (openstack-cinder-scheduler).
# service openstack-cinder-scheduler start
- Use the chkconfig command to enable the scheduler permanently (openstack-cinder-scheduler).
# chkconfig openstack-cinder-scheduler on
Starting the Volume Service
Log in to each server that block storage has been attached to as the root user and start the volume service.
- Use the service command to start the volume service (openstack-cinder-volume).
# service openstack-cinder-volume start
- Use the service command to start the SCSI target daemon (tgtd).
# service tgtd start
- Use the chkconfig command to enable the volume service permanently (openstack-cinder-volume).
# chkconfig openstack-cinder-volume on
- Use the chkconfig command to enable the SCSI target daemon permanently (tgtd).
# chkconfig tgtd on
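The start/enable pairs above all follow one pattern, so on a node hosting every service they can be scripted. A dry-run sketch that only prints the commands it would run, rather than executing them:

```shell
# Dry-run sketch: print the start/enable commands for each block storage
# service named in this section instead of executing them.
SERVICES="openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume tgtd"
for svc in ${SERVICES}; do
  echo "service ${svc} start && chkconfig ${svc} on"
done
```

Dropping the echo would run the commands for real on a Red Hat Enterprise Linux 6 system; only start the services that belong on the node in question.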
Testing Locally
The steps outlined in this section of the procedure must be performed while logged in to the server hosting the block storage API service as the root user or a user with access to a keystonerc_admin file containing the credentials of the OpenStack administrator. Transfer the keystonerc_admin file to the system before proceeding.
- Run the source command on the keystonerc_admin file to populate the environment variables used for identifying and authenticating the user.
# source ~/keystonerc_admin
- Run the cinder list command and verify that no errors are returned.
# cinder list
- Run the cinder create command to create a volume.
# cinder create SIZE
Replace SIZE with the size of the volume to create in gigabytes (GB).
- Run the cinder delete command to remove the volume.
# cinder delete ID
Replace ID with the identifier returned when the volume was created.
Testing Remotely
The steps outlined in this section of the procedure must be performed while logged in to a system other than the server hosting the block storage API service. Transfer the keystonerc_admin file to the system before proceeding.
- Install the python-cinderclient package using the yum command. You will need to authenticate as the root user for this step.
# yum install -y python-cinderclient
- Run the source command on the keystonerc_admin file to populate the environment variables used for identifying and authenticating the user.
$ source ~/keystonerc_admin
- Run the cinder list command and verify that no errors are returned.
$ cinder list
- Run the cinder create command to create a volume.
$ cinder create SIZE
Replace SIZE with the size of the volume to create in gigabytes (GB).
- Run the cinder delete command to remove the volume.
$ cinder delete ID
Replace ID with the identifier returned when the volume was created.
- 13.1. OpenStack Networking Installation Overview
- 13.2. Networking Prerequisite Configuration
- 13.3. Common Networking Configuration
- 13.4. Configuring the Networking Service
- 13.5. Configuring the DHCP Agent
- 13.6. Configuring a Provider Network
- 13.7. Configuring the Plug-in Agent
- 13.8. Configuring the L3 Agent
- 13.9. Validating the OpenStack Networking Installation
- Network
- An isolated L2 segment, analogous to VLAN in the physical networking world.
- Subnet
- A block of IPv4 or IPv6 addresses and associated configuration state.
- Port
- A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
- Provider Networks
- Provider networks allow the creation of virtual networks that map directly to networks in the physical data center. This allows the administrator to give tenants direct access to a public network such as the Internet, or to integrate with existing VLANs in the physical networking environment that have a defined meaning or purpose. When the provider extension is enabled, OpenStack networking users with administrative privileges are able to see additional provider attributes on all virtual networks. In addition, such users have the ability to specify provider attributes when creating new provider networks. Both the Open vSwitch and Linux Bridge plug-ins support the provider networks extension.
- Layer 3 (L3) Routing and Network Address Translation (NAT)
- The L3 routing API extension provides abstract L3 routers that API users are able to dynamically provision and configure. These routers are able to connect to one or more Layer 2 (L2) OpenStack networking controlled networks. Additionally, the routers are able to provide a gateway that connects one or more private L2 networks to a common public or external network such as the Internet. The L3 router provides basic NAT capabilities on gateway ports that connect the router to external networks. The router supports floating IP addresses, which give a static mapping between a public IP address on the external network and the private IP address on one of the L2 networks attached to the router. This allows the selective exposure of compute instances to systems on an external public network. Floating IP addresses are also able to be reallocated to different OpenStack networking ports as necessary.
- Security Groups
- Security groups and security group rules allow the specification of the specific type and direction of network traffic that is allowed to pass through a given network port. This provides an additional layer of security over and above any firewall rules that exist within a compute instance. The security group is a container object which can contain one or more security rules. A single security group can be shared by multiple compute instances. When a port is created using OpenStack networking it is associated with a security group. If a specific security group was not specified then the port is associated with the default security group. By default this group will drop all inbound traffic and allow all outbound traffic. Additional security rules can be added to the default security group to modify its behaviour, or new security groups can be created as necessary. The Open vSwitch, Linux Bridge, Nicira NVP, NEC, and Ryu networking plug-ins currently support security groups.
Note
Unlike Compute security groups, OpenStack networking security groups are applied on a per port basis rather than on a per instance basis.
- Open vSwitch (openstack-neutron-openvswitch)
- Linux Bridge (openstack-neutron-linuxbridge)
- Cisco (openstack-neutron-cisco)
- NEC OpenFlow (openstack-neutron-nec)
- Nicira (openstack-neutron-nicira)
- Ryu (openstack-neutron-ryu)
- L3 Agent
- The L3 agent is part of the openstack-neutron package. It acts as an abstract L3 router that can connect to and provide gateway services for multiple L2 networks. The nodes on which the L3 agent is to be hosted must not have a manually configured IP address on a network interface that is connected to an external network. Instead there must be a range of IP addresses from the external network that are available for use by OpenStack Networking. These IP addresses will be assigned to the routers that provide the link between the internal and external networks. The range selected must be large enough to provide a unique IP address for each router in the deployment as well as each desired floating IP.
- DHCP Agent
- The OpenStack Networking DHCP agent is capable of allocating IP addresses to virtual machines running on the network. If the agent is enabled and running when a subnet is created then by default that subnet has DHCP enabled.
- Plug-in Agent
- Many of the OpenStack Networking plug-ins, including Open vSwitch and Linux Bridge, utilize their own agent. The plug-in specific agent runs on each node that manages data packets. This includes all compute nodes as well as nodes running the dedicated agents neutron-dhcp-agent and neutron-l3-agent.
- Service Node
- The service node exposes the networking API to clients and handles incoming requests before forwarding them to a message queue to be actioned by the other nodes. The service node hosts both the networking service itself and the active networking plug-in. In environments that use controller nodes to host the client-facing APIs and schedulers for all services, the controller node would also fulfil the role of service node as it is applied in this chapter.
- Network Node
- The network node handles the majority of the networking workload. It hosts the DHCP agent, the Layer 3 (L3) agent, the Layer 2 (L2) Agent, and the metadata proxy. In addition to plug-ins that require an agent, it runs an instance of the plug-in agent (as do all other systems that handle data packets in an environment where such plug-ins are in use). Both the Open vSwitch and Linux Bridge plug-ins include an agent.
- Compute Node
- The compute node hosts the compute instances themselves. To connect compute instances to the networking services, compute nodes must also run the L2 agent. Like all other systems that handle data packets, each compute node must also run an instance of the plug-in agent.
Warning
Environments deployed with Compute networking, whether using the packstack utility or manually, can be reconfigured to use OpenStack Networking. This is however currently not recommended for environments where Compute instances have already been created and configured to use Compute networking. If you wish to proceed with such a conversion, you must ensure that you stop the openstack-nova-network service on each Compute node using the service command before proceeding.
# service openstack-nova-network stop
You must also disable the openstack-nova-network service permanently on each node using the chkconfig command.
# chkconfig openstack-nova-network off
Important
Do not run nova-consoleauth on more than one node. Running more than one instance of nova-consoleauth causes a conflict between nodes with regard to token requests, which may cause errors.
See Also:
All steps in this procedure must be performed on the database server while logged in as the root user.
- Connect to the database service using the mysql command.
# mysql -u root -p
- Create the database. If you intend to use the:
- Open vSwitch plug-in, the recommended database name is ovs_neutron.
- Linux Bridge plug-in, the recommended database name is neutron_linux_bridge.
This example uses the database name 'ovs_neutron'.
mysql> CREATE DATABASE ovs_neutron;
- Create a neutron database user and grant it access to the ovs_neutron database.
mysql> GRANT ALL ON ovs_neutron.* TO 'neutron'@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT ALL ON ovs_neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately.
mysql> FLUSH PRIVILEGES;
- Exit the mysql client command.
mysql> quit
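The interactive statements above can also be collected into a single script. The sketch below only assembles and prints the SQL; PASSWORD is the placeholder from the text, not a real credential. Piping this text into `mysql -u root -p` would perform the same steps non-interactively:

```shell
# Collect the SQL statements into one variable; PASSWORD is a placeholder.
SQL=$(cat <<'EOF'
CREATE DATABASE ovs_neutron;
GRANT ALL ON ovs_neutron.* TO 'neutron'@'%' IDENTIFIED BY 'PASSWORD';
GRANT ALL ON ovs_neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'PASSWORD';
FLUSH PRIVILEGES;
EOF
)
echo "${SQL}"
```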
To configure the networking service to authenticate with the identity service you must:
- Create the neutron user, who has the admin role in the services tenant.
- Create the neutron service entry and assign it an endpoint.
- Authenticate as the administrator of the identity service by running the source command on the keystonerc_admin file containing the required credentials:
# source ~/keystonerc_admin
- Create a user named neutron for the OpenStack networking service to use:
# keystone user-create --name neutron --pass PASSWORD
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | 1df18bcd14404fa9ad954f9d5eb163bc |
| name     | neutron                          |
| tenantId |                                  |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the OpenStack networking service when authenticating with the identity service. Take note of the created user's returned ID as it will be used in subsequent steps.
- Get the ID of the admin role:
# keystone role-get admin
If no admin role exists, create one:
$ keystone role-create --name admin
- Get the ID of the services tenant:
$ keystone tenant-list | grep services
If no services tenant exists, create one:
$ keystone tenant-create --name services --description "Services Tenant"
This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant.
- Use the keystone user-role-add command to link the neutron user, admin role, and services tenant together:
# keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID
Replace the user, role, and tenant IDs with those obtained in the previous steps.
- Create the neutron service entry:
# keystone service-create --name neutron \
   --type network \
   --description "OpenStack Networking Service"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking Service     |
| id          | 134e815915f442f89c39d2769e278f9b |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
Take note of the created service's returned ID as it will be used in the next step.
- Create the network endpoint entry:
# keystone endpoint-create --service-id SERVICEID \
   --publicurl "http://IP:9696" \
   --adminurl "http://IP:9696" \
   --internalurl "http://IP:9696"
Replace SERVICEID with the ID returned by the keystone service-create command. Replace IP with the IP address or host name of the system that will be acting as the network node.
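All three endpoint URLs in this deployment point at the same host and port, so the command can be assembled from one variable. A dry-run sketch that prints the command rather than running it; the address 192.0.2.5 and the service ID are hypothetical values for illustration only:

```shell
# Hypothetical values: 192.0.2.5 stands in for the network node, and the
# service ID matches the sample service-create output above.
SERVICEID=134e815915f442f89c39d2769e278f9b
IP=192.0.2.5
ENDPOINT_URL="http://${IP}:9696"
echo "keystone endpoint-create --service-id ${SERVICEID}" \
     "--publicurl ${ENDPOINT_URL} --adminurl ${ENDPOINT_URL}" \
     "--internalurl ${ENDPOINT_URL}"
```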
OpenStack Networking requires a kernel with network namespaces support. This support is not provided by the standard Red Hat Enterprise Linux 6 kernel, such as 2.6.32-343.el6.x86_64. You must be logged in as the root user to complete the procedure.
- Use the uname command to identify the kernel that is currently in use on the system.
# uname --kernel-release
- If the output includes the text openstack then the system already has a network namespaces enabled kernel.
2.6.32-358.6.2.openstack.el6.x86_64
No further action is required to install a network namespaces enabled kernel on this system.
- If the output does not include the text openstack then the system does not currently have a network namespaces enabled kernel and further action must be taken.
2.6.32-358.el6.x86_64
Further action is required to install a network namespaces enabled kernel on this system. Follow the remaining steps outlined in this procedure to perform this task.
Note
The release field may contain a higher value than 358. As new kernel updates are released this value is increased.
- Install the updated kernel with network namespaces support using the yum command.
# yum install "kernel-2.6.*.openstack.el6.x86_64"
The use of the wildcard character (*) ensures that the latest kernel release available will be installed.
- Reboot the system to ensure that the new kernel is running before proceeding with OpenStack networking installation.
# reboot
- Run the uname command again once the system has rebooted to confirm that the newly installed kernel is running.
# uname --kernel-release
2.6.32-358.6.2.openstack.el6.x86_64
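The decision in the steps above can be sketched as a small shell function: a release string containing the text openstack indicates a network namespaces enabled kernel. The example release strings are the ones quoted in the procedure:

```shell
# Classify a kernel release string: the "openstack" marker in the release
# field indicates network namespaces support, per the procedure above.
check_kernel() {
  case "$1" in
    *openstack*) echo "namespaces enabled" ;;
    *)           echo "kernel update required" ;;
  esac
}
check_kernel "2.6.32-358.6.2.openstack.el6.x86_64"
check_kernel "2.6.32-358.el6.x86_64"
```

On a live system the function would be called with `$(uname --kernel-release)` instead of a fixed string.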
OpenStack networking does not work on systems that have the Network Manager (NetworkManager) service enabled. The Network Manager service is currently enabled by default on Red Hat Enterprise Linux installations where one of these package groups was selected during installation:
- Desktop
- Software Development Workstation
- Basic Server
- Database Server
- Web Server
- Identity Management Server
- Virtualization Host
- Minimal Install
All steps in this procedure must be performed while logged in as the root user on each system in the environment that will handle network traffic. This includes the system that will host the OpenStack Networking service, all network nodes, and all compute nodes.
Follow this procedure to ensure that the NetworkManager service is disabled and replaced by the standard network service for all interfaces that will be used by OpenStack Networking.
- Verify Network Manager is currently enabled using the chkconfig command.
# chkconfig --list NetworkManager
The output displayed by the chkconfig command indicates whether or not the Network Manager service is enabled.
- The system displays an error if the Network Manager service is not currently installed:
error reading information on service NetworkManager: No such file or directory
If this error is displayed then no further action is required to disable the Network Manager service.
- The system displays a list of numerical run levels along with a value of on or off indicating whether the Network Manager service is enabled when the system is operating in the given run level.
NetworkManager 0:off 1:off 2:off 3:off 4:off 5:off 6:off
If the value displayed for all run levels is off then the Network Manager service is disabled and no further action is required. If the value displayed for any of the run levels is on then the Network Manager service is enabled and further action is required.
- Ensure that the Network Manager service is stopped using the service command.
# service NetworkManager stop
- Ensure that the Network Manager service is disabled using the chkconfig command.
# chkconfig NetworkManager off
- Open each interface configuration file on the system in a text editor. Interface configuration files are found in the /etc/sysconfig/network-scripts/ directory and have names of the form ifcfg-X where X is replaced by the name of the interface. Valid interface names include eth0, p1p5, and em1.
In each file ensure that the NM_CONTROLLED configuration key is set to no and the ONBOOT configuration key is set to yes.
NM_CONTROLLED=no
ONBOOT=yes
This action ensures that the standard network service will take control of the interfaces and automatically activate them on boot.
- Ensure that the network service is started using the service command.
# service network start
- Ensure that the network service is enabled using the chkconfig command.
# chkconfig network on
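The edit to each ifcfg- file can also be made non-interactively with sed. A sketch performed on a throwaway copy, rather than a live interface file under /etc/sysconfig/network-scripts/:

```shell
# Work on a throwaway copy of an interface configuration file; the real
# files live in /etc/sysconfig/network-scripts/.
cat > /tmp/ifcfg-eth0.sample <<'EOF'
DEVICE=eth0
NM_CONTROLLED=yes
ONBOOT=no
EOF
sed -i -e 's/^NM_CONTROLLED=.*/NM_CONTROLLED=no/' \
       -e 's/^ONBOOT=.*/ONBOOT=yes/' /tmp/ifcfg-eth0.sample
cat /tmp/ifcfg-eth0.sample
```

The same two sed expressions can be applied in a loop over the real ifcfg- files once the effect has been verified on a copy.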
- openstack-neutron
- Provides the networking service and associated configuration files.
- openstack-neutron-PLUGIN
- Provides a networking plug-in. Replace PLUGIN with one of the recommended plug-ins (openvswitch or linuxbridge).
- openstack-utils
- Provides supporting utilities to assist with a number of tasks including the editing of configuration files.
- openstack-selinux
- Provides OpenStack specific SELinux policy modules.
The packages are installed using the yum command while logged in as the root user:
# yum install -y openstack-neutron \
   openstack-neutron-PLUGIN \
   openstack-utils \
   openstack-selinux
Replace PLUGIN with openvswitch or linuxbridge (determines which plug-in is installed).
The networking service receives connections on port 9696. The firewall on the service node must be configured to allow network traffic on this port. All steps in this procedure must be performed while logged in as the root user.
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on port 9696 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
-A INPUT -p tcp -m multiport --dports 9696 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect.
# service iptables restart
The iptables firewall is now configured to allow incoming connections to the networking service on port 9696.
Prerequisites:
The networking service is configured using settings in the /etc/neutron/neutron.conf file. All steps in this procedure must be performed while logged in as the root user.
Setting the Identity Values
The Networking service must be explicitly configured to use the Identity service for authentication.
- Set the authentication strategy (auth_strategy) configuration key to keystone using the openstack-config command.
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT auth_strategy keystone
- Set the authentication host (auth_host) configuration key to the IP address or host name of the Identity server.
# openstack-config --set /etc/neutron/neutron.conf \
   keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the Identity server.
- Set the administration tenant name (admin_tenant_name) configuration key to the name of the tenant that was created for the use of the Networking service. Examples in this guide use services.
# openstack-config --set /etc/neutron/neutron.conf \
   keystone_authtoken admin_tenant_name services
- Set the administration user name (admin_user) configuration key to the name of the user that was created for the use of the networking services. Examples in this guide use neutron.
# openstack-config --set /etc/neutron/neutron.conf \
   keystone_authtoken admin_user neutron
- Set the administration password (admin_password) configuration key to the password that is associated with the user specified in the previous step.
# openstack-config --set /etc/neutron/neutron.conf \
   keystone_authtoken admin_password PASSWORD
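The keystone_authtoken keys set in these steps all follow one invocation pattern, so they can be applied from a short loop. A dry-run sketch that prints each command instead of executing it, using the same IP and PASSWORD placeholders as the text:

```shell
# Dry-run sketch: print the openstack-config invocations for the
# keystone_authtoken keys; IP and PASSWORD are placeholders.
set -- "auth_host IP" "admin_tenant_name services" \
       "admin_user neutron" "admin_password PASSWORD"
for pair in "$@"; do
  echo "openstack-config --set /etc/neutron/neutron.conf keystone_authtoken ${pair}"
done
```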
The authentication keys used by the Networking service have been set and will be used when the services are started.
Setting the Message Broker
The Networking service must be explicitly configured with the type, location, and authentication details of the message broker.
- Use the openstack-config utility to set the value of the rpc_backend configuration key to Qpid.
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
- Use the openstack-config utility to set the value of the qpid_hostname configuration key to the host name of the Qpid server.
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT qpid_hostname IP
Replace IP with the IP address or host name of the message broker.
- If you have configured Qpid to authenticate incoming connections, you must provide the details of a valid Qpid user in the networking configuration.
- Use the openstack-config utility to set the value of the qpid_username configuration key to the username of the Qpid user that the Networking service must use when communicating with the message broker.
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT qpid_username USERNAME
Replace USERNAME with the required Qpid user name.
- Use the openstack-config utility to set the value of the qpid_password configuration key to the password of the Qpid user that the Networking service must use when communicating with the message broker.
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT qpid_password PASSWORD
Replace PASSWORD with the password of the Qpid user.
- If you configured Qpid to use SSL, you must inform the Networking service of this choice. Use
openstack-config
utility to set the value of theqpid_protocol
configuration key tossl
.#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT qpid_protocol ssl
The value of theqpid_port
configuration key must be set to5671
as Qpid listens on this different port when SSL is in use.#
openstack-config --set /etc/neutron/neutron.conf \
DEFAULT qpid_port 5671
Important
To communicate with a Qpid message broker that uses SSL, the node must also have:
- The nss package installed.
- The certificate of the relevant certificate authority installed in the system NSS database (/etc/pki/nssdb/).
The certutil command is able to import certificates into the NSS database. See the certutil manual page for more information (man certutil).
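The broker settings above can also be generated in one place. As a hedged illustration (not part of the documented procedure), this small helper emits the required openstack-config invocations for a given broker host and protocol; the broker.example.com host name is a placeholder:

```shell
#!/bin/sh
CONF=/etc/neutron/neutron.conf

# Emit (rather than run) the openstack-config commands for a Qpid broker.
# $1 = broker host name, $2 = protocol ("ssl" or "tcp")
qpid_commands() {
  printf 'openstack-config --set %s DEFAULT qpid_hostname %s\n' "$CONF" "$1"
  printf 'openstack-config --set %s DEFAULT qpid_protocol %s\n' "$CONF" "$2"
  # Qpid listens on 5671 when SSL is in use and 5672 otherwise.
  if [ "$2" = ssl ]; then port=5671; else port=5672; fi
  printf 'openstack-config --set %s DEFAULT qpid_port %s\n' "$CONF" "$port"
}

qpid_commands broker.example.com ssl
```

Piping the output through a shell (or removing the printf wrappers) applies the settings; generating them first makes the intended changes reviewable.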
The OpenStack Networking service has been configured to use the message broker and any authentication schemes that it presents.
Setting the Plug-in
Additional configuration settings must be applied to enable the desired plug-in.
Open vSwitch
- Create a symbolic link between the /etc/neutron/plugin.ini path referred to by the Networking service and the plug-in specific configuration file.
# ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
   /etc/neutron/plugin.ini
- Update the value of the tenant_network_type configuration key in the /etc/neutron/plugin.ini file to refer to the type of network that must be used for tenant networks. Supported values are flat, vlan, and local. The default is local, but this is not recommended for real deployments.
# openstack-config --set /etc/neutron/plugin.ini \
   OVS tenant_network_type TYPE
Replace TYPE with the chosen tenant network type.
- If flat or vlan networking was chosen, the value of the network_vlan_ranges configuration key must also be set. This configuration key maps physical networks to VLAN ranges. Mappings are of the form NAME:START:END, where NAME is replaced by the name of the physical network, START is replaced by the VLAN identifier that starts the range, and END is replaced by the VLAN identifier that ends the range.
# openstack-config --set /etc/neutron/plugin.ini \
   OVS network_vlan_ranges NAME:START:END
Multiple ranges can be specified using a comma separated list, for example: physnet1:1000:2999,physnet2:3000:3999
- Update the value of the core_plugin configuration key in the /etc/neutron/neutron.conf file to refer to the Open vSwitch plug-in.
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT core_plugin \
   neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
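The NAME:START:END syntax is easy to mistype. This hypothetical pure-shell check, not part of the guide's procedure, validates a network_vlan_ranges value before it is written (VLAN identifiers must fall within 1-4094):

```shell
#!/bin/sh
# Print "ok" if every comma separated NAME:START:END entry is well formed,
# "invalid" otherwise. Hypothetical helper for illustration only.
check_vlan_ranges() {
  bad=$(printf '%s\n' "$1" | tr ',' '\n' | while IFS=: read -r name start end; do
    # Reject empty fields and non-numeric VLAN identifiers.
    case ":$start:$end:" in *::*|*[!0-9:]*) echo bad; continue ;; esac
    if [ -z "$name" ] || [ "$start" -lt 1 ] || [ "$end" -gt 4094 ] || [ "$start" -gt "$end" ]; then
      echo bad
    fi
  done)
  if [ -z "$bad" ]; then echo ok; else echo invalid; fi
}

check_vlan_ranges physnet1:1000:2999,physnet2:3000:3999
```

Running the check before invoking openstack-config avoids writing a value the plug-in will reject at agent start-up.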
Linux Bridge
- Create a symbolic link between the /etc/neutron/plugin.ini path referred to by the Networking service and the plug-in specific configuration file.
# ln -s /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini \
   /etc/neutron/plugin.ini
- Update the value of the tenant_network_type configuration key in the /etc/neutron/plugin.ini file to refer to the type of network that must be used for tenant networks. Supported values are flat, vlan, and local. The default is local, but this is not recommended for real deployments.
# openstack-config --set /etc/neutron/plugin.ini \
   VLAN tenant_network_type TYPE
Replace TYPE with the chosen tenant network type.
- If flat or vlan networking was chosen, the value of the network_vlan_ranges configuration key must also be set. This configuration key maps physical networks to VLAN ranges. Mappings are of the form NAME:START:END, where NAME is replaced by the name of the physical network, START is replaced by the VLAN identifier that starts the range, and END is replaced by the VLAN identifier that ends the range.
# openstack-config --set /etc/neutron/plugin.ini \
   LINUX_BRIDGE network_vlan_ranges NAME:START:END
Multiple ranges can be specified using a comma separated list, for example: physnet1:1000:2999,physnet2:3000:3999
- Update the value of the core_plugin configuration key in the /etc/neutron/neutron.conf file to refer to the Linux Bridge plug-in.
# openstack-config --set /etc/neutron/neutron.conf \
   DEFAULT core_plugin \
   neutron.plugins.linuxbridge.lb_neutron_plugin.LinuxBridgePluginV2
Setting the Database Connection String
The database connection string used by the Networking service is defined in the /etc/neutron/plugin.ini file. It must be updated to point to a valid database server before starting the service.
- Use the openstack-config command to set the value of the sql_connection configuration key.
# openstack-config --set /etc/neutron/plugin.ini \
   DATABASE sql_connection mysql://USER:PASS@IP/DB
Replace:
USER with the database user name the Networking service is to use, usually neutron.
PASS with the password of the chosen database user.
IP with the IP address or host name of the database server.
DB with the name of the database that has been created for use by the Networking service (ovs_neutron was used as the example in the previous Creating the OpenStack Networking Database section).
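For illustration, the connection string assembled from the four replacement values can be sketched as a small shell function; the values shown are examples only, not ones the guide mandates:

```shell
#!/bin/sh
# Build a mysql:// connection string from its parts.
# $1 = user, $2 = password, $3 = host, $4 = database
db_url() {
  printf 'mysql://%s:%s@%s/%s\n' "$1" "$2" "$3" "$4"
}

db_url neutron secret 192.0.2.10 ovs_neutron
```

The printed value is what openstack-config writes into the DATABASE section as sql_connection.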
Start the Networking Service
- Start the OpenStack Networking service using the service command.
# service neutron-server start
- Enable the Networking service permanently using the chkconfig command.
# chkconfig neutron-server on
Important
To require that the gateway address of each subnet is validated as falling within the subnet, set the force_gateway_on_subnet configuration key to True in the /etc/neutron/neutron.conf file.
Prerequisites:
You must be logged in as the root user on the system hosting the DHCP agent.
Configuring Authentication
The DHCP agent must be explicitly configured to use the identity service for authentication.
- Set the authentication strategy (auth_strategy) configuration key to keystone using the openstack-config command.
# openstack-config --set /etc/neutron/dhcp_agent.ini \
   DEFAULT auth_strategy keystone
- Set the authentication host (auth_host) configuration key to the IP address or host name of the identity server.
# openstack-config --set /etc/neutron/dhcp_agent.ini \
   keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the identity server.
- Set the administration tenant name (admin_tenant_name) configuration key to the name of the tenant that was created for the use of the networking services. Examples in this guide use services.
# openstack-config --set /etc/neutron/dhcp_agent.ini \
   keystone_authtoken admin_tenant_name services
- Set the administration user name (admin_user) configuration key to the name of the user that was created for the use of the networking services. Examples in this guide use neutron.
# openstack-config --set /etc/neutron/dhcp_agent.ini \
   keystone_authtoken admin_user neutron
- Set the administration password (admin_password) configuration key to the password that is associated with the user specified in the previous step.
# openstack-config --set /etc/neutron/dhcp_agent.ini \
   keystone_authtoken admin_password PASSWORD
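The five settings above can also be applied in a short loop. This is a convenience rewrite for illustration, not the documented procedure; IP and PASSWORD remain placeholders you must substitute:

```shell
#!/bin/sh
# Apply the DHCP agent authentication settings in one pass.
# IP and PASSWORD are placeholders; supply real values before running.
CONF=/etc/neutron/dhcp_agent.ini

openstack-config --set "$CONF" DEFAULT auth_strategy keystone
for kv in auth_host=IP admin_tenant_name=services \
          admin_user=neutron admin_password=PASSWORD; do
  # ${kv%%=*} is the key, ${kv#*=} the value.
  openstack-config --set "$CONF" keystone_authtoken "${kv%%=*}" "${kv#*=}"
done
```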
Configuring the Interface Driver
Set the value of the interface_driver configuration key in the /etc/neutron/dhcp_agent.ini file based on the networking plug-in being used. Execute only the configuration step that applies to the plug-in used in your environment.
Open vSwitch Interface Driver
# openstack-config --set /etc/neutron/dhcp_agent.ini \
   DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
Linux Bridge Interface Driver
# openstack-config --set /etc/neutron/dhcp_agent.ini \
   DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
Starting the DHCP Agent
- Use the service command to start the neutron-dhcp-agent service.
# service neutron-dhcp-agent start
- Use the chkconfig command to ensure that the neutron-dhcp-agent service will be started automatically in the future.
# chkconfig neutron-dhcp-agent on
Prerequisites:
There are two methods for connecting to external networks. The first, which uses an external bridge (br-ex) directly, is only supported when the Open vSwitch plug-in is in use. The second method, which is supported by both the Open vSwitch plug-in and the Linux Bridge plug-in, is to use an external provider network.
You must have a keystonerc_admin file containing the authentication details of the OpenStack administrative user.
- Use the source command to load the credentials of the administrative user.
$ source ~/keystonerc_admin
- Use the net-create action of the neutron command line client to create a new provider network.
$ neutron net-create EXTERNAL_NAME \
   --router:external True \
   --provider:network_type TYPE \
   --provider:physical_network PHYSICAL_NAME \
   --provider:segmentation_id VLAN_TAG
Replace these strings with the appropriate values for your environment:
- Replace EXTERNAL_NAME with a name for the new external network provider.
- Replace PHYSICAL_NAME with a name for the physical network. This is not applicable if you intend to use a local network type.
- Replace TYPE with the type of provider network you wish to use. Supported values are flat (for flat networks), vlan (for VLAN networks), and local (for local networks).
- Replace VLAN_TAG with the VLAN tag that will be used to identify network traffic. The VLAN tag specified must have been defined by the network administrator. If the network_type was set to a value other than vlan then this parameter is not required.
Take note of the unique external network identifier returned, as it will be required in subsequent steps.
- Use the subnet-create action of the command line client to create a new subnet for the new external provider network.
$ neutron subnet-create --gateway GATEWAY \
   --allocation-pool start=IP_RANGE_START,end=IP_RANGE_END \
   --disable-dhcp EXTERNAL_NAME EXTERNAL_CIDR
Replace these strings with the appropriate values for your environment:
- Replace GATEWAY with the IP address or hostname of the system that is to act as the gateway for the new subnet.
- Replace IP_RANGE_START with the IP address that denotes the start of the range of IP addresses within the new subnet that floating IP addresses will be allocated from.
- Replace IP_RANGE_END with the IP address that denotes the end of the range of IP addresses within the new subnet that floating IP addresses will be allocated from.
- Replace EXTERNAL_NAME with the name of the external network the subnet is to be associated with. This must match the name that was provided to the net-create action in the previous step.
- Replace EXTERNAL_CIDR with the Classless Inter-Domain Routing (CIDR) representation of the block of IP addresses the subnet represents, for example 192.168.100.0/24.
Take note of the unique subnet identifier returned, as it will be required in subsequent steps.
Important
The IP address used to replace the string GATEWAY must be within the block of IP addresses specified in place of the EXTERNAL_CIDR string, but outside of the block of IP addresses specified by the range started by IP_RANGE_START and ended by IP_RANGE_END.
The block of IP addresses specified by the range started by IP_RANGE_START and ended by IP_RANGE_END must also fall within the block of IP addresses specified by EXTERNAL_CIDR.
router-create
action of theneutron
command line client to create a new router.$
neutron router-create
NAME
ReplaceNAME
with the name to give the new router. Take note of the unique router identifier returned, this will be required in subsequent steps. - Use the
router-gateway-set
action of theneutron
command line client to link the newly created router to the external provider network.$
neutron router-gateway-set
ROUTER
NETWORK
ReplaceROUTER
with the unique identifier of the router, replaceNETWORK
with the unique identifier of the external provider network. - Use the
router-interface-add
action of theneutron
command line client to link the newly created router to the subnet.$
neutron router-interface-add
ROUTER
SUBNET
ReplaceROUTER
with the unique identifier of the router, replaceSUBNET
with the unique identifier of the subnet.
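The addressing constraints in the Important note above can be checked mechanically before running subnet-create. A pure-shell sketch using example addresses (192.168.100.0/24, gateway 192.168.100.1, pool .10 to .100); the helper names are hypothetical:

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to an integer.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Succeed if the IP ($2) falls inside the CIDR block ($1).
in_cidr() {
  net=${1%/*}; bits=${1#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$2") & mask )) -eq $(( $(ip2int "$net") & mask )) ]
}

gw=$(ip2int 192.168.100.1)
start=$(ip2int 192.168.100.10)
end=$(ip2int 192.168.100.100)
# The gateway must be inside EXTERNAL_CIDR but outside the allocation pool.
if in_cidr 192.168.100.0/24 192.168.100.1 && { [ "$gw" -lt "$start" ] || [ "$gw" -gt "$end" ]; }; then
  echo "gateway placement valid"
fi
```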
Prerequisites:
- Confirm that the openvswitch package is installed. This is normally installed as a dependency of the neutron-plugin-openvswitch package.
# rpm -qa | grep openvswitch
openvswitch-1.10.0-1.el6.x86_64
openstack-neutron-openvswitch-2013.1-3.el6.noarch
.el6.noarc - Start the
openvswitch
service.#
service openvswitch start
- Enable the
openvswitch
service permanently.#
chkconfig openvswitch on
- Each host running the Open vSwitch agent also requires an Open vSwitch bridge named
br-int
. This bridge is used for private network traffic. Use theovs-vsctl
command to create this bridge before starting the agent.#
ovs-vsctl add-br br-int
Warning
Thebr-int
bridge is required for the agent to function correctly. Once created do not remove or otherwise modify thebr-int
bridge. - Ensure that the
br-int
device persists on reboot by creating a/etc/sysconfig/network-scripts/ifcfg-br-int
file with these contents:DEVICE=br-int DEVICETYPE=ovs TYPE=OVSBridge ONBOOT=yes BOOTPROTO=none
- Set the value of the bridge_mappings configuration key. This configuration key must contain a list of physical networks and the network bridges associated with them. The format for each entry in the comma separated list is:
PHYSNET:BRIDGE
Where PHYSNET is replaced with the name of a physical network, and BRIDGE is replaced by the name of the network bridge. The physical network must have been defined in the network_vlan_ranges configuration variable on the OpenStack Networking server.
# openstack-config --set /etc/neutron/plugin.ini \
   OVS bridge_mappings MAPPINGS
Replace MAPPINGS with the physical network to bridge mappings.
- Use the service command to start the neutron-openvswitch-agent service.
# service neutron-openvswitch-agent start
- Use the chkconfig command to ensure that the neutron-openvswitch-agent service is started automatically in the future.
# chkconfig neutron-openvswitch-agent on
- Use the chkconfig command to ensure that the neutron-ovs-cleanup service is started automatically on boot. When started at boot time this service ensures that the OpenStack Networking agents maintain full control over the creation and management of tap devices.
# chkconfig neutron-ovs-cleanup on
Prerequisites:
- Set the value of the physical_interface_mappings configuration key. This configuration key must contain a list of physical networks and the network interfaces associated with them. The format for each entry in the comma separated list is:
PHYSNET:INTERFACE
Where PHYSNET is replaced with the name of a physical network, and INTERFACE is replaced by the name of the network interface, for example eth1. The physical networks must have been defined in the network_vlan_ranges configuration variable on the OpenStack Networking server.
# openstack-config --set /etc/neutron/plugin.ini \
   LINUX_BRIDGE physical_interface_mappings MAPPINGS
Replace MAPPINGS with the physical network to interface mappings.
- Use the service command to start the neutron-linuxbridge-agent service.
# service neutron-linuxbridge-agent start
- Use the chkconfig command to ensure that the neutron-linuxbridge-agent service is started automatically in the future.
# chkconfig neutron-linuxbridge-agent on
Prerequisites:
You must be logged in as the root user.
Configuring Authentication
- Set the authentication strategy (auth_strategy) configuration key to keystone using the openstack-config command.
# openstack-config --set /etc/neutron/metadata_agent.ini \
   DEFAULT auth_strategy keystone
- Set the authentication host (auth_host) configuration key to the IP address or host name of the identity server.
# openstack-config --set /etc/neutron/metadata_agent.ini \
   keystone_authtoken auth_host IP
Replace IP with the IP address or host name of the identity server.
- Set the administration tenant name (admin_tenant_name) configuration key to the name of the tenant that was created for the use of the networking services. Examples in this guide use services.
# openstack-config --set /etc/neutron/metadata_agent.ini \
   keystone_authtoken admin_tenant_name services
- Set the administration user name (admin_user) configuration key to the name of the user that was created for the use of the networking services. Examples in this guide use neutron.
# openstack-config --set /etc/neutron/metadata_agent.ini \
   keystone_authtoken admin_user neutron
- Set the administration password (admin_password) configuration key to the password that is associated with the user specified in the previous step.
# openstack-config --set /etc/neutron/metadata_agent.ini \
   keystone_authtoken admin_password PASSWORD
Configuring the Interface Driver
Set the value of the interface_driver configuration key in the /etc/neutron/l3_agent.ini file based on the networking plug-in being used. Execute only the configuration step that applies to the plug-in used in your environment.
Open vSwitch Interface Driver
# openstack-config --set /etc/neutron/l3_agent.ini \
   DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
Linux Bridge Interface Driver
# openstack-config --set /etc/neutron/l3_agent.ini \
   DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
Configuring External Network Access
The L3 agent connects to external networks using either an external bridge or an external provider network. When using the Open vSwitch plug-in either approach is supported. When using the Linux Bridge plug-in only the use of an external provider network is supported. Choose the approach that is most appropriate for the environment.
Using an External Bridge
To use an external bridge you must create and configure it. Finally, the OpenStack Networking configuration must be updated to use it. This must be done on each system hosting an instance of the L3 agent.
- Use the ovs-vsctl command to create the external bridge named br-ex.
# ovs-vsctl add-br br-ex
- Ensure that the br-ex device persists on reboot by creating a /etc/sysconfig/network-scripts/ifcfg-br-ex file with these contents:
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none
- Ensure that the value of the external_network_bridge configuration key in the /etc/neutron/l3_agent.ini file is br-ex. This ensures that the L3 agent will use the external bridge.
# openstack-config --set /etc/neutron/l3_agent.ini \
   DEFAULT external_network_bridge br-ex
Using a Provider Network
To connect the L3 agent to external networks using a provider network, you must first have created the provider network. You must also have created a subnet and router to associate with it. The unique identifier of the router will be required to complete these steps.
- Ensure that the value of the external_network_bridge configuration key in the /etc/neutron/l3_agent.ini file is blank. This ensures that the L3 agent does not attempt to use an external bridge.
# openstack-config --set /etc/neutron/l3_agent.ini \
   DEFAULT external_network_bridge ""
- Set the value of the router_id configuration key in the /etc/neutron/l3_agent.ini file to the identifier of the external router that must be used by the L3 agent when accessing the external provider network.
# openstack-config --set /etc/neutron/l3_agent.ini \
   DEFAULT router_id ROUTER
Replace ROUTER with the unique identifier of the router that has been defined for use when accessing the external provider network.
Starting the L3 Agent
- Use the service command to start the neutron-l3-agent service.
# service neutron-l3-agent start
- Use the chkconfig command to ensure that the neutron-l3-agent service will be started automatically in the future.
# chkconfig neutron-l3-agent on
Starting the Metadata Agent
The OpenStack Networking metadata agent allows virtual machine instances to communicate with the compute metadata service. It runs on the same hosts as the Layer 3 (L3) agent.
- Use the service command to start the neutron-metadata-agent service.
# service neutron-metadata-agent start
- Use the chkconfig command to ensure that the neutron-metadata-agent service will be started automatically in the future.
# chkconfig neutron-metadata-agent on
All Nodes
- Verify that the customized Red Hat Enterprise Linux kernel intended for use with Red Hat Enterprise Linux OpenStack Platform is running:
$ uname --kernel-release
2.6.32-358.6.2.openstack.el6.x86_64
If the kernel release value returned does not contain the string openstack then update the kernel and reboot the system.
- Ensure that the installed IP utilities support network namespaces:
$ ip netns
If an error indicating that the argument is not recognized or supported is returned then update the system using yum.
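The kernel check above can be scripted for many nodes. This hypothetical helper simply tests the release string for the openstack marker; in practice you would pass it the output of uname --kernel-release:

```shell
#!/bin/sh
# Print "yes" if the given kernel release string contains "openstack".
kernel_ok() {
  case $1 in
    *openstack*) echo yes ;;
    *)           echo no ;;
  esac
}

kernel_ok "2.6.32-358.6.2.openstack.el6.x86_64"
```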
Service Nodes
- Ensure that the neutron-server service is running:
$ service neutron-server status
neutron-server (pid 3011) is running...
Network Nodes
- Ensure that the DHCP agent is running:
$ service neutron-dhcp-agent status
neutron-dhcp-agent (pid 3012) is running...
- Ensure that the L3 agent is running:
$ service neutron-l3-agent status
neutron-l3-agent (pid 3013) is running...
- Ensure that the plug-in agent, if applicable, is running:
$ service neutron-PLUGIN-agent status
neutron-PLUGIN-agent (pid 3014) is running...
Replace PLUGIN with the appropriate plug-in for the environment. Valid values include openvswitch and linuxbridge.
- Ensure that the metadata agent is running:
$ service neutron-metadata-agent status
neutron-metadata-agent (pid 3015) is running...
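On a network node the four status checks can be looped. A convenience sketch, assuming the Open vSwitch plug-in agent; substitute neutron-linuxbridge-agent as appropriate:

```shell
#!/bin/sh
# Check every OpenStack Networking agent expected on a network node.
for svc in neutron-dhcp-agent neutron-l3-agent \
           neutron-openvswitch-agent neutron-metadata-agent; do
  service "$svc" status
done
```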
These steps must be performed while logged in as the root user.
- Use the grep command to check for the presence of the svm or vmx CPU extensions by inspecting the /proc/cpuinfo file generated by the kernel:
# grep -E 'svm|vmx' /proc/cpuinfo
If any output is shown after running this command then the CPU is hardware virtualization capable and the functionality is enabled in the system BIOS.
- Use the lsmod command to list the loaded kernel modules and verify that the kvm modules are loaded:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd then the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for the OpenStack Compute service.
openstack-nova-xvpvncproxy
service).
You must be logged in as the root user.
- Install the VNC proxy utilities and the console authentication service:
- Install the openstack-nova-novncproxy package using the yum command:
# yum install -y openstack-nova-novncproxy
- Install the openstack-nova-console package using the yum command:
# yum install -y openstack-nova-console
The openstack-nova-novncproxy service listens on TCP port 6080 and the openstack-nova-xvpvncproxy service listens on TCP port 6081.
You must be logged in as the root user.
- Edit the /etc/sysconfig/iptables file and add the following on a new line underneath the -A INPUT -i lo -j ACCEPT line and before any -A INPUT -j REJECT rules:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 6080 -j ACCEPT
- Save the file and exit the editor.
- Similarly, when using the openstack-nova-xvpvncproxy service, enable traffic on TCP port 6081 with the following on a new line in the same location:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 6081 -j ACCEPT
Restart the iptables service as the root user to apply the changes:
# service iptables restart
# iptables-save
The /etc/nova/nova.conf file holds the following VNC options:
- vnc_enabled - Default is true.
- vncserver_listen - The IP address to which VNC services will bind.
- vncserver_proxyclient_address - The IP address of the compute host used by proxies to connect to instances.
- novncproxy_base_url - The browser address where clients connect to instances.
- novncproxy_port - The port listening for browser VNC connections. Default is 6080.
- xvpvncproxy_port - The port to bind for traditional VNC clients. Default is 6081.
As the root user, use the service command to start the console authentication service:
# service openstack-nova-consoleauth start
Use the chkconfig command to permanently enable the service:
# chkconfig openstack-nova-consoleauth on
As the root user, use the service command on the nova node to start the browser-based service:
# service openstack-nova-novncproxy start
Use the chkconfig command to permanently enable the service:
# chkconfig openstack-nova-novncproxy on
/etc/nova/nova.conf
file to access instance consoles.
You must be logged in as the root user.
- Connect to the database service using the mysql command.
# mysql -u root -p
- Create the nova database.
mysql> CREATE DATABASE nova;
- Create a nova database user and grant it access to the nova database.
mysql> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password that will be used to authenticate with the database server as this user.
- Flush the database privileges to ensure that they take effect immediately.
mysql> FLUSH PRIVILEGES;
- Exit the mysql client command.
mysql> quit
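The interactive session above can equally be fed to the mysql client non-interactively via a here-document; PASSWORD remains a placeholder to substitute:

```shell
#!/bin/sh
# Create the nova database and user in one non-interactive invocation.
# Substitute PASSWORD before running; -p prompts for the root password.
mysql -u root -p <<'EOF'
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'PASSWORD';
GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'PASSWORD';
FLUSH PRIVILEGES;
EOF
```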
- Create the compute user, who has the admin role in the services tenant.
- Create the compute service entry and assign it an endpoint.
- Authenticate as the administrator of the Identity service by running the source command on the keystonerc_admin file containing the required credentials:
# source ~/keystonerc_admin
- Create a user named compute for the OpenStack Compute service to use:
# keystone user-create --name compute --pass PASSWORD
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |                                  |
| enabled  |               True               |
|    id    | 96cd855e5bfe471ce4066794bbafb615 |
|   name   |             compute              |
| tenantId |                                  |
+----------+----------------------------------+
Replace PASSWORD with a secure password that will be used by the Compute service when authenticating against the Identity service. Take note of the created user's returned ID as it will be used in subsequent steps.
- Get the ID of the admin role:
# keystone role-get admin
If no admin role exists, create one:
$ keystone role-create --name admin
- Get the ID of the services tenant:
$ keystone tenant-list | grep services
If no services tenant exists, create one:
$ keystone tenant-create --name services --description "Services Tenant"
This guide uses one tenant for all service users. For more information, refer to Creating the Services Tenant.
- Use the keystone user-role-add command to link the compute user, admin role, and services tenant together:
# keystone user-role-add --user-id USERID --role-id ROLEID --tenant-id TENANTID
Replace the user, role, and tenant IDs with those obtained in the previous steps.
- Create the compute service entry:
# keystone service-create --name compute \
   --type compute \
   --description "OpenStack Compute Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    OpenStack Compute Service     |
|      id     | 8dea97f5ee254b309c1792d2bd821e59 |
|     name    |             compute              |
|     type    |             compute              |
+-------------+----------------------------------+
Take note of the created service's returned ID as it will be used in the next step.
- Create the compute endpoint entry:
# keystone endpoint-create --service-id SERVICEID \
   --publicurl "http://IP:8774/v2/\$(tenant_id)s" \
   --adminurl "http://IP:8774/v2/\$(tenant_id)s" \
   --internalurl "http://IP:8774/v2/\$(tenant_id)s"
Replace:
SERVICEID with the ID returned by the keystone service-create command.
IP with the IP address or host name of the system that will be acting as the compute node.
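Copying identifiers between commands by hand is error prone. As a hedged sketch (the awk pattern assumes the standard keystone table output, and IP remains a placeholder), the service ID can be captured and reused directly:

```shell
#!/bin/sh
# Capture the service ID from the keystone output table, then create the
# endpoint without manual copying. IP is a placeholder to substitute.
SERVICE_ID=$(keystone service-create --name compute --type compute \
  --description "OpenStack Compute Service" | awk '/ id /{print $4}')

keystone endpoint-create --service-id "$SERVICE_ID" \
  --publicurl "http://IP:8774/v2/\$(tenant_id)s" \
  --adminurl "http://IP:8774/v2/\$(tenant_id)s" \
  --internalurl "http://IP:8774/v2/\$(tenant_id)s"
```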
- openstack-nova-api - Provides the OpenStack Compute API service. At least one node in the environment must host an instance of the API service. This must be the node pointed to by the Identity service endpoint definition for the Compute service.
- openstack-nova-compute - Provides the OpenStack Compute service.
- openstack-nova-conductor - Provides the Compute conductor service. The conductor handles database requests made by Compute nodes, ensuring that individual Compute nodes do not require direct database access. At least one node in each environment must act as a Compute conductor.
- openstack-nova-scheduler - Provides the Compute scheduler service. The scheduler handles scheduling of requests made to the API across the available Compute resources. At least one node in each environment must act as a Compute scheduler.
- python-cinderclient - Provides client utilities for accessing storage managed by the OpenStack Block Storage service. This package is not required if you do not intend to attach block storage volumes to your instances, or if you intend to manage such volumes using a service other than the OpenStack Block Storage service.
Install the packages while logged in as the root user:
# yum install -y openstack-nova-api openstack-nova-compute \
   openstack-nova-conductor openstack-nova-scheduler \
   python-cinderclient
Note
You must be logged in as the root user.
- Set the authentication strategy (auth_strategy) configuration key to keystone using the openstack-config command.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT auth_strategy keystone
- Set the authentication host (auth_host) configuration key to the IP address or host name of the identity server.
# openstack-config --set /etc/nova/api-paste.ini \
   filter:authtoken auth_host IP
Replace IP with the IP address or host name of the identity server.
- Set the administration tenant name (admin_tenant_name) configuration key to the name of the tenant that was created for the use of the Compute service. In this guide, examples use services.
# openstack-config --set /etc/nova/api-paste.ini \
   filter:authtoken admin_tenant_name services
- Set the administration user name (admin_user) configuration key to the name of the user that was created for the use of the Compute service. In this guide, examples use nova.
# openstack-config --set /etc/nova/api-paste.ini \
   filter:authtoken admin_user nova
- Set the administration password (admin_password) configuration key to the password that is associated with the user specified in the previous step.
# openstack-config --set /etc/nova/api-paste.ini \
   filter:authtoken admin_password PASSWORD
The database connection string used by the Compute service is defined in the /etc/nova/nova.conf file. It must be updated to point to a valid database server before starting the service.
The Compute service's database access is handled by the conductor service (openstack-nova-conductor). Compute nodes communicate with the conductor using the messaging infrastructure, and the conductor in turn orchestrates communication with the database. As a result, individual compute nodes do not require direct access to the database. This procedure only needs to be followed on nodes that will host the conductor service. There must be at least one instance of the conductor service in any compute environment.
You must be logged in as the root user on the server hosting the Compute service.
- Use the openstack-config command to set the value of the sql_connection configuration key.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT sql_connection mysql://USER:PASS@IP/DB
Replace:
USER with the database user name the Compute service is to use, usually nova.
PASS with the password of the chosen database user.
IP with the IP address or host name of the database server.
DB with the name of the database that has been created for use by the Compute service, usually nova.
Apply these settings as the root user:
General Settings
Use the openstack-config utility to set the value of the rpc_backend configuration key to Qpid.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid
Connection Settings
Use the openstack-config utility to set the value of the qpid_hostname configuration key to the host name of the Qpid server.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT qpid_hostname IP
Replace IP with the IP address or host name of the message broker.
Authentication Settings
If you have configured Qpid to authenticate incoming connections, you must provide the details of a valid Qpid user in the Compute configuration:
- Use the openstack-config utility to set the value of the qpid_username configuration key to the user name of the Qpid user that the Compute services must use when communicating with the message broker.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT qpid_username USERNAME
Replace USERNAME with the required Qpid user name.
- Use the openstack-config utility to set the value of the qpid_password configuration key to the password of the Qpid user that the Compute services must use when communicating with the message broker.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT qpid_password PASSWORD
Replace PASSWORD with the password of the Qpid user.
Encryption Settings
If you configured Qpid to use SSL, you must inform the Compute services of this choice. Use the openstack-config utility to set the value of the qpid_protocol configuration key to ssl.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT qpid_protocol ssl
The value of the qpid_port configuration key must be set to 5671, as Qpid listens on this different port when SSL is in use.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT qpid_port 5671
Important
To communicate with a Qpid message broker that uses SSL, the node must also have:
- The nss package installed.
- The certificate of the relevant certificate authority installed in the system NSS database (/etc/pki/nssdb/).
The certutil command is able to import certificates into the NSS database. See the certutil manual page for more information (man certutil).
Important
- Default CPU overcommit ratio - 16
- Default memory overcommit ratio - 1.5
- The default CPU overcommit ratio of 16 means that up to 16 virtual cores can be assigned to a node for each physical core.
- The default memory overcommit ratio of 1.5 means that instances can be assigned to a physical node if the total instance memory usage is less than 1.5 times the amount of physical memory available.
Use the cpu_allocation_ratio and ram_allocation_ratio directives in /etc/nova/nova.conf to change these default settings.
Additional resource reservation settings are available in /etc/nova/nova.conf:
- reserved_host_memory_mb - Defaults to 512MB.
- reserved_host_disk_mb - Defaults to 0MB.
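The arithmetic behind these ratios can be sketched in shell. The node size below (8 physical cores, 32768 MB RAM) is hypothetical, and subtracting reserved_host_memory_mb before applying the ratio is one plausible reading of the capacity calculation, not a statement of the scheduler's exact formula:

```shell
# Hypothetical node specifications (illustrative only)
PHYS_CORES=8
PHYS_RAM_MB=32768
RESERVED_HOST_MEMORY_MB=512   # default from this section

# Defaults from this section
CPU_ALLOCATION_RATIO=16
# ram_allocation_ratio is 1.5; expressed in tenths to keep integer arithmetic

MAX_VCPUS=$(( PHYS_CORES * CPU_ALLOCATION_RATIO ))
MAX_RAM_MB=$(( (PHYS_RAM_MB - RESERVED_HOST_MEMORY_MB) * 15 / 10 ))

echo "schedulable vCPUs:  $MAX_VCPUS"
echo "schedulable RAM MB: $MAX_RAM_MB"
```

For this hypothetical node, 8 physical cores yield up to 128 assignable virtual cores, and 32256 MB of non-reserved memory yields 48384 MB of schedulable instance memory.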
When OpenStack Networking is in use, the nova-network service must not run. Instead, all network-related decisions are delegated to the OpenStack Networking service. Using nova-manage and nova to manage networks or IP addressing, including both fixed and floating IPs, is not supported with OpenStack Networking.
Important
Disable nova-network, and reboot any physical nodes that were running nova-network before using them to run OpenStack Networking. Inadvertently running the nova-network process while using the OpenStack Networking service can cause problems, as can stale iptables rules pushed down by a previously running nova-network.
Perform these steps while logged in as the root user.
- Modify the network_api_class configuration key to indicate that the OpenStack Networking service is in use.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT network_api_class nova.network.neutronv2.api.API
- Set the value of the neutron_url configuration key to point to the endpoint of the networking API.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT neutron_url http://IP:9696/
Replace IP with the IP address or host name of the server hosting the API of the OpenStack Networking service.
- Set the value of the neutron_admin_tenant_name configuration key to the name of the tenant used by the OpenStack Networking service. Examples in this guide use services.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT neutron_admin_tenant_name services
- Set the value of the neutron_admin_username configuration key to the name of the administrative user for the OpenStack Networking service. Examples in this guide use neutron.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT neutron_admin_username neutron
- Set the value of the neutron_admin_password configuration key to the password associated with the administrative user for the networking service.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT neutron_admin_password PASSWORD
- Set the value of the neutron_admin_auth_url configuration key to the URL associated with the Identity service endpoint.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT neutron_admin_auth_url http://IP:35357/v2.0
Replace IP with the IP address or host name of the Identity service endpoint.
- Set the value of the security_group_api configuration key to neutron.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT security_group_api neutron
This enables the use of OpenStack Networking security groups.
- Set the value of the firewall_driver configuration key to nova.virt.firewall.NoopFirewallDriver.
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
This must be done when OpenStack Networking security groups are in use.
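After the steps above, the [DEFAULT] section of /etc/nova/nova.conf contains entries along these lines. The sketch below writes the expected fragment to a temporary file and sanity-checks that each key appears exactly once; IP and PASSWORD are placeholders exactly as in the guide:

```shell
# Expected end state of the neutron-related [DEFAULT] keys, written to a
# temporary file for inspection (not the live /etc/nova/nova.conf).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://IP:9696/
neutron_admin_tenant_name=services
neutron_admin_username=neutron
neutron_admin_password=PASSWORD
neutron_admin_auth_url=http://IP:35357/v2.0
security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver
EOF

# Sanity check: every key the procedure sets is present exactly once.
for key in network_api_class neutron_url neutron_admin_tenant_name \
           neutron_admin_username neutron_admin_password \
           neutron_admin_auth_url security_group_api firewall_driver; do
    n=$(grep -c "^${key}=" "$CONF")
    [ "$n" -eq 1 ] || { echo "missing or duplicated: $key"; exit 1; }
done
echo "all 8 keys present"
```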
When nova-compute creates an instance, it must 'plug' each of the vNICs associated with the instance into an OpenStack Networking controlled virtual switch. It must also inform the virtual switch of the OpenStack Networking port identifier associated with each vNIC. This is done using the libvirt_vif_driver field in the /etc/nova/nova.conf configuration file. In Red Hat Enterprise Linux OpenStack Platform 3, a generic virtual interface driver, nova.virt.libvirt.vif.LibvirtGenericVIFDriver, is provided. This driver relies on OpenStack Networking being able to return the type of virtual interface binding required. These plug-ins support this operation:
- Linux Bridge
- Open vSwitch
- NEC
- BigSwitch
- CloudBase Hyper-V
- Brocade
Use the openstack-config command to set the value of the libvirt_vif_driver configuration key appropriately:
# openstack-config --set /etc/nova/nova.conf \
   DEFAULT libvirt_vif_driver \
   nova.virt.libvirt.vif.LibvirtGenericVIFDriver
Important
Some plug-in configurations require the Open vSwitch hybrid driver, nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver, instead of the generic driver.
Important
Add the following settings to the /etc/libvirt/qemu.conf file to ensure that the virtual machine launches properly:
user = "root"
group = "root"
cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun",
]
Connections to the virtual machine consoles are received on ports 5900 to 5999. Perform these steps while logged in as the root user. Repeat the process for each compute node.
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on ports in the range 5900 to 5999 by adding this line to the file:
-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
The new rule must appear before any INPUT rules that REJECT traffic.
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service to ensure that the change takes effect.
# service iptables restart
The iptables firewall is now configured to allow incoming connections to the Compute services.
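The ordering requirement (the ACCEPT rule must precede any REJECT rule) can be checked mechanically. This sketch operates on a hypothetical sample rules file, not the live firewall:

```shell
# Sample of what the relevant part of /etc/sysconfig/iptables should look like
RULES=$(mktemp)
cat > "$RULES" <<'EOF'
-A INPUT -p tcp -m multiport --dports 5900:5999 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
EOF

# Line numbers of the new ACCEPT rule and the first REJECT rule
accept_line=$(grep -n -- '--dports 5900:5999 -j ACCEPT' "$RULES" | cut -d: -f1)
reject_line=$(grep -n -- '-j REJECT' "$RULES" | head -n1 | cut -d: -f1)

if [ "$accept_line" -lt "$reject_line" ]; then
    echo "ok: ACCEPT rule appears before REJECT"
else
    echo "error: ACCEPT rule must precede REJECT rules"
fi
```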
Important
- Use the su command to switch to the nova user.
# su nova -s /bin/sh
- Run the nova-manage db sync command to initialize and populate the database identified in /etc/nova/nova.conf.
$ nova-manage db sync
Starting the Message Bus Service
Libvirt requires that the messagebus service be enabled and running.
- Use the service command to start the messagebus service.
# service messagebus start
- Use the chkconfig command to enable the messagebus service permanently.
# chkconfig messagebus on
Starting the Libvirtd Service
The Compute service requires that the libvirtd service be enabled and running.
- Use the service command to start the libvirtd service.
# service libvirtd start
- Use the chkconfig command to enable the libvirtd service permanently.
# chkconfig libvirtd on
Starting the API Service
Start the API service on each system that will be hosting an instance of it. Note that each API instance should either have its own endpoint defined in the Identity service database or be pointed to by a load balancer that is acting as the endpoint.
- Use the service command to start the openstack-nova-api service.
# service openstack-nova-api start
- Use the chkconfig command to enable the openstack-nova-api service permanently.
# chkconfig openstack-nova-api on
Starting the Scheduler
Start the scheduler on each system that will be hosting an instance of it.
- Use the service command to start the openstack-nova-scheduler service.
# service openstack-nova-scheduler start
- Use the chkconfig command to enable the openstack-nova-scheduler service permanently.
# chkconfig openstack-nova-scheduler on
Starting the Conductor
The conductor is intended to minimize or eliminate the need for Compute nodes to access the database directly. Compute nodes instead communicate with the conductor via a message broker, and the conductor handles database access on their behalf. Start the conductor on each system that is intended to host an instance of it. Note that it is recommended that this service not be run on every Compute node, as that would eliminate the security benefit of restricting direct database access from the Compute nodes.
- Use the service command to start the openstack-nova-conductor service.
# service openstack-nova-conductor start
- Use the chkconfig command to enable the openstack-nova-conductor service permanently.
# chkconfig openstack-nova-conductor on
Starting the Compute Service
Start the Compute service on every system that is intended to host virtual machine instances.
- Use the service command to start the openstack-nova-compute service.
# service openstack-nova-compute start
- Use the chkconfig command to enable the openstack-nova-compute service permanently.
# chkconfig openstack-nova-compute on
Starting Optional Services
Depending on your environment configuration, you may also need to start these services:
openstack-nova-cert - The X509 certificate service, required if you intend to use the EC2 API to the Compute service.
openstack-nova-network - The Nova networking service. You must not start this service if you have installed and configured, or intend to install and configure, OpenStack Networking.
openstack-nova-objectstore - The Nova object storage service. It is recommended that the OpenStack Object Storage service (Swift) be used for new deployments.
- The system hosting the Dashboard service must have:
- The following already installed: httpd, mod_wsgi, and mod_ssl (for security purposes).
- A connection to the Identity service, as well as to the other OpenStack API services (OpenStack Compute, Block Storage, Object Storage, Image, and Networking services).
- The installer must know the URL of the Identity service endpoint.
Note
To install mod_wsgi, httpd, and mod_ssl, execute the following command as the root user:
# yum install -y mod_wsgi httpd mod_ssl
Note
- openstack-dashboard
- Provides the OpenStack Dashboard service.
- memcached
- Memory-object caching system, which speeds up dynamic web applications by alleviating database load.
- python-memcached
- Python interface to the memcached daemon.
Perform these steps while logged in as the root user.
- Install the memcached object caching system:
# yum install -y memcached python-memcached
- Install the Dashboard package:
# yum install -y openstack-dashboard
The Dashboard service runs through httpd. To start the service, execute the following commands as the root user:
- To start the service, execute on the command line:
# service httpd start
- To ensure that the httpd service starts automatically in the future, execute:
# chkconfig httpd on
- You can confirm that httpd is running by executing:
# service --status-all | grep httpd
Configure the following in the /etc/openstack-dashboard/local_settings file (sample files can be found in the Configuration Reference Guide):
- Cache Backend - As the root user, update the CACHES settings with the memcached values:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'memcacheURL:port',
    }
}
Where:
memcacheURL is the host on which memcached was installed.
port is the value of the PORT parameter in the /etc/sysconfig/memcached file.
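The LOCATION value can be derived from the memcached configuration. This sketch uses a temporary sample of /etc/sysconfig/memcached with the RHEL default PORT of 11211, and assumes memcached runs on 127.0.0.1:

```shell
# Sample stand-in for /etc/sysconfig/memcached (RHEL defaults)
MEMCONF=$(mktemp)
cat > "$MEMCONF" <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
EOF

# Source the file to pick up PORT, then build the host:port string that goes
# into LOCATION (memcacheURL assumed to be 127.0.0.1 for this example).
. "$MEMCONF"
LOCATION="127.0.0.1:${PORT}"
echo "LOCATION = '$LOCATION'"
```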
- Dashboard Host - Specify the host URL for your OpenStack Identity service endpoint. For example:
OPENSTACK_HOST="127.0.0.1"
- Time Zone - To change the dashboard's time zone, update the following (the time zone can also be changed using the dashboard GUI):
TIME_ZONE="UTC"
- To ensure the configuration changes take effect, restart the Apache web server.
Note
The HORIZON_CONFIG dictionary contains all the settings for the Dashboard. Whether or not a service appears in the Dashboard depends on the Service Catalog configuration in the Identity service. For a full listing, refer to http://docs.openstack.org/developer/horizon/topics/settings.html (Horizon Settings and Configuration).
- Edit the /etc/openstack-dashboard/local_settings file, and uncomment the following parameters:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
The latter two settings instruct the browser to only send dashboard cookies over HTTPS connections, ensuring that sessions will not work over HTTP.
- Edit the /etc/httpd/conf/httpd.conf file, and add the following line:
NameVirtualHost *:443
- Edit the /etc/httpd/conf.d/openstack-dashboard.conf file, replacing the 'Before' section with the 'After' section:
Before:
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /static /usr/share/openstack-dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
  <IfModule mod_deflate.c>
    SetOutputFilter DEFLATE
    <IfModule mod_headers.c>
      # Make sure proxies don't deliver the wrong content
      Header append Vary User-Agent env=!dont-vary
    </IfModule>
  </IfModule>
  Order allow,deny
  Allow from all
</Directory>
After:
<VirtualHost *:80>
  ServerName openstack.example.com
  RedirectPermanent / https://openstack.example.com/
</VirtualHost>
<VirtualHost *:443>
  ServerName openstack.example.com
  SSLEngine On
  SSLCertificateFile /etc/httpd/SSL/openstack.example.com.crt
  SSLCACertificateFile /etc/httpd/SSL/openstack.example.com.crt
  SSLCertificateKeyFile /etc/httpd/SSL/openstack.example.com.key
  SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
  WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
  WSGIDaemonProcess horizon user=apache group=apache processes=3 threads=10
  RedirectPermanent /dashboard https://openstack.example.com
  Alias /static /usr/share/openstack-dashboard/static/
  <Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>
In the 'After' configuration, Apache listens on port 443 and redirects all unsecured requests to HTTPS. In the secured section, the private key, the public key, and the certificate are defined for use.
- As the root user, restart Apache and memcached:
# service httpd restart
# service memcached restart
If the HTTP version of the dashboard is now accessed via the browser, the user should be redirected to the HTTPS version of the page.
- Log in to the system on which your keystonerc_admin file resides and authenticate as the Identity administrator:
# source ~/keystonerc_admin
- Use the keystone role-create command to create the Member role:
# keystone role-create --name Member
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| id       | 8261ac4eabcc4da4b01610dbad6c038a |
| name     | Member                           |
+----------+----------------------------------+
Note
To use a role other than Member, change the value of the OPENSTACK_KEYSTONE_DEFAULT_ROLE configuration key, which is stored in the /etc/openstack-dashboard/local_settings file. The httpd service must be restarted for the change to take effect.
- Use the getenforce command to check the status of SELinux on the system:
# getenforce
- If the resulting value is 'Enforcing' or 'Permissive', use the setsebool command as the root user to allow httpd-Identity service connections:
# setsebool -P httpd_can_network_connect on
Note
#
sestatus
SELinux status: enabled
SELinuxfs mount: /selinux
Current mode: permissive
Mode from config file: enforcing
Policy version: 24
Policy from config file: targeted
For more information, refer to the Security-Enhanced Linux User Guide for Red Hat Enterprise Linux.
Note
Perform these steps as the root user:
- Edit the /etc/sysconfig/iptables configuration file:
- Allow incoming connections using only HTTPS by adding this firewall rule to the file:
-A INPUT -p tcp --dport 443 -j ACCEPT
- Allow incoming connections using both HTTP and HTTPS by adding this firewall rule to the file:
-A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
- Restart the iptables service for the changes to take effect:
# service iptables restart
Important
- No shared storage across processes or workers.
- No persistence after a process terminates.
To use the local-memory cache as the session backend, set the following in the /etc/openstack-dashboard/local_settings file:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'
    }
}
Execute the following commands as the root user to initialize the database and configure it for use:
- Start the MySQL command-line client by executing:
# mysql -u root -p
- Specify the MySQL root user's password when prompted.
- Create the dash database:
mysql> CREATE DATABASE dash;
- Create a MySQL user for the newly created dash database who has full control of the database:
mysql> GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'PASSWORD';
mysql> GRANT ALL ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'PASSWORD';
Replace PASSWORD with a secure password for the new database user to authenticate with.
- Enter quit at the mysql> prompt to exit the MySQL client.
prompt to exit the MySQL client. - In the
/etc/openstack-dashboard/local_settings
file, change the following options to refer to the new MySQL database:SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db' DATABASES = { 'default': { # Database configuration here 'ENGINE': 'django.db.backends.mysql', 'NAME': 'dash', 'USER': 'dash', 'PASSWORD': '
ReplacePASSWORD
', 'HOST': 'HOST
', 'default-character-set': 'utf8' } }PASSWORD
with the password of thedash
database user and replaceHOST
with the IP address or fully qualified domain name of the databse server. - Populate the new database by executing:
Note: You will be asked to create an admin account; this is not required.#
cd /usr/share/openstack-dashboard
#
python manage.py syncdb
As a result, the following should be displayed:Installing custom SQL ... Installing indexes ... DEBUG:django.db.backends:(0.008) CREATE INDEX `django_session_c25c2c28` ON `django_session` (`expire_date`);; args=() No fixtures found.
- Restart Apache to pick up the default site and symbolic link settings:
#
service httpd restart
- Restart the
openstack-nova-api
service to ensure the API server can connect to the Dashboard and to avoid an error displayed in the Dashboard.#
service openstack-nova-api restart
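The SQL from the steps above can also be generated as a reviewable script, for example to pipe into mysql -u root -p. The password is a placeholder, as in the guide:

```shell
# Placeholder password; substitute a secure value before use
DASH_PASSWORD='PASSWORD'

# Assemble the statements from the procedure above
SQL=$(cat <<EOF
CREATE DATABASE dash;
GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY '${DASH_PASSWORD}';
GRANT ALL ON dash.* TO 'dash'@'localhost' IDENTIFIED BY '${DASH_PASSWORD}';
EOF
)
echo "$SQL"
```

On a real system this could be executed with `echo "$SQL" | mysql -u root -p`.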
Alternatively, the cached_db session backend can be used, which utilizes both the database and caching infrastructure to perform write-through caching and efficient retrieval:
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
- Advantages:
- Does not require additional dependencies or infrastructure overhead.
- Scales indefinitely as long as the quantity of session data being stored fits into a normal cookie.
- Disadvantages:
- Places session data into storage on the user’s machine and transports it over the wire.
- Limits the quantity of session data which can be stored.
Note
- In the /etc/openstack-dashboard/local_settings file, set:
SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
- Add a randomly generated SECRET_KEY to the project by executing on the command line:
$ django-admin.py startproject
Note
The SECRET_KEY is a text string, which can be specified manually or generated automatically (as in this procedure). You just need to ensure that the key is unique (that is, it does not match any other password on the machine).
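If django-admin.py is not convenient, a unique SECRET_KEY can also be generated directly from /dev/urandom. This is an alternative approach, assuming a Linux-like system with od available:

```shell
# 32 random bytes, rendered as 64 hexadecimal characters:
# -v: no duplicate suppression, -An: no offsets, -N32: read 32 bytes, -tx1: hex bytes
SECRET_KEY=$(od -vAn -N32 -tx1 /dev/urandom | tr -d ' \n')
echo "SECRET_KEY = '$SECRET_KEY'"
```

The resulting string can be pasted into the SECRET_KEY setting in /etc/openstack-dashboard/local_settings.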
Access the dashboard using one of these URLs, replacing HOSTNAME with the host name or IP address of the server on which you installed the Dashboard service:
- HTTPS
https://HOSTNAME/dashboard/
- HTTP
http://HOSTNAME/dashboard/
Table of Contents
- Installed the dashboard (refer to Installing the Dashboard).
- Have an available image for use (refer to Obtaining a Test Disk Image).
- Click Images & Snapshots in the menu.
- Click the button.
- Configure the settings that define your instance on the Details tab.
- Enter a name for the image.
- Use the location of your image file in the Image Location field.
- Select the correct type from the drop-down menu in the Format field (for example,
QCOW2
). - Leave the Minimum Disk (GB) and Minimum RAM (MB) fields empty.
- Select the Public box.
- Click the button.
See Also:
Note
- When a keypair is created, a keypair file is automatically downloaded through the browser. You can optionally load this file into ssh, for command-line ssh connections, by executing:
# ssh-add ~/.ssh/NAME.pem
- To delete an existing keypair, click the button on the keypair's row on the Keypairs tab.
See Also:
- Installed the dashboard (refer to Installing the Dashboard).
- Installed OpenStack Networking Services (refer to Installing the OpenStack Networking Service).
- Log in to the dashboard.
- Click Networks in the menu.
- Click the button.
- By default, the dialog opens to the Network tab. You have the option of specifying a network name.
- To define the network's subnet, click on the Subnet and Subnet Detail tabs. Click into each field for field tips.
Note
You do not have to specify a subnet initially (although this will result in any attached instance having the status of 'error'). If you do not define a specific subnet, clear the Create Subnet check box.
- Click the button.
See Also:
- Uploaded an image to use as the basis for your instances (refer to Uploading a Disk Image).
- Created a network (refer to Creating a Network).
- Log in to the dashboard.
- Click Instances in the menu.
- Click the button.
- By default, the dialog opens to the Details tab:
- Select an Instance Source for your instance. Available values are:
- Image
- Snapshot
- Select an Image or Snapshot to use when launching your instance. The image selected defines the operating system and architecture of your instance.
- Enter an Instance Name to identify your instance.
- Select a Flavor for your instance. The flavor determines the compute resources available to your instance. After a flavor is selected, its resources are displayed in the Flavor Details pane for preview.
- Enter an Instance Count. This determines how many instances to launch using the selected options.
- Click the Access & Security tab and configure the security settings for your instance:
- Either select an existing keypair from the Keypair drop down box, or click the + button to upload a new keypair.
- Select the Security Groups that you wish to apply to your instances. By default, only the default security group will be available.
- Click the button.
Note
- On the Instances tab, click the name of your instance. The Instance Detail page is displayed.
- Click the tab on the resultant page.
See Also:
- Installed the dashboard (refer to Installing the Dashboard).
- Installed the Block Storage service (refer to Installing OpenStack Block Storage).
- Log in to the dashboard.
- Click Volumes in the menu.
- Click the button.
- To configure the volume:
- Enter a Volume Name to identify your new volume.
- Enter a Description to further describe your new volume.
- Enter the Size of your new volume in gigabytes (GB).
Important
In this guide, LVM storage is configured as the cinder-volumes volume group (refer to Configuring for LVM Storage Backend). There must be enough free disk space in the cinder-volumes volume group for your new volume to be allocated.
- Click the button to create the new volume.
- Launched an instance (refer to Launching an Instance).
- Created a volume (refer to Creating a Volume).
- Log in to the dashboard as a user.
- Click Volumes in the menu.
- Click the button on the row associated with the volume that you want to attach to an instance.
- Select the instance for the volume in the Attach to Instance field.
- Specify the device name in the Device Name field (for example, '/dev/vdc').
- Click the button.
- Log in to the dashboard.
- Click Instances in the menu.
- Click the Create Snapshot button on the row associated with the instance of which you want to take a snapshot. The Create Snapshot dialog is displayed.
- Enter a descriptive name for your snapshot in the Snapshot Name field.
- Click the button to create the snapshot. Your new snapshot will appear in the Image Snapshots table in the Images & Snapshots screen.
See Also:
See Also:
- Create an external network for the pool:
# neutron net-create networkName --router:external=True
Example 17.1. Defining an External Network
# neutron net-create ext-net --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 3a53e3be-bd0e-4c05-880d-2b11aa618aff |
| name                      | ext-net                              |
| provider:network_type     | local                                |
| provider:physical_network |                                      |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 6b406408dff14a2ebf6d2cd7418510b2     |
+---------------------------+--------------------------------------+
- Create the pool of floating IP addresses:
$ neutron subnet-create --allocation-pool start=IPStart,end=IPEnd \
   --gateway GatewayIP --disable-dhcp networkName CIDR
Example 17.2. Defining a Pool of Floating IP Addresses
$ neutron subnet-create --allocation-pool start=10.38.15.128,end=10.38.15.159 \
   --gateway 10.38.15.254 --disable-dhcp ext-net 10.38.15.0/24
Created a new subnet:
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "10.38.15.128", "end": "10.38.15.159"} |
| cidr             | 10.38.15.0/24                                    |
| dns_nameservers  |                                                  |
| enable_dhcp      | False                                            |
| gateway_ip       | 10.38.15.254                                     |
| host_routes      |                                                  |
| id               | 6a15f954-935c-490f-a1ab-c2a1c1b1529d             |
| ip_version       | 4                                                |
| name             |                                                  |
| network_id       | 4ad5e73b-c575-4e32-b581-f9207a22eb09             |
| tenant_id        | e5be83dc0a474eeb92ad2cda4a5b94d5                 |
+------------------+--------------------------------------------------+
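Before running subnet-create, it can be worth confirming that the allocation pool actually falls inside the subnet CIDR. This sketch redoes that check in shell arithmetic using the values from Example 17.2:

```shell
# Convert a dotted-quad IPv4 address to a single integer
ip2int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# Values from Example 17.2
CIDR_NET=10.38.15.0; CIDR_BITS=24
START=10.38.15.128; END=10.38.15.159

net=$(ip2int "$CIDR_NET")
lo=$(ip2int "$START")
hi=$(ip2int "$END")
size=$(( 1 << (32 - CIDR_BITS) ))   # number of addresses in the CIDR block

if [ "$lo" -ge "$net" ] && [ "$hi" -lt $(( net + size )) ] && [ "$lo" -le "$hi" ]; then
    echo "pool ${START}-${END} fits inside ${CIDR_NET}/${CIDR_BITS}"
else
    echo "pool is outside the CIDR"
fi
```

For this example, the /24 holds 256 addresses and the pool spans 32 of them, so the check passes.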
- Installed the dashboard (refer to Installing the Dashboard).
- Created an external network (refer to Defining a Floating IP-Address Pool).
- Created an internal network (refer to Creating a Network).
- Log in to the dashboard.
- Click Routers in the Manage Network menu.
- Click the button.
- Specify the router's name and click the button. The new router is now displayed in the router list.
- Click the new router's button.
- Specify the network to which the router will connect in the External Network field, and click the button.
- To connect a private network to the newly created router:
See Also:
- Created a pool of floating IP addresses (refer to Defining a Floating IP-Address Pool).
- Launched an instance (refer to Launching an Instance).
- Log in to the dashboard as a user that has the Member role.
- Click the button. The Allocate Floating IP window is displayed.
- Select a pool of addresses from the Pool list.
- Click the button. The allocated IP address will appear in the Floating IPs table.
- Locate the newly allocated IP address in the Floating IPs table. On the same row click the button to assign the IP address to a specific instance.
- The IP Address field is automatically set to the selected floating IP address. Select the instance (with which to associate the floating IP address) from the Port to be associated list.
- Click the button to associate the IP address with the selected instance.
Note
To disassociate a floating IP address from an instance when it is no longer required, use the button.
- Installed the dashboard (refer to Installing the Dashboard).
- Installed OpenStack Networking (refer to Installing the OpenStack Networking Service).
Note
- Log into the dashboard.
- Click Access & Security in the menu.
- In the Security Groups pane, click the button on the row for the default security group. The Edit Security Group Rules window is displayed.
- Click the button. The Add Rule window is displayed.
- Configure the rule:
- Select the protocol to which the rule must apply from the IP Protocol list.
- Define the port or ports to which the rule will apply using the Open field:
Port - Define a specific port in the Port field.
Port Range - Define the port range using the From Port and To Port fields.
- Define the IP address from which connections should be accepted on the defined port using the Source field:
CIDR - Enter a specific IP address in the CIDR field using Classless Inter-Domain Routing (CIDR) notation. A value of 0.0.0.0/0 allows connections from all IP addresses.
Security Group - Select an existing security group from the Source Group drop-down list. This allows connections from any instance in the specified security group.
- Click the button to add the new rule to the security group.
See Also:
Table of Contents
- nagios
- Nagios program that monitors hosts and services on the network, and which can send email or page alerts when a problem arises and when a problem is resolved.
- nagios-devel
- Includes files which can be used by Nagios-related applications.
- nagios-plugins*
- Nagios plugins for Nagios-related applications (including ping and nrpe).
- gd and gd-devel
- gd Graphics Library, for dynamically creating images, and the gd development libraries for gd.
- php
- HTML-embedded scripting language, used by Nagios for the web interface.
- gcc, glibc, and glibc-common
- GNU compiler collection, together with standard programming libraries and binaries (including locale support).
- openssl
- OpenSSL toolkit, which provides support for secure communication between machines.
Install the packages as the root user, using the yum command:
# yum install nagios nagios-devel nagios-plugins* gd gd-devel php gcc glibc glibc-common openssl
Note
#
subscription-manager repos --enable rhel-6-server-optional-rpms
As the root user, execute the following:
# yum install -y nrpe nagios-plugins* openssl
The Nagios plugins are installed in the /usr/lib64/nagios/plugins directory (depending on the machine, they may be in /usr/lib/nagios/plugins).
Note
- Check web-interface user name and password, and check basic configuration.
- Add OpenStack monitoring to the local server.
- If the OpenStack cloud includes distributed hosts:
- Install and configure NRPE on each remote machine (that has services to be monitored).
- Tell Nagios which hosts are being monitored.
- Tell Nagios which services are being monitored for each host.
Table 18.1. Nagios Configuration Files
File Name | Description
---|---
/etc/nagios/nagios.cfg | Main Nagios configuration file.
/etc/nagios/cgi.cfg | CGI configuration file.
/etc/httpd/conf.d/nagios.conf | Nagios configuration for httpd.
/etc/nagios/passwd | Password file for Nagios users.
/usr/local/nagios/etc/ResourceName.cfg | Contains user-specific settings.
/etc/nagios/objects/ObjectsDir/ObjectsFile.cfg | Object definition files that are used to store information about items such as services or contact groups.
/etc/nagios/nrpe.cfg | NRPE configuration file.
The default user name and password is nagiosadmin / nagiosadmin. This value can be viewed in the /etc/nagios/cgi.cfg file.
Perform these steps as the root user:
- To change the default password for the user nagiosadmin, execute:
# htpasswd -c /etc/nagios/passwd nagiosadmin
Note
To create a new user, use the following command with the new user's name:
# htpasswd /etc/nagios/passwd newUserName
- Update the nagiosadmin email address in /etc/nagios/objects/contacts.cfg:
define contact{
        contact_name  nagiosadmin  ; Short name of user
        [...snip...]
        email  yourName@example.com  ; <<*****CHANGE THIS******
}
- Verify that the basic configuration is working:
# nagios -v /etc/nagios/nagios.cfg
If errors occur, check the parameters set in /etc/nagios/nagios.cfg.
- Ensure that Nagios is started automatically when the system boots:
# chkconfig --add nagios
# chkconfig nagios on
- Start up Nagios and restart httpd:
# service httpd restart
# service nagios start
Note
The /etc/nagios/objects/localhost.cfg file is used to define services for basic local statistics (for example, swap usage or the number of current users). You can always comment out these services if they are no longer needed by prefacing each line with a '#' character. This same file can be used to add new OpenStack monitoring services.
Note
New object definition files must be referenced with a cfg_file parameter in the /etc/nagios/nagios.cfg file.
root
user:
- Write a short script for the item to be monitored (for example, whether a service is running), and place it in the
/usr/lib64/nagios/plugins
directory.For example, the following script checks the number of Compute instances, and is stored in a file namednova-list
:#!/bin/env bash export OS_USERNAME=
userName
export OS_TENANT_NAME=tenantName
export OS_PASSWORD=password
export OS_AUTH_URL=http://identityURL
:35357/v2.0/ data=$(nova list 2>&1) rv=$? if [ "$rv" != "0" ] ; then echo $data exit $rv fi echo "$data" | grep -v -e '--------' -e '| Status |' -e '^$' | wc -l - Ensure the script is executable:
#
chmod u+x nova-list - In the
/etc/nagios/objects/commands.cfg
file, specify a command section for each new script:define command { command_line /usr/lib64/nagios/plugins/nova-list command_name nova-list }
- In the /etc/nagios/objects/localhost.cfg file, define a service for each new item, using the defined command. For example:
define service {
    check_command            nova-list
    host_name                localURL
    name                     nova-list
    normal_check_interval    5
    service_description      Number of nova vm instances
    use                      generic-service
}
- Restart nagios using:
# service nagios restart
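The counting pipeline used by the nova-list plugin above can be exercised against captured table output before wiring it into Nagios. A minimal sketch, using an illustrative sample table rather than live `nova list` output:

```shell
# Sketch: the nova-list plugin's counting logic, run against a captured
# `nova list` table instead of a live API call (sample data is illustrative).
data='+--------------------------------------+----------+--------+------------------+
| ID                                   | Name     | Status | Networks         |
+--------------------------------------+----------+--------+------------------+
| 83679162-1378-4288-a2d4-70e13ec132aa | server-1 | ACTIVE | private=10.0.0.2 |
| 9c554aae-f780-4ba4-9e1b-21cbd97d218a | server-2 | ACTIVE | private=10.0.0.3 |
+--------------------------------------+----------+--------+------------------+'
# Border rows contain long dash runs, the header row contains "| Status |",
# and blank lines match '^$'; removing all three leaves one line per instance.
count=$(echo "$data" | grep -v -e '--------' -e '| Status |' -e '^$' | wc -l)
echo "$count"
```

With two instance rows in the sample, the pipeline reports a count of 2, which is the value the plugin would hand back to Nagios.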
On each remote host to be monitored, perform the following steps as the root user:
- In the /etc/nagios/nrpe.cfg file, add the central Nagios server IP address in the allowed_hosts line:
allowed_hosts=127.0.0.1,NagiosServerIP
- In the /etc/nagios/nrpe.cfg file, add any commands to be used to monitor the OpenStack services. For example:
command[keystone]=/usr/lib64/nagios/plugins/check_procs -c 1: -w 3: -C keystone-all
Each defined command can then be specified in the services.cfg file on the Nagios monitoring server (refer to Creating Service Definitions).
Note
Any complicated monitoring can be placed into a script, and then referred to in the command definition. For an OpenStack script example, refer to Configuring OpenStack Services.
- Configure the iptables firewall to allow NRPE traffic.
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing TCP traffic on port 5666 to this file. The new rule must appear before any INPUT rules that REJECT traffic.
-A INPUT -p tcp --dport 5666 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service for the firewall changes to take effect.
# service iptables restart
- Start the NRPE service:
# service nrpe start
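Further services can be exposed through NRPE by repeating the check_procs pattern shown above. A sketch of additional nrpe.cfg command definitions; the process names glance-api and nova-compute are assumptions based on typical Red Hat OpenStack deployments, so verify the actual process names on your hosts before relying on them:

```
command[glance-api]=/usr/lib64/nagios/plugins/check_procs -c 1: -w 3: -C glance-api
command[nova-compute]=/usr/lib64/nagios/plugins/check_procs -c 1: -w 3: -C nova-compute
```

Each command name added here (glance-api, nova-compute) becomes available as a check_nrpe argument in the service definitions on the Nagios monitoring server.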
On the Nagios monitoring server, perform the following steps as the root user:
- In the /etc/nagios/objects directory, create a hosts.cfg file.
- In the file, specify a host section for each machine on which an OpenStack service is running and should be monitored:
define host{
    use          linux-server
    host_name    remoteHostName
    alias        remoteHostAlias
    address      remoteAddress
}
where:
host_name = Name of the remote machine to be monitored (typically listed in the local /etc/hosts file). This name is used to reference the host in service and host group definitions.
alias = Name used to easily identify the host (typically the same as the host_name).
address = Host address (typically its IP address, although an FQDN can be used instead; just make sure that DNS services are available).
For example:
define host{
    host_name    Server-ABC
    alias        OS-ImageServices
    address      192.168.1.254
}
- In the /etc/nagios/nagios.cfg file, under the OBJECT CONFIGURATION FILES section, specify the following line:
cfg_file=/etc/nagios/objects/hosts.cfg
Create service definitions in the /etc/nagios/objects/services.cfg file (as the root user):
- In the /etc/nagios/objects/commands.cfg file, specify the following to handle the use of the check_nrpe plugin with remote scripts or plugins:
define command{
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
- In the /etc/nagios/objects directory, create the services.cfg file.
- In the file, specify the following service sections for each remote OpenStack host to be monitored:
##Basic remote checks#############
##Remember that remoteHostName is defined in the hosts.cfg file.
define service{
    use                    generic-service
    host_name              remoteHostName
    service_description    PING
    check_command          check_ping!100.0,20%!500.0,60%
}
define service{
    use                    generic-service
    host_name              remoteHostName
    service_description    Load Average
    check_command          check_nrpe!check_load
}
##OpenStack Service Checks#######
define service{
    use                    generic-service
    host_name              remoteHostName
    service_description    Identity Service
    check_command          check_nrpe!keystone
}
The above sections ensure that a server heartbeat, load check, and the OpenStack Identity service status are reported back to the Nagios server. All OpenStack services can be reported; just ensure that a matching command is specified in the remote server's nrpe.cfg file.
- In the /etc/nagios/nagios.cfg file, under the OBJECT CONFIGURATION FILES section, specify the following line:
cfg_file=/etc/nagios/objects/services.cfg
To verify the new configuration, perform the following steps as the root user:
- Verify that the updated configuration is working:
# nagios -v /etc/nagios/nagios.cfg
If errors occur, check the parameters set in /etc/nagios/nagios.cfg, /etc/nagios/objects/services.cfg, and /etc/nagios/objects/hosts.cfg.
- Restart Nagios:
# service nagios restart
- Log into the Nagios dashboard again by using the following URL in your browser, and using the nagiosadmin user and the password that was set in Step 1:
http://nagiosHostURL/nagios
The rsyslog service provides facilities both for running a centralized logging server and for configuring individual systems to send their log files to the centralized logging server. This is referred to as configuring the systems for "remote logging".
- While logged in as the root user, install the rsyslog package using the yum command:
# yum install rsyslog
Perform the following steps on the centralized logging server, while logged in as the root user.
- Configure SELinux to allow rsyslog traffic.
# semanage port -a -t syslogd_port_t -p udp 514
- Configure the iptables firewall to allow rsyslog traffic.
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing UDP traffic on port 514 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
-A INPUT -m state --state NEW -m udp -p udp --dport 514 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service for the firewall changes to take effect.
# service iptables restart
- Open the /etc/rsyslog.conf file in a text editor.
- Add these lines to the file, defining the location logs will be saved to:
$template TmplAuth,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
$template TmplMsg,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
authpriv.*                                  ?TmplAuth
*.info;mail.none;authpriv.none;cron.none    ?TmplMsg
- Remove the comment character (#) from the beginning of these lines in the file:
#$ModLoad imudp
#$UDPServerRun 514
- Save the changes to the /etc/rsyslog.conf file.
Perform the following steps on each system that will send logs to the centralized server, while logged in as the root user.
- Edit the /etc/rsyslog.conf file, and specify the address of your centralized log server by adding the following:
*.* @YOURSERVERADDRESS:YOURSERVERPORT
Replace YOURSERVERADDRESS with the address of the centralized logging server. Replace YOURSERVERPORT with the port on which the rsyslog service is listening. For example:
*.* @192.168.20.254:514
Or:
*.* @log-server.company.com:514
The single @ specifies the UDP protocol for transmission. Use a double @@ to specify the TCP protocol for transmission.
Important
The use of the wildcard * character in these example configurations indicates to rsyslog that log entries from all log facilities and of all log priorities must be sent to the remote rsyslog server.
For information on applying more precise filtering of log files, refer to the manual page for the rsyslog configuration file, rsyslog.conf. Access the manual page by running the command man rsyslog.conf.
Once the rsyslog service is started or restarted, the system will send all log messages to the centralized logging server. The rsyslog service must be running on both the centralized logging server and the systems attempting to log to it.
Perform the following steps while logged in as the root user.
- Use the service command to start the rsyslog service.
# service rsyslog start
- Use the chkconfig command to ensure the rsyslog service starts automatically in future.
# chkconfig rsyslog on
The rsyslog service has been started. The service will start sending or receiving log messages based on its local configuration.
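Before restarting rsyslog on a client, the forwarding rule can be sanity-checked for the intended transport. A minimal sketch, using the example server address from above (192.168.20.254) and a temporary file standing in for /etc/rsyslog.conf:

```shell
# Sketch: check which transport a client forwarding rule selects.
# The server address is the example value from the guide; adjust as needed.
cat > /tmp/rsyslog-forward.conf <<'EOF'
*.* @192.168.20.254:514
EOF
# A single @ selects UDP transport; a double @@ would select TCP.
if grep -q '^\*\.\* @@' /tmp/rsyslog-forward.conf; then
    proto=tcp
else
    proto=udp
fi
echo "forwarding via $proto"
```

The same grep distinction is useful in configuration-audit scripts, since a stray second @ silently switches the client from UDP to TCP delivery.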
By default, services are assumed to be in a single region named regionOne. To specify a different region, use the --region argument when adding service endpoints.
$ keystone endpoint-create --region REGION \
   --service-id SERVICEID \
   --publicurl PUBLICURL \
   --adminurl ADMINURL \
   --internalurl INTERNALURL
Replace REGION with the name of the region that the endpoint belongs to. When sharing an endpoint between regions, create an endpoint entry containing the same URLs for each applicable region. For information on setting the URLs for each service, refer to the identity service configuration information of the service in question.
Example 20.1. Endpoints within Discrete Regions
In this example, the APAC and EMEA regions share an identity server (identity.example.com) endpoint while providing region-specific Compute API endpoints.
$ keystone endpoint-list
+---------+--------+------------------------------------------------------+
| id | region | publicurl |
+---------+--------+------------------------------------------------------+
| 0d8b... | APAC | http://identity.example.com:5000/v3 |
| 769f... | EMEA | http://identity.example.com:5000/v3 |
| 516c... | APAC | http://nova-apac.example.com:8774/v2/$(tenant_id)s |
| cf7e... | EMEA | http://nova-emea.example.com:8774/v2/$(tenant_id)s |
+---------+--------+------------------------------------------------------+
A compute node runs the Compute (openstack-nova-compute) service. Once the Compute service is configured and running, it communicates with other nodes in the environment, including Compute API endpoints and Compute conductors, via the message broker.
If the node will also run the conductor service (openstack-nova-conductor), then you must also ensure that the service is configured to access the Compute database; refer to Section 14.3.4.2, “Setting the Database Connection String” for more information.
Run nova service-list while authenticated as the OpenStack administrator (using a keystonerc file) to confirm the status of the new node.
- Use the source command to load the administrative credentials from the keystonerc_admin file.
$ source ~/keystonerc_admin
- Use the nova service-list command to identify the compute node to be removed.
$ nova service-list
+------------------+----------+----------+---------+-------+
| Binary           | Host     | Zone     | Status  | State |
+------------------+----------+----------+---------+-------+
| nova-cert        | node0001 | internal | enabled | up    |
| nova-compute     | node0001 | nova     | enabled | up    |
| nova-conductor   | node0001 | internal | enabled | up    |
| nova-consoleauth | node0001 | internal | enabled | up    |
| nova-network     | node0001 | internal | enabled | up    |
| nova-scheduler   | node0001 | internal | enabled | up    |
| ...              | ...      | ...      | ...     | ...   |
+------------------+----------+----------+---------+-------+
- Use the nova service-disable command to disable the nova-compute service on the node. This prevents new instances from being scheduled to run on the host.
$ nova service-disable HOST nova-compute
+----------+--------------+----------+
| Host     | Binary       | Status   |
+----------+--------------+----------+
| node0001 | nova-compute | disabled |
+----------+--------------+----------+
Replace HOST with the name of the node to disable as indicated in the output of the nova service-list command in the previous step.
- Use the nova service-list command to verify that the relevant instance of the nova-compute service is now disabled.
$ nova service-list
+------------------+----------+----------+----------+-------+
| Binary           | Host     | Zone     | Status   | State |
+------------------+----------+----------+----------+-------+
| nova-cert        | node0001 | internal | enabled  | up    |
| nova-compute     | node0001 | nova     | disabled | up    |
| nova-conductor   | node0001 | internal | enabled  | up    |
| nova-consoleauth | node0001 | internal | enabled  | up    |
| nova-network     | node0001 | internal | enabled  | up    |
| nova-scheduler   | node0001 | internal | enabled  | up    |
| ...              | ...      | ...      | ...      | ...   |
+------------------+----------+----------+----------+-------+
- Use the nova migrate command to migrate running instances to other compute nodes.
$ nova migrate HOST
Replace HOST with the name of the host being removed as indicated by the nova service-list command in the previous steps.
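When scripting the disable and migrate steps, the host name can be pulled out of the `nova service-list` table with awk. A sketch, run against the sample table from the procedure captured as text (substitute the live command in practice):

```shell
# Sketch: extract the nova-compute host name from `nova service-list` output
# so the disable/migrate steps can be scripted. Sample table is illustrative.
table='+------------------+----------+----------+---------+-------+
| Binary           | Host     | Zone     | Status  | State |
+------------------+----------+----------+---------+-------+
| nova-cert        | node0001 | internal | enabled | up    |
| nova-compute     | node0001 | nova     | enabled | up    |
+------------------+----------+----------+---------+-------+'
# Splitting on '|', field 2 is Binary and field 3 is Host; strip the padding.
compute_host=$(echo "$table" | awk -F'|' '$2 ~ /nova-compute/ {gsub(/ /, "", $3); print $3}')
echo "$compute_host"
```

The extracted value (node0001 in the sample) can then be passed to nova service-disable as the HOST argument.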
- If the image has been built with the cloud-init package, it can automatically access metadata passed via config drive.
- If the image does not have the cloud-init package installed, it must be customized to run a script that mounts the config drive on boot, reads the data from the drive, and takes appropriate action, such as adding the public key to an account.
- Use the --config-drive=true parameter when calling nova boot.
The following complex example enables the config drive as well as passing user data, two files, and two key/value metadata pairs, all of which are accessible from the config drive.
$ nova boot --config-drive=true --image my-image-name \
--key-name mykey --flavor 1 --user-data ./my-user-data.txt myinstance \
--file /etc/network/interfaces=/home/myuser/instance-interfaces \
--file known_hosts=/home/myuser/.ssh/known_hosts --meta role=webservers \
--meta essential=false
- Configure the Compute service to automatically create a config drive when booting by setting the following option in /etc/nova/nova.conf:
force_config_drive=true
Note
If a user uses the --config-drive=true flag with the nova boot command, an administrator cannot disable the config drive.
Warning
The genisoimage program must be installed on each Compute host before attempting to use config drive, or the instance will not boot properly.
- To use ISO 9660 format, add the following line to /etc/nova/nova.conf:
config_drive_format=iso9660
- To use VFAT format, add the following line to /etc/nova/nova.conf:
config_drive_format=vfat
Note
For legacy reasons, the config drive can be configured to use VFAT format instead of ISO 9660. However, it is unlikely that you would require VFAT format, since ISO 9660 is widely supported across operating systems. If you use the VFAT format, the config drive will be 64 MB.
The following table describes the configuration options for config drive:
Table 20.1. Description of configuration options for config drive
Configuration option=Default value | (Type) Description |
---|---|
config_drive_cdrom=False | (BoolOpt) Whether Compute attaches the config drive image as a cdrom drive instead of a disk drive. |
config_drive_format=iso9660 | (StrOpt) Config drive format (valid options: iso9660 or vfat). |
config_drive_inject_password=False | (BoolOpt) Sets the administrative password in the config drive image. |
config_drive_skip_versions=1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 | (StrOpt) List of metadata versions to skip placing into the config drive. |
config_drive_tempdir=None | (StrOpt) Where to put temporary files associated with config drive creation. |
force_config_drive=False | (StrOpt) Whether Compute automatically creates config drive on startup. |
mkisofs_cmd=genisoimage | (StrOpt) Name and optional path of the tool used for ISO image creation. Ensure that the specified tool is installed on each Compute host before attempting to use config drive (or the instance will not boot properly). |
The config drive is presented with the config-2 volume label.
Procedure 20.1. Mount the Drive by Label
If the guest operating system supports accessing the disk by label, the config drive can be mounted as the /dev/disk/by-label/config-2 device. For example:
- Create the directory to use for access:
# mkdir -p /mnt/config
- Mount the device:
# mount /dev/disk/by-label/config-2 /mnt/config
Procedure 20.2. Mount the Drive using Disk Identification
If the guest operating system does not use udev, the /dev/disk/by-label directory will not be present.
- Use the blkid command to identify the block device that corresponds to the config drive. For example, when booting the cirros image with the m1.tiny flavor, the device will be /dev/vdb:
# blkid -t LABEL="config-2" -odevice
/dev/vdb
- After you have identified the disk, the device can then be mounted:
- Create the directory to use for access:
# mkdir -p /mnt/config
- Mount the device:
# mount /dev/vdb /mnt/config
Note
- If using a Windows guest, the config drive is automatically displayed as the next available drive letter (for example, 'D:/').
- When accessing the config drive, do not rely on the presence of the EC2 metadata (files under the ec2 directory) in the config drive; this content may be removed in a future release.
- When creating images that access config drive data, if there are multiple directories under the openstack directory, always select the highest API version by date that your consumer supports. For example, if your guest image can support versions 2012-03-05, 2012-08-05, and 2013-04-13, it is best to try 2013-04-13 first and only revert to an earlier version if 2013-04-13 is absent.
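The "pick the newest supported version" rule can be automated because the dated directory names sort lexically into date order. A minimal sketch, using /tmp/config to simulate the mount point used elsewhere in this section:

```shell
# Sketch: choose the newest dated metadata directory on a config drive.
# /tmp/config simulates the real mount point (e.g. /mnt/config).
mkdir -p /tmp/config/openstack/2012-03-05 \
         /tmp/config/openstack/2012-08-05 \
         /tmp/config/openstack/2013-04-13 \
         /tmp/config/openstack/latest
# YYYY-MM-DD names sort lexically into chronological order, so sort | tail
# yields the highest API version the drive offers (the "latest" symlink-style
# directory is filtered out by the date-pattern grep).
newest=$(ls /tmp/config/openstack | grep -E '^[0-9]{4}-[0-9]{2}-[0-9]{2}$' | sort | tail -n 1)
echo "$newest"
```

A guest-side consumer would intersect this list with the versions it supports before falling back to an earlier directory.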
Procedure 20.3. View the Mounted Config Drive
- Move to the newly mounted drive's files. For example:
$ cd /mnt/config
- The files in the resulting config drive vary depending on the arguments that were passed to nova boot. Based on the example in Enabling Config Drive, the contents of the config drive would be:
$ ls
ec2/2013-04-13/meta-data.json
ec2/2013-04-13/user-data
ec2/latest/meta-data.json
ec2/latest/user-data
openstack/2013-08-10/meta_data.json
openstack/2013-08-10/user_data
openstack/content
openstack/content/0000
openstack/content/0001
openstack/latest/meta_data.json
openstack/latest/user_data
The following shows the contents of openstack/2013-08-10/meta_data.json and openstack/latest/meta_data.json (these two files are identical), formatted to improve readability:
{
    "availability_zone": "nova",
    "files": [
        {
            "content_path": "/content/0000",
            "path": "/etc/network/interfaces"
        },
        {
            "content_path": "/content/0001",
            "path": "known_hosts"
        }
    ],
    "hostname": "test.novalocal",
    "launch_index": 0,
    "name": "test",
    "meta": {
        "role": "webservers",
        "essential": "false"
    },
    "public_keys": {
        "mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
    },
    "uuid": "83679162-1378-4288-a2d4-70e13ec132aa"
}
Note
When --file /etc/network/interfaces=/home/myuser/instance-interfaces is used with the nova boot command, the contents of that file are stored in the openstack/content/0000 file on the config drive, and the path is specified as /etc/network/interfaces in the meta_data.json file.
The following shows the contents of ec2/2013-04-13/meta-data.json and ec2/latest/meta-data.json (the two files are identical), formatted to improve readability:
{
    "ami-id": "ami-00000001",
    "ami-launch-index": 0,
    "ami-manifest-path": "FIXME",
    "block-device-mapping": {
        "ami": "sda1",
        "ephemeral0": "sda2",
        "root": "/dev/sda1",
        "swap": "sda3"
    },
    "hostname": "test.novalocal",
    "instance-action": "none",
    "instance-id": "i-00000001",
    "instance-type": "m1.tiny",
    "kernel-id": "aki-00000002",
    "local-hostname": "test.novalocal",
    "local-ipv4": null,
    "placement": {
        "availability-zone": "nova"
    },
    "public-hostname": "test.novalocal",
    "public-ipv4": "",
    "public-keys": {
        "0": {
            "openssh-key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
        }
    },
    "ramdisk-id": "ari-00000003",
    "reservation-id": "r-7lfps8wj",
    "security-groups": [
        "default"
    ]
}
The following files are present only if the --user-data flag was passed to nova boot, and contain the contents of the user data file passed as the argument:
openstack/2013-08-10/user_data
openstack/latest/user_data
ec2/2013-04-13/user-data
ec2/latest/user-data
Note
- Updating Compute Quotas using the Command Line
- Updating Block Storage Quotas using the Command Line
- Select the Admin > Projects option in the navigation sidebar.
- Click the project's button, and then select Modify Quotas. The Edit Project window is displayed.
- Edit quota values on the Quota tab, and click the button. The table below provides parameter descriptions (listed in order of appearance).
Table 21.1. Compute Quota Descriptions
Quota | Description | Service |
---|---|---|
Metadata Items | Number of metadata items allowed per instance. | Compute |
VCPUs | Number of instance cores allowed per tenant. | Compute |
Instances | Number of instances allowed per tenant. | Compute |
Injected Files | Number of injected files allowed per tenant. | Compute |
Injected File Content Bytes | Number of content bytes allowed per injected file. | Compute |
Volumes | Number of volumes allowed per tenant. | Block Storage |
Gigabytes | Number of volume gigabytes allowed per tenant. | Block Storage |
RAM (MB) | Megabytes of RAM allowed per instance. | Compute |
Floating IPs | Number of floating IP addresses allowed per tenant. | Compute |
Fixed IPs | Number of fixed IP addresses allowed per tenant. This number should equal at least the number of allowed instances. | Compute |
Security Groups | Number of security groups allowed per tenant. | Compute |
Security Group Rules | Number of rules per security group. | Compute |
- Because Compute quotas are managed per tenant, use the following to obtain a tenant list:
# keystone tenant-list
+----------------------------------+----------+---------+
| id                               | name     | enabled |
+----------------------------------+----------+---------+
| a981642d22c94e159a4a6540f70f9f8d | admin    | True    |
| 934b662357674c7b9f5e4ec6ded4d0e7 | redhat01 | True    |
| 7bc1dbfd7d284ec4a856ea1eb82dca80 | redhat02 | True    |
| 9c554aaef7804ba49e1b21cbd97d218a | services | True    |
+----------------------------------+----------+---------+
- To update a particular Compute quota for the tenant, use:
# nova-manage project quota tenantName --key quotaName --value quotaValue
For example:
# nova-manage project quota redhat01 --key floating_ips --value 20
metadata_items: 128
injected_file_content_bytes: 10240
ram: 51200
floating_ips: 20
security_group_rules: 20
instances: 10
key_pairs: 100
injected_files: 5
cores: 20
fixed_ips: unlimited
injected_file_path_bytes: 255
security_groups: 10
- Restart the Compute service:
# service openstack-nova-api restart
Note
Default quota values can also be set in the /etc/nova/nova.conf file.
Table 21.2. Compute Quota Descriptions
Quota | Description | Parameter |
---|---|---|
Injected File Content Bytes | Number of bytes allowed per injected file. | injected_file_content_bytes |
Metadata Items | Number of metadata items allowed per instance. | metadata_items |
RAM | Megabytes of instance RAM allowed per tenant. | ram |
Floating IPs | Number of floating IP addresses allowed per tenant. | floating_ips |
Key Pairs | Number of key pairs allowed per user. | key_pairs |
Injected File Path Bytes | Number of bytes allowed per injected file path. | injected_file_path_bytes |
Instances | Number of instances allowed per tenant. | instances |
Security Group Rules | Number of rules per security group. | security_group_rules |
Injected Files | Number of allowed injected files. | injected_files |
Cores | Number of instance cores allowed per tenant. | cores |
Fixed IPs | Number of fixed IP addresses allowed per tenant. This number should equal at least the number of allowed instances. | fixed_ips |
Security Groups | Number of security groups per tenant. | security_groups |
- Because Block Storage quotas are managed per tenant, use the following to obtain a tenant list:
# keystone tenant-list
+----------------------------------+----------+---------+
| id                               | name     | enabled |
+----------------------------------+----------+---------+
| a981642d22c94e159a4a6540f70f9f8d | admin    | True    |
| 934b662357674c7b9f5e4ec6ded4d0e7 | redhat01 | True    |
| 7bc1dbfd7d284ec4a856ea1eb82dca80 | redhat02 | True    |
| 9c554aaef7804ba49e1b21cbd97d218a | services | True    |
+----------------------------------+----------+---------+
- To view Block Storage quotas for a tenant, use:
# cinder quota-show tenantName
For example:
# cinder quota-show redhat01
+-----------+-------+
| Property  | Value |
+-----------+-------+
| gigabytes | 1000  |
| snapshots | 10    |
| volumes   | 10    |
+-----------+-------+
- To update a particular quota value, use:
# cinder quota-update tenantName --quotaKey=NewValue
For example:
# cinder quota-update redhat01 --volumes=15
# cinder quota-show redhat01
+-----------+-------+
| Property  | Value |
+-----------+-------+
| gigabytes | 1000  |
| snapshots | 10    |
| volumes   | 15    |
+-----------+-------+
Note
Default quota values can also be set in the /etc/cinder/cinder.conf file.
Note
Installation requires:
- root access to the host machine (to install components, as well as other administrative tasks such as updating the firewall).
- Administrative access to the Identity service.
- Administrative access to the database (ability to add both databases and users).
Table A.1. OpenStack Installation-General
Item | Description | Value/Verified |
---|---|---|
Hardware Requirements | Requirements in section 3.2 Hardware Requirements must be verified. | Yes | No |
Operating System | Red Hat Enterprise Linux 6.5 Server | Yes | No |
Red Hat Subscription | You must have a subscription to: | Yes | No |
Administrative access on all installation machines | Almost all procedures in this guide must be performed as the root user, so the installer must have root access. | Yes | No |
Red Hat Subscriber Name/Password | You must know the Red Hat subscriber name and password. | |
Machine addresses | You must know the host IP address of the machine or machines on which any OpenStack components and supporting software will be installed. | Provide host addresses for the following: |
Table A.2. OpenStack Identity Service
Item | Description | Value |
---|---|---|
Host Access | The system hosting the Identity service must have: | Verify whether the system has: |
SSL Certificates | If using external SSL certificates, you must know where the database and certificates are located, and have access to them. | Yes | No |
LDAP Information | If using LDAP, you must have administrative access to configure a new directory server schema. | Yes | No |
Connections | The system hosting the Identity service must have a connection to all other OpenStack services. | Yes | No |
Table A.3. OpenStack Object Storage Service
Item | Description | Value |
---|---|---|
File System | Red Hat currently supports the XFS and ext4 file systems for object storage; one of these must be available. | |
Mount Point | The /srv/node mount point must be available. | Yes | No |
Connections | For the cloud installed in this guide, the system hosting the Object Storage service will need a connection to the Identity service. | Yes | No |
Table A.4. OpenStack Image Service
Item | Description | Value |
---|---|---|
Backend Storage | The Image service supports a number of storage backends. You must decide on one of the following: | Storage: |
Connections | The system hosting the Image service must have a connection to the Identity, Dashboard , and Compute services, as well as to the Object Storage service if using OpenStack Object Storage as its backend. | Yes | No |
Table A.5. OpenStack Block Storage Service
Item | Description | Value |
---|---|---|
Backend Storage | The Block Storage service supports a number of storage backends. You must decide on one of the following: | Storage: |
Connections | The system hosting the Block Storage service must have a connection to the Identity, Dashboard, and Compute services. | Yes | No |
Table A.6. OpenStack Networking Service
Item | Description | Value |
---|---|---|
Plugin agents | In addition to the standard OpenStack Networking components, a wide choice of plugin agents are also available that implement various networking mechanisms. You will need to decide which of these apply to your network and install them. | Circle appropriate: |
Connections | The system hosting the OpenStack Networking service must have a connection to the Identity, Dashboard, and Compute services. | Yes | No |
Table A.7. OpenStack Compute Service
Item | Description | Value |
---|---|---|
Hardware virtualization support | The OpenStack Compute service requires hardware virtualization support. Note: a procedure is included in this guide to verify this (refer to Checking for Hardware Virtualization Support). | Yes | No |
VNC client | The Compute service supports the Virtual Network Computing (VNC) console to instances through a web browser. You must decide whether this will be provided to your users. | Yes | No |
Resources: CPU and Memory | OpenStack supports overcommitting of CPU and memory resources on Compute nodes (refer to Configuring Resource Overcommitment). | Decide: |
Resources: Host | You can reserve resources for the host, to prevent a given amount of memory and disk resources from being automatically assigned to other resources on the host (refer to Reserving Host Resources). | Decide: |
libvirt Version | You will need to know the version of your libvirt for the configuration of Virtual Interface Plugging (refer to Configuring Virtual Interface Plugging). | Version: |
Connections | The system hosting the Compute service must have a connection to all other OpenStack services. | Yes | No |
Table A.8. OpenStack Dashboard Service
Item | Description | Value |
---|---|---|
Host software | The system hosting the Dashboard service must have the following already installed: | Yes | No |
Connections | The system hosting the Dashboard service must have a connection to all other OpenStack services. | Yes | No |
When an instance fails to launch, a generic ERROR message is displayed. Determining the actual cause of the failure requires the use of the command line tools.
Use nova list to locate the unique identifier of the instance. Then use this identifier as an argument to the nova show command. One of the items returned will be the error condition. The most common value is NoValidHost.
Note
The console log of an instance can be viewed using the nova console-log command. Sometimes, however, the log of a running instance will either appear to be completely empty or contain a single errant character, often a ? character.
In this case, ensure that the instance's image directs kernel messages to the serial console by adding console=tty0 console=ttyS0,115200n8 to the list of kernel arguments specified in the boot loader.
When the identity service command line client (keystone) is unable to contact the identity service, it returns an error:
$ keystone ACTION
Unable to communicate with identity service: [Errno 113] No route to host. (HTTP 400)
Replace ACTION with any valid identity service client action, such as user-list or service-list. When the service is unreachable, any identity client command that requires it will fail.
- Service Status
- On the system hosting the identity service, check the service status:
$ service openstack-keystone status
keystone (pid 2847) is running...
If the service is not running, log in as the root user and start it.
# service openstack-keystone start
- Firewall Rules
- On the system hosting the identity service, check that the firewall allows TCP traffic on ports 5000 and 35357. This command must be run while logged in as the root user.
# iptables --list -n | grep -P "(5000|35357)"
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    multiport dports 5000,35357
If no rule is found then add one to the /etc/sysconfig/iptables file:
-A INPUT -p tcp -m multiport --dports 5000,35357 -j ACCEPT
Restart the iptables service for the change to take effect.
# service iptables restart
- Service Endpoints
- On the system hosting the identity service check that the endpoints are defined correctly.
- Obtain the administration token:
$ grep admin_token /etc/keystone/keystone.conf
admin_token = 0292d404a88c4f269383ff28a3839ab4
- Determine the correct administration endpoint for the identity service:
http://IP:35357/VERSION
Replace IP with the IP address or host name of the system hosting the identity service. Replace VERSION with the API version (v2.0 or v3) that is in use.
- Unset any pre-defined identity service related environment variables:
$ unset OS_USERNAME OS_TENANT_NAME OS_PASSWORD OS_AUTH_URL
- Use the administration token and endpoint to authenticate with the identity service. Confirm that the identity service endpoint is correct:
$ keystone --os-token=TOKEN \
   --os-endpoint=ENDPOINT \
   endpoint-list
Verify that the listed publicurl, internalurl, and adminurl for the identity service are correct. In particular, ensure that the IP addresses and port numbers listed within each endpoint are correct and reachable over the network.
If these values are incorrect, refer to Section 9.6, “Creating the Identity Service Endpoint” for information on adding the correct endpoint. Once the correct endpoints have been added, remove any incorrect endpoints using the endpoint-delete action of the keystone command.
$ keystone --os-token=TOKEN \
   --os-endpoint=ENDPOINT \
   endpoint-delete ID
Replace TOKEN and ENDPOINT with the values identified previously. Replace ID with the identity of the endpoint to remove as listed by the endpoint-list action.
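The grep shown earlier returns the whole `admin_token = …` line; scripts that pass the token to --os-token need just the value. A sketch using awk against a sample keystone.conf written to a temporary path for illustration:

```shell
# Sketch: isolate the bare admin_token value from keystone.conf so it can be
# passed to --os-token. The sample file below is illustrative only.
cat > /tmp/keystone.conf <<'EOF'
[DEFAULT]
admin_token = 0292d404a88c4f269383ff28a3839ab4
EOF
# Split on "=" with surrounding spaces; field 2 is the bare token value.
token=$(awk -F' *= *' '/^admin_token/ {print $2}' /tmp/keystone.conf)
echo "$token"
```

Against a real deployment, point awk at /etc/keystone/keystone.conf instead of the sample file.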
The Block Storage service stores log files in the /var/log/cinder/ directory of the host on which each service runs.
Table C.1. Log Files
File name | Description |
---|---|
api.log | The log of the API service (openstack-cinder-api). |
cinder-manage.log | The log of the management interface (cinder-manage). |
scheduler.log | The log of the scheduler service (openstack-cinder-scheduler). |
volume.log | The log of the volume service (openstack-cinder-volume). |
The Compute service stores log files in the /var/log/nova/ directory of the host on which each service runs.
Table C.2. Log Files
File name | Description |
---|---|
api.log | The log of the API service (openstack-nova-api). |
cert.log | The log of the X509 certificate service (openstack-nova-cert). This service is only required by the EC2 API to the Compute service. |
compute.log | The log file of the Compute service itself (openstack-nova-compute). |
conductor.log | The log file of the conductor (openstack-nova-conductor) that provides database query support to the Compute service. |
consoleauth.log | The log file of the console authentication service (openstack-nova-consoleauth). |
network.log | The log of the network service (openstack-nova-network). Note that this service only runs in environments that are not configured to use OpenStack Networking. |
nova-manage.log | The log of the management interface (nova-manage). |
scheduler.log | The log of the scheduler service (openstack-nova-scheduler). |
The dashboard is served by the Apache web server (httpd). As a result, the log files for the dashboard are stored in the /var/log/httpd directory.
Table C.3. Log Files

| File name | Description |
|---|---|
| access_log | The access log details all attempts to access the web server. |
| error_log | The error log details all unsuccessful attempts to access the web server and the reason for each failure. |
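Where the access log needs a quick summary rather than a line-by-line read, the status-code field of the standard log format can be tallied with awk. This is a hedged sketch, not part of the product: the log entries below are fabricated examples standing in for /var/log/httpd/access_log, and the client address shown is a placeholder.

```shell
# A minimal sketch: summarize HTTP status codes in a dashboard access log.
# A temporary file with fabricated entries stands in for
# /var/log/httpd/access_log so the sketch is self-contained.
access_log=$(mktemp)
cat > "$access_log" <<'EOF'
192.0.2.5 - - [15/Nov/2013:10:00:00 -0500] "GET /dashboard HTTP/1.1" 200 1024
192.0.2.5 - - [15/Nov/2013:10:00:05 -0500] "GET /dashboard/admin HTTP/1.1" 403 512
EOF
# Field 9 of the common/combined log format is the HTTP status code.
summary=$(awk '{count[$9]++} END {for (s in count) print s, count[s]}' "$access_log" | sort)
printf '%s\n' "$summary"
rm -f "$access_log"
```

On a real host, point the awk command at /var/log/httpd/access_log instead of the temporary file.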
The Identity service log file is stored in the /var/log/keystone/ directory of the host on which it runs.
Table C.4. Log File

| File name | Description |
|---|---|
| keystone.log | The log of the identity service (openstack-keystone). |
The Image service log files are stored in the /var/log/glance/ directory of the host on which each service runs.
Table C.5. Log Files

| File name | Description |
|---|---|
| api.log | The log of the API service (openstack-glance-api). |
| registry.log | The log of the image registry service (openstack-glance-registry). |
The OpenStack Networking service log files are stored in the /var/log/neutron/ directory of the host on which each service runs.
Table C.6. Log Files

| File name | Description |
|---|---|
| dhcp-agent.log | The log for the DHCP agent (neutron-dhcp-agent). |
| l3-agent.log | The log for the L3 agent (neutron-l3-agent). |
| lbaas-agent.log | The log for the Load Balancer as a Service (LBaaS) agent (neutron-lbaas-agent). |
| linuxbridge-agent.log | The log for the Linux Bridge agent (neutron-linuxbridge-agent). |
| metadata-agent.log | The log for the metadata agent (neutron-metadata-agent). |
| openvswitch-agent.log | The log for the Open vSwitch agent (neutron-openvswitch-agent). |
| server.log | The log for the OpenStack Networking service itself (neutron-server). |
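Because every service above writes to a predictable /var/log subdirectory, a single loop can sweep them all for ERROR entries when troubleshooting. The following is a minimal sketch, not part of the product: it builds a temporary directory with one fabricated cinder log so it is self-contained, where on a real host LOG_ROOT would be /var/log.

```shell
# A minimal sketch: scan the per-service log directories listed in the tables
# above for ERROR entries. A temporary directory with a fabricated log file
# stands in for /var/log; on a real host, set LOG_ROOT=/var/log instead.
LOG_ROOT=$(mktemp -d)
mkdir -p "$LOG_ROOT/cinder"
cat > "$LOG_ROOT/cinder/api.log" <<'EOF'
2013-11-15 10:00:01 INFO cinder.api [-] Starting API service
2013-11-15 10:00:02 ERROR cinder.api [-] Unable to connect to database
EOF
errors=""
for service in cinder nova keystone glance neutron; do
    # Skip services that are not installed on this host.
    [ -d "$LOG_ROOT/$service" ] || continue
    # -H prefixes each match with its file name so the source is identifiable.
    errors="$errors$(grep -H ' ERROR ' "$LOG_ROOT/$service"/*.log)"
done
printf '%s\n' "$errors"
```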
Revision History

| Revision | Date |
|---|---|
| Revision 3-44 | Fri Nov 15 2013 |
| Revision 3-42 | Fri Nov 15 2013 |
| Revision 3-41 | Tue Nov 12 2013 |
| Revision 3-40 | Tue Nov 12 2013 |
| Revision 3-39 | Fri Oct 25 2013 |
| Revision 3-38 | Mon Sep 16 2013 |
| Revision 3-37 | Fri Sep 06 2013 |
| Revision 3-36 | Fri Sep 06 2013 |
| Revision 3-35 | Tue Sep 03 2013 |
| Revision 3-34 | Mon Sep 02 2013 |
| Revision 3-33 | Thu Aug 08 2013 |
| Revision 3-32 | Tue Aug 06 2013 |
| Revision 3-31 | Tue Aug 06 2013 |
| Revision 3-30 | Wed Jul 17 2013 |
| Revision 3-29 | Mon Jul 08 2013 |
| Revision 3-26 | Mon Jul 01 2013 |
| Revision 3-24 | Mon Jun 24 2013 |
| Revision 3-23 | Thu Jun 20 2013 |
| Revision 3-22 | Tue Jun 18 2013 |
| Revision 3-21 | Thu Jun 13 2013 |
| Revision 3-13 | Wed May 29 2013 |