
OpenStack Deployment Manual - Support


Contents

1. <type>linux</type>
      <filesystem>ext3</filesystem>
      <mountPoint>/local</mountPoint>
      <mountOptions>defaults,noatime,nodiratime</mountOptions>
    </partition>
  </device>
</diskSetup>

Installation Logs
Installation logs for Ceph are kept at /var/log/cm/ceph-setup.log

3.3.2 Ceph Management With cmgui And cmsh
Only one instance of Ceph is supported at a time. Its name is ceph.

Ceph Overview And General Properties
From within cmsh, ceph mode can be accessed:

Example

root@bright71 ~# cmsh
[bright71]% ceph
[bright71->ceph]%

From within ceph mode, the overview command lists an overview of Ceph OSDs, MONs, and placement groups for the ceph instance. Parts of the displayed output are elided in the example that follows, for viewing convenience:

Example

[bright71->ceph]% overview ceph
Parameter                        Value
-------------------------------- ----------
Status                           HEALTH_OK
Number of OSDs                   2
Number of OSDs up                2
Number of OSDs in                2
Number of mons                   1
Number of placement groups       192
Placement groups data size       0B
Placement groups used size       ...
Placement groups available size  ...
Placement groups total size      ...
...

The cmgui equivalent of the overview command is the Overview tab, accessed from within the Ceph resource.
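The overview information can also be cross-checked with Ceph's own command line tools on the head node. This is a minimal sketch, assuming the default instance name ceph and that the admin keyring is in place:

Example

root@bright71 ~# ceph status     # overall health, MON quorum, OSD and placement group counts
root@bright71 ~# ceph osd tree   # lists the OSDs per node, with their up/down status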
2. Type                            RadosGatewayRole
Module                             mod_fastcgi.so
Server Port                        18888
Server Root                        /var/www
Server Script                      s3gw.fcgi
Server Socket                      /tmp/radosgw.sock
Enable Keystone Authentication     yes
Keystone Accepted Roles
Keystone Revocation Interval       600
Keystone Tokens Cache Size         500
NSS DB Path                        /var/lib/ceph/nss

For example, setting enablekeystoneauthentication to yes and committing it makes RADOS GW services available to OpenStack instances, if they have already been initialized (section 3.4.1) to work with Bright Cluster Manager.

RADOS GW Properties In cmgui
RADOS GW properties can be accessed in cmgui by selecting the device from the resource tree, then selecting the Ceph subtab for that device, then ticking the Rados Gateway panel, then clicking on the Advanced button (figure 3.9).

[screenshot: cmgui resource tree, Ceph subtab with the Rados Gateway panel ticked, Keystone Authentication enabled, and the Advanced button]

Figure 3.9: Accessing The RADOS GW Properties Button In
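The same property can also be set from cmsh, through the radosgateway role of the device that runs the gateway. A minimal sketch; the node name node001 is an assumption, and the role and property names are those listed in the table of section 3.4.3:

Example

[bright71]% device use node001
[bright71->device[node001]]% roles
[bright71->device[node001]->roles]% use radosgateway
[bright71->device[node001]->roles[radosgateway]]% set enablekeystoneauthentication yes
[bright71->device[node001]->roles[radosgateway*]]% commit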
3. You can assign IP addresses to your virtual nodes:
  - using DHCP-assigned IP addresses
  - using static IP addresses. These will be in sequence, starting from an address that you must specify.
  Do you want to use DHCP or assign static IP addresses to your virtual nodes?
  DHCP / Statically

Figure 2.46: DHCP Or Static IP Address Selection

The instances can be configured to obtain their IP addresses either via DHCP or via static address assignment (figure 2.46).

2.2.20 Floating IPs

  Do you want to enable Floating IPs?
  Enable floating IPs: no

Figure 2.47: Floating IPs

The Floating IPs screen (figure 2.47) lets the administrator enable Floating IPs on the external network, so that instances can be accessed using these.

2.2.21 External Network Starting Floating IP

  The network externalnet has:
  Base IP address:      10.2.0.0
  Broadcast IP address: 10.2.255.255
  Usable addresses:     65534
  You have to specify the IP address range for Floating IPs.
  What is the first IP address of the Floating IP address range? 10.2.0.64

Figure 2.48: External Network Starting Floating IP

A screen similar to figure 2.48 allows the administrator to specify the starting floating IP address on the external network.

2.2.22 External Network Ending Floating IP

  First IP: 10.2.0.64; now specify the last IP.
  The network externalnet has:
  Base IP address:      10.2.0.0
  Broadcast IP address:
4. 2.1.16 Inbound External Traffic

[wizard steps: 1 Introduction, 2 General questions, 3 User Instances, 4 Bright-managed instances, 5 External Network, 6 Summary & Deployment]

Inbound External Traffic
  Enabling floating IPs makes both user and Bright-managed instances accessible to inbound connections coming from the external network. Each instance can be accessed via a dedicated floating IP address. Floating IPs are assigned to the instances from a preconfigured IP pool of available IP addresses. The IP pool must be specified, and cannot include the IP address of the external network's default gateway.
  Enabling floating IPs also automatically enables outbound connectivity from instances, even if they don't have a floating IP assigned. OpenStack will reserve a single IP from the floating IP pool for those outbound connections. Therefore, if the OpenStack deployment is to have n floating IPs available to instances, the floating IP allocation pool should span n+1 IP addresses. For example, to make 10 floating IPs available to instances, a pool of 11 addresses is needed.
  Do you want to enable Floating IPs? Yes / No
  IP range start:
  IP range end:
  Previous / Next

Figure 2.17: Inbound External Traffic Screen

All OpenStack-hosted virtual machines are typically attached to one or more virtual networks. However, unless they are also connected to the internal network of the cluster, there is no simple way to connect to them from outside their virtual network. To solve this, Floating IPs have been introduced by OpenStack
5. Ceph Concepts
On top of the object store layer are 3 kinds of access layers:

1. Block device access: RADOS Block Device (RBD) access can be carried out in two slightly different ways:
(i) via a Linux kernel module-based interface to RADOS. The module presents itself as a block device to the machine running that kernel. The machine can then use the RADOS storage that is typically provided elsewhere.
(ii) via the librbd library, used by virtual machines based on QEMU or KVM. A block device that uses the library on the virtual machine then accesses the RADOS storage, which is typically located elsewhere.

2. Gateway API access: RADOS Gateway (RADOS GW) access provides an HTTP REST gateway to RADOS. Applications can talk to RADOS GW to access object storage in a high-level manner, instead of talking to RADOS directly at a lower level. The RADOS GW API is compatible with the APIs of Swift and Amazon S3.

3. Ceph Filesystem access: CephFS provides a filesystem access layer. A component called MDS (Metadata Server) is used to manage the filesystem with RADOS. MDS is used in addition to the OSD and MON components used by the block and object storage forms when CephFS talks to RADOS. The Ceph filesystem is not regarded as production-ready by the Ceph project at the time of writing (July 2014), and is therefore not yet supported by Bright Cluster Manager.

3.1.2 Ceph Software Considerations Before Use
Recommended Filesystem For Ceph Use
The sto
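To illustrate the kernel module access route of item 1(i): the sequence below creates a RADOS-backed image and presents it as a local block device. A minimal sketch, assuming a healthy cluster, a pool named rbd, and an arbitrary image name:

Example

root@bright71 ~# rbd create rbd/test-img --size 1024   # 1024 MiB image in the rbd pool
root@bright71 ~# rbd map rbd/test-img                  # kernel module presents it, e.g. as /dev/rbd0
root@bright71 ~# mkfs.ext3 /dev/rbd0                   # now usable like any local block device
root@bright71 ~# mount /dev/rbd0 /mnt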
6. Some of the major Ceph configuration parameters can be viewed, and their values managed, by CMDaemon from ceph mode. The show command shows parameters and their values for the ceph instance:

Example

[bright71->ceph]% show ceph
Parameter                        Value
-------------------------------- ------------------------------------------
Admin keyring path               /etc/ceph/ceph.client.admin.keyring
Bootstrapped                     yes
Client admin key                 AODkUM5T4LhZFxAA+JOHvzvbyb9txH0bwvxUSQ
Cluster networks
Config file path                 /etc/ceph/ceph.conf
Creation time                    Thu, 25 Sep 2014 13:54:11 CEST
Extra config parameters
Monitor daemon port              6789
Monitor key                      AQODKkUM5TwM21EhAA0OCcdH+UFhGJ902n3y+Avng
Monitor keyring path             /etc/ceph/ceph.mon.keyring
Public networks
Revision
auth client required cephx       yes
auth cluster required cephx      yes
auth service required cephx      yes
filestore xattr use omap         no
fsid                             abf8ebaf-71c0-4d75-badc-3b81bc2b74d8
mon max osd                      10000
mon osd full ratio               0.95
mon osd nearfull ratio           0.85
name                             ceph
osd pool default min size        0
osd pool default pg num          8
osd pool default pgp num         8
osd pool default size            2
version                          0.80.5

The cmgui equivalent of these settings is the Settings tab, accessed from within the Ceph resource.

Ceph extraconfigparameters setting
The Extra config parameters property of a ceph mode object can be used to customize the Ceph configuration file. The Ceph configuration file is typically in /etc/ceph/ceph.conf
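A committed change can be cross-checked against the configuration file itself from the shell. A trivial sketch, assuming the default path named above; the output line shown is illustrative:

Example

root@bright71 ~# grep 'osd pool default size' /etc/ceph/ceph.conf
osd pool default size = 2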
7. <Back>

Figure 3.4: Ceph Installation Monitors Configuration

In this screen:
- Ceph Monitors can be added to nodes or categories.
- Existing Ceph Monitors can be edited or removed (figure 3.5) from nodes or categories.
- The OSD configuration screen can be reached after making changes, if any, to the Ceph Monitor configuration.

Typically, in a first run, the head node has a Ceph Monitor added to it.

Editing Ceph Monitors

  Edit Monitor role for node bright
  Bootstrap is used to specify whether node(s) running this Monitor service must be up during the setup process. Possible values are auto, true, and false. It is recommended to use the default value, auto.
  Data path is used to specify the data path for the Monitor. By default its value is /var/lib/ceph/mon/$cluster-$hostname, where $cluster is the name of the Ceph instance (usually ceph), and $hostname is the name of the node. It is recommended to use the default value.
  Bootstrap:
  Data path:
  <Back>

Figure 3.5: Ceph Installation Monitors Editing: Bootstrap And Data Path

The Edit option in figure 3.4 opens up a screen (figure 3.5) that allows the editing of existing or newly added Ceph Monitors, for a node or category:
- The bootstrap option can be set. The option configures initialization of the maps on the Ceph Monitors services prior to the actual setup process
8. or do you want to share the interface connected to the internal network (internalnet)?
  Share the interface with internalnet via an alias interface
  Create a dedicated physical interface
  Cancel / Previous / Next

Figure 2.14: Dedicated Physical Networks Screen

The dedicated physical networks screen (figure 2.14) allows the following to be configured:
- the compute nodes that host the OpenStack instances
- the network nodes that provide networking services for OpenStack

For each of these types of node (compute node or network node), the interface can:
- either be shared with internalnet, using an alias interface,
- or be created separately, on a physical network interface. The interface must then be given a name. The name can be arbitrary.

2.1.14 Bright-Managed Instances

[wizard steps: 1 Introduction, 2 General questions, 3 User Instances, 5 External Network, 6 Summary & Deployment]

Bright Managed Instances
  Bright-managed instances are created by the cluster administrators using the Bright CMDaemon API, e.g. using CMSH or CMGUI. They are provisioned using a Bright software image, and thus have CMDaemon running, and are managed just like regular Bright compute nodes. Regular OpenStack end users are typically not able to create Bright-managed instances, unless given access to the Bright API.
  Do you want to enable Bright-managed instances on your Op
9. 10.2.255.255
  Usable addresses: 65534
  You have to specify the IP address range for Floating IPs.
  What is the last IP address of the Floating IP address range? 10.2.0.128

Figure 2.49: External Network Ending Floating IP

A screen similar to figure 2.49 allows the administrator to specify the ending floating IP address on the external network.

2.2.23 VNC Proxy Hostname

  The VNC Proxy Hostname must be set to allow remote access to the virtual consoles of the virtual nodes from anywhere on the external networks. The VNC Proxy Hostname is an OpenStack term for the Fully Qualified Domain Name (FQDN) which resolves and routes to the head node. You can always change the value later, and an IP address can also be used instead.
  What's the externally visible FQDN, or an external IP, of the head node (max 32 characters)? bright70.brightcomputing.com

Figure 2.50: VNC Proxy Hostname

The VNC Proxy Hostname screen (figure 2.50) lets the administrator set the FQDN as seen from the external network. An IP address can be used instead of the FQDN.

2.2.24 Nova Compute Hosts

  The Nova compute hosts will be the guest hosts of the virtual machines, and are the core of your OpenStack deployment (nova-compute). These nodes can also optionally run compute API endpoints (nova-api).
  Which compute nodes to use for Nova compute hosts? (default: node001)
10. Configuration Precheck
Final Configuration Precheck
Initializing Setup
Cloning Base Software Image into Image openstack-image

  Detailed deployment log:
  Set the OpenStack software image path to /cm/images/openstack-image
  #### Final Configuration Precheck
  # Calculating space requirements for software image clones
  # Free space on /cm/images is 195617506 KB
  # Base software image size is 3532149 KB
  # Both software image paths exist, skip calculating free space requirements
  #### Initializing Setup
  # Ensuring OpenStack shared directory: /cm/shared/apps/openstack
  # Ensuring cinder NFS export directory exists: /cm/shared/apps/openstack/cinder-volumes
  # Ensuring that repositories exist
  # Ensuring OpenStack RPM repo
  # Setting up EPEL repository
  # Removing any existing boto packages (conflicts with RDO packages)
  # Ensuring that a software image (openstack-image) exists
  #### Cloning Base Software Image into Image openstack-image
  # Removing the path /cm/images/openstack-image
  # Cloning the image (might take a while)

Figure 2.23: OpenStack Deployment Progress Screen

At the end of its run, the cluster has OpenStack set up and running in an integrated manner with Bright Cluster Manager. The administrator can now configure the cluster to suit the particular site requirements.

2.2 Installation Of OpenStack From The Shell

The cmgui OpenStack installation (section 2.1) us
11. Floating IPs are a range of IP addresses that the administrator specifies on the external network, and they are made available to the users (tenants) for assignment to their user instances. The number of Floating IPs available to users is limited by the Floating IP quotas set for the tenants. Administrators can also assign Floating IP addresses to Bright-managed instances. Currently, however, this has to be done via the OpenStack API or Dashboard.

The inbound external traffic screen (figure 2.17) reserves a range of IP addresses within the used external network. In this example it happens to fall within the 10.2.0.0/16 network range.

If specifying a Floating IP range, a single IP address from this range is always reserved by OpenStack for outbound traffic. This is implemented via sNAT. The address is reserved for instances which have not been assigned a Floating IP. Therefore, the IP address range specified in this screen normally expects a minimum of two IP addresses: one reserved for outbound traffic, and one Floating IP.

If the administrator would like to allow OpenStack instances to have outbound connectivity, but at the same time not have floating IPs, then this can be done by:
- not configuring Floating IPs, by selecting the No option, and
- in the next screen (figure 2.18), specifying a single IP address.

Alternatively, a user instance can be configured to access the external network via the head node. However, this is a bit more complicate
12. Ceph OSD roles to categories or nodes. Please note that nodes that are to run OSDs must be down during the setup process.
  Add      Add OSD
  Edit     Edit OSDs
  Remove   Remove OSDs
  Finish   Run setup procedure
  <Back>

Figure 3.6: Ceph OSDs Configuration

If Proceed to OSDs is chosen from the Ceph Monitors configuration screen in figure 3.4, then a screen for Ceph OSDs configuration (figure 3.6) is displayed, where:
- OSDs can be added to nodes or categories. On adding, the OSDs must be edited with the edit menu.
- Existing OSDs can be edited or removed (figure 3.7) from nodes or categories.
- To finish up on the installation, after any changes to the OSD configuration have been made, the Finish option runs the Ceph setup procedure itself.

Editing Ceph OSDs

  Edit OSD role for node node001
  You can specify either the number of OSDs, or the list of block devices to be used by the OSDs. You can also specify both, but then the number of OSDs must match the number of block devices. If you leave the block devices field blank, then each OSD gets its own filesystem under the specified data path. Please note that you must specify a path or block device for each OSD. A space-separated list of block devices can be specified in the Block devices field, e.g.: sdb sdc. In this case /dev/sdb1 and /dev/sdc1 will be formatted and mounted when a storage
13. and using extraconfigparameters settings, Ceph can be configured with changes that CMDaemon would otherwise not manage. After the changes have been set, CMDaemon manages them further.

Thus, the following configuration section in the Ceph configuration file:

[mds.2]
host=rabbit

could be placed in the file via cmsh with:

Example

root@bright71 ~# cmsh
[bright71]% ceph use ceph
[bright71->ceph[ceph]]% append extraconfigparameters "[mds.2]\nhost=rabbit"
[bright71->ceph*[ceph*]]% commit

If a section name, enclosed in square brackets, is used, then the section is recognized at the start of an appended line by CMDaemon. If a section that is specified in the square brackets does not already exist in /etc/ceph/ceph.conf, then it will be created. The \n is interpreted as a new line at its position. After the commit, the extra configuration parameter setting is maintained by the cluster manager.

If the section already exists in /etc/ceph/ceph.conf, then the associated key=value pair is appended. For example, the following appends host2=bunny to an existing [mds.2] section:

[bright71->ceph[ceph]]% append extraconfigparameters "[mds.2]\nhost2=bunny"
[bright71->ceph*[ceph*]]% commit

If no section name is used, then the key=value entry is appended to the [global] section:

[bright71->ceph[ceph]]% append extraconfigparameters "osd journal size = 128"
[bright71->ceph*[ceph*]]% commit

The /etc/ceph/ceph.conf file has the c
14. cmgui. This brings up the RADOS GW Advanced Role Settings window (figure 3.10), which allows RADOS GW properties to be managed. For example, ticking the Enable Keystone Authentication checkbox and saving the setting makes RADOS GW services available to OpenStack instances, if they have already been initialized (section 3.4.1) to work with Bright Cluster Manager.

  Server Port:                    18888
  Server Root:                    /var/www
  Server Script:                  s3gw.fcgi
  Server Socket:                  /tmp/radosgw.sock
  Module:                         mod_fastcgi.so
  Enable Keystone Authentication: [ ]
  NSS database path:              /var/lib/ceph/nss
  Keystone Revocation Interval:   600
  Keystone Tokens Cache Size:     500
  Keystone Accepted Roles:
  Cancel / Ok

Figure 3.10: RADOS GW Advanced Role Settings In cmgui

3.4.3 Turning Keystone Authentication On And Off For RADOS GW
If command line installation is carried out for Bright Cluster Manager for the first time with cm-radosgw-setup -o (section 3.4.1), then RADOS GW is initialized for OpenStack instances. This initialization for RADOS GW for OpenStack is not done by the cmsh commits or cmgui checkbox changes (section 3.4.2). So, toggling Ceph availability to OpenStack instances with cmgui and cmsh (section 3.4.3) is only possible if cm-radosgw-setup -o has been run once before, to initialize the configuration (section 3.4.1).

If RADOS GW has been installed, then OpenStack instances can access it if Keystone authentication for RADOS
15. if, in the previous screen (figure 2.17), Floating IPs have not been configured, that is, if the No option has been selected, and if there are user instances being configured. The Allow Outbound Traffic screen does not appear if the cluster has only Bright-managed instances and no user-managed instances. This is because Bright-managed instances can simply route their traffic through the head node, without needing the network node to be adjusted with the configuration option of this screen.

If the Allow Outbound Traffic screen appears, then specifying a single IP address for the Outbound IP value in the current screen (figure 2.18) sets up the configuration to allow outbound connections only.

A decision was made earlier about allowing user instances to access the internal network of the cluster (section 2.1.10). In the dialog of figure 2.18, if user instances are not enabled, then the administrator is offered the option once more to allow access to the internal network of the cluster by user instances.

2.1.18 External Network Interface for Network Node

[wizard steps: 1 Introduction, 2 General questions, 3 User Instances, 4 Bright-managed instances, 6 Summary & Deployment]

External Network Interface for Network Node
  In order for the network node to provide routing functionality, it needs a connection to the external network. That connection could be set up using a dedicated interface, or, if the
16. images (Nova Compute). Using Ceph as the backend introduces noticeable performance improvements in the areas of instance migration, volume snapshotting, booting new instances, and overall reliability.

  Do you want to use Ceph as the backend for the following services?
  Volume storage (Cinder): Yes, store volumes in Ceph / No, store volumes on an NFS share
  Image storage (Glance): Yes, store images in Ceph / No, store images on the image storage nodes
  Root and ephemeral disks storage (Nova): Yes, store root and ephemeral disks in Ceph / No, store root and ephemeral disks on compute hosts

  RADOS Object Gateway, also known as Ceph Object Gateway, is an HTTP gateway for the Ceph object store. It exposes Ceph's storage capabilities using the OpenStack Swift- or AWS S3-compatible APIs. It effectively allows the end users to store their own data as objects inside Ceph, using a REST HTTP interface. RADOS gateway is an optional component of an OpenStack deployment.
  Do you want to deploy the RADOS Object Gateway?
  Yes, give users access to object storage / No, do not configure object storage
  Note that Ceph object storage capabilities can be easily added later on by manually running cm-radosgw-setup. See the Bright Cluster Manager Administrator Manual for details.

Figure 2.7: Ceph Configuration Screen

If Ceph has been configured with Bright Cluster Manager before the wizard is run, then the Ceph configuration screen (figure 2.7) is displayed. Ch
17. internal network of the OpenStack deployment. Depending on the choices made later in the wizard, the selected network might become the network used for optionally hosting Bright-managed instances, and/or optionally the network to which user-created instances can connect.

  Please specify which internal network is to be used for connecting OpenStack compute hosts.
  OpenStack internal network: internalnet
  Cancel

Figure 2.8: OpenStack Internal Network Selection Screen

The OpenStack internal network selection screen (figure 2.8) decides the network to be used on the nodes that run OpenStack.

2.1.8 OpenStack Software Image Selection

[wizard steps: 3 User Instances, 4 Bright-managed instances, 5 External Network, 6 Summary & Deployment]

OpenStack Software Image Selection
  All of the nodes participating in an OpenStack deployment must use a dedicated, customized software image. You can either specify an existing software image, which is to be configured for OpenStack nodes, or provide the name for the new software image, which will then be created. The software image used for OpenStack nodes will have to be automatically customized by the wizard. It is therefore highly advisable to create a new software image dedicated only for OpenStack nodes, instead of sharing the same software image with nodes not being a part of the OpenStack cluster. The automatic customization of the software image involves, among other things, installing OpenStack RPM
18. node boots: /dev/sdb1 mounted under the specified data path. Whole block device (not a partition).

  The Data path field can be used to specify the data path for OSDs. By default its value is /var/lib/ceph/osd/$cluster-$id, where $cluster is the name of the Ceph instance (usually ceph), and $id is the unique OSD's id. It is recommended to use the default value of the data path field.

  Number of OSDs:
  Block devices:
  Data path:             /var/lib/ceph/osd/$cluster-$id
  Journal path:          /var/lib/ceph/osd/$cluster-$id/journal
  Journal size:
  Journal on partition:
  Shared journal device:
  Shared journal size:

Figure 3.7: Ceph Installation OSDs Editing: Block Device Path, OSD Path, And Journals, For Categories Or Nodes

The Edit option in figure 3.6 opens up a screen (figure 3.7) that allows the editing of the properties of existing or newly added Ceph OSDs, for a node or category. In this screen:

- When considering the Number of OSDs and the Block devices, it is best to set either the Number of OSDs, or the Block devices. Setting both the number of OSDs and block devices is also possible, but then the number of OSDs must match the number of block devices.
- If only a number of OSDs is set, and the block devices field is left blank, then each OSD is given its own filesystem under the data path specified.
- Block devices can be set as a comma- or a space-separated list, with no difference in meaning:

Example

/dev
19. /sda /dev/sdb /dev/sdc and /dev/sda,/dev/sdb,/dev/sdc are equivalent.

- For the OSD Data path, the recommended and default value is:
  /var/lib/ceph/osd/$cluster-$id
  Here, $cluster is the name of the Ceph instance (usually ceph), and $id is a number starting from 0.
- For the Journal path, the recommended and default value is:
  /var/lib/ceph/osd/$cluster-$id/journal
- The Journal size, in MiB, can be set for the category or node. A value set here overrides the default global journal size setting (figure 3.3). This is just the usual convention, where a node setting can override a category setting, and a node or category setting can both override a global setting.
  Also, just like in the case of the global journal size setting, a journal size for categories or nodes must always be greater than zero. Defining a value of 0 MiB means that the default that the Ceph software itself provides is set. At the time of writing (March 2015), the Ceph software provides a default of 5 GiB.
  The Journal size for a category or node is unset by default, which means that the value set for Journal size in this screen is determined by whatever the global journal size setting is, by default.
- Setting Journal on partition to yes means that the OSD uses a dedicated partition. In this case, the disk setup used is modified so that the first partition, with a size of Jour
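A journal size for a category can also be set from cmsh, via the osdassociations submode of the cephosd role (section 3.3.2). This is a sketch only; the association name osd0 and the size value are assumptions:

Example

[bright71]% category use default
[bright71->category[default]]% roles
[bright71->category[default]->roles]% use cephosd
[bright71->category[default]->roles[cephosd]]% osdassociations
[bright71->...->osdassociations]% use osd0
[bright71->...->osdassociations[osd0]]% set journalsize 2048
[bright71->...->osdassociations*[osd0*]]% commit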
20. software image choice is default-image.
- Use existing software image: an existing software image can be set. The only value for the existing image, in a newly configured cluster, is default-image.
- Use software image from category: the software image that the virtual node category inherits from its base category is set as the software image.

Setting the software image means that the wizard will copy over the properties of the associated base image to the new software image, and configure the new software image with the required virtualization modules. The instance that uses this category then uses a modified image, with virtualization modules, that enable it to run on a virtual node.

- If the Use existing category button is selected, then:
  - The Virtual node category can be selected from the existing categories. In a newly installed cluster, the only possible value is default.
  - Setting the category means that the wizard will copy over the properties of the existing category to the new virtual category, and configure the new software image with the required virtualization modules. The instance that uses the configured image is then able to run it on a virtual node.

The virtual nodes can be configured to be assigned one of the following types of addresses:
- DHCP
- Static

The addresses in either case go in a sequence, and begin with an address that the administrator sets in the wizard.
21. 19 DHCP And Static IP Addresses
2.2.20 Floating IPs
2.2.21 External Network Starting Floating IP
2.2.22 External Network Ending Floating IP
2.2.23 VNC Proxy Hostname
2.2.24 Nova Compute Hosts
2.2.25 Neutron Network Node
2.2.26 Pre-deployment Summary
2.2.27 The State After Running cm-openstack-setup
3 Ceph Installation
3.1 Ceph Introduction
3.1.1 Ceph Object And Block Storage
3.1.2 Ceph Software Considerations Before Use
3.1.3 Hardware For Ceph Use
3.2 Ceph Installation With cm-ceph-setup
3.2.1 cm-ceph-setup
3.2.2 Starting With Ceph Installation, Removing Previous Ceph Installation
3.2.3 Ceph Monitors Configuration
3.2.4 Ceph OSDs Configuration
3.3 Checking And Getting Familiar With Ceph Items After cm-ceph-setup
3.3.1 Checking On Ceph And Ceph-related Files From The Shell
3.3.2 Ceph Management With cmgui And cmsh
3.4 RADOS GW Installation, Initialization, And Properties
3.4.1 RA
22. 2.1.19 VNC Proxy Hostname
2.1.20 Summary
2.2 Installation Of OpenStack From The Shell
2.2.1 Start Screen
2.2.2 Informative Text Prior To Deployment
2.2.3 Pre-Setup Suggestions
2.2.4 MySQL root And OpenStack admin Passwords
2.2.5 Reboot After Configuration
2.2.6 Ceph
2.2.7 Internal Network To Be Used For OpenStack
2.2.8 User Instances
2.2.9 Virtual Instance Access To Internal Network
2.2.10 Network Isolation Type
2.2.11 Choosing The Network That Hosts The User Networks
2.2.12 Setting The Name Of The Hosting Network For User Networks
2.2.13 Setting The Base Address Of The Hosting Network For User Networks
2.2.14 Setting The Number Of Netmask Bits Of The Hosting Network For User Networks
2.2.15 Enabling Support For Bright-managed Instances
2.2.16 Starting IP Address For Bright-managed Instances
2.2.17 Ending IP Address For Bright-managed Instances
2.2.18 Number Of Virtual Nodes For Bright-managed Instances
2.2.
23. ADOS, a lower-level interface that the gateway relies on. If RADOS Object Gateway is enabled, it means that users have access to object storage using Ceph, instead of using the OpenStack Swift reference project implementation for object storage. The RADOS Object Gateway can be added separately from this OpenStack installation wizard, after it has completed, by using the cm-radosgw-setup utility (section 3.4.1).

An advantage of Ceph, and one of the reasons for its popularity in OpenStack implementations, is that it supports volume snapshots in OpenStack. Snapshotting is the ability to take a copy of a chosen storage, and is normally taken to mean using copy-on-write (COW) technology. More generally, assuming enough storage is available, non-COW technology can also be used to make a snapshot, despite its relative wastefulness.

In contrast to Ceph, the reference Cinder implementation displays an error if attempting to use the snapshot feature, due to its NFS driver. The administrator should understand that root or ephemeral storage are concepts that are valid only inside the Nova project, and are completely separate storages from Cinder. These have nothing to do with the Cinder reference implementation, so that carrying out a snapshot for these storages does not display such an error.

2. Compute hosts need to have the root and ephemeral device data belonging to their hosted virtual machines stored somewhere. The default directory location for these images
24. Bright Cluster Manager 7.1 OpenStack Deployment Manual
Revision: 6821
Date: Thu, 10 Dec 2015

©2015 Bright Computing, Inc. All Rights Reserved. This manual or parts thereof may not be reproduced in any form unless permitted by contract or by written permission of Bright Computing, Inc.

Trademarks
Linux is a registered trademark of Linus Torvalds. PathScale is a registered trademark of Cray, Inc. Red Hat and all Red Hat-based trademarks are trademarks or registered trademarks of Red Hat, Inc. SUSE is a registered trademark of Novell, Inc. PGI is a registered trademark of NVIDIA Corporation. FLEXlm is a registered trademark of Flexera Software, Inc. ScaleMP is a registered trademark of ScaleMP, Inc. All other trademarks are the property of their respective owners.

Rights and Restrictions
All statements, specifications, recommendations, and technical information contained herein are current or planned as of the date of publication of this document. They are reliable as of the time of this writing, and are presented without warranty of any kind, expressed or implied. Bright Computing, Inc. shall not be liable for technical or editorial errors or omissions which may occur in this document. Bright Computing, Inc. shall not be liable for any damages resulting from the use of this document.

Limitation of Liability and Damages Pertaining to Bright Computing, Inc.
The Bright Cluster Manager product principally consists of free
25. Cluster Manager.
- The UCS Deployment Manual describes how to deploy the Cisco UCS server with Bright Cluster Manager.

If the manuals are downloaded and kept in one local directory, then in most pdf viewers, clicking on a cross-reference in one manual that refers to a section in another manual opens and displays that section in the second manual. Navigating back and forth between documents is usually possible with keystrokes or mouse clicks. For example, <Alt>-<Backarrow> in Acrobat Reader, or clicking on the bottom leftmost navigation button of xpdf, both navigate back to the previous document.

The manuals constantly evolve to keep up with the development of the Bright Cluster Manager environment and the addition of new hardware and/or applications. The manuals also regularly incorporate customer feedback. Administrator and user input is greatly valued at Bright Computing, so any comments, suggestions, or corrections will be very gratefully accepted at manuals@brightcomputing.com.

0.3 Getting Administrator-Level Support
Unless the Bright Cluster Manager reseller offers support, support is provided by Bright Computing over e-mail via support@brightcomputing.com. Section 10.2 of the Administrator Manual has more details on working with support.

Introduction
OpenStack is an open source implementation of cloud services. It is currently (2015) undergoing rapid development, and its roadmap is promising. A relatively stable
26. DOS GW Installation And Initialization With cm-radosgw-setup
3.4.2 Setting RADOS GW Properties
3.4.3 Turning Keystone Authentication On And Off For RADOS GW

Preface
Welcome to the OpenStack Deployment Manual for Bright Cluster Manager 7.1.

0.1 About This Manual
This manual is aimed at helping cluster administrators install, understand, configure, and manage basic OpenStack capabilities easily using Bright Cluster Manager. The administrator is expected to be reasonably familiar with the Administrator Manual.

0.2 About The Manuals In General
Regularly updated versions of the Bright Cluster Manager 7.1 manuals are available on updated clusters by default at /cm/shared/docs/cm. The latest updates are always online at http://support.brightcomputing.com/manuals.
- The Installation Manual describes installation procedures for a basic cluster.
- The Administrator Manual describes the general management of the cluster.
- The User Manual describes the user environment and how to submit jobs for the end user.
- The Cloudbursting Manual describes how to deploy the cloud capabilities of the cluster.
- The Developer Manual has useful information for developers who would like to program with Bright Cluster Manager.
- The OpenStack Deployment Manual describes how to deploy OpenStack with Bright Cluster Manager.
- The Hadoop Deployment Manual describes how to deploy Hadoop with Bright
27. Figure 2.51: Nova Compute Hosts

The Nova compute hosts screen (figure 2.51) prompts the administrator to set the nodes to use as the hosts for the virtual machines.

2.2.25 Neutron Network Node

  The Neutron network node is responsible for DHCP, floating IPs, and routing within the virtual network infrastructure (neutron-dhcp-agent, neutron-l3-agent, neutron-metadata-agent). The network node can also be shared with other OpenStack nodes; e.g. on small deployments it can be the same as one of the nova compute hosts. The neutron-server process is always run on the head node(s).
  Which compute node to use for the network node? (default: node001)

Figure 2.52: Neutron Network Node

The Neutron network node screen (figure 2.52) prompts the administrator to set the node to use for the Neutron network node.

2.2.26 Pre-deployment Summary

  OpenStack is ready to be deployed.
  SUMMARY
  nodes with OpenStack roles:  2
  internal network:            internalnet
  noVNC hostname/access:       bright70.brightcomputing.com
  storage backend cinder:      nfs
  storage backend glance:      ceph
  storage backend nova:        ceph
  Rados Gateway:               yes
  USER INSTANCES:              yes
  network isolation:           VXLAN
  internal net access:         yes
  EXTERNAL NETWORK:            yes
  external network:            externalnet
  IP range (floating IPs):     10.2.0.64 to 10.2.0.128
  outbound access for VMs:     no
  BRIGHT MANAGED INSTANCES:
  vnode count:
  vnode IP range:
28. GW is on. The following table summarizes how authentication is turned on and off:

  Via                   RADOS GW Access On                      RADOS GW Access Off
  --------------------- --------------------------------------- ---------------------------------------
  command line          cm-radosgw-setup -o                     cm-radosgw-setup
  installation
  cmsh (device,         set enablekeystoneauthentication yes;   set enablekeystoneauthentication no;
  radosgateway role)    commit                                  commit
  cmgui (device,        Ticked checkbox for Enable              Unticked checkbox for Enable
  Ceph subtab)          Keystone Authentication                 Keystone Authentication

Table 3.4.3: Allowing OpenStack instances to access Ceph RADOS GW
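After toggling, whether the gateway is answering at all can be probed with a plain HTTP request against the RADOS GW server port listed in section 3.4.2. This check is a sketch only: the node name and port are taken from the earlier examples, and the exact response body will vary with the configuration:

Example

root@bright71 ~# curl -i http://node001:18888/   # expect an HTTP response from the gateway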
29. a>/var/lib/ceph/osd/$cluster-$id</osddata>
      <journaldata>/var/lib/ceph/osd/$cluster-$id/journal</journaldata>
      <journalsize>0</journalsize>
    </osdassociation>
  </osd>
</cephconfig>

A disk setup (section 3.9.3 of the Administrator Manual) can be specified to place the OSDs on an XFS device, on partition a2, as follows:

Example

<diskSetup>
  <device>
    <blockdev>/dev/sda</blockdev>
    <partition id="a1">
      <size>10G</size>
      <type>linux</type>
      <filesystem>ext3</filesystem>
      <mountPoint>/</mountPoint>
      <mountOptions>defaults,noatime,nodiratime</mountOptions>
    </partition>
    <partition id="a2">
      <size>10G</size>
      <type>linux</type>
      <filesystem>xfs</filesystem>
      <mountPoint>/var</mountPoint>
      <mountOptions>defaults,noatime,nodiratime</mountOptions>
    </partition>
    <partition id="a3">
      <size>2G</size>
      <type>linux</type>
      <filesystem>ext3</filesystem>
      <mountPoint>/tmp</mountPoint>
      <mountOptions>defaults,noatime,nodiratime,nosuid,nodev</mountOptions>
    </partition>
    <partition id="a4">
      <size>1G</size>
      <type>linux swap</type>
    </partition>
    <partition id="a5">
      <size>max</size>
30. alue of VNC Proxy Hostname should be set to bright70.brightcomputing.com.

2.1.20 Summary

[wizard steps: 1 Introduction, 2 General questions, 3 User Instances, 4 Bright-managed instances, 5 External Network, 6 Summary]

Summary
  The OpenStack setup wizard has been completed; however, the specified OpenStack deployment configuration has not been deployed to the cluster yet. This can be done automatically by clicking the Deploy button below. Alternatively, clicking the Show Configuration button will produce an XML configuration file, which can be further customized if needed, and then used as the input configuration file for the cm-openstack-setup command line utility.

  Overview
  Support for User Instances:            yes
  Network isolation:                     VXLAN
  User instances access to internalnet:  yes
  Support for Bright-managed instances:  yes
  Floating IPs:                          yes
  Compute nodes:                         node001
  Network node:                          node001

  Automatically deploying the configuration will take several minutes, during which a log window will be shown, displaying the progress of the deployment.
  Press Deploy to start deployment. Press Cancel to close the wizard (no changes will be introduced). Press Save Configuration to get the OpenStack deployment configuration as XML.
  Show Configuration / Save Configuration / Cancel / Previous / Deploy

Figure 2.21: Summary Screen, Viewing And Saving The Configuration

The summary screen (figure 2.21) g
31. and related data is under /var/lib/nova. If Ceph is not enabled, then a local mount point of every compute host is used for data storage, by default. If Ceph is enabled, then Ceph storage is used instead, according to its mount configuration.

In either case, compute hosts that provide storage for virtual machines by default store the virtual machine images under the /var partition. However, the default partition size for /var is usually only large enough for basic testing purposes. For production use, with or without Ceph, the administrator is advised to either adjust the size of the existing partition using the disksetup command (section 3.9.3 of the Administrator Manual), or to use sufficient storage mounted from elsewhere.

To change the default paths used by OpenStack images, the following two path variables in the Nova configuration file, /etc/nova/nova.conf, should be changed (a sketch of the edited file follows this excerpt):
1. state_path=/var/lib/nova
2. lock_path=/var/lib/nova/tmp

If state_path has been hardcoded by the administrator elsewhere in the file, the location defined there should also be changed accordingly.

2.1.7 OpenStack Internal Network Selection

[wizard steps: 3 User Instances, 4 Bright-managed instances, 5 External Network, 6 Summary & Deployment]

Internal Network Selection
  There are multiple internal networks configured in this cluster. One of those internal networks must now be selected as the main
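As a concrete illustration of the nova.conf change described above, the two variables can be pointed at a larger mounted filesystem. A sketch only: the target path /storage/nova is an assumption, and any sufficiently large mount will do:

Example

# /etc/nova/nova.conf (fragment)
[DEFAULT]
# move instance images and related data off the small /var partition
state_path=/storage/nova
lock_path=/storage/nova/tmp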
32. ard runs. The Learn more button displays a pop-up screen to further explain what information is gathered and what the wizard intends to do with the information.

The main overview screen also asks for input on the following:
- Should the regular nodes that become part of the OpenStack cluster be rebooted? A reboot installs a new image onto the node, and is recommended if interface objects need to be created in CMDaemon for OpenStack use. Creating the objects is typically the case during the first run ever for a particular configuration. Subsequent runs of the wizard do not normally create new interfaces, and for small changes do not normally require a node reboot. If in doubt, the reboot option can be set to enabled. Reboot is enabled as the default.
- Should a dry run be done? In a dry run, the wizard pretends to carry out the installation, but the changes are not really implemented. This is useful for getting familiar with options and their possible consequences. A dry run is enabled as the default.

2.1.2 MySQL Credentials & OpenStack admin User

[wizard steps: 1 Introduction, 3 User Instances, 4 Bright-managed instances, 5 External Network, 6 Summary & Deployment]

MySQL Credentials & OpenStack Admin User
  Please specify the password for the root user of the MySQL server on the head node. This password will be used by the wizard, and will then be forgotten. It wi
33. ate user instances? no

Figure 2.35: User Instances

The user instances screen (figure 2.35) lets the administrator decide if end users are to be allowed to create user instances.

2.2.9 Virtual Instance Access To Internal Network

  This OpenStack deployment can optionally be configured to give the virtual instances created by end users access to the cluster's internal network. Doing so might have serious security implications, and is advised only if the users creating the VMs can be considered trusted. Essentially, enabling this gives users the ability to start a virtual machine, to which they have root access, that is connected directly to the cluster's internal network.
  Do you want to give user instances access to the internal network? no

Figure 2.36: Virtual Instance Access To Internal Network

The screen in figure 2.36 lets the administrator allow virtual instances access to the internal network. This should only be allowed if the users creating the instances are trusted. This is because the creator of the instance has root access to the instance, which is in turn connected directly to the internal network of the cluster, which means all the packets in that network can be read by the user.

2.2.10 Network Isolation Type

  Which network isolation type is to be used for isolating the traffic on user networks? User network isolation is optional, as user instances can be created o
34. 0.1 About This Manual
0.2 About The Manuals In General
0.3 Getting Administrator-Level Support
1 Introduction
2 OpenStack Installation
2.1 Installation Of OpenStack From cmgui
2.1.1 OpenStack Setup Wizard Overview
2.1.2 MySQL Credentials & OpenStack admin User
2.1.3 OpenStack Category Configuration
2.1.4 OpenStack Compute Hosts
2.1.5 OpenStack Network Node
2.1.6 Ceph Configuration
2.1.7 OpenStack Internal Network Selection
2.1.8 OpenStack Software Image Selection
2.1.9 User Instances
2.1.10 User Instance Isolation From Internal Cluster Network
2.1.11 Network Isolation
2.1.12 VXLAN Configuration
2.1.13 Dedicated Physical Networks
2.1.14 Bright-Managed Instances
2.1.15 Virtual Node Configuration
2.1.16 Inbound External Traffic
2.1.17 Allow Outbound Traffic
2.1.18 External Network Interface for Network Node
35. been carried out on the node. The reboot action is carried out by default, as shown in the preceding output, while the option is set by the cm-openstack-setup script (page 31). Next, the administrator can further configure the cluster to suit requirements.

Ceph Installation

3.1 Ceph Introduction
Ceph, at the time of writing, is the recommended storage software for OpenStack for serious use. The Ceph RADOS Gateway is a drop-in replacement for Swift, with a compatible API. Ceph is the recommended backend driver for Cinder, Glance, and Nova.

The current chapter discusses:
- The concepts and required hardware for Ceph (section 3.1)
- Ceph installation and management (section 3.2)
- RADOS GW installation and management (section 3.4)

3.1.1 Ceph Object And Block Storage
Ceph is distributed storage software. It is based on an object store layer called RADOS (Reliable Autonomic Distributed Object Store), which consists of Ceph components called OSDs (Object Storage Devices) and MONs (Monitoring Servers). These components feature heavily in Ceph. OSDs deal with storing the objects to a device, while MONs deal with mapping the cluster. OSDs and MONs together carry out object storage and block storage within the object store layer. The stack diagram of figure 3.1 illustrates these concepts:

[stack diagram, top to bottom: CephFS | RBD | RADOS GW; MDS; RADOS (OSD, MON); OS; Hardware]

Figure 3.1:
36. d to set up.

Since outgoing connections use one IP address for all instances, the remaining number of IP addresses is what is then available for Floating IPs. The possible connection options are therefore as indicated by the following table:

  Floating IP Addresses   Outbound Allowed   Inbound Allowed
  ----------------------- ------------------ -----------------
  Not enabled             no                 no
  One (set disabled)      yes                no
  Two or more (enabled)   yes                yes

2.1.17 Allow Outbound Traffic

[wizard steps: 1 Introduction, 2 General questions, 3 User Instances, 5 External Network, 6 Summary & Deployment]

Allow Outbound Traffic
  Floating IP configuration was disabled in the previous step, and the networking configuration will be altered to support user instances. Due to both of those facts, in order for user instances to have outbound network connectivity, a single IP address from the external network should be allocated.
  Enable outbound connectivity for user instances: Yes / No
  Outbound IP: 10.2.0.0
  Without the dedicated IP for outbound connections, user instances will only be able to reach the outside if they are also connected to the cluster's internal network. Note that earlier in the wizard, access to the cluster's internal network for user instances has been disabled. Should it be enabled? Yes / No
  Cancel / Previous

Figure 2.18: Allow Outbound Traffic Screen

The Allow Outbound Traffic screen appears
37. de categories dedicated for OpenStack, ensuring all selected OpenStack nodes are in an OpenStack-enabled category, introducing network interface changes to OpenStack nodes. Continue?

Figure 2.25: Informative Text Prior To Deployment

If deployment is selected in the preceding screen, an informative text screen (figure 2.25) gives a summary of what the script does.

2.2.3 Pre-Setup Suggestions

  Before starting deployment, take note of the following:
  Notice: A node named networknode was not found. It is advised, but not required, to rename one of the compute nodes to networknode and reboot it before deploying OpenStack. The node can then later be selected by the wizard as the network node of the OpenStack deployment. Renaming the node before deploying OpenStack is recommended, because changing the hostname of a network node after deployment can take more effort.

Figure 2.26: Pre-Setup Suggestions

The pre-setup suggestions screen (figure 2.26) suggests changes to be done before going on.

2.2.4 MySQL root And OpenStack admin Passwords

  What is the MySQL password for the root user? The script requires it to configure OpenStack MySQL databases for OpenStack services. (Password text is hidden.)

Figure 2.27: MySQL root Password Screen
38. e used to refer to a group of users.

  Service            OpenStack Project   Managed By Bright
  ------------------ ------------------- -----------------
  Compute            Nova                Y
  Object Storage     Swift               depends*
  Block Storage      Cinder              Y
  Networking         Neutron             Y
  Dashboard          Horizon             Y
  Identity Service   Keystone            Y
  Orchestration      Heat                Y
  Telemetry          Ceilometer          X
  Database Service   Trove               X
  Image Service      Glance              Y

  * Bright Cluster Manager does not manage the OpenStack reference implementation for Swift object storage, but does manage a replacement, the API-compatible Ceph RADOS Gateway implementation.

Not all of these projects are integrated, or needed, by Bright Cluster Manager for a working OpenStack system. For example, Bright Cluster Manager already has an extensive monitoring system, and therefore does not, for now, implement Ceilometer, while Trove is ignored for now because it is not yet production-ready. Projects that are not yet integrated can in principle be added by administrators on top of what is deployed by Bright Cluster Manager, even though this is not currently supported or tested by Bright Computing. Integration of the more popular of such projects, and greater integration in general, is planned in future versions of Bright Cluster Manager.

This manual explains the installation, configuration, and some basic use examples of the OpenStack projects that have so far been integrated with Bright Cluster Manager.

OpenStack Installation

To Use Ce
39. egory[default]->roles]% use cephosd
[bright71->category[default]->roles[cephosd]]% osdassociations
[bright71->...->osdassociations]% list -f name:10,osddata:30
name (key) osddata
---------- ------------------------------
osd0       /var/lib/ceph/osd/$cluster-$id
[bright71->...->osdassociations]% list -f name:10,journaldata:38,journalsize
name (key) journaldata                            journalsize
---------- -------------------------------------- -----------
osd0       /var/lib/ceph/osd/$cluster-$id/journal 0

The -f option is used here with the list command merely in order to format the output, so that it stays within the margins of this manual.

The cmgui equivalent of the preceding cmsh settings is accessed from within a particular Nodes or Categories item in the resource tree, then accessing the Ceph tab, and then choosing the OSD checkbox. The Advanced button allows cephosd role parameters to be set for the node or category.

OSD filesystem association extraconfigparameters setting
Extra configuration parameters can be set for an OSD filesystem association such as osd0 by setting values for its extraconfigparameters option. This is similar to how it can be done for Ceph general configuration (page 54):

Example
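The body of this example is cut off in this excerpt. A minimal sketch of what such a session could look like, with an assumed key=value pair a=b:

[bright71->category[default]->roles[cephosd]->osdassociations]% use osd0
[bright71->...->osdassociations[osd0]]% set extraconfigparameters "a=b"
[bright71->...->osdassociations*[osd0*]]% commit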
40. enStack deployment? Yes / No

  Bright-managed instances will be connected to the cluster's internal network (internalnet). OpenStack needs to know which range of IP addresses it can use for Bright-managed instances. The range should not include any IP addresses which are already in use by other devices on the internal network.
  Please specify the allocation pool for the Bright-managed instances:
  IP range start: 10.141.96.0
  IP range end:   10.141.159.255

Figure 2.15: Bright Managed Instances Screen

The Bright-managed instances screen (figure 2.15) allows administrators to enable Bright-managed instances. These instances are also known as virtual nodes. Administrators can then run OpenStack instances using Bright Cluster Manager.

End users are allowed to run OpenStack instances managed by OpenStack only if explicit permission has been given. This permission is the default that is set earlier on, in the user instances screen (figure 2.10).

If Bright-managed instances are enabled, then an IP allocation scheme must be set. The values used to define the pool are:
- IP range start: by default, this is 10.141.96.0
- IP range end: by default, this is 10.141.159.255

The screens shown in figures 2.16 to 2.20 are displayed next, if Bright-managed instances are enabled.

2.1.15 Virtual Node Configuration

Virtual Node Configuration
Ho
41. er of virtual networks.

  Do you want to use VLANs or VXLANs?
  VLAN / VXLAN
  Cancel / Previous

Figure 2.12: Network Isolation Screen

The network isolation screen (figure 2.12) allows the administrator to set the virtual LAN technology that user instances can use for their user-defined private networks. Using virtual LANs isolates the IP networks used by the instances from each other. This means that the instances attached to one private network will always avoid network conflicts with the instances of another network, even if using the same IP address ranges.

Bright Cluster Manager supports two virtual LAN technologies:
- VLAN: VLAN technology tags Ethernet frames as belonging to a particular VLAN. However, it requires manual configuration of the VLAN IDs in the switches, and the number of IDs available is also limited, to 4094.
- VXLAN: VXLAN technology has more overhead per packet than VLANs, because it adds a larger ID tag, and also because it encapsulates layer 2 frames within layer 3 IP packets. However, unlike with VLANs, configuration of the VXLAN IDs happens automatically, and the number of IDs available is about 16 million.

By default, VXLAN technology is chosen. This is because, for VXLAN, the number of network IDs available, along with the automatic configuration of these IDs, means that the cluster can scale further and more easily than for VLAN.

Selecting a network isolation type is mandatory, unless user instances are co
42. es the cm-openstack-setup utility during deployment, hidden from normal view. The installation can also be done directly with cm-openstack-setup.

The cm-openstack-setup utility is a less-preferred alternative to the installation of OpenStack from cmgui. The cm-openstack-setup utility is a part of the standard cluster-tools package. Details on its use are given in its manual page (man 8 cm-openstack-setup). When run, the regular nodes that are to run OpenStack instances are rebooted by default at the end of the dialogs, in order to deploy them.

A prerequisite for running cm-openstack-setup is that the head node should be connected to the distribution repositories.

A sample cm-openstack-setup wizard session is described next, starting from section 2.2.1. The session runs on a cluster consisting of one head node and one regular node. The wizard can be interrupted gracefully with a <ctrl-c>. (A sketch of starting such a session is shown after this excerpt.)

2.2.1 Start Screen

  This utility can be used to create a new OpenStack cloud deployment:
  deploy   Deploy OpenStack
  remove   Remove OpenStack deployment
  exit     Exit the utility

Figure 2.24: Start Screen

The start screen (figure 2.24) lets the administrator:
- deploy Bright Cluster Manager OpenStack
- remove Bright Cluster Manager's OpenStack, if it is already on the cluster
- exit the installation

Removal removes OpenStack-related database entries, roles, networks, vir
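As referenced above, the session itself is started from the head node shell, after which the dialogs shown in the surrounding excerpts follow. A minimal sketch, with an illustrative prompt:

Example

root@bright71 ~# cm-openstack-setup
# interactive dialogs follow; <ctrl-c> interrupts the wizard gracefully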
...category that will be used as the template for several OpenStack categories that are going to be created by the wizard.

2.1.4 OpenStack Compute Hosts

OpenStack Compute Hosts: In OpenStack deployments, the majority of machines will likely be used to host the virtual machines managed by OpenStack. These nodes will be referred to as the compute hosts. In this screen, a Bright category will be set up to hold all the compute hosts. Do you want to use an existing category for the compute hosts, or create a new category? Using the existing category will effectively assign the OpenStack compute role to all the nodes already in that category. (Use existing category / Create new category)
OpenStack compute hosts category: openstack-compute-hosts
Please select any additional nodes which are to be moved to the category specified above: node001

Figure 2.5: OpenStack Compute Hosts Screen

The compute hosts configuration screen (figure 2.5) allows the administrator to take nodes that are still available and put them into a category that will have a compute role. The category can be set either to be an existing category, or a new category can be created. If an existing
...changes written into it about a minute after the commit, and may then look like the following (some lines removed for clarity):

[global]
auth client required = cephx
osd journal size = 128
[mds]
host = rabbit
host2 = bunny

As usual in cmsh operations (section 2.5.3 of the Administrator Manual):
• The set command clears extraconfigparameters before setting its value.
• The removefrom command operates as the opposite of the append command, by removing key-value pairs from the specified section.
There are similar extraconfigparameters for Ceph OSD filesystem associations (page 55) and for Ceph monitoring (page 56).

Ceph OSD Properties
From within ceph mode, the osdinfo command for the Ceph instance displays the nodes that are providing OSDs, along with their OSD IDs:

Example
[bright71->ceph]% osdinfo ceph
OSD id    Node       OSD name
...       node001    osd0
...       node002    osd0

Within a device or category mode, the roles submode allows parameters of an assigned cephosd role to be configured and managed:

Example
[bright71->category[default]->roles]% show cephosd
Parameter                   Value
--------------------------- ----------------------------
Name                        cephosd
OSD associations            <1 in submode>
Provisioning associations   <0 internally used>
Revision
Type                        CephOSDRole

Within the cephosd role, the templates for OSD filesystem associations (osdassociations) can be set or modified:

Example
[bright71->cat...
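The example above is cut off in this copy, but presumably descends from the category into its cephosd role. A hypothetical cmsh session in that spirit, shown only as a sketch (the submode name is taken from the "<1 in submode>" listing above; the exact listing format is illustrative, not authoritative):

Example
[bright71->category[default]->roles]% use cephosd
[bright71->category[default]->roles[cephosd]]% osdassociations
[bright71->...->osdassociations]% list

The same set/append/removefrom plus commit pattern described earlier for extraconfigparameters would then apply to any values changed here.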
...that level of simultaneous capability.
By default, if the VXLAN network and VXLAN network object do not exist, then the wizard helps the administrator create a vxlanhostnet network and network object (section 2.1.12). The network is attached to, and the object is associated with, all non-head nodes taking part in the OpenStack deployment. If a vxlanhostnet network is pre-created beforehand, then the wizard can guide the administrator to associate a network object with it, and to ensure that all the non-head nodes participating in the OpenStack deployment are attached and associated accordingly.
The VXLAN network runs over an IP network. It should therefore have its own IP range, and each node on that network should have an IP address. By default, a network range of 10.161.0.0/16 is suggested in the VXLAN configuration screen (section 2.1.12, figure 2.13).
The VXLAN network can run over a dedicated physical network, but it can also run over an alias interface on top of an existing internal network interface. The choice is up to the administrator.
It is possible to deploy OpenStack without VXLAN overlay networks if user instances are given access to the internal network. Care must then be taken to avoid IP addressing conflicts.
• Changing the hostname of a node after OpenStack has been deployed is possible, but involves some manual steps. So, instead, it is recommended to change the hostnames before running the wizard. For example, to
...the intended configuration. Then, in the back end, largely hidden from the user, it runs the text-based cm-openstack-setup script (section 2.2) with this configuration on the active head node. In other words, the wizard can be regarded as a GUI front end to the cm-openstack-setup utility.

The practicalities of executing the wizard: The explanations given by the wizard during its execution steps are intended to be verbose enough so that the administrator can follow what is happening.
The wizard is accessed via the OpenStack resource in the left pane of cmgui (figure 2.1). Launching the wizard is only allowed if the Bright Cluster Manager license (Chapter 4 of the Installation Manual) entitles the license holder to use OpenStack.

To configure OpenStack on your cluster, run the Setup Wizard.

Figure 2.1: The Setup Wizard Button In cmgui's OpenStack Resource

The wizard runs through the screens in sections 2.1.1-2.1.20, described next.

2.1.1 OpenStack Setup Wizard Overview

OpenStack Setup Wizard Overview: This wizard helps the administrator
...should be set. For example: eth0.
• Create tagged VLAN interface: If this option is chosen, then a tagged VLAN interface is used for the connection from the network node to the external network.
  - Base interface: The base interface is selected. Typically the interface selected is BOOTIF.
  - Tagged VLAN ID: The VLAN ID for the interface is set.

2.1.19 VNC Proxy Hostname

VNC Proxy Hostname: In order to be able to remotely access the virtual consoles of the virtual nodes from anywhere on the external networks, a Fully Qualified Domain Name (FQDN), domain name, or an IP address resolving and routing to the head node has to be specified. It will be used as the so-called VNC Proxy Hostname. What is the externally visible FQDN, or an external IP, of the head node?
VNC Proxy Hostname: bright71.brightcomputing.com

Figure 2.20: VNC Proxy Hostname Screen

The VNC proxy hostname screen (figure 2.20) sets the FQDN or external IP address of the head node of the OpenStack cluster, as seen by a user that would like to access the consoles of the virtual nodes from the external network.
Example: If the hostname resolves within the brightcomputing.com network domain, then for an OpenStack head hostname that resolves to bright71, the VNC
...provide. For example, the OpenStack Cinder project provides block storage capabilities to OpenStack via the implementation of, for example, NFS or Ceph block storage. OpenStack's block storage service can therefore be implemented by the interchangeable backends of the NFS or Ceph projects. As far as the user is concerned, the result is the same.
An analogy to OpenStack is operating system packaging, as provided by distributions. An operating system distribution consists of subsystems, maintained as packages and their dependencies. Some subsystems provide capabilities to the operating system via the implementation of a backend service. The service can often be implemented by interchangeable backends for the subsystem.
A specific example for an operating system distribution would be the mailserver subsystem, which provides mail delivery capabilities to the operating system via the implementation of, for example, Postfix or Sendmail. The mailserver package and dependencies can therefore be implemented by the interchangeable backends of the Postfix or Sendmail software. As far as the e-mail user is concerned, the end result is the same.
The project that implements the backend can also change, if the external functionality of the project remains the same. Some of the more common OpenStack projects are listed in the following table. The term "projects" must not be confused with the term used in OpenStack elsewhere, where projects, or sometimes tenants, are
Deploying OpenStack without configuring it for either type of instance is also possible, but such an OpenStack cluster is very limited in its functionality, and typically has to be customized further by the administrator.
Both types of instances are virtual machines hosted within a hypervisor managed by the OpenStack compute project, Nova. The main differences between these two types of instances include the following:
• User instances are typically created and managed by the end users of the deployment, either directly via the OpenStack API, or via the OpenStack Dashboard, outside of direct influence from Bright Cluster Manager. User instances are provisioned using any OpenStack-compatible software image provided by the user, and thus have no CMDaemon running on them. User instances are attached to user-created virtual networks. Optionally, they can be allowed to connect directly to the cluster's internal network (section 2.1.10). The number of user instances that can be run is not restricted in any way by the Bright Cluster Manager license.
• Bright-managed instances, sometimes also called virtual nodes, are typically created and managed by the cluster/cloud administrators using CMDaemon, via cmsh, cmgui, or pythoncm. They are provisioned using a Bright software image, and therefore have CMDaemon running on them. Because of CMDaemon, the administrator can manage Bright-managed instances
...implementation of OpenStack, based on the OpenStack Juno release (https://www.openstack.org/software/juno), is integrated into the Bright Cluster Manager 7.1 for OpenStack edition. It is supported for versions of RHEL7 onwards.
By "relatively stable" it is meant that OpenStack itself is usable and stable for regular use in common configurations, but not quite production-ready when carrying out some less common configuration changes. In a complex and rapidly evolving product such as OpenStack, the number of possible unusual configuration changes is vast. As a result, the experience of Bright Computing is that Bright Cluster Manager can sometimes run into OpenStack issues while implementing the less common OpenStack configurations. As one of the supporting organizations of OpenStack, Bright Computing is committed to working together with OpenStack developers to help Bright customers resolve any such issue. The end result, after resolving the issue, means that there is a selection pressure that helps evolve that aspect of OpenStack to become convenient and stable for regular use. This process benefits all participants in the OpenStack software ecosystem.
OpenStack consists of subsystems, developed as software projects. A software project provides capabilities to OpenStack via the implementation of a backend service, and thereby provides an OpenStack service. The OpenStack service can thus be implemented by interchangeable backends, which projects can provide
...existing category can be used, or a new category can be created. If a new category is to be created, then openstack-network-hosts is its suggested name.
The option to specify a node as a network node is most convenient in the typical case when all of the non-head nodes have been set as belonging to the compute node category in the preceding screen (figure 2.5). Indeed, in the case that all non-head nodes have been set to be in the compute node category, the category options displayed in figure 2.6 are then not displayed, leaving only the option to specify a particular node.
A network node inherits many of the OpenStack-related compute node settings, but will have some exceptions to the properties of a compute node. Many of the exceptions are taken care of by assigning the openstacknetwork role to any network nodes, or network node categories, as is done in this screen.

2.1.6 Ceph Configuration

Ceph Configuration: Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. It can also be used as a drop-in replacement for OpenStack's Swift. This wizard can configure the existing Bright-managed Ceph deployment as a backend data storage system for volumes (Cinder), images (Glance), and root/ephemeral disks
...existing category is used, then default can be chosen. If Ceph has been integrated with Bright Cluster Manager, then the ceph category is another available option. Creating a new category is recommended, and is the default option. The suggested default category name is openstack-compute-hosts.

2.1.5 OpenStack Network Node

OpenStack Network Node: OpenStack deployments require a single dedicated network node to run several networking-related services. To do so, you can either:
• select an existing category containing a single node, or
• create a new category and put a single node into it (a node which has not been specified as a compute host in the previous screen), or
• do not create a new category, but instead select a single node to have the network node role assigned to it directly (in this case the node can also be a compute host specified in the previous screen).
(Use existing category / Create new category / Don't create category, specify a node: node001)

Figure 2.6: OpenStack Network Node Screen

The network node screen (figure 2.6) makes a node the network node, or makes a category of nodes the network nodes. A network node is a dedicated node that handles OpenStack networking services. If a category is used to set up network nodes, then either an existing
...gives a summary of the configuration. The configuration can be changed in cmgui if the administrator goes back through the screens to adjust settings. The full configuration is kept in an XML file, which can be viewed by clicking on the Show Configuration button. The resulting read-only view is shown in figure 2.22, and looks something like:

<properties>
  <rebootNodesUponCompletion>true</rebootNodesUponCompletion>
  <mysqlRootPassword>system</mysqlRootPassword>
  <mainAdminPassword>system</mainAdminPassword>
  <baseCategoryForNewlyCreatedOpenStackCategories>default</baseCategoryForNewlyCreatedOpenStackCategories>
  <backendCinder>ceph</backendCinder>
  <backendGlance>ceph</backendGlance>
  <backendNova>ceph</backendNova>
  <setupRadosgw>yes</setupRadosgw>
  <virtualNodeCount>3</virtualNodeCount>
  <virtualNodePrefix>vnode</virtualNodePrefix>
  <virtualNodeCategory>virtual-nodes</virtualNodeCategory>
  <virtualNodeSoftwareImage>default-image</virtualNodeSoftwareImage>
  <softwareImage>openstack-image</softwareImage>
  <VNCProxyHostname>openstack6.brightcomputing.com</VNCProxyHostname>
  <roles>
    <computeRole>
      <categories>
        <item>openstack-compute-hosts</item>
      </categories>
    </computeRole>
...just like regular nodes under Bright Cluster Manager. Bright-managed instances are always connected to the cluster's internal network, but can also be attached to user-created networks.
To allow user instances to be created, the Yes radio button should be ticked in this screen. This will lead to the wizard asking about user instance network isolation (VLAN/VXLAN). Whether or not Bright-managed instances are to be allowed is set later, in the Bright-managed instances screen (figure 2.15).

2.1.10 User Instance Isolation from Internal Cluster Network

User Instance Isolation from Internal Cluster Network: This OpenStack deployment can optionally be configured to give the virtual instances created by end users access to the cluster's internal network. Doing so might have serious security implications, and is advised only if the users creating the VMs can be considered trusted. Essentially, enabling this gives users the ability to start a virtual machine, to which they have root access, that is connected directly to the cluster's internal network. Do you want to allow user instances to be able to connect to the cluster's internal network (internalnet)? (Yes / No)

Figure 2.11: User Instance Isolation from Internal Cluster Network Screen

If the creation of user instances has been enabled (figure 2.10), the
...network. (Create new / Use existing: no available internal networks; internalnet cannot be used for VXLAN host.)
Base address: 10.161.0.0
Netmask bits: 16
Name: vxlanhostnet

Figure 2.13: VXLAN Configuration Screen

The VXLAN screen (figure 2.13) shows configuration options for the VXLAN network, if VXLAN has been chosen as the network isolation technology in the preceding screen. If the network isolation technology chosen was VLAN, then a closely similar screen is shown instead. For the VXLAN screen, the following options are suggested, with overrideable defaults as listed:
• VXLAN Range start: default 1
• VXLAN Range end: default 50000
The VXLAN range defines the number of user IP networks that can exist at the same time. While the range can be set to be 16 million, it is best to keep it to a more reasonable size, such as 50,000, since a larger range slows down Neutron significantly.
An IP network is needed to host the VXLANs and allow the tunneling of traffic between VXLAN endpoints. This requires:
• either choosing an existing network that has already been configured in Bright Cluster Manager (but not internalnet),
• or it requires specifying the following, in order to create the network:
  - Base address: default 10.161.0.0
  - Netmask bits: default 16
  - A new network Name: default vxlanhostnet
VXLAN networking uses a multicast address to
...will not be stored anywhere. The password is required to create individual MySQL databases for various OpenStack services. (Show password)
The main administrative user in an OpenStack cluster is the admin user. Please specify the desired password for the admin user account. (Show password)

Figure 2.3: MySQL Credentials & OpenStack admin User Screen

The MySQL and OpenStack credentials screen (figure 2.3) allows the administrator to set passwords for the MySQL root user and the OpenStack admin user. The admin user is how the administrator logs in to the Dashboard URL to manage OpenStack when it is finally up and running.

2.1.3 OpenStack Category Configuration

OpenStack Category Configuration: OpenStack nodes will be distributed across a set of categories, which will specialize them according to the roles they play in the OpenStack deployment (compute, network, storage, etc.). These categories can be specified at later stages. At this stage, please specify which existing category is to be used as the base for any newly created OpenStack categories later in the wizard.
OpenStack category: default

Figure 2.4: OpenStack Category Configuration Screen

The category configuration screen (figure 2.4) sets the node category
...also be used for querying the status of the Ceph cluster.

Figure 3.8: Ceph Installation Completion

After selecting the Finish option of figure 3.6, the Ceph setup proceeds. On successful completion, a screen as in figure 3.8 is displayed.

3.3 Checking And Getting Familiar With Ceph Items After cm-ceph-setup

3.3.1 Checking On Ceph And Ceph-related Files From The Shell
The status of Ceph can be seen from the command line by running:

Example
[root@bright71 ~]# ceph -s
    cluster d9422c23-321e-4fa0-b510-ca8e09a0a1fc
     health HEALTH_OK
     monmap e1: 1 mons at {bright71=10.141.255.254:6789/0}, election epoch 2, quorum 0 bright71
     osdmap e6: 2 osds: 2 up, 2 in
      pgmap v9: 192 pgs, 3 pools, 0 bytes data, 0 objects
            2115 MB used, 18340 MB / 20456 MB avail
                 192 active+clean

The -h option to ceph lists many options. Users of Bright Cluster Manager should usually not need to use these, and should find it more convenient to use the cmgui or cmsh front ends instead.

Generated XML Configuration File
By default, an XML configuration file is generated by the cm-ceph-setup utility, and stored after a run in the current directory as cm-ceph-setup-config.xml. The name of the Ceph instance is by default ceph. If a new instance is to be configured with the cm-ceph-setup utility, then a new name must be set in the configuration file, and the new configuration file must be used.

Using An XML Configuration File
The -c option to cm-ceph-setup allows
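The sentence above is cut off in this copy, but presumably describes feeding such an XML file back into the utility. A hedged sketch of that usage, assuming the default file name generated by an earlier run:

Example
[root@bright71 ~]# cm-ceph-setup -c cm-ceph-setup-config.xml

For a second Ceph instance, the instance name inside the XML file would first be edited to something other than ceph, as described above, before rerunning the command.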
    <networkRole>
      <nodes>
        <item>node001</item>
      </nodes>
    </networkRole>
  </roles>
</properties>

Figure 2.22: OpenStack Configuration Screen

The configuration can be saved with the Save Configuration option of figure 2.21. After exiting the wizard, the XML file can be directly modified, if needed, in a separate text-based editor.

Using A Saved Configuration And Deploying The Configuration
Using a saved XML file is possible:
• The XML file can be used as the configuration starting point for the text-based cm-openstack-setup utility (section 2.2), if run as:
  [root@bright71 ~]# cm-openstack-setup -c <XML file>
• Alternatively, the XML file can be deployed as the configuration by launching the cmgui wizard, and then clicking on the Load XML button of the first screen (figure 2.2). After loading the configuration, a Deploy button appears.
Clicking the Deploy button that appears in figure 2.2 after loading the XML file, or clicking the Deploy button of figure 2.21, sets up OpenStack in the background. The direct background progress is hidden from the administrator, and relies on the text-based cm-openstack-setup script (section 2.2). Some log excerpts from the script are displayed within a Deployment Progress window (figure 2.23):

Deployment progress is shown in the window below:
Deploying OpenStack
Performing Environment Precheck
Ensuring all properties are set
License Check
Cluster
...Slurm can conflict with the default range 6800-7300 used by the Ceph OSD daemons. If there is a need to run Slurm on an OSD node, then it is necessary to arrange it so that the ports used do not conflict with each other (a hedged sketch of doing this follows below). During installation, a warning is given when this conflict is present.

3.1.3 Hardware For Ceph Use
An absolute minimum installation can be carried out on two nodes, where:
• 1 node, the head node, runs one Ceph Monitor and the first OSD
• 1 node, the regular node, runs the second OSD
This is, however, not currently recommended, because the first OSD on the head node requires its own Ceph-compatible filesystem. If that filesystem is not provided, then Ceph on the cluster will run, but in a degraded state. Using such a system to try to get familiar with how Ceph behaves in a production environment with Bright Cluster Manager is unlikely to be worthwhile.
A more useful minimum, if there is a node to spare, is installing Ceph over 3 nodes, where:
• 1 node, the head node, runs one Ceph Monitor
• 1 node, the regular node, runs the first OSD
• 1 more node, also a regular node, runs the second OSD
For production use, a redundant number of Ceph Monitor servers is recommended. Since the number of Ceph Monitor servers must be odd, at least 3 Ceph Monitor servers, with each on a separate node, are recommended for production purposes. The recommended minimum of nodes for production purposes is then 5:
• 2 regular nodes running
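Where Slurm really does have to coexist with OSDs, the usual approach is to move Slurm's listener ports outside the Ceph OSD range. A minimal sketch in slurm.conf; the port values here are illustrative assumptions, not values mandated by this manual, and the file path may differ per installation:

Example
# slurm.conf (location depends on the installation)
# Move the daemon ports clear of Ceph's 6800-7300 OSD range:
SlurmctldPort=7817
SlurmdPort=7818

Both slurmctld on the head node and slurmd on the affected nodes then need a restart for the change to take effect.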
...a screen for the general Ceph cluster settings (figure 3.3) is displayed. The general settings can be adjusted via subscreens that open up when selected. The possible general settings are:
• Public network: This is the network used by Ceph Monitoring to communicate with OSDs. For a standard default Type 1 network, this is internalnet.
• Private network: This is the network used by OSDs to communicate with each other. For a standard default Type 1 network, this is internalnet.
• Journal size: The default OSD journal size, in MiB, used by an OSD. The actual size must always be greater than zero. This is a general setting, and can be overridden by a category or node setting later on. Defining a value of 0 MiB here means that the default that the Ceph software itself provides is set. At the time of writing (March 2015), the Ceph software provides a default of 5 GiB.
Network Types are discussed in section 3.3.6 of the Installation Manual.
Selecting the Next option in figure 3.3 continues on with the next major screen of the setup procedure, and displays a screen for Ceph Monitors configuration (figure 3.4).

3.2.3 Ceph Monitors Configuration
This section allows you to assign Ceph Monitor roles to categories or nodes. After assigning a role, its configuration can be edited, or the role can be removed. (Add, Edit: Edit monitors, Remove: Remove monitors, Next: Proceed to OSDs)
...on the cluster internal network. Network isolation based on VLANs is faster than VXLANs, but allows only for 4096 networks, and typically requires additional manual changes in the switch configuration. Network isolation based on VXLANs does not require changes in the switch configuration, and allows for over 16,000,000 networks, but is typically slower than VLANs.
User network isolation type: VLAN (Virtual LAN) / VXLAN (Virtual Extensible LAN) / none (access via internal network)

Figure 2.37: Network Isolation Type

The network isolation type screen (figure 2.37) allows the administrator to choose what kind of network isolation type, if any, should be set for the user networks.

2.2.11 Choosing The Network That Hosts The User Networks
There is one existing network which can be used as the VLAN host network. Create a new VXLAN host network, or use an existing network? (Create a new VXLAN host network / internalnet1 10.141.0.0/16, nodes on network: 0)

Figure 2.38: User Networks Hosting

If the user networks have their type (VXLAN, VLAN, or no virtual LAN) chosen in section 2.2.10, then a screen similar to figure 2.38 is displayed. This allows one network to be set as the host for the user networks. If there are one or more possible networks already available for hosting the user networks, then one of them can be selected. Alternatively, a completely
...journal size is used. A value of 0 for the Journal size is invalid, and does not cause a Ceph default size to be used. The default value of Journal on partition is no.
• The Shared journal device path must be set if a shared device is used for all the OSD journals in the category or node for which this screen applies. The path is unset by default, which means it is not used by default.
• The Shared journal size, in MiB, can be set. For n OSDs, each of size x MiB, the value of Shared journal size is n x x. That is, its value is the sum of the sizes of all the individual OSD journals that are kept on the shared journal device. If it is used, then:
  - The value of Shared journal size is used to automatically generate the disk layout setup of the individual OSD journals.
  - A value of 0 for the Journal size is invalid, and does not cause a Ceph default size to be used.
  - The Shared journal size value is unset by default.
The Back option can be used, after accessing the editing screen, to return to the Ceph OSDs configuration screen (figure 3.6).

Successful Completion Of The Ceph Installation
Ceph has been configured successfully. For details, see the log file /var/log/cm-ceph-setup.log. After booting the nodes configured with Ceph roles, it will take some time to create and start OSD and Monitor services on those nodes. The progress can be monitored with the ceph -s command, which can also
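The shared-journal sizing rule above is simple multiplication, so values can be checked quickly before committing them. A worked example, with assumed (not prescribed) sizes:

Example
A node with 4 OSDs, each with a 512 MiB journal, on one shared journal device:
Shared journal size = 4 x 512 MiB = 2048 MiB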
If the network node does not have an extra network interface available, a tagged VLAN interface can be used. The network node node001 does not have an interface configured on the external network externalnet. It needs to have one for the Floating IPs to work. Do you want the wizard to configure node001's network connectivity to the external network by creating a dedicated physical interface, or by creating a tagged VLAN interface?
• Create dedicated physical interface
• Create tagged VLAN interface
Select which base interface is to be used for the tagged VLAN interface.

Figure 2.19: External Network Interface for Network Node Screen

The external network interface for network node screen (figure 2.19) allows the administrator to configure either a dedicated physical interface or a tagged VLAN interface for the network node. The interface is to the external network, and is used to provide routing functions for OpenStack. The network node must have connectivity with the external network when Floating IPs and/or outbound traffic for instances is being configured. If the node already has a connection to the external network configured in Bright, the wizard will skip this step. The options are:
• Create dedicated physical interface: If this option is chosen, then a dedicated physical interface is used for the connection from the network node to the external network.
  - Interface name: The name of the interface on the physical node
...configured to allow access to the internal network of the cluster by the administrator (figure 2.10). Presently, only one type of network isolation is supported at a time.

2.1.12 VXLAN Configuration

VXLAN Configuration: Please specify the VXLAN network ID (VNID) range which will be used for individual isolated user networks. Each such new OpenStack user network, created as part of a tenant, will be automatically assigned a VNID from within this range. Therefore, the wider the range, the more networks will be able to exist at the same time. The range should be composed of consecutive numbers, e.g. 10-3000. The default VXLAN ranges should be sufficient in most cases.
VXLAN Range start: 1
VXLAN Range end: 50000
VXLAN networking makes use of multicast for certain functionality. Therefore, a specific multicast address has to be dedicated to VXLAN networking. The default multicast IP address which will be used by the wizard is 224.0.0.1. If there are any other applications in the cluster which already use this IP, please refer to the OpenStack Administrator Manual on how to change it to a different IP.
When using VLANs to isolate user networks, an IP network is needed to host the VLANs. Please specify below a network that can be used as the VXLAN host network
...running OSDs
• 2 regular nodes running Ceph Monitors
• 1 head node running a Ceph Monitor

Drives usable by Ceph: Ceph OSDs can use any type of disk that presents itself as a block device in Linux. This means that a variety of drives can be used.

3.2 Ceph Installation With cm-ceph-setup

3.2.1 cm-ceph-setup
Ceph installation for Bright Cluster Manager can be carried out with the ncurses-based cm-ceph-setup utility. It is part of the cluster-tools package that comes with Bright Cluster Manager. If the Ceph packages are not already installed, then the utility is able to install them for the head and regular nodes, assuming the repositories are accessible, and that the package manager priorities are at their defaults.

3.2.2 Starting With Ceph Installation, Removing Previous Ceph Installation
The cm-ceph-setup utility can be run as root from the head node.

Welcome to the Bright Cluster Manager Ceph setup utility. (Setup: Set up Ceph, Uninstall: Uninstall Ceph)

Figure 3.2: Ceph Installation Welcome

At the welcome screen (figure 3.2), the administrator may choose to:
• set up Ceph
• remove Ceph, if it is already installed

General Ceph cluster settings: (Public network: Configure public network, Cluster network: Configure cluster network, Journal size: Default journal size, Next: Proceed to Monitors, <Back>)

Figure 3.3: Ceph Installation General Cluster Settings

If the setup option is chosen, then
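Starting the utility itself is a single command from a root shell on the head node; the dialogs shown in figures 3.2 onwards then take over the terminal:

Example
[root@bright71 ~]# cm-ceph-setup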
(no / yes)
Generate XML config: Dump the current configuration to a file. Continue: Deploy OpenStack. Abort: Abort deployment.

Figure 2.53: Pre-deployment Summary

The pre-deployment summary screen (figure 2.53) displays a summary of the settings that have been entered using the wizard, and prompts the administrator to deploy or abort the installation with the chosen settings.
The options can also be saved as an XML configuration, by default cm-openstack-setup.conf, in the directory under which the wizard is running. This can then be used as the input configuration file for the cm-openstack-setup utility, using the -c option.

2.2.27 The State After Running cm-openstack-setup
At this point, the head node has OpenStack installed on it. A regular node that has been configured with the OpenStack compute host role ends up with OpenStack installed on it after the operating system running on the node is updated, and the newly configured interfaces are set up. The update can be carried out using imageupdate (section 5.6 of the Administrator Manual), or by rebooting the regular node. A reboot is, however, required if the interfaces have been changed, which is normally the case, unless the script is being run after a first run has already set up the changes. A reboot of the regular node is therefore normally a recommended action, because it ensures the updates and the interface changes have
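For the non-reboot path, imageupdate is run per node from cmsh's device mode. A hedged sketch; the -w flag (wait for completion) is an assumption about the exact option set, which section 5.6 of the Administrator Manual defines authoritatively:

Example
[root@bright71 ~]# cmsh
[bright71]% device use node001
[bright71->device[node001]]% imageupdate -w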
...to handle broadcast traffic in a virtual network. The default multicast IP address that is set, 224.0.0.1, is unlikely to be used by another application. However, if there is a conflict, then the address can be changed using the CMDaemon OpenStackVXLANGroup directive (Appendix C, page 528 of the Administrator Manual).

2.1.13 Dedicated Physical Networks

Dedicated Physical Networks: The VXLAN host network is the network used for transmitting the traffic of the user-created virtual networks. The VXLAN host network can either be set up on top of a dedicated network fabric, or, alternatively, it can share the fabric with the main internal network, internalnet.
Compute hosts: All compute nodes need to have a network interface on the VXLAN host network (vxlanhostnet). For the compute nodes which do not yet have such an interface, do you want the wizard to create a dedicated physical interface on that network, or do you want it to configure the node to share the interface connected to the internal network (internalnet) via an alias interface?
• Share the interface with internalnet via an alias interface
• Create a dedicated physical interface
Network node: The network node needs to have an interface on the VXLAN host network (vxlanhostnet). Do you want the setup to create a dedicated physical interface on that network
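CMDaemon directives are kept in the cmd.conf file on the head node. A sketch of changing the VXLAN multicast group, assuming the directive takes the address as a simple quoted value; the address shown is an arbitrary example, and the exact syntax should be checked against Appendix C of the Administrator Manual:

Example
# /cm/local/apps/cmd/etc/cmd.conf on the head node (assumed location)
OpenStackVXLANGroup = "239.1.1.1"

A restart of the cmd service on the head node would then be needed for the directive to take effect.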
...to set up a network node:
  - a single regular node, which is to be the network node of the deployment, should be chosen, and renamed to networknode. The Setup Wizard recognizes networknode as a special name, and will automatically suggest using it during its run.
  - the soon-to-be network node should then be restarted.
  - the Setup Wizard is then run from the head node. When the wizard reaches the network node selection screen (section 2.1.5, figure 2.6), networknode is suggested as the network node.
• When allowing for Floating IPs, and/or enabling outbound connectivity from the virtual machines (VMs) to the external network via the network node, the network node can be pre-configured manually, according to how it is connected to the internal and external networks. Otherwise, if the node is not pre-configured manually, the wizard then carries out a basic configuration on the network node that:
  - configures one physical interface of the network node to be connected to the internal network, so that the network node can route packets for nodes on the internal network.
  - configures the other physical interface of the network node to be connected to the external network, so that the network node can route packets from external nodes.
The wizard asks the user several questions on the details of how OpenStack is to be deployed. From the answers, it generates an XML document with the
69. oosing any of the Ceph options requires that Ceph be pre installed This is normally done with the cm ceph setup script section 3 2 Ceph is an object based distributed parallel filesystem with self managing and self healing features Object based means it handles each item natively as an object along with meta data for that item Ceph is a drop in replacement for Swift storage which is the reference OpenStack object storage software project The administrator can decide on Bright Computing Inc 12 OpenStack Installation e Using Ceph for volume storage instead of on NFS shares This is then instead of using the Open Stack Cinder reference project for the implementation of volume storage e Using Ceph for image storage instead of on image storage nodes This is instead of using the OpenStack Glance reference project for the implementation of virtual machine image storage e Using Ceph for root and ephemeral disks storage instead of on the filesystem of the compute hosts This is instead of using the OpenStack Nova reference project implementation for the im plementation for disk filesystem storage Object Storage Swift Project In addition the screen also asks if the RADOS Object Gateway should be deployed RADOS Object Gateway is a high level interface that allows Ceph to be accessed in the same way as Amazon S3 or OpenStack Swift using their HTTP APIs The higher level RADOS Object Gateway should not be con fused with R
70. ph It Must Be Installed Before Deploying OpenStack If OpenStack is to access Ceph for storage purposes for any combination of block storage Cinder image storage Glance ephemeral storage Nova or object storage RADOS Gateway then the Ceph components must first be installed with cm ceph setup Chapter 3 before starting the OpenStack installation procedure covered here Hardware Requirement For Running OpenStack The optimum hardware requirements for OpenStack depend on the intended use A rule of thumb is that the number of cores on the compute nodes determines the number of virtual machines OpenStack itself can run entirely on one physical machine for demonstration purposes However if running OpenStack with Bright Cluster Manager then a standard demonstration con figuration can be considered to be a head node a network node and several regular nodes Regular nodes are commonly called compute nodes while the network node is typically a re purposed regular node For such a standard configuration recommended hardware specifications for useful demonstra tion purposes are e A head node with 8GB RAM and two network interfaces e A network node with 2GB RAM and two network interfaces e Three regular nodes with 2GB RAM per core Each regular node has a network interface Running OpenStack under Bright Cluster Manager with fewer resources is possible but may run into issues While such issues can be resolved they are usually not wor
...storage forms of Ceph (object, block, or filesystem) can use a filesystem for storage. For production use of Ceph, XFS is currently the recommended filesystem option, due to its stability, its ability to handle extreme storage sizes, and its intrinsic ability to deal with the significant sizes of the extended attributes required by Ceph.
The nodes that run OSDs are typically regular nodes. Within the nodes, the storage devices used by the OSDs automatically have their filesystems configured to be of the XFS type during the installation of Ceph with Bright Cluster Manager.

Use Of datanode For Protection Of OSD Data
Typically, a filesystem used for an OSD is not on the same device as that of the regular node filesystem. Instead, OSD storage typically consists of several devices that contain an XFS filesystem, with the devices attached to the node. These devices need protection from being wiped during the reprovisioning that takes place during a reboot of regular nodes.
The recommended way to protect storage devices from being wiped is to set the datanode property of their node to yes (page 179 of the Administrator Manual). A sketch of doing this from cmsh follows below.

Use Of Slurm On OSD Nodes
Ceph can be quite demanding of the network and I/O. Running Slurm jobs on an OSD node is therefore not recommended. In addition, if Slurm roles are to be assigned to nodes that have OSD roles, then the default ports 6817 and 6818 used by Slurm
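Setting the datanode property is a normal cmsh node-level change. A minimal sketch, using node001 as an assumed example OSD node:

Example
[root@bright71 ~]# cmsh
[bright71]% device use node001
[bright71->device[node001]]% set datanode yes
[bright71->device*[node001*]]% commit

With datanode set to yes, the node's OSD storage devices are left alone when the node is reprovisioned on reboot.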
...parameters of the cephmonitor role can be configured and managed from within the node or category that runs Ceph monitoring:

Example
[bright71]% device use bright71
[bright71->device[bright71]]% roles
[bright71->device[bright71]->roles]% use cephmonitor
[bright71->device[bright71]->roles[cephmonitor]]% show
Parameter                   Value
--------------------------- -------------------------------------
Extra config parameters
Monitor data                /var/lib/ceph/mon/$cluster-$hostname
Name                        cephmonitor
Provisioning associations   <0 internally used>
Revision
Type                        CephMonitorRole

Ceph monitoring extraconfigparameters setting: Ceph monitoring can also have extra configurations set via the extraconfigparameters option, in a similar way to how it is done for Ceph general configuration (page 54).
Monitors are similarly accessible from within cmgui for nodes and categories, with an Advanced button in their Ceph tab allowing the parameters for the Monitor checkbox to be set.

Ceph bootstrap: For completeness, the bootstrap command within ceph mode can be used by the administrator to initialize Ceph Monitors on specified nodes, if they are not already initialized. Administrators are, however, not expected to use it, because they are expected to use the cm-ceph-setup installer utility when installing Ceph in the first place. The installer utility carries out the bootstrap initialization as part of its tasks. The bootstrap command is therefore only intended for use in the unusual case where the administrator would like to set up Ceph storage without using
...the administrator plan and carry out a configuration of OpenStack. The wizard has two main stages. The first stage gathers information on the intended configuration, and presents a summary of it. The second stage carries out the planned configuration. After the two stages are completed, OpenStack is ready to be used. To learn more about this deployment wizard, click here.
After completing the deployment, it is advised to reboot the non-head-node nodes participating in the OpenStack deployment, or, at a minimum, to perform a SYNC install. If the selected nodes are UP, then the reboot can also be done by the wizard. Should the wizard reboot the OpenStack nodes as part of the deployment process? (Yes / No)
It is possible to run this wizard in a normal mode and in a dry-run mode. When run in normal mode, a summary of the changes will be presented, and, if the administrator agrees to it, the changes will be carried out. When run in dry-run mode, no changes will be carried out. Dry mode? (Yes / No)
Ceph is not configured. It will not be possible to select Ceph as the OpenStack storage backend. If you want to use Ceph with OpenStack, you must deploy Ceph before deploying OpenStack.
Before deploying OpenStack, it is advised to go through the OpenStack deployment manual. It contains multiple tips on how to prepare your cluster for deploying OpenStack. (Load XML)

Figure 2.2: OpenStack Setup Wizard Overview Screen

The main overview screen (figure 2.2) gives an overview of how the wizard
...internalnet has:
Base IP address: 10.141.0.0
Broadcast IP address: 10.141.255.255
Usable addresses: 65534
You have to specify the IP address range for Bright-managed instances. What is the last IP address of the Bright-managed instance address range? 10.141.159.255

Figure 2.44: Ending IP Address For Bright-managed Instances

An ending IP address must be set for the Bright-managed instances (figure 2.44).

2.2.18 Number Of Virtual Nodes For Bright-managed Instances
How many Bright-managed virtual nodes do you want this setup script to create? A Bright-managed virtual node is a regular node that is instantiated as a virtual machine, upon power-on, inside your OpenStack cluster. It is then provisioned with a Bright software image. It will run a CMDaemon, and will be accessible via an IP address from your internal network. You can always easily add additional Bright-managed virtual nodes later, for example by cloning the existing ones. How many virtual nodes do you want to create?

Figure 2.45: Number Of Virtual Nodes For Bright-managed Instances

The number of Bright-managed virtual machines must be set (figure 2.45). The suggested number of instances in the wizard conforms to the defaults that OpenStack sets. These defaults are based on an overcommit ratio of virtual CPU:real CPU of 16:1, and virtual RAM:real RAM of 1.5:1. The instance flavor chosen then determines the suggested number of instances.

2.2.19 DHCP And Static IP Addresses
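The overcommit defaults turn into a concrete instance count easily. A worked example with assumed hardware (one compute host with 8 physical cores and 16 GB RAM) and an assumed flavor of 1 vCPU and 2 GB RAM per instance:

Example
vCPUs available:  8 cores x 16  = 128 vCPUs
vRAM available:   16 GB x 1.5   = 24 GB
By CPU:  128 / 1 vCPU  = 128 instances
By RAM:  24 / 2 GB     = 12 instances
Suggested maximum = min(128, 12) = 12 instances

As here, the RAM overcommit is usually the binding constraint.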
...s to the software image.
Create/reuse image:
• Create new image
• Use existing image (cannot be used if only one software image is defined)
OpenStack software image: openstack-image
Base software image: default-image

Figure 2.9: OpenStack Software Image Selection Screen

The OpenStack software image selection screen (figure 2.9) decides the software image name to be used on the nodes that run OpenStack. An existing image name can be used, if there is more than one available image name. Creating a new image, with the default name of openstack-image, is recommended. This image, openstack-image, is to be the base OpenStack image, and it is cloned and modified from an original pre-OpenStack-deployment image.
By default, the name openstack-image is chosen. This is recommended because the image that is to be used by the OpenStack nodes has many modifications from the default image, and it is useful to keep the default image around for comparison purposes.

2.1.9 User Instances

User Instances: There are two classes of virtual instances: user instances and Bright-managed instances. User instances are created by cluster end users, using the OpenStack API and/or the OpenStack dashboard. They can run any OpenStack image. Bright-managed instances are created by the cluster
...set for virtual nodes, further down in the screen.
During a new deployment, virtual nodes can be placed in categories, either by creating a new category, or by using an existing category.
• If the Create a new category radio button is selected (recommended), then:
  - The Virtual node category is given a default value of virtual-nodes. This is a sensible setting for a new deployment.
  - The Base category can be selected. This is the category from which the new virtual node category is derived. Category settings are copied over from the base category to the virtual node category. The only category choice for the Base category in a newly installed cluster is default.
Some changes are then made to the category settings, in order to make virtual nodes in that category run as virtual instances. One of the changes that needs to be made to the category settings for a virtual node is that a software image must be set. The following options are offered:
  - Create a new software image: This option is recommended for a new installation. Choosing this option presents the following suboptions:
    - The Software image is given a default value of virtual-node-image. This is a sensible setting for a new deployment.
    - The Base software image can be selected. This is the software image from which the new virtual node software image is derived. In a newly installed cluster, the only base
...software that is licensed by the Linux authors free of charge. Bright Computing, Inc. shall have no liability, nor will Bright Computing, Inc. provide any warranty for the Bright Cluster Manager, to the extent that is permitted by law. Unless confirmed in writing, the Linux authors and/or third parties provide the program as is, without any warranty, either expressed or implied, including, but not limited to, marketability or suitability for a specific purpose. The user of the Bright Cluster Manager product shall accept the full risk for the quality or performance of the product. Should the product malfunction, the costs for repair, service, or correction will be borne by the user of the Bright Cluster Manager product. No copyright owner or third party who has modified or distributed the program as permitted in this license shall be held liable for damages, including general or specific damages, damages caused by side effects or consequential damages, resulting from the use of the program or the unusability of the program (including, but not limited to, loss of data, incorrect processing of data, losses that must be borne by you or others, or the inability of the program to work together with any other program), even if a copyright owner or third party had been advised about the possibility of such damages, unless such copyright owner or third party has signed a writing to the contrary.

Table of Contents
...process. The bootstrap option can take the following values:
• auto: This is the default and recommended option. If the majority of nodes are tagged with auto during the current configuration stage, and configured to run Ceph Monitors, then:
  - if they are up, according to Bright Cluster Manager, at the time of deployment of the setup process, then the Monitor Map is initialized for those Ceph Monitors on those nodes.
  - if they are down at the time of deployment of the setup process, then the maps are not initialized.
• true: If nodes are tagged true, and configured to run Ceph Monitors, then they will be initialized at the time of deployment of the setup process, even if they are detected as being down during the current configuration stage.
• false: If nodes are tagged false, and configured to run Ceph Monitors, then they will not be initialized at the time of deployment of the setup process, even if they are detected as being up during the current configuration stage.
• The data path is set by default to /var/lib/ceph/mon/$cluster-$hostname, where:
  - $cluster is the name of the Ceph instance. This is ceph by default.
  - $hostname is the name of the node being mapped.
• The Back option can be used, after accessing the editing screen, to return to the Ceph Monitors configuration screen (figure 3.4).

3.2.4 Ceph OSDs Configuration
This section allows you to assign
...cluster administrators, using the Bright CMDaemon API (e.g. using cmsh or cmgui). They are provisioned using a Bright software image, and thus have CMDaemon running, and are managed just like regular Bright compute nodes. Regular OpenStack end users are typically not able to create Bright-managed instances, unless they are given access to the Bright API.
If you decide to enable support for user instances, you will be asked several additional questions related to configuring the underlying networking for those instances. Those questions will include selecting the user network isolation method (VLAN or VXLAN), and the possibility to give the user instances access to the cluster's internal network (internalnet). Do you want to enable user instances on your OpenStack deployment? (Yes / No)

Figure 2.10: User Instances Screen

The User Instances screen (figure 2.10) allows the administrator to allow the Bright end user to create user instances.
The following overview may help get a perspective on this part of the wizard configuration procedure. The main function of OpenStack is to manage virtual machines. From the administrator's point of view, there are two classes of virtual machines, or instances:
• user instances, configured in this section
• Bright-managed instances, configured with the help of figure 2.15
The wizard allows OpenStack to be configured to support both types of instances, or only one of them.
...Support For Bright-managed Instances
Do you want to enable support for Bright-managed instances? (no)

Figure 2.42: Enabling Support For OpenStack Instances Under Bright Cluster Manager

There are two kinds of OpenStack instances, also known as virtual nodes, that can run on the cluster. These are called user instances and Bright-managed instances. The screen in figure 2.42 decides if Bright-managed instances are to run. Bright-managed instances are actually a special case of user instances, just managed much more closely by Bright Cluster Manager.
Only if permission is set in the screen of section 2.2.9 can an end user access Bright-managed instances.
The screens from figure 2.43 to figure 2.45 are only shown if support for Bright-managed instances is enabled.

2.2.16 Starting IP Address For Bright-managed Instances
The network internalnet has:
Base IP address: 10.141.0.0
Broadcast IP address: 10.141.255.255
Usable addresses: 65534
You have to specify the IP address range for Bright-managed instances. What is the first IP address of the Bright-managed instance address range? 10.141.96.0

Figure 2.43: Starting IP Address For Bright-managed Instances

A starting IP address must be set for the Bright-managed instances (figure 2.43).

2.2.17 Ending IP Address For Bright-managed Instances
First IP: 10.141.96.0; now specify the last IP. The network internalnet
...object storage.

2.1 Installation Of OpenStack From cmgui

The cmgui OpenStack Setup Wizard is the preferred way to install OpenStack. A prerequisite for running it is that the head node should be connected to the distribution repositories.

Some suggestions and background notes: These are given here to help the administrator understand what the setup configuration does, and to help simplify deployment. Looking at these notes after a dry run with the wizard will probably be helpful.
• A VXLAN (Virtual Extensible LAN) network is similar to a VLAN network in function, but has features that make it more suited to cloud computing.
  - If VXLANs are to be used, then the wizard is able to help create a VXLAN overlay network for OpenStack tenant networks. An OpenStack tenant network is a network used by a group of users allocated to a particular virtual cluster.
  - A VXLAN overlay network is a Layer 2 network overlaid on top of a Layer 3 network. The VXLAN overlay network is a virtual LAN that runs its frames encapsulated within UDP packets over the regular TCP/IP network infrastructure. It is very similar to VLAN technology, but with some design features that make it more useful for cloud computing needs. One major improvement is that around 16 million VXLANs can be made to run over the underlying Layer 3 network. This is in contrast to the 4,000 or so VLANs that can be made to run over their underlying Layer 2 network, if the switch port supports that
...gateway.

Ceph RADOS Gateway: RADOS Gateway, also known as Ceph Object Gateway, is an HTTP gateway for the Ceph Object Store. It exposes Ceph's storage capabilities using the OpenStack Swift- or AWS S3-compatible APIs. It effectively allows the end users to store their own data as objects inside Ceph, using a REST HTTP interface. RADOS Gateway is an optional component of an OpenStack deployment. Do you want to enable the RADOS Object Gateway?

Figure 2.33: Root And Ephemeral Device Storage With Ceph

The Ceph RADOS gateway screen (figure 2.33) lets the administrator set the Ceph RADOS gateway service to run when deployment completes.

2.2.7 Internal Network To Be Used For OpenStack
There are multiple internal networks on the cluster. One of those networks has to be picked as the main OpenStack network. All nodes which will become OpenStack nodes must be connected to this network. Which internal network to use for OpenStack?
internalnet 10.141.0.0/16 (nodes on network)
internalnet1 10.141.0.0/16 (nodes on network: 0)

Figure 2.34: Internal Network To Be Used For OpenStack

If there are multiple internal networks, then the internal network selection screen (figure 2.34) lets the administrator choose which of them is to be used as the internal network to which the OpenStack nodes are to be connected.

2.2.9 User Instances
Do you want to allow end users to create
...the time spent analyzing them. It is better to run with ample resources, and then analyze the resource consumption to see what issues to be aware of when scaling up to a production system.

Ways Of Installing OpenStack
The version of OpenStack that is integrated with Bright Cluster Manager can be installed in the following two ways:
• using the GUI-based Setup Wizard button from within cmgui (section 2.1). This is the recommended installation method.
• using the text-based cm-openstack-setup utility (section 2.2). The utility is a part of the standard cluster-tools package.
The priorities that the package manager uses are expected to be at their default settings, in order for the installation to work.
By default, deploying OpenStack installs the following projects: Keystone, Nova, Cinder, Glance, Neutron, Heat, and Horizon (the dashboard).
If Ceph is used, then Bright can also optionally deploy RADOS Gateway, to be used as a Swift-API-compatible object storage system. Using RADOS Gateway instead of the reference Swift object storage is regarded in the OpenStack community as good practice, and is indeed the only object storage system that Bright Cluster Manager manages for OpenStack. Alternative backend storage is possible at the same time as object storage, which means, for example, that block and image storage are options that can be used in a cluster at the same time as object
84. …the cm-ceph-setup utility.

3.4 RADOS GW Installation, Initialization And Properties
3.4.1 RADOS GW Installation And Initialization With cm-radosgw-setup
The cm-radosgw-setup utility can be run on the head node after installing Ceph with cm-ceph-setup. The cm-radosgw-setup utility configures RADOS so that it provides a REST API, called the RADOS GW service.
If cm-radosgw-setup is run with the -o option, then RADOS GW is integrated with OpenStack by enabling Keystone authentication for it.
Example
[root@bright71 ~]# cm-radosgw-setup -o
[root@bright71 ~]#
If cm-radosgw-setup is run without the -o option, then RADOS GW is installed, but Keystone authentication is disabled, and the gateway is therefore then not available to OpenStack instances.
Command line installation with the -o option initializes RADOS GW for OpenStack instances the first time it is run in Bright Cluster Manager.

3.4.2 Setting RADOS GW Properties
RADOS GW Properties In cmsh
RADOS GW properties can be managed in cmsh by selecting the device, then dropping into the radosgateway role:
[bright71]% device use bright71
[bright71->device[bright71]]% roles
[bright71->device[bright71]->roles]% use radosgateway
[bright71->device[bright71]->roles[radosgateway]]% show
Parameter                      Value
------------------------------ ------------------------------
Name                           radosgateway
Provisioning associations      <0 internally used>
Revision
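Changing one of these properties follows the usual cmsh pattern of set followed by commit. A sketch, assuming the standard cmsh workflow and using the keystone authentication toggle as the example parameter (cmsh marks uncommitted objects with an asterisk):
Example
[bright71->device[bright71]->roles[radosgateway]]% set enablekeystoneauthentication yes
[bright71->device*[bright71*]->roles*[radosgateway*]]% commit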
85. …to interact with OpenStack directly, using the OpenStack dashboard. You can easily change this password at a later time.
Admin password: ********
Figure 2.28: OpenStack admin Password Screen
The MySQL root password screen (figure 2.27) prompts for the existing root password to MySQL to be entered, while the OpenStack admin password screen (figure 2.28) prompts for a password to be entered, and then re-entered, for the soon-to-be-created admin user in OpenStack.

2.2.5 Reboot After Configuration
Once the configuration and deployment of OpenStack is completed, all compute nodes participating in the OpenStack deployment should be rebooted to update their software images and network interfaces. Do you want the cm-openstack-setup utility to reboot the nodes as part of the deployment process, or do you want to reboot the nodes yourself, manually?
Do you want the setup to reboot compute nodes after completion? no
Figure 2.29: Reboot After Configuration Screen
A screen is shown asking if the compute host nodes, that is, the nodes used to host the virtual nodes, should be re-installed after configuration (figure 2.29). A re-install is usually best, for reasons discussed on page 6 for the cmgui installation wizard equivalent of this screen option.

2.2.6 Ceph Options
Background notes on the Ceph options can be read on page 11, in the section on the cmgui wizard configuration of Ceph options. This section (2.2.6) covers the Ncurses cm-openstack-setup
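If manual rebooting is preferred, the compute host nodes can be rebooted afterwards from cmsh. The following is a sketch only: it assumes that cmsh's device mode reboot command accepts a node range, and the node names node001..node004 are made-up examples:
Example
[root@bright71 ~]# cmsh
[bright71]% device
[bright71->device]% reboot node001..node004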
86. …virtual nodes and interfaces. Images and categories related to OpenStack are, however, not removed.

2.2.2 Informative Text Prior To Deployment
This wizard will guide you through the process of deploying OpenStack. The wizard will first ask you several questions. It will then display a summary outlining the OpenStack deployment. Once the deployment summary has been accepted, it will deploy OpenStack.
Information gathered from the user by the wizard includes:
• basic cluster credentials, with the MySQL root password among those
• category and software image selection for the OpenStack compute nodes
• selecting OpenStack compute host nodes (run nova-compute, host VMs)
• selecting the network node hosts (neutron agents and Floating IPs)
• whether to configure OpenStack to support Bright-managed and/or user instances
• tenant network isolation type (VLAN or VXLAN)
• generic network configuration
• configuring access to the external network, if any, for the network node
• gathering other network interface information
Major changes introduced in the cluster's configuration by the wizard may include:
• creation of network objects to represent the VLAN host or VXLAN host network
• creation of an OpenStack software image, used for booting OpenStack nodes
• creation of a virtual node software image, used for booting Bright-managed instances
• assignment of OpenStack roles to selected OpenStack nodes
• creation of no…
87. …wizard configuration of Ceph options.

Glance Image Storage With Ceph
OpenStack Glance needs a place to store the images. They can be stored either on an NFS share (/cm/shared by default) or in Ceph. Do you want to use Ceph to store OpenStack images?
Figure 2.30: Image Storage With Ceph
Ceph can be set for storing virtual machine images, instead of the OpenStack reference Glance, using the Ceph image storage screen (figure 2.30).

Block Storage With Ceph
OpenStack Cinder needs a place to store the block volumes. Volumes can be stored either on an NFS share (/cm/shared by default) or in Ceph. Do you want to use Ceph to store OpenStack volumes?
Figure 2.31: Block Storage With Ceph
Ceph can be set for handling block volume storage reads and writes, instead of the OpenStack reference Cinder, by using the Ceph for OpenStack volumes screen (figure 2.31).

Root And Ephemeral Device Storage With Ceph
OpenStack Nova compute needs a place to store the root and ephemeral devices of VMs. These can be stored either in Ceph or on a local mountpoint of every compute host. Do you want to use Ceph to store root and ephemeral disks of VMs?
Figure 2.32: Root And Ephemeral Device Storage With Ceph
Data storage with Ceph can be enabled by the administrator by using the Ceph for OpenStack root and ephemeral device storage screen (figure 2.32).

Ceph Object Gateway
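When the Ceph option is chosen for images, the wizard reconfigures Glance itself. Purely for orientation, a hand-configured Glance RBD backend from this OpenStack era typically looks something like the fragment below in glance-api.conf; the pool and user names are conventional examples, not values the wizard is guaranteed to use.
Example (/etc/glance/glance-api.conf fragment)
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf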
88. …The user instance internal cluster network isolation screen (figure 2.11) allows the administrator to allow OpenStack end users to create user instances which have direct network connectivity to the cluster's internal network.
If the network isolation restriction is removed, so that there is network promiscuity between user instances on the internal network, then this allows user instances (figure 2.10) to connect to other user instances on the internal network of the cluster. End users can then manage other user instances. Allowing this should only be acceptable if all users that can create instances are trusted.

2.1.11 Network Isolation
(Wizard navigation: 1 Introduction, 2 General questions, 3 Network Isolation, 4 Bright-managed instances, 5 External Network, 6 Summary & Deployment)
Network Isolation
OpenStack can allow users to create their own private networks and connect their user instances to them. The user-defined networks must be isolated in the backend using either VLAN or VXLAN technology.
Using VLAN isolation in general results in better performance. However, the downside is that the administrator needs to configure the usable VLAN IDs in the network switches. Therefore, the number of user-defined networks is limited by the number of available VLAN IDs. Using VXLANs, on the other hand, generates some overhead, but does not require specific switch configuration, and allows for creating a greater number…
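The two isolation choices correspond to different Neutron ML2 type drivers, which is where the VLAN ID and VXLAN VNI limits mentioned above come from. The fragments below are a sketch for orientation, not something that needs to be edited by hand during this wizard; the physical network name physnet1 and the ID ranges are made-up examples.
Example (ml2_conf.ini fragments)
# VLAN isolation: tenant networks draw from a VLAN ID range that the
# switches must also be configured to carry
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199

# VXLAN isolation: a 24-bit VNI space, with no per-switch configuration
[ml2_type_vxlan]
vni_ranges = 1:16777215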
89. …How many Bright-managed instances should be created? Additional Bright-managed instances can easily be added later on.
Virtual node count: 5
Virtual node prefix: vnode (base name for the virtual nodes)
Do you want to use an existing category for your virtual nodes, or do you want to create a new one? Choosing to create a new category for virtual nodes will allow you to also specify a different software image for the virtual nodes.
Create a new category / Use existing category
Virtual node category: virtual-nodes
Base category: default
Do you want to use an existing software image for your virtual nodes, or do you want to create a new one? The selected software image will have to be modified by the wizard.
Create a new software image / Use existing software image / Use software image from category
Software image: virtual-node-image
Base software image: default-image
You can assign IP addresses to your virtual nodes using DHCP-assigned IP addresses, or static IP addresses. These will be in sequence, starting from an address that you must specify.
DHCP / Static / Cancel
Figure 2.16: Virtual Node Configuration Screen
The virtual node configuration screen (figure 2.16) allows the administrator to set the number, category, and image for the virtual nodes. The suggestions presented in this screen can be deployed in a test cluster:
• Virtual node count: 5
• Virtual node prefix: vnode
• A virtual node category can be…
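With the prefix and count shown above, the created instances are named in sequence. A sketch of the resulting names, assuming Bright's usual zero-padded node naming (the padding width of three digits is an assumption here):
Example
[root@bright71 ~]# printf "vnode%03d\n" $(seq 1 5)
vnode001
vnode002
vnode003
vnode004
vnode005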
90. …allows an existing XML configuration file to be used.
Example
[root@bright71 ~]# cm-ceph-setup -c /root/myconfig.xml

A Sample XML Configuration File
A Ceph XML configuration schema, with MONs and OSDs running on different hosts, could be as follows:
Example
<cephConfig>
  <networks>
    <public>internalnet</public>
    <cluster>internalnet</cluster>
  </networks>
  <journalsize>0</journalsize>
  <monitor>
    <hostname>raid-test</hostname>
    <monitordata>/var/lib/ceph/mon/$cluster-$hostname</monitordata>
  </monitor>
  <osd>
    <hostname>node001</hostname>
    <osdassociation>
      <name>osd0</name>
      <blockdev>/dev/sdd</blockdev>
      <osddata>/var/lib/ceph/osd/$cluster-$id</osddata>
      <journaldata>/var/lib/ceph/osd/$cluster-$id/journal</journaldata>
      <journalsize>0</journalsize>
    </osdassociation>
    <osdassociation>
      <name>osd1</name>
      <blockdev>/dev/sde</blockdev>
      <osddata>/var/lib/ceph/osd/$cluster-$id</osddata>
      <journaldata>/var/lib/ceph/osd/$cluster-$id/journal</journaldata>
      <journalsize>0</journalsize>
    </osdassociation>
    <osdassociation>
      <name>osd2</name>
      <blockdev>/dev/sdf</blockdev>
      <osddat…
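Before feeding a hand-edited file like this one to cm-ceph-setup, it can be worth checking that the file is at least well-formed XML. A small sketch, assuming the xmllint tool from the libxml2 package is installed:
Example
[root@bright71 ~]# xmllint --noout /root/myconfig.xml
[root@bright71 ~]# echo $?
0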
91. …y new network can be created to host them.

2.2.12 Setting The Name Of The Hosting Network For User Networks
Specify a name for the VXLAN host network: vxlanhostnet
Figure 2.39: Setting The Name Of The Network For User Networks
If a network to host the user networks is chosen in section 2.2.11, then a screen similar to figure 2.39 is displayed. This lets the administrator set the name of the hosting network for user networks.

2.2.13 Setting The Base Address Of The Hosting Network For User Networks
What's the base address for the new VXLAN host network? 10.164.0.0
Figure 2.40: Setting The Base Address Of The Network For User Networks
If the network name for the network that hosts the user networks is chosen in section 2.2.12, then a screen similar to figure 2.40 is displayed. This lets the administrator set the base address of the hosting network for user networks.

2.2.14 Setting The Number Of Netmask Bits Of The Hosting Network For User Networks
What's the number of netmask bits for the new VXLAN host network?
Figure 2.41: Setting The Number Of Netmask Bits Of The Network For User Networks
If the base address for the network that hosts the user networks is set in section 2.2.13, then a screen similar to figure 2.41 is displayed. This lets the administrator set the number of netmask bits of the hosting network for user networks.

2.2.15 Enabling Support…
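To preview what a base address and a netmask-bit count imply before committing to them, the arithmetic can be checked on the head node. A sketch, assuming the ipcalc utility shipped with Red Hat-based distributions, and using the illustrative base address above with 16 netmask bits:
Example
[root@bright71 ~]# ipcalc -bnm 10.164.0.0/16
NETMASK=255.255.0.0
BROADCAST=10.164.255.255
NETWORK=10.164.0.0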
