Red Hat Enterprise Linux

These instructions will take you through a first-time install of Calico. If you are upgrading an existing system, please see Upgrading Calico on OpenStack instead.

There are two sections to the install: adding Calico to OpenStack control nodes, and adding Calico to OpenStack compute nodes. Follow the Common steps on each node before moving on to the specific instructions in the control and compute sections. If you want to create a combined control and compute node, work through all three sections.

Before you begin

  • Ensure that you meet the requirements.
  • Confirm that you have SSH access to and root privileges on one or more Red Hat Enterprise Linux (RHEL) hosts.
  • Make sure you have working DNS between the RHEL hosts (use /etc/hosts if you don’t have DNS on your network).
  • Install OpenStack with Neutron and ML2 networking on the RHEL hosts.
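
If you are relying on /etc/hosts rather than DNS, each host needs an entry for every other host. A minimal sketch (the hostnames and addresses here are hypothetical — substitute your own):

```
# /etc/hosts entries, replicated on every node (hypothetical names and addresses)
10.65.0.1   os-control-1
10.65.0.2   os-compute-1
10.65.0.3   os-compute-2
```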

Common steps

The steps in this section must be performed on every machine that will run Calico, whether control or compute.

  1. Add the EPEL repository. You may have already added this to install OpenStack.
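    If EPEL is not yet configured, it can typically be added by installing the epel-release package, though the exact procedure depends on your RHEL version and subscription setup:

    ```bash
    yum install -y epel-release
    ```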

  2. Configure the Calico repository:

    ```bash
    cat > /etc/yum.repos.d/calico.repo <<EOF
    [calico]
    name=Calico Repository
    baseurl=https://binaries.projectcalico.org/rpm/calico-3.28/
    enabled=1
    skip_if_unavailable=0
    gpgcheck=1
    gpgkey=https://binaries.projectcalico.org/rpm/calico-3.28/key
    priority=97
    EOF
    ```
  3. Install version 1.0.1 of the etcd3gw Python package. This is needed by Calico’s OpenStack driver and DHCP agent.

    ```bash
    yum install python3-pip
    pip3 install etcd3gw==1.0.1
    ```
  4. Edit /etc/neutron/neutron.conf. Add a [calico] section with the following content, where <ip> is the IP address of the etcd server.

    ```
    [calico]
    etcd_host = <ip>
    ```

Control node install

On each control node, perform the following steps:

  1. Delete all configured OpenStack state, in particular any instances, routers, subnets and networks (in that order) created by the install process referenced above. You can do this using the web dashboard or at the command line.

    tip

    The Admin and Project sections of the web dashboard both have subsections for networks and routers. Some networks may need to be deleted from the Admin section.

    caution

    The Calico install will fail if incompatible state is left around.

  2. Edit /etc/neutron/neutron.conf. In the [DEFAULT] section, find the line beginning with core_plugin, and change it to read core_plugin = calico. Also remove any existing setting for service_plugins.
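
    After the edit, the relevant lines of /etc/neutron/neutron.conf should read along these lines (a sketch; other [DEFAULT] settings are left untouched):

    ```
    [DEFAULT]
    core_plugin = calico
    # any service_plugins setting has been deleted entirely
    ```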

  3. Install the calico-control package:

    ```bash
    yum install -y calico-control
    ```
  4. Restart the neutron server process:

    ```bash
    service neutron-server restart
    ```

Compute node install

On each compute node, perform the following steps:

  1. Open /etc/nova/nova.conf and remove the line from the [DEFAULT] section that reads:

    ```
    linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
    ```

    Remove the lines from the [neutron] section setting service_neutron_metadata_proxy or service_metadata_proxy to True, if there are any. Additionally, if there is a line setting metadata_proxy_shared_secret, comment that line out as well.
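
    For illustration, this is the kind of content being removed or commented out in /etc/nova/nova.conf (the secret value is a placeholder):

    ```
    [DEFAULT]
    # Delete this line:
    linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver

    [neutron]
    # Delete these lines if present:
    service_metadata_proxy = True
    service_neutron_metadata_proxy = True
    # Comment out the shared secret if present:
    # metadata_proxy_shared_secret = <secret>
    ```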

    Restart the Nova compute service:

    ```bash
    service openstack-nova-compute restart
    ```

    If this node is also a controller, additionally restart nova-api:

    ```bash
    service openstack-nova-api restart
    ```
  2. If they’re running, stop the Open vSwitch services.

    ```bash
    service neutron-openvswitch-agent stop
    service openvswitch stop
    ```

    Then prevent the services from starting again on reboot.

    ```bash
    chkconfig openvswitch off
    chkconfig neutron-openvswitch-agent off
    ```

    Then, on your control node, run the following command to find the agents that you just stopped.

    ```bash
    neutron agent-list
    ```

    Delete each of those agents with the following command on your control node, replacing <agent-id> with the ID of the agent.

    ```bash
    neutron agent-delete <agent-id>
    ```
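
    If there are several agents to remove, a loop along these lines can save typing. This is a sketch only: it assumes the neutron client's `-f value` output format and greps for Open vSwitch agents, so verify against your own `neutron agent-list` output before running it.

    ```bash
    # Hypothetical sketch: delete every Open vSwitch agent in one pass.
    for id in $(neutron agent-list -f value -c id -c agent_type | awk '/Open vSwitch/ {print $1}'); do
      neutron agent-delete "$id"
    done
    ```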
  3. Install Neutron infrastructure code on the compute host.

    ```bash
    yum install -y openstack-neutron
    ```
  4. Edit /etc/neutron/neutron.conf. In the [oslo_concurrency] section, ensure that the lock_path variable is uncommented and set as follows.

    ```
    # Directory to use for lock files. For security, the specified directory should
    # only be writable by the user running the processes that need locking.
    # Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
    # a lock path must be set.
    lock_path = $state_path/lock
    ```
  5. Stop and disable the Neutron DHCP agent, and install the Calico DHCP agent (which uses etcd, allowing it to scale to higher numbers of hosts).

    ```bash
    service neutron-dhcp-agent stop
    chkconfig neutron-dhcp-agent off
    yum install -y calico-dhcp-agent
    ```
  6. Stop and disable any other routing/bridging agents such as the L3 routing agent or the Linux bridging agent. These conflict with Calico.

    ```bash
    service neutron-l3-agent stop
    chkconfig neutron-l3-agent off
    ```

    Repeat for the bridging agent and any other such agents that are present.

  7. If this node is not a controller, install and start the Nova Metadata API. This step is not required on combined compute and controller nodes.

    ```bash
    yum install -y openstack-nova-api
    service openstack-nova-metadata-api restart
    chkconfig openstack-nova-metadata-api on
    ```
  8. Install the BIRD BGP client.

    ```bash
    yum install -y bird bird6
    ```
  9. Install the calico-compute package.

    ```bash
    yum install -y calico-compute
    ```
  10. Configure BIRD. By default Calico assumes that you will deploy a route reflector to avoid the need for a full BGP mesh. To this end, it includes configuration scripts to prepare a BIRD config file with a single peering to the route reflector. If that’s correct for your network, you can run either or both of the following commands.

    For IPv4 connectivity between compute hosts:

    ```bash
    calico-gen-bird-conf.sh <compute_node_ip> <route_reflector_ip> <bgp_as_number>
    ```

    And/or for IPv6 connectivity between compute hosts:

    ```bash
    calico-gen-bird6-conf.sh <compute_node_ipv4> <compute_node_ipv6> <route_reflector_ipv6> <bgp_as_number>
    ```

    You will also need to [configure your route reflector to allow connections from the compute node as a route reflector client](/calico/latest/networking/configuring/bgp).

    If you _are_ configuring a full BGP mesh, you need to handle the BGP configuration appropriately on each compute host. The scripts above can be used to generate a sample configuration for BIRD by replacing `<route_reflector_ip>` with the IP of one other compute host. This will generate the configuration for a single peer connection, which you can duplicate and update for each compute host in your mesh.

    To maintain connectivity between VMs if BIRD crashes or is upgraded, configure BIRD graceful restart. Edit the systemd unit file /usr/lib/systemd/system/bird.service (and bird6.service for IPv6):

    - Add `-R` to the end of the `ExecStart` line.
    - Add `KillSignal=SIGKILL` as a new line in the `[Service]` section.
    - Run `systemctl daemon-reload` to tell systemd to reread that file.

    Ensure that BIRD (and/or BIRD 6 for IPv6) is running and starts on reboot:

    ```bash
    service bird restart
    service bird6 restart
    chkconfig bird on
    chkconfig bird6 on
    ```
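
    After those edits, the `[Service]` section of bird.service would look something like this sketch (the `ExecStart` path and its other options are those of a stock RHEL bird package and may differ on your system; only `-R` and `KillSignal` are the changes described above):

    ```
    [Service]
    # -R enables BIRD graceful restart, preserving forwarding while BIRD restarts
    ExecStart=/usr/sbin/bird -f -u bird -g bird -R
    KillSignal=SIGKILL
    ```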
  11. Create /etc/calico/felix.cfg with the following content, where <ip> is the IP address of the etcd server.

    ```
    [global]
    DatastoreType = etcdv3
    EtcdAddr = <ip>:2379
    ```
  12. Restart the Felix service.

    ```bash
    service calico-felix restart
    ```

Configuration for etcd authentication

If your etcd cluster has authentication enabled, you must also configure the relevant Calico components with an etcd user name and password. You can create a single etcd user for Calico that has permission to read and write any key beginning with /calico/, or you can create specific etcd users for each component, with more precise permissions.

This table sets out where to configure each component of Calico for OpenStack, and the detailed access permissions that each component needs:

| Component | Configuration | Access |
| --- | --- | --- |
| Felix | `CALICO_ETCD_USERNAME` and `CALICO_ETCD_PASSWORD` variables in Felix's environment on each compute node. | See here |
| Neutron driver | `etcd_username` and `etcd_password` in the `[calico]` section of /etc/neutron/neutron.conf on each control node. | See here |
| DHCP agent | `etcd_username` and `etcd_password` in the `[calico]` section of /etc/neutron/neutron.conf on each compute node. | See here |
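
For example, for the Neutron driver and DHCP agent the table's settings translate into a `[calico]` section along these lines (the user name and password are placeholders for credentials you must create in etcd yourself):

```
[calico]
etcd_host = <ip>
etcd_username = <etcd-user>
etcd_password = <etcd-password>
```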