Starting up a TightGate-Pro cluster
The following instructions describe the procedure for starting a TightGate-Pro cluster with external Ceph data storage. The start-up sequence must be followed so that the cluster components start up optimally and the cluster is ready for use as quickly as possible.
The startup is carried out in several steps:
Step 1: Booting the TightGate-Pro time server
The first step is to start the TightGate-Pro server that serves as the time server for the Ceph servers. Which server this is should already be known; usually the first TightGate-Pro server takes this role. This TightGate-Pro will start, but cannot connect to the cluster, as the cluster is not yet running. That is fine at this stage, since only its time service is needed. The server is integrated into the cluster at a later point.
Note
The TightGate-Pro time server used by the Ceph servers can be determined on a running Ceph system by executing the following command as administrator root on the console: cat /etc/ntp.conf | grep server
The output of the command lists the IP addresses of the TightGate-Pro servers that are used as time servers.
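For orientation, the output might look like the following sketch (the IP addresses are exemplary and will differ per installation):

server 192.168.111.1 iburst
server 192.168.111.2 iburst

Whether a listed time server actually answers can be tested from a Ceph server with a query that does not adjust the clock, assuming the ntpdate utility is available:

ntpdate -q 192.168.111.1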
Step 2: Booting the Ceph servers
To activate the data storage for TightGate-Pro, all Ceph servers should now be switched on, simultaneously if possible. They start up and are ready as soon as a login prompt is displayed on the console. Once all Ceph servers have been started, check that they have all connected and that the cluster is running properly. The check is carried out as follows:
- Log in as administrator root on the console of the first Ceph server.
- Execute the command ceph osd tree.
- If the output of the command shows the status up for all OSDs, the cluster has started successfully (see the example output below).
Note! The Ceph servers need some time after a restart to bring each other up to the same data storage status. If the status is still not up after this, troubleshooting can begin.
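For orientation, a healthy output of ceph osd tree might look like the following sketch. The IDs, weights and host names are illustrative, and the column layout varies slightly between Ceph versions:

ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         3.00000  root default
-3         1.00000      host ceph01
 0  hdd    1.00000          osd.0        up   1.00000  1.00000
-5         1.00000      host ceph02
 1  hdd    1.00000          osd.1        up   1.00000  1.00000
-7         1.00000      host ceph03
 2  hdd    1.00000          osd.2        up   1.00000  1.00000

As a supplementary check, the command ceph health reports HEALTH_OK once the cluster is fully synchronised.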
Step 3: Booting the TightGate-Pro servers
Once the Ceph cluster is running and its members have found each other, the TightGate-Pro servers are started one after the other. Once they have all started and connected to the cluster, the TightGate-Pro server that was started in step 1 is restarted (reboot). When booting the TightGate-Pro servers, the following messages can be observed during the boot process, provided the connection to the Ceph servers succeeds (the IP addresses of the Ceph servers are exemplary):
Mounting Ceph from /home/user: trying monitors: 192.168.111.101:6789, 192.168.111.102:6789, 192.168.111.103:6789 mounted ... ready!
Mounting Ceph from /home/backuser/backup: trying monitors: 192.168.111.101:6789, 192.168.111.102:6789, 192.168.111.103:6789 mounted ... ready!
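Whether the Ceph file systems were actually mounted can additionally be verified on the console of a TightGate-Pro server, for example with the following command (a simple check; the mount points correspond to those in the boot messages above):

mount -t ceph

If the connection to the cluster succeeded, the output lists the Ceph file systems mounted at /home/user and /home/backuser/backup.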
Step 4: Function check
Once all TightGate-Pro servers have been started and connected to the cluster, the correct status can be checked as follows:
- Log in with the TightGate-Viewer as a normal user on the first TightGate-Pro and open the following URL in the Firefox browser: http://localhost/
- In the login window that appears, log in with the user name status and the corresponding password from the password list.
- On the status page, click the Cluster status menu item on the right. The cluster status of all TightGate-Pro servers and their services is displayed. If all TightGate-Pro servers are available and the status of all systems is green, the start-up has been completed successfully. A console-based alternative to this check is sketched below.
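The same status page can also be queried from a console on a TightGate-Pro server, assuming it is protected by HTTP basic authentication (an assumption suggested by the browser login window; the actual mechanism may differ):

curl -u status http://localhost/

curl then prompts for the password of the user status from the password list and returns the HTML of the status page.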