Clusters¶
A cluster is a group of machines running in the cloud. We use Terraform to build the cluster infrastructure.
Create a cluster¶
To create a cluster, the Genvid Bastion must be running. You can use the genvid-bastion script to manage it. The following command starts the minimal services required.
genvid-bastion install --bastionid mybastion --checkmodules --update-global-tfvars --loadconfig
--bastionid mybastion
A unique identifier for your bastion. It must:
- Be between 3 and 32 characters long.
- Contain only lowercase letters, numbers, and hyphens.
- Start with a letter.
--checkmodules
- Use this option to install new modules if none exist or update the ones already present.
--update-global-tfvars
- Use this option to update the global Terraform variables.
--loadconfig
- Use this option to load the jobs and logs.
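The --bastionid constraints above are easy to check before running the install command. A minimal sketch; the regex is our own rendering of the listed rules, not part of the Genvid tooling:

```python
import re

# 3-32 characters, lowercase letters/digits/hyphens only, starting with a letter.
BASTION_ID_RE = re.compile(r"^[a-z][a-z0-9-]{2,31}$")

def is_valid_bastion_id(bastion_id: str) -> bool:
    return BASTION_ID_RE.fullmatch(bastion_id) is not None

print(is_valid_bastion_id("mybastion"))  # True
print(is_valid_bastion_id("1bastion"))   # False: must start with a letter
print(is_valid_bastion_id("ab"))         # False: too short
```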
The next step is to open the Bastion-UI website to manage the clusters.
genvid-bastion monitor
On the Bastion-UI page, you may choose to change the Bastion name.
- Modify the Bastion name.
- Click on Update.
On the Terraform page:
- Click Add Config.
- Enter a unique ID.
- Choose cluster as the category.
You can select another backend if needed. For now, you can stick with the default values. Some backends may require configuring variables.
Clone a cluster¶
It is also a good practice to use the cloning function when you want to create different clusters with a similar configuration. You can do so by clicking on the clone button on the Terraform configuration view.
Cluster status¶
After creating a cluster, its status is EMPTY. This means that the cluster needs a module. The cluster statuses are:
- VOID: The cluster doesn’t exist.
- EMPTY: Cluster created but without a module.
- DOWN: Cluster is initialized but resources are empty.
- UP: Terraform apply has succeeded.
- BUSY: A command is currently running.
- ERROR: An error occurred when checking the cluster.
- INVALID: Invalid or unknown status.
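Scripts that drive a cluster often need to wait for one of these states before continuing. A minimal polling sketch around the states above; get_status is a hypothetical callable (in practice you would query the Bastion API), not part of the Genvid tooling:

```python
import time

# States after which polling can stop: success (UP) or failure (ERROR/INVALID).
TERMINAL_STATES = {"UP", "ERROR", "INVALID"}

def wait_for_cluster(get_status, interval=5.0, timeout=600.0):
    """Poll get_status() until the cluster reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval)
    raise TimeoutError("cluster did not reach a terminal state in time")
```

For example, a stub that walks through EMPTY, BUSY, then UP would make wait_for_cluster return "UP".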
Terraform module¶
Before configuring the cluster, you must initialize it with a module. This will copy the module template and will download any modules and plugins required by it.
- Click the Commands sub link.
- Select the SDK-{version}/basic/basic_cluster module.
- Click the Import module button.
You will see an initialization log appear. This step should last only a few seconds.
Terraform settings¶
We use Terraform to build the cluster infrastructure, so these setting values define how that infrastructure is configured. Click the Settings sub link for your cluster and edit the settings.
For ease of use, you can also download the settings on the page to a JSON file. You can drag and drop the edited file onto the form to change multiple settings at once or revert to a previous configuration. Please note that any unsaved configuration will be lost.
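Editing the downloaded file can also be scripted. A minimal sketch, assuming the downloaded settings are a flat JSON object of setting name to value (the key name follows the settings table in this section):

```python
import json

def bump_game_servers(path: str, count: int) -> None:
    """Set instance_game_count in a downloaded settings JSON file."""
    with open(path) as f:
        settings = json.load(f)
    settings["instance_game_count"] = count
    with open(path, "w") as f:
        json.dump(settings, f, indent=2)
```

You would then drag and drop the modified file back onto the settings form.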
Setting | Description | Default Value |
---|---|---|
admin_password | The administrator password for the Windows machine. The Windows machine is only accessible from the other server instances and from other machines that share the same external IP as yours [2]. | 1genvid6 |
ami_prefix | A common prefix for the game AMI. This is the prefix we use in the Saving the AMI section. You can change it if you want to experiment with your own AMI. | default |
ami_version | A filter for finding the right version of the AMIs. You can change it if you want to bind your cluster to a different SDK version. | Current SDK version |
datacenter | The datacenter name Nomad uses to group local agents together. | default |
force_az | Force all instances to be configured in a given availability zone. It should be the same as the one in Starting the initialization AMI. The availability zone has to be specified in full, such as us-east-1d. If a zone doesn’t have any G2 or C5 instances available, you must change to a zone that does. Currently, there’s no way to know in advance if an instance will be available in a specific zone. | none |
instance_encoding_count | Specifies the number of encoding servers that Terraform should create. | 1 |
instance_encoding_type | Specifies the type of AWS instance that Terraform should use for encoding servers. | c5.2xlarge |
instance_game_count | Specifies the number of game servers that Terraform should create. | 1 |
instance_game_type | Specifies the type of AWS instance that Terraform should use for game servers. | g2.2xlarge |
instance_internal_count | Specifies the number of internal servers that Terraform should create. | 1 |
instance_internal_type | Specifies the type of AWS instance that Terraform should use for internal servers. | t2.small |
instance_public_count | Specifies the number of public servers that Terraform should create. | 1 |
instance_public_type | Specifies the type of AWS instance that Terraform should use for public servers. | t2.small |
instance_server_count | Specifies the number of admin servers that Terraform should create. | 1 |
instance_server_type | Specifies the type of AWS instance that Terraform should use for admin servers. | t2.small |
region | The AWS region where you want to create your cluster. You can use any region that contains G2 instances [1]. | us-east-1 |
namespace | The name of the current deployment project on your cluster. This value is part of the AWS Tags configuration, making it easier to group resources together in the AWS Console. | deployment |
stage | The name of the stage your cluster should belong to. This value is part of the AWS Tags configuration, making it easier to group resources together in the AWS Console. | dev |
vpc_cidr_block | Specifies the CIDR block Terraform uses to create new subnets in a newly created VPC. It works in conjunction with vpc_auto_create. You can switch it to any CIDR block you would prefer. | 10.0.0.0/16 |
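To get a feel for how much address space the default vpc_cidr_block of 10.0.0.0/16 provides, Python's ipaddress module can enumerate candidate subnets. This is purely illustrative; how subnets are actually carved out of the block is up to the Terraform configuration:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)       # 65536 addresses in the VPC block

# Example: split the /16 into /24 subnets.
subnets = list(vpc.subnets(new_prefix=24))
print(subnets[0], subnets[1])  # 10.0.0.0/24 10.0.1.0/24
```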
The next 2 settings' values are provided automatically to the Terraform configuration via environment variables, so they don't need to be set manually.
- cluster: The name of your cluster.
- bastionid: The name of your Bastion.
The basic_cluster module takes care of everything, creating a new VPC, key pair, etc. In case you want more control, or your permissions don't allow you to create AWS VPCs or Roles, we also have the minimal_cluster configuration, which allows you to provide those elements yourself:
Setting | Description | Default Value |
---|---|---|
key_pair_private | Specifies the private part of the SSH key used to manage the EC2 instances. The key should look something like this: -----BEGIN RSA PRIVATE KEY-----\n…\n-----END RSA PRIVATE KEY-----\n | null |
key_pair_public | Specifies the public part of the SSH key used to manage the EC2 instances. The key should look something like this: ssh-rsa … | null |
iam_policy_name_game | Specifies the IAM policy that Terraform should attach to game servers. The default value lets Terraform create new policies. | null |
iam_policy_name_server | Specifies the IAM policy that Terraform should attach to Genvid servers. The default value lets Terraform create new policies. | null |
vpc_id | Specifies the VPC ID to use instead of creating a new one, which enables sharing VPCs between clusters. A vpc_id should look something like this: vpc-… | null |
See the Terraform Configuration Documentation for more information on how to set up those variables globally or locally.
[1] | The region currently must be the same as the default region in your AWS configuration. |
[2] | Future versions of the Cluster will use a bastion host instead for increased security. |
Applying Terraform infrastructure¶
Terraform Apply is the operation that builds the cluster infrastructure.
- Click Plan Apply to create an execution plan.
- Verify the changes match what you need.
- After verifying the changes, click Apply to execute the plan.
You should see the log starting to appear.
- If Terraform fails to build the infrastructure:
- Check the error message.
- Update the settings.
- Apply again.
At the end of this step, there is a cluster running in the cloud. When you check All Configs on the Terraform page, the status is UP. This means that the infrastructure is up, but the Genvid SDK is not yet present on it.
Important
Beginning in version 1.19.0, instances provision in the background after Terraform creates them. You need to wait until an instance registers as a Nomad client in the Cluster UI before using it. In particular, Windows instances may take up to 30 minutes before they are ready.
Note
The public machine IP is not the same as the one used when setting up your AMI. To access your game machine, use the IP from game_public_ips with the same password set during the AMI setup.
To retrieve the game_public_ips:
- Make sure the cluster is UP.
- Go to the Commands page of your cluster.
- Click the OUTPUT button.
You'll find the game_public_ips listed in the JSON file.
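If you save that JSON file, extracting the IPs can be scripted. A minimal sketch, assuming the file follows Terraform's usual output shape where each output is wrapped in a "value" field (adjust if the Bastion UI flattens it); the sample IP is a placeholder from the documentation range:

```python
import json

def game_public_ips(output_json: str):
    """Return the game_public_ips list from a Terraform output JSON string."""
    outputs = json.loads(output_json)
    entry = outputs["game_public_ips"]
    # Terraform usually nests the data under "value"; fall back to a flat list.
    return entry["value"] if isinstance(entry, dict) else entry

sample = '{"game_public_ips": {"value": ["203.0.113.10"]}}'
print(game_public_ips(sample))  # ['203.0.113.10']
```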
Setting up the SDK on the running cluster¶
Once a cluster status is UP, you will need to set up the Genvid SDK on it and run your project there.
Follow the SDK in the cloud guide to do so.
Destroying Terraform infrastructure¶
You should only destroy the Terraform infrastructure of a cluster when you no longer need to run your project on it. To destroy the Terraform infrastructure, click the Plan Destroy button. If you're sure you want to destroy the current Terraform infrastructure, click the Destroy button to confirm.
See the Destroy Infrastructure documentation for more information.
Deleting a cluster¶
To delete a cluster, you have to destroy its Terraform infrastructure first. Then go to All Configs and click the Delete button.
Using custom repositories¶
You can add and remove individual Terraform repositories in the bastion. Each repository contains one or more modules that can be used to instantiate a cluster. To list the current repositories, you can use:
genvid-clusters module-list
To add a new module, just run:
genvid-clusters module-add -u {URL} {name}
Where {URL} can be any source compatible with go-getter, including local files, and {name} is the destination folder for this repository on the bastion. After the URL is cloned into the bastion repository, it will be available as a source under modules/module. See Terraform's Module Configuration for more details on using modules.
Bastion remembers the origin of each module, so you can easily update them by running:
genvid-clusters module-update [name]
The name is optional. If you don't provide it, all repositories will be updated.
Removing a module is as simple as running:
genvid-clusters module-remove {name}
See also
- genvid-clusters
- Genvid Clusters Management script documentation.
- Bastion API for Terraform
- Bastion API for Terraform
- Terraform’s Module Configuration
- Documentation of Modules on Terraform
- go-getter
- A library for fetching URLs in Go.