Managed Clusters
Learn about Managed Clusters and get started quickly with custom backend solutions.
If you're new to Edgegap, we recommend starting with:
Getting Started - Servers (Unity),
Getting Started - Servers (Unreal Engine).
Managed Clusters make hosting self-managed game services and backends easy and fast. You prepare the service image, and we provide a high-availability, resilient cloud environment to run it:
player authentication,
data storage - accounts, progression, inventory, rewards, etc.,
social services - chat, clans, leaderboards, tournaments, etc.,
custom matchmaking - using Advanced Matchmaker, Nakama, or others,
serverless compute - managed functions as a service (alt. cloudscript, lambda),
and more.
Private clusters ensure your services have dedicated compute to serve your players 24/7.
If you see an opportunity for improvement, please let us know in our Community Discord.
We hope you will enjoy a smooth experience. 🚀
To help make your server reliable, we use Docker, containerization software that ensures all of your server's code dependencies, down to the operating-system level, are always exactly the same, no matter how or where the server is launched.
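For example, you can build and run your service image locally with Docker to confirm it behaves identically before uploading it; the image name, tag, and port below are illustrative placeholders, not values from this guide:

```bash
# Build an image from the Dockerfile in the current directory
# (the name and tag are placeholders for your own service).
docker build -t my-backend-service:0.1.0 .

# Run the image locally; the container sees the same dependencies
# it will have in the cluster. Port 8080 is an assumed example.
docker run --rm -p 8080:8080 my-backend-service:0.1.0
```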
Kubernetes, also known as K8s, is an open source system for automating deployment, scaling, and management of containerized applications (Docker Images). It groups containers that make up an application into logical units for easy management and discovery.
Edgegap Managed Clusters provide a Kubernetes API for administration purposes.
With over 1 million users, K8s Lens is the most popular Kubernetes IDE in the world. Connect to clusters, explore, gain insights, learn, and take action when needed. Lens provides all the information from your workloads and resources in real time, always in the right context.
Edgegap Cluster Kubernetes API can be used through Lens or other Kubernetes IDEs.
Helm is the best way to find, share, and use software built for Kubernetes. Helm helps you manage Kubernetes applications - Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish - so start using Helm and stop the copy-and-paste.
Installing Helm CLI provides developers with a simple interface to manage their cluster packages.
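As a sketch of what that looks like day to day, once the Edgegap repository has been added (see the setup steps below), a few standard Helm CLI commands cover most package management; the matchmaker namespace here is just an example:

```bash
# List the chart repositories configured on this machine.
helm repo list

# Search a repository for available charts and their versions.
helm search repo edgegap-public

# Show the releases currently installed in a given namespace.
helm list --namespace matchmaker
```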
☑️ Register for your free Edgegap account and upgrade to the Pay As You Go tier to unlock Clusters.
☑️ Navigate to the Managed Clusters page.
☑️ Click on Create Cluster first, then input:
Label for your cluster so you can easily find it later,
Cluster Size - see ✔️ Introduction.
We strongly recommend creating separate clusters for your dev & production environments.
☑️ Review estimated cost and click Create Cluster to start your new cluster.
☑️ Once the cluster is ready, click Kubeconfig to download your configuration and credentials for connecting to and administering your new cluster.
☑️ Move your kubeconfig file where kubectl can find it.
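For example (file paths are illustrative), you can either point kubectl at the downloaded file or make it your default configuration:

```bash
# Option A: use the downloaded file for the current shell session only.
export KUBECONFIG=~/Downloads/my-cluster-kubeconfig.yaml

# Option B: make it the default configuration for kubectl
# (back up any existing ~/.kube/config first).
mkdir -p ~/.kube
cp ~/Downloads/my-cluster-kubeconfig.yaml ~/.kube/config
```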
☑️ Lens users: import your kubeconfig file.
☑️ Test your cluster connection with the command kubectl get nodes:

```bash
kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
lke334087-533013-294dcfe70000   Ready    <none>   10m   v1.31.0
lke334087-533013-4e69edc10000   Ready    <none>   10m   v1.31.0
lke334087-533013-50bf39880000   Ready    <none>   10m   v1.31.0
```
🙌 Congratulations, you’ve completed Managed Cluster setup! You may now install your services.
Follow these steps to host your OpenMatch-based Advanced Matchmaker on a Managed Cluster.
☑️ Create a DNS record of type A in your DNS provider (e.g. Cloudflare) and note the URL for later. The external IP for the DNS record can be found in Lens under Services / ingress-nginx-controller.
☑️ Verify your DNS is set up correctly by performing a lookup using DNSchecker.
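You can also check propagation from the command line; the hostname below is a placeholder for your own record:

```bash
# Query the A record directly; it should return your ingress external IP.
dig +short matchmaker.example.com A

# nslookup is an alternative available on most systems.
nslookup matchmaker.example.com
```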
☑️ Create a file named values.yaml with the following contents (use your own values):
```yaml
isProductionEnvironment: false
director:
  credential:
    registry: <YOUR_DIRECTOR_REGISTRY>
    username: <YOUR_DIRECTOR_REGISTRY_USERNAME>
    password: <YOUR_DIRECTOR_REGISTRY_PASSWORD>
  image: <DIRECTOR_IMAGE>
  env: {
    "KEY": "VALUE"
  }
mmf:
  credential:
    registry: <MATCHMAKER_FUNCTION_REGISTRY>
    username: <MATCHMAKER_FUNCTION_REGISTRY_USERNAME>
    password: <MATCHMAKER_FUNCTION_REGISTRY_PASSWORD>
  image: <MATCHMAKER_FUNCTION_IMAGE>
  env: {
    "KEY": "VALUE"
  }
frontend:
  credential:
    registry: <FRONTEND_REGISTRY>
    username: <FRONTEND_REGISTRY_USERNAME>
    password: <FRONTEND_REGISTRY_PASSWORD>
  externalHostName: <YOUR_CLOUDFLARE_HOST_NAME> # e.g. example.test.com
  image: <FRONTEND_IMAGE>
  env: {
    "KEY": "VALUE"
  }
# Global configurations that are visible to all subcharts
global:
  kubernetes:
    resources:
      requests:
        memory: 100Mi
        cpu: 100m
      limits:
        memory: 100Mi
        cpu: 100m
```
Replace the <VALUES> placeholders above with your own values.
☑️ Add the Edgegap repository to your list of repositories:

```bash
helm repo add edgegap-public https://registry.edgegap.com/chartrepo/edgegap-public
```
☑️ Deploy the Advanced Matchmaker helm chart:

```bash
helm upgrade --install \
  --namespace matchmaker --create-namespace -f <FILE_PATH>/values.yaml \
  --version 1.0.1 <RELEASE_NAME> edgegap-public/open-match-edgegap
```
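As a quick check from the CLI (Lens works too), you can confirm the release installed and its pods started:

```bash
# Inspect the helm release and the pods it created.
helm status <RELEASE_NAME> --namespace matchmaker
kubectl get pods --namespace matchmaker
```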
🙌 Congratulations, you’ve completed Advanced Matchmaker setup!
Follow these steps to update your service hosted in the Managed Cluster:
☑️ Update your values.yaml file with the new values.
☑️ Update your helm chart using this command:
```bash
helm upgrade --reuse-values \
  --namespace matchmaker -f <FILE_PATH>/values.yaml \
  --version 1.0.1 <RELEASE_NAME> edgegap-public/open-match-edgegap
```
☑️ Reload your changes by deleting the updated pods (director, mmf, frontend); Kubernetes will automatically restart them using the new helm chart values. Note that this update will cause a short downtime while the pods restart.
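If you prefer the command line over deleting pods in Lens, a rolling restart achieves the same reload; the deployment names below follow the <RELEASE_NAME>-component pattern used in the automation script further down:

```bash
# Restart each deployment so its pods pick up the updated chart.
kubectl rollout restart deployment/<RELEASE_NAME>-director --namespace matchmaker
kubectl rollout restart deployment/<RELEASE_NAME>-mmf --namespace matchmaker
kubectl rollout restart deployment/<RELEASE_NAME>-custom-frontend --namespace matchmaker
```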
🙌 Congratulations, you’ve completed Advanced Matchmaker update!
Automate updating your services by adding this shell script to your deployment pipeline:
```bash
#!/bin/bash
RELEASE_NAME="<RELEASE_NAME>"
NAMESPACE="matchmaker" # Change this if you changed the namespace.

helm upgrade --reuse-values -f <FILE_PATH>/values.yaml --namespace $NAMESPACE --version 1.0.1 $RELEASE_NAME edgegap-public/open-match-edgegap

echo "Installing redis-tools"
apt-get update
apt-get install -y redis-tools

DIRECTOR_DEPLOYMENT_NAME="$RELEASE_NAME-director"
MMF_DEPLOYMENT_NAME="$RELEASE_NAME-mmf"
CUSTOM_FRONTEND_DEPLOYMENT_NAME="$RELEASE_NAME-custom-frontend"
REDIS_HOST="$RELEASE_NAME-redis-master"

declare -A replicas

# For each deployment (director, mmf, custom-frontend), stop the pods.
for deployment in $DIRECTOR_DEPLOYMENT_NAME $MMF_DEPLOYMENT_NAME $CUSTOM_FRONTEND_DEPLOYMENT_NAME
do
    echo "Stopping pods for deployment: $deployment"
    # Remember the original replica count before scaling down to zero.
    replicas[$deployment]=$(kubectl get deployment $deployment -o=jsonpath='{.spec.replicas}' --namespace $NAMESPACE)
    kubectl scale deployment/$deployment --replicas=0 --namespace $NAMESPACE
done

# Wait until the pods are terminated.
for deployment in $DIRECTOR_DEPLOYMENT_NAME $MMF_DEPLOYMENT_NAME $CUSTOM_FRONTEND_DEPLOYMENT_NAME
do
    echo "Waiting for pods to be terminated for deployment: $deployment"
    kubectl wait --for=delete pod -l app=$deployment --timeout=60s --namespace $NAMESPACE
    # Check if the wait command was successful. If not, exit the script.
    if [ $? -ne 0 ]; then
        echo "Failed to wait for pods to be terminated for deployment: $deployment"
        exit 1
    fi
done

# Clean up the redis database.
echo "Cleaning up redis database"
redis-cli -h $REDIS_HOST flushall

# Rescale each deployment back to its original replica count.
for deployment in $DIRECTOR_DEPLOYMENT_NAME $MMF_DEPLOYMENT_NAME $CUSTOM_FRONTEND_DEPLOYMENT_NAME
do
    echo "Rescaling pods to ${replicas[$deployment]} for deployment: $deployment"
    kubectl scale deployment/$deployment --replicas=${replicas[$deployment]} --namespace $NAMESPACE
done
```
For some clients, the recommended Let's Encrypt certificate validation may fail with this error:
```
Curl error 60: Cert verify failed. Certificate has expired. UnityTls error code: 7
```
Updating your operating system may resolve issues with an outdated root certificate authority.
As a last resort, game clients may implement a custom certificate handler function:
```csharp
using System.Security.Cryptography.X509Certificates;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class CustomCertificateHandler : CertificateHandler
{
    // Full PEM-encoded certificate, including the BEGIN/END markers.
    private readonly string EXPECTED_CERT = "-----BEGIN CERTIFICATE-----<key>-----END CERTIFICATE-----\r\n";

    protected override bool ValidateCertificate(byte[] certificateData)
    {
        // Compare the server certificate's thumbprint against the pinned certificate's.
        X509Certificate2 certificate = new X509Certificate2(certificateData);
        X509Certificate2 expectedCert = new X509Certificate2(Encoding.ASCII.GetBytes(EXPECTED_CERT));

        Debug.Log("certificate.Thumbprint: " + certificate.Thumbprint);
        Debug.Log("expectedCert.Thumbprint: " + expectedCert.Thumbprint);
        return certificate.Thumbprint == expectedCert.Thumbprint;
    }
}
```
Usage:

```csharp
UnityWebRequest request = UnityWebRequest.Get(...);
request.certificateHandler = new CustomCertificateHandler();
request.SendWebRequest();
request.certificateHandler.Dispose();
```
We recommend storing the EXPECTED_CERT value in your own file storage and retrieving it at runtime, so you can update it without releasing a game client update.
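To obtain the PEM value for EXPECTED_CERT in the first place, one option is to download the server's current certificate with openssl; the hostname below is a placeholder for your own domain:

```bash
# Print the server's leaf certificate as a PEM block you can pin.
openssl s_client -connect matchmaker.example.com:443 \
  -servername matchmaker.example.com </dev/null | openssl x509 -outform PEM
```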
Follow these steps to host your own Nakama Game Backend on Managed Clusters:
☑️ Create a DNS record of type A in your DNS provider (e.g. Cloudflare) and note the URL for later. The external IP for the DNS record can be found in Lens under Services / ingress-nginx-controller.
☑️ Verify your DNS is set up correctly by performing a lookup using DNSchecker.
☑️ Create a file named values.yaml with the following contents (use your own values):
```yaml
isProductionEnvironment: true
# The external hostname for the Nakama server - DNS record from last step
externalHostName:
nakama:
  # The version of Nakama to deploy.
  # See https://hub.docker.com/r/heroiclabs/nakama/tags for available versions.
  version: 3.26.0
  # Username and password for the Nakama console
  username:
  password:
```
Fill in the empty fields above with your own values.
☑️ Deploy the Nakama helm chart:

```bash
helm upgrade --install \
  --namespace nakama --create-namespace -f <FILE_PATH>/values.yaml \
  --version 1.0.0 <RELEASE_NAME> oci://registry-1.docker.io/edgegap/heroiclabs-nakama
```
☑️ Lens: verify installation in the Workloads / Deployments section; nakama should be running.
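Alternatively, a quick CLI check of the same thing:

```bash
# The nakama deployment should report ready replicas.
kubectl get deployments --namespace nakama
kubectl get pods --namespace nakama
```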
✅ Connect to your Nakama Console with the URL and credentials from your values.yaml file.
🙌 Congratulations, you’ve completed self-hosted Nakama Game Backend setup!
Follow these steps to update your service hosted in the Managed Cluster:
☑️ Update your values.yaml file with the new values.
☑️ Update your helm chart using this command:
```bash
helm upgrade --reuse-values \
  --namespace nakama -f <FILE_PATH>/values.yaml \
  --version 1.0.0 <RELEASE_NAME> oci://registry-1.docker.io/edgegap/heroiclabs-nakama
```
☑️ Reload your changes by deleting the updated pods; Kubernetes will automatically restart them using the new helm chart values. Note that this update will cause a short downtime while the pods restart.
🙌 Congratulations, you’ve completed the Nakama Game Backend update!
Prepare for success and optimize after launch, so you don’t block your players on release day.
Changing cluster size is currently not possible on running clusters. See blue/green deployment for more.
Your success is our priority. If you'd like to send custom requests, ask for missing critical features, or express any thoughts, please reach out in our Community Discord.
See Getting Started for your first steps with Matchmaker, game integration, and detailed examples.
We currently offer 3 private cluster tiers to cater to everybody’s needs:
| Best Suited For | enthusiasts, solo developers | commercial releases | high-traffic launches |
| --- | --- | --- | --- |
| Resources | 1 vCPU + 2GB RAM | 6 vCPU + 12GB RAM | 18 vCPU + 48GB RAM |
| Redundancy | 1x virtual node | 3x virtual nodes | 3x virtual nodes |
| Price (hourly) | $0.0312 | $0.146 | $0.548 |
| Price (30 days) | $22.464 | $105.12 | $394.56 |
See before setting up your services on Managed Clusters.
☑️ Install the nginx Ingress controller for receiving client requests and passing them to services in the cluster:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```
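Once the controller is up, the external IP needed for the DNS step above can also be read from the CLI:

```bash
# The EXTERNAL-IP column is the address to use for your DNS A record.
kubectl get service ingress-nginx-controller --namespace ingress-nginx
```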
☑️ Lens: verify installation in the Services / Network section; nginx should be running.
☑️ Install a Certificate Manager to support HTTPS requests from game clients:

```bash
helm repo add jetstack https://charts.jetstack.io --force-update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.16.1 --set crds.enabled=true
```
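A quick CLI alternative to Lens for confirming the install:

```bash
# All cert-manager pods should reach the Running state.
kubectl get pods --namespace cert-manager
```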
☑️ Lens: verify installation in the Services / Network section; cert-manager should be running.
☑️ Write a Cluster Issuer file, remembering to replace <YOUR_EMAIL> below:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: <YOUR_EMAIL>
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
```
Use letsencrypt-prod for the privateKeySecretRef on your production cluster.
☑️ Create the Cluster Issuer from the command line:

```bash
kubectl create -f <FILE_PATH>/issuer.yaml
```
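You can then confirm from the CLI that the issuer exists and has registered with the ACME server:

```bash
# READY should become True once registration with Let's Encrypt succeeds.
kubectl get clusterissuer letsencrypt-staging
kubectl describe clusterissuer letsencrypt-staging
```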
☑️ Lens: verify in the Custom Resources / cert-manager.io section that the cluster issuer was created.