EGS Troubleshooting Bundle Generator
Overview
The EGS troubleshooting script (egs-troubleshoot.sh) generates a diagnostic bundle for EGS deployments with a single command. This guide is for users who need to collect logs, configurations, CRDs, and cluster state for troubleshooting and support purposes.
Features
- One-Command Generation: Generate diagnostic bundles with a single curl command
- Auto-Detection: Automatically detects cluster type (Controller/Worker/Standalone)
- Comprehensive Collection: Collects all EGS-related resources, CRDs, logs, and configurations
- Smart Discovery: Automatically discovers EGS namespaces (project namespaces, slice namespaces)
- Fast Collection: Skip-logs option for faster bundle generation
- S3 Upload: Direct upload to AWS S3 with presigned URL generation
- Organized Output: Well-structured bundle with a summary report
- Multi-Cluster: Support for collecting from controller and worker clusters separately
Table of Contents
| Section | Description |
|---|---|
| Quick Start | Get started with basic bundle generation |
| Prerequisites | Required tools before running the script |
| Command Options | All available options and flags |
| What's Collected | Detailed list of collected resources |
| Multi-Cluster Collection | Collecting from controller and workers |
| S3 Upload | Upload bundles to AWS S3 |
| Bundle Structure | Understanding the output directory |
| Examples | Common usage examples |
| Troubleshooting the Script | Common issues and solutions |
Quick Start
Simplest Bundle Generation
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file (tilde is not expanded inside quotes, so use $HOME)
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH
That's it! The script will:
- Auto-detect your cluster type (Controller/Worker/Standalone)
- Discover all EGS-related namespaces
- Collect cluster information, nodes, CRDs, and resources
- Collect container logs (current and previous)
- Collect Helm releases and values
- Generate a summary report
- Create a compressed archive (.tar.gz)
Prerequisites
The script requires the following tools:
| Tool | Purpose | Required |
|---|---|---|
| kubectl | Kubernetes CLI | Yes |
| jq | JSON processor | Yes |
| tar | Archive creation | Yes |
| gzip | Compression | Yes |
| aws | S3 upload | Optional |
| helm | Helm release info | Optional |
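The pre-flight check below is a small sketch (not part of the script itself) that confirms the tools above are on your PATH before you run the generator:

```shell
# Check required tools; warn about missing optional ones.
missing=""
for tool in kubectl jq tar gzip; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
for tool in aws helm; do
  command -v "$tool" >/dev/null 2>&1 || echo "optional tool not found: $tool"
done
if [ -n "$missing" ]; then
  echo "missing required tools:$missing"
else
  echo "all required tools found"
fi
```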
Command Options
Basic Options
| Option | Short | Description | Default |
|---|---|---|---|
| --help | -h | Show help message | - |
| --version | -v | Show script version | - |
| --verbose | - | Enable verbose output | false |
| --kubeconfig PATH | -k | Path to kubeconfig file | $KUBECONFIG or ~/.kube/config |
| --context CONTEXT | -c | Kubernetes context to use | Current context |
| --output-dir DIR | -o | Output directory for the bundle | ./egs-troubleshoot-bundle-TIMESTAMP |
| --namespace NS | -n | Additional namespace to include | - |
| --cluster-name NAME | - | Identifier for this cluster in the bundle | Auto-detected |
Collection Options
| Option | Description | Default |
|---|---|---|
| --all-namespaces | Collect from all namespaces (use with caution) | false |
| --include-secrets | Include secrets in the bundle (base64-encoded) | false |
| --log-lines NUM | Number of log lines per container | 1000 |
| --skip-logs | Skip collecting container logs | false |
| --skip-metrics | Skip collecting Prometheus metrics | false |
| --no-previous-logs | Don't collect previous container logs | false |
S3 Upload Options
| Option | Description | Default |
|---|---|---|
| --s3-bucket BUCKET | S3 bucket name for upload | - |
| --s3-region REGION | S3 bucket region | us-east-1 |
| --s3-prefix PREFIX | S3 key prefix for the bundle | - |
| --aws-profile PROFILE | AWS profile to use | - |
What's Collected
Cluster Information
| Category | Resources |
|---|---|
| Cluster Info | Kubernetes version, API resources, component statuses |
| Nodes | Node list, details, labels, annotations, capacity, allocatable resources, conditions, taints |
| GPU Info | GPU node details, NVIDIA labels, GPU capacity |
EGS CRDs (Custom Resource Definitions)
The script collects all CRDs from the following API groups:
| API Group | Description | Resources |
|---|---|---|
| controller.kubeslice.io | KubeSlice Controller | clusters, projects, sliceconfigs, slicegateways, slicenodeaffinities, sliceresourcequotas, slicerolebindings, serviceexportconfigs, gpuprovisioningrequests, workspaces, clustergpuallocations |
| worker.kubeslice.io | KubeSlice Worker | gpuworkloads, workerserviceimports, workersliceconfigs, workerslicegateways, workerslicegwrecyclers, workerslicenodeaffinities, workersliceresourcequotas, workerslicerolebindings, workerslicegpuprovisioningrequests, workloadplacements, workerclustergpuallocations |
| networking.kubeslice.io | KubeSlice Networking | slices, slicegateways, serviceexports, serviceimports, slicenodeaffinities, sliceresourcequotas, slicerolebindings, vpcserviceimports |
| inventory.kubeslice.io | KubeSlice Inventory | clustergpuallocations, workerclustergpuallocations |
| aiops.kubeslice.io | KubeSlice AI/Ops | clustergpuallocations, gpuprovisioningrequests, workloadplacements |
| gpr.kubeslice.io | GPU Provisioning | gprautoevictions, gprtemplatebindings, gprtemplates, gpuprovisioningrequests, workloadplacements, workloadtemplates, workspacepolicies |
| monitoring.coreos.com | Prometheus Operator | servicemonitors, podmonitors, prometheusrules, alertmanagerconfigs |
| nvidia.com | NVIDIA GPU Operator | clusterpolicies, nvidiadrivers |
| nfd.k8s-sigs.io | Node Feature Discovery | nodefeatures, nodefeaturerules |
| serving.kserve.io | KServe | inferenceservices, servingruntimes, clusterservingruntimes |
| networkservicemesh.io | Network Service Mesh | networkservices, networkserviceendpoints |
| spire.spiffe.io | SPIRE/SPIFFE | spireservers, spireagents |
| gateway.networking.k8s.io | Gateway API | gateways, gatewayclasses, httproutes |
| crd.projectcalico.org | Calico | networkpolicies, globalnetworkpolicies |
Namespace Resources
For each EGS-related namespace, the script collects:
| Resource Type | Description |
|---|---|
| pods | All pods with status and details |
| deployments | Deployment configurations and status |
| daemonsets | DaemonSet configurations |
| statefulsets | StatefulSet configurations |
| replicasets | ReplicaSet details |
| jobs | Job configurations and status |
| cronjobs | CronJob configurations |
| configmaps | ConfigMap data (non-sensitive) |
| services | Service configurations |
| endpoints | Endpoint details |
| serviceaccounts | ServiceAccount configurations |
| roles | Role definitions |
| rolebindings | RoleBinding configurations |
| ingresses | Ingress configurations |
| networkpolicies | NetworkPolicy rules |
| persistentvolumeclaims | PVC details |
| events | Recent events |
Namespaces Discovered
The script automatically discovers and collects from:
| Namespace Pattern | Description |
|---|---|
| kubeslice-controller | KubeSlice Controller namespace |
| kubeslice-system | KubeSlice Worker namespace |
| kubeslice-* | Project namespaces (e.g., kubeslice-avesha, kubeslice-vertex) |
| egs-monitoring | Prometheus/Grafana monitoring |
| egs-gpu-operator | NVIDIA GPU Operator |
| kt-postgresql | KubeTally PostgreSQL |
| minio | MinIO for controller replication |
| spire | SPIRE for identity |
| Slice namespaces | Application namespaces onboarded to slices |
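As an illustration only (the script's actual matching logic may differ), the discovery behaves roughly like a pattern match over namespace names. The sample list below stands in for live output; on a real cluster you could pipe `kubectl get namespaces -o custom-columns=:metadata.name --no-headers` into the same filter:

```shell
# Illustrative sketch of the namespace discovery pattern; not the script's exact logic.
printf '%s\n' \
  kubeslice-controller kubeslice-system kubeslice-avesha \
  egs-monitoring egs-gpu-operator kt-postgresql minio spire default \
| grep -E '^(kubeslice-|egs-|kt-postgresql$|minio$|spire$)'
# 'default' is filtered out; the EGS-related namespaces pass through.
```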
Additional Data
| Category | Details |
|---|---|
| Logs | Container logs (current and previous, configurable lines) |
| Helm | Helm releases, values, and history |
| Storage | StorageClasses, PersistentVolumes, PersistentVolumeClaims |
| Network | Network policies, services, endpoints |
| Metrics | Node metrics, pod metrics (if metrics-server available) |
Multi-Cluster Collection
For EGS deployments with multiple clusters (controller + workers), run the script separately on each cluster.
Scenario 1: Single Cluster Bundle
Use case: Standalone cluster or collecting from one cluster at a time.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export CLUSTER_NAME="my-cluster" # Name for your cluster (optional)
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--cluster-name $CLUSTER_NAME
Scenario 2: Controller Cluster Bundle
Use case: Collect diagnostic data from the EGS controller cluster.
# ============ CUSTOMIZE THESE VALUES ============
export CONTROLLER_KUBECONFIG="$HOME/.kube/controller-kubeconfig.yaml"  # Controller kubeconfig path
export CONTROLLER_NAME="egs-controller" # Controller cluster name
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $CONTROLLER_KUBECONFIG \
--cluster-name $CONTROLLER_NAME
Scenario 3: Worker Cluster Bundles
Use case: Collect diagnostic data from worker clusters.
Worker 1
# ============ CUSTOMIZE THESE VALUES ============
export WORKER1_KUBECONFIG="$HOME/.kube/worker1-kubeconfig.yaml"  # Worker 1 kubeconfig path
export WORKER1_NAME="worker-1" # Worker 1 cluster name
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $WORKER1_KUBECONFIG \
--cluster-name $WORKER1_NAME
Worker 2
# ============ CUSTOMIZE THESE VALUES ============
export WORKER2_KUBECONFIG="$HOME/.kube/worker2-kubeconfig.yaml"  # Worker 2 kubeconfig path
export WORKER2_NAME="worker-2" # Worker 2 cluster name
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $WORKER2_KUBECONFIG \
--cluster-name $WORKER2_NAME
Scenario 4: Complete Multi-Cluster Collection
Use case: Collect bundles from controller and all workers for comprehensive support.
Note: Run these commands sequentially. Each command generates a separate bundle for that cluster.
# ============ CUSTOMIZE THESE VALUES ============
export CONTROLLER_KUBECONFIG="$HOME/.kube/controller.yaml"  # Controller kubeconfig
export WORKER1_KUBECONFIG="$HOME/.kube/worker1.yaml"        # Worker 1 kubeconfig
export WORKER2_KUBECONFIG="$HOME/.kube/worker2.yaml"        # Worker 2 kubeconfig
# ============ CONTROLLER BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $CONTROLLER_KUBECONFIG \
--cluster-name "controller"
# ============ WORKER 1 BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $WORKER1_KUBECONFIG \
--cluster-name "worker-1"
# ============ WORKER 2 BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $WORKER2_KUBECONFIG \
--cluster-name "worker-2"
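The three invocations above can also be driven from a single loop over kubeconfig:name pairs. This is a sketch that prints each command rather than running it, so you can review the pairs first:

```shell
# Dry-run sketch: one loop instead of three copies of the same command.
# Remove the leading 'echo' to actually generate each bundle.
for entry in \
  "$HOME/.kube/controller.yaml:controller" \
  "$HOME/.kube/worker1.yaml:worker-1" \
  "$HOME/.kube/worker2.yaml:worker-2"
do
  kubeconfig="${entry%:*}"   # everything before the last ':'
  name="${entry##*:}"        # everything after the last ':'
  echo "curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- --kubeconfig $kubeconfig --cluster-name $name"
done
```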
S3 Upload
The script can automatically upload the generated bundle to an AWS S3 bucket for easy sharing with the support team.
Prerequisites for S3 Upload
- AWS CLI installed and configured
- IAM permissions for s3:PutObject and optionally s3:GetObject (for presigned URLs)
- S3 bucket created and accessible
Basic S3 Upload
Note: Generates the bundle and uploads it directly to the S3 bucket.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export S3_BUCKET="my-support-bucket" # S3 bucket name
export S3_REGION="us-west-2" # S3 bucket region
# ============ GENERATE AND UPLOAD ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--s3-bucket $S3_BUCKET \
--s3-region $S3_REGION
S3 Upload with Prefix
Note: Organize bundles by customer or environment using S3 prefixes.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export S3_BUCKET="support-bundles" # S3 bucket name
export S3_REGION="us-east-1" # S3 bucket region
export S3_PREFIX="customer-xyz/production/" # S3 key prefix for organization
# ============ GENERATE AND UPLOAD ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--s3-bucket $S3_BUCKET \
--s3-region $S3_REGION \
--s3-prefix $S3_PREFIX
S3 Upload with AWS Profile
Note: Use a specific AWS profile for authentication.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export S3_BUCKET="support-bundles" # S3 bucket name
export S3_REGION="us-east-1" # S3 bucket region
export AWS_PROFILE_NAME="support-team" # AWS profile name
# ============ GENERATE AND UPLOAD ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--s3-bucket $S3_BUCKET \
--s3-region $S3_REGION \
--aws-profile $AWS_PROFILE_NAME
Multi-Cluster S3 Upload
Note: Upload all cluster bundles to the same S3 bucket for easy sharing.
# ============ CUSTOMIZE THESE VALUES ============
export S3_BUCKET="avesha-support-bundles" # S3 bucket name
export S3_REGION="us-east-1" # S3 bucket region
export CONTROLLER_KUBECONFIG="$HOME/.kube/controller.yaml"  # Controller kubeconfig
export WORKER1_KUBECONFIG="$HOME/.kube/worker1.yaml"        # Worker 1 kubeconfig
export WORKER2_KUBECONFIG="$HOME/.kube/worker2.yaml"        # Worker 2 kubeconfig
# ============ CONTROLLER BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $CONTROLLER_KUBECONFIG \
--cluster-name "controller" \
--s3-bucket $S3_BUCKET \
--s3-region $S3_REGION
# ============ WORKER 1 BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $WORKER1_KUBECONFIG \
--cluster-name "worker-1" \
--s3-bucket $S3_BUCKET \
--s3-region $S3_REGION
# ============ WORKER 2 BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $WORKER2_KUBECONFIG \
--cluster-name "worker-2" \
--s3-bucket $S3_BUCKET \
--s3-region $S3_REGION
Presigned URL
After a successful upload, the script generates a presigned URL valid for 7 days. This URL can be shared with the support team without requiring them to have AWS credentials.
Bundle uploaded successfully to: s3://support-bundles/egs-troubleshoot-bundle-20260119-120000.tar.gz
Presigned URL (valid for 7 days):
https://support-bundles.s3.us-east-1.amazonaws.com/egs-troubleshoot-bundle-20260119-120000.tar.gz?X-Amz-...
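If the URL expires before support retrieves the bundle, a new one can be generated without re-uploading. The bucket and key below are placeholders; aws s3 presign signs locally using your configured credentials:

```shell
# Regenerate a presigned URL for a bundle that is already in S3.
# Bucket and key are placeholders; substitute your own values.
BUNDLE_URI="s3://support-bundles/egs-troubleshoot-bundle-20260119-120000.tar.gz"
EXPIRY=$((7 * 24 * 3600))   # 7 days in seconds (604800), matching the script's default validity
if command -v aws >/dev/null 2>&1; then
  aws s3 presign "$BUNDLE_URI" --expires-in "$EXPIRY" || echo "presign failed (check AWS credentials)"
else
  echo "aws CLI not installed"
fi
```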
Bundle Structure
The generated bundle has the following structure:
egs-troubleshoot-bundle-YYYYMMDD-HHMMSS/
├── SUMMARY.md                      # Collection summary report
├── cluster-info/
│   ├── version.txt                 # Kubernetes version
│   ├── api-resources.txt           # Available API resources
│   ├── component-statuses.txt      # Component health
│   └── cluster-info.txt            # Cluster information
├── nodes/
│   ├── nodes-list.txt              # Node list
│   ├── nodes-wide.txt              # Node details
│   ├── nodes-detailed.yaml         # Full node YAML
│   ├── node-labels.txt             # Node labels
│   ├── node-taints.txt             # Node taints
│   ├── node-capacity.txt           # Node capacity
│   └── gpu-nodes.txt               # GPU node information
├── crds/
│   ├── all-crds.yaml               # All CRD definitions
│   ├── controller-*.yaml           # Controller CRs
│   ├── worker-*.yaml               # Worker CRs
│   ├── networking-*.yaml           # Networking CRs
│   ├── inventory-*.yaml            # Inventory CRs
│   ├── aiops-*.yaml                # AI/Ops CRs
│   ├── gpr-*.yaml                  # GPR CRs
│   ├── nvidia-*.yaml               # NVIDIA CRs
│   └── monitoring-*.yaml           # Prometheus CRs
├── namespaces/
│   └── <namespace>/
│       ├── pods.yaml
│       ├── pods-wide.txt
│       ├── deployments.yaml
│       ├── services.yaml
│       ├── configmaps.yaml
│       ├── events.txt
│       └── logs/
│           └── <pod>/
│               ├── <container>.log
│               └── <container>-previous.log
├── helm/
│   ├── releases.txt                # Helm releases
│   └── <release>-values.yaml       # Release values
├── storage/
│   ├── storageclasses.yaml
│   ├── persistentvolumes.yaml
│   └── persistentvolumeclaims.yaml
├── network/
│   ├── services-all.yaml
│   ├── endpoints-all.yaml
│   └── networkpolicies-all.yaml
└── metrics/
    ├── node-metrics.txt
    └── pod-metrics.txt
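Once generated, the archive can be inspected locally before sending it to support; the bundle filename below is a placeholder for your timestamped archive:

```shell
# Inspect a generated bundle without unpacking everything first.
BUNDLE="egs-troubleshoot-bundle-20260119-120000.tar.gz"   # placeholder filename
if [ -f "$BUNDLE" ]; then
  tar -tzf "$BUNDLE" | head -20           # list the first 20 entries
  tar -xzf "$BUNDLE"                      # extract next to the archive
  cat "${BUNDLE%.tar.gz}/SUMMARY.md"      # start with the summary report
else
  echo "bundle not found: $BUNDLE"
fi
```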
Examples
Example 1: Basic Bundle Generation
Note: The simplest way to generate a bundle.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH
Example 2: Skip Logs for Faster Collection
Note: Use this when logs are not needed or for faster bundle generation.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--skip-logs
Example 3: Collect More Log Lines
Note: Increase log lines when detailed log analysis is needed.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export LOG_LINES="5000" # Number of log lines to collect
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--log-lines $LOG_LINES
Example 4: Include Secrets (Use with Caution)
Warning: Only use this when specifically requested by support. Secrets will be base64-encoded in the bundle.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--include-secrets
Example 5: Specific Kubernetes Context
Note: Use this when your kubeconfig has multiple contexts.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export KUBE_CONTEXT="production-cluster" # Kubernetes context to use
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--context $KUBE_CONTEXT
Example 6: Custom Output Directory
Note: Specify where to save the bundle.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export OUTPUT_DIR="/tmp/egs-bundle" # Custom output directory
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--output-dir $OUTPUT_DIR
Example 7: Add Additional Namespaces
Note: Include custom namespaces that aren't auto-discovered.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export EXTRA_NS_1="my-app-namespace" # Additional namespace 1
export EXTRA_NS_2="another-namespace" # Additional namespace 2
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--namespace $EXTRA_NS_1 \
--namespace $EXTRA_NS_2
Example 8: Verbose Output for Debugging
Note: Enable verbose output to see detailed progress.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
# ============ GENERATE THE BUNDLE ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--verbose
Example 9: Complete Bundle with S3 Upload
Note: Generate the bundle and upload it to S3 for easy sharing.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export CLUSTER_NAME="production-cluster" # Cluster name
export S3_BUCKET="avesha-support" # S3 bucket name
export S3_REGION="us-east-1" # S3 bucket region
# ============ GENERATE AND UPLOAD ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--cluster-name $CLUSTER_NAME \
--s3-bucket $S3_BUCKET \
--s3-region $S3_REGION
Troubleshooting the Script
Common Issues
Issue: "kubectl not found"
Solution: Install kubectl and ensure it's in your PATH.
# Check kubectl
which kubectl
kubectl version --client
Issue: "jq not found"
Solution: Install jq.
# Ubuntu/Debian
sudo apt-get install jq
# macOS
brew install jq
# RHEL/CentOS
sudo yum install jq
Issue: "Cannot connect to cluster"
Solution: Verify kubeconfig and context.
# Test connectivity
kubectl --kubeconfig ~/.kube/config get nodes
# List available contexts
kubectl config get-contexts
Issue: "S3 upload failed"
Solution: Verify AWS credentials and permissions.
# Check AWS configuration
aws sts get-caller-identity
# Test S3 access
aws s3 ls s3://your-bucket/
Issue: "Bundle is too large"
Solution: Skip logs or reduce log lines.
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
# ============ SKIP LOGS ENTIRELY ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--skip-logs
Or reduce log lines:
# ============ CUSTOMIZE THESE VALUES ============
export KUBECONFIG_PATH="$HOME/.kube/config"   # Path to your kubeconfig file
export LOG_LINES="100" # Reduced log lines
# ============ GENERATE WITH FEWER LOGS ============
curl -fsSL https://repo.egs.avesha.io/egs-troubleshoot.sh | bash -s -- \
--kubeconfig $KUBECONFIG_PATH \
--log-lines $LOG_LINES
Related Documentation
| Document | Description |
|---|---|
| Quick Install Guide | Single-command EGS installer |
| Configuration Reference | Config-based installer reference |
| EGS License Setup | License configuration guide |
| Controller Prerequisites | Controller cluster requirements |
| Worker Prerequisites | Worker cluster requirements |