The REKKI command-line
REKKI has a dedicated Platform Team in charge of building a robust foundation that can be leveraged by the Tech & Product teams. Part of this foundation is the Developer Experience. To that end, we are developing an in-house CLI that allows engineers to easily interact with our stack.
Introduction
This CLI is a companion used by REKKI engineers to improve their day-to-day workflow. It takes care of everything from installing the required system dependencies to providing an efficient and easy-to-use hot-reloading mechanism.
Table of Contents
- Install & Update
- General Commands
- Services/Jobs Commands
  - rekki build
  - rekki delete
  - rekki deploy
  - rekki diff
  - rekki env
  - rekki history
  - rekki logs
  - rekki new
  - rekki open
  - rekki pods
  - rekki status
  - rekki report
  - rekki restart
  - rekki rollback
  - rekki run
  - rekki tune
  - rekki aws:secrets:open
  - rekki aws:secrets:list
  - rekki aws:secrets:edit
  - rekki dd:apm
  - rekki dd:containers
  - rekki dd:errors
  - rekki dd:logs
  - rekki dd:metrics
  - rekki dd:pods
  - rekki dd:ps
  - rekki dd:traces
  - rekki job:run
  - rekki job:list
- Cluster Commands
- Buyer App Commands
- Advanced Commands
- Topics
Install & Update
To download the latest release and set up your system:
curl --proto '=https' --tlsv1.2 -sSf https://beta-cli.rekki.team/install.sh | bash
To keep your system up to date after the first install:
rekki init
General Commands
rekki clone
Clone a REKKI repository into a new directory.
Clones the remote repository. It is possible to provide only the repository name, in which case it will be cloned from the REKKI GitHub organization. Remote-tracking branches are created for the master branch only.
Usage
rekki clone [options] <repository> [directory]
Examples
# Clone the git@github.com:rekki/go.git repository into ./go
rekki clone go
# Clone the git@github.com:rekki/go.git repository into /tmp/go-tmp
rekki clone go /tmp/go-tmp
# Clone the https://github.com/golang/go.git repository into /tmp/golang-go
rekki clone https://github.com/golang/go.git /tmp/golang-go
rekki docs
Print the complete documentation for the CLI.
Generates a complete documentation for this command-line, including documentation for all the commands and all the topics. Output will be printed on stdout and will be Markdown by default. It is possible to output HTML by supplying the --html flag. It is also possible to serve the HTML documentation to be consumed from your web browser with --serve.
Usage
rekki docs [options]
Options
--html produce html instead of markdown
-p, --port uint16 port to listen on when serving doc (default 4242)
--serve serve HTML documentation
Examples
# Print markdown documentation on the standard output
rekki docs
# Print HTML documentation on the standard output
rekki docs --html
# Serve HTML documentation via an http server (http://localhost:4242)
rekki docs --serve
# Serve HTML documentation via an http server (http://localhost:9090)
rekki docs --serve -p 9090
rekki flare
Send a debug trace to the #rekki-cli-flares Slack channel.
It is used by the Platform team to investigate any bug you might be facing. A flare is sent automatically whenever an error or panic occurs.
Usage
rekki flare [-- args...]
Examples
# Send a flare
rekki flare
# Send a flare with some additional context
rekki flare -- I am trying to deploy my project but it fails. Can you help?
rekki help
Help about any command or topic.
Get more information about any command or topic in the CLI. Execute rekki help without arguments to see all the available commands and topics.
Usage
rekki help [command|topic]
Examples
# main help page
rekki help
# help for the run command
rekki help run
# help for the options topic
rekki help options
rekki init
Initialize the ~/.rekki directory.
The initialization is composed of the following steps:
- update the CLI
- install the Xcode command line tools (macOS only)
- load your SSH key
- check that the SSH key can be used to log in to a GitHub account, and make sure your AWS credentials are valid
- generate the kubernetes configuration
- generate the ephemeral ssh keys
- fetch the latest version of https://github.com/rekki/devops
- install the system dependencies with the proper versions
- install the go dependencies with the proper versions
- alias rekki to rk
This command is idempotent. You can, and should, run it as often as you need.
Usage
rekki init [options]
aliased as rekki update, rekki upgrade, rekki innit
Options
--hostname string hostname to download updates (default "cli.rekki.team")
--no-banner true to disable the REKKI banner
--no-self-update true to disable the CLI self update
--pat string when a github personal token is given HTTPS connection will be used
--tasks strings run only given tasks
--version string specific commit to install instead of the latest release
Examples
# Initialize or update your machine
rekki init
# Initialize or update your machine, but skip the CLI self update
rekki init --no-self-update
rekki shellenv
Print export statements.
These export statements must be evaluated in your shell environment for this CLI to work properly.
Usage
rekki shellenv
Options
--restore true to print the code to restore rekki-cli
Examples
# Add this to your ~/.zprofile to integrate with your shell:
eval "$("$HOME/.rekki/bin/rekki" shellenv)"
rekki version
Print the version.
Print the git sha1 of the commit used to build this rekki-cli, followed by a newline.
Usage
rekki version
Examples
# Print the rekki-cli version
rekki version
rekki whoami
Output the identities you are logged in with.
Check AWS and GitHub and print the accounts you are logged in with on stdout.
Usage
rekki whoami
Examples
# Print your identities
rekki whoami
Services/Jobs Commands
rekki build
Build and push the service or job using docker.
Build the docker image for the current REKKI service or job.
This command will attempt to build the docker image for the current REKKI service or job and then push it to the ECR repository.
In order for a build to happen your current directory must:
- contain a Dockerfile
- contain a Makefile in the usual format (see example-job/example-service)
Once the build is complete and the image is pushed, it can be deployed to the cluster with the ‘deploy’ command.
This command will only build images targeting linux/amd64.
Usage
rekki build [options]
Options
--no-push don't push the built image to the ECR repository
Examples
# Build and push hulk
cd go/cmd/hulk && rekki build
# Build hulk and deploy it to the live environment
cd go/cmd/hulk && rekki build && rekki deploy -nlive
# Build alfred but don't push it to the ECR repository
cd alfred && rekki build --no-push
rekki delete
Delete a service or job.
Delete a service or job by uninstalling the corresponding helm release.
Usage
rekki delete [options] [resource]
Options
--mine uninstall a personalised helm chart
Examples
# Delete hulk on feat
cd go/cmd/hulk && rekki delete
# Delete hulk on live
cd go/cmd/hulk && rekki delete -nlive
# Same but can be executed anywhere on your system
rekki delete -nlive hulk
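# Delete your personalised hulk release on feat (illustrative; assumes it was deployed with rekki deploy --mine)
cd go/cmd/hulk && rekki delete --mine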
rekki deploy
Deploy the service or job.
Deploy the current REKKI service or job into a cluster.
First, it sends a request to Sauron (our internal builder service) to build the docker image and publish it to our AWS ECR Docker repository.
In order for a build to happen your current directory must:
- contain a Dockerfile
- be part of a REKKI-owned git repository (or a worktree of one)
When Sauron is finished building and pushing to ECR, it will take care of creating a new Helm release in the cluster.
Usage
rekki deploy [options]
Options
--allow-dirty allow to deploy with a dirty git state
--build-only performs all steps up to sauron build and no further. Exclusive with 'local'
--dry-run performs all steps up to deployment and then prints the config as yaml, this is a functional Helm Values file
--force force the helm deploy
--force-live-branch force deployment to live from a non-master branch
--local skip remote building before deploying. Exclusive with 'build-only'
--mine deploy a personalised helm chart prefixing a user's github account name on everything other than external-secret
-o, --optimisation-level int the optimisation level to use when building the docker image (default 2)
--pat string when a github personal token is given HTTPS connection will be used
--sauron-url string the url of the sauron service (default "https://sauron.hetzner.rekki.com")
--stream stream docker build output to stderr (default true)
--suspend set to instruct Flux that it should suspend management of this application
--version string git commit to be deployed (default to using the current git head)
Examples
# Deploy the hulk service on the feat environment
cd go/cmd/hulk && rekki deploy
# Deploy the hulk service on the live environment
cd go/cmd/hulk && rekki deploy -nlive
# Deploy a non-go application
cd alfred && rekki deploy
# Deploy a custom version of the hulk service on the live environment
cd go/cmd/hulk && rekki deploy -nlive --version=a8287fcf155f8ed70808d260d9ac8d05491372cf
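# A few more illustrative examples built from the options documented above
# Print the resolved Helm values file without deploying
cd go/cmd/hulk && rekki deploy --dry-run
# Deploy a personalised helm chart prefixed with your GitHub account name
cd go/cmd/hulk && rekki deploy --mine
# Deploy even if the git working tree is dirty
cd go/cmd/hulk && rekki deploy --allow-dirty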
rekki diff
Display diff between local and deployed code.
Diff uses the current git head code, and compares it with the remote code deployed for the service or job.
Usage
rekki diff [options]
Examples
# Diff hulk on feat
cd go/cmd/hulk && rekki diff
# Diff hulk on live
cd go/cmd/hulk && rekki diff -nlive
rekki env
Print the environment variables for a service or job.
The environment variables are fetched from AWS Secrets Manager.
Usage
rekki env [options] [resource]
Examples
# Print the environment variables for the wasabi service on feat
cd go/cmd/wasabi && rekki env
# Print the environment variables for the hulk service on live
cd go/cmd/hulk && rekki env -nlive
# Same but can be executed anywhere on your system
rekki env -nlive hulk
rekki history
Show deployments history for a service or job.
List all the deployed Helm releases for a service or job.
Usage
rekki history [options] [resource]
Examples
# Print the history for wasabi in feat
cd go/cmd/wasabi && rekki history
# Print the history for wasabi in live
cd go/cmd/wasabi && rekki history -nlive
# Same but can be executed anywhere on your system
rekki history -nlive wasabi
rekki logs
Fetch logs for a service or job.
Contact the kubernetes cluster to fetch logs for the given service or job.
Usage
rekki logs [options] [resource]
Options
-f, --follow specify if the logs should be streamed
Examples
# Get logs for wasabi on feat
cd go/cmd/wasabi && rekki logs
# Get logs for wasabi on live
cd go/cmd/wasabi && rekki logs -nlive
# Same but can be executed anywhere on your system
rekki logs -nlive wasabi
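# As documented in the options above, -f/--follow streams the logs instead of printing a snapshot (illustrative)
cd go/cmd/wasabi && rekki logs -f -nlive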
rekki new
Creates a new application definition in the current directory.
Use this to create a new application definition; the created application will be a minimal wireframe for you to build on.
You can expect it to create the following file structure:
application/
    entrypoint-file (main.go, init.py, index.js)
    Dockerfile
    rekki.toml
    .dockerignore
Usage
rekki new [options]
Options
--name string the name of your new application
Examples
# Create a new application interactively
cd go/cmd && rekki new
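# The --name option provides the application name up front (my-service below is a placeholder; other prompts may still appear)
cd go/cmd && rekki new --name=my-service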
rekki open
Open the current service.
Open the current service in a web browser.
Usage
rekki open [options] [resource]
Examples
# Open wasabi in feat
cd go/cmd/wasabi && rekki open
# Open wasabi in live
cd go/cmd/wasabi && rekki open -nlive
# Same but can be executed anywhere on your system
rekki open -nlive wasabi
rekki pods
List all the pods for a service or job.
Query the cluster to fetch all the pods for the service or job.
Usage
rekki pods [options] [resource]
Examples
# List all the pods for the wasabi service
cd go/cmd/wasabi && rekki pods
# List all the pods for the wasabi service on live
cd go/cmd/wasabi && rekki pods -nlive
# Same but can be executed anywhere on your system
rekki pods -nlive wasabi
rekki status
Displays the status of a service or job.
Query k8s and GitHub to fetch the status of the service or job.
Usage
rekki status [options] [resource]
Options
-o, --output string Output format. One of: json|yaml. (default "json")
Examples
# Status of the wasabi service on feat
cd go/cmd/wasabi && rekki status
# Status of the wasabi service on live
cd go/cmd/wasabi && rekki status -nlive
# Same but can be executed anywhere on your system
rekki status -nlive wasabi
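# Per the -o option above, the output format defaults to JSON and can be switched to YAML (illustrative)
cd go/cmd/wasabi && rekki status -oyaml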
rekki report
Runs reports to expose information about various aspects of the system.
Contains some reporting tools that are useful for debugging and monitoring.
Available Reports: helm-release-age, bytes-memory-usage, appinfo
Usage
rekki report <report>
Examples
# Run a specific report
rekki report helm-release-age
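# The other reports listed above are invoked the same way; the comments are only a reading of the report names
# Report on memory usage in bytes
rekki report bytes-memory-usage
# Report application information
rekki report appinfo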
rekki restart
Restart the deployments for a service.
Sequentially restart all the pods for all the deployments of the service.
Usage
rekki restart [options] [resource]
Options
--mine restart your personalised helm chart
Examples
# Restart wasabi on feat
cd go/cmd/wasabi && rekki restart
# Restart wasabi on live
cd go/cmd/wasabi && rekki restart -nlive
# Same but can be executed anywhere on your system
rekki restart -nlive wasabi
rekki rollback
Rollback a service or job to a specific release.
Roll back a service or job deployment to a specific helm release.
Usage
rekki rollback [options] [resource]
Examples
# Rollback hulk on feat
cd go/cmd/hulk && rekki rollback
# Rollback hulk on live
cd go/cmd/hulk && rekki rollback -nlive
# Same but can be executed anywhere on your system
rekki rollback -nlive hulk
rekki run
Run the service or job for development.
This command is used to run a service or job locally. This takes care of:
- fetching the environment variables
- creating the tunnel to the given resources (by default: DB and Redis)
- starting the service
This command will try to infer the correct command and arguments when starting the service. If your needs differ from what has been inferred, you can easily override it. See the examples.
If not specified in the -t flags, random local ports will be chosen when additional resources are added to be tunnelled. This behavior diverges from what you would observe in rekki tunnel, as the goal here is to allow you to run several instances of rekki run in parallel without having to worry about port collision. See the examples on how you can force a specific port to be used.
Note that tunnels to services will automatically populate the _SERVICE_HOST and _SERVICE_PORT environment variables.
Usage
rekki run [options] [-- cmd args...]
Options
--app string specify which app configuration to run
--as string specify which resource the command should be run as
--clear clear screen before executing command
-e, --env strings set environment variables
--no-default-tunnels disable the default tunnels to the database and redis
-t, --tunnel strings set additional resources for which tunnel must be created
--watch watch for changes and autoreload service or job
Examples
# Start the hulk service with the feat environment
cd go/cmd/hulk && rekki run
# or using the change directory flag
rekki run -C go/cmd/hulk
# Start the wasabi service with the live environment
cd go/cmd/wasabi && rekki run -nlive
# Define custom environment variables
cd go/cmd/wasabi && rekki run -e LOG_LEVEL=info
# Run insights API service
rekki run -C cmd/insights
# Run insights's client UI app
rekki run -C cmd/insights --app=ui
# Define additional resources to be tunnelled
cd go/cmd/wasabi && rekki run -t svc/hulk
# Define additional resources to be tunnelled on a specific local port
cd go/cmd/wasabi && rekki run -t 5423:db/live -t 6379:redis/live
# The name falls back to the namespace if not specified for database and redis (default namespace is feat)
cd go/cmd/wasabi && rekki run -t 5423:db -t 6379:redis
# Start a service with a custom command and arguments
cd go/cmd/wasabi && rekki run -- go run . --admin=true
# Start a service, autoreload and clear screen every time it restarts
cd go/cmd/wasabi && rekki run --watch --clear
# Start a feat service, and tunnel to both a local service on port 9090 and a live service
cd go/cmd/hulk && rekki run -t marketplace-everything@local:9090 -t blackrock-search-grpc@live
# Run a go script with the shared live secret
cd go && rekki run --as=shared@live ./scripts/generate-model.go
rekki tune
Updates application configuration files to fit them to our platform.
We have a couple of conventions that we follow when building applications. This command will update your application to fit these conventions.
For example, your application can and should specify a memory limit. If it doesn’t, this command will be able to suggest a value for you based on the memory usage of your application in the kubernetes cluster and update rekki.toml with this value.
Usage
rekki tune [options]
Options
--only strings Only run a subset of the tune commands: limits (default [limits])
--toml string Path to a rekki.toml file
Examples
# Tune the application in the current directory (hulk)
cd go/cmd/hulk; rekki tune; git push; rekki deploy
# Run a subset of the tune commands
cd go/cmd/hulk; rekki tune --only=limits
rekki aws:secrets:open
Open AWS Secrets Manager in the default web browser.
Open the interface to manage services and jobs secrets in the default web browser.
Usage
rekki aws:secrets:open [options] [resource]
Examples
# Open the AWS Secrets for wasabi in feat
cd go/cmd/wasabi && rekki aws:secrets:open
# Open the AWS Secrets for wasabi in live
cd go/cmd/wasabi && rekki aws:secrets:open -nlive
# Same but can be executed anywhere on your system
rekki aws:secrets:open -nlive wasabi
rekki aws:secrets:list
List secrets stored in AWS Secrets Manager.
Lists the secrets that are stored in AWS Secrets Manager.
Usage
rekki aws:secrets:list [options]
Options
-o, --output string set output type, one of: name, json, table (default "name")
Examples
# List the AWS Secrets for feat
rekki aws:secrets:list
# List the AWS Secrets for live
rekki aws:secrets:list -nlive
# List the AWS Secrets for everything in kubernetes
rekki aws:secrets:list -nall
# Format the output as JSON:
rekki aws:secrets:list -ojson
rekki aws:secrets:edit
Edit a secret stored in AWS Secrets Manager.
Edit a secret that is stored in AWS Secrets Manager. Secrets can be named directly or inferred from the namespace flag and the current working directory.
A shortcut exists for editing the shared secret for a namespace; see the examples.
Usage
rekki aws:secrets:edit [options]
Examples
# Edit a secret based on its name
rekki aws:secrets:edit $secret-name
# Edit a secret based on its arn
rekki aws:secrets:edit $arn
# Edit the secret for Hulk in feat
cd go/cmd/hulk; rekki aws:secrets:edit
# Edit the secret for Hulk in live
cd go/cmd/hulk; rekki aws:secrets:edit -nlive
# Edit the shared secret for feat
rekki aws:secrets:edit shared
# Or for live
rekki aws:secrets:edit shared -nlive
rekki dd:apm
Open Datadog APM in the default web browser.
Open the Datadog APM interface for the service or job.
Usage
rekki dd:apm [options] [resource]
Examples
# Open the Datadog APM for wasabi in feat
cd go/cmd/wasabi && rekki dd:apm
# Open the Datadog APM for wasabi in live
cd go/cmd/wasabi && rekki dd:apm -nlive
# Same but can be executed anywhere on your system
rekki dd:apm -nlive wasabi
rekki dd:containers
Open Datadog Containers in the default web browser.
Open the Datadog Containers interface for the service or job.
Usage
rekki dd:containers [options] [resource]
Examples
# Open the Datadog Containers for wasabi in feat
cd go/cmd/wasabi && rekki dd:containers
# Open the Datadog Containers for wasabi in live
cd go/cmd/wasabi && rekki dd:containers -nlive
# Same but can be executed anywhere on your system
rekki dd:containers -nlive wasabi
rekki dd:errors
Open Datadog Error Tracking in the default web browser.
Open the Datadog Error Tracking interface for the service or job.
Usage
rekki dd:errors [options] [resource]
Examples
# Open the Datadog Error Tracking for wasabi in feat
cd go/cmd/wasabi && rekki dd:errors
# Open the Datadog Error Tracking for wasabi in live
cd go/cmd/wasabi && rekki dd:errors -nlive
# Same but can be executed anywhere on your system
rekki dd:errors -nlive wasabi
rekki dd:logs
Open Datadog Logs in the default web browser.
Open the Datadog Logs interface for the service or job.
Usage
rekki dd:logs [options] [resource]
Examples
# Open the Datadog Logs for wasabi in feat
cd go/cmd/wasabi && rekki dd:logs
# Open the Datadog Logs for wasabi in live
cd go/cmd/wasabi && rekki dd:logs -nlive
# Same but can be executed anywhere on your system
rekki dd:logs -nlive wasabi
rekki dd:metrics
Open Datadog Metrics in the default web browser.
Open the Datadog Metrics interface for the service or job.
Usage
rekki dd:metrics [options] [resource]
Examples
# Open the Datadog Metrics for wasabi in feat
cd go/cmd/wasabi && rekki dd:metrics
# Open the Datadog Metrics for wasabi in live
cd go/cmd/wasabi && rekki dd:metrics -nlive
# Same but can be executed anywhere on your system
rekki dd:metrics -nlive wasabi
rekki dd:pods
Open Datadog Pods in the default web browser.
Open the Datadog Pods interface for the service or job.
Usage
rekki dd:pods [options] [resource]
Examples
# Open the Datadog Pods for wasabi in feat
cd go/cmd/wasabi && rekki dd:pods
# Open the Datadog Pods for wasabi in live
cd go/cmd/wasabi && rekki dd:pods -nlive
# Same but can be executed anywhere on your system
rekki dd:pods -nlive wasabi
rekki dd:ps
Open Datadog Processes in the default web browser.
Open the Datadog Processes interface for the service or job.
Usage
rekki dd:ps [options] [resource]
Examples
# Open the Datadog Processes for wasabi in feat
cd go/cmd/wasabi && rekki dd:ps
# Open the Datadog Processes for wasabi in live
cd go/cmd/wasabi && rekki dd:ps -nlive
# Same but can be executed anywhere on your system
rekki dd:ps -nlive wasabi
rekki dd:traces
Open Datadog Traces in the default web browser.
Open the Datadog Traces interface for the service or job.
Usage
rekki dd:traces [options] [resource]
Examples
# Open the Datadog Traces for wasabi in feat
cd go/cmd/wasabi && rekki dd:traces
# Open the Datadog Traces for wasabi in live
cd go/cmd/wasabi && rekki dd:traces -nlive
# Same but can be executed anywhere on your system
rekki dd:traces -nlive wasabi
rekki job:run
Create a batch job from a CronJob.
Create a single shot batch job from a deployed cronjob.
Usage
rekki job:run [options] [resource]
Options
--kubeconfig string set a custom kubeconfig to use for this command (default "/home/github/.rekki/.kube/config")
Examples
# Create a job from timetoorder in feat
cd go/cmd/timetoorder && rekki job:run
# Create a job from timetoorder in live
cd go/cmd/timetoorder && rekki job:run -nlive
# Same but can be executed anywhere on your system
rekki job:run -nlive timetoorder
rekki job:list
List jobs by their name and kind.
Produces a JSON document enumerating all of the jobs in the namespace by kind (batch/cron).
Usage
rekki job:list [options] [resource]
Options
--kubeconfig string set a custom kubeconfig to use for this command (default "/home/github/.rekki/.kube/config")
Examples
# List jobs in this namespace
rekki job:list
# List jobs in another namespace
rekki job:list -nlive
Cluster Commands
rekki repl
Start a REPL to a remote resource.
Start a read–eval–print loop with a remote resource (only postgres is supported at the moment).
Usage
rekki repl [options] <resource>
Examples
# repl the feat database
rekki repl db
# repl the live database
rekki repl db/live
rekki ssh
Open an interactive SSH session to a remote resource.
Using either a tunnel through our bastion instance or kubectl exec, an interactive session is started on the remote resource.
Usage
rekki ssh [options] <resource>
Examples
# ssh to the bastion instance
rekki ssh bastion
# ssh to a hulk service pod on feat
rekki ssh svc/hulk
# ssh to a hulk service pod on live
rekki ssh svc/hulk -nlive
rekki tunnel
Create SSH tunnels to remote resources.
Create a tunnel between your local machine and a remote resource. This can be used to access database and redis instances, but also kubernetes pods, services, replica sets and deployments.
If no ports are specified, then the default protocol ports will be used:
- 5432 for postgres
- 6379 for redis
- 8080 for HTTP tunnels to Kubernetes resources (as port 80 usually requires root)
Usage
rekki tunnel [options] <resources...>
Examples
# Create a tunnel to the live database
rekki tunnel -nlive db
# Create a tunnel to the live redis instance
rekki tunnel -nlive redis
# Create tunnels to the live database and hulk service
rekki tunnel -nlive db svc/hulk
# Create a tunnel to the feat hulk service
rekki tunnel svc/hulk
# Create a tunnel to the live hulk service on local port 9090
rekki tunnel -nlive 9090:svc/hulk
# Create a tunnel to a specific database
rekki tunnel db/feat-xxxxx
Buyer App Commands
rekki codepush:release
Release the tip of the buyer-app master branch as a codepush.
This command runs the release-codepush.yml workflow in the buyer-app repo.
Usage
rekki codepush:release
Examples
rekki codepush:release
rekki codepush:promote
Promotes the latest codepush release to live.
This command triggers the promote-codepush.yml workflow in the buyer-app repo.
Usage
rekki codepush:promote
Examples
rekki codepush:promote
rekki binary:release
Releases a new binary version to beta/testflight.
This command triggers the binary-release.yml workflow in the buyer-app repo.
Usage
rekki binary:release [options]
Examples
# Create a binary deployment and associated git tags/changelog
# You'll be prompted for a new git tag and changelog
rekki binary:release
Advanced Commands
rekki bsdiff
Generate a patch between two files.
Generate a patch using a Go implementation of https://www.daemonology.net/bsdiff/
Usage
rekki bsdiff <oldfile> <newfile> <patchfile>
Examples
# Generate patch between /tmp/a and /tmp/b into /tmp/patch
rekki bsdiff /tmp/a /tmp/b /tmp/patch
rekki bspatch
Apply a patch between two files.
Apply a patch using a Go implementation of https://www.daemonology.net/bsdiff/
Usage
rekki bspatch <oldfile> <newfile> <patchfile>
Examples
# Apply patch /tmp/patch to /tmp/a and obtain /tmp/b
rekki bspatch /tmp/a /tmp/b /tmp/patch
rekki hash
Hash the given files.
Produce a SHA-256 hash with the same implementation as the one used to generate go.sum hashes. It supports relative and absolute paths. It is deterministic regardless of input order and duplicate inputs. If a directory is given, it will be recursively expanded to all the files it contains. Symbolic links are followed.
Usage
rekki hash <files...>
Examples
# Generate a hash of your ~/.zshrc
rekki hash ~/.zshrc
# Generate a single hash for both the nvim and tmux conf files
rekki hash ~/.config/nvim/init.lua ~/.tmux.conf
# Same, the order doesn't matter
rekki hash ~/.tmux.conf ~/.config/nvim/init.lua
# Same, paths are cleaned
rekki hash ~/.tmux.conf ~/.config/nvim/../nvim/init.lua
rekki infer
Infer the current service or job.
Run the inference process used by the service or job commands.
Usage
rekki infer [options] [resource]
Examples
# Infer for wasabi
cd go/cmd/wasabi && rekki infer
# Infer for hulk from anywhere on the system
rekki infer -nlive hulk
Topics
Application Configuration
Some values have defaults or are required depending on the Kind, the Environment, and other things that are not immediately obvious.
In order to confirm that your rekki.toml file is working, you can run rekki deploy --dry-run, which will output your config with all defaults and updates in place.
Name | Type | For Kinds | Usage |
---|---|---|---|
dockerContextPath | string | * | The relative path from rekki.toml to the location for the docker build context |
kind | string | * | The kind of the application (one of: service, job) |
name | string | * | The name of the application |
owner | string | * | The team that owns this application (one of: platform, cse, data, suppliers, buyers, marketing) |
schedule | string | job | The schedule for the job, must be a cron schedule |
backoffLimit | uint | * | Configures the backoffLimit for jobs |
cpu | string | * | The CPU request in millicores |
cpuLimit | string | * | The CPU limit in millicores |
disableWait | bool | * | Disables waiting for the service or job to be ready after deployment, alerts may not be coherent if this is set to true |
extraLabels | map[string]interface{} | * | Extra labels that are attached to k8s objects |
memory | string | * | The Memory request in Mebibytes |
memoryLimit | string | * | The Memory limit in Mebibytes |
nodeSelector | map[string]string | * | Direct mapping to kubernetes nodeSelector |
run | appconfig.RunConfig | * | Configures the rekki run command |
secretName | string | * | |
serviceAccount | appconfig.ServiceAccount | * | |
tcpPort | uint | * | For TCP services, exposes the specified port to kubernetes with protocol TCP |
tolerations | []map[string]string | * | Direct mapping to kubernetes tolerations |
notifyStart | bool | job | Indicates to the job that it should notify slack that it started |
notifySuccess | bool | job | Indicates to the job that it should notify slack that it succeeded |
authAdmin | bool | service | Adds authz (oauth2-proxy) to the /admin request path |
authApi | bool | service | Adds authz (berserk) to the /api request path |
authRoot | bool | service | Adds authz (oauth2-proxy) to the / request path |
hostNetwork | bool | service | Enables host networking for this application |
limitBurstMultiplier | uint | service | Sets the nginx burst multiplier for rate limiting |
limitRps | uint | service | If set >0 then nginx rate limiting is enabled on all ingresses for this service |
livenessProbeInitDelay | uint | service | |
livenessProbePath | string | service | For liveness and readiness see kubernetes docs |
livenessProbePeriodScan | uint | service | |
livenessProbeTimeout | uint | service | |
maxSurge | uint | service | Sets maxSurge on the deployment |
maxUnavailable | uint | service | Sets maxUnavailable on the deployment |
port | uint | service | Port for http traffic |
public | bool | service | Exposes the service’s port to the internet |
publicHostname | string | service | |
publicHostnames | []string | service | |
readinessProbeInitDelay | uint | service | |
readinessProbePath | string | service | |
readinessProbePeriodScan | uint | service | |
readinessProbeTimeout | uint | service | |
replicaCount | uint | service | Replicas desired on spot instances |
replicaCountOnDemand | uint | service | For services that must never be down, ever. Runs on On Demand instances. Has no effect on feat |
restart | string | service | Include a job with the application that does restarts to the specified cron schedule |
Automatically Generated from: rekki-cli/pkg/appconfig/toml.go
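As an illustration of how these fields fit together, here is a minimal, hypothetical rekki.toml for an HTTP service. The values are placeholders and the exact set of required fields may differ; rekki deploy --dry-run (see above) remains the source of truth for your resolved configuration.
# Hypothetical rekki.toml sketch; all values are placeholders
name = "my-service"
kind = "service"
owner = "platform"
dockerContextPath = "."

# Requests and limits (CPU in millicores, memory in Mebibytes)
cpu = "250"
cpuLimit = "500"
memory = "256"
memoryLimit = "512"

# HTTP traffic
port = 8080
replicaCount = 2
public = false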
Feedback & Issues
You can get in touch with the Platform Team in the #platform-public channel for any feedback, issue or suggestion that you might have.
Options & Environment
The following options can be passed to any command:
--auto-approve accept defaults for all prompts
-C, --chdir string directory to change to prior to command invocation
-c, --cluster string the cluster you want to interact with (default "eu-west-2")
-h, --help print the help
-i, --identity-file string an SSH private key used for public key authentication
-n, --namespace string the kubernetes namespace for this command (default "feat")
--no-color force color output to be disabled
Additionally, the following environment variables are supported:
Key | Value |
---|---|
DEBUG | set to "true" to enable debug information on stderr |
NO_COLOR | set to any value to disable color output |
REKKI_CLI_NO_COLOR | set to any value to disable color output |
REKKI_CLI_NO_REPORTING | set to any value to disable slack error reporting |
REKKI_CLI_NO_VERSION_CHECK | set to any value to disable the remote version check |
REKKI_CLI_STACKTRACE | set to any value to enable stacktraces |
Options take precedence when they conflict with environment variables.
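For instance, the environment variables can be set inline for a single invocation. The commands below are illustrative; any rekki subcommand works the same way:
# Print debug information on stderr while deploying
cd go/cmd/hulk && DEBUG=true rekki deploy
# Disable color output while streaming the live logs for hulk
NO_COLOR=1 rekki logs -f -nlive hulk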
Resources
A resource represents the concept of a remote resource available in the cluster or infrastructure. It is used in various commands across the CLI.
The full syntax for a resource is:
[localPort:][kind/]name[@namespace][:remotePort]
The syntax is actually quite permissive; when values are omitted, the defaults are as follows:
- localPort: random for rekki run tunnels, deterministic for rekki tunnel, 0 otherwise
- kind: service by default
- namespace: the command namespace (from -n or --namespace) is used by default
- remotePort: asking the Kubernetes API for rekki run and rekki tunnel tunnels, 0 otherwise
The different kinds are:
- bastion: for our bastion instance
- database: for our AWS RDS instances (alias: db, pg, postgre, postgres, postgresql, rds)
- deployment: for Kubernetes deployments (alias: deploy)
- job: for Kubernetes jobs
- pod: for Kubernetes pods
- redis: for our AWS Elasticache instances
- replicaset: for Kubernetes replica sets (alias: rs)
- service: for Kubernetes services (alias: svc)
Special case for bastion, database and redis: the 3 of them can be used as names (even though they are technically kinds). That means you can do the following: rk tunnel db. The proper name will be inferred based on the namespace. This is especially useful for the feat database or the redis instances, where the names are hard to remember or generated dynamically.
Resources are used in different locations. Here is a non-exhaustive list of valid examples:
rekki delete job/notemailer
rekki env svc/hulk@live
rekki history hulk
rekki logs svc/hulk
rekki run -t marketplace-everything@local:9090 -t blackrock-search-grpc@live
rekki run -t svc/marketplace-everything@local:9090 -t svc/blackrock-search-grpc@live
rekki ssh bastion
rekki tunnel db/live
rekki tunnel db@live
rekki tunnel db
rekki tunnel hulk
rekki tunnel svc/hulk@live
rekki tunnel svc/hulk
Services & Jobs
Services are long running processes that handle HTTP requests.
Jobs are short running processes that perform computing tasks.