glitchier-soc/chart/values.yaml
Sheogorath cddcafec31
Helm: Major refactoring regarding Deployments, Environment variables and more (#20733)
* fix(chart): Remove non-functional Horizontal Pod Autoscaler

The Horizontal Pod Autoscaler (HPA) refers to a Deployment that
doesn't exist and therefore can not work. As a result it's
pointless to carry it around in this chart and give the wrong
impression it could work. This patch removes it from the helm
chart and drops all references to it.

* refactor(chart): Refactor sidekiq deployments to scale

This patch reworks how the sidekiq deployment is set up, by
splitting it into one or more sidekiq deployments. This should
allow scaling the number of sidekiq jobs as expected, while
staying friendly to single-user instances as well as larger
ones.

Further, it introduces per-deployment overwrites for the most
relevant pod fields, like resources, affinities, processed
queues, the number of jobs and the sidekiq security contexts.

The exact implementation was inspired by an upstream issue:

https://github.com/mastodon/mastodon/issues/20453
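
As an illustration, a values snippet along these lines splits
the load into dedicated workers; the worker names and numbers
here are made up, but the fields mirror the commented examples
further down in this values.yaml:

```
mastodon:
  sidekiq:
    workers:
      # keep the scheduler queue in exactly one worker with a single replica
      - name: default-and-scheduler
        concurrency: 25
        replicas: 1
        queues:
          - default
          - mailers
          - scheduler
      - name: push-pull
        concurrency: 50
        replicas: 2
        queues:
          - push
          - pull
```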

* fix(chart): Remove linode default values from values

This patch drops the Linode defaults from the values.yaml,
since they are not obvious and can cause unexpected connections
as well as leak secrets to Linode when another S3 storage
backend is used and these options are not explicitly
overridden.

In that case Mastodon will still try to authenticate against
the Linode endpoints and thereby disclose the authentication
secrets.
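
With the defaults gone, operators pointing at a non-Linode
backend set these options explicitly; the bucket, endpoint and
hostname below are only placeholders:

```
mastodon:
  s3:
    enabled: true
    bucket: mastodon-media
    endpoint: https://s3.example.com
    hostname: s3.example.com
    region: us-east-1
```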

* refactor(chart): Rework templates to reduce value reference duplication

Since most of the values are simply set up like this:

```
{{- if .Values.someVariable }}
SOME_VARIABLE: {{ .Values.someVariable }}
{{- end }}
```

there is a lot of duplication in the references needed to fill
in the variables. There is an equivalent notation that reduces
the usage of the variable name to just one occurrence:

```
{{- with .Values.someVariable }}
SOME_VARIABLE: {{ . }}
{{- end }}
```

What seems like a pointless replacement will reduce potential
mistakes down the line, where otherwise only one of the two
references might get adjusted.
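
Applied to one of this chart's own values, the same pattern
looks like this (`SMTP_REPLY_TO` is just an illustrative
mapping, not necessarily the exact template line in the chart):

```
{{- with .Values.mastodon.smtp.reply_to }}
SMTP_REPLY_TO: {{ . }}
{{- end }}
```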

* fix(chart): Switch to new OMNIAUTH_ONLY variable

This patch adjusts the helm chart to use the new `OMNIAUTH_ONLY`
variable, which replaced the former
`OAUTH_REDIRECT_AT_SIGN_IN` variable in the following commit:

https://github.com/mastodon/mastodon/pull/17288
3c8857917e
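
A minimal sketch of the corresponding configmap entry, driven
by the `externalAuth.oauth_global.omniauth_only` value defined
further down in this file (the exact wording in the chart's
template may differ):

```
{{- if .Values.externalAuth.oauth_global.omniauth_only }}
OMNIAUTH_ONLY: "true"
{{- end }}
```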

* fix(chart): Repair connection test to existing service

Currently the connection test can't work, since it's connecting
to a non-existent service. This patch fixes the service name to
make the job connect to the Mastodon web service and verify the
connection.
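
A sketch of the repaired test, assuming the standard `helm
create` test scaffold and the chart's `mastodon.fullname`
helper (the concrete manifest in the chart may differ in
details):

```
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "mastodon.fullname" . }}-test-connection"
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      # target the web Service that actually exists in the chart
      args: ['{{ include "mastodon.fullname" . }}-web:{{ .Values.mastodon.web.port }}']
```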

* docs(chart): Adjust values.yaml to support helm-docs

This patch updates most values to prepare the introduction of
helm-docs. This should help make the chart more user friendly
by explaining the variables and providing a standardised README
file, like many other helm charts do.

References:
https://github.com/norwoodj/helm-docs
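
helm-docs renders comments that start with `# --` as the
description of the value directly below them and skips values
annotated with `# @ignored`; both conventions are used in the
values.yaml below, for example:

```
mastodon:
  createAdmin:
    # @ignored
    enabled: false
  web:
    # -- Number of Web Pods running
    replicas: 1
```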

* refactor(chart): Allow individual overwrites for streaming and web deployment

This patch reworks how the streaming and web deployments work
by adding various fields to overwrite values such as
affinities, resources, replica count, and security contexts.

BREAKING CHANGE: This commit removes `.Values.replicaCount` in
favour of `.Values.mastodon.web.replicas` and
`.Values.mastodon.streaming.replicas`.
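
For existing installations this means moving the replica
setting in your own values file, roughly like this:

```
# before
replicaCount: 2

# after
mastodon:
  web:
    replicas: 2
  streaming:
    replicas: 2
```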

* feat(chart): Add option for authorized fetch

Currently the helm chart doesn't support authorized fetch, aka
"Secure Mode". This patch fixes that by adding the needed
config option to the values file and the configmap.
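
Enabling it is a one-line change in the values, which the
chart's configmap is expected to pass to Mastodon as the
`AUTHORIZED_FETCH` environment variable:

```
mastodon:
  authorizedFetch: true
```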

* docs(chart): Improve helm-docs compatibility

This patch adjusts a few more comments in the values.yaml to be
picked up by helm-docs. This way, future adoption is properly
prepared.

* fix(chart): Add automatic detection of scheduler sidekiq queue

This patch adds an automatic switch to the `Recreate` strategy
for the sidekiq Pod in order to prevent accidental concurrency
for the scheduler queue.
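
A sketch of how such a switch can look in the sidekiq
Deployment template, assuming each worker entry is rendered
into its own Deployment (the condition in the actual chart may
be phrased differently):

```
{{- range .Values.mastodon.sidekiq.workers }}
# ... one Deployment per worker ...
spec:
  {{- if has "scheduler" .queues }}
  # never run two generations of the scheduler at the same time
  strategy:
    type: Recreate
  {{- end }}
{{- end }}
```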

* fix(chart): Repair broken DB_POOL variable
2022-11-24 21:30:29 +01:00

image:
  repository: tootsuite/mastodon
  # https://hub.docker.com/r/tootsuite/mastodon/tags
  #
  # alternatively, use `latest` for the latest release or `edge` for the image
  # built from the most recent commit
  #
  # tag: latest
  tag: ""
  # use `Always` when using `latest` tag
  pullPolicy: IfNotPresent
mastodon:
  # -- create an initial administrator user; the password is autogenerated and will
  # have to be reset
  createAdmin:
    # @ignored
    enabled: false
    # @ignored
    username: not_gargron
    # @ignored
    email: not@example.com
  cron:
    # -- run `tootctl media remove` every week
    removeMedia:
      # @ignored
      enabled: true
      # @ignored
      schedule: "0 0 * * 0"
  # -- available locales: https://github.com/mastodon/mastodon/blob/main/config/application.rb#L71
  locale: en
  local_domain: mastodon.local
  # -- Use of WEB_DOMAIN requires careful consideration: https://docs.joinmastodon.org/admin/config/#federation
  # You must redirect the path LOCAL_DOMAIN/.well-known/ to WEB_DOMAIN/.well-known/ as described
  # Example: mastodon.example.com
  web_domain: null
  # -- If set to true, the frontpage of your Mastodon server will always redirect to the first profile in the database and registrations will be disabled.
  singleUserMode: false
  # -- Enables "Secure Mode" for more details see: https://docs.joinmastodon.org/admin/config/#authorized_fetch
  authorizedFetch: false
  persistence:
    assets:
      # -- ReadWriteOnce is more widely supported than ReadWriteMany, but limits
      # scalability, since it requires the Rails and Sidekiq pods to run on the
      # same node.
      accessMode: ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    system:
      accessMode: ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
  s3:
    enabled: false
    access_key: ""
    access_secret: ""
    # -- you can also specify the name of an existing Secret
    # with keys AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
    existingSecret: ""
    bucket: ""
    endpoint: ""
    hostname: ""
    region: ""
    # -- If you have a caching proxy, enter its base URL here.
    alias_host: ""
  # these must be set manually; autogenerated keys are rotated on each upgrade
  secrets:
    secret_key_base: ""
    otp_secret: ""
    vapid:
      private_key: ""
      public_key: ""
    # -- you can also specify the name of an existing Secret
    # with keys SECRET_KEY_BASE and OTP_SECRET and
    # VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY
    existingSecret: ""
  sidekiq:
    # -- Pod security context for all Sidekiq Pods, overwrites .Values.podSecurityContext
    podSecurityContext: {}
    # -- (Sidekiq Container) Security Context for all Pods, overwrites .Values.securityContext
    securityContext: {}
    # -- Resources for all Sidekiq Deployments unless overwritten
    resources: {}
    # -- Affinity for all Sidekiq Deployments unless overwritten, overwrites .Values.affinity
    affinity: {}
    # limits:
    #   cpu: "1"
    #   memory: 768Mi
    # requests:
    #   cpu: 250m
    #   memory: 512Mi
    workers:
      - name: all-queues
        # -- Number of threads / parallel sidekiq jobs that are executed per Pod
        concurrency: 25
        # -- Number of Pod replicas deployed by the Deployment
        replicas: 1
        # -- Resources for this specific deployment to allow optimised scaling, overwrites .Values.mastodon.sidekiq.resources
        resources: {}
        # -- Affinity for this specific deployment, overwrites .Values.affinity and .Values.mastodon.sidekiq.affinity
        affinity: {}
        # -- Sidekiq queues for Mastodon that are handled by this worker. See https://docs.joinmastodon.org/admin/scaling/#concurrency
        # See https://github.com/mperham/sidekiq/wiki/Advanced-Options#queues for how to weight queues as argument
        queues:
          - default
          - push
          - mailers
          - pull
          - scheduler # Make sure the scheduler queue only exists once and with a worker that has 1 replica.
      #- name: push-pull
      #  concurrency: 50
      #  resources: {}
      #  replicas: 2
      #  queues:
      #    - push
      #    - pull
      #- name: mailers
      #  concurrency: 25
      #  replicas: 2
      #  queues:
      #    - mailers
      #- name: default
      #  concurrency: 25
      #  replicas: 2
      #  queues:
      #    - default
  smtp:
    auth_method: plain
    ca_file: /etc/ssl/certs/ca-certificates.crt
    delivery_method: smtp
    domain:
    enable_starttls: 'auto'
    from_address: notifications@example.com
    openssl_verify_mode: peer
    port: 587
    reply_to:
    server: smtp.mailgun.org
    tls: false
    login:
    password:
    # -- you can also specify the name of an existing Secret
    # with the keys login and password
    existingSecret:
  streaming:
    port: 4000
    # -- this should be set manually since os.cpus() returns the number of CPUs on
    # the node running the pod, which is unrelated to the resources allocated to
    # the pod by k8s
    workers: 1
    # -- The base url for streaming can be set if the streaming API is deployed to
    # a different domain/subdomain.
    base_url: null
    # -- Number of Streaming Pods running
    replicas: 1
    # -- Affinity for Streaming Pods, overwrites .Values.affinity
    affinity: {}
    # -- Pod Security Context for Streaming Pods, overwrites .Values.podSecurityContext
    podSecurityContext: {}
    # -- (Streaming Container) Security Context for Streaming Pods, overwrites .Values.securityContext
    securityContext: {}
    # -- (Streaming Container) Resources for Streaming Pods, overwrites .Values.resources
    resources: {}
    # limits:
    #   cpu: "500m"
    #   memory: 512Mi
    # requests:
    #   cpu: 250m
    #   memory: 128Mi
  web:
    port: 3000
    # -- Number of Web Pods running
    replicas: 1
    # -- Affinity for Web Pods, overwrites .Values.affinity
    affinity: {}
    # -- Pod Security Context for Web Pods, overwrites .Values.podSecurityContext
    podSecurityContext: {}
    # -- (Web Container) Security Context for Web Pods, overwrites .Values.securityContext
    securityContext: {}
    # -- (Web Container) Resources for Web Pods, overwrites .Values.resources
    resources: {}
    # limits:
    #   cpu: "1"
    #   memory: 1280Mi
    # requests:
    #   cpu: 250m
    #   memory: 768Mi
  metrics:
    statsd:
      # -- Enable statsd publishing via STATSD_ADDR environment variable
      address: ""
ingress:
  enabled: true
  annotations:
    # For choosing an ingress ingressClassName is preferred over annotations
    # kubernetes.io/ingress.class: nginx
    #
    # To automatically request TLS certificates use one of the following
    # kubernetes.io/tls-acme: "true"
    # cert-manager.io/cluster-issuer: "letsencrypt"
    #
    # ensure that NGINX's upload size matches Mastodon's
    # for the K8s ingress controller:
    # nginx.ingress.kubernetes.io/proxy-body-size: 40m
    # for the NGINX ingress controller:
    # nginx.org/client-max-body-size: 40m
  # -- you can specify the ingressClassName if it differs from the default
  ingressClassName:
  hosts:
    - host: mastodon.local
      paths:
        - path: '/'
  tls:
    - secretName: mastodon-tls
      hosts:
        - mastodon.local
# -- https://github.com/bitnami/charts/tree/master/bitnami/elasticsearch#parameters
elasticsearch:
  # `false` will disable full-text search
  #
  # if you enable ES after the initial install, you will need to manually run
  # RAILS_ENV=production bundle exec rake chewy:sync
  # (https://docs.joinmastodon.org/admin/optional/elasticsearch/)
  # @ignored
  enabled: true
  # @ignored
  image:
    tag: 7
# https://github.com/bitnami/charts/tree/master/bitnami/postgresql#parameters
postgresql:
  # -- disable if you want to use an existing db; in which case the values below
  # must match those of that external postgres instance
  enabled: true
  # postgresqlHostname: preexisting-postgresql
  # postgresqlPort: 5432
  auth:
    database: mastodon_production
    username: mastodon
    # you must set a password; the password generated by the postgresql chart will
    # be rotated on each upgrade:
    # https://github.com/bitnami/charts/tree/master/bitnami/postgresql#upgrade
    password: ""
    # Set the password for the "postgres" admin user
    # set this to the same value as above if you've previously installed
    # this chart and you're having problems getting mastodon to connect to the DB
    # postgresPassword: ""
    # you can also specify the name of an existing Secret
    # with a key of password set to the password you want
    existingSecret: ""
# https://github.com/bitnami/charts/tree/master/bitnami/redis#parameters
redis:
  # -- you must set a password; the password generated by the redis chart will be
  # rotated on each upgrade:
  password: ""
  # you can also specify the name of an existing Secret
  # with a key of redis-password set to the password you want
  # auth:
  #   existingSecret: ""
# @ignored
service:
  type: ClusterIP
  port: 80
externalAuth:
  oidc:
    # -- OpenID Connect support is proposed in PR #16221 and awaiting merge.
    enabled: false
    # display_name: "example-label"
    # issuer: https://login.example.space/auth/realms/example-space
    # discovery: true
    # scope: "openid,profile"
    # uid_field: uid
    # client_id: mastodon
    # client_secret: SECRETKEY
    # redirect_uri: https://example.com/auth/auth/openid_connect/callback
    # assume_email_is_verified: true
    # client_auth_method:
    # response_type:
    # response_mode:
    # display:
    # prompt:
    # send_nonce:
    # send_scope_to_token_endpoint:
    # idp_logout_redirect_uri:
    # http_scheme:
    # host:
    # port:
    # jwks_uri:
    # auth_endpoint:
    # token_endpoint:
    # user_info_endpoint:
    # end_session_endpoint:
  saml:
    enabled: false
    # acs_url: http://mastodon.example.com/auth/auth/saml/callback
    # issuer: mastodon
    # idp_sso_target_url: https://login.example.com/auth/realms/example/protocol/saml
    # idp_cert: '-----BEGIN CERTIFICATE-----[your_cert_content]-----END CERTIFICATE-----'
    # idp_cert_fingerprint:
    # name_identifier_format: urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
    # cert:
    # private_key:
    # want_assertion_signed: true
    # want_assertion_encrypted: true
    # assume_email_is_verified: true
    # uid_attribute: "urn:oid:0.9.2342.19200300.100.1.1"
    # attributes_statements:
    #   uid: "urn:oid:0.9.2342.19200300.100.1.1"
    #   email: "urn:oid:1.3.6.1.4.1.5923.1.1.1.6"
    #   full_name: "urn:oid:2.16.840.1.113730.3.1.241"
    #   first_name: "urn:oid:2.5.4.42"
    #   last_name: "urn:oid:2.5.4.4"
    # verified:
    # verified_email:
  oauth_global:
    # -- Automatically redirect to OIDC, CAS or SAML, and don't use local account authentication when clicking on Sign-In
    omniauth_only: false
  cas:
    enabled: false
    # url: https://sso.myserver.com
    # host: sso.myserver.com
    # port: 443
    # ssl: true
    # validate_url:
    # callback_url:
    # logout_url:
    # login_url:
    # uid_field: 'user'
    # ca_path:
    # disable_ssl_verification: false
    # assume_email_is_verified: true
    # keys:
    #   uid: 'user'
    #   name: 'name'
    #   email: 'email'
    #   nickname: 'nickname'
    #   first_name: 'firstname'
    #   last_name: 'lastname'
    #   location: 'location'
    #   image: 'image'
    #   phone: 'phone'
  pam:
    enabled: false
    # email_domain: example.com
    # default_service: rpam
    # controlled_service: rpam
  ldap:
    enabled: false
    # host: myservice.namespace.svc
    # port: 389
    # method: simple_tls
    # base:
    # bind_on:
    # password:
    # uid: cn
    # mail: mail
    # search_filter: "(|(%{uid}=%{email})(%{mail}=%{email}))"
    # uid_conversion:
    #   enabled: true
    #   search: "., -"
    #   replace: _
# -- https://github.com/mastodon/mastodon/blob/main/Dockerfile#L75
#
# if you manually change the UID/GID environment variables, ensure these values
# match:
podSecurityContext:
  runAsUser: 991
  runAsGroup: 991
  fsGroup: 991
# @ignored
securityContext: {}
serviceAccount:
  # -- Specifies whether a service account should be created
  create: true
  # -- Annotations to add to the service account
  annotations: {}
  # -- The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
# -- Kubernetes manages pods for jobs and pods for deployments differently, so you might
# need to apply different annotations to the two different sets of pods. The annotations
# set with podAnnotations will be added to all deployment-managed pods.
podAnnotations: {}
# -- The annotations set with jobAnnotations will be added to all job pods.
jobAnnotations: {}
# -- Default resources for all Deployments and jobs unless overwritten
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
# @ignored
nodeSelector: {}
# @ignored
tolerations: []
# -- Affinity for all pods unless overwritten
affinity: {}