Commit 881d8a66 authored by Caspar Martens

Add kubernetes configuration
Kubernetes YAML files required for deployment of the application
## Getting started
To deploy this Kubernetes environment, first create an image pull secret so that your cluster can pull this project's container images directly from the GitLab container registry. This can be done by issuing the following command:
```
kubectl create secret docker-registry regcred --namespace amygdala --docker-server=registry.gitlab.ost.ch:45023 --docker-username=kube-puller --docker-password=<token>
```
`regcred` is the name of the created secret; it is referenced in the `imagePullSecrets` section of the deployment resources. The token has already been created on the repository and can be shared on demand by the maintainer.
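For reference, every Deployment in this repository references the secret in its pod spec; abridged from `cnc.yaml`:

```yaml
spec:
  containers:
    - name: cnc
      image: registry.gitlab.ost.ch:45023/amygdala/cnc/cnc:latest
  imagePullSecrets:
    - name: regcred
```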
With registry access in place and the required images pullable, everything is ready to start the cluster. Two bash utilities have been created to simplify working with this repository.
These are the `amygdala-helper.sh` and `database-helper.sh` scripts. Both can be run with the parameter `up` or `down`. The database helper additionally provides a `creds` option that returns the username and password of the Elasticsearch cluster.
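Both helpers are thin wrappers around a `case` dispatch. A minimal, self-contained sketch of that pattern (`run` is a hypothetical name, and `echo` stands in for the real `kubectl` calls so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch of the dispatch pattern used by amygdala-helper.sh and
# database-helper.sh; 'echo' replaces the kubectl apply/delete calls.
run() {
  case "$1" in
    up )    echo "applying manifests" ;;
    down )  echo "deleting manifests" ;;
    creds ) echo "printing elastic credentials" ;;
    * )     echo "Unknown option: $1" >&2; return 1 ;;
  esac
}

run up
```

The real scripts are invoked the same way, e.g. `./amygdala-helper.sh up` or `./database-helper.sh creds`.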
amygdala-helper.sh
#!/bin/bash
case "$1" in
up )
kubectl apply -f amygdala.yaml;
kubectl create -f elastic_search/elastic-resources.yaml;
kubectl apply -f elastic_search/elastic-operator.yaml;
kubectl apply -f elastic_search/elastic-cluster.yaml;
kubectl apply -f ltm.yaml;
kubectl apply -f stt.yaml;
kubectl apply -f tts.yaml;
kubectl apply -f cnc.yaml;
kubectl apply -f memorize.yaml;
kubectl apply -f remember.yaml;
kubectl apply -f vis.yaml;;
down )
kubectl delete -f vis.yaml;
kubectl delete -f remember.yaml;
kubectl delete -f memorize.yaml;
kubectl delete -f cnc.yaml;
kubectl delete -f tts.yaml;
kubectl delete -f stt.yaml;
kubectl delete -f ltm.yaml;
kubectl delete -f elastic_search/elastic-cluster.yaml;
kubectl delete -f elastic_search/elastic-operator.yaml;
kubectl delete -f elastic_search/elastic-resources.yaml;
kubectl delete -f amygdala.yaml;;
* )
echo "Unknown option" >&2;;
esac
amygdala.yaml
apiVersion: v1
kind: Namespace
metadata:
name: amygdala
labels:
namespace: amygdala
cnc.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: cnc-nwp
namespace: amygdala
spec:
podSelector:
matchLabels:
network/cnc: "true"
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
network-access/cnc: "true"
---
apiVersion: v1
kind: Service
metadata:
name: cnc-svc
namespace: amygdala
labels:
app.kubernetes.io/name: cnc-svc
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: cnc
app.kubernetes.io/part-of: amygdala
spec:
ports:
- name: "cnc"
port: 8080
targetPort: 8080
selector:
app.kubernetes.io/name: cnc-pod
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cnc-cfg
namespace: amygdala
labels:
app.kubernetes.io/name: cnc-cfg
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: cnc
app.kubernetes.io/part-of: amygdala
data:
GPT3_API: host
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cnc-dep
namespace: amygdala
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: cnc-pod
template:
metadata:
labels:
network/cnc: "true"
app.kubernetes.io/name: cnc-pod
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: cnc
app.kubernetes.io/part-of: amygdala
spec:
containers:
- image: registry.gitlab.ost.ch:45023/amygdala/cnc/cnc:latest
name: cnc
ports:
- containerPort: 8080
# env:
# - name: OPENAI_TOKEN
# valueFrom:
# secretKeyRef:
# name: openai-secrets
# key: token
envFrom:
- configMapRef:
name: cnc-cfg
imagePullSecrets:
- name: regcred
restartPolicy: Always
elastic_search/database-helper.sh
#!/bin/bash
case "$1" in
up )
kubectl create -f ../amygdala.yaml;
kubectl create -f elastic-resources.yaml;
kubectl apply -f elastic-operator.yaml;
kubectl apply -f elastic-cluster.yaml;;
down )
kubectl delete -f elastic-cluster.yaml;
kubectl delete -f elastic-operator.yaml;
kubectl delete -f elastic-resources.yaml;;
creds )
echo -e "{\n username=\"elastic\",\n password=\"$(kubectl get secret elastic-cluster-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')\"\n}";;
* )
echo "Unknown option" >&2;;
esac
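The `creds` option prints the Elasticsearch credentials in the following shape (the password value shown here is a placeholder for the decoded secret):

```
{
 username="elastic",
 password="<decoded-elastic-user-password>"
}
```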
elastic_search/elastic-cluster.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: elastic-cluster-nwp
namespace: amygdala
spec:
podSelector:
matchLabels:
network/elastic-cluster: "true"
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
network-access/elastic-cluster: "true"
---
apiVersion: v1
kind: Service
metadata:
name: elastic-cluster-svc
namespace: amygdala
labels:
app.kubernetes.io/name: elastic-cluster-svc
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: ltm
app.kubernetes.io/part-of: amygdala
spec:
ports:
- name: "elastic-cluster"
port: 9200 # Elasticsearch HTTP port
targetPort: 9200
selector:
elasticsearch.k8s.elastic.co/cluster-name: elastic-cluster # label set by ECK on the cluster's pods
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elastic-cluster
namespace: amygdala
spec:
version: 8.1.1
nodeSets:
# This configuration is for testing only
- name: default
count: 1
podTemplate:
metadata:
labels:
network/elastic-cluster: "true"
config:
node.store.allow_mmap: false
# # This is the production environment
# - name: master-nodes
# count: 3
# config:
# node.master: true
# node.data: false
# - name: data-nodes
# count: 3
# config:
# node.master: false
# node.data: true
# # By default an Elasticsearch data pod gets only 1Gi of storage; production needs more.
# volumeClaimTemplates:
# - metadata:
# name: elasticsearch-data
# spec:
# resources:
# requests:
# storage: 10Gi
# storageClassName: local-storage # any storage class can be used (e.g. gce-pd, aws-ebs)
elastic_search/elastic-operator.yaml
# Source: eck-operator/templates/operator-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: elastic-system
labels:
name: elastic-system
---
# Source: eck-operator/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: elastic-operator
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
kind: Secret
metadata:
name: elastic-webhook-server-cert
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
---
# Source: eck-operator/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: elastic-operator
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
data:
eck.yaml: |-
log-verbosity: 0
metrics-port: 0
container-registry: docker.elastic.co
max-concurrent-reconciles: 3
ca-cert-validity: 8760h
ca-cert-rotate-before: 24h
cert-validity: 8760h
cert-rotate-before: 24h
exposed-node-labels: [topology.kubernetes.io/.*,failure-domain.beta.kubernetes.io/.*]
set-default-security-context: auto-detect
kube-client-timeout: 60s
elasticsearch-client-timeout: 180s
disable-telemetry: false
distribution-channel: all-in-one
validate-storage-class: true
enable-webhook: true
webhook-name: elastic-webhook.k8s.elastic.co
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
rules:
- apiGroups:
- "authorization.k8s.io"
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- pods
- events
- persistentvolumeclaims
- secrets
- services
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- apps
resources:
- deployments
- statefulsets
- daemonsets
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- elasticsearch.k8s.elastic.co
resources:
- elasticsearches
- elasticsearches/status
- elasticsearches/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- kibana.k8s.elastic.co
resources:
- kibanas
- kibanas/status
- kibanas/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- apm.k8s.elastic.co
resources:
- apmservers
- apmservers/status
- apmservers/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- enterprisesearch.k8s.elastic.co
resources:
- enterprisesearches
- enterprisesearches/status
- enterprisesearches/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- beat.k8s.elastic.co
resources:
- beats
- beats/status
- beats/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- agent.k8s.elastic.co
resources:
- agents
- agents/status
- agents/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- maps.k8s.elastic.co
resources:
- elasticmapsservers
- elasticmapsservers/status
- elasticmapsservers/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
verbs:
- get
- list
- watch
- create
- update
- patch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- get
- list
- watch
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: "elastic-operator-view"
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apm.k8s.elastic.co"]
resources: ["apmservers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["kibana.k8s.elastic.co"]
resources: ["kibanas"]
verbs: ["get", "list", "watch"]
- apiGroups: ["enterprisesearch.k8s.elastic.co"]
resources: ["enterprisesearches"]
verbs: ["get", "list", "watch"]
- apiGroups: ["beat.k8s.elastic.co"]
resources: ["beats"]
verbs: ["get", "list", "watch"]
- apiGroups: ["agent.k8s.elastic.co"]
resources: ["agents"]
verbs: ["get", "list", "watch"]
- apiGroups: ["maps.k8s.elastic.co"]
resources: ["elasticmapsservers"]
verbs: ["get", "list", "watch"]
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: "elastic-operator-edit"
labels:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
rules:
- apiGroups: ["elasticsearch.k8s.elastic.co"]
resources: ["elasticsearches"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["apm.k8s.elastic.co"]
resources: ["apmservers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["kibana.k8s.elastic.co"]
resources: ["kibanas"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["enterprisesearch.k8s.elastic.co"]
resources: ["enterprisesearches"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["beat.k8s.elastic.co"]
resources: ["beats"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["agent.k8s.elastic.co"]
resources: ["agents"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
- apiGroups: ["maps.k8s.elastic.co"]
resources: ["elasticmapsservers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# Source: eck-operator/templates/role-bindings.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: elastic-operator
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: elastic-operator
subjects:
- kind: ServiceAccount
name: elastic-operator
namespace: elastic-system
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
kind: Service
metadata:
name: elastic-webhook-server
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
spec:
ports:
- name: https
port: 443
targetPort: 9443
selector:
control-plane: elastic-operator
---
# Source: eck-operator/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elastic-operator
namespace: elastic-system
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
spec:
selector:
matchLabels:
control-plane: elastic-operator
serviceName: elastic-operator
replicas: 1
template:
metadata:
annotations:
# Rename the fields "error" to "error.message" and "source" to "event.source"
# This is to avoid a conflict with the ECS "error" and "source" documents.
"co.elastic.logs/raw": "[{\"type\":\"container\",\"json.keys_under_root\":true,\"paths\":[\"/var/log/containers/*${data.kubernetes.container.id}.log\"],\"processors\":[{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"error\",\"to\":\"_error\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_error\",\"to\":\"error.message\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"source\",\"to\":\"_source\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_source\",\"to\":\"event.source\"}]}}]}]"
"checksum/config": 06e29365c8508fbeda738005ba0e857e543fcac6580e97f081656c96b5944bff
labels:
control-plane: elastic-operator
spec:
terminationGracePeriodSeconds: 10
serviceAccountName: elastic-operator
securityContext:
runAsNonRoot: true
containers:
- image: "docker.elastic.co/eck/eck-operator:2.1.0"
imagePullPolicy: IfNotPresent
name: manager
args:
- "manager"
- "--config=/conf/eck.yaml"
env:
- name: OPERATOR_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: WEBHOOK_SECRET
value: elastic-webhook-server-cert
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 100m
memory: 150Mi
ports:
- containerPort: 9443
name: https-webhook
protocol: TCP
volumeMounts:
- mountPath: "/conf"
name: conf
readOnly: true
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
volumes:
- name: conf
configMap:
name: elastic-operator
- name: cert
secret:
defaultMode: 420
secretName: elastic-webhook-server-cert
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: elastic-webhook.k8s.elastic.co
labels:
control-plane: elastic-operator
app.kubernetes.io/version: "2.1.0"
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-agent-k8s-elastic-co-v1alpha1-agent
failurePolicy: Ignore
name: elastic-agent-validation-v1alpha1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- agent.k8s.elastic.co
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- agents
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-apm-k8s-elastic-co-v1-apmserver
failurePolicy: Ignore
name: elastic-apm-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- apm.k8s.elastic.co
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- apmservers
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-apm-k8s-elastic-co-v1beta1-apmserver
failurePolicy: Ignore
name: elastic-apm-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- apm.k8s.elastic.co
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- apmservers
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-beat-k8s-elastic-co-v1beta1-beat
failurePolicy: Ignore
name: elastic-beat-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- beat.k8s.elastic.co
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- beats
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-enterprisesearch-k8s-elastic-co-v1-enterprisesearch
failurePolicy: Ignore
name: elastic-ent-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- enterprisesearch.k8s.elastic.co
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- enterprisesearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-enterprisesearch-k8s-elastic-co-v1beta1-enterprisesearch
failurePolicy: Ignore
name: elastic-ent-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- enterprisesearch.k8s.elastic.co
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- enterprisesearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-elasticsearch-k8s-elastic-co-v1-elasticsearch
failurePolicy: Ignore
name: elastic-es-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- elasticsearch.k8s.elastic.co
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- elasticsearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-elasticsearch-k8s-elastic-co-v1beta1-elasticsearch
failurePolicy: Ignore
name: elastic-es-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- elasticsearch.k8s.elastic.co
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- elasticsearches
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-kibana-k8s-elastic-co-v1-kibana
failurePolicy: Ignore
name: elastic-kb-validation-v1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- kibana.k8s.elastic.co
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- kibanas
- clientConfig:
caBundle: Cg==
service:
name: elastic-webhook-server
namespace: elastic-system
path: /validate-kibana-k8s-elastic-co-v1beta1-kibana
failurePolicy: Ignore
name: elastic-kb-validation-v1beta1.k8s.elastic.co
matchPolicy: Exact
admissionReviewVersions: [v1beta1]
sideEffects: None
rules:
- apiGroups:
- kibana.k8s.elastic.co
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- kibanas
elastic_search/elastic-resources.yaml (diff collapsed, content not shown)
ltm.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: ltm-nwp
namespace: amygdala
spec:
podSelector:
matchLabels:
network/ltm: "true"
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
network-access/ltm: "true"
---
apiVersion: v1
kind: Service
metadata:
name: ltm-svc
namespace: amygdala
labels:
app.kubernetes.io/name: ltm-svc
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: ltm
app.kubernetes.io/part-of: amygdala
spec:
ports:
- name: "ltm"
port: 8080
targetPort: 8080
selector:
app.kubernetes.io/name: ltm-pod
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ltm-cfg
namespace: amygdala
labels:
app.kubernetes.io/name: ltm-cfg
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: ltm
app.kubernetes.io/part-of: amygdala
data:
ELASTIC_USER: elastic
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ltm-dep
namespace: amygdala
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: ltm-pod
template:
metadata:
labels:
network/ltm: "true"
network-access/elastic-cluster: "true"
app.kubernetes.io/name: ltm-pod
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: ltm
app.kubernetes.io/part-of: amygdala
spec:
containers:
- image: registry.gitlab.ost.ch:45023/amygdala/ltm/ltm:latest
name: ltm
ports:
- containerPort: 8080
env:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elastic-cluster-es-elastic-user
key: elastic
envFrom:
- configMapRef:
name: ltm-cfg
imagePullSecrets:
- name: regcred
restartPolicy: Always
memorize.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: memorize-nwp
namespace: amygdala
spec:
podSelector:
matchLabels:
network/memorize: "true"
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
network-access/memorize: "true"
---
apiVersion: v1
kind: Service
metadata:
name: memorize-svc
namespace: amygdala
labels:
app.kubernetes.io/name: memorize-svc
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: memorize
app.kubernetes.io/part-of: amygdala
spec:
ports:
- name: "memorize"
port: 8080
targetPort: 8080
selector:
app.kubernetes.io/name: memorize-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: memorize-dep
namespace: amygdala
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: memorize-pod
template:
metadata:
labels:
network/memorize: "true"
network-access/stt: "true"
network-access/ltm: "true"
app.kubernetes.io/name: memorize-pod
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: memorize
app.kubernetes.io/part-of: amygdala
spec:
containers:
- image: registry.gitlab.ost.ch:45023/amygdala/memorize/memorize:latest
name: memorize
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
restartPolicy: Always
remember.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: remember-nwp
namespace: amygdala
spec:
podSelector:
matchLabels:
network/remember: "true"
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
network-access/remember: "true"
---
apiVersion: v1
kind: Service
metadata:
name: remember-svc
namespace: amygdala
labels:
app.kubernetes.io/name: remember-svc
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: remember
app.kubernetes.io/part-of: amygdala
spec:
ports:
- name: "remember"
port: 8080
targetPort: 8080
selector:
app.kubernetes.io/name: remember-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: remember-dep
namespace: amygdala
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: remember-pod
template:
metadata:
labels:
network/remember: "true"
network-access/stt: "true"
network-access/ltm: "true"
app.kubernetes.io/name: remember-pod
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: remember
app.kubernetes.io/part-of: amygdala
spec:
containers:
- image: registry.gitlab.ost.ch:45023/amygdala/remember/remember:latest
name: remember
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
restartPolicy: Always
stt.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: stt-nwp
namespace: amygdala
spec:
podSelector:
matchLabels:
network/stt: "true"
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
network-access/stt: "true"
---
apiVersion: v1
kind: Service
metadata:
name: stt-svc
namespace: amygdala
labels:
app.kubernetes.io/name: stt-svc
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: stt
app.kubernetes.io/part-of: amygdala
spec:
ports:
- name: "stt"
port: 8080
targetPort: 8080
selector:
app.kubernetes.io/name: stt-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: stt-dep
namespace: amygdala
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: stt-pod
template:
metadata:
labels:
network/stt: "true"
app.kubernetes.io/name: stt-pod
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: stt
app.kubernetes.io/part-of: amygdala
spec:
containers:
- image: registry.gitlab.ost.ch:45023/amygdala/stt/stt:latest
name: stt
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
restartPolicy: Always
tts.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: tts-nwp
namespace: amygdala
spec:
podSelector:
matchLabels:
network/tts: "true"
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
network-access/tts: "true"
---
apiVersion: v1
kind: Service
metadata:
name: tts-svc
namespace: amygdala
labels:
app.kubernetes.io/name: tts-svc
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: tts
app.kubernetes.io/part-of: amygdala
spec:
ports:
- name: "tts"
port: 8080
targetPort: 8080
selector:
app.kubernetes.io/name: tts-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tts-dep
namespace: amygdala
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app.kubernetes.io/name: tts-pod
template:
metadata:
labels:
network/tts: "true"
app.kubernetes.io/name: tts-pod
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: tts
app.kubernetes.io/part-of: amygdala
spec:
containers:
- image: registry.gitlab.ost.ch:45023/amygdala/tts/tts:latest
name: tts
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
restartPolicy: Always
vis.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: vis
namespace: amygdala
spec:
podSelector:
matchLabels:
network/vis: "true"
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
network-access/vis: "true"
---
apiVersion: v1
kind: Service
metadata:
name: vis
namespace: amygdala
labels:
app.kubernetes.io/name: vis
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: frontend
app.kubernetes.io/part-of: amygdala
spec:
ports:
- name: "web-access-port"
port: 8080
targetPort: 80
selector:
app.kubernetes.io/name: vis
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: vis
namespace: amygdala
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 2
selector:
matchLabels:
app.kubernetes.io/name: vis
template:
metadata:
labels:
network/vis: "true"
app.kubernetes.io/name: vis
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: hand
app.kubernetes.io/component: frontend
app.kubernetes.io/part-of: amygdala
spec:
containers:
- image: registry.gitlab.ost.ch:45023/amygdala/vis/vis:latest
name: vis
ports:
- containerPort: 80
imagePullSecrets:
- name: regcred
restartPolicy: Always