# Using Tigera Secure EE
Tigera Secure EE is a software-defined network solution that can be used with Kubernetes. For those familiar with Calico, Tigera Secure EE is essentially Calico with enterprise features on top.
Support for Tigera Secure EE in Charmed Kubernetes is provided by the `tigera-secure-ee` subordinate charm, which can be used in place of `flannel` or `calico`.
## Deploying Charmed Kubernetes with Tigera Secure EE
Before you start, you will need:
- A Tigera Secure EE licence key
- Tigera private Docker registry credentials (provided as a Docker `config.json`)
**Note:** Tigera Secure EE's network traffic, much like Calico's, is filtered on many clouds. It will work on MAAS, and can work on AWS if you manually disable source/destination checking on the instances.
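For example, on AWS the check can be turned off per instance with the AWS CLI. A minimal sketch, assuming you know your Kubernetes nodes' instance IDs (the IDs below are placeholders):

```bash
# Disable source/destination checking on each Kubernetes node so that
# Tigera Secure EE's traffic is not dropped. Replace the placeholder
# instance IDs with those of your actual nodes.
for instance in i-0123456789abcdef0 i-0fedcba9876543210; do
    aws ec2 modify-instance-attribute \
        --instance-id "$instance" \
        --no-source-dest-check
done
```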
To start, deploy Charmed Kubernetes with Tigera Secure EE:

```bash
juju deploy cs:~containers/kubernetes-tigera-secure-ee
```
Configure the `tigera-secure-ee` charm with your licence key and registry credentials:

```bash
juju config tigera-secure-ee \
    license-key=$(base64 -w0 license.yaml) \
    registry-credentials=$(base64 -w0 config.json)
```
Wait for the deployment to settle before continuing.
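One way to follow the deployment's progress is with `juju status`; the cluster has settled once all units report an `active`/`idle` state. For example:

```bash
# Refresh the cluster status every two seconds until all units settle.
watch --color -n 2 juju status --color
```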
## Using the built-in elasticsearch-operator
**Caution:** The built-in elasticsearch-operator is only recommended for testing or demonstration purposes. For production deployments, skip ahead to the next section.
For testing and quick-start purposes, the `tigera-secure-ee` charm deploys elasticsearch-operator into your Kubernetes cluster by default. For it to work properly, you will need to create a StorageClass.
The easiest way to do this is with the hostpath provisioner. Create a file named `elasticsearch-storage.yaml` containing the following:

```yaml
# This manifest implements elasticsearch-storage using local host-path volumes.
# It is not suitable for production use, and it only works on single-node clusters.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: elasticsearch-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/host-path
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tigera-elasticsearch-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/tigera/elastic-data/1
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: elasticsearch-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tigera-elasticsearch-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/tigera/elastic-data/2
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: elasticsearch-storage
```
Apply `elasticsearch-storage.yaml`:

```bash
kubectl apply -f elasticsearch-storage.yaml
```
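You can verify that the StorageClass and PersistentVolumes were created, for example:

```bash
# The StorageClass should be listed as the default, and both
# PersistentVolumes should show as Available (or Bound once claimed).
kubectl get storageclass
kubectl get pv
```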
Once you have a StorageClass available, delete the existing PVCs and pods so Kubernetes will recreate them using the new StorageClass:

```bash
kubectl delete pvc -n calico-monitoring es-data-es-data-tigera-elasticsearch-default-0
kubectl delete pvc -n calico-monitoring es-data-es-master-tigera-elasticsearch-default-0
kubectl delete po -n calico-monitoring es-data-tigera-elasticsearch-default-0
kubectl delete po -n calico-monitoring es-master-tigera-elasticsearch-default-0
```
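The pods should be recreated automatically, and their new PVCs will bind to the PersistentVolumes defined above. You can watch them come back up, for example:

```bash
# Wait for the Elasticsearch pods to be recreated and reach Running state.
kubectl get pods -n calico-monitoring --watch
```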
For a more robust storage solution, consider deploying Ceph with Charmed Kubernetes, as documented in the Storage section. This will create a default StorageClass that elasticsearch-operator will use automatically.
## Using your own ElasticSearch
Disable the built-in elasticsearch-operator:

```bash
juju config tigera-secure-ee enable-elasticsearch-operator=false
```
Then follow this guide from Tigera: Using your own ElasticSearch for logs.
## Accessing cnx-manager
The cnx-manager service is exposed as a NodePort on port 30003. Run the following command to open port 30003 on the workers:

```bash
juju run --application kubernetes-worker open-port 30003
```
Then connect to `https://<kubernetes-worker-ip>:30003` in your web browser. Use the Kubernetes admin credentials to log in (you can find these in the kubeconfig file created on kubernetes-master units at `/home/ubuntu/config`).
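For example, you can copy the kubeconfig to your local machine and read the credentials from it. A minimal sketch, assuming the file is at the path above:

```bash
# Fetch the kubeconfig from the first kubernetes-master unit and
# print it in full, including the admin username and password.
juju scp kubernetes-master/0:config ~/.kube/config
kubectl config view --raw
```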
## Accessing kibana
The kibana service is exposed as a NodePort on port 30601. Run the following command to open port 30601 on the workers:

```bash
juju run --application kubernetes-worker open-port 30601
```
**Caution:** Do not open this port if your kubernetes-worker units are exposed on a network you do not trust. Kibana does not require credentials, so anyone who can reach the port can use it.
Then connect to `http://<kubernetes-worker-ip>:30601` in your web browser.
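If you are unsure of a worker's address, `juju status` lists the public address of each unit, for example:

```bash
# Show the kubernetes-worker units along with their public addresses.
juju status kubernetes-worker
```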
## Using a private Docker registry
For a general introduction to using a private Docker registry with Charmed Kubernetes, please refer to the Private Docker Registry page.
In addition to the steps documented there, you will need to upload the following images to the registry (a sketch of a script for mirroring them follows the list):

```
docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
docker.elastic.co/kibana/kibana-oss:6.4.3
quay.io/tigera/calicoctl:v2.3.0
quay.io/tigera/calicoq:v2.3.0
quay.io/tigera/cnx-apiserver:v2.3.0
quay.io/tigera/cnx-manager:v2.3.0
quay.io/tigera/cnx-manager-proxy:v2.3.0
quay.io/tigera/cnx-node:v2.3.0
quay.io/tigera/cnx-queryserver:v2.3.0
quay.io/tigera/es-proxy:v2.3.0
quay.io/tigera/fluentd:v2.3.0
quay.io/tigera/kube-controllers:v2.3.0
quay.io/tigera/cloud-controllers:v2.3.0
quay.io/tigera/typha:v2.3.0
quay.io/tigera/intrusion-detection-job-installer:v2.3.0
quay.io/tigera/es-curator:v2.3.0
quay.io/coreos/configmap-reload:v0.0.1
quay.io/coreos/prometheus-config-reloader:v0.0.3
quay.io/coreos/prometheus-operator:v0.18.1
quay.io/prometheus/alertmanager:v0.14.0
quay.io/prometheus/prometheus:v2.2.1
docker.io/upmcenterprises/elasticsearch-operator:0.2.0
docker.io/busybox:latest
docker.io/alpine:3.7
```
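One way to upload them is to pull each image, retag it with the registry's address, and push it. A minimal sketch, assuming you are logged in to quay.io with your Tigera credentials and that the `REGISTRY` environment variable holds the registry's address and port (see the configuration script below); the exact repository paths your deployment expects may differ, so check the Private Docker Registry page:

```bash
# Mirror each required image into the private registry, keeping the
# repository path and tag but replacing the source registry hostname.
# Extend the list with the remaining images shown above.
IMAGES="
quay.io/tigera/cnx-node:v2.3.0
quay.io/tigera/cnx-manager:v2.3.0
docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
"
for image in $IMAGES; do
    target="$REGISTRY/${image#*/}"   # strip the source registry hostname
    docker pull "$image"
    docker tag "$image" "$target"
    docker push "$target"
done
```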
Then configure Tigera Secure EE to use the registry with this shell script:

```bash
export IP=$(juju run --unit docker-registry/0 'network-get website --ingress-address')
export PORT=$(juju config docker-registry registry-port)
export REGISTRY=$IP:$PORT
juju config tigera-secure-ee registry=$REGISTRY
```