At the time of writing, EKS Fargate does not support a logging driver for shipping container logs to CloudWatch. The only option is to run a sidecar container.
Let’s create a ConfigMap holding the EKS cluster name and region, in the required namespace:
kubectl create configmap cluster-info \
  --from-literal=cluster.name=YOUR_EKS_CLUSTER_NAME \
  --from-literal=logs.region=YOUR_EKS_CLUSTER_REGION -n KUBERNETES_NAMESPACE
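Equivalently, if you prefer declarative manifests, the same ConfigMap can be expressed as YAML (the values are the same placeholders as above) and applied with kubectl apply -f:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: KUBERNETES_NAMESPACE
data:
  cluster.name: YOUR_EKS_CLUSTER_NAME
  logs.region: YOUR_EKS_CLUSTER_REGION
```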
Next, let’s create a service account and a ConfigMap with the configuration file for Fluentd. To do this, copy the text below and save it as "fluentd.yaml":
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: {{NAMESPACE}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-role
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
      - pods/logs
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-role
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: {{NAMESPACE}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: {{NAMESPACE}}
  labels:
    k8s-app: fluentd-cloudwatch
data:
  fluent.conf: |
    @include containers.conf
    <match fluent.**>
      @type null
    </match>
  containers.conf: |
    <source>
      @type tail
      @id in_tail_container_logs
      @label @containers
      path /var/log/application.log
      pos_file /var/log/fluentd-containers.log.pos
      tag *
      read_from_head true
      <parse>
        @type none
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
    <label @containers>
      <filter **>
        @type kubernetes_metadata
        @id filter_kube_metadata
      </filter>
      <filter **>
        @type record_transformer
        @id filter_containers_stream_transformer
        <record>
          stream_name "#{ENV.fetch('HOSTNAME')}"
        </record>
      </filter>
      <filter **>
        @type concat
        key log
        multiline_start_regexp /^\S/
        separator ""
        flush_interval 5
        timeout_label @NORMAL
      </filter>
      <match **>
        @type relabel
        @label @NORMAL
      </match>
    </label>
    <label @NORMAL>
      <match **>
        @type cloudwatch_logs
        @id out_cloudwatch_logs_containers
        region "#{ENV.fetch('REGION')}"
        log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/application"
        log_stream_name_key stream_name
        remove_log_stream_name_key true
        auto_create_stream true
        <buffer>
          flush_interval 5
          chunk_limit_size 2m
          queued_chunks_limit_size 32
          retry_forever true
        </buffer>
      </match>
    </label>
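The concat filter with multiline_start_regexp /^\S/ treats any line that begins with whitespace (for example, a stack-trace continuation) as part of the previous event. A rough shell emulation of that grouping logic, purely illustrative and not how Fluentd itself is implemented:

```shell
# Lines starting with non-whitespace begin a new event;
# indented lines are glued onto the previous one (separator "").
printf 'Exception in thread "main"\n\tat com.example.App.run\nnext event\n' |
awk 'NR > 1 && /^[^[:space:]]/ { print buf; buf = "" } { buf = buf $0 } END { print buf }'
```

Here the first two input lines are merged into a single event, while "next event" stays separate.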
And apply it:
cat fluentd.yaml | sed "s/{{NAMESPACE}}/default/" | kubectl apply -f -
where "default" is the name of the target namespace.
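The sed step simply rewrites every {{NAMESPACE}} placeholder before the manifest reaches kubectl. A quick way to check what would actually be applied, using a small fragment and a hypothetical /tmp path:

```shell
# Write a fragment containing the placeholder, then substitute it
cat > /tmp/fragment.yaml <<'EOF'
metadata:
  name: fluentd
  namespace: {{NAMESPACE}}
EOF
sed "s/{{NAMESPACE}}/default/" /tmp/fragment.yaml
```

The output contains "namespace: default" in place of the placeholder; the file on disk is left unchanged.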
An example of a sidecar deployment:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testapp
  name: testapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testapp
  strategy: {}
  template:
    metadata:
      labels:
        app: testapp
    spec:
      serviceAccountName: fluentd
      terminationGracePeriodSeconds: 30
      initContainers:
        - name: copy-fluentd-config
          image: busybox
          command: ['sh', '-c', 'cp /config-volume/..data/* /fluentd/etc']
          volumeMounts:
            - name: config-volume
              mountPath: /config-volume
            - name: fluentdconf
              mountPath: /fluentd/etc
      containers:
        - image: alpine:3.10
          name: alpine
          command: ["/bin/sh"]
          args: ["-c", "while true; do echo hello 2>&1 | tee -a /var/log/application.log; sleep 10; done"]
          volumeMounts:
            - name: fluentdconf
              mountPath: /fluentd/etc
            - name: varlog
              mountPath: /var/log
        - image: fluent/fluentd-kubernetes-daemonset:v1.7.3-debian-cloudwatch-1.0
          name: fluentd-cloudwatch
          env:
            - name: REGION
              valueFrom:
                configMapKeyRef:
                  name: cluster-info
                  key: logs.region
            - name: CLUSTER_NAME
              valueFrom:
                configMapKeyRef:
                  name: cluster-info
                  key: cluster.name
            - name: AWS_ACCESS_KEY_ID
              value: "XXXXXXXXXXXXXXX"
            - name: AWS_SECRET_ACCESS_KEY
              value: "YYYYYYYYYYYYYYY"
          resources:
            limits:
              memory: 400Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: config-volume
              mountPath: /config-volume
            - name: fluentdconf
              mountPath: /fluentd/etc
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-config
        - name: fluentdconf
          emptyDir: {}
        - name: varlog
          emptyDir: {}
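The "alpine" container generates log lines by appending to the file that Fluentd tails through the shared "varlog" volume. One iteration of its loop, runnable locally (writing to a hypothetical /tmp path instead of /var/log):

```shell
# Append one log line, duplicating it to stdout, as the demo container does
echo hello 2>&1 | tee -a /tmp/application.log
# Fluentd's in_tail source would pick up this newly appended line
tail -n 1 /tmp/application.log
```

This is the key point of the sidecar pattern: both containers mount the same emptyDir volume, so whatever the application writes there is visible to the Fluentd container.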
In this deployment, the "AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY" variables are set directly, since at the moment IAM role credential endpoints exist only for the EC2, ECS Fargate, and Lambda services. To avoid embedding static credentials, you can use the OpenID Connect provider for EKS.
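With the OpenID Connect route (IAM Roles for Service Accounts), the service account is annotated with an IAM role whose trust policy allows the cluster's OIDC provider; the two static-key variables can then be dropped from the deployment. A sketch, where the account ID and role name are placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: default
  annotations:
    # Placeholder ARN -- substitute your own account ID and role name;
    # the role must grant the CloudWatch Logs permissions Fluentd needs.
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/fluentd-cloudwatch
```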