FIX ERROR – RDS: Error creating DB Parameter Group: InvalidParameterValue: ParameterGroupFamily

When creating an RDS instance with an incorrect value for the "ParameterGroupFamily" parameter, an error similar to the following may occur:

Error creating DB Parameter Group: InvalidParameterValue: ParameterGroupFamily default.mariadb10.2 is not a valid parameter group family

 

To see a list of all possible values for the "ParameterGroupFamily" parameter, you can use the following command:

aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily"
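
The output is long and contains duplicates, so it is convenient to reduce it to the unique family names, for example:

aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily" --output text | tr '\t' '\n' | sort -u

Note that the family is the engine name plus its major version (for example, "mariadb10.2"), not the name of the default parameter group ("default.mariadb10.2") from the error above. A sketch of creating a parameter group with a valid family (the group name and description here are illustrative):

aws rds create-db-parameter-group \
--db-parameter-group-name my-mariadb-params \
--db-parameter-group-family mariadb10.2 \
--description "Custom MariaDB 10.2 parameters"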


Nginx – Regular Expression Tester

 

For quick testing of Nginx regular expressions, you can use a ready-made Docker image. To do this, clone the NGINX-Demos repository:

git clone https://github.com/nginxinc/NGINX-Demos

 

Change to the "nginx-regex-tester" directory:

cd NGINX-Demos/nginx-regex-tester/

 

And launch the container using "docker-compose":

docker-compose up -d

 

And open the following page:

http://localhost/regextester.php
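
To quickly check that the container is up and the page responds, for example:

docker-compose ps
curl -I http://localhost/regextester.php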

 

AWS – EKS Fargate – Fluentd CloudWatch

At the time of writing, EKS Fargate does not support a logging driver for writing to CloudWatch. The only option is to use a sidecar container.

Let’s create a ConfigMap, in which we specify the name of the EKS cluster and its region, in the required namespace:

kubectl create configmap cluster-info \
--from-literal=cluster.name=YOUR_EKS_CLUSTER_NAME \
--from-literal=logs.region=YOUR_EKS_CLUSTER_REGION -n KUBERNETES_NAMESPACE
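
For example, with illustrative values (cluster "my-eks", region "us-east-1", namespace "default"):

kubectl create configmap cluster-info \
--from-literal=cluster.name=my-eks \
--from-literal=logs.region=us-east-1 -n default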

 

Next, let’s create a service account and a ConfigMap with a configuration file for Fluentd. To do this, copy the text below and save it as "fluentd.yaml":

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: {{NAMESPACE}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-role
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
      - pods/logs
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-role
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: {{NAMESPACE}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: {{NAMESPACE}}
  labels:
    k8s-app: fluentd-cloudwatch
data:
  fluent.conf: |
    @include containers.conf
 
    <match fluent.**>
      @type null
    </match>
  containers.conf: |
    <source>
      @type tail
      @id in_tail_container_logs
      @label @containers
      path /var/log/application.log
      pos_file /var/log/fluentd-containers.log.pos
      tag *
      read_from_head true
      <parse>
        @type none
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
 
    <label @containers>
      <filter **>
        @type kubernetes_metadata
        @id filter_kube_metadata
      </filter>
 
      <filter **>
        @type record_transformer
        @id filter_containers_stream_transformer
        <record>
          stream_name "#{ENV.fetch('HOSTNAME')}"
        </record>
      </filter>
 
      <filter **>
        @type concat
        key log
        multiline_start_regexp /^\S/
        separator ""
        flush_interval 5
        timeout_label @NORMAL
      </filter>
 
      <match **>
        @type relabel
        @label @NORMAL
      </match>
    </label>
 
    <label @NORMAL>
      <match **>
        @type cloudwatch_logs
        @id out_cloudwatch_logs_containers
        region "#{ENV.fetch('REGION')}"
        log_group_name "/aws/containerinsights/#{ENV.fetch('CLUSTER_NAME')}/application"
        log_stream_name_key stream_name
        remove_log_stream_name_key true
        auto_create_stream true
        <buffer>
          flush_interval 5
          chunk_limit_size 2m
          queued_chunks_limit_size 32
          retry_forever true
        </buffer>
      </match>
    </label>

 

And apply it:

cat fluentd.yaml | sed "s/{{NAMESPACE}}/default/" | kubectl apply -f -

 

Where "default" is the name of the target namespace.
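
To check that everything was created (the namespace here is assumed to be "default"):

kubectl -n default get serviceaccount/fluentd configmap/fluentd-config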

 

An example of a sidecar deployment:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: testapp
  name: testapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testapp
  strategy: {}
  template:
    metadata:
      labels:
        app: testapp
    spec:
      serviceAccountName: fluentd
      terminationGracePeriodSeconds: 30
      initContainers:
        - name: copy-fluentd-config
          image: busybox
          command: ['sh', '-c', 'cp /config-volume/..data/* /fluentd/etc']
          volumeMounts:
            - name: config-volume
              mountPath: /config-volume
            - name: fluentdconf
              mountPath: /fluentd/etc
      containers:
      - image: alpine:3.10
        name: alpine
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo hello 2>&1 | tee -a /var/log/application.log; sleep 10;done"]
        volumeMounts:
        - name: fluentdconf
          mountPath: /fluentd/etc
        - name: varlog
          mountPath: /var/log
      - image: fluent/fluentd-kubernetes-daemonset:v1.7.3-debian-cloudwatch-1.0
        name: fluentd-cloudwatch
        env:
          - name: REGION
            valueFrom:
              configMapKeyRef:
                name: cluster-info
                key: logs.region
          - name: CLUSTER_NAME
            valueFrom:
              configMapKeyRef:
                name: cluster-info
                key: cluster.name
          - name: AWS_ACCESS_KEY_ID
            value: "XXXXXXXXXXXXXXX"
          - name: "AWS_SECRET_ACCESS_KEY"
            value: "YYYYYYYYYYYYYYY"
        resources:
          limits:
            memory: 400Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
          - name: config-volume
            mountPath: /config-volume
          - name: fluentdconf
            mountPath: /fluentd/etc
          - name: varlog
            mountPath: /var/log
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-config
        - name: fluentdconf
          emptyDir: {}
        - name: varlog
          emptyDir: {}

 

In this deployment, the "AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY" variables are set explicitly, since at the moment IAM role endpoints are available only for the EC2, ECS Fargate, and Lambda services. To avoid hardcoding credentials, you can use the OpenID Connect provider for EKS (IAM Roles for Service Accounts).
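
Once the deployment is running, you can check that the log group is being filled (the cluster name "my-eks" and the region are the illustrative values from above):

aws logs describe-log-streams --region us-east-1 --log-group-name "/aws/containerinsights/my-eks/application"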

Kubernetes – One role for multiple namespaces


Goal:

There are two namespaces: "kube-system" and "default". We need to run a cron task in the "kube-system" namespace that will clean up completed jobs and pods in the "default" namespace. To do this, we create a service account in the "kube-system" namespace, a Role with the necessary permissions in the "default" namespace, and bind the created Role to the created service account.

cross-namespace-role.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jobs-cleanup
  namespace: kube-system
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jobs-cleanup
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "delete"]
- apiGroups: ["batch", "extensions"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jobs-cleanup
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jobs-cleanup
subjects:
- kind: ServiceAccount
  name: jobs-cleanup
  namespace: kube-system
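
Apply the manifest:

kubectl apply -f cross-namespace-role.yaml

A minimal sketch of the cleanup commands such a cron task could run under this service account (Jobs and Pods support these field selectors):

kubectl -n default delete jobs --field-selector status.successful=1
kubectl -n default delete pods --field-selector status.phase==Succeeded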


FIX ERROR – EKS: cloudwatch – 0/1 nodes are available: 1 Insufficient pods.

After installing the CloudWatch Agent in the EKS cluster, its pods are stuck in the "Pending" state.

 

Checking the pod with "kubectl describe" shows the "0/1 nodes are available: 1 Insufficient pods" event.
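
For example (the pod name and the "amazon-cloudwatch" namespace are illustrative, adjust them to your setup):

kubectl -n amazon-cloudwatch describe pod cloudwatch-agent-xxxxx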

 

Solution:

The solution was found here. In this EKS cluster, a separate NodeGroup with the t3.micro instance type is allocated for the system pods, and it simply does not have enough capacity to launch the CloudWatch Agent.

 

After scaling the instance type up to t3.small, all pods switched to the "Running" status.

EKS limits the number of pods per node; this limit can be calculated using the following formula:

N * (M-1) + 2

Where N is the number of Elastic Network Interfaces (ENIs) for the instance type, and M is the number of IP addresses per ENI.
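
For example, a t3.micro has 2 ENIs with 2 IP addresses each, giving 2 * (2-1) + 2 = 4 pods, while a t3.small has 3 ENIs with 4 IP addresses each, giving 3 * (4-1) + 2 = 11 pods, which explains why the upgrade helped.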

The N and M values for a specific instance can be found here.

FIX ERROR – fdisk: DOS partition table format cannot be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors

When trying to create a partition of 2 TB or more with the "fdisk" utility, the following message appears:

The size of this disk is 2 TiB (2199023255552 bytes). DOS partition table format cannot be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors. Use GUID partition table format (GPT).

 

Solution:

Set the "gpt" label on the new disk:

parted /dev/nvme2n1

 

Where "nvme2n1" is your disk.

 

Add the label:

mklabel gpt

 

Checking:

print


Then return to fdisk and create the desired partition. The partition can also be created in the "parted" utility itself, as shown below.
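
For example, a sketch of creating one partition spanning the whole disk from within "parted" (assuming the disk has no other partitions):

mkpart primary ext4 0% 100%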

 

Let’s create a file system. To do this, copy the full path of the created partition (it can be found with "lsblk").

 

And create the file system, in this example ext4:

mkfs.ext4 /dev/nvme2n1p1

 

Jenkins – Active Choice: GitHub – Commit

 

For a parameterized build with an image tag selection, you will need the Active Choices plugin.

Go to "Manage Jenkins"

 

Section "Manage Plugins"

 

Go to the "Available" tab and search for "Active Choices".

Install it.

Create a "New Item" – "Pipeline", indicate that it will be a parameterized build, and add the "Active Choices Reactive Parameter" parameter.

 

Select "Groovy Script" and paste the following into it:

import jenkins.model.*
import groovy.json.JsonSlurper

credentialsId = 'artem-github'
gitUri = 'git@github.com:artem-gatchenko/ansible-openvpn-centos-7.git'

def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
  com.cloudbees.plugins.credentials.common.StandardUsernameCredentials.class, Jenkins.instance, null, null ).find{
    it.id == credentialsId}

def slurper = new JsonSlurper()

def account = gitUri.split(":")[-1].split("/")[0]
def repo = gitUri.split(":")[-1].split("/")[-1].split("\\.")[0]

def addr = "https://api.github.com/repos/${account}/${repo}/commits"
def authString = "${creds.username}:${creds.password}".getBytes().encodeBase64().toString()
def conn = addr.toURL().openConnection()
conn.setRequestProperty( "Authorization", "Basic ${authString}" )
def response_json = "${conn.content.text}"
def parsed = slurper.parseText(response_json)
def commit = []
commit.add("Latest")
for (int i = 0; i < parsed.size(); i++) {
    commit.add(parsed.get(i).sha)
}
return commit

 

Where "credentialsId" is the ID of the Jenkins credentials with a GitHub token, and "gitUri" is the full path to the desired repository.


The same thing, but as a Pipeline.

Pipeline:

properties([
  parameters([
    [$class: 'StringParameterDefinition',
      defaultValue: 'git@github.com:artem-gatchenko/ansible-openvpn-centos-7.git',
      description: 'Git repository URI',
      name: 'gitUri',
      trim: true
    ],
    [$class: 'CascadeChoiceParameter', 
      choiceType: 'PT_SINGLE_SELECT', 
      description: 'Select Image',
      filterLength: 1,
      filterable: false,
      referencedParameters: 'gitUri',
      name: 'GIT_COMMIT_ID', 
      script: [
        $class: 'GroovyScript', 
        script: [
          classpath: [], 
          sandbox: false, 
          script: 
            '''
            import jenkins.model.*
            import groovy.json.JsonSlurper

            credentialsId = 'artem-github'

            def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(
              com.cloudbees.plugins.credentials.common.StandardUsernameCredentials.class, Jenkins.instance, null, null ).find{
                it.id == credentialsId}

            def slurper = new JsonSlurper()

            def account = gitUri.split(":")[-1].split("/")[0]
            def repo = gitUri.split(":")[-1].split("/")[-1].split("\\\\.")[0]

            def addr = "https://api.github.com/repos/${account}/${repo}/commits"
            def authString = "${creds.username}:${creds.password}".getBytes().encodeBase64().toString()
            def conn = addr.toURL().openConnection()
            conn.setRequestProperty( "Authorization", "Basic ${authString}" )
            def response_json = "${conn.content.text}"
            def parsed = slurper.parseText(response_json)
            def commit = []
            commit.add("Latest")
            for (int i = 0; i < parsed.size(); i++) {
                commit.add(parsed.get(i).sha)
            }
            return commit
            '''
        ]
      ]
    ]
  ])
])

AWS CLI – Lambda: Update single variable value

The "--environment" key of the AWS CLI replaces all the variables with the ones you specify as an argument. To change the value of only one variable, without erasing the others and without listing them all, you can use the following Bash script:

aws_lambda_update_env.sh:

#!/usr/bin/env bash

# VARIABLES
LAMBDA='ARTEM-SERVICES'
REGION='us-east-1'
VARIABLE='ECR_TAG'
NEW_VALUE='1.0.1'

# Read the current environment variables of the function
CURRENTVARIABLES="$(aws lambda get-function-configuration --region $REGION --function-name $LAMBDA | jq '.Environment.Variables')"

# Change the value of the single variable, keeping all the others
NEWVARIABLES=$(echo $CURRENTVARIABLES | jq --arg key "$VARIABLE" --arg value "$NEW_VALUE" '.[$key] = $value')

# Push the full set of variables back to the function
COMMAND="aws lambda update-function-configuration --region $REGION --function-name $LAMBDA --environment '{\"Variables\":$NEWVARIABLES}'"

eval $COMMAND

 

This script requires the "jq" utility.

 

The script reads all the current variables, changes the value of the "ECR_TAG" variable to "1.0.1", and updates the function with the full set of variables, including the new "ECR_TAG" value.
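
To verify the result (the function name and region are the ones from the script):

aws lambda get-function-configuration --region us-east-1 --function-name ARTEM-SERVICES --query "Environment.Variables"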

FIX ERROR – Jenkins: dial unix /var/run/docker.sock: connect: permission denied

After installing Docker on the Jenkins server and starting it, builds fail with the following error:

dial unix /var/run/docker.sock: connect: permission denied

 

Solution:

Add the user "jenkins" to the "docker" group:

sudo usermod -aG docker jenkins

 

After that, the "jenkins" user will be able to work with Docker when connected via SSH, but Jenkins jobs will still fail with the same error. To get rid of it, restart the Jenkins service:

sudo systemctl restart jenkins
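
To verify that the "jenkins" user can now reach the Docker daemon:

sudo -u jenkins docker ps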