Terraform – AWS SSM: Extract content

The SSM Parameter Store contains the following JSON:

{
  "username": "admin",
  "password": "password"
}
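
For reference, a parameter with this content can be created via the AWS CLI (a sketch; the SecureString type is assumed):

aws ssm put-parameter \
    --name "/ARTEM-SERVICES/PROD/RDS/CREDENTIALS" \
    --type SecureString \
    --value '{"username": "admin", "password": "password"}'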

 

You need to extract the username and password and use their values in the Terraform code. To do this, you can use the following construct:

# The parameter must already exist in SSM before the apply
data "aws_ssm_parameter" "rds-admin-user" {
  name = "/ARTEM-SERVICES/PROD/RDS/CREDENTIALS"
}

locals {
  # Decode the JSON once and pick the keys from the result
  rds_credentials              = jsondecode(data.aws_ssm_parameter.rds-admin-user.value)
  additional_rds_username      = local.rds_credentials["username"]
  additional_rds_user_password = local.rds_credentials["password"]
}

 

And reference the values as:

local.additional_rds_username
local.additional_rds_user_password
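
For example, the credentials can be wired into a hypothetical "aws_db_instance" resource (illustrative only; the resource and its arguments are not part of the original snippet):

resource "aws_db_instance" "additional" {
  identifier        = "additional-rds"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20

  # Credentials come from the SSM parameter decoded above
  username = local.additional_rds_username
  password = local.additional_rds_user_password
}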

 

 

OpenVPN – Exclude specific IPs or networks from routes

To exclude a specific range or IP address from VPN routing, add the "net_gateway" flag to the corresponding route.

For example, if the network "10.0.0.0/8" should be routed through the VPN while the network "10.0.1.0/24" is excluded from that route, the entries in the server configuration file look like this:

push "route 10.0.0.0 255.0.0.0"
push "route 10.0.1.0 255.255.255.0 net_gateway"
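
After the client reconnects, the exclusion can be checked on a Linux client (a sketch; the interface and gateway names will differ per setup):

# an address inside 10.0.0.0/8 should go via the VPN tunnel device
ip route get 10.0.5.5
# an address inside 10.0.1.0/24 should go via the regular default gateway
ip route get 10.0.1.5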

 

 

FIX ERROR – Ubuntu: /etc/resolv.conf is not a symbolic link to /run/resolvconf/resolv.conf

An error similar to the following:

/etc/resolvconf/update.d/libc: Warning: /etc/resolv.conf is not a symbolic link to /run/resolvconf/resolv.conf

 

It may indicate that the "/etc/resolv.conf" symbolic link is missing. To fix this, you can create it manually and then regenerate the file:

sudo ln -s /run/resolvconf/resolv.conf /etc/resolv.conf
sudo resolvconf -u
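
You can then verify that the link points to the right place; the output should look roughly like this:

ls -l /etc/resolv.conf
# lrwxrwxrwx 1 root root 27 ... /etc/resolv.conf -> /run/resolvconf/resolv.conf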

 

Alternatively, rerun the "resolvconf" package configuration with the following command:

sudo dpkg-reconfigure resolvconf

Jenkins – Python VirtualEnv with version selection

 

To select a Python version in the pipeline, you need to have the required versions installed on the system.

The following steps were performed on CentOS 7, with the binaries installed into "/usr/bin/" for convenience, since versions "2.7" and "3.6" from the system repository are already installed at that path.

Install dependencies:

yum install gcc make openssl-devel bzip2-devel libffi-devel wget

 

Download the source archives of the required versions, in this case "3.7", "3.8" and "3.9":

cd /usr/src

wget https://www.python.org/ftp/python/3.7.9/Python-3.7.9.tgz
wget https://www.python.org/ftp/python/3.8.9/Python-3.8.9.tgz
wget https://www.python.org/ftp/python/3.9.5/Python-3.9.5.tgz

 

Unpack them:

tar xzf Python-3.7.9.tgz
tar xzf Python-3.8.9.tgz
tar xzf Python-3.9.5.tgz

 

Build and install each version ("make altinstall" installs versioned binaries such as "python3.7" without overwriting the default "python" binary):

cd Python-3.7.9
./configure --enable-optimizations --prefix=/usr
make altinstall

cd ../Python-3.8.9
./configure --enable-optimizations --prefix=/usr
make altinstall

cd ../Python-3.9.5
./configure --enable-optimizations --prefix=/usr
make altinstall
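
A quick check that the new interpreters landed next to the system ones (exact output depends on the versions built):

python3.7 --version
python3.8 --version
python3.9 --version
ls /usr/bin/python*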

 

Now let’s install the Pyenv Pipeline plugin: go to "Manage Jenkins", open the "Manage Plugins" section, switch to the "Available" tab, search for "Pyenv Pipeline", and install it.

 

To select the version, we will use a "choice" parameter.

Pipeline:

properties([
  parameters([
    choice(
      name: 'PYTHON',
      description: 'Choose Python version',
      choices: ["python2.7", "python3.6", "python3.7", "python3.8", "python3.9"].join("\n")
    ),
    base64File(
      name: 'REQUIREMENTS_FILE',
      description: 'Upload requirements file (Optional)'
    )
  ])
])

pipeline {
  agent any
  options {
    buildDiscarder(logRotator(numToKeepStr: '5'))
    timeout(time: 60, unit:'MINUTES')
    timestamps()
  }
  stages {
    stage("Python"){
      steps{
        withPythonEnv("/usr/bin/${params.PYTHON}") {
          script {
            if ( env.REQUIREMENTS_FILE.isEmpty() ) {
              sh "python --version"
              sh "pip --version"
              sh "echo Requirements file not set. Run Python without requirements file."
            }
            else {
              sh "python --version"
              sh "pip --version"
              sh "echo Requirements file found. Run PIP install using requirements file."
              withFileParameter('REQUIREMENTS_FILE') {
                sh 'cat $REQUIREMENTS_FILE > requirements.txt'
              }
              sh "pip install -r requirements.txt"
            }
          }
        }
      }
    }
  }
}

 

Let’s start the build: select the desired version, for example "3.9", run the build, and check the build log to make sure the expected interpreter was used.

AWS – S3: Allow public access to objects over VPN

Goal:

Allow public read access to all objects in the S3 bucket only over a VPN connection; for the rest of the world the objects must remain private. OpenVPN is used as the VPN service, and since it can be deployed anywhere, we will build the allow rule around a source IP address check.

 

First you need to find out which networks belong to the S3 service endpoints in the region you need, so that not all traffic is wrapped through the VPN.

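AWS publishes the current list of its IP ranges at a stable URL; download it first:

wget https://ip-ranges.amazonaws.com/ip-ranges.json

Then parse it: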
jq '.prefixes[] | select(.region=="eu-central-1") | select(.service=="S3") | .ip_prefix' < ip-ranges.json

 

Here "eu-central-1" is the region where the target S3 bucket is located.

 

You should get an output like:

"52.219.170.0/23"
"52.219.168.0/24"
"3.5.136.0/22"
"52.219.72.0/22"
"52.219.44.0/22"
"52.219.169.0/24"
"52.219.140.0/24"
"54.231.192.0/20"
"3.5.134.0/23"
"3.65.246.0/28"
"3.65.246.16/28"

 

Now we convert each prefix length into a dotted-decimal netmask and add the routes to the OpenVPN server configuration as "push" parameters:

push "route 52.219.170.0 255.255.254.0"
push "route 52.219.168.0 255.255.255.0"
push "route 3.5.136.0 255.255.252.0"
push "route 52.219.72.0 255.255.252.0"
push "route 52.219.44.0 255.255.252.0"
push "route 52.219.169.0 255.255.255.0"
push "route 52.219.140.0 255.255.255.0"
push "route 54.231.192.0 255.255.240.0"
push "route 3.5.134.0 255.255.254.0"
push "route 3.65.246.0 255.255.255.240"
push "route 3.65.246.16 255.255.255.240"
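
If you do not want to convert the masks by hand, a one-liner does the job (a sketch, assuming Python 3 is available on the machine where you prepare the config):

python3 -c 'import ipaddress; print(ipaddress.ip_network("52.219.170.0/23").netmask)'
# prints: 255.255.254.0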

 

Restart the OpenVPN server service; after reconnecting, clients should receive the list of routes, and traffic to those networks will go through the VPN connection.
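
For example, on a systemd-based host (the unit name depends on the distribution and on the config file name, so adjust as needed):

sudo systemctl restart openvpn@server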

 

Now it remains to add the following policy to the S3 bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow only from VPN",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::artem-services",
                "arn:aws:s3:::artem-services/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "1.2.3.4"
                }
            }
        }
    ]
}

 

Here "artem-services" is the name of the S3 bucket and "1.2.3.4" is the public IP address of the OpenVPN server.
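
The policy can be attached in the console or with the AWS CLI (assuming the JSON above is saved as policy.json):

aws s3api put-bucket-policy --bucket artem-services --policy file://policy.json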

Python – AWS EBS: creating snapshots based on a tag and keeping only the latest version

This script looks for EBS volumes in the "eu-west-1" region that are attached to running instances tagged with the key "Application" and the value passed as an argument, and creates a snapshot of each volume. It can likewise find the resulting snapshots by tag and delete all of them except the most recent one.

An example of running to create a snapshot:

python3 main.py create artem-test-app

 

An example of running to delete old snapshots:

python3 main.py cleanup artem-test-app

 

main.py:

from datetime import datetime
import sys

import boto3
import botocore

ec2 = boto3.client('ec2', region_name='eu-west-1')

def create_snapshot(app):
    print("Creating snapshot for " + str(app))
    # Get Instance ID
    instances_id = []
    response = ec2.describe_instances(
        Filters = [
            {
                'Name': 'tag:Application',
                'Values': [
                    app,
                ]
            },
            {
                'Name': 'instance-state-name',
                'Values': ['running']
            }
        ]
    )

    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            instances_id.append(instance['InstanceId'])

    # Get Volumes ID
    volumes_id = []
    for instance_id in instances_id:
        response = ec2.describe_instance_attribute(InstanceId = instance_id, Attribute = 'blockDeviceMapping')
        for BlockDeviceMappings in response['BlockDeviceMappings']:
            volumes_id.append(BlockDeviceMappings['Ebs']['VolumeId'])

    # Create Volume Snapshot
    for volume_id in volumes_id:
        date = datetime.now()
        response = ec2.create_snapshot(
            Description = app + " snapshot created by Jenkins at " + str(date),
            VolumeId = volume_id,
            TagSpecifications=[
                {
                    'ResourceType': 'snapshot',
                    'Tags': [
                        {
                            'Key': 'Name',
                            'Value': app + "-snapshot"
                        },
                    ]
                },
            ]
        )


        try:
            print("Waiting until the snapshot is created...")
            snapshot_id = response['SnapshotId']
            snapshot_complete_waiter = ec2.get_waiter('snapshot_completed')
            snapshot_complete_waiter.wait(SnapshotIds=[snapshot_id], WaiterConfig={'Delay': 30, 'MaxAttempts': 120})
        except botocore.exceptions.WaiterError as e:
            print(e)

def cleanup_snapshot(app):
    print("Clean up step.")
    print("Looking for all snapshots regarding " + str(app) + " application.")
    # Get snapshots
    response = ec2.describe_snapshots(
        Filters = [
            {
                'Name': 'tag:Name',
                'Values': [
                    app + "-snapshot",
                ]
            },
        ],
        OwnerIds = [
            'self',
        ]
    )

    sorted_snapshots = sorted(response['Snapshots'], key=lambda k: k['StartTime'])

    # Clean up snapshots and keep only the latest snapshot
    for snapshot in sorted_snapshots[:-1]:
        print("Deleting snapshot: " + str(snapshot['SnapshotId']))
        response = ec2.delete_snapshot(
            SnapshotId = snapshot['SnapshotId']
        )

def main():
    if sys.argv[1:] and sys.argv[2:]:
        action = sys.argv[1]
        app = sys.argv[2]
        if action == 'create':
            create_snapshot(app)
        elif action == 'cleanup':
            cleanup_snapshot(app)
        else:
            print("Wrong action: " + str(action))
    else:
        print("This script creates and cleans up EBS snapshots")
        print("Usage  : python3 " + sys.argv[0] + " {create|cleanup} {application_tag}")
        print("Example: python3 " + sys.argv[0] + " create ebs-snapshot-check")

if __name__ == '__main__':
    main()

Jenkins – Active Choice: SSH – Return remote command values

For a parameterized build with a dynamic choice list, you will need the Active Choices plugin.

Go to "Manage Jenkins", open the "Manage Plugins" section, switch to the "Available" tab, search for "Active Choices", and install it.

 

Create a "New Item" – "Pipeline", mark it as a parameterized build, and add an "Active Choices Parameter". Set the parameter type to "Groovy Script" and paste the following into it:

import com.cloudbees.plugins.credentials.Credentials;
import com.cloudbees.plugins.credentials.CredentialsNameProvider;
import com.cloudbees.plugins.credentials.common.StandardCredentials;
import com.cloudbees.jenkins.plugins.sshcredentials.SSHUserPrivateKey;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import com.jcraft.jsch.ChannelExec;
import jenkins.model.*


sshCredentialsId = 'instance_ssh_key'
sshUser = 'artem'
sshInstance = '192.168.1.100'

def sshCreds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(SSHUserPrivateKey.class, Jenkins.instance, null, null ).find({it.id == sshCredentialsId});

String ssh_key_data = sshCreds.getPrivateKeys().first() // getPrivateKeys() returns a list of keys; take the first one

JSch jsch = new JSch();
jsch.addIdentity("id_rsa", ssh_key_data.getBytes(), null, null);
Session session = jsch.getSession(sshUser, sshInstance, 22);
Properties prop = new Properties();
prop.put("StrictHostKeyChecking", "no");
session.setConfig(prop);

session.connect();

ChannelExec channelssh = (ChannelExec)session.openChannel("exec");
channelssh.setCommand("cat /home/artem/secret_list");
channelssh.connect();

def result = []
InputStream is=channelssh.getInputStream();
is.eachLine {
    result.add(it)
}
channelssh.disconnect();
session.disconnect();

return result

 

Where:

"sshCredentialsId" – Jenkins credentials ID with type "SSH Username with private key";

"sshUser" – the username for the SSH connection;

"sshInstance" – the IP address or domain name of the remote instance;

"cat /home/artem/secret_list" – the command whose output we want to return.

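For example, if the remote file contains (hypothetical content):

staging
production
sandbox

then each line becomes one selectable value in the parameter's drop-down list.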
 

Jenkins – Active Choice: S3 – Return necessary key value from JSON file

For a parameterized build with a dynamic choice list, you will need the Active Choices plugin.

Go to "Manage Jenkins", open the "Manage Plugins" section, switch to the "Available" tab, search for "Active Choices", and install it.

 

Create a "New Item" – "Pipeline", mark it as a parameterized build, and add an "Active Choices Parameter". Set the parameter type to "Groovy Script" and paste the following into it:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import com.cloudbees.plugins.credentials.Credentials;
import com.cloudbees.plugins.credentials.CredentialsNameProvider;
import com.cloudbees.plugins.credentials.common.StandardCredentials;
import com.cloudbees.jenkins.plugins.awscredentials.AmazonWebServicesCredentials;
import groovy.json.JsonSlurper;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;
import jenkins.model.*
  
def AWS_REGION = 'eu-west-1'
def BUCKET_NAME = 'artem.services'
def INFO_FILE = 'info.json'

credentialsId = 'aws_credentials'

def creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(AmazonWebServicesCredentials.class, Jenkins.instance, null, null ).find({it.id == credentialsId});


AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withCredentials(creds).withRegion(AWS_REGION).build(); 

S3Object s3Object = s3Client.getObject(BUCKET_NAME, INFO_FILE)

try {
    BufferedReader reader = new BufferedReader(new InputStreamReader(s3Object.getObjectContent()))
    s3Data = reader.lines().collect(Collectors.joining("\n"));
} catch (Exception e) {
    // Do not call System.exit() here: the script runs inside the Jenkins JVM
    return ["Error reading " + INFO_FILE + ": " + e.getMessage()]
}

def json = new JsonSlurper().parseText(s3Data)

return json.instance.ip

 

Where:

"credentialsId" – Jenkins credentials ID with an AWS Access Key and AWS Secret Key (required if the Jenkins instance does not use an IAM role or is not deployed in the AWS cloud);

"AWS_REGION" – the AWS region where the S3 bucket is located;

"BUCKET_NAME" – the S3 bucket name;

"INFO_FILE" – the key of the object in the bucket, in our case a JSON file;

"return json.instance.ip" – returns the value of the "ip" key nested under "instance", i.e. "1.2.3.4" for a file containing {"instance": {"ip": "1.2.3.4"}}.