IPTables – Initial configuration script

This script flushes all existing firewall rules, sets default policies for the built-in chains, and opens inbound access for SSH, HTTP, and HTTPS.

#!/bin/bash

# Flush all rules and delete user-defined chains
iptables -F
iptables -X

# Allow traffic belonging to already established connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow everything on the loopback interface
iptables -A INPUT -i lo -j ACCEPT

# Drop malformed packets: NULL scans, new connections without SYN,
# packets in an invalid state, and XMAS scans (all flags set)
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

# Open SSH, HTTP and HTTPS
iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT

# Default policies: allow outbound, drop everything else inbound
iptables -P OUTPUT ACCEPT
iptables -P INPUT DROP
iptables -P FORWARD DROP
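These rules apply to the running kernel only and are lost on reboot. On CentOS 7 one common way to persist them (an assumption here; firewalld is the distribution default) is the iptables-services package, whose `service iptables save` writes the live ruleset to /etc/sysconfig/iptables in iptables-save format, roughly:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
COMMIT
```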


Ubuntu – rc.local

Ubuntu, unlike CentOS, does not ship with the familiar "rc.local" file; the following workaround restores it.

Create a service:

vim /etc/systemd/system/rc-local.service

with the following content:

[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target

Create the "rc.local" file and set its execution bit:

touch /etc/rc.local
chmod +x /etc/rc.local
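A minimal "/etc/rc.local" body might look like the sketch below (the log path is illustrative); after creating the file, enable the unit with `systemctl enable rc-local`.

```shell
#!/bin/bash
# Example /etc/rc.local body: commands placed here run once at the end of boot.
# (In the real file, finish with "exit 0" so systemd sees a successful start.)
echo "rc.local executed at $(date)" >> /tmp/rc-local.log
```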


Saltstack – Installation

Install Master

CentOS 7

yum install https://repo.saltstack.com/yum/redhat/salt-repo-latest.el7.noarch.rpm

yum install salt-master salt-minion salt-ssh salt-syndic salt-cloud salt-api

Add to autorun and run:

systemctl enable salt-master
systemctl start salt-master

The installation may require dependencies from the EPEL repository:

yum install epel-release
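After installation, the minion needs to know where its master is. A minimal "/etc/salt/minion" sketch (the hostname and id are assumptions):

```
master: salt.example.com
id: web01
```

Restart the minion (`systemctl restart salt-minion`) and accept its key on the master with `salt-key -A`.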


SSL – Certificate Validity check

The script calculates how many days remain until the certificate expires; the value can be written to a log or sent to "Zabbix" via "zabbix-sender".

Script content:

#!/bin/bash

# Domain to check: defaults to the host's FQDN
DOMAIN="$(hostname -f)"

# Pull the certificate's expiry date ("notAfter") from a live TLS handshake
DEADLINE="$(echo | openssl s_client -servername "$DOMAIN" -connect "$DOMAIN":443 2>/dev/null | openssl x509 -noout -dates | grep "notAfter" | cut -d "=" -f2)"

SSL_LAST_DAY="$(date +%s -d "$DEADLINE")"
TODAY="$(date +%s)"
SSL_LEFT_DAYS=$(( (SSL_LAST_DAY - TODAY) / 86400 ))

# WRITE TO LOG OR SEND TO ZABBIX
echo "$SSL_LEFT_DAYS" > /var/log/ssl_payday

Make sure the required domain is set as the host's FQDN, or obtain it some other way, for example if the host serves more than one domain.
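The day calculation itself is plain epoch arithmetic on the "notAfter" date string that openssl prints; a self-contained sketch with fixed, illustrative dates:

```shell
# Fixed dates in openssl's notAfter format, exactly 10 days apart
DEADLINE="Dec 31 12:00:00 2030 GMT"
SSL_LAST_DAY="$(date +%s -d "$DEADLINE")"
TODAY="$(date +%s -d "Dec 21 12:00:00 2030 GMT")"
echo $(( (SSL_LAST_DAY - TODAY) / 86400 ))   # prints 10
```

To ship the value to Zabbix, the typical call is `zabbix_sender -z <zabbix-server> -s "$(hostname -f)" -k ssl.cert.days_left -o "$SSL_LEFT_DAYS"` (the item key here is an assumption; it must match the trapper item configured on the Zabbix side).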

Linux – Find an application as part of packages

For example, you need to find out which package provides the "mkpasswd" utility.

CentOS:

yum provides mkpasswd

Result:

[root@localhost ~]# yum provides mkpasswd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.reconn.ru
 * epel: mirror.logol.ru
 * extras: dedic.sh
 * rpmforge: mirror.awanti.com
 * updates: mirror.reconn.ru
expect-5.45-14.el7_1.x86_64 : A program-script interaction and testing utility
Repo        : base
Matched from:
Filename    : /usr/bin/mkpasswd


Htaccess – Rewrite

Contents of the ".htaccess" file.

Redirecting everything to HTTPS

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

 

Redirecting to www

RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule .* https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

 

Redirecting from www

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.yourwebsitehere\.com$ [NC]
RewriteRule (.*) https://yourwebsitehere.com/$1 [R=301,L]
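The HTTPS and www conditions can also be combined so that a plain-HTTP, non-www request is redirected in one hop instead of two (the domain is a placeholder):

```
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule .* https://www.yourwebsitehere.com%{REQUEST_URI} [L,R=301]
```

Once the request is already HTTPS and already on the www host, neither condition matches, so there is no redirect loop.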

BASH – backup sites and MySQL databases on AWS S3

The script finds all folders in a given directory, archives them one by one, copies each archive to an S3 bucket, and deletes the local copy. It also gets the list of all MySQL databases (excluding the system ones), dumps and archives each in turn, uploads it to the same S3 bucket, and deletes the local copy.

#!/bin/bash

DATE=$(date +%d-%m-%Y)
WEB_PATH="/var/www/html"

SITE_LIST="$(ls "$WEB_PATH")"
# All databases except the MySQL system ones
DB_LIST="$(mysql -u root -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")"
S3_BUCKET="s3://artem-services"

################################ BACKUP SITES ################################

for SITE in $SITE_LIST
do
	# Remove a stale archive left over from a previous run, if any
	if [ -f "/tmp/$SITE.tar.gz" ]
	then
		rm "/tmp/$SITE.tar.gz"
	fi

	tar -zcvf "/tmp/$SITE.tar.gz" --directory="$WEB_PATH/$SITE/" ./
	aws s3 cp "/tmp/$SITE.tar.gz" "$S3_BUCKET/$DATE/SITE/$SITE.tar.gz"
	rm "/tmp/$SITE.tar.gz"
done

############################### BACKUP DATABASES ##############################

for DB in $DB_LIST
do
	# Remove a stale dump left over from a previous run, if any
	if [ -f "/tmp/$DB.gz" ]
	then
		rm "/tmp/$DB.gz"
	fi

	mysqldump -u root "$DB" | gzip -c > "/tmp/$DB.gz"
	aws s3 cp "/tmp/$DB.gz" "$S3_BUCKET/$DATE/DATABASE/$DB.gz"
	rm "/tmp/$DB.gz"
done
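To run the backup automatically, a crontab entry along these lines works (the script path and schedule are assumptions):

```
# Run the backup every day at 02:30, appending output to a log
30 2 * * * /usr/local/bin/backup_s3.sh >> /var/log/backup_s3.log 2>&1
```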