
# StrongSwan VPN Verification Guide
This guide helps you verify that a site-to-site IPsec VPN tunnel using StrongSwan has been successfully established between virtual machines provisioned via Terraform and configured with cloud-init.
## Hosts Overview
The tunnel uses IKEv2 with a Pre-Shared Key (PSK) and is automatically established at boot.
| Host | IP Address | Role |
|-------------|------------|------------------------|
| appliance01 | 10.1.1.10 | Cloud VPN Appliance |
| machine01 | 10.1.1.11 | Cloud Internal Machine |
| appliance02 | 11.2.2.10 | On-Prem VPN Appliance |
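For reference, the tunnel these hosts establish corresponds roughly to the following `/etc/ipsec.conf` connection block, shown from appliance01's perspective (left/right are mirrored on appliance02). This is a sketch inferred from the `ipsec statusall` output further below, not the exact file generated by cloud-init:
```
conn net-net
    auto=start              # bring the tunnel up at boot
    keyexchange=ikev2       # IKEv2, as reported by statusall
    authby=secret           # pre-shared key (PSK) authentication
    type=tunnel
    dpddelay=30s            # dead peer detection interval seen in statusall
    left=10.1.1.10          # appliance01 (cloud)
    leftsubnet=10.1.1.0/24
    right=11.2.2.10         # appliance02 (on-prem)
    rightsubnet=11.2.2.0/24
```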
---
## 🔧 Architecture
![Architecture Diagram](docs/network-architecture.png)
---
## 1. Check StrongSwan Service Status
SSH into each machine using its public IP:
```bash
ssh -i ~/.ssh/id_rsa debian@<machine-public-ip>
```
Once logged in, verify the StrongSwan service:
```bash
sudo ipsec statusall
```
Expected output should resemble:
```
Status of IKE charon daemon (strongSwan 5.9.8, Linux ...):
  uptime: ...
  worker threads: ...
Connections:
     net-net:  10.1.1.10...11.2.2.10  IKEv2, dpddelay=30s
     net-net:   local:  [10.1.1.10] uses pre-shared key authentication
     net-net:   remote: [11.2.2.10] uses pre-shared key authentication
     net-net:   child:  10.1.1.0/24 === 11.2.2.0/24 TUNNEL
Security Associations (SAs) (0 up, 0 connecting):
  none
```
This output confirms the configuration is loaded, but the tunnel may not yet be active.
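If `ipsec statusall` prints nothing at all, first confirm that the charon daemon is actually running. A minimal check, assuming the classic Debian `strongswan-starter` systemd unit (the unit name may differ on other distributions or with the swanctl-based stack):
```bash
# Verify the daemon is active and the net-net connection is loaded
sudo systemctl status strongswan-starter --no-pager   # unit name is an assumption
sudo ipsec statusall | grep -E 'net-net|Security Associations'
```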
---
## 2. Manually Bring Up the VPN Tunnel (Optional)
If the tunnel didn't start automatically, initiate it manually from either VPN appliance:
```bash
sudo ipsec up net-net
```
Then re-check the connection:
```bash
sudo ipsec statusall
```
You should now see an established connection:
```
Security Associations (1 up, 0 connecting):
     net-net[1]: ESTABLISHED 15s ago, 10.1.1.10...11.2.2.10
     net-net{1}:  INSTALLED, TUNNEL, ESP SPIs: ...
     net-net{1}:   10.1.1.0/24 === 11.2.2.0/24
```
Key indicators:
- ESTABLISHED: Tunnel is active
- Subnet-to-subnet routing: 10.1.1.0/24 === 11.2.2.0/24
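Because the tunnel is meant to establish itself at boot, a small polling loop can replace repeated manual attempts right after provisioning. This is only a sketch, not part of the provided tooling; it retries `ipsec up net-net` until the SA reports ESTABLISHED:
```bash
#!/usr/bin/env bash
# Retry bringing up the net-net connection until it is ESTABLISHED (sketch).
set -euo pipefail

for attempt in $(seq 1 10); do
  if sudo ipsec status net-net | grep -q ESTABLISHED; then
    echo "Tunnel net-net is ESTABLISHED"
    exit 0
  fi
  echo "Attempt ${attempt}: tunnel not up yet, initiating..."
  sudo ipsec up net-net || true
  sleep 5
done

echo "Tunnel net-net did not come up after 10 attempts" >&2
exit 1
```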
---
## 3. Verify VPN-Backed Network Connectivity
Ping between hosts to validate that routing is working through the VPN tunnel:
### 💻 From appliance01 (cloud) to appliance02 (on-prem)
```bash
ping 11.2.2.10
# ✅ Successful ping confirms VPN tunnel works
```
### 💻 From appliance02 (on-prem) to appliance01 (cloud)
```bash
ping 10.1.1.10
# ✅ Confirms bidirectional connectivity
```
### 💻 From machine01 (cloud internal) to appliance02 (on-prem)
```bash
ping 11.2.2.10
# ✅ Tests routing through VPN appliance (appliance01)
```
### 💻 From appliance02 (on-prem) to machine01 (cloud internal)
```bash
ping 10.1.1.11
# ✅ Tests project-to-project routing via the SNA transfer network
```
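The success cases above can also be run in one pass with a small helper script. The script name and arguments below are illustrative; run it on the source host and pass the peer addresses you want to test:
```bash
#!/usr/bin/env bash
# check-vpn.sh (hypothetical name): ping each given address and report PASS/FAIL.
# Example on machine01 or appliance01:  ./check-vpn.sh 11.2.2.10
for ip in "$@"; do
  if ping -c 3 -W 2 "$ip" > /dev/null; then
    echo "PASS: $ip is reachable"
  else
    echo "FAIL: $ip is not reachable"
  fi
done
```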
### ❌ From machine01 (cloud) to appliance02 (without the VPN route)
If you remove the static route that sends 11.2.2.0/24 through appliance01, machine01 can no longer reach the on-prem subnet:
```bash
ping 11.2.2.10
# ❌ Should fail, indicating that the VPN appliance is required for routing
```
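To reproduce this negative test, inspect and temporarily drop the route on machine01. The commands below assume appliance01 (10.1.1.10) is the configured next hop for 11.2.2.0/24; adjust them to your actual routing setup:
```bash
# Show how 11.2.2.0/24 is currently routed (expected: via appliance01)
ip route get 11.2.2.10

# Temporarily remove the static route (assumes next hop 10.1.1.10)
sudo ip route del 11.2.2.0/24 via 10.1.1.10

# Restore it after the test
sudo ip route add 11.2.2.0/24 via 10.1.1.10
```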
All success cases confirm that the tunnel and routing are set up correctly. The expected failure confirms that traffic to the on-prem subnet depends on the VPN appliance.