I have a DB server (let’s call it DB) on another cloud provider and a VPN server running WireGuard on AWS (let’s call it GW), an EC2 instance. I also have a web server as an EC2 instance (let’s call it WEB).
I’m a complete noob to AWS services. My networking setup contains the following:
- A VPC containing two subnets, one public (let’s call it PUB), one private (let’s call it PVT).
- An internet gateway on the PUB subnet
- An Elastic IP attached to one of GW’s network interfaces
The GW instance has two network interfaces:
- one on the PUB subnet (10.25.0.2/24) with the EIP attached
- one on the PVT subnet (10.25.240.2/24)
The WEB instance has one network interface (10.25.240.50/24).
Both have private IPv4 addresses, only GW has a public IPv4, and both have IPv6, but I’m focusing on getting IPv4 working first, so let’s ignore the IPv6 setup.
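In case it helps, the VPC pieces above can be listed from the AWS CLI with something like the following (the VPC ID is a placeholder, not my real one):
# network interfaces in the VPC, with their subnets and private IPs
aws ec2 describe-network-interfaces --filters Name=vpc-id,Values=vpc-0123456789abcdef0
# route tables associated with the PUB and PVT subnets
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0123456789abcdef0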
There is a WireGuard tunnel established between DB and GW with the following setup (a rough sketch of the wg0 configs follows the list):
- GW: wg0, 192.168.40.1/24
- DB: wg0, 192.168.40.2/24
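The configs are roughly like this (keys, the listen port and the GW-side AllowedIPs are placeholders/examples; the DB-side AllowedIPs is what produces the 10.25.240.0/24 route shown further down):
# GW: /etc/wireguard/wg0.conf
[Interface]
Address = 192.168.40.1/24
ListenPort = 51820
PrivateKey = <GW private key>
[Peer]
PublicKey = <DB public key>
AllowedIPs = 192.168.40.2/32
# DB: /etc/wireguard/wg0.conf
[Interface]
Address = 192.168.40.2/24
PrivateKey = <DB private key>
[Peer]
PublicKey = <GW public key>
Endpoint = <GW Elastic IP>:51820
AllowedIPs = 192.168.40.0/24, 10.25.240.0/24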
DB and GW can ping each other through the tunnel, and GW and WEB can ping each other through their private-subnet interfaces. I made an “allow everything” Security Group for both instances on the interfaces they use to talk to each other, because I suspected it could be the problem.
All instances run Linux, and GW has the net.ipv4.ip_forward sysctl option set to 1.
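For reference, this is how I check it and keep it persistent (the file name under /etc/sysctl.d/ is arbitrary):
sudo sysctl net.ipv4.ip_forward        # prints net.ipv4.ip_forward = 1 on GW
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system                   # reload all sysctl settings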
I tried disabling my firewall (firewalld), I tried creating policies for inter-zone forwarded traffic, I tried everything, but packets from DB simply won’t arrive at WEB (they do leave GW, though) and packets from WEB simply won’t arrive at GW.
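The inter-zone forwarding attempt was along these lines (the policy name is arbitrary; I used firewalld’s ANY zone to keep it as broad as possible):
sudo firewall-cmd --permanent --new-policy fwd-allow
sudo firewall-cmd --permanent --policy fwd-allow --set-target ACCEPT
sudo firewall-cmd --permanent --policy fwd-allow --add-ingress-zone ANY
sudo firewall-cmd --permanent --policy fwd-allow --add-egress-zone ANY
sudo firewall-cmd --reload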
I tested with ICMP packets while running tcpdump: packets destined for WEB from DB arrive on the tunnel interface and are sent onto the wire into the private subnet (confirmed by dumping the private subnet’s interface), but tcpdump on the WEB instance doesn’t show anything arriving. Also, packets from WEB destined for DB are captured on WEB’s network interface, but never show up on the GW interface at all.
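The captures were done with plain tcpdump filters like these (eth1 here stands for whatever name GW’s PVT-subnet interface actually has):
# on GW: watch the tunnel and the PVT-subnet interface
sudo tcpdump -ni wg0 icmp
sudo tcpdump -ni eth1 icmp and host 192.168.40.2
# on WEB: nothing shows up here for pings coming from DB
sudo tcpdump -ni eth0 icmp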
DB routing table:
default via 10.1.1.1 dev eth0 proto dhcp src 10.1.1.149 metric 100
10.1.1.0/24 dev eth0 proto kernel scope link src 10.1.1.149 metric 100
10.25.240.0/24 dev wg0 scope link
192.168.40.0/24 dev wg0 proto kernel scope link src 192.168.40.2
(the route to 10.25.240.0/24 was produced by WireGuard’s AllowedIPs)
WEB routing table:
default via 10.25.240.1 dev eth0 proto dhcp src 10.25.240.50 metric 100
10.25.240.0/24 dev eth0 proto kernel scope link src 10.25.240.50 metric 100
192.168.40.0/24 via 10.25.240.2 dev eth0
(the route to 192.168.40.0/24 was added manually via the NetworkManager config)
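Concretely, I added that route with something like this (the connection profile name may differ; check nmcli connection show):
sudo nmcli connection modify eth0 +ipv4.routes "192.168.40.0/24 10.25.240.2"
sudo nmcli connection up eth0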
That said, I have a twofold question:
- In general, how would I approach this kind of situation to diagnose the issue when working with AWS stuff?
- Specifically, what could be the possible cause, and what are possible solutions for this problem?