I currently have a star VPN configuration between on-prem and AWS (two firewalls on-prem, two firewalls in AWS).
Prior to a policy push from R80.10 management, I was able to collect logs from the AWS appliances over the established VPN tunnel. After pushing policy from the R80.10 management server, the AWS gateways can no longer send logs to the on-prem management server. Connectivity to the gateways in AWS is otherwise fine (I can still push policy and reach devices behind the gateways).
A netstat on the AWS gateway shows the fwd process attempting to connect to the correct log server on TCP 257 (the Check Point log service), but the connection is stuck in SYN_SENT, i.e. the SYN goes out and no SYN-ACK comes back:
tcp 0 1 10.0.0.1:55388 192.168.0.1:257 SYN_SENT 4487/fwd
I can successfully reach 192.168.0.1 from the gateway (both a ping and a telnet test to port 443 succeed):
[Expert@FW1:0]# ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=58 time=34.3 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=58 time=28.4 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=58 time=28.7 ms
[Expert@FW1:0]# telnet 192.168.0.1 443
Connected to 192.168.0.1.
Escape character is '^]'.
However, when I telnet directly to port 257, the connection times out:
[Expert@FW1:0]# telnet 192.168.0.1 257
telnet: connect to address 192.168.0.1: Connection timed out
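For repeating this comparison without telnet (which isn't always installed on a hardened gateway), a small portable TCP check can be scripted. This is just a generic sketch using Python's standard library, not a Check Point tool; the IP and ports mirror the tests above:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True only if a full TCP handshake to host:port completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # timed out, refused, unreachable, etc.
        return False

# Example (matching the symptoms above -- values are from this environment):
#   tcp_reachable("192.168.0.1", 443)  -> True   (mgmt reachable on 443)
#   tcp_reachable("192.168.0.1", 257)  -> False  (log port never completes)
```

A `False` here for 257 alongside a `True` for 443 confirms the problem is specific to the log service port, not general reachability.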
My assumption is that the implied rules for logging are taking precedence, so the log traffic never reaches the explicit rule in the policy that would allow and encrypt it over the VPN.
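If that's the case, a short kernel debug on the AWS gateway should show where the traffic is being handled or dropped. A sketch of the checks I'd run, assuming a Gaia R80.x gateway (run the debug briefly and Ctrl-C out; this is a diagnostic fragment, not something to leave running):

```shell
# Watch for drops involving the log server / TCP 257 while fwd retries:
fw ctl zdebug + drop | grep 257

# Confirm which policy and when it was last installed on this gateway:
fw stat
```

If the drops show the traffic leaving in the clear (matched by the implied "Accept control connections" rule before the VPN-encrypting rule), that would support the theory; the implied rules are controlled under Global Properties in SmartConsole.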