I had a single 1600 running fine, with a rule that allowed an external IP to ping the gateway using "This Gateway" as the destination. That worked fine.
Now I've added a second 1600 and clustered them. The original setup had the external address ending in .30, so I've changed that member to .31, set the new one to .32, and the cluster VIP now has the .30 address.
The ping from the external IP address now fails.
I can ping .31 and .32, but not .30, so I'm wondering: how is the "This Gateway" object handled in a clustered environment? Should it match the active member, the VIP, or both? (Changing the destination to Any resolves the ping issue.)
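In case it helps anyone reproduce the test, this is roughly what I'm checking from the external host. A minimal Python sketch; the 192.0.2.x addresses are placeholders (substitute your real member and VIP addresses), and the ping flags assume Linux/macOS:

```python
import subprocess

# Placeholder addresses - substitute the real external-facing IPs.
# .31/.32 are the cluster members, .30 is the cluster VIP.
TARGETS = {
    "member 1 (.31)": "192.0.2.31",
    "member 2 (.32)": "192.0.2.32",
    "cluster VIP (.30)": "192.0.2.30",
}

for label, ip in TARGETS.items():
    # Send a single ICMP echo request with a short timeout
    # ("ping -c 1 -W 2" on Linux; use "-n 1 -w 2000" on Windows).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip],
        capture_output=True,
    )
    status = "reachable" if result.returncode == 0 else "NOT reachable"
    print(f"{label}: {ip} is {status}")
```

With "This Gateway" as the destination, the members respond but the VIP does not; with Any, all three respond.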
This is also producing strange log entries: the pings to the .30 address show in the log as coming from the default gateway of the external IP range (ending in .29 in this case). That explains why the rule isn't matched, but why the traffic shows as coming from the default gateway rather than the actual external IP is unexplained!
Has anyone else seen anything like this?
UPDATE: Having run more tests, it appears that ANY incoming connection targeting the VIP fails in the same way and generates these strange log entries! I just tested incoming LDAP, which always worked before, and it now does the same! Is this expected behaviour for a cluster on these Spark devices?
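If anyone wants to reproduce the LDAP part without a full LDAP client, a plain TCP connect to port 389 on the VIP exercises the same path. Another minimal sketch with a placeholder address (this only tests the TCP handshake, not an actual bind):

```python
import socket

# Placeholder address for the cluster VIP - substitute your own.
VIP = "192.0.2.30"
LDAP_PORT = 389

try:
    # Plain TCP connect to the LDAP port with a 3-second timeout.
    with socket.create_connection((VIP, LDAP_PORT), timeout=3):
        print(f"TCP connect to {VIP}:{LDAP_PORT} succeeded")
except OSError as exc:
    print(f"TCP connect to {VIP}:{LDAP_PORT} failed: {exc}")
```

Run against the VIP this times out, while the same connect against either member address completes.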