PCIe is mostly a point-to-point connection standard. Processors offer a certain number of lanes at a certain speed, and the boot ROM configures how those lanes are split into slots (bifurcation) when it initializes the processor. For example, if the processor offers 24 lanes total, the boot ROM is what tells the processor they should be arranged as three x8 slots.
When moving to a new version of PCIe signaling, it's common for processors to offer only some of their lanes at the new version. The Southbridge or Platform Controller Hub (PCH) often takes some number of high-speed lanes and provides some number of lower-speed lanes (plus other low-speed interfaces like USB and SATA). These slower lanes are also arranged into slots by the boot ROM.
All of a given slot's lanes are plumbed to a single root port. That is, you can't have a single slot which connects to both the processor and the PCH, or which connects half of its lanes to each of two processors.
PCIe switches exist and allow you to connect more slot lanes to fewer root lanes, but I don't think they would be used in any of Check Point's branded boxes.
Intel integrated the RAM controller into the processor quite a while ago. The link between processor sockets is much slower than a processor's link to its own RAM; thus, Non-Uniform Memory Access, or NUMA. Each NUMA node has direct access to some of the RAM in the system and to the PCIe lanes it provides, but it has to go through the other processor to reach the rest of the RAM and that processor's PCIe lanes.
For optimal performance, you need to keep traffic off of the inter-socket link. A packet should come in an interface, get handled by an SND on the processor which owns those PCIe lanes, use memory attached to the same processor, and go out an interface on the same processor's PCIe lanes.
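A quick way to see whether each interface's root port lives on the socket you expect: on Linux, sysfs exposes the NUMA node for every PCI device. This is a generic sketch, not anything Check Point-specific; interface names will differ per box, and virtual interfaces (like lo) have no device directory, which the 2>/dev/null hides.

```shell
# Print the NUMA node that owns each network interface's PCIe device.
# /sys/class/net/<iface>/device is a symlink to the PCI device; its
# numa_node file holds the owning node (-1 means "no affinity reported").
for dev in /sys/class/net/*/device; do
    iface=$(basename "$(dirname "$dev")")
    node=$(cat "$dev/numa_node" 2>/dev/null)
    echo "$iface -> NUMA node ${node:-?}"
done
```

Pairing this with the SND core assignments tells you whether traffic is crossing the inter-socket link.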
Looking at a 19200 I have, I see a total of ten PCIe root ports:
[Expert@Some19200]# lspci -vv | grep -A 30 "PCI bridge: Intel Corporation Device" | egrep -o "^([0-9a-f].+$|.+?NUMA node: [0-9]|.{2}(LnkCap:|LnkSta:).+Width x[0-9]{1,2})"
16:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
NUMA node: 0
LnkCap: Port #1, Speed 16GT/s, Width x16
LnkSta: Speed 16GT/s, Width x16
30:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
NUMA node: 0
LnkCap: Port #5, Speed 16GT/s, Width x16
LnkSta: Speed 2.5GT/s, Width x0
4a:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
NUMA node: 0
LnkCap: Port #13, Speed 16GT/s, Width x16
LnkSta: Speed 2.5GT/s, Width x0
64:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
NUMA node: 0
LnkCap: Port #17, Speed 16GT/s, Width x4
LnkSta: Speed 16GT/s, Width x4
64:03.0 PCI bridge: Intel Corporation Device 347b (rev 04) (prog-if 00 [Normal decode])
NUMA node: 0
LnkCap: Port #18, Speed 16GT/s, Width x4
LnkSta: Speed 16GT/s, Width x4
64:04.0 PCI bridge: Intel Corporation Device 347c (rev 04) (prog-if 00 [Normal decode])
NUMA node: 0
LnkCap: Port #19, Speed 16GT/s, Width x8
LnkSta: Speed 8GT/s, Width x8
97:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
NUMA node: 1
LnkCap: Port #1, Speed 16GT/s, Width x16
LnkSta: Speed 2.5GT/s, Width x0
b0:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
NUMA node: 1
LnkCap: Port #5, Speed 16GT/s, Width x16
LnkSta: Speed 16GT/s, Width x16
c9:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
NUMA node: 1
LnkCap: Port #13, Speed 16GT/s, Width x16
LnkSta: Speed 16GT/s, Width x16
e2:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
NUMA node: 1
LnkCap: Port #17, Speed 16GT/s, Width x16
LnkSta: Speed 2.5GT/s, Width x0
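To see what actually sits behind any one of those root ports, the bridge's downstream devices show up as child directories under its sysfs node (the 64:02.0 address comes from the output above; 0000: is the default PCI domain on a single-domain system):

```shell
# Downstream PCI devices appear as 0000:* subdirectories of the bridge's
# sysfs entry. Feeding each child address to lspci -s names whatever is
# plugged into that root port.
bridge=0000:64:02.0
for child in /sys/bus/pci/devices/$bridge/0000:*; do
    [ -d "$child" ] || continue
    lspci -s "$(basename "$child")"
done
```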
If you check lspci -tv, you can see 64:02.0 and 64:03.0 are connected to the SSDs, which explains why they are x4 slots. 64:04.0 is connected to the SFP28 slots for the interfaces named Sync and Sync2. The rest are all PCIe 4.0 x16 slots, so my concern about some lanes connecting to the PCH isn't valid for this box. It looks to me like the 19200 and the 29200 probably share a configuration, and three of the "slots" on the 19200 just aren't connected to anything. If I'm right, all of the slots on the 29200 are probably PCIe 4.0 x16, for 256 Gbps of raw throughput per direction per slot.
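For reference, the per-direction arithmetic: PCIe 4.0 runs at 16 GT/s per lane, and its 128b/130b line encoding means nearly all of that raw rate is usable payload. A quick check with awk:

```shell
# x16 slot at PCIe 4.0: 16 GT/s per lane * 16 lanes = 256 Gbps raw;
# 128b/130b encoding leaves about 252 Gbps usable per direction.
awk 'BEGIN {
    gt = 16; lanes = 16
    raw = gt * lanes
    printf "%.0f Gbps raw, %.1f Gbps after encoding\n", raw, raw * 128 / 130
}'
# -> 256 Gbps raw, 252.1 Gbps after encoding
```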