There should be more than 4 queues assigned to those 10Gbps interfaces by Multi-Queue with an 8/12 split, unless those interfaces are using the igb driver (run ethtool -i to check), which is limited to a maximum of 4 queues due to a driver limitation. If that is the case there is nothing you can do about it, other than switching to a NIC that uses the ixgbe driver, which supports up to 16 queues. If you are in fact already using ixgbe, see sk154392: An available CPU Core is not handling any queue, when using Multi-Q.
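For example, to verify the driver and the number of queues actually in use (eth2 here is just a placeholder for your actual 10Gbps interface name):

ethtool -i eth2      (shows the driver in use, e.g. igb or ixgbe)
cpmq get -v          (shows Multi-Queue status and active queue counts per interface on R80.20)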
It is also possible that your manual process affinity for the fwd daemon is interfering with Multi-Queue's assignment of additional SND/IRQ cores for traffic processing.
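To see how the cores are currently assigned (including any manual fwd affinity):

fw ctl affinity -l -r     (lists each CPU core and the interfaces/daemons/kernel instances bound to it)

Any manual daemon affinity is normally defined in $FWDIR/conf/fwaffinity.conf, so check there for an fwd entry.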
As for why PSLXL is at 44% with so few blades enabled, this is probably due to the presence of microsoft-ds traffic (port 445), which by default is sent to PSLXL. You can confirm this by running fwaccel conns | grep 445; an s/S flag on those connections indicates they are taking the Medium Path. Also look for other connections carrying the s/S flag in the fwaccel conns output for further clues.
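For a quick overall picture of how traffic is splitting across the paths (and whether port 445 really accounts for most of the Medium Path load):

fwaccel stats -s     (summary percentages for the Accelerated, PXL/Medium and F2F/Firewall paths)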
As for what you can do about this, if you upgrade to R80.20 Jumbo HFA Take 103 or later you can force this traffic into the Accelerated Path with fast_accel, as discussed in sk156672: SecureXL Fast Accelerator (fw fast_accel) for R80.20 and above.
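The commands look roughly like this (see the sk for the exact syntax and how to make the rules persistent; the addresses below are just placeholders for your actual file server subnet):

fw ctl fast_accel enable
fw ctl fast_accel add 0.0.0.0/0 10.1.1.0/24 445 6     (source, destination, destination port, protocol - 6 is TCP)
fw ctl fast_accel show_table                          (verify the rule was added)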
Be warned however that doing this will "fastpath" the traffic with a minimum of inspection, and bad things can happen security-wise if a threat shows up inside the fastpath'ed connections. Also keep in mind that the load on your SND/IRQ cores will increase as this traffic is pulled off the Firewall Workers and handled entirely by SecureXL, so you may want to figure out why Multi-Queue is not using all available SND/IRQ cores first.
Updated 2023 IPS/AV/ABOT R81.20 Course now available at maxpowerfirewalls.com