First off, in R80.40 and later (and some releases of R80.30) that use the new Gaia 3.10 kernel, Multi-Queue (MQ) is enabled by default on all interfaces (except the management interface in the early Jumbo HFAs of R80.40), and it is generally not a good idea to perform any manual MQ configuration. Doing so can cause some nasty SND/IRQ/Dispatcher core load imbalances if you aren't careful.
For your R80.20 release, which uses the Gaia 2.6.18 kernel, here are the answers:
When you change your split from 6/14 to 8/12 as you are planning, first change the number of firewall instances in cpconfig and reboot. If MQ is enabled at that time, you must then run the cpmq reconfigure command and reboot again. The steps must be performed in this order, and neither reboot can be skipped.
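As a sketch, the order of operations on an R80.20 / Gaia 2.6.18 gateway looks like the following (the 8/12 split is your planned example; cpconfig is menu-driven, so the instance count is set interactively rather than by a flag):

```shell
# Step 1: change the CoreXL split (e.g. 14 -> 12 firewall instances)
cpconfig            # choose "Check Point CoreXL" and set the new instance count
reboot              # first reboot activates the new CoreXL split

# Step 2: only needed if Multi-Queue is enabled on any interface
cpmq reconfigure    # rebalances interface queues across the new SND/IRQ cores
reboot              # second reboot applies the new queue layout
```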
1) You can have MQ enabled on up to 5 interfaces in Gaia 2.6.18, so configuring 4 is fine. The 16-queue maximum you are referring to specifies how many queues can be formed for each individual physical interface, and therefore how many SND/IRQ cores are allowed to process frames from that interface. So unless you are planning to allocate more than 16 SND/IRQ cores in your CoreXL split, this limit does not apply to you. The queue limit is NIC hardware and driver dependent and can be as low as 2 on some NIC hardware such as the Intel I211.
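To see where you stand against those limits, you can inspect the current MQ state and the driver's queue ceiling directly (eth1 below is just a placeholder interface name):

```shell
# Show Multi-Queue status and active queue counts for all interfaces
cpmq get -a

# Show the combined queue maximum the NIC driver supports for one
# interface; this is the hardware/driver-dependent ceiling mentioned above
ethtool -l eth1
```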
2) Yes, MQ will work fine with a mixture of interface speeds.
3) No. If you are experiencing RX-DRP rates above 0.1% (viewable with netstat -ni) on an MQ-enabled interface, the first response should always be to reduce the number of firewall instances to free up more SND/IRQ cores, unless you have already reached the queue limit for the interface in question, or reducing the instance count would overload the remaining instances. Only in that scenario should you consider increasing RX ring buffer sizes, because larger buffers can incur an insidious performance-draining effect called Bufferbloat under full load.
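The 0.1% check above is just RX-DRP divided by RX-OK from netstat -ni. A small awk filter makes the comparison explicit; the sample file below stands in for live output (the interface names and counter values are made up for illustration), and on a real gateway you would pipe `netstat -ni` into the same filter instead:

```shell
# Hypothetical sample of `netstat -ni` output
cat > /tmp/netstat_sample.txt <<'EOF'
Kernel Interface table
Iface   MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth1   1500   0 98000000      0 250000      0 91000000      0      0      0 BMRU
eth2   1500   0 50000000      0  10000      0 48000000      0      0      0 BMRU
EOF

# Flag interfaces whose RX-DRP exceeds 0.1% of RX-OK
# ($1=Iface, $4=RX-OK, $6=RX-DRP; NR > 2 skips the two header lines)
awk 'NR > 2 && $4 > 0 {
    pct = $6 / $4 * 100
    printf "%s RX-DRP %.3f%%%s\n", $1, pct, (pct > 0.1 ? "  <-- investigate" : "")
}' /tmp/netstat_sample.txt
```

Here eth1 drops about 0.255% of received frames and would warrant action, while eth2 at 0.020% is fine.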
Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com