Muazzam
Contributor

MQ basics - R80.20

It's been a long time since I last configured Multi-Queue (MQ) on a gateway, and I have a couple of questions.

 

Hardware: 13800 with two 4x10Gb modules, 128GB mem, 20 cores.

The current CoreXL split has 14 firewall instances; I'm planning to change it to 12.

OS: R80.20 T161

 

1. Can I have four 10G interfaces handled by 6 cores set for MQ? The documentation says a maximum of 16 queues for 10G interfaces. How many queues will this result in?

2. Can I mix 1G and 10G interfaces in MQ?

3. I am expecting higher traffic (5-8 Gbps); should I increase the 10G interface ring buffer size, which is currently set to 512?

 

Thank You

Timothy_Hall
Legend

First off, in R80.40 and later (and some releases of R80.30) with the new Gaia 3.10 kernel, MQ is enabled by default on all interfaces (except the management interface in the early Jumbo HFAs on R80.40), and it is generally not a good idea to perform any manual configuration of MQ.  Attempting to do so can cause some nasty SND/IRQ/Dispatcher core load imbalances if you aren't careful.
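
(For anyone on R80.40/Gaia 3.10 reading this later: you can view, but normally should not change, the automatic queue assignment there. If I recall correctly the utility on the 3.10 kernel is mq_mng rather than cpmq, e.g. something like "mq_mng --show" from expert mode; check the relevant SK for the exact syntax on your build.)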

For your R80.20 release with the 2.6.18 kernel, here are the answers:

When you change your split from 6/14 to 8/12 as you are planning, first you must change the number of instances in cpconfig and reboot.  If you have MQ enabled at that time, you must then run the cpmq reconfigure command and reboot again.  You must do it in this order, and you can't skip any reboots.
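
A rough sketch of that sequence from expert mode (cpconfig is an interactive menu, so the comments just describe the choices):

    cpconfig          # choose "Check Point CoreXL", set the number of firewall instances to 12
    reboot            # required for the new 8/12 split to take effect
    cpmq reconfigure  # redistribute the queues across the new SND/IRQ cores
    reboot            # required again after reconfiguring MQ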

1) You can have MQ enabled on up to 5 interfaces in Gaia 2.6.18, so configuring 4 is fine.  The 16-queue maximum you are referring to specifies how many queues can be formed for each individual physical interface, and therefore how many SND/IRQ cores are allowed to process frames from that interface.  So unless you are planning to allocate more than 16 SND/IRQ cores in your CoreXL split, this limit does not apply to you.  The queue limit is NIC hardware and driver dependent and can be as low as 2 on some NIC hardware such as the Intel I211.  (Commands for checking the active queue count are sketched after answer 3.)

2) Yes, MQ will work fine with a mixture of interface speeds.

3) No.  If you are experiencing RX-DRP rates of >0.1% (viewable with netstat -ni) on an MQ-enabled interface, you should always reduce the number of instances to allocate more SND/IRQ cores if possible, unless you have already reached the queue limit for the interface in question or reducing the number of instances will overload the remaining ones.  Only in that scenario should you consider increasing RX ring buffer sizes, as doing so can incur an insidious performance-draining effect called Bufferbloat under full load. 
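
To sanity-check points 1 and 3 above, roughly the following from expert mode (exact flags can vary slightly by release; eth1-01 is just a placeholder interface name and 1024 an illustrative value):

    cpmq get -a                              # MQ status and active queue count per interface
    fw ctl affinity -l -r                    # which cores are SND/IRQ vs. firewall instances
    netstat -ni                              # compare RX-DRP against RX-OK per interface (>0.1% is the threshold above)
    ethtool -g eth1-01                       # current vs. maximum ring sizes for an interface
    set interface eth1-01 rx-ringsize 1024   # clish; only as a last resort, per point 3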

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Muazzam
Contributor

Thank You

