Kevin_Vargo
Collaborator

SND/FW changes due to core and OS upgrade?

Hi –

We currently have an 8-core open server HA cluster running R77.30 and are upgrading those servers to 16-core servers and R80.10.  I'd like to understand what, if any, CoreXL changes I should make (or perhaps any other performance enhancements).  Unfortunately, I don't believe I can use the cpsizeme tool since these are Dell open servers.  My initial thought is to go with four SNDs and 12 firewall workers, mainly because almost all interfaces are 10Gb; a rough sketch of how I expect to make the change is below.
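
This is just my assumption of the usual workflow on each cluster member rather than a confirmed procedure, so corrections are welcome:

cpconfig                 # "Configure Check Point CoreXL" menu -> set 12 firewall instances
reboot                   # instance-count changes only take effect after a reboot
fw ctl affinity -l -r    # verify which cores ended up as SNDs vs. workers
fw ctl multik stat       # confirm the 12 workers and their CPU assignments
sim affinity -l          # check SecureXL's interface-to-core affinity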

There is a moderate amount of VoIP traffic, no VPN traffic, and some IPS protections enabled; no other blades are enabled.

Some snapshots of command output are shown below as a reference, in case anyone has thoughts, suggestions, or experience with a change such as this.  Thanks in advance.

 

gatewayA> fw ctl affinity -l -r
CPU 0:  s0p3 s0p2 eth0 eth1 s7p1 s7p2
CPU 1:  fw_3
CPU 2:  s5p2 s0p4 s0p1 s4p1 s4p2
CPU 3:  fw_2
CPU 4:  fw_5
CPU 5:  fw_1
CPU 6:  fw_4
CPU 7:  fw_0
All:    fwd rtmd mpdaemon cpd cprid

 

gatewayA> fw ctl multik stat
ID | Active  | CPU    | Connections | Peak
----------------------------------------------
 0 | Yes     | 7      |        7588 |   144409
 1 | Yes     | 5      |       13856 |   125878
 2 | Yes     | 3      |        8876 |   136021
 3 | Yes     | 1      |       14764 |   147924
 4 | Yes     | 6      |       15675 |   149068
 5 | Yes     | 4      |       15052 |   150053

 

TOP

top - 23:06:32 up 20 days,  1:25,  1 user,  load average: 0.63, 0.43, 0.38
Tasks: 171 total,   1 running, 170 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.0%us,  0.3%sy,  0.0%ni, 80.3%id,  0.0%wa,  1.0%hi, 18.3%si,  0.0%st
Cpu1  :  0.7%us,  1.3%sy,  0.0%ni, 92.4%id,  0.0%wa,  0.0%hi,  5.6%si,  0.0%st
Cpu2  :  0.3%us,  0.3%sy,  0.0%ni, 71.7%id,  0.0%wa,  1.0%hi, 26.7%si,  0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni, 94.3%id,  0.0%wa,  0.0%hi,  5.3%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni, 94.7%id,  0.0%wa,  0.0%hi,  5.3%si,  0.0%st
Cpu5  :  0.0%us,  1.0%sy,  0.0%ni, 94.3%id,  0.0%wa,  0.0%hi,  4.7%si,  0.0%st
Cpu6  :  0.0%us,  0.3%sy,  0.0%ni, 92.3%id,  0.0%wa,  0.0%hi,  7.3%si,  0.0%st
Cpu7  :  0.3%us,  2.3%sy,  0.0%ni, 91.7%id,  0.0%wa,  0.0%hi,  5.7%si,  0.0%st
Mem:  32778728k total, 12168000k used, 20610728k free,   349588k buffers
Swap:  8385920k total,        0k used,  8385920k free,  6004236k cached

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
8004 admin     15   0     0    0    0 S    8  0.0   1988:24 fw_worker_4
8000 admin     15   0     0    0    0 S    6  0.0   2126:14 fw_worker_0
8003 admin     15   0     0    0    0 S    6  0.0   2135:56 fw_worker_3
8001 admin     15   0     0    0    0 S    6  0.0   2156:24 fw_worker_1
8002 admin     15   0     0    0    0 S    6  0.0   2105:59 fw_worker_2
8005 admin     15   0     0    0    0 S    5  0.0   1962:16 fw_worker_5

netstat -ni (up for 20 days)

Iface      MTU   Met RX-OK        RX-ERR RX-DRP RX-OVR TX-OK        TX-ERR TX-DRP TX-OVR Flg
bond11     1500  0   130349881574 0      9296   0      80152928665  0      0      0      BMmRU
bond12     1500  0   78527374619  0      939    0      133113696739 0      0      0      BMmRU
bond13     1500  0   27729450377  0      25     0      37792884917  0      0      0      BMmRU
bond13.3   1500  0   1637665830   0      0      0      729217784    0      0      0      BMmRU
bond13.11  1500  0   174927162    0      0      0      141145382    0      0      0      BMmRU
bond13.12  1500  0   355043413    0      0      0      528118604    0      0      0      BMmRU
bond13.16  1500  0   604802057    0      0      0      729538625    0      0      0      BMmRU
bond13.17  1500  0   1333         0      0      0      27573        0      0      0      BMmRU
bond13.18  1500  0   2296         0      0      0      31674        0      0      0      BMmRU
bond13.22  1500  0   162695764    0      0      0      149482123    0      0      0      BMmRU
bond13.23  1500  0   333894296    0      0      0      174112873    0      0      0      BMmRU
bond13.24  1500  0   12881847684  0      0      0      25321641913  0      0      0      BMmRU
bond13.26  1500  0   29909098     0      0      0      20838220     0      0      0      BMmRU
bond13.27  1500  0   1520095225   0      0      0      692780450    0      0      0      BMmRU
bond13.30  1500  0   42886857     0      0      0      66776815     0      0      0      BMmRU
bond13.36  1500  0   9970300137   0      0      0      9219791498   0      0      0      BMmRU
bond13.42  1500  0   733944       0      0      0      1146823      0      0      0      BMmRU
bond13.43  1500  0   13283229     0      0      0      13369836     0      0      0      BMmRU
eth0       1500  0   37812616839  0      444    0      76189885995  0      0      0      BMsRU
eth1       1500  0   40714758548  0      495    0      56923813966  0      0      0      BMsRU
lo         16436 0   104048       0      0      0      104048       0      0      0      LRU
s0p1       1500  0   16459725358  0      164    0      1449884045   0      0      0      BMRU
s0p1.4     1500  0   569145883    0      0      0      13351065     0      0      0      BMRU
s0p1.13    1500  0   15832888325  0      0      0      1393461365   0      0      0      BMRU
s0p1.31    1500  0   5064618      0      0      0      3418652      0      0      0      BMRU
s0p1.32    1500  0   6229135      0      0      0      4716002      0      0      0      BMRU
s0p1.33    1500  0   867958       0      0      0      27576        0      0      0      BMRU
s0p1.34    1500  0   1205573      0      0      0      85171        0      0      0      BMRU
s0p1.39    1500  0   1955359      0      0      0      939969       0      0      0      BMRU
s0p1.92    1500  0   38392944     0      0      0      33858440     0      0      0      BMRU
s0p2       1500  0   564176135    0      0      0      63106917     0      0      0      BMRU
s0p2.2     1500  0   30781935     0      0      0      13345212     0      0      0      BMRU
s0p2.100   1500  0   4722643      0      0      0      1079796      0      0      0      BMRU
s0p2.101   1500  0   1035852      0      0      0      1027773      0      0      0      BMRU
s0p2.102   1500  0   4161432      0      0      0      2436836      0      0      0      BMRU
s0p2.103   1500  0   0            0      0      0      27573        0      0      0      BMRU
s0p2.105   1500  0   1271313      0      0      0      820712       0      0      0      BMRU
s0p2.106   1500  0   13370017     0      0      0      12000147     0      0      0      BMRU
s0p2.107   1500  0   4758716      0      0      0      4223971      0      0      0      BMRU
s0p2.108   1500  0   3208956      0      0      0      2833171      0      0      0      BMRU
s0p2.110   1500  0   1180982      0      0      0      829131       0      0      0      BMRU
s0p2.111   1500  0   25420965     0      0      0      24482520     0      0      0      BMRU
s0p3       1500  0   203957021    0      1      0      2022423755   0      0      0      BMRU
s0p4       1500  0   79603650     0      0      0      672326345    0      0      0      BMRU
s4p1       1500  0   71335693881  0      7463   0      38876131270  0      0      0      BMsRU
s4p2       1500  0   59014194533  0      1833   0      41276800116  0      0      0      BMsRU
s5p2       1500  0   234755003    0      0      0      180788381    0      0      0      BMRU
s5p2.6     1500  0   13934961     0      0      0      13715248     0      0      0      BMRU
s5p2.7     1500  0   6035826      0      0      0      4093516      0      0      0      BMRU
s5p2.11    1500  0   153695023    0      0      0      121650121    0      0      0      BMRU
s5p2.12    1500  0   20878604     0      0      0      27577        0      0      0      BMRU
s5p2.13    1500  0   3792193      0      0      0      5771594      0      0      0      BMRU
s5p2.14    1500  0   5364843      0      0      0      9187536      0      0      0      BMRU
s5p2.51    1500  0   31053557     0      0      0      26348111     0      0      0      BMRU
s7p1       1500  0   14330536424  0      0      0      19082335534  0      0      0      BMsRU
s7p2       1500  0   13398915077  0      25     0      18710550994  0      0      0      BMsRU
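
In case it helps interpret the drop counters above, they are cumulative since boot (20 days).  I've been spot-checking whether the RX-DRPs are still incrementing with a simple expert-mode loop along these lines (just standard Linux tools, nothing Check Point specific):

while true; do date; netstat -ni | grep -E '^(bond1|eth|s4p)'; sleep 300; done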

3 Replies
PhoneBoy
Admin

It's usually better to start with the defaults and adjust from there versus going in with a specific configuration, particularly if you're changing hardware.
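
For example, once the new boxes are up you can simply look at what the R80.10 defaults gave you with the same commands already shown in this thread, and only then decide whether a different split is warranted:

fw ctl affinity -l -r    # default SND/worker core layout after install
fw ctl multik stat       # default number of firewall instances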

Timothy_Hall
Legend

Hi Kevin Vargo,

The default 2/14 split for a 16-core firewall will probably be fine, as Dameon Welch Abernathy suggests; however, given the command outputs and other info you have provided, I'd say a 4/12 split is a quite reasonable starting point, as long as no other manual affinities are set in fwaffinity.conf or sim_aff.conf.  The Dynamic Dispatcher will be on by default in R80.10, so no need to worry about that.
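
If you want to double-check the dispatcher state once you are on R80.10, it can be queried from expert mode (command syntax from memory; sk105261 is the authoritative reference):

fw ctl multik dynamic_dispatching get_mode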

One thing that may change the distribution of load on the cores in R80.10 is that processes may only execute on the Firewall Worker cores, and are no longer allowed to grab spare cycles on the SND/IRQ cores as they were in R77.30.  I'm assuming this was done to keep the CPU fast caches as "hot" as possible on the SND/IRQ cores, so keep an eye on the Firewall Worker cores as they may show slightly higher utilization.  There is also a possibility, based on the blades you have enabled, that much more traffic could be accelerated on R80.10; more rulebase lookups will also be templated by SecureXL, which will increase the load on the SND/IRQ cores and is why I'd estimate four of them is a good starting point.
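
A before/after comparison makes that easy to see; capture these on R77.30 now and again after the upgrade (both commands exist in both releases):

fwaccel stat        # SecureXL status and whether accept templates are enabled
fwaccel stats -s    # summary percentages: accelerated vs. PXL vs. F2F packets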

You are seeing a negligible number of RX-DRPs currently (not RX-OVRs; your network statistics columns are off by one to the right vertically).  You shouldn't pile up RX-DRPs in the new config, but if you do, you may need to enable Multi-Queue, assuming that all SND/IRQ cores are not topping out at the same time.  I don't recommend turning on Multi-Queue unless you need it, due to the increased overhead involved in "sticking" connections to the same queue every time.
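
If it does come to that, Multi-Queue is managed with the cpmq utility (flags from memory; sk98348 covers the details and the supported NIC drivers):

cpmq get -a    # show Multi-Queue status for all supported interfaces
cpmq set       # interactively enable/disable Multi-Queue per interface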

--
Second Edition of my "Max Power" Firewall Book
Now Available at http://www.maxpowerfirewalls.com

Gateway Performance Optimization R81.20 Course
now available at maxpowerfirewalls.com
Kevin_Vargo
Collaborator

Thank you, Tim.  I appreciate the post and your recommendations.  I just made an edit to my column headings so the output lines up; I missed that.  Thanks again.
