RS_Daniel
Advisor

NFS storage support

Hello CheckMates,

Recently we performed an Advanced Upgrade on a management server, moving from R80.30 to R81 JHA Take 44 (a virtual machine on VMware). Since then we have been experiencing very high latency and slowness on the server. It has really affected normal operations: creating a single rule and publishing the change can take hours, every action disconnects us from the Management, and we have to try reconnecting many times just to push policy, which also sometimes fails with a Timeout error before it completes.

I don't think resources are the problem: we have 12 CPU cores, 32 GB RAM, and 3 TB of disk space, and the management is receiving logs from only two gateways at the moment.

On top of that, we saw very high I/O wait values on some CPU cores (80-100%). With iotop we checked which processes cause this, but it is basically all of them (java, postgres, kworker, fwm, gtar...) and it varies depending on what the management server is doing. Later, the customer showed me the new VM configuration and we saw that the disk is mounted on NFS storage.
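For reference, the iowait figure can be confirmed without any extra tools by diffing the `cpu` line of `/proc/stat` over a short window. This is just a minimal sketch (tools like `sar -u` or `mpstat -P ALL` from sysstat give the same view broken down per core, which matches what we saw in `top`):

```bash
#!/bin/bash
# Sample the aggregate iowait percentage over a 1-second window by
# diffing the "cpu" line of /proc/stat. Field order on that line:
# user nice system idle iowait irq softirq steal ... (in jiffies).
read -r _ u1 n1 s1 i1 w1 q1 sq1 st1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 q2 sq2 st2 _ < /proc/stat
total=$(( (u2+n2+s2+i2+w2+q2+sq2+st2) - (u1+n1+s1+i1+w1+q1+sq1+st1) ))
iowait_pct=$(( total > 0 ? 100 * (w2 - w1) / total : 0 ))
echo "iowait over last second: ${iowait_pct}%"
```

Sustained values in the 80-100% range on several cores, as described above, mean the CPUs are mostly sitting idle waiting for the storage backend to answer.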

I have seen in some posts and SKs that NFS has been used to extend log or backup partitions, but in this case the entire management is mounted on NFS. I think the problem is here, but I did not find any documentation that supports my theory... XD.

Well, the question: is NFS supported/recommended for mounting the management server disk?

In case we request to move the VM to local storage, do you think that copying the current VM is OK, or would you recommend exporting the database and reinstalling a new SMS?

Thanks in advance

Regards

3 Replies
Wolfgang
Mentor

@RS_Daniel I think you mean your VMware datastore is mounted via NFS, not the virtual hard drive of your virtual machine. This is a common configuration, and the NFS mount is not visible to the Gaia OS. Every VMware-supported datastore type will work for the virtual machine of your SMS (Fibre Channel, NFS, local storage, iSCSI, etc.).

You have to find the root cause of the high I/O. Did you wait long enough after the migration? You mentioned a gtar process is running; maybe something after the migration did not end successfully, or something is running for a very long time. Did you copy your log files, and is the indexer running again to index all the log files?

RS_Daniel
Advisor

Hello @Wolfgang ,

Thanks for your reply. About your questions:

Did you wait long enough after the migration? Yes, the migration was more than a month ago.

I mentioned gtar just as an example to show that the processes with high I/O wait times are very varied and change constantly over time. I did not copy log files after the migration; the customer did not have a problem with losing the old logs, so all log files are new and were created by this SMS.

I have followed this article: https://madflojo.medium.com/troubleshooting-high-i-o-wait-in-linux-358080d57b69

However, as the processes with high I/O wait times are constantly changing, I am not managing to find the root cause. If you have any advice I would really appreciate it. Thank you!
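Since the top offender keeps changing in iotop, it may help to rank processes by total bytes written over a window rather than by instantaneous I/O, so a broad storage bottleneck shows up as everything being slow rather than one culprit. A rough sketch using the kernel's per-process accounting in `/proc/<pid>/io` (assumptions: bash for process substitution, and root, since other users' io files are not readable otherwise):

```bash
#!/bin/bash
# Rank processes by bytes written to storage over a 5-second window,
# by diffing the write_bytes counter in /proc/<pid>/io.
snapshot() {
  for io in /proc/[0-9]*/io; do
    pid=${io%/io}; pid=${pid#/proc/}
    # Skip processes that vanished or whose io file we cannot read.
    wb=$(awk '/^write_bytes/ {print $2}' "$io" 2>/dev/null) || continue
    [ -n "$wb" ] && echo "$pid $wb"
  done
}
before=$(snapshot)
sleep 5
after=$(snapshot)
# Join the two snapshots on PID and print the top 10 writers by delta.
join <(echo "$before" | sort) <(echo "$after" | sort) \
  | awk '{d = $3 - $2; if (d > 0) printf "%-8s %d bytes written\n", $1, d}' \
  | sort -k2 -rn | head -10
```

If sysstat is available, `pidstat -d 5` gives a similar per-process read/write rate with less effort. Either way, if many unrelated processes (java, postgres, fwm...) all show modest write rates yet iowait stays pegged, that points at the storage backend rather than any single process, which would fit the NFS-datastore theory.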

Regards
