Kenneth_Greger1
Contributor

Log Indexer crashing - SmartLog not working

Hi

We have been struggling, since before Christmas, with our R80.10 SmartCenter server (R80.10 - Build 439).

Every now and then (after a few hours or days) SmartLog stops working, meaning it is not possible to view the log files in the SmartDashboard GUI client (SmartView).

We can see that the SmartCenter is receiving the logs, but the INDEXER service is crashing.

A workaround has been to do evstop.

Then we look into $INDEXERDIR/log/log_indexer.elg and find the offending log file that the INDEXER process is not able to parse. Typically the file name shows up right before an entry that reads:

[log_indexer 30145 3804232592] [2 Jan 16:05:41] Start reading 127.0.0.1:2019-01-02_151203_1.log [1546423998] at position 5738761

 

[2 Jan 16:05:41] CBinaryLogFile::ReplaceFileToTableMemStringID: error - can't get mem string id

[2 Jan 16:05:41] CBinaryLogFile::ReplaceTableStringId error: couldn't get file string_id, will set to default NULL VALUE

[2 Jan 16:05:41] CBinaryLogFile::ReplaceFileToTableMemStringID: error - can't get mem string id

[2 Jan 16:05:41] CBinaryLogFile::ReplaceTableStringId error: couldn't get file string_id, will set to default NULL VALUE

Then we edit the file $INDEXERDIR/data/FetchedFiles, mark the offending file as finished, and the INDEXER will move on to the next log file. This procedure is described in sk116117.
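For anyone scripting this triage, here is a minimal sketch (my own illustration, not from sk116117; the parsing logic is an assumption based on the .elg excerpts above) that scans log_indexer.elg text for the last "Start reading <file>" entry before a "can't get mem string id" error:

```python
import re

def find_offending_log(elg_text):
    """Return the log file named in the last 'Start reading' entry that
    precedes a 'can't get mem string id' error, or None if no error seen."""
    last_started = None
    for line in elg_text.splitlines():
        m = re.search(r"Start reading (\S+?\.log)", line)
        if m:
            last_started = m.group(1)
        elif "can't get mem string id" in line:
            return last_started
    return None
```

You would feed it the contents of $INDEXERDIR/log/log_indexer.elg and then mark the returned file as finished in FetchedFiles by hand, as described in the post.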

In some cases it does not indicate which file is problematic at all. What we do then is evstop;evstart, and (usually) after some time it will show the offending log file.

We have tried to re-install SmartCenter, but the problem persists.

Both our vendor and CheckPoint are involved in the case, but so far they have not come up with a solution.

Any input is greatly appreciated.

/Kenneth

7 Replies
Danny
Champion

Did you already try the latest R80.10 Jumbo Hotfix (Take 169) or R80.20 / R80.20.M2? All my SmartEvent R80.20 servers are running fine but I also had such log indexer issues in previous R80.x releases (SmartEvent NGSE as well as R80.10).

Kenneth_Greger1
Contributor

Actually the problem started a few days after we applied T154 (that was the current JHF at the time).

The installation of T154 went fine, and everything was working OK for about 1.5 days.

I suspected that T154 was the problem, so we decided to re-install with the latest ISO image and not apply any patches.

However, the problem persisted.

Our R80.10 platform has been running for 1.5 years now, and we have never had problems with the logging/indexing before.

Upgrading to R80.20 is something that we are considering, but we also think it would be good to get to the bottom of this issue.

Danny
Champion

I've experienced this before as well. Uninstalling a Jumbo Hotfix does not resolve an issue that came with it. Since you've even re-installed your SmartCenter, the root cause seems to be a different one. Are your gateways running R77.30 or R80.10 as well?

Kenneth_Greger1
Contributor

All firewalls are R80.10 clusters, except 1 cluster that is running R77.20 (Gaia Embedded).

Kenneth_Greger1
Contributor

A short update on this case since it's (hopefully) resolved.

Here is the answer from CheckPoint:

#######################

We recently received an update from the R&D team that the Indexer process crashing issue is solved.

The root cause is related to the process functionality and was solved by code investigation.

The solution is integrated in
    - R80.10 latest General Availability take 169.
    - In case the customer prefers to upgrade the environment, the solution is integrated in R80.20 version latest General Availability take 17

We added JHF 169 last night, and so far it seems to be OK.

SOC_Saipem
Explorer

Please run the command below to check whether the INDEXER daemon is running.

cpwd_admin list

If the result shows ---> INDEXER 0 T 0

please stop the service and start it once again:

[Expert@HostName:0]# evstop

[Expert@HostName:0]# evstart

Then run cpwd_admin list again to check whether the INDEXER daemon is running as expected.
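If you want to script that health check, here is a small sketch (my own illustration; the column layout is an assumption based on typical `cpwd_admin list` output and the "INDEXER 0 T 0" example above — verify against your own system) that flags the INDEXER daemon when it is not in an executing state:

```python
def indexer_running(cpwd_list_output):
    """Parse `cpwd_admin list` output and report whether the INDEXER
    daemon shows an executing state ('E') with a non-zero PID."""
    for line in cpwd_list_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "INDEXER":
            # Columns assumed: APP PID STAT #START ...
            pid, stat = fields[1], fields[2]
            return stat == "E" and pid != "0"
    return False  # INDEXER not listed at all
```

A cron job could run this against the command output and trigger the evstop/evstart cycle, or at least an alert, when it returns False.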

 

Rishi_Sumbal
Participant

Hi Kenneth,

can you confirm after one year now that the problem was indeed gone after you installed the Take 169?

We regularly have the problem that the indexer crashes. We have R80.10 - Build 439 and no take.

I would rather install the latest Take (right now 249) than upgrade to R80.20 or R80.30 (which I would need to plan properly).

Thank you.
