Correct. More precisely, it is an Ubuntu 18.04 server running the following:
1) Apache v2.4 as the web server (+ PHP v7.3, + SQLite3 used in queries);
2) Grafana (v6.5.1) for data visualization;
3) InfluxDB (v1.7.9), where all the data is stored after extraction;
4) A number of bash scripts containing the queries used to extract data from the .dat files;
5) A Python (python3) script using the gdown (v3.8.3) utility to download the files from Google Drive;
6) A number of .php scripts which work in conjunction with the bash scripts in order to automate the following:
- Data extraction from .dat files depending on the version (R77.30/R80.10, or R80.20 and above), since the two engines differ in query structure because the databases differ between those versions;
- Automatic Grafana datasource provisioning;
- Automatic Grafana dashboard provisioning. Here we created two dashboards, R77 and R80, which are used depending on the version the user enters at the beginning;
- Automatic Grafana timestamp provisioning;
- Automatic URL provisioning (IP-range checks, for cases where the VM is exposed to access from the internet);
- Running the healthcheck script on the cpinfo files and showing the result to the user as .html;
- Uncompressing .gz, .zip, and .info.tar.gz files when needed;
7) Crontab entries responsible for automatically deleting all data on a weekly basis;
8) Since Ubuntu didn't come with many of the small tools we needed preinstalled, I installed a few along the way, necessary for dev fixes and for the tool to run properly (e.g. open-vm-tools, mlocate, unzip);
9) A user interface (some HTML + CSS for the frontend, and JS for input checks in a few places) which takes the relevant info from the fields (later used for naming the datasources/cpviews and as the version pointer);
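To give a rough idea of the decompression step in item 6, here is a minimal Python sketch; the function name and layout are illustrative assumptions, not the actual script:

```python
import gzip
import shutil
import tarfile
import zipfile
from pathlib import Path

def uncompress(path: str, dest_dir: str) -> None:
    """Unpack an uploaded file based on its suffix (illustrative sketch).

    Handles the three formats mentioned above: .gz, .zip and
    .info.tar.gz (any *.tar.gz is treated as a gzipped tarball).
    """
    src = Path(path)
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)

    if src.name.endswith(".tar.gz"):          # covers .info.tar.gz too
        with tarfile.open(src, "r:gz") as tar:
            tar.extractall(dest)
    elif src.suffix == ".zip":
        with zipfile.ZipFile(src) as zf:
            zf.extractall(dest)
    elif src.suffix == ".gz":                 # plain gzip: a single file
        with gzip.open(src, "rb") as fin, \
             open(dest / src.stem, "wb") as fout:
            shutil.copyfileobj(fin, fout)
    else:
        raise ValueError(f"unsupported archive type: {src.name}")
```

The order of the checks matters: `*.tar.gz` must be matched before the plain `.gz` branch, otherwise tarballs would be gunzipped but left packed.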
I hope these details clear it up. /var/www/html contains all the code responsible for the above.
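The version-dispatch logic from item 6 can be sketched like this; the function name and return labels are hypothetical, but the split (R77.30/R80.10 vs. R80.20 and above) follows what is described above:

```python
def pick_engine(version: str) -> str:
    """Map a version string to the query engine / dashboard to use.

    R77.30 and R80.10 share the older DB layout (the "R77" path);
    R80.20 and above use the newer one (the "R80" path).
    Illustrative sketch only, not the real code.
    """
    major, minor = (int(x) for x in version.lstrip("Rr").split(".")[:2])
    return "R77" if (major, minor) <= (80, 10) else "R80"
```

Comparing `(major, minor)` tuples keeps the cutoff in one place, so e.g. `pick_engine("R80.40")` falls through to the `"R80"` path.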