View Issue Details
|ID||Project||Category||View Status||Date Submitted||Last Update|
|0001367||bareos-core||file daemon||public||2021-06-28 12:30||2023-08-17 14:19|
|Summary||0001367: Potential memory leak in bareos-fd in version 20.0.1|
|Description||On big systems (~17 TB disk with 24 million inodes) the bareos-fd consumes a lot of RAM. Restarting the client helps in the short run, but the problem returns after the next run.|
I thought the RAM usage of the bareos-fd would decrease after all jobs have finished, but it doesn't.
On the system mentioned above, bareos-fd takes 5-8 GB of RAM while doing nothing.
It is a simple file backup with no plugins, using accurate mode. The only special thing is that the partition is split up into multiple jobs.
51768 root 20 0 6462352 5,197g 0 S 0,0 33,2 57:29.76 bareos-fd
Any idea how to track this down?
|Tags||No tags attached.|
Could you test and report whether a newer version, such as the current 22.1.0, still shows the same (mis-)behavior?
The problem still exists on Ubuntu 20.04 and bareos-client 22.1.1-pre26.
Memory consumption rises rapidly until the OOM killer kicks in (20 GB after 30 minutes).
Can we assist by providing logs or traces? We would need a howto for that.
Thanks for the feedback. We can't promise when it will be processed (it will be best effort), but we are interested in trying to reproduce this.
Please refer to the following page to run our bareos-support-info tool and add its result.
You will certainly also want to check our howto page on raising the debug level,
and the whole debugging chapter of our documentation.
If the resulting file is not too large (<2 MB), please attach it here (it can also be made private; passwords, keys etc. are already filtered out by the tool). Otherwise just ask and I will open a temporary upload share.
The idea is to be able to reproduce it, so the developer will be able to create a proper fix.
As we are interested in discovering what happens on your side, our dev team is also curious about the following information.
Could you report the output of find <your fileset> | wc -c or, as an equivalent if you don't have huge amounts of changed files after the last full,
echo list files jobid=<last full jobid> | bconsole | wc -c
If you have a lot of changes, you can replay the command for each incremental and sum up the whole.
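Summing the full backup plus each incremental can be done with a small loop; a minimal sketch, where the jobids are placeholders you would take from your own `list jobs` output:

```shell
#!/bin/sh
# Hypothetical sketch: sum the byte count of all path/file names stored
# for the last full backup plus each incremental since then.
# JOBIDS is an assumption -- substitute your real jobids from "list jobs".
JOBIDS="1234 1240 1245"

total=0
for jobid in $JOBIDS; do
    # Each "list files" line is one stored path; wc -c counts its bytes.
    bytes=$(echo "list files jobid=$jobid" | bconsole | wc -c)
    total=$((total + bytes))
done
echo "approximate path/filename bytes across jobs: $total"
```

The total includes bconsole's surrounding output lines as well, so treat it as a rough upper estimate of the name data the fd has to hold.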
The last full backup needed 7 days 19:46:26 to backup 101.431.976 files with a total size of 2.56 TB.
(just to clarify: it is another system with the same problem.)
Top information after 23 minutes of starting the backup job:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1165622 root 20 0 10,1g 10,0g 6884 S 99,3 42,4 4:41.56 /usr/sbin/bareos-fd -f
I cancelled the job after that. Attached is the bareos-support-info. The compressed debug trace log is more than 800 MB; could you give me a temporary upload share?
bareos-support-info_0001367_2023-08-07_110333.tgz (5,852 bytes)
The fd has to store the whole path/filename list; with 101.431.976 files that can get big, especially with long path and file names.
You can estimate how much memory you need by summing the size of those strings in the file and path tables for a given jobid.
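That estimate can be computed directly against the catalog; a hedged sketch, assuming a PostgreSQL catalog database named `bareos` and a Bareos >= 20 schema where the file name column lives in the `file` table (older releases used a separate `filename` table), with a placeholder jobid:

```shell
#!/bin/sh
# Hedged sketch: sum the length of every path + file name the catalog
# stores for one job. Schema assumption: Bareos >= 20, name column in
# the "file" table. JOBID is a placeholder for your last full backup.
JOBID=${1:-1234}

psql -d bareos -t -A -c "
  SELECT coalesce(sum(length(p.path) + length(f.name)), 0)
  FROM file f
  JOIN path p ON p.pathid = f.pathid
  WHERE f.jobid = ${JOBID};"
```

The result is only the raw string bytes; the daemon's in-memory accurate-mode structures add per-entry overhead on top, so treat the sum as a lower bound.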
|2021-06-28 12:30||therm||New Issue|
|2023-07-20 17:25||bruno-at-bareos||Assigned To||=> bruno-at-bareos|
|2023-07-20 17:25||bruno-at-bareos||Status||new => feedback|
|2023-07-20 17:25||bruno-at-bareos||Note Added: 0005244|
|2023-07-24 09:38||therm||Note Added: 0005252|
|2023-07-24 09:38||therm||Status||feedback => assigned|
|2023-07-25 15:20||bruno-at-bareos||Note Added: 0005254|
|2023-07-31 11:32||bruno-at-bareos||Note Added: 0005275|
|2023-08-07 11:54||therm||Note Added: 0005314|
|2023-08-07 11:54||therm||File Added: bareos-support-info_0001367_2023-08-07_110333.tgz|
|2023-08-07 11:56||therm||Note Edited: 0005314|
|2023-08-17 14:19||bruno-at-bareos||Note Added: 0005323|