View Issue Details

ID: 0001503
Project: bareos-core
Category: storage daemon
View Status: public
Last Update: 2023-12-13 15:27
Reporter: cmlara
Assigned To: bruno-at-bareos
Priority: normal
Severity: major
Reproducibility: always
Status: closed
Resolution: duplicate
Platform: Linux
OS: Ubuntu
OS Version: 20.04.4
Product Version: 22.0.0
Summary: 0001503: Bareos-sd with libdroplet (S3) storage leaves sockets in CLOSE_WAIT state (FD exhaustion/resource leak)
Description:
bareos-dir Version: 22.0.1~pre (21 December 2022) Ubuntu 20.04.4 LTS
s3-us-east-2-sd Version: 22.0.1~pre (21 December 2022) Ubuntu 20.04.4 LTS

Director and SD are on different hosts.

On a system running bareos-sd configured for S3 storage via libdroplet, each individual job appears to leave at least one FD open in CLOSE_WAIT state. I've been tracking this on 21.0.0 for a while. Now that 22.0.1~pre is published as the current release, I upgraded and tested against it to confirm it had not been fixed over the past year.

---- Logs from 21.0.1-pre

$ ps afux |grep -i bareos-sd
ubuntu 556449 0.0 0.0 8164 660 pts/0 S+ 02:49 0:00 \_ grep --color=auto -i bareos-sd
bareos 555014 0.0 2.4 307808 24432 ? Ssl Dec24 0:00 /usr/sbin/bareos-sd -f

$ sudo lsof -p 555014
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
bareos-sd 555014 bareos 3u IPv4 75608807 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 4u IPv6 75608808 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 6u IPv4 75609055 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:40150->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 7u IPv4 75608950 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:39234->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 8u REG 202,1 30374841 512018 /var/lib/bareos/s3-us-east-2-sd.trace
bareos-sd 555014 bareos 9u IPv4 75611304 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:45236->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)

<ran a small backup>


$ sudo lsof -p 555014
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
bareos-sd 555014 bareos 3u IPv4 75608807 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 4u IPv6 75608808 0t0 TCP *:bacula-sd (LISTEN)
bareos-sd 555014 bareos 6u IPv4 75609055 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:40150->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 7u IPv4 75608950 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:39234->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 8u REG 202,1 30438986 512018 /var/lib/bareos/s3-us-east-2-sd.trace
bareos-sd 555014 bareos 9u IPv4 75611304 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:45236->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 11u IPv4 75624999 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:42052->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 12u IPv4 75625002 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:33668->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 13u IPv4 75625007 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:59050->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
bareos-sd 555014 bareos 14u IPv4 75625012 0t0 TCP ip-172-26-12-255.us-east-2.compute.internal:42476->s3-r-w.us-east-2.amazonaws.com:https (CLOSE_WAIT)
Steps To Reproduce: Run a backup where storage is backed by S3.
Tags: No tags attached.

Relationships

duplicate of 0001553 (new): S3 (droplet) returns an error on Exoscale S3 service

Activities

bruno-at-bareos   2023-01-12 16:08   manager   ~0004857

Hello,

TCP/IP state is not handled by Bareos but by your own operating system.
It looks like a FIN is sent but no FIN-ACK is received back, and thus the CLOSE_WAIT state remains.

We have several tests and operating systems running, but we are not able to reproduce this.
Could you check with your network admin whether something is blocking the normal TCP/IP workflow?
cmlara   2023-01-12 22:04   reporter   ~0004861
Last edited: 2023-01-12 22:07

"TCP/IP state is not handled by Bareos but by your own operating system."
True, however on Linux CLOSE_WAIT generally means that the remote side has closed the socket and the local system is waiting on the user software to either close() or shutdown() the socket it has open for the connection.

I'm running these connections inside the same AWS region, with a Lightsail instance hosting the Bareos SD and the VPC gateway configured with an S3 target (the S3 connections are routed internally by AWS on their fabric at the gateway/router). No firewall blocking is enabled.

I performed an unencrypted network capture to get a better idea of what is going on (it contains credentials and real data, so I can't post the file).

What I observed for a sequence of operations was:

1) SD: HTTP HEAD of /S3-AI-Incremental-0159/0000 (this is the current active object chunk for this particular media)
2) AWS_S3: HTTP 200 Response with head results
3) SD: HTTP PUT /S3-AI-Incremental-0159/0000 (this is the backup data being uploaded)
4) AWS_S3: HTTP OK Response
5) SD: HEAD of /S3-AI-Incremental-0159/0000 (I presume validating the upload was truly successful, or getting latest data about the file to avoid possible write collisions)
6) AWS_S3: HTTP 200 with Head results
7) SD: HTTP HEAD of /S3-AI-Incremental-0159/0001 (this would be the next chunk file once the chunk size threshold is crossed; not sure why this occurred at this point, as this file shouldn't exist yet)
8) AWS_S3: HTTP 404 to HEAD request (expected)
9) OS of server hosting the SD sends a TCP ACK to the packet in step 8 (note: these have been sent for other packets as well, this is just the first relevant packet for discussion)

Approximately 23 seconds later (likely a keep-alive timeout at the S3 bucket's web server):

10) AWS_S3: Sends FIN+ACK acknowledging the ACK from step 9 and requesting to close the connection.
11) OS of server with SD: Sends an ACK to the packet in step 10, allowing the S3 bucket to close the connection. Locally the connection moves to CLOSE_WAIT.

Now that the connection has been closed on the bucket side, the OS is waiting for Bareos/libdroplet to read the last of the buffered data (if any) and then close() or shutdown() the socket, which would generate the second FIN/ACK cycle of the two-way shutdown. This never appears to happen, so the FD is left open and the connection remains in CLOSE_WAIT until the Bareos SD is restarted.
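
For illustration, here is a generic POSIX sketch of the teardown the OS is waiting for (not Bareos/libdroplet code, just the standard pattern): once the peer's FIN has arrived, recv() returns 0, and the local application is expected to close() its descriptor, which sends the local FIN and moves the socket out of CLOSE_WAIT.
```
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* Generic half-close handling (not Bareos code): after the remote end sends
 * its FIN, the socket sits in CLOSE_WAIT until the application acts. */
static void drain_and_close(int fd)
{
  char buf[4096];
  ssize_t n;

  /* Read out anything still buffered; recv() returning 0 confirms the peer
   * has closed its side (the CLOSE_WAIT condition). */
  while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) {
    /* discard or process the remaining data */
  }

  /* close() releases the FD and sends our FIN, moving the connection from
   * CLOSE_WAIT to LAST_ACK and eventually to fully closed. */
  close(fd);
}
```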

I will note that after step 9 occurs, but before step 10, the Bareos console already indicates the backup was a success, which makes sense since the data is fully stored in the bucket at that point. This makes me suspect that some sort of socket shutdown by the SD/libdroplet should be happening after step 9 but isn't, and the connection instead times out from the S3 bucket side. Alternatively, whatever task should occur after step 11 isn't happening, and the socket remains consumed.

If there are any config files or additional logs that could be useful please let me know.
cmlara   2023-01-14 03:37   reporter   ~0004862

Looking at the raw HTTP, coupled with some trace logging I had done in the past, helped me get a better understanding of what was going on.

Some context on the steps above:
Steps 3-4 are likely DropletDevice::d_write->ChunkedDevice::WriteChunked->ChunkedDevice::FlushChunk->DropletDevice::FlushRemoteChunk(), which eventually leads to dpl_s3_put().
Steps 5-8 are likely DropletDevice::d_flush->ChunkedDevice::WaitUntilChunksWritten()->ChunkedDevice::is_written()->DropletDevice::RemoteVolumeSize(), which eventually leads to dpl_s3_head_raw(). It is expected behavior that this keeps probing until it can't find a chunk; that is how it knows all chunks are accounted for (see the sketch below).
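
To make the probing concrete, here is a hypothetical sketch of what that loop amounts to; the helper chunk_size_via_head() is made up for illustration and is not the real Bareos/libdroplet API:
```
#include <stdio.h>
#include <sys/types.h>

/* Hypothetical: stands in for an HTTP HEAD on the chunk object via libdroplet;
 * returns the object size, or -1 if the object does not exist (404). */
static ssize_t chunk_size_via_head(const char *object);

/* Illustration of the chunk probing described above: chunk objects are named
 * 0000, 0001, ... under the volume prefix (e.g. /S3-AI-Incremental-0159/0000),
 * and probing stops at the first chunk that does not exist. */
static ssize_t remote_volume_size(const char *volume)
{
  ssize_t total = 0;

  for (int chunk = 0;; chunk++) {
    char object[256];
    snprintf(object, sizeof(object), "/%s/%04d", volume, chunk);

    ssize_t size = chunk_size_via_head(object);
    if (size < 0) break; /* 404: no further chunk, all chunks accounted for */

    total += size;
  }
  return total;
}
```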

What I'm observing on a preliminary review is that in src/droplet/libdroplet/src/backend/s3/backend/*.c, connections are opened by calling `dpl_try_connect(ctx, req, &conn);`, where conn is returned as the connection to use for communicating with the S3 bucket.

At a quick glance, all the source files tend to include this piece of code:
```
  if (NULL != conn) {
    if (1 == connection_close)
      dpl_conn_terminate(conn);
    else
      dpl_conn_release(conn);
  }
```
The value `connection_close` appears to be set to `1` only on unexpected errors, so in the case of a successful backup and the volume-size validation at step 8, dpl_conn_release(conn) is used again, which returns the connection to the pool for future use, where it eventually dies from non-use.

I suspect that DropletDevice::d_close() may be missing a step that would lead to calling dpl_conn_terminate() to close the connection and clean up the sockets.
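
Purely as an illustrative sketch of that suspicion (where such a hook would live in DropletDevice::d_close(), and whether the connection is still reachable there, are assumptions on my part), the close path would terminate the connection instead of handing it back to the pool:
```
#include <droplet.h> /* assumed libdroplet header name */

/* Hypothetical sketch: when the device is done, tear the connection down
 * (dpl_conn_terminate() closes the socket, sending our FIN) instead of
 * parking it in the pool with dpl_conn_release(). */
static void close_device_connection(dpl_conn_t **connp)
{
  if (*connp != NULL) {
    dpl_conn_terminate(*connp);
    *connp = NULL;
  }
}
```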
arogge   2023-01-26 09:19   manager   ~0004870

You're probably right.
However, AFAICT droplet is supposed to handle this on its own, which it doesn't seem to do (correctly), at least not in your use case.
arogge   2023-01-26 09:44   manager   ~0004871

After having yet another look, the connection pooling in droplet/libdroplet/src/conn.c is simply not able to handle that.
We would have to have some kind of job that looks at all connections in the connection pool and shuts them down if the remote end has closed the connection.

If I understand the current code correctly, it will only look at a connection again when it tries to re-use it, and then it should clean it up. So while you're seeing some sockets in CLOSE_WAIT, the number of sockets in that state should not steadily increase, but max out at some arbitrary limit (probably the number of devices times the number of threads).
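
For what it's worth, a minimal POSIX sketch of how such a sweep could tell that the remote end has already closed a pooled connection (a generic pattern, not existing libdroplet code; iterating the pool itself is left out):
```
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Generic check (not libdroplet code): a non-blocking MSG_PEEK reveals whether
 * the peer has already closed the connection, i.e. whether the socket is
 * sitting in CLOSE_WAIT and should be terminated rather than kept pooled. */
static int peer_has_closed(int fd)
{
  char byte;
  ssize_t n = recv(fd, &byte, 1, MSG_PEEK | MSG_DONTWAIT);

  if (n == 0) return 1; /* orderly shutdown by the peer */
  if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
    return 0;           /* no data pending, connection still open */
  if (n < 0) return 1;  /* other error: treat the connection as dead */
  return 0;             /* data pending: connection alive, leave it pooled */
}
```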
cmlara   2023-01-26 10:36   reporter   ~0004872

I currently have 3 'S3 devices' configured (each one limited to a single job at a time due to the S3 interleaving restrictions).

I just restarted the SD (the last restart was on the 12th); at that point I was at 1009 FDs, 6 of them needed by bareos-sd and the remaining 1003 being CLOSE_WAIT sockets. If I don't proactively restart the service, I always hit the configured 1024 FD ulimit and backups fail.

If there is an upper limit it appears to be quite high.

On my initial look I did wonder why I wasn't simply seeing the socket get re-used, which would lead to an error and then to it being closed. My only theory is that the 'ctx' passed to dpl_try_connect() is reset somewhere (freed or lost), so each 'session' (whatever a session is in this case) gets a new context with no knowledge of the previous sockets, and therefore never tries to re-use and clean them up. However, I haven't been able to fully follow the code flow to confirm that this is accurate.

Issue History

Date Modified Username Field Change
2022-12-25 04:13 cmlara New Issue
2023-01-12 16:08 bruno-at-bareos Note Added: 0004857
2023-01-12 22:04 cmlara Note Added: 0004861
2023-01-12 22:07 cmlara Note Edited: 0004861
2023-01-14 03:37 cmlara Note Added: 0004862
2023-01-26 09:19 arogge Note Added: 0004870
2023-01-26 09:44 arogge Note Added: 0004871
2023-01-26 10:36 cmlara Note Added: 0004872
2023-12-13 15:27 bruno-at-bareos Assigned To => bruno-at-bareos
2023-12-13 15:27 bruno-at-bareos Status new => closed
2023-12-13 15:27 bruno-at-bareos Resolution open => duplicate
2023-12-13 15:27 bruno-at-bareos Relationship added duplicate of 0001553