Bareos Bug Tracker

View Issue Details
ID: 0000949
Project: bareos-core
Category: [All Projects] storage daemon
View Status: public
Date Submitted: 2018-05-17 16:57
Last Update: 2018-06-14 15:24
Assigned To:
Platform: x86_64
OS: CentOS
OS Version: 7
Product Version: 17.2.5
Target Version:
Fixed in Version:
Summary: 0000949: S3 Droplet backend documentation missing
Description: Please document the S3 Droplet backend. It is currently barely documented, setting it up is a real struggle, and I personally find it impossible to get the Droplet backend working with an AWS S3 bucket.
Steps To Reproduce:
1. Try to find documentation
2. Find no documentation
3. ???
4. profit
Additional Information: The only would-be documentation is the comment in the example Droplet configuration, which is insufficient to call documentation and not really helpful.
People on the IRC channel are not able to help either.
Tags: aws, droplet, s3
Attached Files: bareos-sd.trace [^] (22,555 bytes) 2018-06-08 17:29


Notes
stephand (developer)
2018-05-18 17:11

On the Bug Reporting page, it says:
"Before reporting a bug, please read the Bug Reporting Howto." [^]

Especially relevant here:
"Also, please keep your language polite. You want somebody to help you with your problem and solve it, that is seldomly achieved by being impolite."
aron_s (updater)
2018-05-21 21:04

Take a look here [^]. For a beta feature, this contains more than enough documentation.
tymik (reporter)
2018-05-22 10:39

I have already seen this and it is not sufficient.
I have not succeeded in setting up the connection to AWS S3 properly, and judging from posts on the bareos-users mailing list, I am not the only one.
Nothing is really explained there.
According to what is stated in this draft, I have set everything up properly, so either I am wrong or there is a bug and the Droplet backend does not work.
With so little documentation, I cannot tell which it is.
aron_s (updater)
2018-05-23 19:33
edited on: 2018-05-26 17:24

Please post your configuration and the output of the storage daemon running with debug level 250 while a job runs. Maybe I can help. I will also write down some kind of README for the Bareos/S3 setup.
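
The debug level can be raised at runtime from bconsole; a sketch, assuming the storage resource is named S3Droplet-lon-001 as in the configuration posted later in this thread (trace=1 redirects the output to a bareos-sd.trace file):

```
*setdebug level=250 trace=1 storage=S3Droplet-lon-001
```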

tymik (reporter)
2018-06-08 17:27

use_https = False
host =
access_key = "myaccesskey"
secret_key = "mysecretkey"
pricing_dir = ""
backend = s3
aws_auth_sign_version = 4

Device {
  Name = S3Droplet-lon-001
  Media Type = s3bucket
  Archive Device = S3 Object Storage
  # Config options:
  # profile= - Droplet profile to use either absolute PATH or logical name (e.g. ~/.droplet/<profile>.profile
  # location= - AWS location (e.g. us-east etc.)
  # acl= - Canned ACL
  # storageclass= - Storage Class to use.
  # bucket= - Bucket to store objects in.
  # chunksize= - Size of Volume Chunks (default = 10 Mb)
  # iothreads= - Number of IO-threads to use for uploads (use blocking uploads if not set.)
  # ioslots= - Number of IO-slots per IO-thread (default 10)
  # mmap - Use mmap to allocate Chunk memory instead of malloc().
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/droplet.profile,bucket=my-bucket,iothreads=1,ioslots=1,location=eu-west-2"
  Device Type = droplet
  LabelMedia = yes # lets Bareos label unlabeled media
  Random Access = yes
  AutomaticMount = yes # when device opened, read it
  RemovableMedia = no
  AlwaysOpen = no
  Description = "S3 Object device. A connecting Director must have the same Name and MediaType."
  Maximum File Size = 200M # allows seeking to small portions of the Volume
  Maximum Concurrent Jobs = 1
  #Maximum Spool Size = 15000M
}

I have tried manipulating iothreads, ioslots, the bucket name, with and without location and Maximum Spool Size, different Maximum File Size values, different URL formats for the profile host, and use_https = True. Nothing has worked so far.

I will upload the debug level 250 log file soon; I can also provide a more verbose log if needed.

I have also posted some debug output in the bareos-users thread (!topic/bareos-users/8-qOrSvcI2E) [^] from one of the configurations I tried.
aron_s (updater)
2018-06-13 13:29

Change the host in your droplet.profile to "", as the backend will insert the bucket and location parameters from your Device Options there.

But I assume you have already tested something like this. What you can also try is testing your Droplet profile with dplsh, a shell tool that validates your profile and connects directly to your cloud storage. [^]
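
Following this advice, the droplet.profile posted earlier would look roughly like this (a sketch; the host is left empty so the storage daemon can assemble the endpoint from the bucket and location given in Device Options):

```
use_https = False
host = ""
access_key = "myaccesskey"
secret_key = "mysecretkey"
pricing_dir = ""
backend = s3
aws_auth_sign_version = 4
```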
tymik (reporter)
2018-06-13 16:31

No, I had not tried setting it up that way before. In the bareos-users thread I linked previously, you can see that thraizz posted a very similar configuration (using the full bucket URL) and claimed that this setup works for him.
Nothing in the short Droplet manual suggested I should try this option either; the manual even suggests the opposite.

As for the dplsh tool you mentioned, I do not really see how it could help me verify the profile.
I have successfully connected to that bucket with s3fs and with `aws s3 ls`, so access to the bucket is fine.
And if dplsh only tests my profile file, I do not see the point at all: you say the bucket name and region are taken by the backend from Device Options, so dplsh would not have that data if the host line does not contain it.
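
For reference, bucket reachability outside of Bareos can be double-checked with the AWS CLI (assuming configured credentials, and the bucket name and region from the Device Options above):

```
aws s3 ls s3://my-bucket --region eu-west-2
```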

I have just tested the following options:
 host =
 host =
and it does not work; my bucket did not register any traffic, so Bareos did not even try to contact it.
aron_s (updater)
2018-06-14 15:21
edited on: 2018-06-14 15:24

The "host=" argument in your droplet profile is prefixed with the "bucket=" and "location=" arguments from your device resource. If the host is set as before, bareos-sd will try to reach the wrong bucket "". thraizz's setup can't work.
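
A rough sketch of the composition described in this note (the helper name and the exact virtual-hosted endpoint format are assumptions for illustration, not the actual libdroplet code):

```python
def compose_endpoint(bucket: str, host: str) -> str:
    # Hypothetical illustration: the SD prepends the bucket from
    # Device Options to the host from droplet.profile, so the host
    # must not already contain the bucket name.
    return f"{bucket}.{host}"

# Empty-bucket-prefix host from the profile plus Device Options:
print(compose_endpoint("my-bucket", "s3.eu-west-2.amazonaws.com"))
# A host that already embeds the bucket gets it doubled, which is
# why putting the full bucket URL in droplet.profile cannot work:
print(compose_endpoint("my-bucket", "my-bucket.s3.eu-west-2.amazonaws.com"))
```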

I cannot see any Droplet errors in your trace file. Nonetheless, I have created my own AWS S3 bucket, and it seems the hostname resolution is still buggy: our internal version reaches and resolves the host correctly, while the same configuration fails with the public version and the error "Failed to lookup hostname "": Unknown host".
Be sure to keep your version updated, as there is already a working fix; it just has not been released yet.

I am also writing plugin documentation like we have for the VMware plugin, so that configuration will be easier.

Sorry for the inconvenience, and thanks for the feedback.

Issue History
Date Modified Username Field Change
2018-05-17 16:57 tymik New Issue
2018-05-18 17:11 stephand Note Added: 0003006
2018-05-21 21:04 aron_s Note Added: 0003008
2018-05-22 10:39 tymik Note Added: 0003010
2018-05-23 19:33 aron_s Note Added: 0003016
2018-05-26 17:24 aron_s Note Edited: 0003016 View Revisions
2018-06-08 17:27 tymik Note Added: 0003036
2018-06-08 17:28 tymik Tag Attached: s3;droplet;aws
2018-06-08 17:28 tymik Tag Detached: s3;droplet;aws
2018-06-08 17:28 tymik Tag Attached: aw
2018-06-08 17:28 tymik Tag Attached: droplet
2018-06-08 17:28 tymik Tag Attached: s3
2018-06-08 17:29 tymik Tag Detached: aw
2018-06-08 17:29 tymik Tag Attached: aws
2018-06-08 17:29 tymik File Added: bareos-sd.trace
2018-06-13 13:29 aron_s Note Added: 0003038
2018-06-13 16:31 tymik Note Added: 0003039
2018-06-14 15:21 aron_s Note Added: 0003044
2018-06-14 15:23 aron_s Note Edited: 0003044 View Revisions
2018-06-14 15:24 aron_s Note Edited: 0003044 View Revisions

Copyright © 2000 - 2018 MantisBT Team
Powered by Mantis Bugtracker