View Issue Details

ID: 0001070
Project: bareos-core
Category: storage daemon
View Status: public
Last Update: 2019-12-12 13:29
Reporter: guidoilbaldo
Assigned To:
Priority: high
Severity: major
Reproducibility: always
Status: assigned
Resolution: open
Platform: Linux
OS: CentOS
OS Version: 7
Product Version: 18.2.5
Summary: 0001070: DIR lost connection with SD - Broken pipe
Description:

Hello,
I recently installed Bareos on a single machine (DIR + SD) and set up the droplet plugin to send data to AWS S3.
The configuration looks fine, as bconsole reports the storage status as OK:
```
Device status:

Device "AWS_S3_Object_Storage" (AWS S3 Storage) is not open.
Backend connection is working.
Inflight chunks: 0
No pending IO flush requests.
==
====
```

However, when I launch a Full backup job, it fails after roughly 15 minutes with the following error:
```
27-Mar 11:03 rabbit-1-fd JobId 3: Error: lib/bsock_tcp.cc:417 Wrote 9611 bytes to Storage daemon:172.17.0.60:9103, but only 0 accepted.
27-Mar 11:03 rabbit-1-fd JobId 3: Fatal error: filed/backup.cc:1033 Network send error to SD. ERR=Connection timed out
27-Mar 11:03 backup-1 JobId 3: Fatal error: Director's comm line to SD dropped.
27-Mar 11:03 backup-1 JobId 3: Error: Bareos backup-1 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 3
  Job: backup-rabbit-1.2019-03-27_10.34.06_14
  Backup Level: Full
  Client: "rabbit-1-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,redhat,CentOS Linux release 7.6.1810 (Core) ,CentOS_7,x86_64
  FileSet: "RabbitMQFileSet" 2019-03-27 09:42:15
  Pool: "Full" (From command line)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "S3_Object" (From Job resource)
  Scheduled time: 27-Mar-2019 10:34:06
  Start time: 27-Mar-2019 10:48:17
  End time: 27-Mar-2019 11:03:51
  Elapsed time: 15 mins 34 secs
  Priority: 10
  FD Files Written: 24
  SD Files Written: 0
  FD Bytes Written: 95,320 (95.32 KB)
  SD Bytes Written: 691 (691 B)
  Rate: 0.1 KB/s
  Software Compression: 24.3 % (lz4)
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s):
  Volume Session Id: 2
  Volume Session Time: 1553682684
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Error
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Backup Error ***
```

The connection between the FD on the backup client and the Bareos server looks fine: I can successfully telnet to both the DIR (port 9101) and the SD (port 9103).
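The same reachability check can be scripted without telnet, e.g. with bash's built-in /dev/tcp pseudo-device (a minimal sketch; the address and ports below are taken from this report's config, and `check_port` is just a hypothetical helper name):

```shell
#!/usr/bin/env bash
# check_port HOST PORT: return 0 if a TCP connection succeeds within 3 seconds.
# Uses bash's /dev/tcp redirection, so neither telnet nor nc is required.
check_port() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# DIR and SD addresses/ports from the configs in this report; adjust as needed.
check_port 172.17.0.60 9101 && echo "DIR reachable" || echo "DIR unreachable"
check_port 172.17.0.60 9103 && echo "SD reachable"  || echo "SD unreachable"
```

Note that a successful TCP connect only proves the listener is up; it says nothing about whether an established session survives a long S3 chunk upload, which is where this job dies.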
Steps To Reproduce: Install the Bareos 18.2.5 stack and try to perform a Full backup job, sending the data to S3.
Additional Information:

DIR config:

```
Client {
  Name = rabbit-1-fd
  Address = 172.17.0.170
  Password = ****
  Heartbeat Interval = 60
  File Retention = 30 days
  Job Retention = 6 months
  AutoPrune = yes
}

Job {
  Name = "backup-rabbit-1"
  Client = rabbit-1-fd
  JobDefs = "RabbitMQJobDef"
}

JobDefs {
  Name = "RabbitMQJobDef"
  Type = Backup
  Level = Incremental

  Client = bareos-fd

  FileSet = "RabbitMQFileSet"
  Schedule = "RabbitMQCycle"
  Storage = S3_Object

  Messages = Standard

  Pool = Incremental

  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"

  Full Backup Pool = Full
  Differential Backup Pool = Differential
  Incremental Backup Pool = Incremental
}

Storage {
  Name = S3_Object
  Address = 172.17.0.60
  Password = ****
  Device = AWS_S3_Object_Storage
  Media Type = S3_Object1
  Heartbeat Interval = 60
}
```

SD config:

```
Device {
  Name = "AWS_S3_Object_Storage"
  Media Type = "S3_Object1"
  Archive Device = "AWS S3 Storage"
  Device Type = droplet
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/aws.droplet.profile,bucket=bucket,location=eu-central-1,chunksize=100M,iothreads=10,retries=0"
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 10
  Maximum File Size = 500M
  Maximum Spool Size = 15000M
}
```

Droplet profile (the aws.droplet.profile referenced in Device Options above):

```
use_https = true
host = s3.eu-central-1.amazonaws.com
access_key = <access_key>
secret_key = <secret_key>
pricing_dir = ""
backend = s3
aws_region = eu-central-1
aws_auth_sign_version = 4
```
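As a side check, the credentials and bucket from the droplet profile can be verified outside Bareos with the AWS CLI (hypothetical session; `bucket` is the placeholder name from Device Options, so substitute the real bucket):

```
$ aws s3api head-bucket --bucket bucket --region eu-central-1
$ aws s3 ls s3://bucket/ --region eu-central-1
```

If `head-bucket` succeeds, basic S3 connectivity and permissions are fine, which matches the "Backend connection is working" line in the device status above.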
Tags: aws, droplet, s3, storage

Activities

arogge

2019-03-27 12:43

developer   ~0003301

Can you please enable tracing of the SD (maybe at level 200) and reproduce this?
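For reference, SD tracing as requested here can be switched on from bconsole; a minimal sketch using the storage resource name from this report's config (the trace file then appears under /var/lib/bareos/, e.g. backup-1.trace):

```
*setdebug storage=S3_Object level=200 trace=1
```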
guidoilbaldo

2019-03-27 15:22

reporter   ~0003302

Sure, could you help me with that, or point me to where in the docs I can find out how?
guidoilbaldo

2019-03-27 15:26

reporter   ~0003303

OK, I found where to look.

I launched the job again and will keep you posted; in the meantime, this is the device status:

Device status:

Device "AWS_S3_Object_Storage" (AWS S3 Storage) is mounted with:
    Volume: Full-0001
    Pool: Full
    Media type: S3_Object1
Backend connection is working.
Inflight chunks: 0
No pending IO flush requests.
Configured device capabilities:
  EOF BSR BSF FSR FSF EOM !REM RACCESS AUTOMOUNT LABEL !ANONVOLS !ALWAYSOPEN
Device state:
  OPENED !TAPE LABEL !MALLOC APPEND !READ EOT !WEOT !EOF !NEXTVOL !SHORT MOUNTED
  num_writers=1 reserves=0 block=0
Attached Jobs: 4
Device parameters:
  Archive name: AWS S3 Storage Device name: AWS_S3_Object_Storage
  File=0 block=64712
  Min block=64512 Max block=64512
    Total Bytes=64,712 Blocks=0 Bytes/block=64,712
    Positioned at File=0 Block=64,712
guidoilbaldo

2019-03-27 16:02

reporter   ~0003304

This time the error was "Connection timed out". I attached the trace file for the SD, and here I paste the Bareos log for completeness:

27-Mar 14:45 backup-1 JobId 6: Using Device "AWS_S3_Object_Storage" to write.
27-Mar 14:45 backup-1 JobId 6: Probing client protocol... (result will be saved until config reload)
27-Mar 14:45 backup-1 JobId 6: Connected Client: rabbit-1-fd at 172.17.0.170:9102, encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 backup-1 JobId 6: Handshake: Immediate TLS
27-Mar 14:45 backup-1 JobId 6: Encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 rabbit-1-fd JobId 6: Connected Storage daemon at 172.17.0.60:9103, encryption: PSK-AES256-CBC-SHA
27-Mar 14:45 backup-1: info: src/droplet.c:127: dpl_init: PRNG has been seeded with enough data
27-Mar 14:45 rabbit-1-fd JobId 6: Extended attribute support is enabled
27-Mar 14:45 rabbit-1-fd JobId 6: ACL support is enabled
27-Mar 14:45 backup-1 JobId 6: Volume "Full-0001" previously written, moving to end of data.
27-Mar 14:45 backup-1 JobId 6: Ready to append to end of Volume "Full-0001" size=64712
27-Mar 15:00 rabbit-1-fd JobId 6: Error: lib/bsock_tcp.cc:417 Wrote 22084 bytes to Storage daemon:172.17.0.60:9103, but only 0 accepted.
27-Mar 15:00 rabbit-1-fd JobId 6: Fatal error: filed/backup.cc:1033 Network send error to SD. ERR=Connection timed out
27-Mar 15:00 backup-1 JobId 6: Fatal error: Director's comm line to SD dropped.
27-Mar 15:00 backup-1 JobId 6: Error: Bareos backup-1 18.2.5 (30Jan19):
  Build OS: Linux-4.4.92-6.18-default redhat CentOS Linux release 7.6.1810 (Core)
  JobId: 6
  Job: backup-rabbit-1.2019-03-27_14.44.59_07
  Backup Level: Full
  Client: "rabbit-1-fd" 18.2.5 (30Jan19) Linux-4.4.92-6.18-default,redhat,CentOS Linux release 7.6.1810 (Core) ,CentOS_7,x86_64
  FileSet: "RabbitMQFileSet" 2019-03-27 09:42:15
  Pool: "Full" (From command line)
  Catalog: "MyCatalog" (From Client resource)
  Storage: "S3_Object" (From Job resource)
  Scheduled time: 27-Mar-2019 14:44:59
  Start time: 27-Mar-2019 14:45:01
  End time: 27-Mar-2019 15:00:29
  Elapsed time: 15 mins 28 secs
  Priority: 10
  FD Files Written: 27
  SD Files Written: 0
  FD Bytes Written: 164,032 (164.0 KB)
  SD Bytes Written: 1,852 (1.852 KB)
  Rate: 0.2 KB/s
  Software Compression: 27.4 % (lz4)
  VSS: no
  Encryption: no
  Accurate: no
  Volume name(s):
  Volume Session Id: 1
  Volume Session Time: 1553697872
  Last Volume Bytes: 0 (0 B)
  Non-fatal FD errors: 1
  SD Errors: 0
  FD termination status: Fatal Error
  SD termination status: Error
  Bareos binary info: bareos.org build: Get official binaries and vendor support on bareos.com
  Termination: *** Backup Error ***
backup-1.trace (27,106 bytes)   
backup-1 (50): stored/dir_cmd.cc:448-0 level=200 trace=1 timestamp=0 tracefilename=/var/lib/bareos/backup-1.trace
backup-1 (200): stored/job.cc:463-0 Start stored FreeJcr
backup-1 (200): stored/job.cc:590-0 End stored FreeJcr
backup-1 (100): lib/jcr.cc:446-0 FreeCommonJcr: 7f5dd0015688 
backup-1 (100): lib/tls_openssl.cc:71-0 Destruct TLsOpenSsl Implementation Object
backup-1 (100): lib/tls_openssl_private.cc:62-0 Destruct TlsOpenSslPrivate
backup-1 (100): lib/bsock.cc:129-0 Destruct BareosSocket
backup-1 (100): lib/bsock.cc:81-0 Construct BareosSocket
backup-1 (100): lib/tls_openssl_private.cc:57-0 Construct TlsOpenSslPrivate
backup-1 (100): lib/tls_openssl_private.cc:536-0 Set tcp filedescriptor: <4>
backup-1 (100): lib/tls_openssl_private.cc:482-0 Set ca_certfile:	<>
backup-1 (100): lib/tls_openssl_private.cc:488-0 Set ca_certdir:	<>
backup-1 (100): lib/tls_openssl_private.cc:494-0 Set crlfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:500-0 Set certfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:506-0 Set keyfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:518-0 Set pem_userdata to address: <0>
backup-1 (100): lib/tls_openssl_private.cc:524-0 Set dhfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:542-0 Set cipherlist:	<>
backup-1 (100): lib/tls_openssl_private.cc:530-0 Set Verify Peer:	<false>
backup-1 (50): lib/tls_openssl.cc:100-0 Preparing TLS_PSK SERVER callback
backup-1 (100): lib/tls_openssl_private.cc:401-0 psk_server_cb. identitiy: R_DIRECTOR backup-1.
backup-1 (100): lib/tls_openssl_private.cc:426-0 psk_server_cb. result: 32.
backup-1 (50): lib/bnet.cc:150-0 TLS server negotiation established.
backup-1 (110): stored/socket_server.cc:97-0 Conn: Hello Director backup-1 calling
backup-1 (110): stored/socket_server.cc:115-0 Got a DIR connection at 27-Mar-2019 14:45:01
backup-1 (100): include/jcr.h:320-0 Construct JobControlRecord
backup-1 (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <1053146493.1553697901@backup-1> ssl=1
backup-1 (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <890806511.1553697901@backup-1> ssl=1
backup-1 (99): lib/cram_md5.cc:135-0 sending resp to challenge: liEdhR05BF+bs/sFI5c0OC
backup-1 (90): stored/dir_cmd.cc:289-0 Message channel init completed.
backup-1 (199): stored/dir_cmd.cc:300-0 <dird: JobId=6 job=backup-rabbit-1.2019-03-27_14.44.59_07 job_name=backup-rabbit-1 client_name=rabbit-1-fd type=66 level=70 FileSet=RabbitMQFileSet NoAttr=0 SpoolAttr=0 FileSetMD5=TG+hrCcpdl+P87N1+Wcv7B SpoolData=0 PreferMountedVols=1 SpoolSize=0 rerunning=0 VolSessionId=0 VolSessionTime=0 Quota=0 Protocol=0 BackupFormat=Native
backup-1 (200): stored/dir_cmd.cc:318-0 Do command: JobId=
backup-1 (100): stored/job.cc:92-0 <dird: JobId=6 job=backup-rabbit-1.2019-03-27_14.44.59_07 job_name=backup-rabbit-1 client_name=rabbit-1-fd type=66 level=70 FileSet=RabbitMQFileSet NoAttr=0 SpoolAttr=0 FileSetMD5=TG+hrCcpdl+P87N1+Wcv7B SpoolData=0 PreferMountedVols=1 SpoolSize=0 rerunning=0 VolSessionId=0 VolSessionTime=0 Quota=0 Protocol=0 BackupFormat=Native
backup-1 (100): stored/job.cc:112-0 rerunning=0 VolSesId=0 VolSesTime=0 Protocol=0
backup-1 (50): stored/job.cc:159-6 Quota set as 0
backup-1 (50): stored/job.cc:169-6 >dird jid=6: 3000 OK Job SDid=1 SDtime=1553697872 Authorization=BNBI-AMDB-IBBM-MMID-BKNH-DBMM-FNLK-JBNF
backup-1 (199): stored/dir_cmd.cc:300-6 <dird: getSecureEraseCmd
backup-1 (200): stored/dir_cmd.cc:318-6 Do command: getSecureEraseCmd
backup-1 (199): stored/dir_cmd.cc:300-6 <dird: use storage=S3_Object media_type=S3_Object1 pool_name=Full pool_type=Backup append=1 copy=0 stripe=0
backup-1 (200): stored/dir_cmd.cc:318-6 Do command: use storage=
backup-1 (150): stored/reserve.cc:216-6 <dird: use storage=S3_Object media_type=S3_Object1 pool_name=Full pool_type=Backup append=1 copy=0 stripe=0
backup-1 (150): stored/reserve.cc:247-6 <dird device: use device=AWS_S3_Object_Storage
backup-1 (150): stored/reserve.cc:473-6 Start find_suit_dev PrefMnt=1 exact=1 suitable=0 chgronly=0 any=0
backup-1 (150): stored/reserve.cc:650-6 Try match res=AWS_S3_Object_Storage wanted AWS_S3_Object_Storage
backup-1 (150): stored/reserve.cc:725-6 chk MediaType device=S3_Object1 request=S3_Object1
backup-1 (150): stored/reserve.cc:750-6 try reserve AWS_S3_Object_Storage
backup-1 (100): stored/block.cc:127-6 created new block of blocksize 64512 (dev->device->label_block_size) as dev->max_block_size is zero
backup-1 (200): stored/acquire.cc:826-6 Attach Jid=6 dcr=7f5dd0011678 size=0 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/reserve.cc:777-6 call reserve for append: have_vol=0 vol=
backup-1 (150): stored/reserve.cc:978-6 reserve_append device is "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/reserve.cc:1084-6 PrefMnt=1 exact=1 suitable=1 chgronly=0 any=0
backup-1 (150): stored/reserve.cc:1039-6 MaxJobs=0 Jobs=0 reserves=0 Status= Vol=
backup-1 (150): stored/reserve.cc:1213-6 OK Dev avail reserved "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/reserve.cc:142-6 Inc reserve=0 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/reserve.cc:786-6 Reserved=1 dev_name=AWS_S3_Object_Storage mediatype=S3_Object1 pool=Full ok=1
backup-1 (150): stored/reserve.cc:788-6 Vol= num_writers=0, have_vol=0
backup-1 (150): stored/reserve.cc:799-6 no vol, call find_next_appendable_vol.
backup-1 (50): stored/askdir.cc:284-6 DirFindNextAppendableVolume: reserved=1 Vol=
backup-1 (50): stored/askdir.cc:304-6 >dird CatReq Job=backup-rabbit-1.2019-03-27_14.44.59_07 FindMedia=1 pool_name=Full media_type=S3_Object1 unwanted_volumes=
backup-1 (50): stored/askdir.cc:188-6 <dird 1000 OK VolName=Full-0001 VolJobs=5 VolFiles=0 VolBlocks=0 VolBytes=64712 VolMounts=5 VolErrors=0 VolWrites=1 MaxVolBytes=1073741824000 VolCapacityBytes=0 VolStatus=Append Slot=0 MaxVolJobs=0 MaxVolFiles=0 InChanger=0 VolReadTime=0 VolWriteTime=0 EndFile=0 EndBlock=0 LabelType=0 MediaId=1 EncryptionKey= MinBlocksize=0 MaxBlocksize=0
backup-1 (50): stored/askdir.cc:224-6 DoGetVolumeInfo return true slot=0 Volume=Full-0001, VolminBlocksize=0 VolMaxBlocksize=0
backup-1 (50): stored/askdir.cc:226-6 setting dcr->VolMinBlocksize(0) to vol.VolMinBlocksize(0)
backup-1 (50): stored/askdir.cc:228-6 setting dcr->VolMaxBlocksize(0) to vol.VolMaxBlocksize(0)
backup-1 (150): stored/vol_mgr.cc:228-6 find_read_vol: read_vol_list empty.
backup-1 (150): stored/vol_mgr.cc:935-6 Vol=Full-0001 not in use.
backup-1 (50): stored/askdir.cc:315-6 Call reserve_volume for write. Vol=Full-0001
backup-1 (150): stored/vol_mgr.cc:405-6 enter reserve_volume=Full-0001 drive="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:292-6 new Vol=Full-0001 at 7f5dd00154e8 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:574-6 === set in_use. vol=Full-0001 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:611-6 Inc walk_start UseCount=2 volname=Full-0001
backup-1 (150): stored/vol_mgr.cc:273-6 List end new volume: Full-0001 in_use=1 swap=0 on device "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (50): stored/askdir.cc:320-6 DirFindNextAppendableVolume return true. vol=Full-0001
backup-1 (150): stored/reserve.cc:803-6 looking for Volume=Full-0001
backup-1 (150): stored/reserve.cc:862-6 >dird: 3000 OK use device device=AWS_S3_Object_Storage
backup-1 (150): stored/reserve.cc:665-6 Device AWS_S3_Object_Storage reserved=1 for append.
backup-1 (150): stored/reserve.cc:578-6 available device found=AWS_S3_Object_Storage
backup-1 (150): stored/reserve.cc:592-6 OK dev found. Vol=Full-0001
backup-1 (199): stored/dir_cmd.cc:300-6 <dird: run
backup-1 (200): stored/dir_cmd.cc:318-6 Do command: run
backup-1 (200): stored/dir_cmd.cc:1695-6 Run_cmd: run
backup-1 (50): stored/job.cc:191-6 backup-rabbit-1.2019-03-27_14.44.59_07 waiting 1800 sec for FD to contact SD key=BNBI-AMDB-IBBM-MMID-BKNH-DBMM-FNLK-JBNF
backup-1 (100): lib/bsock.cc:81-0 Construct BareosSocket
backup-1 (100): lib/tls_openssl_private.cc:57-0 Construct TlsOpenSslPrivate
backup-1 (100): lib/tls_openssl_private.cc:536-0 Set tcp filedescriptor: <6>
backup-1 (100): lib/tls_openssl_private.cc:482-0 Set ca_certfile:	<>
backup-1 (100): lib/tls_openssl_private.cc:488-0 Set ca_certdir:	<>
backup-1 (100): lib/tls_openssl_private.cc:494-0 Set crlfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:500-0 Set certfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:506-0 Set keyfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:518-0 Set pem_userdata to address: <0>
backup-1 (100): lib/tls_openssl_private.cc:524-0 Set dhfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:542-0 Set cipherlist:	<>
backup-1 (100): lib/tls_openssl_private.cc:530-0 Set Verify Peer:	<false>
backup-1 (50): lib/tls_openssl.cc:100-0 Preparing TLS_PSK SERVER callback
backup-1 (100): lib/tls_openssl_private.cc:401-0 psk_server_cb. identitiy: R_JOB backup-rabbit-1.2019-03-27_14.44.59_07.
backup-1 (100): lib/tls_openssl_private.cc:426-0 psk_server_cb. result: 39.
backup-1 (50): lib/bnet.cc:150-0 TLS server negotiation established.
backup-1 (110): stored/socket_server.cc:97-0 Conn: Hello Start Job backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (110): stored/socket_server.cc:103-0 Got a FD connection at 27-Mar-2019 14:45:01
backup-1 (50): stored/fd_cmds.cc:130-0 Found Job backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <1937451309.1553697901@backup-1> ssl=1
backup-1 (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <1396251900.1553697901@rabbit-1-fd> ssl=1
backup-1 (99): lib/cram_md5.cc:135-0 sending resp to challenge: xggDDypv36NPcj+RKQ/YuB
backup-1 (50): stored/fd_cmds.cc:156-0 OK Authentication jid=6 Job backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (50): stored/job.cc:208-6 Auth=1 canceled=0 errstat=0
backup-1 (120): stored/fd_cmds.cc:185-6 Start run Job=backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (110): stored/fd_cmds.cc:229-6 <filed: append open session
backup-1 (120): stored/fd_cmds.cc:313-6 Append open session: append open session
backup-1 (110): stored/fd_cmds.cc:324-6 >filed: 3000 OK open ticket = 1
backup-1 (110): stored/fd_cmds.cc:229-6 <filed: append data 1
backup-1 (120): stored/fd_cmds.cc:275-6 Append data: append data 1
backup-1 (110): stored/fd_cmds.cc:277-6 <filed: append data 1
backup-1 (100): stored/append.cc:80-6 Start append data. res=1
backup-1 (100): stored/acquire.cc:439-6 acquire_append device is disk
backup-1 (190): stored/acquire.cc:475-6 jid=6 Do mount_next_write_vol
backup-1 (150): stored/mount.cc:80-6 Enter mount_next_volume(release=0) dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/mount.cc:93-6 mount_next_vol retry=0
backup-1 (100): stored/mount.cc:673-6 No swap_dev set
backup-1 (50): stored/askdir.cc:259-6 >dird CatReq Job=backup-rabbit-1.2019-03-27_14.44.59_07 GetVolInfo VolName=Full-0001 write=1
backup-1 (50): stored/askdir.cc:188-6 <dird 1000 OK VolName=Full-0001 VolJobs=5 VolFiles=0 VolBlocks=0 VolBytes=64712 VolMounts=5 VolErrors=0 VolWrites=1 MaxVolBytes=1073741824000 VolCapacityBytes=0 VolStatus=Append Slot=0 MaxVolJobs=0 MaxVolFiles=0 InChanger=0 VolReadTime=0 VolWriteTime=0 EndFile=0 EndBlock=0 LabelType=0 MediaId=1 EncryptionKey= MinBlocksize=0 MaxBlocksize=0
backup-1 (50): stored/askdir.cc:224-6 DoGetVolumeInfo return true slot=0 Volume=Full-0001, VolminBlocksize=0 VolMaxBlocksize=0
backup-1 (50): stored/askdir.cc:226-6 setting dcr->VolMinBlocksize(0) to vol.VolMinBlocksize(0)
backup-1 (50): stored/askdir.cc:228-6 setting dcr->VolMaxBlocksize(0) to vol.VolMaxBlocksize(0)
backup-1 (150): stored/mount.cc:134-6 After find_next_append. Vol=Full-0001 Slot=0
backup-1 (100): stored/autochanger.cc:125-6 Device "AWS_S3_Object_Storage" (AWS S3 Storage) is not an autochanger
backup-1 (150): stored/mount.cc:196-6 autoLoadDev returns 0
backup-1 (150): stored/mount.cc:232-6 want vol=Full-0001 devvol= dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/dev.cc:632-6 open dev: type=6 dev_name="AWS_S3_Object_Storage" (AWS S3 Storage) vol=Full-0001 mode=OPEN_READ_WRITE
backup-1 (100): stored/dev.cc:651-6 call OpenDevice mode=OPEN_READ_WRITE
backup-1 (190): stored/dev.cc:1096-6 Enter mount
backup-1 (100): stored/dev.cc:725-6 open disk: mode=OPEN_READ_WRITE open(AWS S3 Storage/Full-0001, 00000002, 0640)
backup-1 (10): backends/droplet_device.cc:103-6 : info: src/droplet.c:127: dpl_init: PRNG has been seeded with enough data
backup-1 (100): backends/droplet_device.cc:315-6 check_path(device="AWS_S3_Object_Storage" (AWS S3 Storage), bucket=prontopro-bareos, path=/): DPL_SUCCESS
backup-1 (100): backends/droplet_device.cc:342-6 CheckRemote("AWS_S3_Object_Storage" (AWS S3 Storage)): ok
backup-1 (100): backends/chunked_device.cc:129-6 New allocated buffer of 104857600 bytes at 7f5dc1bff038
backup-1 (100): backends/droplet_device.cc:528-6 Reading chunk /Full-0001/0000
backup-1 (100): stored/dev.cc:741-6 open dev: disk fd=0 opened
backup-1 (100): stored/dev.cc:659-6 preserve=33055754140 fd=0
backup-1 (100): stored/dev.cc:573-6 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/label.cc:86-6 Enter ReadVolumeLabel res=1 device="AWS_S3_Object_Storage" (AWS S3 Storage) vol=Full-0001 dev_Vol=*NULL* max_blocksize=64512
backup-1 (130): stored/label.cc:150-6 Big if statement in ReadVolumeLabel
backup-1 (200): backends/chunked_device.cc:812-6 Reading 64512 bytes at offset 0 from chunk buffer
backup-1 (190): stored/label.cc:769-6 unser_vol_label

Volume Label:
Id                : Bareos 2.0 immortal
VerNo             : 20
VolName           : Full-0001
PrevVolName       : 
VolFile           : 0
LabelType         : VOL_LABEL
LabelSize         : 164
PoolName          : Full
MediaType         : S3_Object1
PoolType          : Backup
HostName          : backup-1
Date label written: 27-Mar-2019 09:42
backup-1 (130): stored/label.cc:223-6 Compare Vol names: VolName=Full-0001 hdr=Full-0001

Volume Label:
Id                : Bareos 2.0 immortal
VerNo             : 20
VolName           : Full-0001
PrevVolName       : 
VolFile           : 0
LabelType         : VOL_LABEL
LabelSize         : 164
PoolName          : Full
MediaType         : S3_Object1
PoolType          : Backup
HostName          : backup-1
Date label written: 27-Mar-2019 09:42
backup-1 (130): stored/label.cc:244-6 Leave ReadVolumeLabel() VOL_OK
backup-1 (100): stored/label.cc:261-6 Call reserve_volume=Full-0001
backup-1 (150): stored/vol_mgr.cc:405-6 enter reserve_volume=Full-0001 drive="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:611-6 Inc walk_start UseCount=2 volname=Full-0001
backup-1 (150): stored/vol_mgr.cc:273-6 List begin reserve_volume: Full-0001 in_use=1 swap=0 on device "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:432-6 Vol attached=Full-0001, newvol=Full-0001 volinuse=1 on "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:440-6 === set reserved vol=Full-0001 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:574-6 === set in_use. vol=Full-0001 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:611-6 Inc walk_start UseCount=2 volname=Full-0001
backup-1 (150): stored/vol_mgr.cc:273-6 List end new volume: Full-0001 in_use=1 swap=0 on device "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/dev.cc:503-6 Device "AWS_S3_Object_Storage" (AWS S3 Storage) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
backup-1 (100): stored/dev.cc:545-6 set minblocksize to 64512, maxblocksize to 64512 on device "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/mount.cc:461-6 Want dirVol=Full-0001 dirStat=Append
backup-1 (150): stored/mount.cc:469-6 Vol OK name=Full-0001
backup-1 (50): stored/askdir.cc:259-6 >dird CatReq Job=backup-rabbit-1.2019-03-27_14.44.59_07 GetVolInfo VolName=Full-0001 write=1
backup-1 (50): stored/askdir.cc:188-6 <dird 1000 OK VolName=Full-0001 VolJobs=5 VolFiles=0 VolBlocks=0 VolBytes=64712 VolMounts=5 VolErrors=0 VolWrites=1 MaxVolBytes=1073741824000 VolCapacityBytes=0 VolStatus=Append Slot=0 MaxVolJobs=0 MaxVolFiles=0 InChanger=0 VolReadTime=0 VolWriteTime=0 EndFile=0 EndBlock=0 LabelType=0 MediaId=1 EncryptionKey= MinBlocksize=0 MaxBlocksize=0
backup-1 (50): stored/askdir.cc:224-6 DoGetVolumeInfo return true slot=0 Volume=Full-0001, VolminBlocksize=0 VolMaxBlocksize=0
backup-1 (50): stored/askdir.cc:226-6 setting dcr->VolMinBlocksize(0) to vol.VolMinBlocksize(0)
backup-1 (50): stored/askdir.cc:228-6 setting dcr->VolMaxBlocksize(0) to vol.VolMaxBlocksize(0)
backup-1 (200): stored/mount.cc:312-6 applying vol block sizes to device "AWS_S3_Object_Storage" (AWS S3 Storage): dcr->VolMinBlocksize set to 0, dcr->VolMaxBlocksize set to 0
backup-1 (100): stored/dev.cc:503-6 Device "AWS_S3_Object_Storage" (AWS S3 Storage) has dev->device->max_block_size of 0 and dev->max_block_size of 64512, dcr->VolMaxBlocksize is 0
backup-1 (100): stored/dev.cc:545-6 set minblocksize to 64512, maxblocksize to 64512 on device "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/mount.cc:346-6 Device previously written, moving to end of data. Expect 64712 bytes
backup-1 (100): stored/dev.cc:828-6 Enter eod
backup-1 (100): backends/droplet_device.cc:957-6 get chunked_remote_volume_size(Full-0001)
backup-1 (100): backends/droplet_device.cc:228-6 chunk /Full-0001/0000 exists. Calling callback.
backup-1 (100): backends/droplet_device.cc:245-6 chunk /Full-0001/0001 does not exists. Exiting.
backup-1 (100): backends/droplet_device.cc:969-6 Size of volume /Full-0001: 64712
backup-1 (100): backends/droplet_device.cc:988-6 Current volumesize: 64712
backup-1 (200): stored/dev.cc:840-6 ====== Seek to 64712
backup-1 (150): stored/mount.cc:364-6 update volinfo mounts=6
backup-1 (50): stored/askdir.cc:377-6 Update cat VolBytes=64712
backup-1 (50): stored/askdir.cc:402-6 >dird CatReq Job=backup-rabbit-1.2019-03-27_14.44.59_07 UpdateMedia VolName=Full-0001 VolJobs=5 VolFiles=0 VolBlocks=0 VolBytes=64712 VolMounts=6 VolErrors=0 VolWrites=1 MaxVolBytes=1073741824000 EndTime=1553697901 VolStatus=Append Slot=0 relabel=0 InChanger=0 VolReadTime=0 VolWriteTime=0 VolFirstWritten=0
backup-1 (50): stored/askdir.cc:188-6 <dird 1000 OK VolName=Full-0001 VolJobs=5 VolFiles=0 VolBlocks=0 VolBytes=64712 VolMounts=6 VolErrors=0 VolWrites=1 MaxVolBytes=1073741824000 VolCapacityBytes=0 VolStatus=Append Slot=0 MaxVolJobs=0 MaxVolFiles=0 InChanger=0 VolReadTime=0 VolWriteTime=0 EndFile=0 EndBlock=0 LabelType=0 MediaId=1 EncryptionKey= MinBlocksize=0 MaxBlocksize=0
backup-1 (50): stored/askdir.cc:224-6 DoGetVolumeInfo return true slot=0 Volume=Full-0001, VolminBlocksize=0 VolMaxBlocksize=0
backup-1 (50): stored/askdir.cc:226-6 setting dcr->VolMinBlocksize(0) to vol.VolMinBlocksize(0)
backup-1 (50): stored/askdir.cc:228-6 setting dcr->VolMaxBlocksize(0) to vol.VolMaxBlocksize(0)
backup-1 (150): stored/mount.cc:374-6 set APPEND, normal return from MountNextWriteVolume. dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (190): stored/acquire.cc:488-6 Output pos=0:64712
backup-1 (100): stored/acquire.cc:499-6 === nwriters=1 nres=1 vcatjob=6 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (50): stored/askdir.cc:377-6 Update cat VolBytes=64712
backup-1 (50): stored/askdir.cc:402-6 >dird CatReq Job=backup-rabbit-1.2019-03-27_14.44.59_07 UpdateMedia VolName=Full-0001 VolJobs=6 VolFiles=0 VolBlocks=0 VolBytes=64712 VolMounts=6 VolErrors=0 VolWrites=1 MaxVolBytes=1073741824000 EndTime=1553697901 VolStatus=Append Slot=0 relabel=0 InChanger=0 VolReadTime=0 VolWriteTime=0 VolFirstWritten=0
backup-1 (50): stored/askdir.cc:188-6 <dird 1000 OK VolName=Full-0001 VolJobs=6 VolFiles=0 VolBlocks=0 VolBytes=64712 VolMounts=6 VolErrors=0 VolWrites=1 MaxVolBytes=1073741824000 VolCapacityBytes=0 VolStatus=Append Slot=0 MaxVolJobs=0 MaxVolFiles=0 InChanger=0 VolReadTime=0 VolWriteTime=0 EndFile=0 EndBlock=0 LabelType=0 MediaId=1 EncryptionKey= MinBlocksize=0 MaxBlocksize=0
backup-1 (50): stored/askdir.cc:224-6 DoGetVolumeInfo return true slot=0 Volume=Full-0001, VolminBlocksize=0 VolMaxBlocksize=0
backup-1 (50): stored/askdir.cc:226-6 setting dcr->VolMinBlocksize(0) to vol.VolMinBlocksize(0)
backup-1 (50): stored/askdir.cc:228-6 setting dcr->VolMaxBlocksize(0) to vol.VolMaxBlocksize(0)
backup-1 (150): stored/reserve.cc:151-6 Dec reserve=0 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (50): stored/append.cc:100-6 Begin append device="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/append.cc:111-6 Just after AcquireDeviceForAppend
backup-1 (130): stored/label.cc:659-6 session_label record=d00ff570
backup-1 (150): stored/label.cc:704-6 Write sesson_label record JobId=6 FI=SOS_LABEL SessId=1 Strm=6 len=171 remainder=0
backup-1 (150): stored/label.cc:708-6 Leave WriteSessionLabel Block=64712d File=0d
backup-1 (100): lib/bsock.cc:81-0 Construct BareosSocket
backup-1 (100): lib/tls_openssl_private.cc:57-0 Construct TlsOpenSslPrivate
backup-1 (100): lib/tls_openssl_private.cc:536-0 Set tcp filedescriptor: <8>
backup-1 (100): lib/tls_openssl_private.cc:482-0 Set ca_certfile:	<>
backup-1 (100): lib/tls_openssl_private.cc:488-0 Set ca_certdir:	<>
backup-1 (100): lib/tls_openssl_private.cc:494-0 Set crlfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:500-0 Set certfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:506-0 Set keyfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:518-0 Set pem_userdata to address: <0>
backup-1 (100): lib/tls_openssl_private.cc:524-0 Set dhfile_:	<>
backup-1 (100): lib/tls_openssl_private.cc:542-0 Set cipherlist:	<>
backup-1 (100): lib/tls_openssl_private.cc:530-0 Set Verify Peer:	<false>
backup-1 (50): lib/tls_openssl.cc:100-0 Preparing TLS_PSK SERVER callback
backup-1 (100): lib/tls_openssl_private.cc:401-0 psk_server_cb. identitiy: R_DIRECTOR backup-1.
backup-1 (100): lib/tls_openssl_private.cc:426-0 psk_server_cb. result: 32.
backup-1 (50): lib/bnet.cc:150-0 TLS server negotiation established.
backup-1 (110): stored/socket_server.cc:97-0 Conn: Hello Director backup-1 calling
backup-1 (110): stored/socket_server.cc:115-0 Got a DIR connection at 27-Mar-2019 15:00:29
backup-1 (100): include/jcr.h:320-0 Construct JobControlRecord
backup-1 (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <461601013.1553698829@backup-1> ssl=1
backup-1 (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <950524310.1553698829@backup-1> ssl=1
backup-1 (99): lib/cram_md5.cc:135-0 sending resp to challenge: XVkIXSZZLjpi2/+sNi/X8A
backup-1 (90): stored/dir_cmd.cc:289-0 Message channel init completed.
backup-1 (199): stored/dir_cmd.cc:300-0 <dird: cancel Job=backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (200): stored/dir_cmd.cc:318-0 Do command: cancel
backup-1 (10): lib/jcr.cc:649-0 Warning, can't send kill to jid=6
backup-1 (200): stored/job.cc:463-0 Start stored FreeJcr
backup-1 (200): stored/job.cc:590-0 End stored FreeJcr
backup-1 (100): lib/jcr.cc:446-0 FreeCommonJcr: 7f5dc80253a8 
backup-1 (100): lib/tls_openssl.cc:71-0 Destruct TLsOpenSsl Implementation Object
backup-1 (100): lib/tls_openssl_private.cc:62-0 Destruct TlsOpenSslPrivate
backup-1 (100): lib/bsock.cc:129-0 Destruct BareosSocket
backup-1 (200): stored/append.cc:280-6 Write EOS label JobStatus=A
backup-1 (130): stored/label.cc:659-6 session_label record=d00ff570
backup-1 (150): stored/label.cc:704-6 Write sesson_label record JobId=6 FI=EOS_LABEL SessId=1 Strm=6 len=207 remainder=0
backup-1 (150): stored/label.cc:708-6 Leave WriteSessionLabel Block=64712d File=0d
backup-1 (90): stored/append.cc:299-6 back from write_end_session_label()
backup-1 (100): stored/block.cc:582-6 return WriteBlockToDev, job is canceled
backup-1 (100): stored/acquire.cc:561-6 releasing device "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/acquire.cc:586-6 There are 0 writers in ReleaseDevice
backup-1 (200): stored/acquire.cc:589-6 dir_create_jobmedia. Release vol=Full-0001 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (50): stored/askdir.cc:448-6 JobMedia FI=0 StartBlock!=0 record suppressed
backup-1 (50): stored/askdir.cc:377-6 Update cat VolBytes=64712
backup-1 (50): stored/askdir.cc:402-6 >dird CatReq Job=backup-rabbit-1.2019-03-27_14.44.59_07 UpdateMedia VolName=Full-0001 VolJobs=6 VolFiles=0 VolBlocks=0 VolBytes=64712 VolMounts=6 VolErrors=0 VolWrites=1 MaxVolBytes=1073741824000 EndTime=1553698832 VolStatus=Append Slot=0 relabel=0 InChanger=0 VolReadTime=0 VolWriteTime=0 VolFirstWritten=0
backup-1 (200): stored/acquire.cc:610-6 dir_update_vol_info. Release vol=Full-0001 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:771-6 === clear in_use vol=Full-0001
backup-1 (150): stored/vol_mgr.cc:790-6 === set not reserved vol=Full-0001 num_writers=0 dev_reserved=0 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (150): stored/vol_mgr.cc:821-6 === clear in_use vol=Full-0001
backup-1 (150): stored/vol_mgr.cc:835-6 === remove volume Full-0001 dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/acquire.cc:626-6 0 writers, 0 reserve, dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/dev.cc:1016-6 close_dev "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): backends/chunked_device.cc:139-6 Freeing buffer of 104857600 bytes at 7f5dc1bff038
Pool   Maxsize  Maxused  Inuse
NoPool      256       12      7
NAME        256       11      5
FNAME      3328        4      2
MSG        1024        7      4
EMSG       1024        7      2
BareosSocket     65636        3      2
RECORD      128        2      1

backup-1 (100): stored/dev.cc:1131-6 Enter unmount
backup-1 (100): stored/dev.cc:1004-6 Clear volhdr vol=Full-0001
backup-1 (150): stored/vol_mgr.cc:812-6 No vol on dev "AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/acquire.cc:684-6 JobId=6 broadcast wait_device_release at 27-Mar-2019 15:00:32
backup-1 (200): stored/acquire.cc:847-6 Detach Jid=6 dcr=7f5dd0011678 size=1 to dev="AWS_S3_Object_Storage" (AWS S3 Storage)
backup-1 (100): stored/acquire.cc:706-6 Device "AWS_S3_Object_Storage" (AWS S3 Storage) released by JobId=6
backup-1 (100): stored/append.cc:354-6 return from DoAppendData() ok=0
backup-1 (190): stored/dir_cmd.cc:321-6 Command run requests quit
backup-1 (200): stored/job.cc:463-6 Start stored FreeJcr
backup-1 (100): lib/tls_openssl.cc:71-6 Destruct TLsOpenSsl Implementation Object
backup-1 (100): lib/tls_openssl_private.cc:62-6 Destruct TlsOpenSslPrivate
backup-1 (100): lib/bsock.cc:129-6 Destruct BareosSocket
backup-1 (100): lib/jcr.cc:170-6 write_last_jobs seek to 192
backup-1 (200): stored/job.cc:590-6 End stored FreeJcr
backup-1 (100): lib/jcr.cc:446-6 FreeCommonJcr: 7f5dd0013058 
backup-1 (100): lib/tls_openssl.cc:71-0 Destruct TLsOpenSsl Implementation Object
backup-1 (100): lib/tls_openssl_private.cc:62-0 Destruct TlsOpenSslPrivate
backup-1 (100): lib/bsock.cc:129-0 Destruct BareosSocket
backup-1.trace (27,106 bytes)   
arogge   2019-03-28 07:53   developer   ~0003305

According to the trace, your Director told the SD to cancel the job:

backup-1 (110): stored/socket_server.cc:97-0 Conn: Hello Director backup-1 calling
backup-1 (110): stored/socket_server.cc:115-0 Got a DIR connection at 27-Mar-2019 15:00:29
backup-1 (100): include/jcr.h:320-0 Construct JobControlRecord
backup-1 (50): lib/cram_md5.cc:69-0 send: auth cram-md5 <461601013.1553698829@backup-1> ssl=1
backup-1 (100): lib/cram_md5.cc:116-0 cram-get received: auth cram-md5 <950524310.1553698829@backup-1> ssl=1
backup-1 (99): lib/cram_md5.cc:135-0 sending resp to challenge: XVkIXSZZLjpi2/+sNi/X8A
backup-1 (90): stored/dir_cmd.cc:289-0 Message channel init completed.
backup-1 (199): stored/dir_cmd.cc:300-0 <dird: cancel Job=backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (200): stored/dir_cmd.cc:318-0 Do command: cancel

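For anyone reading a trace like this one: lines arriving from the Director are prefixed with `<dird:`, and the command the SD actually dispatches appears as `Do command: ...`. A minimal sketch of how to pull those lines out of a trace file (the file name `backup-1.trace` is taken from this report; the sample content below reproduces two lines from the attached trace purely for illustration):

```shell
# Sketch: locate the incoming Director command in an SD trace file.
# "backup-1.trace" is the attachment from this report; here we recreate
# a two-line stand-in so the example is self-contained.
trace=backup-1.trace
cat > "$trace" <<'EOF'
backup-1 (199): stored/dir_cmd.cc:300-0 <dird: cancel Job=backup-rabbit-1.2019-03-27_14.44.59_07
backup-1 (200): stored/dir_cmd.cc:318-0 Do command: cancel
EOF

# "<dird:" marks data received from the Director; "Do command:" marks
# the command the SD dispatches in response.
grep -E '<dird:|Do command:' "$trace"
```

In the real trace both patterns appear back-to-back, which is how you can tell the cancel originated from the Director rather than from the SD itself.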
Do you have any idea why this might have happened?
guidoilbaldo   2019-03-29 08:56   reporter   ~0003306

No, unfortunately not. I just launched the job from bareos-webui and left it to complete.
Do you think it might be a misconfiguration in the DIR or SD configs I posted in the "additional information" section?
guidoilbaldo   2019-04-01 17:31   reporter   ~0003308

Hi,
if you have any hint as to why our configuration is not working, I would like to know, because I'm in the process of deciding whether to stick with BareOS or look for an alternative.
Thank you very much,
Stefano
guidoilbaldo   2019-04-10 15:23   reporter   ~0003316

Is there any update on this?
arogge   2019-04-10 16:37   developer   ~0003317

Sorry, I really have no idea.
Maybe you could take this to the mailing list, where somebody may be able to help?

Issue History

Date Modified Username Field Change
2019-03-27 12:31 guidoilbaldo New Issue
2019-03-27 12:31 guidoilbaldo Tag Attached: aws
2019-03-27 12:31 guidoilbaldo Tag Attached: s3;droplet;aws;storage
2019-03-27 12:31 guidoilbaldo Tag Attached: storage
2019-03-27 12:43 arogge Assigned To => arogge
2019-03-27 12:43 arogge Status new => feedback
2019-03-27 12:43 arogge Note Added: 0003301
2019-03-27 15:22 guidoilbaldo Note Added: 0003302
2019-03-27 15:22 guidoilbaldo Status feedback => assigned
2019-03-27 15:26 guidoilbaldo Note Added: 0003303
2019-03-27 16:02 guidoilbaldo File Added: backup-1.trace
2019-03-27 16:02 guidoilbaldo Note Added: 0003304
2019-03-28 07:53 arogge Status assigned => feedback
2019-03-28 07:53 arogge Note Added: 0003305
2019-03-29 08:56 guidoilbaldo Note Added: 0003306
2019-03-29 08:56 guidoilbaldo Status feedback => assigned
2019-04-01 17:31 guidoilbaldo Note Added: 0003308
2019-04-10 15:23 guidoilbaldo Note Added: 0003316
2019-04-10 16:37 arogge Note Added: 0003317
2019-07-10 17:48 arogge Assigned To arogge =>
2019-12-12 13:29 arogge Relationship added related to 0001152
2019-12-12 13:29 arogge Relationship deleted related to 0001152