View Issue Details

ID: 0000330
Project: bareos-core
Category: [All Projects] director
View Status: public
Last Update: 2015-11-07 10:53
Reporter: norbert.tobolski
Assigned To:
Priority: normal
Severity: minor
Reproducibility: always
Status: resolved
Resolution: open
Platform: Linux
OS: CentOS
OS Version: 6
Product Version: 14.2.2
Fixed in Version: 15.2.2
Summary: 0000330: NDMP protocol error
Description

Hi,

when using NDMP backup feature, bareos.log is filling with these error messages:

18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Async request NDMP4_FH_ADD_DIR
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.
18-Aug 17:58 srv-bhv-backup-dir JobId 384: Fatal error: NDMP protocol error, FHDB add_dir for unknown parent inode 2332.

....

The Storage Daemon seems to write data to the backup device.

The backup client is a NetApp running NetApp Release 8.2.1P2 7-Mode. The backup format is "dump". The Bareos version is 14.3.0 from the bareos-master repositories.


Steps To Reproduce: Start a backup job.

Additional Information

Configuration:

Client {
  Name = netapp01
  Address = netapp01.domain.local
  Port = 10000
  Catalog = MyCatalog
  File Retention = 90 days
  Job Retention = 6 months
  Protocol = NDMPv4
  Auth Type = clear # clear = clear text, MD5 = challenge protocol
  Username = "ndmp-backup" # NDMP user on the DATA agent, i.e. the storage box being backed up
  Password = "secret" # password of that NDMP user
}

Job {
  Name = Backup-netapp01
  Protocol = NDMP
  #Backup Format = smtape
  Backup Format = dump
  Type = Backup
  Level = Full
  Client = netapp01
  FileSet = netapp01-Fileset
  Schedule = netapp01-Schedule
  Storage = NDMPFile-Netapp01
  Pool = netapp01-Pool
  Messages = Standard
}

Fileset {
  Name = netapp01-Fileset
  Include {
    Options {
    }
    #File = /vol/vol_cifs_users
    #File = /vol/vol_cifs_archiv
    File = /vol/vol_cifs_install
    #File = /vol/vol_cifs_data
  }
  Exclude {
    File = /vol/vol_cifs_users/.snapshot
    File = /vol/vol_cifs_archiv/.snapshot
    File = /vol/vol_cifs_install/.snapshot
    File = /vol/vol_cifs_data/.snapshot
  }
}
Tags: No tags attached.
bareos-master: impact = yes, action = fixed
bareos-15.2: impact = yes, action = fixed
bareos-14.2: impact = yes, action = none
bareos-13.2: impact = yes, action = none
bareos-12.4: impact = yes, action = none

Relationships

has duplicate 0000457 closed NDMP BACKUP HANG 
related to 0000345 closed NDMP restore does not work 

Activities

mvwieringen (developer)   2014-08-18 18:53   ~0000962

Master has some preliminary code for storing metadata to allow restores of
individual files using the NDMP dump and tar protocols. That worked nicely on
some NDMP implementations, but the NetApp implementation appears to return the
metadata in a somewhat random order, and there is currently no code to work
around that random metadata ordering.

An option for the time being would be to disable the storage of this metadata
via a config option and fall back to the previous behavior of ignoring it, as
was done in 13.2, and/or to investigate what workaround is needed to support
such out-of-order metadata.

Given that we have no real customers using this new code against a NetApp NDMP
instance, this will probably happen when we think the time is ready. Unless we
see a business opportunity there, it depends on a community contribution.
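The stop-gap described above landed as a Job directive that skips storing the file history entirely (only file and directory counts are kept, so single-file restore is not possible). A sketch of how it could be applied to the job from this report, assuming the `Save File History` directive name used in later Bareos releases (verify against your version's documentation):

```
Job {
  Name = Backup-netapp01
  Protocol = NDMP
  Backup Format = dump
  Type = Backup
  Client = netapp01
  # Do not store FILEHIST metadata in the FHDB; this avoids the
  # "add_dir for unknown parent inode" errors with out-of-order
  # NDMP implementations such as NetApp, at the cost of losing
  # single-file restore for these jobs.
  Save File History = no
}
```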

Related Changesets

bareos: master b0b9bcfc

2014-08-21 15:24:12

mvwieringen

Ported: N/A

Details Diff
Don't save filehist for NDMP backup when requested.

Some NDMP implementations send their FILEHIST information out of order, which
we currently don't handle fully correctly, so allow the user to set a config
option to skip saving the FILEHIST and only count the files and directories
saved.
Affected Issues
0000330
mod - src/dird/dird_conf.c Diff File
mod - src/dird/dird_conf.h Diff File
mod - src/dird/ndmp_dma.c Diff File

bareos: bareos-14.2 cd1b8079

2014-08-21 15:24:12

mvwieringen

Ported: N/A

Details Diff
Don't save filehist for NDMP backup when requested.

Some NDMP implementations send their FILEHIST information out of order, which
we currently don't handle fully correctly, so allow the user to set a config
option to skip saving the FILEHIST and only count the files and directories
saved.
Affected Issues
0000330
mod - src/dird/dird_conf.c Diff File
mod - src/dird/dird_conf.h Diff File
mod - src/dird/ndmp_dma.c Diff File

bareos: bareos-15.2 9b78b96b

2015-10-30 21:41:42

mvwieringen

Ported: N/A

Details Diff
NDMP: Add out-of-order metadata handling.

This uses an additional hash table when we receive the FHDB directory
information in a somewhat random order from the NDMP DATA Agent.
The NetApp NDMP implementation is known to do this semi-random ordering.

We pre-allocate the nodes in the same way they are normally created, but
store them temporarily in the hash table as a pointer to the node plus the
node id of its parent. This way we use the tree allocator optimally.

When we start processing the actual node information, we first check whether
any out-of-order metadata was captured in the hash table. If so, we do a
linear walk of the hash table and recursively create the missing parent
nodes until we can connect things to the existing tree. To create a parent
node we look it up in the hash table and try inserting it into the tree.
When we consume a node as a parent node, we set its pointer to NULL in the
hash table entry; this way the linear walk knows we already visited this
node and can skip it.
Affected Issues
0000330
mod - src/dird/ndmp_fhdb_mem.c Diff File
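The scheme this changeset describes can be illustrated with a small self-contained sketch (Python, purely illustrative; the real implementation is C code in src/dird/ndmp_fhdb_mem.c, and all names below are invented for the example): records whose parent inode has not been seen yet are parked in a hash table, and when processing starts, missing ancestors are resolved recursively until each parked node connects to the tree. Setting the parked entry to None after consuming it mirrors the NULL-pointer marker the changeset mentions, so the linear walk skips already-visited nodes.

```python
# Illustrative sketch of out-of-order FHDB directory handling (not Bareos code).
ROOT_INODE = 1

def build_tree(records):
    """records: iterable of (inode, parent_inode, name) in arbitrary order.
    Returns a dict mapping each connectable inode to its full path."""
    tree = {ROOT_INODE: "/"}   # inodes already connected to the tree
    parked = {}                # inode -> (parent_inode, name), possibly out of order

    def attach(inode):
        # Recursively connect `inode` (and any missing ancestors) to the tree.
        if inode in tree:
            return True
        entry = parked.get(inode)
        if entry is None:
            return False       # parent inode never announced: metadata truly missing
        parent, name = entry
        if not attach(parent):
            return False
        tree[inode] = tree[parent].rstrip("/") + "/" + name
        parked[inode] = None   # mark consumed so the linear walk skips it
        return True

    for inode, parent, name in records:
        parked[inode] = (parent, name)

    # Linear walk over the hash table, skipping already-consumed entries.
    for inode in list(parked):
        if parked[inode] is not None:
            attach(inode)
    return tree
```

For example, the deepest directory can arrive first and still end up connected once its ancestors are replayed, while a record referencing a genuinely unknown parent inode simply stays unattached (the situation the "add_dir for unknown parent inode" error reports).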

Issue History

Date Modified Username Field Change
2014-08-18 18:05 norbert.tobolski New Issue
2014-08-18 18:53 mvwieringen Note Added: 0000962
2014-08-18 18:53 mvwieringen Status new => confirmed
2014-10-06 11:21 mvwieringen Relationship added related to 0000345
2014-10-06 11:24 mvwieringen Changeset attached => bareos bareos-14.2 cd1b8079
2014-10-06 11:25 mvwieringen Changeset attached => bareos master b0b9bcfc
2015-04-16 22:39 mvwieringen Relationship added has duplicate 0000457
2015-11-07 10:51 mvwieringen Changeset attached => bareos bareos-15.2 9b78b96b
2015-11-07 10:53 mvwieringen bareos-master: impact => yes
2015-11-07 10:53 mvwieringen bareos-master: action => fixed
2015-11-07 10:53 mvwieringen bareos-15.2: impact => yes
2015-11-07 10:53 mvwieringen bareos-15.2: action => fixed
2015-11-07 10:53 mvwieringen bareos-14.2: impact => yes
2015-11-07 10:53 mvwieringen bareos-14.2: action => none
2015-11-07 10:53 mvwieringen bareos-13.2: impact => yes
2015-11-07 10:53 mvwieringen bareos-13.2: action => none
2015-11-07 10:53 mvwieringen bareos-12.4: impact => yes
2015-11-07 10:53 mvwieringen bareos-12.4: action => none
2015-11-07 10:53 mvwieringen Status confirmed => resolved
2015-11-07 10:53 mvwieringen Product Version => 14.2.2
2015-11-07 10:53 mvwieringen Fixed in Version => 15.2.2