I have used HTTrack to copy sites. It works wonderfully for e.g. Drupal sites, but BackdropCMS sites cannot be copied this way.

Why is that?

It is of course a good security 'thing', but not good when a copy is needed for converting to another CMS or for further development.

I use WebHTTrack on Linux (Mint).

Accepted answer

I was able to create working backups for both your sites today using wget:

wget --mirror --convert-links --html-extension --verbose --output-file=choan.org.log -X /user,/file,/node https://choan.org/

wget --mirror --convert-links --html-extension --verbose --output-file=choan.dk.log -X /user,/file,/node https://choan.dk/

I installed HTTrack and even WebHTTrack, and tried both and got the same errors that you got.  

When I try other sites with HTTrack and WebHTTrack, I get lots of errors and permission-denied messages on Drupal 7 & 9 and Backdrop sites.

HTTrack has not been updated since 2017, which is a long time in tech years!  

Try the commands above, and adjust the log file location and the URL. 

The -X parameter (short for --exclude-directories) skips downloading those paths. I sometimes also exclude the /feeds path.

--html-extension (called --adjust-extension in newer wget releases) adds an .html extension to the saved pages. This is probably required, as Drupal and Backdrop URLs do not include a file extension.
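
As a sketch, here is a variant of the command above using the newer --adjust-extension spelling and also excluding /feeds; the /about path in the comment is only an illustration:

wget --mirror --convert-links --adjust-extension --verbose --output-file=choan.org.log -X /user,/file,/node,/feeds https://choan.org/
# An HTML page served at a clean URL like https://choan.org/about gets saved locally as about.html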

Hope this helps.  

Comments

Tools like HTTrack and wget don't care what kind of CMS is behind the website. Drupal, WordPress, Backdrop, etc.; it doesn't matter.

There might be some server configuration issue on your particular site. Does it have Apache auth enabled? Can it be viewed anonymously?
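
One quick, non-destructive way to check that (a sketch using the site from this thread; --spider only requests the page without saving anything, and -S prints the server's response headers):

wget --spider -S https://choan.org/
# A 200 status means the page is viewable anonymously; a 401 or 403 would point to an auth or access restriction.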

- But there must be something 'special' about Backdrop, since I always have this problem with Backdrop sites but never (yet) encountered it with Drupal or WP.

These are published sites (not in maintenance mode), and I was not logged in.

There isn't anything within Backdrop sites to block the use of WebHTTrack or wget to make site backups.  

Can you share some more details about how it's breaking?  That way we can run some tests to see if we can replicate this.  

I'm happy to help test and see what is happening.  

Thank you Wylbur.

These are the errors from one attempt:

06:57:49    Warning:     Retry after error -5 (Error attempting to solve status 206 (partial file)) at link https://choan.org/ (from primary/primary)
06:57:50    Warning:     Retry after error -5 (Error attempting to solve status 206 (partial file)) at link https://choan.org/ (from primary/primary)
06:57:51    Error:     "Error attempting to solve status 206 (partial file)" (-5) after 2 retries at link https://choan.org/ (from primary/primary)
06:57:51    Warning:     No data seems to have been transferred during this session! : restoring previous one!
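
(Aside: a 206 status means the server answered with partial content, i.e. it honoured a Range request. A rough way to check whether the server does this, as a sketch; the Range header and the /dev/null output are only there for the test:)

wget -S --header="Range: bytes=0-99" -O /dev/null https://choan.org/
# A response line of "HTTP/1.1 206 Partial Content" confirms range requests are honoured, which is what HTTrack appears to stumble over here.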

I used WebHTTrack with no adjustments (default settings).

A try on https://choan.dk, which runs Drupal 9, gives no errors; I get a complete copy:

HTTrack Website Copier/3.49-2 mirror complete in 52 seconds : 52 links scanned, 50 files written (202070 bytes overall) [98971 bytes received at 1903 bytes/sec], 191214 bytes transferred using HTTP compression in 39 files, ratio 32%, 10.4 requests per connection
(No errors, 0 warnings, 0 messages)

These sites are publicly available, so feel free to try.

Maybe the problem is all the '@import url' lines.

I can get a page with wget, but it is useless because of all the @import links; I get a page with missing styling.

Drupal does not use those.

Is there a way around this?
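
(Aside: as far as I know, wget 1.12 and later parse CSS and follow @import and url() references while mirroring, and --page-requisites pulls in the assets each page needs, so a sketch along these lines should bring the styling with it; --convert-links then rewrites the references for offline viewing:)

wget --mirror --page-requisites --convert-links --adjust-extension https://choan.org/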

Thank you Wylbur,

That worked.

There is a slight difference in the files when I am offline; maybe it is Google Fonts related.

Yes, I will learn & use wget. Funny, though, that I had no problems HTTracking Drupal and other non-Backdrop sites.

I used this line as a test and got the /core files downloaded as well, so it seems 'better'. Any comments appreciated.

wget -m -p -k -E -e robots=off -o choan.org.log https://choan.org/

Not sure which part 'did the trick'.
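
(For reference, the short options in that line map onto the long ones used earlier, plus two extras:)

# -m = --mirror, -p = --page-requisites, -k = --convert-links,
# -E = --adjust-extension (the newer name for --html-extension),
# -e robots=off tells wget to ignore robots.txt, -o = --output-file
wget --mirror --page-requisites --convert-links --adjust-extension -e robots=off --output-file=choan.org.log https://choan.org/
# robots=off is quite possibly the part that 'did the trick': wget obeys robots.txt by default during
# recursive downloads, and a site's robots.txt often disallows paths such as /core. -p also pulls in
# page requisites (CSS, JS, images), which may explain the extra files.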