I am dealing with transferring large files (600GB+) from one machine to another, and I'm tarring them up using `tar -cpvzf -C PATH_TO_DIR DIR`. Once the tarring process is finished, the following is done: `split -d -b 2G file_part_`. This creates a bunch of files named `file_part_00`, `file_part_01`, and so on, until the whole file is split into 2GB chunks.

Before transferring the file, I loop through each part in the directory the tar was split into and collect their md5 hashes using an equivalent of `md5sum PART_NAME > list_md5.start`. Once each part has been hashed, I run `sort -u list_md5.start` (this sorts them and removes duplicates, just to be safe).

The parts are then transferred one by one in the order they appear in `list_md5.start`. Once they arrive on the other computer, their md5 hashes are collected using the same method, but into a different list; let's call it `list_md5_2.start`. After the transfer, before putting the parts back together, I run `diff list_md5.start list_md5_2.start`. If no difference is found, I continue to the next part; otherwise, I give up and delete all the parts. It is worth noting that sometimes the md5sums don't match up, which also stops the process (this is checked before the cat assembling step).

When it comes to putting the parts back together, I do the following: `cat file_part_* > .incomplete` (the `.incomplete` is there because I have a watchdog waiting to untar any completed file). Once the cat is done, the file is renamed using `mv .incomplete`. At this point, the watchdog detects it and untars it using `tar -C DEST -xzf --totals --unlink-first --recursive-unlink`, and I get an error I can't debug:

```
Tar Failed 2
tar: Error is not recoverable: exiting now
```

After untarring, the tar is removed regardless of whether it failed or not (no point in keeping large files that failed to untar).

I have tried ensuring the names were not invalid. I've tried changing the part size to smaller sizes. I've tried manually going through the process and still got either a mismatch in md5sum or the EOF error. This is all done on Ubuntu machines which have both been updated (no updates pending). Does anyone have an idea as to how to solve this issue?

The problem was solved by adding additional storage space. To be specific, I added a 2TB hdd which is used to hold the tar while it's split. Originally, the whole process was done on a 6TB hdd with other large files on it, giving us at most 3TB of storage space to work with. The issue was noticed when something large was downloading in the background, which took up most of the space and reproduced the broken tar issue from the question.

This solution is probably not the most elegant, but removing the original file after it was tarred would involve significant overhead, which would have taken more time than simply adding additional storage space. In case someone stumbles upon this question and is going to go down the same route as me, here are the steps I followed to add the new hdd. (The steps were shown as screenshots in the original post.)

I would like to point out that someone did suggest checking whether the storage space is sufficient, and I would like to give them credit: the advice was to check that tar is running without any errors, that the return code (`echo $?`) is 0, and that the disk space is sufficient. However, this suggestion was incomplete, as at the time there was enough storage space for the tarring itself, so it never returned any errors.

The documentation for the tar option `--delete` has this interesting text:

> You can only use `--delete` on an archive if the archive device allows you to write to any point on the media, such as a disk; because of this, it does not work on magnetic tapes.

As this requires the media to support random reads/writes, this might, with a bit of luck, mean that `--delete` is done in place without having to unpack in order to create a new instance of the archive. In this case, you might unpack 20 GB of files and then delete them from the archive, and repeat the operation five (or six) times. I would advise unpacking the files from the end of the archive and deleting them in reverse order of the archive.

Untar tar File

A tar file can be untarred or extracted with the following command: `tar -xvf nmap.tar`

Untar tar.gz File

Tar files can be compressed with gzip into the gz format. In the following example, we extract the tar.gz file: `tar -xvf`

Untar tar.bz2 File

bz2 is another popular compression format that tar archives can be compressed with.
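The split/verify/reassemble pipeline from the question can be sketched end to end on toy data. This is a minimal sketch, not the author's actual script: the file and directory names (`demo_dir`, `receive`, `rebuilt.tar.gz`) are made up for the demo, the part size is shrunk from 2G to 16K so it runs quickly, and a local `cp` stands in for the network transfer.

```shell
#!/bin/sh
# Sketch of the pipeline: tar, split, hash, "transfer", verify, reassemble.
set -e

SRC=demo_dir                                  # hypothetical source directory
mkdir -p "$SRC"
head -c 100000 /dev/urandom > "$SRC/payload.bin"

# 1. Tar + gzip the directory (tar -cpzf, as in the question minus -v).
tar -cpzf archive.tar.gz -C . "$SRC"

# 2. Split into fixed-size parts (the question uses -b 2G; 16K for the demo).
split -d -b 16K archive.tar.gz file_part_

# 3. Hash every part on the sending side.
md5sum file_part_* > list_md5.start
sort -u list_md5.start -o list_md5.start

# 4. "Transfer" the parts: a local copy stands in for the real transfer.
mkdir -p receive
cp file_part_* receive/
( cd receive && md5sum file_part_* > ../list_md5_2.start )
sort -u list_md5_2.start -o list_md5_2.start

# 5. Verify all hashes, then reassemble only if everything matches.
if diff -q list_md5.start list_md5_2.start > /dev/null; then
    cat receive/file_part_* > rebuilt.incomplete
    mv rebuilt.incomplete rebuilt.tar.gz      # rename signals the watchdog
    tar -tzf rebuilt.tar.gz > /dev/null       # integrity check before untarring
    echo "transfer verified"
else
    rm -f receive/file_part_*                 # give up and delete the parts
    echo "hash mismatch, parts deleted"
fi
```

Because `split` preserves bytes exactly and `cat` reads the parts back in lexical (numeric-suffix) order, the reassembled file is byte-identical to the original archive whenever every hash matches.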
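The suggestion credited in the answer — confirm tar exited cleanly and that disk space remains — can be sketched as follows. The names `d` and `out.tar.gz` are placeholders for this demo; the `df -P` column parsing assumes a POSIX-format output.

```shell
#!/bin/sh
# Sketch of the "check the return code and the disk space" advice.
mkdir -p d && echo hello > d/x
tar -cpzf out.tar.gz -C . d
status=$?                                   # the "echo $?" check from the post
free_kb=$(df -P . | awk 'NR==2 {print $4}') # available 1K blocks here
if [ "$status" -eq 0 ] && [ "$free_kb" -gt 0 ]; then
    echo "tar ok, ${free_kb}K free"
fi
```

A silent tar run with exit status 0 is necessary but not sufficient, as the question shows: the archive can still be truncated if space runs out for a *later* step, so the free-space check matters at every stage, not just during tarring.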
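The chunked extract-then-delete idea from the `--delete` answer can be sketched like this. Assumptions worth stating: this requires GNU tar, a seekable archive on disk (per the quoted documentation), and an *uncompressed* archive, since `--delete` cannot operate through gzip; the two-entries-per-round batch stands in for the ~20 GB batches the answer suggests, and all file names are demo placeholders.

```shell
#!/bin/sh
# Sketch: repeatedly extract the last few members, then delete them from
# the archive in place, so peak disk usage stays well below 2x.
set -e

mkdir -p src out
for i in 1 2 3 4 5 6; do head -c 1000 /dev/urandom > "src/f$i"; done
tar -cf big.tar src                         # uncompressed: --delete needs this

while tar -tf big.tar | grep -q .; do
    # Demo names contain no whitespace, so plain word splitting is safe here.
    batch=$(tar -tf big.tar | tail -n 2)
    tar -xf big.tar -C out $batch           # unpack the entries at the end
    tar -f big.tar --delete $batch          # then shrink the archive in place
done
```

Working from the end of the member list keeps each `--delete` pass cheap, since tar rewrites less of the archive when the deleted members sit near its tail — which is exactly the "unpack from the end, delete in reverse order" advice.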