#blr_p, nicely explained.
A DVD disc has a block size of 2048 bytes (2KB) or 2352 bytes? So if we choose that for QuickPar and keep 10% recovery, we would have about 450MB of recovery data for a 4.5GB disc, right?
The block size depends on how the image is created. If it contains only user data then the block size will be 2048, but if it's a RAW image then it will be 2352. A raw DVD image is not much use, so 2048, which applies to user data only, is what you will use.
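As a quick sanity check of those numbers (a rough sketch with illustrative figures; the 32768 source-block limit is from the PAR2 spec as I understand it):

```python
# Rough PAR2 sizing for a ~4.5GB single-layer DVD image.
disc_bytes = 4.5 * 1000**3       # ~4.5 GB of user data on the disc
sector = 2048                    # DVD user-data sector size
redundancy = 0.10                # 10% recovery data

recovery_bytes = disc_bytes * redundancy
print(f"recovery data: {recovery_bytes / 1000**3:.2f} GB")  # -> 0.45 GB, i.e. ~450MB

# PAR2 allows at most 32768 source blocks, so the block size must be a
# sector multiple large enough to keep the block count under that limit.
min_block = disc_bytes / 32768                       # ~137 KB
block_size = (int(min_block) // sector + 1) * sector
print(f"smallest sector-aligned block size: {block_size} bytes")
```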
The situation is more complicated with CDs. If it's a data disc then again it's 2048, but otherwise it depends on the type. Audio CDs have no sector-level parity data, so all 2352 bytes per sector are user data.
CD-ROM XA is different again: Mode 2 Form 1 sectors carry 2048 bytes of user data, while Form 2 sectors carry 2324 bytes.
None of this exists with DVD; it's just 2048.
It's important to get the right block size because it influences efficiency. The idea is that if the source blocks align with the data that is extracted, then the smallest number of recovery blocks will be needed to repair the data. Otherwise the repair will consume more recovery blocks, and this in turn affects the chances of recovery: you might get lucky, or you might not.
This is why the basics have to be understood properly at the time of generating parity.
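A tiny sketch of why the block size should line up with the 2048-byte sector (made-up numbers; it just counts how many source blocks a single bad sector touches):

```python
# With a misaligned block size, a single bad sector can straddle two
# source blocks, so the repair burns two recovery blocks instead of one.
SECTOR = 2048

def blocks_touched(block_size, sector_index):
    start = sector_index * SECTOR
    end = start + SECTOR - 1
    return end // block_size - start // block_size + 1

n = 10_000  # simulate 10,000 possible single-sector errors
for bs in (64 * SECTOR, 130_000):  # 131072 is sector-aligned, 130000 is not
    straddlers = sum(1 for i in range(n) if blocks_touched(bs, i) > 1)
    print(f"block size {bs}: {straddlers} of {n} bad sectors span two blocks")
```

With the aligned block size no bad sector ever spans two blocks; with the misaligned one, a steady fraction of them do.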
Processing the parity information and recovering data is very taxing. In the case of RAID on HDDs, the RAID controller does it, but here it has to be done in software, i.e. on the CPU, which will be very CPU-intensive.
Best is to try it and see how long it takes; you can tweak the QuickPar parameters.
Also, in the case of an HDD, the recovered data is immediately written to a new HDD while syncing, but in this case it has to be kept in cache for ... like recovering to an ISO. Then why not store the ISO on the HDD itself and relieve the system of all that heavy processing?
Storing the ISO on the drive is fine, but it takes up more space and does not include any parity data, so if you get HDD read errors you will again lose data, though this is much less likely than with optical media. Better would be to store just the parity info on the HDD and leave the optical disc as is.
Also, if there is any slight read error on the parity info disc, everything is lost when you try to recover a damaged disc.
This is a good point, but no.
With PAR2, even if there are errors when reading the parity info, only the recovery block in which the error occurs is affected; the rest are good to go. A checksum is stored for each and every recovery block, so if one recovery block has problems the others will still be used.
This is why having more recovery blocks is good: fewer of them will be affected by any single error, and the chances of recovery stay good. But generating more recovery blocks takes more time than generating fewer. Say you used just one recovery block and got a read error on it; only in that case would the parity info be a total loss. You never want to have just one recovery block.
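A rough way to picture the repair rule (simple arithmetic, not the actual PAR2 decoder): repair works as long as you have at least as many intact recovery blocks as damaged source blocks.

```python
# PAR2 rule of thumb: intact recovery blocks must cover damaged source blocks.
def can_repair(damaged_source, total_recovery, corrupt_recovery):
    return total_recovery - corrupt_recovery >= damaged_source

# 300 recovery blocks; a read error corrupts 3 of them:
print(can_repair(damaged_source=250, total_recovery=300, corrupt_recovery=3))  # True
# With a single recovery block, one bad read kills everything:
print(can_repair(damaged_source=1, total_recovery=1, corrupt_recovery=1))      # False
```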
This is another reason to use PAR2: the specs are published and there is lots of discussion around it, compared to other, more proprietary algorithms.
Typically 300 recovery blocks for 3000 source blocks with 10% redundancy is a reasonable amount. But what is reasonable depends on how fast your CPU is, how much risk you are willing to take and how many discs you have to do. These choices will be influenced by your personal experience with data loss on optical media: how much did you lose on a disc in the past? This is a personal choice; there is no one size fits all. That is why I suggested playing with the text file before moving to disc images; it's much faster to learn that way.
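For reference, the command-line cousin of QuickPar, par2cmdline, can be given roughly the same settings; a hedged sketch (flags as I recall them: -s sets the source block size in bytes, -r the redundancy percentage; file names are hypothetical):

```python
# Create ~3000 source blocks and ~300 recovery blocks for a 4.5GB image.
import subprocess

# 733 * 2048 = 1501184 bytes per block -> ~3000 sector-aligned source
# blocks on a 4.5GB image; -r10 then yields ~300 recovery blocks.
subprocess.run(
    ["par2", "create", "-s1501184", "-r10", "disc.par2", "disc.iso"],
    check=True,
)
```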
You can even try to simulate loss with a CD-R and a marker to colour over the disc and see how much can be recovered, though leave this for later when we discuss imaging apps. You can always clean the disc with some Brasso and reuse it.
Right now I use 15 or 20% redundancy.
20% means do five discs and store the total parity on another disc.
15% means do six discs and store the total parity on another disc.
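The arithmetic behind those groupings, taking 4.5GB per disc as a round figure:

```python
# Parity for a batch of discs should itself fit on one disc.
disc = 4.5  # GB per disc, round figure

for n, redundancy in ((5, 0.20), (6, 0.15)):
    parity = n * disc * redundancy
    print(f"{n} discs at {redundancy:.0%}: {parity:.2f} GB parity; "
          f"fits on one disc: {parity <= disc}")
```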
And you would need two DVD drives to perform this, or mount the parity disc in some virtual drive?
No, just one drive is necessary.
You would need an app that can read the disc sector by sector, with multiple sector re-reads if necessary, and if a sector still fails it inserts zeroes and moves on. Once the image is created on your HDD, you put the parity files in the same folder as the image and run the repair. Depending on the damage it will take time, and once done you re-burn the fixed image. You have repaired your disc.
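A minimal sketch of that read loop (assuming a Linux-style raw device such as /dev/sr0; real imaging apps do much more, this just shows the retry-then-zero-fill idea):

```python
# Image a disc sector by sector: re-read bad sectors a few times,
# then zero-fill and move on so PAR2 can repair them later.
import os

SECTOR = 2048
RETRIES = 5
DEVICE = "/dev/sr0"   # hypothetical Linux optical drive
IMAGE = "disc.iso"

fd = os.open(DEVICE, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes

with open(IMAGE, "wb") as out:
    for offset in range(0, size, SECTOR):
        data = None
        for _ in range(RETRIES):
            try:
                os.lseek(fd, offset, os.SEEK_SET)
                data = os.read(fd, SECTOR)
                break
            except OSError:
                continue                 # read error: try again
        if not data:
            data = b"\x00" * SECTOR      # give up: zero-fill this sector
        out.write(data)
os.close(fd)
```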
At a time like this, when HDD prices are hitting the roof, the idea makes sense, but when HDD prices fall, well, HDDs are preferred all the way.
Sure, but I don't want all my data on HDDs. Things like movies I watch only once; why would I store them on an HDD? I prefer frequently accessed data to be on HDD, while data that is used once in a while can easily be stored on optical media, provided there is adequate protection against data loss.
rareravi said:
Nero SecurDisc uses the same concept, I think.
http://www.securdisc.net/eng/how-to-secure.html
How do I retrieve data from damaged discs?
I want to be able to retrieve my files if a disc is accidentally damaged.
After you've copied all your files onto a disc, SecurDisc uses the empty space to add redundant and checksum data. This significantly increases the chances of your files being retrieved, even if the disc itself is damaged.
Dvdisaster can do the same thing too. I prefer not to store the parity info of a disc on the same disc itself, but rather on another disc as well as on HDD. Secondly, I only write discs to 90-95% capacity and rarely up to 100%, because IME errors start from the outer edge and move inwards. There can always be exceptions, but this is the general case. With parity data I don't care, as I can recover the disc with reasonable confidence. 20% redundancy means you can lose nearly 700MB on the disc and still recover it. And with many recovery blocks, it does not matter if the sector errors are small and scattered all over the place.
Also, what are the parameters for this recovery? All I can see is a choice of redundancy level, which means it's just like dvdisaster. No idea about recovery blocks, source blocks or anything, because there is no published spec and a proprietary algorithm is used. It will be faster, but you have no idea just how much damage, or what kind, you can recover from. Somebody else already made that decision for you.