How good is Subversion at storing lots of binary files?

#1
I'm looking for a place to put a few GB of documents (mostly `.doc` and `.xls`). My team already has a Subversion server set up for managing the documents we create, so I'd prefer to use that if possible. How well will Subversion handle all this extra stuff? Most of it is legacy information and will only ever have one version, but it is possible that a few documents could be updated.

I've been warned that SVN isn't particularly lots-of-big-binary-files-friendly. I'm wary of trying it to see whether it works since they'll always be in the repository history even if I later delete them.

Any alternatives? We'll need the ability to comment on and/or tag documents, but we can use a Delicious-like service combined with the URLs for the documents in SVN (or similar).

**Later**
I'm not so worried about diffs on the binaries since, as stated above, they won't change much. I'm OK with a slight hassle if they do -- it's no worse than SharePoint.

#2
Well, it's going to take up a lot of space storing all that in Subversion, I'll tell you that much. Subversion doesn't store binary files via deltas the way it stores text files, so it'll probably take up about as much space as just storing the files on your hard drive would, plus the repository overhead.

You may be able to use a server-side TiddlyWiki to store the URLs of the documents within Subversion.

If they are mostly `.doc` and `.xls` files, there's also Microsoft's SharePoint.

#3
It depends on how often the files are updated. Subversion can't do anything about merging binary files, so every time there's a conflict you'll have pain. Otherwise it's just storage and retrieval, and while it's not as efficient as with text, it still handles that just fine.

#4
I personally use Mercurial for such tasks. I've used it to store several hundred gigs of media. Yes, it takes up some disk space, but disk space is cheap. With Mercurial you also get the benefit of it being distributed: doing a "checkout" (or clone, as it's known in Mercurial) gives you the whole repository, not just a snapshot. If your server ever dies, you're still in business.

#5
From what I've seen, Git is very fast compared to Subversion, and I've heard it's somewhat faster than Mercurial, but only by a bit. However, I haven't specifically tested it with large binary files, or with lots of them.

That being said, given the way Git tracks changes, I would imagine it's very efficient at dealing with binary files.

I can say this for sure, though: once I got used to Git, there's no way I'd choose to go back to Subversion. When I have to work with Subversion repositories, I still use Git through git-svn. That way I get all the advantages of distributed version control, but still have really nice support for pushing commits back to the central Subversion repository.
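The git-svn workflow described above looks roughly like this (the server URL and file names are placeholders, and this is a sketch rather than a runnable demo, since it needs a real Subversion server):

```shell
# Mirror an existing Subversion trunk into a local Git repository.
git svn clone https://svn.example.com/repo/trunk my-project
cd my-project

# Work locally with ordinary Git commits...
git add report.doc
git commit -m "Update report"

# ...then replay those commits as Subversion revisions upstream.
git svn dcommit

# Fetch new Subversion revisions and rebase local work on top of them.
git svn rebase
```

`dcommit` and `rebase` are what keep the central Subversion history linear while you get distributed version control locally.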

#6
At my previous company we set up Subversion to store CAD files; files up to 100 MB were stored in it. If many people add big files at once, the Subversion web server can become a bottleneck, but incremental commits were perfectly fine.

Subversion does store binary deltas. In fact, on the server side, binary and text files are treated exactly the same when the delta is stored. See the "binary delta encoding improvements" section of the Subversion release notes, which explicitly says that "*Subversion uses the xdelta algorithm to compute differences between **strings of bytes***" (and not strings of 'characters').

As an experiment, I stored 10 versions of a CAD file (a CATIA part file). For each version I made minor modifications to the part and then checked the server-side repository size. The total size was about 1.2x after about 10 revisions (x being the original file size).

Remember to set the svn:needs-lock property. In my experience, the best way is to use auto-props to set svn:needs-lock based on file extension.
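For example, the relevant settings in the client-side `~/.subversion/config` look roughly like this (the extensions listed are just the ones mentioned in this thread):

```ini
[miscellany]
enable-auto-props = yes

[auto-props]
*.doc = svn:needs-lock=*
*.xls = svn:needs-lock=*
```

With this in place, `svn add` automatically marks new `.doc`/`.xls` files read-only until someone takes the lock with `svn lock`, which prevents two people from editing an unmergeable file at the same time.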




#7
We built our Subversion client exactly for this, as we did really big design/consulting jobs that really needed version control. We never had any problems with it.




#8
There's a difference between lots of big binary files, and a big number of binary files.

In my experience, SVN is fine with individual binary files of several hundred megabytes. The only problems I've seen begin to occur with individual files of around a gigabyte or so: operations fail for mysterious reasons, possibly because SVN fails to handle network-related problems.

I am not aware of any SVN problems related to the number of binary files, beyond their lack of mergeability and the fact that binary files often can't be stored efficiently as deltas (SVN does use deltas for binaries).

So:

* 1000 × 1 MB files = fine.
* 100 × 10 MB files = fine.
* 10 × 100 MB files = fine.
* 1 × >1000 MB file = not a good idea.

I would hope the size of your documents fits into one of the fine categories :)


