[Deploying Sakai] MySQL dump format for backing up

Colin Tatham colin.tatham at unimelb.edu.au
Wed Oct 12 21:33:59 PDT 2011


Thanks Robert. I'm beginning to agree; my restore is complicated by the fact
that the SQL files produced by the dual-file-format dump can't simply be run
in alphabetical order, due to foreign key constraints. I'll have to turn
foreign key checking off before running the SQL files, and then back on again
afterwards.
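
Something along these lines, roughly (just a sketch; the dump directory and
database name below are illustrative):

  # run all the SQL files in one client session with foreign key checks
  # disabled, then switch the checks back on at the end
  ( echo "SET FOREIGN_KEY_CHECKS=0;"
    cat dump/*.sql
    echo "SET FOREIGN_KEY_CHECKS=1;" ) | mysql sakai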

Colin


On 12/10/11 12:06 PM, Robert Cohen wrote:

> We use the standard dump.
> I can't see any reason why we'd want to only restore part of it.
> So I don't see much reason to do anything else.
> 
> 
> 
> 
> On 12/10/11 11:58 AM, "Colin Tatham" <colin.tatham at unimelb.edu.au> wrote:
> 
>> 
>> Thanks for that (and your Oracle reply). Now I have even more options to
>> choose from for our MySQL backup! :-)
>> 
>> Does anyone currently use the standard all-in-one dump for Sakai backups,
>> or
>> should we avoid that?
>> 
>> Thanks
>> 
>> Colin
>> 
>> 
>> On 11/10/11 10:27 AM, Matthew Jones wrote:
>> 
>>> While not for Sakai, what I used to do was create two dumps:
>>> 
>>> one with --no-create-info and another with --no-create-db. The
>>> --no-create-db dump would contain all of the data, and the other just the
>>> schema (which wouldn't change often). To the data dump I'd add the option
>>> --order-by-primary, because each night I checked both files into a local
>>> svn repo and committed, so I had a backup of the database "forever". Since
>>> the output was ordered, only each day's diffs were stored in svn, so it
>>> didn't take up much space in the repo overall.
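>>> 
>>> Roughly, that nightly job would be something like this (just a sketch;
>>> the database name and file names are illustrative, and the schema-only
>>> dump here adds --no-data to the --no-create-db mentioned above):
>>> 
>>>   # data only, ordered by primary key so each night's diff stays small
>>>   mysqldump --no-create-info --order-by-primary sakai > sakai-data.sql
>>>   # schema only, without a CREATE DATABASE statement
>>>   mysqldump --no-data --no-create-db sakai > sakai-schema.sql
>>>   # check both files into the local svn working copy and commit
>>>   svn add -q --force sakai-data.sql sakai-schema.sql
>>>   svn commit -q -m "nightly dump" sakai-data.sql sakai-schema.sql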
>>> 
>>> Then I could roll the database back to any point in the past (as long as
>>> the file system resources and the actual deployed version of the software
>>> were available, which they were). We usually kept file snapshots for only
>>> around 30 days, using something like
>>> http://www.mikerubel.org/computers/rsync_snapshots/
>>> It was a while ago, but that process seems familiar.
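>>> 
>>> (The rsync_snapshots idea is basically one nightly rsync with --link-dest,
>>> so files that haven't changed are hard-linked against the previous
>>> snapshot rather than copied again; the paths here are illustrative:)
>>> 
>>>   rsync -a --delete --link-dest=/backup/daily.1 /data/sakai/ /backup/daily.0/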
>>> 
>>> We had to homebrew everything back then; popular off-the-shelf packages
>>> are probably just as good.
>>> 
>>> You should be able to restore the schema from auto.ddl as well; the data
>>> is the most important piece. You'd probably want to test that out, though.
>>> ;)
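>>> 
>>> (auto.ddl here refers to the sakai.properties setting that has Sakai
>>> create any missing tables at startup, i.e. something like:)
>>> 
>>>   # sakai.properties -- let Sakai create missing schema on startup
>>>   auto.ddl=true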
>>> 
>>> -Matthew 
>>> 
>>> On Sun, Oct 9, 2011 at 7:16 PM, Colin Tatham
>>> <colin.tatham at unimelb.edu.au>
>>> wrote:
>>>> Hi
>>>> 
>>>> Just wondering whether anyone has any experience or opinions on this.
>>>> mysqldump has an option which produces separate files for the database
>>>> structure (in SQL files) and content (in tab-delimited plain text
>>>> files):
>>>> http://dev.mysql.com/doc/refman/5.0/en/mysqldump-delimited-text.html
>>>> 
>>>> We have a Sakai DB dump in this format which I need to restore, but as
>>>> there doesn't seem to be any MySQL facility to restore automatically from
>>>> the 677 files in the tar archive, I'll need to script it. Has anyone done
>>>> that already?
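>>>> 
>>>> Roughly what I have in mind for the script (just a sketch; it assumes
>>>> the tar has been extracted to ./dump and the target database is "sakai"):
>>>> 
>>>>   # recreate the tables from the per-table .sql files
>>>>   for f in dump/*.sql; do
>>>>       mysql sakai < "$f"
>>>>   done
>>>>   # load the tab-delimited .txt files; mysqlimport derives the table
>>>>   # name from each file name (minus the extension)
>>>>   mysqlimport --local sakai dump/*.txt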
>>>> 
>>>> I'm also interested to know what the usual practice is for backing up: do
>>>> most of you do the standard mysqldump backup to a single, very large SQL
>>>> file and then have that backed up at the file level by your normal backup
>>>> software (TSM, in our case)? Does anyone have opinions on the possible
>>>> advantages of using the separated format?
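>>>> 
>>>> (For comparison, the standard single-file backup would be just one
>>>> mysqldump, e.g. piped through gzip; --single-transaction gives a
>>>> consistent snapshot for InnoDB tables. Names are illustrative:)
>>>> 
>>>>   mysqldump --single-transaction sakai | gzip > sakai-$(date +%F).sql.gz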
>>>> 
>>>> Thanks!
>>>> 
>>>> Colin
>> 
>> 
> 
> 
> 

-- 
Colin Tatham | Educational Technologist | Learning Environments
University Library | The University of Melbourne
W http://www.trs.unimelb.edu.au



