I decided to dig in the direction of backing up collections via cron. Is it possible to execute a console command from Node.js?

mongoexport -d database -c collection -o collection.json --jsonArray 

I installed the npm package node-schedule, with which I want to back up collections on certain days of the month.
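Roughly, I imagine something like this (a minimal sketch, assuming mongoexport is on the PATH; the backups/ directory and the schedule are placeholders):

    // backup.js - run the console command above from Node on a schedule
    const schedule = require('node-schedule');
    const { exec } = require('child_process');

    // Every Sunday at 03:00 (cron format: minute hour day-of-month month day-of-week)
    schedule.scheduleJob('0 3 * * 0', () => {
      const stamp = new Date().toISOString().slice(0, 10); // e.g. 2014-05-18
      exec(`mongoexport -d database -c collection -o backups/collection-${stamp}.json --jsonArray`,
        (err, stdout, stderr) => {
          if (err) {
            console.error('backup failed:', stderr);
          } else {
            console.log('backup saved');
          }
        });
    });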

    2 answers

    Why do you need to run it from node.js? If something goes wrong and node cannot run the backup, the data may be lost, which contradicts the very essence of backups: they must be created regardless of whether the system is up. Usually hosters provide a daily backup service included in the price. If you have your own server, then first of all, do not store backups in the same place where the applications run. I would look towards cron. By the way, backup is not a simple topic =)
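    For example, a crontab entry along these lines (just a sketch; the /var/backups/mongo path is arbitrary, pick your own, and note that % has to be escaped in crontab):

        # Every Sunday at 03:00, dump the database into a dated directory
        0 3 * * 0 mongodump -d database -o /var/backups/mongo/$(date +\%F)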

    • I phrased that poorly. I want it so that, for example, once a week node saves a database dump to a certain directory, from which I can download it via FTP without resorting to console commands. Yes, the hoster offers backups, but why pay when I can just as well keep them on my own computer (IMHO). Of course, I'm sure it would be best to do this with server tools, but I don't have time to write such scripts ;) And most importantly, I'm afraid of messing something up so badly that the hoster then has to wipe everything of mine off the server xD And thanks, I'm already reading, everything seems to be clear. - webphp
    • First, the hoster doesn't have to clean anything up: everything is stored on virtual disks, it would simply delete one of them). Naturally, no one will give you access to any directories other than your own. Second, the hoster obviously has better backup storage: there are professionals there, backups are made more often, and the hoster answers for your backups with its money and reputation. You will understand this when your computer (accidentally) dies from a power surge or a fire. Of course, you can build yourself a server with good protection against all these threats, but that is hardly cost-effective (hardware costs + time spent). - Zelta
    • You write as if I were running Google xD I have a small site, the database will be 100-200 MB. After archiving it's some 20 MB, and downloading that to my computer is no problem. >> You will understand this when your computer (accidentally) dies from a power surge or a fire. Well, then I'll just download it from the server onto a new machine, what's the problem xD On the whole I agree with you, I'm not arguing, but I don't want to pay for something I can perfectly well store on my own computer. And if the database gets large over time, then yes, there's no point in downloading it, and it will of course be easier to order the service from the hoster. - webphp
    • A backup is needed precisely when there is trouble with the server. > Well, then I'll just download it from the server onto a new machine, what's the problem xD That's exactly the trouble =) In general, it's your call. Oh, and if you really were running Google, the question would sound more like: "recommend a backup storage server resistant to power-grid disturbances, preferably within n * $1000", where n >= 2 - Zelta
    • @Zelta, you are praising some idealized average hoster :) There are, for example, cases like this (I personally investigated who did it and why; the logs had been erased, too): someone accidentally deletes your account at the hoster (along with a thousand others), and the data-center automation deletes the backups as well (losing the references to them is enough). It can be even simpler: the backup automation broke, or has been backing up something corrupted "all this time". And then there are no backups... So it's better not just to hope that the hoster does backups well, but also to see to your own storage. - zb '

    In case anyone needs it, here is a script I found on Google: https://gist.github.com/lazarofl/4961746

        #!/bin/bash

        # Force file synchronization and lock writes
        mongo admin --eval "printjson(db.fsyncLock())"

        MONGODUMP_PATH="/usr/bin/mongodump"
        MONGO_HOST="prod.example.com"
        MONGO_PORT="27017"
        MONGO_DATABASE="dbname"

        TIMESTAMP=`date +%F-%H%M`
        S3_BUCKET_NAME="bucketname"
        S3_BUCKET_PATH="mongodb-backups"

        # Create backup
        $MONGODUMP_PATH -h $MONGO_HOST:$MONGO_PORT -d $MONGO_DATABASE

        # Add timestamp to backup
        mv dump mongodb-$HOSTNAME-$TIMESTAMP
        tar cf mongodb-$HOSTNAME-$TIMESTAMP.tar mongodb-$HOSTNAME-$TIMESTAMP

        # Upload to S3
        s3cmd put mongodb-$HOSTNAME-$TIMESTAMP.tar s3://$S3_BUCKET_NAME/$S3_BUCKET_PATH/mongodb-$HOSTNAME-$TIMESTAMP.tar

        # Unlock database writes
        mongo admin --eval "printjson(db.fsyncUnlock())"