This is part of a .sh script I need to edit to make some backups and upload them to Dropbox, but I need to split each backup into smaller parts.
NOW=$(date +"%Y.%m.%d")
DESTFILE="$BACKUP_DST/$NOW.tgz"
# Backup mysql.
mysqldump -u $MYSQL_USER -h $MYSQL_SERVER -p$MYSQL_PASS --all-databases > "$NOW-Databases.sql"
tar cfz "$DESTFILE" "$NOW-Databases.sql"
And then the function to upload the backup to Dropbox:
dropboxUpload "$DESTFILE"
How can I split the .tar file into smaller parts (for example, 100 or 200 MB each) and get the names and the number of those files so I can upload them with the dropboxUpload function?
You could use split. For example, this:
split -b 500k "$DESTFILE" "${DESTFILE}-"
will split $DESTFILE into 500 KB pieces called:
${DESTFILE}-aa
${DESTFILE}-ab
${DESTFILE}-ac
...
Then you could loop through them with something like:
for x in "${DESTFILE}"-*
do
    dropboxUpload "$x"
done
To join the binary parts back together on Windows, use:
copy /b part1+part2+... dest
(/a is for ASCII text files.)
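Putting it together for the Dropbox case, a minimal sketch (the 200 MB part size and the array are just illustrative, and it assumes dropboxUpload takes a single file path):
# Split the archive into 200 MB parts named $DESTFILE-aa, $DESTFILE-ab, ...
split -b 200M "$DESTFILE" "${DESTFILE}-"
# Collect the part names, report how many there are, and upload each one.
PARTS=( "${DESTFILE}"-* )
echo "Uploading ${#PARTS[@]} parts"
for PART in "${PARTS[@]}"
do
    dropboxUpload "$PART"
done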
What I want is to simply dump my MySQL database at certain intervals and then sync that folder to my S3 bucket when it's done. I am running into the same problem with every incarnation of my scripts.
If I run the script directly, it works fine. The DB dumps and is gzipped with no issues. If I run the script from a cron job, the script seems to execute in chunks. It dumps and zips between 75 and 100 MB at a time, overwriting the piece that just finished. So instead of ending up with one complete 541 MB gzip file, I end up with one that is only 75 MB and not the complete file.
It's almost like it starts to zip before the dump is complete.
Here is the current script I am using. There isn't much to it.
#!/bin/bash
NOW=$(date +%Y-%m-%d--%H)
echo Dumping database at production-$NOW.sql
mysqldump --user=USERNAME --password=PASSWORD --routines DBNAME > /var/mysqlBackups/production-$NOW.sql
echo Zipping production-$NOW.sql
gzip /var/mysqlBackups/production-$NOW.sql
printf "Completed backup and gzip\n\n"
I have tried adding my environment variables to the script, but that does nothing, and this is not a script that I can just run manually.
Any ideas, or places to start? I am going crazy.
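For what it's worth, a more defensive variant that is sometimes used to rule out PATH problems and overlapping cron runs looks like the sketch below; the absolute binary paths and the flock lock file are assumptions, not a confirmed fix for this case.
#!/bin/bash
# Defensive sketch: fail fast, use absolute paths, and hold a lock so two
# overlapping cron invocations cannot write the same dump at once.
set -euo pipefail
exec 9>/var/lock/mysql-backup.lock
flock -n 9 || exit 1

NOW=$(date +%Y-%m-%d--%H)
OUT=/var/mysqlBackups/production-$NOW.sql

/usr/bin/mysqldump --user=USERNAME --password=PASSWORD --routines DBNAME > "$OUT"
/bin/gzip "$OUT"
printf "Completed backup and gzip\n\n"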
I use the following .bat script
set varSearch="C:\Users\User1\Desktop\Test-folder\*.crypt8"
for /f %%i in ('dir %varSearch% /B ') do set varSearch= %%i
WhatsAppViewer.exe -decrypt8 %myName% key exp.db
sqlite3.exe exp.db<command.txt
cd C:\xampp\mysql\bin
mysql -u admin -p1234 < query.txt
The basic function is to find a file ending with .crypt8, decrypt it, save it as CSV, and import it into MySQL. It's working correctly.
But I need some extra features.
Case 1
The folder contains more than one file, and every file has to be processed, but only once.
Case 2
Every day at least one file gets added. It would be superb if the .bat could be scheduled as a task, run every night, and process only the newly added files.
Does anybody have a solution for this?
Case 2
The forfiles command processes groups of files based on date. The following handles files made today only.
forfiles /d 0 /m *.crypt8 /c "cmd /c echo #fname in #path"
Case 1
Your code has errors; it may work, but not under all conditions.
The easiest way is to put the sequence of commands in a batch file that takes a file (%1) passed on the command line, and use forfiles to call it.
I am having difficulty importing large datasets into Couchbase. I have experience doing this very fast with Redis via the command line but I have not seen anything yet for Couchbase.
I have tried using the PHP SDK and it imports about 500 documents per second. I have also tried the cbdocloader script in the Couchbase bin folder, but it seems to want each document in its own JSON file. It is a bit of work to create all these files and then load them. Is there some other import process I am missing? If cbdocloader is the only way to load data fast, then is it possible to put multiple documents into one JSON file?
Take the file that has all the JSON documents in it and zip up the file:
zip somefile.zip somefile.json
Place the zip file(s) into a directory. I used ~/json_files/ in my home directory.
Then load the file or files by the following command:
cbdocloader -u Administrator -p s3kre7Pa55 -b MyBucketToLoad -n 127.0.0.1:8091 -s 1000 \
~/json_files/somefile.zip
Note: '-s 1000' is the memory size. You'll need to adjust this value for your bucket.
If successful you'll see output stating how many documents were loaded, success, etc.
Here is a brief script to load up a lot of .zip files in a given directory:
#!/bin/bash
JSON_Dir=~/json_files
for ZipFile in "$JSON_Dir"/*.zip
do
    /Applications/Couchbase\ Server.app/Contents/Resources/couchbase-core/bin/cbdocloader \
        -u Administrator -p s3kre7Pa55 -b MyBucketToLoad \
        -n 127.0.0.1:8091 -s 1000 "$ZipFile"
done
UPDATE: Keep in mind this script will only work if your data is formatted correctly and each document is smaller than the maximum single document size of 20 MB (not the zip file, but any document extracted from the zip).
I have created a blog post describing bulk loading from a single file as well and it is listed here:
Bulk Loading Documents Into Couchbase
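If you do end up needing one document per file (the part the question calls "a bit of work"), a rough shell sketch with jq can generate the files from a single JSON array and zip them up; jq, the file names, and the array layout are all assumptions here:
#!/bin/bash
# Rough sketch: explode a JSON array file into one document per file,
# then zip the results so they can be fed to cbdocloader as above.
set -e
SRC=big.json          # assumed input: a single JSON array of documents
OUT=~/json_files
mkdir -p "$OUT"

i=0
jq -c '.[]' "$SRC" | while IFS= read -r doc; do
    printf '%s\n' "$doc" > "$OUT/doc_$i.json"
    i=$((i + 1))
done

# Bundle the per-document files into one zip for cbdocloader.
(cd "$OUT" && zip -q docs.zip doc_*.json)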
I have the following file dumped daily into one of our online directories:
dat-part2-489359-43535-toward.txt
The numbers change each day randomly.
I have the following code to try and LOAD the file:
mysql_query("LOAD DATA LOCAL INFILE 'dat-part2-%-toward.txt'
REPLACE INTO TABLE my_table
FIELDS TERMINATED BY ',' ENCLOSED BY ''
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES") or die(mysql_error());
And of course no luck. What's the best way to do this?
Assuming this is a scheduled job, why not check the directory for the most recent file that matches your filename template? Store the name of that file in a variable and then substitute the variable into your query. Check out glob().
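The same newest-matching-file idea, sketched in shell rather than PHP purely as an illustration (directory, credentials, and database name are placeholders):
#!/bin/bash
# Pick the most recent file matching the template and feed a LOAD DATA
# statement to the mysql client. All names here are placeholders.
DIR=/path/to/online/directory
LATEST=$(ls -t "$DIR"/dat-part2-*-toward.txt | head -n 1)

mysql --local-infile=1 -u USER -pPASSWORD my_database <<SQL
LOAD DATA LOCAL INFILE '$LATEST'
REPLACE INTO TABLE my_table
FIELDS TERMINATED BY ',' ENCLOSED BY ''
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
SQL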
I would do it via a shell script or Windows cmd file. This assumes you created the file with mysqldump or some program that creates a valid SQL script.
You can run dir /B at the Windows command prompt to get a directory listing. So do a dir /B > input.txt, then use Windows scripting to read the input file and, for each line, pipe the file into the mysql client.
@echo off
set infile=%1
for /f "usebackq tokens=* delims=" %%i in ("%infile%") do (
    mysql -u userName --password=pwd < "%%i"
)
It's been a long time since I wrote any windows scripts, but that should give you an idea of an approach.
I have a directory with a bunch of .sql files that mysql dumps of each database on my server.
e.g.
database1-2011-01-15.sql
database2-2011-01-15.sql
...
There are quite a lot of them actually.
I need to create a shell script, or probably a one-liner, that will import each database.
I'm running on a Linux Debian machine.
I'm thinking there is some way to pipe the results of an ls into some find command or something...
Any help and education is much appreciated.
EDIT
So ultimately I want to automatically import one file at a time into the database.
E.g. if I did it manually on one it would be:
mysql -u root -ppassword < database1-2011-01-15.sql
cat *.sql | mysql? Do you need them in any specific order?
If you have too many to handle this way, then try something like:
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads in each file instead of having it read from stdin.
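For example, with explicit (placeholder) credentials it would look like:
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch -u root -p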
One-liner to read in all .sql files and import them:
for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done
The only trick is the bash substring replacement to strip out the .sql to get the database name.
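If you prefer it spelled out, the same loop using suffix removal (${SQL%.sql}), which only strips a trailing .sql rather than the first match, looks like this:
for SQL in *.sql
do
    DB=${SQL%.sql}            # strip the trailing ".sql" to get the database name
    echo "importing $DB"
    mysql "$DB" < "$SQL"
done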
There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:
for i in *.sql
do
echo "file=$i"
mysql -u admin_privileged_user --password=whatever your_database_here < $i
done
mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.
I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
I don't remember the exact mysql syntax, but it will be something like this:
find . -name '*.sql' | xargs mysql ...
I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.
Here it is on GitHub
It's simple and straightforward; it allows you to specify MySQL connection parameters, and will decompress gzipped SQL files on the fly. It assumes you have a file per database, and that the base of the filename is the desired database name.
So:
myload foo.sql bar.sql.gz
This will create the databases "foo" and "bar" (if they don't already exist) and import the SQL file into each.
For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).
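For reference, the dump side of that workflow usually looks something like the sketch below. This is not the author's mydumpall, just the common per-database pattern; it assumes credentials come from ~/.my.cnf or the usual command-line options.
#!/bin/bash
# Common pattern: dump each database to its own dated .sql.gz file.
# Not the author's mydumpall; credentials are assumed to be configured.
DATE=$(date +%Y-%m-%d)
OUT=/var/mysqlBackups

for DB in $(mysql -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema|sys)$')
do
    mysqldump --routines "$DB" | gzip > "$OUT/$DB-$DATE.sql.gz"
done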