Mongoimport without mongoexport - json

Suppose I have ssh access to a server with MongoDB on it, but the server does not have mongoexport installed and I cannot install it. I can use mongo, either interactively or by feeding it a script. I want to export a subset of the data and import it on my local computer. Ideally, I'd like to run a script or command that saves the data in the same format as mongoexport, so I can import it with mongoimport locally. https://stackoverflow.com/a/12830385/513038 doesn't work (it puts extra line breaks in the results), and neither does using printjsononeline instead, because some values get printed differently and I end up with "Bad characters" and "expecting number" errors when I run mongoimport.
Any ideas? Again, I'd like to use mongoimport if possible, but other sufficiently workable ways are acceptable, as well.
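For what it's worth, this is the kind of pipeline I have in mind (just a sketch; JSON.stringify emits plain JSON rather than mongoexport's extended JSON, so types like ObjectId and dates get flattened, and the db/collection names are placeholders):
# on the server, print one JSON document per line
mongo --quiet mydb --eval 'db.mycoll.find({some: "query"}).forEach(function(d) { print(JSON.stringify(d)); })' > subset.json
# locally, after copying subset.json down
mongoimport --db mydb --collection mycoll --file subset.json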

Related

Executing binary SQL file using SQLCMD from WiX

I'm trying to install an SQL script (SSDT) using SQLCMD, as this script contains too many SSDT definitions and cannot be run by the WiX SQL extension.
I want my SQL script file to be a binary (as I don't want it to stay on the target machine).
How can I set up the SQLCMD command to use the binary script (with -i)?
P.S.
I tried this blog:
http://neilsleightholm.blogspot.co.il/2008/08/executing-sqlcmd-from-wix.html
but that code doesn't show the link between the binary SQL file and the SQLCMD command.
Can someone help me with the correct code?
This is the code I used, which did not work for me:
<Binary Id="CreateSchema.sql" SourceFile="..\SQL\CreateSchema.sql" />
<CustomAction Id="sqlcmd.cmd"
Property="sqlcmd"
Value=""sqlcmd.exe" -S [DATABASE_SERVER]
-i "[#CreateSchema.sql]" -v var=SYSTEM_USER -o [INSTALLDIR]installSql.log" />
<CustomAction Id="sqlcmd"
BinaryKey="WixCA"
DllEntry="CAQuietExec"
Return="check"
Execute="deferred"
Impersonate="yes" />
<InstallExecuteSequence>
<Custom Action="sqlcmd.cmd" After="InstallFiles">NOT Installed</Custom>
<Custom Action="sqlcmd" After="sqlcmd.cmd">NOT Installed</Custom>
</InstallExecuteSequence>
The log file showed that the -i parameter did not have any file name value:
MSI (s) (4C:6C) [09:58:15:610]: Executing op: CustomActionSchedule(Action=sqlcmd,ActionType=1025,Source=BinaryData,Target=CAQuietExec,CustomActionData="sqlcmd.exe" -S (local) -i "" -v var=SYSTEM_USER -o C:\installSql.log)
That's not how <Binary> works. The [#FileID] syntax is used to dynamically resolve, at install time, the full installed path of a component's file.
Binaries are typically used as temporarily extracted files for custom actions or, in this case, SQL files, among other things.
Consider looking into the SQL extension in WiX. As a minimal example, take a look at this code.
Add the sql namespace xmlns:sql="http://schemas.microsoft.com/wix/SqlExtension"
<Binary Id="CreateSchema" SourceFile="..\SQL\CreateSchema.sql" />
<sql:SqlDatabase Id="MyDB" Database="[DATABASE]" Server="[DATABASE_SERVER]" />
And in a component you can add
<sql:SqlScript Id="CreateSchemaScript" BinaryKey="CreateSchema" ExecuteOnInstall="yes" Sequence="1" SqlDb="MyDB"/>
Here is a link to the SQL extension schema definition with all the available elements. I haven't done much with the SQL extension, so you may need to do some reading to get a better idea of what you will need to do to create your DB on install.
As I mentioned, I wanted to use both SQLCMD (since my SQL script is in SSDT format) and a binary file (so the file is deleted at the end of the install).
After looking for answers I understood that I cannot use the WiX [#filekey] syntax, because the binary file is not extracted unless a custom action explicitly uses it.
So in the end I understood that the best way is to extract the binary file myself.
The steps I used, in one single custom action, are:
extract the binary SQL script from the MSI Binary table
save this file locally
run SQLCMD with -i and the new file path (the one I saved to)
delete the SQL file
One issue worth mentioning: if you save the file to INSTALLDIR, that directory may not exist yet at the time the custom action runs, so consider saving to the temp folder or creating the directory beforehand.

Linux shell script command - gzip

I have a shell script on Linux whose output is generated in .csv format.
At the end of the script I convert the .csv to .gz to reduce the space used on my machine.
The generated file comes in this format: Output_04-07-2015.csv
The command I have written to zip it is: gzip Output_*.csv
But I am facing an issue: if the file already exists, it should make a new file with the current time stamp.
Can anyone help me with this?
If all you want is to just overwrite the file if it already exists, gzip has a -f flag for it.
gzip -f Output_*.csv
What the -f flag does is forcefully create the gzip file, overwriting whatever existing gzip file might already be there.
Have a look at the man pages by typing man gzip or even this link for many other options.
If instead you want to handle this more elegantly, you could add a check in the script itself, but how you do that depends on which shell you have: bash, csh, etc.
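If keeping the old archive is the goal, a rough bash sketch along these lines (using the Output_*.csv naming from the question; the time suffix is just an illustration) would fall back to a timestamped name when the .gz already exists:
for csv in Output_*.csv; do
  gz="$csv.gz"
  if [ -e "$gz" ]; then
    # a .gz for this csv already exists: write a new archive with a time suffix instead
    gzip -c "$csv" > "${csv%.csv}_$(date +%H-%M-%S).csv.gz" && rm "$csv"
  else
    gzip "$csv"
  fi
done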

Importing zipped files in Mysql using CMD

I am trying to import zipped database files into MySQL from the command prompt, using the following command
7z < backup.sql.7z | mysql -u root test
The root user doesn't have any password associated with it.
test is my target blank database.
I use 7-Zip for unzipping.
The zipped database, i.e. backup.sql.7z, is located on the D drive.
But it gives the following error
So, instead I used the following command
7z < backup.7z | mysql -u root test
Note: This time I am using backup.7z instead of backup.sql.7z
But then I get the following error
Clearly there's something wrong with my SQL syntax.
What would be the correct syntax to use, then?
I needed to import from a compressed file as well, and stumbled upon your question.
After a bit of messing around, I found that this worked for me:
7z x -so backup.7z | mysql -u root test
x is the extraction command
-so makes 7-zip write to stdout
Nothing is wrong with your syntax; it's just a limitation of 7-Zip. It's better to use xz in this case, which doesn't put extraneous junk on stdout, or to call 7z.dll directly from your favorite programming language. 7z.exe is really meant for archive management rather than unix-style piping, and Igor is very reluctant to change that.
If you try a plain 7z < somefile.7z you'll immediately see that all you get back is a usage list.
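For example, if the dump were re-compressed with xz (the backup.sql.xz name here is hypothetical), the pipe would look like this:
# -d decompresses, -c writes to stdout, so mysql reads the plain SQL directly
xz -dc backup.sql.xz | mysql -u root test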

Can't find a frm file when trying to import data into MySQL

I am trying to import an English Wikipedia dump into MySQL so I can use the JWPL library to work with it.
I installed MySQL, created a database named wikidump, ran an SQL script that created the needed tables, and tried to run the following import command to load the data:
mysqlimport -u root -p --local --default-character-set=utf8 wikidump `pwd`/*.txt
When I do so, I get the following error:
msqlimport: Error: 1017,can't find file: '.\wilidump\#002.frm' <errno:22> when using table:*
I ran the command from the root directory of the files to import. Is this okay?
Is this a problem with the db or with the files I am trying to import?
Any clues on what to do next?
(Sorry if it's a simple question and I'm just missing something simple; I am a newbie to SQL and I did my best searching for an answer.)
I got this message once when I tried to read in gzipped data files and needed to uncompress them first...
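(i.e. something along these lines before running mysqlimport, assuming the data files were gzipped .txt files:)
# decompress the data files in place so mysqlimport can read them
gunzip *.txt.gz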
I got the problem too.
It seems that the command doesn't support the "*" wildcard. My way to solve the problem was to list all the file names in another file, use the shell to prepend "mysqlimport ......" to every file name, and then use that file as a script to repeat the import command for all the files.
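The same idea can also be written as a simple loop (a sketch using the options from the question; note that -p will prompt for the password on every iteration):
# run mysqlimport once per data file instead of passing a single * glob
for f in "$PWD"/*.txt; do
  mysqlimport -u root -p --local --default-character-set=utf8 wikidump "$f"
done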

Import Multiple .sql dump files into mysql database from shell

I have a directory with a bunch of .sql files that are mysqldumps of each database on my server.
e.g.
database1-2011-01-15.sql
database2-2011-01-15.sql
...
There are quite a lot of them, actually.
I need to create a shell script, or probably a single line, that will import each database.
I'm running on a Debian Linux machine.
I'm thinking there is some way to pipe the results of an ls into some find command or something...
Any help and education is much appreciated.
EDIT
So ultimately I want to automatically import one file at a time into the database.
E.g. if I did it manually on one it would be:
mysql -u root -ppassword < database1-2011-01-15.sql
cat *.sql | mysql? Do you need them in any specific order?
If you have too many to handle this way, then try something like:
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads each file itself instead of having it read from stdin.
One-liner that reads in all .sql files and imports them:
for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done
The only trick is the bash substring replacement to strip out the .sql to get the database name.
There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:
for i in *.sql
do
echo "file=$i"
mysql -u admin_privileged_user --password=whatever your_database_here < $i
done
mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.
I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
I don't remember the syntax of mysqldump, but it will be something like this:
find . -name '*.sql'|xargs mysql ...
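One caveat with this form: mysql reads SQL from stdin rather than taking .sql file names as arguments, so something along these lines (a sketch, reusing the root/password credentials from the question) may be closer to what's needed:
# redirect each file into mysql via a small shell wrapper,
# since mysql does not accept .sql files as positional arguments
find . -name '*.sql' -print0 | xargs -0 -I{} sh -c 'mysql -u root -ppassword < "$1"' _ {}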
I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.
Here it is on GitHub
It's simple and straightforward; it allows you to specify mysql connection parameters, and will decompress gzipped sql files on the fly. It assumes you have one file per database, and that the base of the filename is the desired database name.
So:
myload foo.sql bar.sql.gz
will create databases called "foo" and "bar" (if they don't exist), and import the sql file into each.
For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).
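For reference, the core of that dump side is roughly this (a sketch, not the actual mydumpall script; it assumes credentials come from ~/.my.cnf and skips the built-in system schemas):
# one compressed dump per database, named like database-YYYY-MM-DD.sql.gz
for db in $(mysql -N -e 'SHOW DATABASES' | grep -vE '^(information_schema|performance_schema|mysql|sys)$'); do
  mysqldump "$db" | gzip > "${db}-$(date +%F).sql.gz"
done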