I'm new to PostgreSQL. I'm trying to import a JSON file into a PostgreSQL table. I created an empty table:
covid19=# CREATE TABLE temp_cov(
covid19(# data jsonb
covid19(# );
and tried to copy my data from the JSON file into this table with this command on the command line:
cat output.json | psql -h localhost -p 5432 covid19 -U postgres -c "COPY temp_cov (data) FROM STDIN;"
The output was just "COPY 1", but when I open my table in psql with
SELECT * FROM temp_cov;
this command seems to run without end and produces this output.
Unfortunately, I couldn't find an answer or a similar problem with a solution. Thank you in advance for your advice.
Also, my JSON file has already been converted to "not pretty" form, and it has more than 11k lines.
Your data is there. psql is sending the row to the pager (likely more), and the pager can't display it very usably because it is too big. You can turn off the pager (\pset pager off inside psql) or set the pager to a better program (PAGER=less or PSQL_PAGER=less as environment variables), but none of those is going to be all that useful for viewing giant JSON data.
You have your data in PostgreSQL; now what do you want to do with it? Just looking at it in psql's pager is unlikely to be interesting.
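For example, to skip the pager and pull individual fields out of the jsonb column instead of dumping the whole document, you could run something like this inside psql (a sketch; the key names 'country' and 'cases' are only placeholders for whatever keys your JSON actually has):
\pset pager off
-- count the rows that were loaded (should match the COPY 1 you saw)
SELECT count(*) FROM temp_cov;
-- extract individual keys from the jsonb column; key names are assumed
SELECT data->>'country' AS country, data->>'cases' AS cases
FROM temp_cov
LIMIT 10;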
I have hundreds of SQL files, and I want to restore each of them into a different database, one database per file.
I looked around for a solution, but what I found is something like concatenating all the files into one SQL file using cat and then restoring from that concatenated file.
But what I want is to restore each file into a different database, so I think concatenation is not suitable for my case.
Here's one solution: alternate USE commands with your sql files, so you change the default database before the respective database's content. Gather the whole collection together and then pipe that to the input of the mysql client.
Example using bash syntax:
(
echo "USE database1;"
cat file1.sql
echo "USE database2;"
cat file2.sql
...
) | mysql
Another solution is to run the mysql client once for each file, and specify the database name as the argument:
mysql database1 < file1.sql
mysql database2 < file2.sql
...
Re your comment:
You can write a loop in bash too.
for file in *.sql
do
db=...
mysql $db < $file
done
The tricky part above is the "...": deciding which db goes with each input SQL file. You haven't described any way to match them, so I don't know how you'd figure that out. But if you can infer the database name from the filename somehow, then you can do this without having to type a command for every file.
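For instance, if every file happens to be named after its target database (database1.sql goes into database1), a sketch of that loop could look like this; the CREATE DATABASE step is an extra assumption in case the databases do not exist yet:
for file in *.sql
do
  # derive the database name from the file name (assumes file name == db name)
  db="${file%.sql}"
  # create the database if needed, then load the dump into it
  mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
  mysql "$db" < "$file"
done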
I have a simple question: I want to be able to store an SQL query response in its tabular form, with the dotted lines. So when I hit MySQL from the command-line interface like mysql -h${sqlhost} -u${sqluser} -p${sqlpass} -e "SELECT * FROM test.employee" > output.txt I should be able to store the structured output in the text file, in a Linux environment (even Windows would do).
I should be able to store the structured view above into, say, an 'output.txt' file.
Use the --table switch, as per the documentation: https://dev.mysql.com/doc/refman/5.7/en/mysql-shell-output-table-format.html
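For the classic mysql command-line client, the matching switch is --table (or -t), which forces tabular output even when -e is used. A sketch based on the command from the question, with the connection variables kept as placeholders:
mysql -h"${sqlhost}" -u"${sqluser}" -p"${sqlpass}" --table \
  -e "SELECT * FROM test.employee" > output.txt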
I have to generate a CSV file of the full database/table whenever a new row is added to the table.
So is there any script with which I can generate the CSV file?
I use MySQL to store data in the database from an HTML form.
Please help.
Finally, I found a nice tutorial on exporting data from a database to a CSV file.
There is also this answer on Stack Overflow:
sqlcmd -S . -d DatabaseName -E -s, -W -Q "SELECT * FROM TableName" > C:\Test.csv
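Note that sqlcmd is the SQL Server client. Since the question is about MySQL, a rough equivalent is to take the tab-separated batch output of the mysql client and turn the tabs into commas (a sketch; host, credentials and table name are assumed, and it does no quoting, so it only works if the data itself contains no tabs or commas):
# GNU sed understands \t as a tab; on other seds, type a literal tab instead
mysql -h localhost -u root -p --batch --raw \
  -e "SELECT * FROM mydb.mytable" | sed 's/\t/,/g' > /tmp/mytable.csv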
Alternatively, you can use Skyvia, a cloud solution with native support for CSV export from a MySQL database. Just type in the query or use the Query Designer for a no-code solution, then export the results to CSV. See an example below:
(screenshot: a MySQL query in Skyvia exported to CSV)
After the results appear, simply click the CSV button and a CSV download will appear in your browser.
I created a shell script that builds a string containing the table-creation command for db2. For example:
string=" db2 "CREATE TABLE foo (......... ""
Now my script connects to the database and passes the string to db2, which creates the table. Before the shell sends the string, I enabled the following command option in db2:
db2 update command options using z on test-database.txt
so that all of the output is saved to a text file.
However, my problem is that I want the string itself to show up in the output file created by db2, just like when you type the CREATE TABLE statement interactively in db2, but it never shows up in the output file. The file only shows whether the table was created successfully or not, e.g. test-database.txt contains:
The SQL command completed successfully.
Is there a way to make the output file show the creation of the table? Thanks in advance.
You are talking about the options of the db2 command-line processor (db2clp), and there are many of them.
If I understood correctly, you are writing a script (a bash script, I think) and you want to retrieve the command output. For this, you have two options:
Write the command output into a file, and then read the file.
Redirect the command output to a variable.
The first option is the easier one. It uses the z option, which writes the whole output to a file. You can adjust what gets captured just by printing out what you want and then redirecting the output to a file.
db2 -tf myfile.sql -z /tmp/output
VAR=$(cat /tmp/output)
The second option is a little tricky, because the command substitution creates another shell, so you have to reload the db2 profile inside it. This option uses the v option, which echoes each command to standard output, and hopefully that output is what you want to have.
VAR=$(. ~db2inst1/sqllib/db2profile ; db2 -tvf myfile.sql)
Finally, you just need to process the content of VAR, via awk, sed, grep, etc.
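For example, a quick success check on the captured output could look like this (a sketch; the message text is the one db2 prints on success, as shown above):
if echo "$VAR" | grep -q "completed successfully"
then
    echo "Table created"
else
    echo "Table creation failed, db2 said:"
    echo "$VAR"
fi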
For more information: http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.admin.cmd.doc/doc/r0010410.html
What's the easiest way to get the data for a single table, delete a single table or break up the whole dump file into files each containing individual tables? I usually end up doing a lot of vi regex munging, but I bet there are easier ways to do these things with awk/perl, etc. The first page of Google results brings back a bunch of non-working perl scripts.
When I need to pull a single table from an SQL dump, I use a combination of grep, head and tail.
Eg:
grep -n "CREATE TABLE" dump.sql
This then gives you the line numbers for each one, so if your table is on line 200 and the one after is on line 269, I do:
head -n 268 dump.sql > tophalf.sql
tail -n 69 tophalf.sql > yourtable.sql
I would imagine you could extend upon those principles to knock up a script that would split the whole thing down into one file per table.
Anyone want a go doing it here?
Another bit that might help start a bash loop going:
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}'
That gives you a nice list of line numbers and table names like:
200 FooTable
269 BarTable
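Putting those pieces together, a sketch of a loop that writes one file per table (assuming the dump is called dump.sql and that each CREATE TABLE statement marks a section boundary) might look like this:
# "line_number table_name" pairs, produced exactly as above
grep -n "CREATE TABLE " dump.sql | tr ':`(' ' ' | awk '{print $1, $4}' > tables.txt
total=$(wc -l < dump.sql)
while read start name
do
  # a table's section ends one line before the next CREATE TABLE (or at end of file)
  end=$(awk -v s="$start" '$1 > s {print $1 - 1; exit}' tables.txt)
  end=${end:-$total}
  sed -n "${start},${end}p" dump.sql > "${name}.sql"
done < tables.txt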
Save yourself a lot of hassle and use mysqldump -T if you can.
From the documentation:
--tab=path, -T path
Produce tab-separated data files. For each dumped table, mysqldump
creates a tbl_name.sql file that contains the CREATE TABLE statement
that creates the table, and a tbl_name.txt file that contains its
data. The option value is the directory in which to write the files.
By default, the .txt data files are formatted using tab characters
between column values and a newline at the end of each line. The
format can be specified explicitly using the --fields-xxx and
--lines-terminated-by options.
Note: This option should be used only when mysqldump is run on the
same machine as the mysqld server. You must have the FILE privilege,
and the server must have permission to write files in the directory
that you specify.
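A typical invocation might look like the sketch below; the directory and database name are just examples, and as the note says the account needs the FILE privilege and the server must be allowed to write into that directory:
# writes one tbl_name.sql (schema) and one tbl_name.txt (data) file per table
mysqldump -u root -p --tab=/var/lib/mysql-files \
  --fields-terminated-by=',' --fields-enclosed-by='"' mydb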
This shell script will grab the tables you want and pass them to splitted.sql.
It’s capable of understanding regular expressions as I’ve added a sed -r option.
Also MyDumpSplitter can split the dump into individual table dumps.
Maatkit seems quite appropriate for this with mk-parallel-dump and mk-parallel-restore.
I am a bit late on this one, but in case it helps anyone: I had to split a huge SQL dump file in order to import the data into another MySQL server.
What I ended up doing was splitting the dump file using the split system command.
split -l 1000 import.sql splited_file
The above will split the sql file every 1000 lines.
Hope this helps someone