I am very new to MySQL and I want to export a list of subscribers to a CSV file. The subscribers are found in wp_post_notif_subscriber.
From here I am lost, so any help would be appreciated.
If I understand your question correctly, you want to get a list of all records in the table "wp_post_notif_subscriber" from a MySQL database to which you have access.
To get all the rows from some table, query the MySQL database with:
SELECT * FROM <table_name>
To get only some columns use:
SELECT <column_name1, column_name2, ...> FROM <table_name>
In your case, seems like you need:
SELECT * FROM wp_post_notif_subscriber
That should return a result set with the data you need. How you export it to a CSV file depends on the rest of your development environment.
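If the MySQL account has the FILE privilege, SELECT ... INTO OUTFILE can write the CSV directly on the database server. A rough sketch; the output path is only an example and must be allowed by the server's secure_file_priv setting:
SELECT *
INTO OUTFILE '/tmp/subscribers.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM wp_post_notif_subscriber;
Most GUI clients can also export the result of that query to CSV straight from their result grid.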
Hope this helps.
So I'm working on a legacy database, and unfortunately its performance is very poor. A simple select query can take up to 10 seconds on tables with fewer than 10,000 records.
So I tried to investigate the problem and found out that deleting the column they used to store files (mostly videos and images) fixes the problem and improves performance a lot.
Along with adding proper indexes, I was able to get the exact same query that used to take 10-15 seconds to run in under 1 second.
So my question is: is there an existing tool or script I can use to export those blobs (videos) from the database, save them to disk, and update each row with the new file name/path on the file system?
If not, is there a proper way to optimize the database so that those blobs don't impact performance as much?
Hint: some of the clients consuming this database use high-level ORMs, so we don't have much control over the queries the ORM uses to fetch rows and their relations. So I cannot optimize queries directly.
SELECT column FROM table1 WHERE id = 1 INTO DUMPFILE 'name.png';
How about this way?
There is also INTO OUTFILE instead of INTO DUMPFILE.
13.2.10.1 SELECT ... INTO Statement
The SELECT ... INTO form of SELECT enables a query result to be stored in variables or written to a file:
SELECT ... INTO var_list selects column values and stores them into variables.
SELECT ... INTO OUTFILE writes the selected rows to a file. Column and line terminators can be specified to produce a specific output format.
SELECT ... INTO DUMPFILE writes a single row to a file without any formatting.
Link: https://dev.mysql.com/doc/refman/8.0/en/select-into.html
Link: https://dev.mysql.com/doc/refman/8.0/en/select.html
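For the blob question above, the same DUMPFILE idea can be repeated per row, followed by an UPDATE that records the new path and clears the blob. A sketch for a single row, assuming hypothetical columns file_data and file_path; the target directory must be writable by the MySQL server and allowed by secure_file_priv, and in practice you would loop over the ids from a small script:
SELECT file_data FROM table1 WHERE id = 1
INTO DUMPFILE '/var/lib/mysql-files/file_1.bin';

UPDATE table1
SET file_path = '/var/lib/mysql-files/file_1.bin',
    file_data = NULL
WHERE id = 1;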
I refactored a project and wanted to try out PostgreSQL instead of MySQL. I now want to migrate the table contents of some tables.
The problem is, when I use a select query like this (don't bother about the names, it's just an example)
SELECT id AS id_x, name AS name_x, name2 AS name2_x
I want to export the table data and import it into PostgreSQL. The problem is that the syntax for INSERT INTO differs between MySQL and PostgreSQL. I don't want to export the whole table, because I also changed some parts of the structure and tried to make it more performant. So I just want to get the table data, but I need those AS x aliases, because the names of the columns have changed.
I already found several links on this topic.
I can use mysqldump to dump the table and set the --compatible=name parameter. The problem here is that I can't add a SELECT statement, right? I can only add a WHERE condition.
Then I could use the mysql command to export the query I want, but mysql doesn't have a --compatible parameter. How would I achieve that?
You could consider creating an intermediate table by issuing
CREATE TABLE temptable AS
SELECT id AS id_x, name AS name_x, name2 AS name2_x FROM oldtable;
And then, as a second step, export temptable using mysqldump with the --compatible= parameter.
See https://dev.mysql.com/doc/refman/5.7/en/create-table-select.html
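A hedged example of that second step, with yourdb and temptable standing in for the real names; --compatible=postgresql is accepted by the 5.7-era mysqldump (in MySQL 8.0 the option only accepts ansi):
mysqldump --compatible=postgresql --no-create-info yourdb temptable > temptable.sql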
I have this MySQL database with over a million records. I am not the owner of the database and don't have write/modify permissions to it. I have a small target db called MyDB that fetches some fields from the giant view. Now these are the problems I face working with the huge million-record table in MySQL Workbench.
GiantDB(MySQL database)
--gview(over a million records. No permissions to write)
+id(PK)
+name-String
+marks-Integer
+country-String
myDB(Target SQLite DB)
--mytable
+id(PK)
+name-String
So this is a rough sketch of these two databases. I am not able to query gview without setting row limits (to 1000).
count(*) doesn't work either.
My ultimate goal is to insert the million names into mytable from gview.
Inserting gview's fields into mytable takes forever and automatically gets killed.
Any way of doing this efficiently?
I looked around and people were talking about indexes and such. I am completely clueless about what to do. A clear explanation would be of great help. Thanks and regards.
(A million rows is a medium-sized table. Don't let its size throw you.)
From the comment thread it sounds like you're taking too long to read the result set from MySQL, because it takes time to create your rows in your output database.
Think of this as an export from MySQL followed by an import to sqlite.
You can do the export with MySQL Workbench's export feature, which itself uses the mysqldump command-line tool.
You then may need to edit the .sql file created by the export command so it's compatible with SQLite. Then import it into SQLite. There are multiple tools that can do this.
Or, if you're doing this in a program (a Python program, perhaps), try reading your result set from the MySQL database row by row and writing it to a temporary disk file.
Then disconnect from the MySQL database, open up your SQLite database and the file, read the file line by line and load it into the database.
Or, if you write the file so it looks like this
1,"Person Name"
2,"Another Name"
3,"More Name"
etc., you'll have a so-called CSV (comma-separated values) file. There are many tools that can load such files into SQLite.
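One concrete way to produce and load such a file, sketched with hypothetical file names and assuming mytable already exists with matching columns (mysql --batch -N prints tab-separated rows without a header, so the import uses tab mode):
mysql --batch -N -e "SELECT id, name FROM gview" GiantDB > gview.tsv
sqlite3 mydb.sqlite ".mode tabs" ".import gview.tsv mytable"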
Another choice: this will be mandatory if your MySQL database has very tight restrictions on what you can do. For example, they may have given you a 30-second query time limit. Ask your database administrator for help exporting this table to your SQLite database. Tell her you need a .csv file.
You should be able to say SELECT MAX(id) FROM bigtable to get the largest ID value. If that doesn't work the table is probably corrupt.
One more suggestion: fetch the rows in batches of, say, ten thousand.
SELECT id, name FROM bigtable LIMIT 0,10000
SELECT id, name FROM bigtable LIMIT 10000,10000
SELECT id, name FROM bigtable LIMIT 20000,10000
SELECT id, name FROM bigtable LIMIT 30000,10000
and so on.
This will be a pain in the neck, but it will get you your data if your dba is uncooperative.
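A variant of the same idea that tends to stay fast on big tables, assuming id is the indexed primary key: instead of ever-growing offsets, remember the largest id from the previous batch and continue from there.
SELECT id, name FROM bigtable WHERE id > 0 ORDER BY id LIMIT 10000;
-- then, using the largest id returned by the previous batch (say it was 10417):
SELECT id, name FROM bigtable WHERE id > 10417 ORDER BY id LIMIT 10000;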
I hope this helps.
I already know how to export the whole database, but how do you export selected rows?
Say I need to export data from this query: select * from users where gender="0"; How do you turn that into a dump file?
An easy way to do that:
Execute your query. In your case: select * from users where gender="0";
In the result section you will find the File: option. There you will see Export record set to an external file. Click it and save your file with your desired extension.
That's all.
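If you'd rather get a proper dump file than a spreadsheet-style export, mysqldump can filter rows with its --where option; a sketch, with yourdb standing in for the real database name:
mysqldump --where="gender='0'" yourdb users > users_gender0.sql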
This may sound strange, but is it possible to construct a SQL statement that searches all the tables in a database for a specific value? I'm testing another person's Drupal (v7) code, and that code uses the taxonomy_term_save function to import data in CSV format. I'd like to find the table where these data are stored. I don't know the field name either. Is it possible? I use MySQL.
SELECT * FROM tableNameHere WHERE columnNameHere = 'keyYouAreSearchingForHere';
That is for MySQL.
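The query above only checks a single, known table and column. If you don't know where the value lives, one workaround is to let information_schema generate that query for every text column in the database and then run the generated statements; a rough sketch, with yourDatabaseNameHere as a placeholder:
SELECT CONCAT(
  'SELECT ''', TABLE_NAME, '.', COLUMN_NAME, ''' AS hit, COUNT(*) ',
  'FROM `', TABLE_NAME, '` ',
  'WHERE `', COLUMN_NAME, '` = ''keyYouAreSearchingForHere'';'
) AS query_to_run
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'yourDatabaseNameHere'
  AND DATA_TYPE IN ('char', 'varchar', 'text');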