I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump", which is a great tool, but I don't have command-line access to this database.
I'd like the CREATE and INSERT commands to be generated at the same time - basically, the same output mysqldump would produce. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium-complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute PHP via an HTTP request to the httpd on the host, this will work.
So you should be able to run the following pure-PHP script straight from a secured directory under /www/, have the output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');

$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);

$dump = new MySQLDump('database', 'database_user', 'database_pass', 'localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using any scripting option your host provides, you can achieve this with only a little difficulty. You can create a secure web page or plain-text dump link, known only to you and sufficiently secured to prevent all unauthorized access. The script that builds the page/text contents would follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the CREATE statement that you could run on another server to recreate the table, and output the results to the web page. You may want to prepend "DROP TABLE IF EXISTS X;" before each CREATE statement in the generated output (in the output only, not in the queries you send).
Step 3: For each table name returned from step 1 again, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to each result row, converting it into an INSERT INTO tblX statement, before outputting the final transformed results to the web page/text file download.
The final web page/text download would contain all of the CREATE statements with their "DROP TABLE IF EXISTS" safeguards, followed by the INSERT statements. Save the output to your own machine as a ".sql" file, and execute it on any backup host as needed.
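Here is a minimal PHP sketch of those three steps, assuming a mysqli connection; the credentials are placeholders, and real data may need more careful handling (binary columns, very large tables, etc.):

<?php
// Minimal sketch of the three steps above; credentials are placeholders.
$db = new mysqli('localhost', 'database_user', 'database_pass', 'database');
header('Content-Type: text/plain');

$tables = $db->query('SHOW TABLES');
while ($row = $tables->fetch_row()) {
    $table = $row[0];

    // Step 2: DROP safeguard followed by the CREATE statement.
    echo "DROP TABLE IF EXISTS `$table`;\n";
    $create = $db->query("SHOW CREATE TABLE `$table`")->fetch_row();
    echo $create[1] . ";\n\n";

    // Step 3: turn every row into an INSERT statement.
    $rows = $db->query("SELECT * FROM `$table`");
    while ($data = $rows->fetch_row()) {
        $values = array_map(function ($v) use ($db) {
            return $v === null ? 'NULL' : "'" . $db->real_escape_string($v) . "'";
        }, $data);
        echo "INSERT INTO `$table` VALUES (" . implode(',', $values) . ");\n";
    }
    echo "\n";
}
?>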
I'm sorry you have to go through with this. Note that preserving the MySQL user accounts you need is something else entirely.
Use / install phpMyAdmin on your web server and click Export. Many web hosts already offer it as a pre-configured service, and it's easy to install if you don't already have it (pure PHP): http://www.phpmyadmin.net/
This allows you to export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily - and it works with older versions of PHP < 5.3 (unlike the mysqldump-php offered in another answer here).
I am aware that the question says "using queries", but I believe the point here is that any means necessary is sought when shell access is not available - that is how I landed on this page, and phpMyAdmin saved me!
Related
Is there any query that returns information about the current script that is running it?
For example, if I have a file at
/home/domain/public_html/script.php
I want to put a query in that file that returns the filename and path, i.e.:
/home/domain/public_html/script.php
or at least the base path to it:
/home/domain/public_html/
Please note: I know lots of methods to do this, but I specifically want to get this information back from the database, in the response, after running a query.
No, there is no way for the MySQL Server to know the name or path to the PHP script that is executing queries unless you tell it.
I've seen some projects that establish a coding practice to append a comment to the SQL queries with information to help you identify the source.
SELECT * FROM MyTable WHERE id = 123; /* File: script.php, Function: myFunction() */
You then see these comments appearing in the MySQL processlist, and in query logs.
You have to write code to put the comment into the SQL yourself:
$sql = sprintf("SELECT * FROM MyTable WHERE id = 123; /* File: %s, Function: %s() */",
    __FILE__, __FUNCTION__);
If you don't want to change the code, another option is to use New Relic application monitoring, which tracks the SQL queries in your code for you, without any need to modify it. The New Relic service costs money, but it's the best solution. If you want to explore cheaper alternatives, search Google for "alternatives to new relic".
I am using syslog-ng to parse some logs that I am receiving via a csv-parser. However, I want to achieve insert operations that are a bit more complex than the conventional insert done with the sql() destination in syslog-ng. Currently, the MySQL destination in my syslog-ng conf file looks like this:
destination d_sql_test {
    sql(
        type(mysql)
        host('<host>')
        username('<user>')
        password('<pass>')
        database('<db_name>')
        table('test')
        columns('col1')
        values('${val1}')
    );
};
However, this simply inserts the contents of val1 into the column col1. I want to be able to specify my insert "logic", as shown in the example in this question.
I am unsure where to actually do this, and whether it is even supported by syslog-ng.
I think you can do this if you can somehow make the decision within syslog-ng.
You could use an in-list() filter to check whether the username is already listed in a file. If it is not, you can send the log to the mysql destination, and also to another destination (possibly a program() destination) that updates the file containing the list of users and reloads syslog-ng to refresh the in-list() filter.
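A rough sketch of that first option; the list-file path and the s_net/p_csv names are assumptions standing in for your real source and parser:

filter f_new_user {
    # true when the parsed username is NOT yet in the list file
    not in-list("/etc/syslog-ng/known_users.list", value("val1"));
};

log {
    source(s_net);
    parser(p_csv);
    filter(f_new_user);
    destination(d_sql_test);
    # a second destination here could append the user to the list file
};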
Alternatively, you can write a syslog-ng template function in Python that implements the logic and, for example, sets a macro to 1 in the message if it should be sent to the database. Then you can use a filter for this macro in your log path with the mysql destination.
Or you can write a separate destination that does the work in Python: Writing syslog-ng destinations in Python
Also, you might want to post this question on the syslog-ng mailing list, where the developers notice it more easily.
I have coded a Ruby IRC bot, which is on GitHub (/ninjex/rubot), and it is producing some conflicting output with MySQL on a dedicated server I just purchased.
First, we have the connection to the database in the MySQL folder (in .gitignore), which looks similar to the following code block:
@con = Mysql.new('localhost', 'root', 'pword', 'db_name')
Then we have the actual function to query the database:
def db_query
  que = get_message                  # Grabs the query from the user, e.g. ./db_query SELECT * FROM words
  results = @con.query(que)          # Sends the query through the connection, e.g. @con.query("SELECT * FROM words")
  results.each { |x| chan_send(x) }  # For each row returned, send it to the channel
end
On my local machine, when running the command:
./db_query SELECT amount, user from words WHERE user = 'Bob' and word = 'hello'
I receive the output in IRC in an Array-like fashion: ["17", "Bob"], where 17 is amount and Bob is the user.
However, using this same function on my dedicated server results in output like 17Bob. I have attempted many changes in the code and tried to parse the data into its own variable, but 17Bob seems to come out as a single value, making it impossible to parse into something like an array, which I could then use to send the data correctly.
This seems odd to me on both my local machine and the dedicated server, as I was expecting the output to send 17 to the IRC first and then Bob, like:
17
Bob
For all the functions and source, you can check my GitHub (/Ninjex/rubot), though you may need to install some gems.
A few notes:
Make sure you are sanitizing the query from get_message, or you are opening yourself up to some serious security problems.
Ensure you are using the same versions of the mysql gem, Ruby, and MySQL on both machines. Differences in any of these may alter the expected output.
If you are at your wits' end and unable to resolve the underlying issue, you can always send a custom delimiter and use it to split (see the sketch below). Unfortunately, it will muck up the case that is actually working and will need to be stripped out there.
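A self-contained sketch of that last-resort idea; the pipe delimiter is an arbitrary choice, and it must be a character the data cannot contain:

row = ["17", "Bob"]
message = row.join('|')    # what the bot would send: "17|Bob"
fields  = message.split('|')  # what the receiver recovers: ["17", "Bob"]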
Here's how I would approach debugging the issue on the dedicated machine:
def db_query
  que = get_sanitized_message
  results = @con.query(que)
  require 'pry'
  binding.pry
  results.each { |x| chan_send(x) }
end
Add the pry gem to your Gemfile, or gem install pry.
Update your code to use pry: see above
This will open up a pry console when the binding.pry line is hit and you can interrogate almost everything in your running application.
I would take a look at results and see if it's an array. Just type results in the console and it will print out the value. Also type out results.class. It's possible that query is returning some special result set object that is not an array, but that has a method to access the result array.
If results is an array, then the issue is most likely in chan_send. Perhaps it needs to use something like puts vs. print to ensure there's a newline after each message. Is it possible that you have different versions of your codebase deployed? I would also add a sleep 1 within the each block to ensure this is not related to your handling of messages arriving at the same time.
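For reference, with the mysql gem each row yielded by query is already an Array of column values, so a minimal sketch like this (reusing @con and chan_send from the question) would send each field on its own line:

def db_query
  que = get_sanitized_message
  results = @con.query(que)
  results.each do |row|
    # row is an Array such as ["17", "Bob"]; send each field separately
    row.each { |field| chan_send(field.to_s) }
  end
end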
I used Leafe's stru2mysql.prg and vfp2mysql_upload.prg to create a .sql dump file from DBFs, and I connect to the MySQL database from VFP using ODBC. I know how to upload the sql dump file, but I need to automate the whole process: after creating the dump file, my Visual FoxPro program should upload it without a third party (automatically). I thought of using the source command, but that has to be run at the mysql prompt. The assumption here is that my end users don't know how to import (which most of them don't). Please advise on how I can automate importing the sql file into the MySQL database. Thank you.
I think what you are looking for are the various SQL* functions in Foxpro. See the VFP help or MSDN on SQLCONNECT (or SQLSTRINGCONNECT), SQLEXEC, and SQLDISCONNECT functions to get you started. Microsoft provided good examples on each in the documentation.
You may also want to use FILETOSTR to get the output from Leafe's programs into a string for the SQLEXEC function.
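Putting those together, a rough VFP sketch might look like this; the connection string and file name are placeholders, and depending on your MySQL ODBC driver settings you may need to split the dump into individual statements before calling SQLEXEC:

* Hypothetical sketch: connect, read the dump file, execute it, disconnect.
lnConn = SQLSTRINGCONNECT("Driver={MySQL ODBC 5.1 Driver};Server=myhost;Database=mydb;Uid=myuser;Pwd=mypass;")
IF lnConn > 0
    lcDump = FILETOSTR("forum_dump.sql")
    IF SQLEXEC(lnConn, lcDump) < 0
        = MESSAGEBOX("Import Failed", "System Message")
    ENDIF
    = SQLDISCONNECT(lnConn)
ELSE
    = MESSAGEBOX("Unable To Connect", "System Message")
ENDIF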
Here are the steps I use to take data from a Visual FoxPro database and upload it to a MySQL database. These are all put into a custom method on a form, fired by a command button. For example, the method would be 'uploadnewdata', and I pass parameters for whichever data tables I need.
1) Connect to the server - I use MySQL ODBC
2) Validate the user (this uses a SQLEXEC to pull the matching record from the users table):
IF m.WorkingDatabase <> -1
    nRetVal = SQLEXEC(m.WorkingDatabase, "SELECT * FROM users", "csrUsersOnServer")
    SELECT csrUsersOnServer
    SELECT userid FROM csrUsersOnServer ;
        WHERE ALLTRIM(UPPER(userid)) = ALLTRIM(UPPER(lcRanchUser)) ;
        AND ALLTRIM(UPPER(lcPassWord)) = ALLTRIM(UPPER(lchPassWord)) ;
        INTO CURSOR ValidUsers
    IF _TALLY >= 1
        * user validated - continue with the upload steps below
    ELSE
        = MESSAGEBOX("Your Premise ID Does Not Match Any Records On The Server", "System Message")
        RETURN 0
    ENDIF
ELSE
    = MESSAGEBOX("Unable To Connect To Your Database", "System Message")
    RETURN 0
ENDIF
3) Once that is successful, I create my base cursor (the one I'm sending from)
4) I then loop through that cursor, creating variables for the values in the fields
5) Then, using SQLEXEC and INSERT INTO, I upload each record (see the sketch after these steps)
6) Once the program has finished processing the cursor, it generates a messagebox with the 'finished' message and control returns to the form
All the user has to do is select the starting table and enter their login information
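A rough sketch of steps 4 and 5; the cursor, table, and field names here are made up for illustration, and VFP substitutes the ?-prefixed variables as parameters:

* Hypothetical sketch of the upload loop in steps 4 and 5.
SELECT csrBaseData
SCAN
    lcUser   = ALLTRIM(csrBaseData.userid)
    lnAmount = csrBaseData.amount
    IF SQLEXEC(m.WorkingDatabase, ;
        "INSERT INTO words (userid, amount) VALUES (?lcUser, ?lnAmount)") < 0
        = MESSAGEBOX("Insert Failed", "System Message")
        EXIT
    ENDIF
ENDSCAN
= MESSAGEBOX("Finished", "System Message")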
I've been trying to get a shell (bash) script to insert a row into a REMOTE database, but I've been having some trouble :(
The script is meant to upload a file to a server, get a URL, hash, and file size, connect to a remote MySQL database, and insert the data into an existing table. I've gotten it working up until the remote MySQL database bit.
It looks like this:
#!/bin/bash
zxw=randomtext
description=randomtext2
for file in "$#"
do
echo -n *****
ident= *****
data= ****
size=` ****
hash=`****
mysql --host=randomhost --user=randomuser --password=randompass randomdb
insert into table (field1,field2,field3) values('http://www.example.com/$hash','$file','$size');
echo "done"
done
I'm a total noob at programming so yeah :P
Anyway, I added the \ to escape the brackets, as I was getting errors. As it is right now, the script works fine until it connects to the MySQL database. It just connects to the MySQL database and doesn't run the insert command (and I don't even know if the insert command would work in bash).
PS: I've tried both mysql commands from the command line one by one, and they worked, though I defined the hash/file/size myself and didn't have the escaping "".
Anyway, what do you guys think? Is what I'm trying to do even possible? If so how?
Any help would be appreciated :)
The insert statement has to be sent to mysql as its input, not written as another line in the shell script, so you need to make it a "here document".
mysql --host=randomhost --user=randomuser --password=randompass randomdb << EOF
insert into table (field1,field2,field3) values('http://www.site.com/$hash','$file','$size');
EOF
The << EOF means take everything before the next line that contains nothing but EOF (no whitespace at the beginning) as standard input to the program.
This might not be exactly what you are looking for but it is an option.
If you want to bypass the annoyance of actually including your query in the sh script, you can save the query as a .sql file (useful sometimes when the query is REALLY big and complicated). This can be done with simple file IO in whatever language you are using.
Then you can simply include in your sh script something like:
mysql -u youruser -pyourpass -h remoteHost yourdb < query.sql &
This is called batch mode execution. Note that there must be no space between -p and the password; otherwise mysql prompts for a password and treats the next word as the database name (yourdb is the database to run the query against). Optionally, you can include the ampersand at the end so that line of the sh script does not block.
Also, if you are concerned about the same data being entered multiple times and your RDBMS getting inconsistent, you should explore MySQL transactions (COMMIT, ROLLBACK, etc.).
Don't use raw SQL from bash; bash has no sane facility for sanitizing the data beforehand. Generate a CSV file and upload that instead.
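For example, a minimal sketch of that approach, reusing the $hash, $file, and $size variables from the question's loop; the CSV path is an assumption, none of the values may contain commas, and local_infile must be enabled on the server:

#!/bin/bash
# Hypothetical sketch: append one CSV row per file, then bulk-load the file.
printf '%s,%s,%s\n' "http://www.example.com/$hash" "$file" "$size" >> /tmp/upload.csv

# The quoted 'EOF' keeps the shell from expanding anything inside the heredoc.
mysql --local-infile=1 --host=randomhost --user=randomuser --password=randompass randomdb << 'EOF'
LOAD DATA LOCAL INFILE '/tmp/upload.csv'
INTO TABLE `table`
FIELDS TERMINATED BY ','
(field1, field2, field3);
EOF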