Find the path to the script which is running the query - MySQL

Is there any query that returns information about the script that is running it?
For example, if I have a file at
/home/domain/public_html/script.php
I want to put a query in that file that returns the filename and path, i.e.:
/home/domain/public_html/script.php
or at least the base path to it:
/home/domain/public_html/
Please note: I know lots of methods to do this, but I specifically want to get this information back from the database, in the response to running a query.

No, there is no way for the MySQL Server to know the name or path to the PHP script that is executing queries unless you tell it.
I've seen some projects that establish a coding practice to append a comment to the SQL queries with information to help you identify the source.
SELECT * FROM MyTable WHERE id = 123; /* File: script.php, Function: myFunction() */
You then see these comments appearing in the MySQL processlist, and in query logs.
You have to write code to put the comment into the SQL yourself:
$sql = sprintf("SELECT * FROM MyTable WHERE id = 123; /* File: %s, Function: %s() */",
    __FILE__, __FUNCTION__);
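For example, run from the script in the question and inside myFunction(), the tagged query would show up in SHOW FULL PROCESSLIST and in the query logs roughly like this (illustrative; the comment content depends on where your code lives):
SELECT * FROM MyTable WHERE id = 123; /* File: /home/domain/public_html/script.php, Function: myFunction() */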
If you don't want to change the code, another good solution is New Relic Application Monitoring, which tracks the SQL queries your code runs without requiring any code changes. The New Relic service is costly, but it works very well. If you want to explore cheaper alternatives, search Google for "alternatives to New Relic".

Related

CakePHP Error: SQLSTATE[42S02] table not found - but it exists

You might read this kind of question every day, so I tried another Stack Overflow answer before asking:
CakePHP table is missing even when it exists
Anyway: the table I try to select data from does exist (quadruple-checked uppercase/lowercase!) and it also gets listed via $db->listSources().
Here's a screenshot of the query, the error message, and the result of listing all of the datasource's tables:
http://i.stack.imgur.com/CdhcV.png
Note: if I run this query manually in phpMyAdmin it works fine. I would say it's impossible to get the pictured output at one time in a view - now it's up to you to tell me the opposite. By the way, I am pretty sure I am using the correct datasource.
I should add that the MySQL server is hosted on another platform. Since I can use it from my localhost phpMyAdmin if I modify config.inc.php, I can promise it is not a firewall problem.
Written on behalf of xcy7e:
The mistake was to execute the query from the local model. Here's the code:
$conn = ConnectionManager::getDataSource('myDB');
$conn->query($query);
// instead of $this->query($query);
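For context, here is a minimal sketch of the working approach inside a controller action, assuming CakePHP 2.x, a 'myDB' connection defined in app/Config/database.php, and a hypothetical table name:
// Hedged sketch, assuming CakePHP 2.x; 'myDB' must exist in
// app/Config/database.php and example_table is a hypothetical name.
App::uses('ConnectionManager', 'Model');

$conn = ConnectionManager::getDataSource('myDB');
// Query the remote datasource directly instead of the model's default one
$rows = $conn->query('SELECT * FROM example_table');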

Backup database(s) using query without using mysqldump

I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command-line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump", which is a great tool, but I don't have command-line access to this database.
I'd like the CREATE and INSERT commands to be generated at the same time - basically, the same output as mysqldump. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium-complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute a script via an HTTP request to the httpd on the host, this will work:
You should be able to run the following pure-PHP script straight from a secure directory under /www/, have the output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');

$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);

$dump = new MySQLDump('database', 'database_user', 'database_pass', 'localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using any scripting option your host provides, you can achieve this with just a little difficulty. You can create a secure web page or straight text-dump link known only to you and sufficiently secured to prevent all unauthorized access. The script that builds the page/text contents could follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the create statement that you could run on another server to recreate the table, and output the results to the web page. You may want to prepend "DROP TABLE IF EXISTS x;" before each create statement generated by the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1 again, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to this query result before outputting it, converting each row into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download (a sketch of all three steps follows below).
The final web page/text download would contain all the create statements with "DROP TABLE IF EXISTS" safeguards, followed by all the insert statements. Save the output to your own machine as a ".sql" file, and execute it on any backup host as needed.
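Here is a minimal sketch of steps 1-3 in PHP, assuming a mysqli connection and string-quotable column values; it is illustrative, not a hardened backup tool:
<?php
// Sketch of the three steps above, assuming a mysqli connection.
// Error handling and binary-safe value handling are omitted for brevity.
$db = new mysqli('localhost', 'user', 'pass', 'database');

$tables = $db->query('SHOW TABLES');
while ($row = $tables->fetch_row()) {
    $table = $row[0];

    // Step 2: DROP guard plus the CREATE statement
    echo "DROP TABLE IF EXISTS `$table`;\n";
    $create = $db->query("SHOW CREATE TABLE `$table`")->fetch_row();
    echo $create[1] . ";\n";

    // Step 3: transform every row into an INSERT statement
    $result = $db->query("SELECT * FROM `$table`");
    while ($data = $result->fetch_row()) {
        $values = array_map(function ($v) use ($db) {
            return $v === null ? 'NULL' : "'" . $db->real_escape_string($v) . "'";
        }, $data);
        echo "INSERT INTO `$table` VALUES (" . implode(', ', $values) . ");\n";
    }
}
?>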
I'm sorry you have to go through this. Note that preserving the MySQL user accounts you need is something else entirely.
Use / install phpMyAdmin on your web server and click Export. Many web hosts already offer this as a pre-configured service, and it's easy to install (pure PHP) if you don't already have it: http://www.phpmyadmin.net/
This lets you export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily - and it works for older versions of PHP < 5.3 (unlike the mysqldump-php offered in another answer here).
I am aware that the question says 'using query', but I believe the point here is that any means necessary is sought when shell access is not available - that is how I landed on this page, and phpMyAdmin saved me!

Get Redmine custom field value to a file

I'm trying to create a text file that contains the value of a custom field I added in Redmine. I tried to get it with an SQL query in the create method of project_controller.rb (at line 80 in Redmine 1.2.0), as follows:
sql = Mysql.new('localhost', 'root', 'pass', 'bitnami_redmine')
rq = sql.query("SELECT value
                FROM custom_values
                INNER JOIN projects
                ON custom_values.customized_id = projects.id
                WHERE custom_values.custom_field_id = 7
                AND projects.name = '#{@project.name}'")
rq.each_hash { |h|
  File.open('pleasework.txt', 'w') { |myfile|
    myfile.write(h['value'])
  }
}
sql.close
This works fine if I test it in a separate file (with an existing project name instead of @project.name), so it may be a syntax issue, but I can't find what it is. I'd also be glad to hear any other solution to get that value.
Thanks!
(There's a very similar post here, but none of the solutions actually worked.)
First, you could use Project.connection.query instead of your own Mysql instance. Second, I would log the SQL (RAILS_DEFAULT_LOGGER.info "SELECT ...") and check that it's okay. And third, I would use the project's identifier instead of its name.
I ended up simply using params["project"]["custom_field_values"]["x"], where x is the custom field's id. I still don't know why the SQL query didn't work, but this is much simpler and faster.

Second RMySQL operation fails - why?

I am running a script that stores different datasets to a MySQL database. This works so far, but only sequentially, e.g.:
# write table1
replaceTable(con,tbl="table1",dframe=dframe1)
# write table2
replaceTable(con,tbl="table2",dframe=dframe2)
If I select both (I use StatET / Eclipse) and run the selection, I get an error:
Error in function (classes, fdef, mtable) :
unable to find an inherited method for function "dbWriteTable",
for signature "MySQLConnection", "data.frame", "data.frame".
I guess this has to do with the fact that my con is still busy when the second request is started. When I run the script line by line it works just fine. Hence I wonder: how can I tell R to wait until the first request is done and then go ahead? How can I make R scripts interactive (console-like, as in the plot examples - no Tcl/Tk)?
EDIT:
require(RMySQL)

replaceTable <- function(con, tbl, dframe){
  if(dbExistsTable(con, tbl)){
    dbRemoveTable(con, tbl)
    dbWriteTable(con, tbl, dframe)
    cat("Existing database table updated / overwritten.")
  } else {
    dbWriteTable(con, tbl, dframe)
    cat("New database table created")
  }
}
dbWriteTable has two important arguments:
overwrite: a logical specifying whether to overwrite an existing table or not. Its default is ‘FALSE’.
append: a logical specifying whether to append to an existing table in the DBMS. Its default is ‘FALSE’.
For past projects I have successfully achieved appending, overwriting, creating, etc. of tables with the proper combinations of these.

Why do our queries get stuck on the state "Writing to net" in MySQL?

We have a lot of queries
select * from tbl_message
that get stuck on the state "Writing to net". The table has 98k rows.
The thing is... we aren't even executing any query like that from our application, so I guess the question is:
What might be generating the query?
...and why does it get stuck on the state "writing to net"
I feel stupid asking this question, but I'm 99.99% sure that our application is not executing a query like that against our database... We are, however, executing a couple of queries against that table using a WHERE clause:
SELECT Count(*) as StrCount FROM tbl_message WHERE m_to=1960412 AND m_restid=948
SELECT Count(m_id) AS NrUnreadMail FROM tbl_message WHERE m_to=2019422 AND m_restid=440 AND m_read=1
SELECT * FROM tbl_message WHERE m_to=2036390 AND m_restid=994 ORDER BY m_id DESC
I have searched our application several times for select * from tbl_message but haven't found anything... But still, the query log on our MySQL server is full of Select * from tbl_message queries.
Since applications don't magically generate queries as they like, I think it's rather likely that there's a mistake somewhere in your application that's causing this. Here are a few suggestions you can use to track it down. I'm guessing that you're using PHP, since you're using MySQL, so I'll use that for my examples.
Try adding comments in front of all your queries in the application, like this:
$sqlSelect = "/* file.php, class::method() */ ";
$sqlSelect .= "SELECT * FROM foo ";
$sqlSelect .= "WHERE criteria";
The comment will show up in your query log. If you're using some kind of database API wrapper, you could potentially add these comments automatically:
function query($sql)
{
    $backtrace = debug_backtrace();
    // The function that executed the query
    $prev = $backtrace[1];
    $newSql = sprintf("/* %s */ ", $prev["function"]);
    $newSql .= $sql;
    mysql_query($newSql) or handle_error();
}
In case you're not using a wrapper, but rather executing the queries directly, you could use the runkit extension and the function runkit_function_rename to rename mysql_query (or whatever you're using) and intercept the queries.
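For illustration, here is a hedged sketch of that interception trick; it assumes the runkit extension is loaded and runkit.internal_override is enabled in php.ini, since mysql_query is an internal function:
// Hedged sketch: requires the runkit extension with
// runkit.internal_override = 1 set in php.ini.
runkit_function_rename('mysql_query', 'mysql_query_original');

function mysql_query($sql)
{
    $backtrace = debug_backtrace();
    // Tag each query with the name of the calling function
    $caller = isset($backtrace[1]) ? $backtrace[1]['function'] : 'global scope';
    return mysql_query_original(sprintf('/* %s */ %s', $caller, $sql));
}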
There are (at least) two data retrieval modes for MySQL. With the C API you either call mysql_store_result() or mysql_use_result().
mysql_store_result() returns once all result data has been transferred from the MySQL server to your process's memory, i.e. no data has to be transferred for further calls to mysql_fetch_row().
However, with mysql_use_result() each record has to be fetched from the server individually if and when mysql_fetch_row() is called; while the client is busy between fetches, the server-side thread sits in the "Writing to net" state. If your application does some computing between two calls to mysql_fetch_row() that takes longer than the period specified by net_write_timeout, the MySQL server considers your connection to be timed out.
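As a hedged PHP parallel: mysql_unbuffered_query() corresponds to mysql_use_result(), so slow per-row processing on the client side can leave the server stuck in "Writing to net" until the timeout hits:
// Illustrative sketch; process_row() is a hypothetical placeholder.
$result = mysql_unbuffered_query('SELECT * FROM tbl_message');
while ($row = mysql_fetch_row($result)) {
    // If this work takes longer than net_write_timeout between fetches,
    // the server may consider the connection timed out.
    process_row($row);
}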
Temporarily enable the query log by putting
log=
into your my.cnf file, restart MySQL, and watch the query log for those mystery queries (you don't have to give the log a name; the server will derive one from the host name).
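As a sketch, the relevant my.cnf fragment would look like this (newer MySQL versions spell the option general_log / general_log_file instead of the old bare log):
[mysqld]
# With no value, the server derives the log file name from the host name.
log=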