It annoys me that the following query takes 1 second to process when fired by an AJAX request, whereas the same query takes merely 2 ms when called during a synchronous page refresh. I have spent hours tracking down what goes wrong, but I am stuck. I have tried Model->read, Model->find, and Model->query(), yet each takes the same amount of time. I don't think 1 second is natural for a simple query like this. Maybe the CakePHP models are wasting too much time and resources, but my instinct says it's related to the query cache.
protected function _user_info($id = NULL) {
    // Benchmarking: start the timer
    $time = -microtime(true);
    if (!$id) {
        if ($this->Auth->loggedIn()) {
            $id = $this->Auth->user('id');
        } else {
            return NULL;
        }
    }
    $this->loadModel('User');
    /*
    $findOptions = array(
        'conditions' => array('User.id' => $id),
        'fields'     => 'User.id, User.name, User.email, User.role, dp',
        'limit'      => 1,
        'recursive'  => -1
    );
    $r = $this->User->find('first', $findOptions);
    */
    // Note: interpolating $id directly is vulnerable to SQL injection;
    // it is kept here only to benchmark the raw query path.
    $r = $this->User->query("SELECT * FROM users WHERE id = '" . $id . "' LIMIT 1");
    $time += microtime(true);
    echo '<h1>' . $time . '</h1>'; // out: time taken for the query
    return $r['User'];
}
Any kind of help would be awesome!
First, try the normal CakePHP find style:
// You should have containable
$this->User->contain();
$r = $this->User->find('first',array('conditions'=>array('id'=>$id)));
Test it.
Cheers.
If you're on debug 2, you're not just measuring execution time; you're measuring debugging overhead as well.
With debug enabled, the model cache won't be used for long, which means the database will be asked to DESCRIBE the table, an SQL log will be created, and expensive object reflection may be requested multiple times, especially if you hit warnings, exceptions, or non-fatal errors. All of this takes considerably longer.
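If the AJAX action is the only slow one, a quick sanity check is to time it with debugging off; a minimal sketch, assuming CakePHP 2.x, where the level is set in app/Config/core.php:

// app/Config/core.php (CakePHP 2.x)
// Debug 0 skips the SQL log, the per-request DESCRIBE of each table,
// and the rest of the overhead described above.
Configure::write('debug', 0);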
I have two tables: cache and main.
In cache there are a lot of fields; in main slightly fewer. A UNION is not going to work because of the unequal number of columns.
cache
client - file - target - many other columns
main
client - file - target - few other columns
From cache I would like all columns for rows where main.target LIKE '%string%', cache.client = main.client, and cache.file = main.file.
For these particular records, target, client, and file are always the same in main and cache.
I just can't get my head around this, but then again MySQL never was my strongest point.
Thank you very much in advance!
In the end, combining the two SELECT statements with a UNION made things very complicated, for the simple reason that there were countless other queries, some without a UNION, that all had to be processed by the same routine presenting the results. As this was a one-time query and time wasn't really an issue, I just ran separate SELECTs on the two tables and combined the results by checking whether a certain field was present. If not, the remaining values were fetched from the cache table; if so, they were fetched from the main table.
I actually wonder whether this solution is faster, slower or just as fast.
// If the row from the main table has no 'current' value, fall back
// to the cache table for the remaining fields.
if (!isset($row['current']))
{
    $field = $row['field'];
    // Note: interpolating values directly into the query is vulnerable
    // to SQL injection; prepared statements would be safer here.
    $sqlcache = "SELECT * FROM " . $dbtable . " WHERE (client = '$sqlclient' AND file = '$sqlfile' AND field = '$field')";
    $resultcache = $conn->query($sqlcache);
    if (!$resultcache)
    {
        die($conn->error);
    }
    $rowcache = $resultcache->fetch_assoc();
    $currenttarget = $rowcache['current'];
    $context       = $rowcache['context'];
    $dirtysource   = $rowcache['dirtysource'];
    $stringid      = $rowcache['stringid'];
    $limit         = $rowcache['maxlength'];
    $locked        = $rowcache['locked'];
    $filei         = $rowcache['filei'];
}
else
{
    $currenttarget = $row['current'];
    $context       = $row['context'];
    $dirtysource   = $row['dirtysource'];
    $stringid      = $row['stringid'];
    $limit         = $row['maxlength'];
    $locked        = $row['locked'];
    $filei         = $row['filei'];
}
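For reference, the single-query version originally asked about can be written as a join instead of a UNION; a minimal sketch, assuming the table and column names shown above and using '%string%' as a stand-in for the real search term:

// One query instead of two merged result sets: take every cache column
// for rows whose client and file match a main row whose target matches.
$sql = "SELECT cache.*
        FROM cache
        JOIN main ON main.client = cache.client
                 AND main.file   = cache.file
        WHERE main.target LIKE '%string%'";
$result = $conn->query($sql);
if (!$result) {
    die($conn->error);
}
while ($row = $result->fetch_assoc()) {
    // each $row now has all cache columns for the matching records
}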
So I'm doing a project for which I have made an Android application that communicates with a MySQL database (managed through phpMyAdmin). The Android app sends requests to PHP files, which in turn query the database; these PHP files are on the same server (000webhost). I have a bit of a problem, though: sometimes the connection doesn't work (e.g., the data doesn't come through, or I get a time-out error).
I have looked at the data traffic in phpMyAdmin and I think it is abnormally high (I'm not sure). The app is used by about 10 people, and the total should be about 1,000 queries a day, all simple SELECT and UPDATE actions. Yet the traffic status in phpMyAdmin reports 2.2 TiB of total data traffic, which I think is ridiculous. phpMyAdmin also shows a lot of warnings like "the rate of reading the first index entry is high" and "the rate of opening tables is high", and the monitor shows random peaks where 150 MiB is suddenly sent. Everything is incredibly high.
This is an example of a query I send to the database via PHP:
<?php
$con = mysqli_connect("*************");

$ID      = $_POST["ID"];
$naam    = $_POST["NAME"];
$inbier  = $_POST["INBIER"];
$outbier = $_POST["OUTBIER"];
$pof     = $_POST["POF"];
$adjust  = $_POST["ADJUST"];

// Prepared statement: look up rows by name.
// Note: SELECT * with mysqli_stmt_bind_result() only works if the table
// has exactly as many columns as variables bound below (six here);
// listing the columns explicitly would be safer.
$statement = mysqli_prepare($con, "SELECT * FROM Bierlijst WHERE NAME = ?");
mysqli_stmt_bind_param($statement, "s", $naam);
mysqli_stmt_execute($statement);
mysqli_stmt_store_result($statement);
mysqli_stmt_bind_result($statement, $ID, $naam, $inbier, $outbier, $pof, $adjust);

$response = array();
$response["success"] = false;

while (mysqli_stmt_fetch($statement)) {
    $response["success"] = true;
    $response["NAME"]    = $naam;
    $response["INBIER"]  = $inbier;
    $response["OUTBIER"] = $outbier;
    $response["POF"]     = $pof;
    $response["ADJUST"]  = $adjust;
}

echo json_encode($response);
?>
Is it possible that I'm running bad queries that are failing and looping?
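One way to rule that out is to stop failures from passing silently; a minimal sketch (an addition for illustration, reusing the query from the script above) that logs the MySQL error whenever a step fails:

// Check each step so failing queries surface in the error log rather
// than silently returning an empty response.
if (!$statement = mysqli_prepare($con, "SELECT * FROM Bierlijst WHERE NAME = ?")) {
    error_log("prepare failed: " . mysqli_error($con));
    exit(json_encode(array("success" => false)));
}
mysqli_stmt_bind_param($statement, "s", $naam);
if (!mysqli_stmt_execute($statement)) {
    error_log("execute failed: " . mysqli_stmt_error($statement));
    exit(json_encode(array("success" => false)));
}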
I have designed a coding platform, just like SPOJ and Codeforces, for competitions organised in my college over LAN.
I used long polling so that any announcement from the admin can be broadcast to all users as a JavaScript alert message. When anything is posted on the forum, the admin also gets a notification.
But with just 16 users (including the one admin) accessing the site, the server went down showing "too many SQL connections". I restarted my laptop (the server) and it worked for a while, then went down again with the same error message as before.
When I removed both long-poll processes, everything continued smoothly.
Server-side code for long-poll:
include 'dbconnect.php';

$old_ann_id = $_GET['old_ann_id'];

// Get the id of the latest announcement.
$resultann = mysqli_query($con, "SELECT cmntid FROM announcements ORDER BY cmntid DESC LIMIT 1");
while ($rowann = mysqli_fetch_array($resultann)) {
    $last_ann_id = $rowann['cmntid'];
}

// Poll until a newer announcement appears. Note: this loop never times
// out, and it holds its database connection open for the whole wait
// (usleep(10000000) sleeps for 10 seconds per iteration).
while ($last_ann_id <= $old_ann_id) {
    usleep(10000000);
    clearstatcache();
    $resultann = mysqli_query($con, "SELECT cmntid FROM announcements ORDER BY cmntid DESC LIMIT 1");
    while ($rowann = mysqli_fetch_array($resultann)) {
        $last_ann_id = $rowann['cmntid'];
    }
}

$response = array();
$response['msg'] = 'new';
$response['old_ann_id'] = $last_ann_id;

$resultann = mysqli_query($con, "SELECT announcements FROM announcements WHERE cmntid = $last_ann_id");
while ($rowann = mysqli_fetch_array($resultann)) {
    $response['announcement'] = $rowann['announcements'];
}

echo json_encode($response);
Max connections is a defined limit; the default is 100 or 151 connections, depending on the version of MySQL. You can see the value under "Server variables and settings" in phpMyAdmin (or directly by executing *show variables like "max_connections";* ).
If that is set to something very low (say 10) and you have (say) 15 users, you will hit the limit rapidly. You are giving each long-polling script its own connection, and that connection probably sits open until the script ends. You could reduce this by having the script disconnect each time it has checked the database, then reconnect for the next check (i.e., if your long-polling script checks the DB every 5 seconds, there are probably well over 4.5 seconds of each 5 in which a connection to the DB is open but not being used).
Even with a larger number of connections, if you trigger the AJAX polling multiple times per user, each user could hold several simultaneous connections. This is quite easy to do with a minor bug in your JavaScript.
Possibly worse, if you are using persistent connections you might leave connections open after the user has left the page that calls the long-polling script.
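For reference, a minimal sketch of reading that limit from PHP rather than phpMyAdmin (assuming a mysqli connection $con, as in the question's code):

// Read the server's connection limit with the same SHOW VARIABLES
// query mentioned above.
$res = mysqli_query($con, "SHOW VARIABLES LIKE 'max_connections'");
$row = mysqli_fetch_assoc($res);
echo $row['Variable_name'] . ' = ' . $row['Value'];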
EDIT - update based on your script.
Note: I am not sure exactly what your dbconnect.php include is doing. It might be possible to simply call connect/disconnect functions from that include, but in this example code I have just used the mysqli_close and mysqli_connect functions directly.
<?php
include 'dbconnect.php';

$old_ann_id = $_GET['old_ann_id'];

// Get the id of the latest announcement (aliased so the column can be
// fetched by name below).
$resultann = mysqli_query($con, "SELECT MAX(cmntid) AS cmntid FROM announcements");
if ($rowann = mysqli_fetch_array($resultann))
{
    $last_ann_id = $rowann['cmntid'];
}

$timeout = 0;

// Poll for at most six iterations (about a minute), closing the
// connection while sleeping so it is not held open between checks.
while ($last_ann_id <= $old_ann_id && $timeout < 6)
{
    $timeout++;
    mysqli_close($con);
    usleep(10000000); // sleep for 10 seconds
    clearstatcache();
    $con = mysqli_connect("myhost", "myuser", "mypassw", "mybd");
    $resultann = mysqli_query($con, "SELECT MAX(cmntid) AS cmntid FROM announcements");
    if ($rowann = mysqli_fetch_array($resultann))
    {
        $last_ann_id = $rowann['cmntid'];
    }
}

if ($last_ann_id > $old_ann_id)
{
    $response = array();
    $response['msg'] = 'new';
    $response['old_ann_id'] = $last_ann_id;
    // Return every announcement newer than the client's last-seen id.
    $resultann = mysqli_query($con, "SELECT cmntid, announcements FROM announcements WHERE cmntid > $old_ann_id ORDER BY cmntid");
    while ($rowann = mysqli_fetch_array($resultann))
    {
        $response['announcement'][] = $rowann['announcements'];
        $response['old_ann_id'] = $rowann['cmntid'];
    }
    mysqli_close($con);
    echo json_encode($response);
}
else
{
    echo "No announcements - resubmit";
}
?>
I have added a count to the main loop, so it will drop out after six iterations whether or not anything is found. This way, even if someone leaves the page, the script will only keep running for a short time afterwards (a minute at most). You will have to amend your JavaScript to catch the "resubmit" response and resubmit the AJAX call.
I have also changed the announcement in the response to an array, so that if several announcements arrive while the script is running, all of them are brought back.
I have two tables: one is a static database that I need to search in, and the other is dynamic and will be used to search the first. Right now I have two separate queries. On page load, values from the second table are passed to the first as search terms, and I "capture" the search result using cURL. This is very inefficient and probably the wrong way to do it, so I need help fixing it. Currently the page (HTML, front-end) takes 40 seconds to load.
Possible solutions: turn it into a function, though that still makes just as many outgoing calls; load the table into memory, run the queries there, and unload the cache once done; use a regexp to help speed up the query; or possibly a join? But I am a noob, so I can only imagine...
Search script:
require 'mysqlconnect.php';

$id = NULL;
if (isset($_GET['n']))  { $id = mysql_real_escape_string($_GET['n']); }
if (isset($_POST['n'])) { $id = mysql_real_escape_string($_POST['n']); }

if (!empty($id)) {
    // Note: the mysql_* extension is deprecated; mysqli or PDO would be
    // preferable here.
    $getdata = "SELECT id, first_name, last_name, published_name,
                department, telephone FROM $table WHERE id = '$id' LIMIT 1";
    $result = mysql_query($getdata) or die(mysql_error());
    $num_rows = mysql_num_rows($result);
    while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
        // The array keys must match the column names in the SELECT list.
        echo <<<PRINTALL
{$row['id']}~~::~~{$row['first_name']}~~::~~{$row['last_name']}~~::~~{$row['published_name']}~~::~~{$row['department']}~~::~~{$row['telephone']}
PRINTALL;
    }
}
HTML Page Script:
require 'mysqlconnect.php';

// Fetch a page over HTTP and return its body.
function get_data($url)
{
    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}

$getdata = "SELECT * FROM $table WHERE $table.mid != '1' ORDER BY $table.$sortbyme $o LIMIT $offset, $rowsPerPage";
$result = mysql_query($getdata) or die(mysql_error());

while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
    // One HTTP round trip (and thus one extra query) per row.
    $idurl = 'http://mydomain.com/dir/file.php?n=' . $row['id'];
    $p_arr = explode('~~::~~', get_data($idurl));
    $p_str = implode(' ', $p_arr);
    // Use $p_str and $p_arr if they exist; otherwise just output the rest
    // of the HTML code with the second table's values.
}
As you can see, the second table may or may not have a valid id, hence no results in some cases, but the second table is quite large; all in all, I am reading and outputting 15k+ table cells. And as you can probably see from the code, I have tried paging, but that solution doesn't fit my needs: I have to have all of the data on the client side in a single HTML page. So please advise.
Thanks!
EDIT
First table:
id_row  id        first_name  last_name  dept  telephone
1       aaa12345  joe         smith      ANS   800 555 5555
2       bbb67890  sarah       brown      ITL   800 848 8848
Second table:
id_row  type  model    har             status  id         date
1       ATX   Hybrion  88-85-5d-id-ss  y       aaa12345   2011/08/12
2       BTX   Savin    none            n       aaa12345   2010/04/05
3       Full  Hp       44-55-sd-qw-54  y       ashley a   2011/07/25
4       ATX   Delin    none            _       smith bon  2011/04/05
So the second table is the one that gets read and displayed; the first is read, and its info displayed, only if the ID is a positive match. The ID is unique only in the first table; the second has multi-format input, so a value there may or may not be an ID, and may be a duplicate. Hope this gives a better understanding of what I need. Thanks again!
A few things:
Curl is completely unnecessary here.
Order by will slow down your queries considerably.
I'd throw in an if is_numeric check on the ID.
Why are you using while and mysql_num_rows when you're limiting to 1 in the query?
Where are $table and these other things being set?
There is code missing.
If you give us the data structure for the two tables in question, we can help you with the queries, but the way you have this set up now, I'm surprised it's even working at all.
What you're doing is, for each row in $table where mid != 1, executing a cURL call to a second page, which takes the ID and queries again. This is really, really bad, and much more convoluted than it needs to be. Let's see your table structures.
Basically you can do:
SELECT first_name, last_name, published_name, department, telephone FROM $table1, $table2 WHERE $table1.id = $table2.id AND $table2.mid != 1;
Get rid of the curl, get rid of the exploding/imploding.
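A minimal sketch of the combined version, assuming the mysql_* API from the question and the column names shown in the edit; a LEFT JOIN keeps second-table rows whose id has no match in the first table, which matches the "may or may not have a valid id" case:

// One LEFT JOIN replaces the per-row cURL call and the second script.
$getdata = "SELECT t2.*, t1.first_name, t1.last_name, t1.dept, t1.telephone
            FROM $table2 AS t2
            LEFT JOIN $table1 AS t1 ON t1.id = t2.id
            WHERE t2.mid != '1'";
$result = mysql_query($getdata) or die(mysql_error());
while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
    // $row['first_name'] etc. are NULL when the id had no match;
    // output the HTML for the row either way.
}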
I've written a script to batch-process domains and retrieve data on each one. It connects to a remote page via cURL and retrieves the data required for 30 domains at a time.
This page typically takes 2 to 3 minutes to load and return the cURL result; at that point, the details are parsed and placed into an array (the page_rank_tools function).
Upon running this script via CRON, I keep getting the error 'MySQL server has gone away'.
Can anyone tell me if I'm missing something obvious that could be causing this?
// The script dies after 4 minutes, in time for the next cron to start.
set_time_limit(240);

include('../include_prehead.php');

$sql = "SELECT id, url FROM domains WHERE (provider_id = 9 OR provider_id = 10) AND google_page_rank IS NULL LIMIT 30";
$result = mysql_query($sql);

// Collect the batch of URLs, keyed by domain id.
while ($row = mysql_fetch_assoc($result)) {
    $url_list[$row['id']] = $row['url'];
}

// curl the domain-information page - typically takes about 3 minutes,
// during which the MySQL connection sits idle.
$pr = page_rank_tools($url_list);

foreach ($pr as $p) {
    // Update each domain for which a complete set of values came back.
    if (isset($p['google_page_rank']) && isset($p['alexa_rank']) && isset($p['links_in_yahoo']) && isset($p['links_in_google'])) {
        $sql = "UPDATE domains SET google_page_rank = '" . $p['google_page_rank'] . "' , alexa_rank = '" . $p['alexa_rank'] . "' , links_in_yahoo = '" . $p['links_in_yahoo'] . "' , links_in_google = '" . $p['links_in_google'] . "' WHERE id = '" . $p['id'] . "'";
        mysql_query($sql) or die(mysql_error());
    }
}
Thanks
CJ
This happens because the MySQL connection has its own timeout, and while you are parsing your pages, well, it expires. You can try to increase this timeout with
ini_set('mysql.connect_timeout', 300);
ini_set('default_socket_timeout', 300);
(as mentioned in MySQL server has gone away - in exactly 60 seconds)
Or just call mysql_connect() again.
Because the cURL stage takes so long, you could consider reconnecting to your database before entering the loop that runs the updates.
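A minimal sketch of that idea, using mysql_ping() to test the connection after the slow cURL stage and reconnecting only if it has dropped (the credentials are placeholders; reuse whatever include_prehead.php sets up):

// After the ~3-minute page_rank_tools() call, the idle connection may
// have timed out; ping it and reconnect before running the updates.
$pr = page_rank_tools($url_list);

if (!mysql_ping()) {
    mysql_connect('localhost', 'user', 'password'); // placeholder credentials
    mysql_select_db('mydb');                        // placeholder database
}

foreach ($pr as $p) {
    // ... run the UPDATE queries as before ...
}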
There are many reasons why this error occurs. See the list here; it may be something you can fix quite easily:
MySQL Server has gone away