Fast analytics on a MySQL table with thousands of rows, displayed in PHP

I have a table with thousands of rows and I want to build analytics with a chart display in my PHP front end.
My table structure is:
This is how I display the data:
From the user_agent column I display the operating system, browser, and device.
For now I'm still using the old approach: looping over the result with for() and parsing each row. It takes a long time to respond and display the data.
Does anyone know how I can display this data without such a long response time on my website? Any ideas, either for the database structure or for my PHP script?
Thank you in advance.

Assuming you're loading all your data in a PHP script and post-processing it in a for loop, you should change your database query. A GROUP BY clause might help. Of course, you'll need to adjust your script to work with the new result set. Revisiting your database structure is a good idea too: rather than saving the whole user-agent string in one column, consider splitting it into several columns (operating system, browser, device).
Example before:
$data = $db->query('SELECT * FROM table');
for ($i = 0; $i <= $data->max(); $i++) {
    $row = $data->getRow($i);
    postprocessRow($row); /* $sum += 1; */
}
Example after:
$data = $db->query('SELECT user_agent, COUNT(*) AS weight FROM table GROUP BY user_agent');
for ($i = 0; $i <= $data->max(); $i++) {
    $row = $data->getRow($i);
    postprocessRowWeighted($row); /* $sum += $row['weight']; */
}

Related

Can't wrap my head around MySQL statement

I have two tables:
cache and main
In cache there are a lot of columns; in main somewhat fewer. A UNION is not going to work because of the unequal number of columns.
cache
client - file - target - many other columns
main
client - file - target - few other columns
From cache I would like all columns for which main.target LIKE '%string%', cache.client = main.client, and cache.file = main.file.
For these particular records, target, client and file are always the same in main and cache.
I just can't get my head around this, but then again MySQL never was my strongest point.
Thank you very much in advance!
In the end, combining the two SELECT statements with a UNION made things very complicated, for the simple reason that there were countless other queries, some without a UNION, that all had to be processed by the same routine presenting the results. As this was only a one-time query and time wasn't really an issue, I just ran SELECT on the two tables separately and then combined the results by checking whether a certain field was present. If it wasn't, the remaining values had to be fetched from the cache table; if it was, they had to be fetched from the main table.
I actually wonder whether this solution is faster, slower or just as fast.
if (!isset($row['current']))
{
    $field = $row['field'];
    $sqlcache = "SELECT * FROM " . $dbtable . " WHERE (client = '$sqlclient' AND file = '$sqlfile' AND field = '$field')";
    $resultcache = $conn->query($sqlcache);
    if (!$resultcache)
    {
        die($conn->error);
    }
    $rowcache = $resultcache->fetch_assoc();
    $currenttarget = $rowcache['current'];
    $context = $rowcache['context'];
    $dirtysource = $rowcache['dirtysource'];
    $stringid = $rowcache['stringid'];
    $limit = $rowcache['maxlength'];
    $locked = $rowcache['locked'];
    $filei = $rowcache['filei'];
}
else
{
    $currenttarget = $row['current'];
    $context = $row['context'];
    $dirtysource = $row['dirtysource'];
    $stringid = $row['stringid'];
    $limit = $row['maxlength'];
    $locked = $row['locked'];
    $filei = $row['filei'];
}
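For comparison, the single-query version the question asks about could be written as a JOIN instead of a UNION. This is only a sketch, assuming the column names listed in the question and a mysqli connection in $conn; $search stands for whatever '%string%' should be:
$sql = "SELECT cache.*
        FROM cache
        INNER JOIN main
                ON main.client = cache.client
               AND main.file   = cache.file
        WHERE main.target LIKE ?";

$stmt = $conn->prepare($sql);
$search = '%string%';            // the value you are searching for
$stmt->bind_param('s', $search);
$stmt->execute();
$result = $stmt->get_result();   // requires the mysqlnd driver

while ($row = $result->fetch_assoc()) {
    // every cache column for rows that have a matching main record
}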

Big database - doctrine query slow even with index

I'm building an app with Symfony 4 + Doctrine, where people can upload big CSV files and those records then get stored in a database. Before inserting, I'm checking that the entry doesn't already exist...
On a sample CSV file with only 1000 records, it takes 16 seconds without an index and 8 seconds with an index (MacBook 3 GHz, 16 GB memory). My intuition tells me this is quite slow and should be done in under 1 second, especially with the index.
The index is set on the email column.
My code:
$ssList = $this->em->getRepository(EmailList::class)->findOneBy(["id" => 1]);

foreach ($csv as $record) {
    $subscriber_exists = $this->em->getRepository(Subscriber::class)
        ->findOneByEmail($record['email']);

    if ($subscriber_exists === NULL) {
        $subscriber = (new Subscriber())
            ->setEmail($record['email'])
            ->setFirstname($record['first_name'])
            ->addEmailList($ssList)
        ;

        $this->em->persist($subscriber);
        $this->em->flush();
    }
}
My Question:
How can I speed up this process?
Use LOAD DATA INFILE.
LOAD DATA INFILE has IGNORE and REPLACE options for handling duplicates if you put a UNIQUE KEY or PRIMARY KEY on your email column.
Look at settings for making the import faster.
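A rough sketch of that approach through the DBAL connection Doctrine already gives you. The table and column names (subscriber, email, firstname) are assumptions based on the entity above, the CSV is assumed to be laid out as email,first_name with a header row, and LOCAL INFILE has to be enabled on both the MySQL server and the client connection:
$conn = $this->em->getConnection();          // Doctrine DBAL connection
$path = $conn->quote('/path/to/import.csv'); // quote the file path for the statement

// Note: pdo_mysql needs PDO::MYSQL_ATTR_LOCAL_INFILE enabled for LOCAL to work.
$conn->executeUpdate("
    LOAD DATA LOCAL INFILE $path
    IGNORE                     -- skip rows that violate the UNIQUE key on email
    INTO TABLE subscriber
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES             -- skip the CSV header row
    (email, firstname)
");                                          // executeStatement() on DBAL 3+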
Like Cid said, move the flush() outside of the loop, or put a batch counter inside the loop and only flush at certain intervals:
$batchSize = 1000;
$i = 1;

foreach ($csv as $record) {
    $subscriber_exists = $this->em->getRepository(Subscriber::class)
        ->findOneByEmail($record['email']);

    if ($subscriber_exists === NULL) {
        $subscriber = (new Subscriber())
            ->setEmail($record['email'])
            ->setFirstname($record['first_name'])
            ->addEmailList($ssList)
        ;

        $this->em->persist($subscriber);

        if (($i % $batchSize) === 0) {
            $this->em->flush();
        }
        $i++;
    }
}

$this->em->flush();
Or if that's still slow, you could grab the connection via $this->em->getConnection() and use DBAL directly, as described here: https://www.doctrine-project.org/projects/doctrine-dbal/en/2.8/reference/data-retrieval-and-manipulation.html#insert
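Roughly, the DBAL route might look like the sketch below. The table and column names are assumptions based on the entity above, and it relies on a UNIQUE key on email so the database itself rejects duplicates; the link to the EmailList would still have to be inserted separately:
$conn = $this->em->getConnection();

foreach ($csv as $record) {
    try {
        // plain INSERT with bound parameters, no ORM overhead
        $conn->insert('subscriber', [
            'email'     => $record['email'],
            'firstname' => $record['first_name'],
        ]);
    } catch (\Doctrine\DBAL\Exception\UniqueConstraintViolationException $e) {
        // duplicate email - skip this record
    }
}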

Loop over column names in MySQL

I am using MySQL. My table contains columns named Revenue2000, Revenue2001, Revenue2002, ..., Revenue2016, Revenue2017.
Traditional way (selecting all the columns manually):
select Revenue2005,
Revenue2006,
Revenue2007,
Revenue2008,
Revenue2009,
Revenue2010
from table_name
Desired way:
I want to write a dynamic SELECT statement. There should be two variables, "start" and "end", so the user can specify the starting year and the ending year and view the desired result.
In the above case, start year = 2005 and end year = 2010.
Yes, it's bad database design, and the best answer would be "don't do this at all, just fix your table." Unfortunately, sometimes you're stuck with something someone else made, and can't change it for whatever reason, but you still need to accomplish something (welcome to my life). I would do it like this:
Get the years from user input and convert them to integers in case someone enters something silly/naughty. Don't depend on client-side validation. Prepared statements won't help you here because these will be used as parts of column names.
$start = (int) $_POST['start'];
$end = (int) $_POST['end'];
Do a quick sanity check to make sure that the range makes sense and should work with what's in your database.
if ($start > $end
|| $start < $lowest_year_in_your_db
|| $end > $highest_year_in_your_db) {
// quit with error
}
Then you can generate a list of columns to use in your query. Here's one way with range and array_map, but you could also just build a string with a for loop.
$columns = implode(', ', array_map(function($year) {
    return "Revenue$year";
}, range($start, $end)));

$sql = "SELECT $columns FROM table_name";
Theoretically, the worst thing that should be able to happen with this is that you'd get a column that didn't exist, and your query would fail.
But really, if you have any choice about it, don't do this. Normalize your database as people have stated in the comments, or find whoever keeps adding more year columns to the database and make them do it.
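For reference, the normalized design being suggested would look something like the sketch below (table and column names are made up, and a mysqli connection in $mysqli is assumed). With one row per year instead of one column per year, the range becomes plain data and a prepared statement works again:
// Assumed schema:
//   CREATE TABLE revenue (
//       entity_id INT NOT NULL,
//       year      SMALLINT NOT NULL,
//       amount    DECIMAL(12,2) NOT NULL,
//       PRIMARY KEY (entity_id, year)
//   );

$stmt = $mysqli->prepare('SELECT year, amount FROM revenue WHERE year BETWEEN ? AND ?');
$stmt->bind_param('ii', $start, $end);
$stmt->execute();
$rows = $stmt->get_result()->fetch_all(MYSQLI_ASSOC); // requires mysqlnd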
As already pointed out, the database design is horrible. You should really normalize it; it's worth the effort.
However, if that is not possible at the moment, the following code should do exactly what you need:
// Connect to DB
$mysqli = new mysqli("localhost", "USERNAME", "PASSWORD", "DATABASE");

// Get column names
$columns = $mysqli->query('SHOW COLUMNS FROM revenue')->fetch_all();
$columnNames = array_column($columns, 0);

// Extract years from column names
$years = array_map(function($columnName) {
    return (int) substr($columnName, -4);
}, $columnNames);

// Get max and min year
$maxYear = max($years);
$minYear = min($years);

// Input year start and end
$start = (int) $_POST['start']; // User input
$end = (int) $_POST['end'];     // User input

// Reject impossible ranges
if ($start > $end || $start < $minYear || $end > $maxYear) {
    die('Error');
}

// Build the SQL query
$selectColumns = [];
for ($i = $start; $i <= $end; $i++) {
    $selectColumns[] = "revenue" . $i;
}
$queryString = "SELECT " . implode(", ", $selectColumns) . " FROM revenue";

// Run the query
// ...
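One way to finish the "// Run the query" step, continuing with the same $mysqli connection (a sketch):
$result = $mysqli->query($queryString);
$rows = $result->fetch_all(MYSQLI_ASSOC); // one associative array per row

foreach ($rows as $row) {
    // $row contains only the revenue columns for the requested years
}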

How to fetch a record from a column or field?

I have a table with a column named balance.
if(mysqli_num_rows($get_bank_check_res) > 0){
$display_block = "<p>your autho code is:</p>";
$account_check = mysql_fetch_array($get_bank_check_res);
$balance= $account_check > $grand_total_safe ? (balance - $grand_total_safe) : 0;
$display_block .= "<p>your balance is: '".$balance."' </p>";
I received the warning: Undefined variable balance. Trying mysql_fetch_assoc() didn't work either.
You get a row back from mysql_fetch_array; it doesn't automagically create new variables for you, so your column's value lives inside that array. Also, since you are using the MySQLi extension instead of the old mysql one, it looks like this:
$row = $get_bank_check_res->fetch_assoc();
$balance = $row["balance"];
Then you can do whatever math you're doing using the values found inside your $row array.
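Applied to the snippet in the question, that would look roughly like this. This is just a sketch that keeps the original variable names and assumes $get_bank_check_res is a mysqli result with a balance column:
if (mysqli_num_rows($get_bank_check_res) > 0) {
    $display_block = "<p>your autho code is:</p>";

    // fetch_assoc() returns an associative array keyed by column name
    $account_check = $get_bank_check_res->fetch_assoc();

    $balance = $account_check['balance'] > $grand_total_safe
        ? ($account_check['balance'] - $grand_total_safe)
        : 0;

    $display_block .= "<p>your balance is: '" . $balance . "' </p>";
}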

Zend Framework - only around 500 rows get inserted

I'm using Zend Framework with MySQL. My application loads data from a CSV file into the MySQL database. The table has two columns (id and name). The application uses file_get_contents to read the CSV file and $this->insert($data) of Zend_Db_Table to insert it. The file has exactly two columns, matching the table.
The problem I'm facing is that, while inserting, only around 500 rows get inserted; the remaining rows are not inserted into the database. No errors are shown in the browser and the application behaves as if nothing happened. I tried different data but the problem is the same.
$file = file_get_contents($filename, FILE_USE_INCLUDE_PATH);
$lines = explode("\n", $file);

$i = 1;
for ($c = 1; $c < (count($lines) - 1); $c++) {
    list($field1, $field2) = explode(",", $lines[$i]);
    $borrower = new Application_Model_DbTable_TempB();
    $borrower->uploadborrower($field1, $field2);
    $i++;
}
The uploadborrower function simply builds an array $data and inserts it using $this->insert($data). – A
Can anyone help me to find where the problem is and how to solve the problem?
Could it be a timeout problem? If the CSV is massive, that can happen.
Try:
set_time_limit(0);
before executing your code:
$file = file_get_contents($filename, FILE_USE_INCLUDE_PATH);
$lines = explode("\n", $file);

set_time_limit(0);

$i = 1;
for ($c = 1; $c < (count($lines) - 1); $c++) {
    list($field1, $field2) = explode(",", $lines[$i]);
    $borrower = new Application_Model_DbTable_TempB();
    $borrower->uploadborrower($field1, $field2);
    $i++;
}