Cloud SQL dramatically slower than localhost server - mysql

I have trouble understanding why Cloud SQL is dramatically slower than my localhost MySQL server.
I can accept that a little variation can occur from configuration to configuration but...
On localhost it takes only 240 milliseconds to fulfill a query that returns all records from a table along with its relationships (there are only 90 records), while on Cloud SQL it takes almost 30 seconds. And it's only about 28 KB of data.
I am using PHP with Laravel 5.5. According to the GCP docs, I have all the configuration needed.
I don't think it is a quota problem.
I'm using a second-generation Cloud SQL instance and connecting from the App Engine flexible environment with autoscaling.
I did upgrade the Cloud SQL instance to 1.7 GB of RAM, so I don't think it is a server performance issue...
Thanks.
Edit
This is confusing.
I am logging the raw SQL strings being executed, and when I use dd() to debug, the JSON response is never built and the request takes just 1 second. A lot better, for some reason I couldn't understand.
This is the version that finishes faster:
DB::enableQueryLog();
Contact::with([
    'rooms',
    'bathrooms',
    'parkingLots',
    'generalInterests',
    'locationInterests',
    'strongInterests',
    'phonePrefix',
    'source',
    'contactType'
])->get();
dd(DB::getQueryLog());
return new JsonResponse([
    'query' => 'all',
    'results' => Contact::with([
        'rooms',
        'bathrooms',
        'parkingLots',
        'generalInterests',
        'locationInterests',
        'strongInterests',
        'phonePrefix',
        'source',
        'contactType'
    ])->get()
], 200);
And this is the version that still takes the same long time:
DB::enableQueryLog();
Contact::with([
    'rooms',
    'bathrooms',
    'parkingLots',
    'generalInterests',
    'locationInterests',
    'strongInterests',
    'phonePrefix',
    'source',
    'contactType'
])->get();
Log::debug(print_r(DB::getQueryLog(), true));
return new JsonResponse([
    'query' => 'all',
    'results' => Contact::with([
        'rooms',
        'bathrooms',
        'parkingLots',
        'generalInterests',
        'locationInterests',
        'strongInterests',
        'phonePrefix',
        'source',
        'contactType'
    ])->get()
], 200);
As you can see, the only difference is whether the return statement is reached or not.
Edit 2
Problem found
It turns out that when the JSON response is built, Eloquent processes the model's appended attributes, and those accessors make requests to another REST API, once for each model in the result set.
On localhost, that same accessor just returns null before ever making the request. So, there's my problem.
Nothing to do with MySQL, GCP, Laravel, or PHP. Just my own dumbness.
Thank you very much for taking the time to read this absurd question. My most sincere apologies.
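For anyone who hits the same symptom, here is a minimal sketch of the pattern that caused it (the attribute name, accessor, and endpoint are invented for illustration): an appended attribute whose accessor calls an external API runs once per model during JSON serialization, which is exactly why dd() before the response looked fast.
use GuzzleHttp\Client;
use Illuminate\Database\Eloquent\Model;
class Contact extends Model
{
    // Appended attributes are computed during JSON serialization,
    // so this accessor runs once for every Contact in the result set.
    protected $appends = ['external_score'];

    public function getExternalScoreAttribute()
    {
        // Hypothetical external REST call; this is what made the JSON
        // response slow, while dd() (which skips serialization) stayed fast.
        $response = (new Client())->get('https://api.example.com/score/' . $this->id);

        return json_decode((string) $response->getBody(), true)['score'] ?? null;
    }
}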

Have you checked the throughput of your Internet connection? That is the main difference between your two environments. Other ways to determine the cause include: running the query directly against MySQL, checking your logs (especially the SQL DB log), and trying the same query from another location.
In my experience, running the query directly against MySQL may help the most. Just a basic SELECT * FROM my_table; and see how long that takes. I also found this document on the Google site; it might help you with investigation and tuning tips. It mentions App Engine limits, but with that amount of data, I don't think you are running into any of those.

Related

Difference between offset vs limit [duplicate]

I have a really big table that gains some millions of records every day, and at the end of every day I extract all the records of the previous day. I am doing this like:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
Statement.executeQuery(SQL);
The problem is that this program takes about 2 GB of memory because it loads all the results into memory and then processes them.
I tried setting Statement.setFetchSize(10), but it takes exactly the same amount of memory from the OS; it makes no difference. I am using the Microsoft SQL Server 2005 JDBC driver for this.
Is there any way to read the results in small chunks, the way the Oracle database driver does, where the query shows only a few rows at first and more results are fetched as you scroll down?
In JDBC, the setFetchSize(int) method is very important to performance and memory-management within the JVM as it controls the number of network calls from the JVM to the database and correspondingly the amount of RAM used for ResultSet processing.
Inherently if setFetchSize(10) is being called and the driver is ignoring it, there are probably only two options:
Try a different JDBC driver that will honor the fetch-size hint.
Look at driver-specific properties on the Connection (URL and/or property map when creating the Connection instance).
The RESULT-SET is the number of rows marshalled on the DB in response to the query.
The ROW-SET is the chunk of rows that are fetched out of the RESULT-SET per call from the JVM to the DB.
The number of these calls and resulting RAM required for processing is dependent on the fetch-size setting.
So if the RESULT-SET has 100 rows and the fetch-size is 10, there will be 10 network calls to retrieve all of the data, using roughly 10 * {row-content-size} RAM at any given time.
The default fetch-size is 10, which is rather small.
In the case posted, it would appear the driver is ignoring the fetch-size setting and retrieving all data in one call (a large RAM requirement, though with the minimum number of network calls).
What happens underneath ResultSet.next() is that it doesn't actually fetch one row at a time from the RESULT-SET. It fetches that from the (local) ROW-SET and fetches the next ROW-SET (invisibly) from the server as it becomes exhausted on the local client.
All of this depends on the driver as the setting is just a 'hint' but in practice I have found this is how it works for many drivers and databases (verified in many versions of Oracle, DB2 and MySQL).
The fetchSize parameter is a hint to the JDBC driver as to how many rows to fetch in one go from the database. But the driver is free to ignore this and do what it sees fit. Some drivers, like the Oracle one, fetch rows in chunks, so you can read very large result sets without needing lots of memory. Other drivers just read in the whole result set in one go, and I'm guessing that's what your driver is doing.
You can try upgrading your driver to the SQL Server 2008 version (which might be better), or the open-source jTDS driver.
You need to ensure that auto-commit on the Connection is turned off, or setFetchSize will have no effect.
dbConnection.setAutoCommit(false);
Edit: Remembered that when I used this fix it was Postgres-specific, but hopefully it will still work for SQL Server.
Statement interface Doc
SUMMARY: void setFetchSize(int rows)
Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.
Read this ebook J2EE and beyond By Art Taylor
Sounds like the MSSQL JDBC driver is buffering the entire result set for you. You can add a connection string parameter such as selectMethod=cursor or responseBuffering=adaptive. If you are on version 2.0+ of the 2005 MSSQL JDBC driver, then response buffering should default to adaptive.
http://msdn.microsoft.com/en-us/library/bb879937.aspx
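For example, with hypothetical server and database names, those properties go into the connection URL like this:
jdbc:sqlserver://yourserver:1433;databaseName=yourdb;responseBuffering=adaptive;selectMethod=cursor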
It sounds to me that you really want to limit the rows being returned in your query and page through the results. If so, you can do something like:
select * from (select rownum myrow, a.* from TEST1 a )
where myrow between 5 and 10 ;
You just have to determine your boundaries.
Try this:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
connection.setAutoCommit(false);
PreparedStatement stmt = connection.prepareStatement(SQL, SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY, SQLServerResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(2000);
stmt.set....   // bind any parameters here
stmt.execute();
ResultSet rset = stmt.getResultSet();
while (rset.next()) {
    // process each row as it is fetched
}
I had the exact same problem in a project. The issue is that even if the fetch size is small enough, JdbcTemplate reads the entire result of your query and maps it into a huge list, which can blow up your memory. I ended up extending NamedParameterJdbcTemplate to create a function that returns a Stream of objects. That Stream is backed by the ResultSet normally returned by JDBC, but it pulls data from the ResultSet only as the Stream requires it. This works as long as you don't keep references to all the objects the Stream emits. I took a lot of inspiration from the implementation of org.springframework.jdbc.core.JdbcTemplate#execute(org.springframework.jdbc.core.ConnectionCallback). The only real difference is what is done with the ResultSet. I ended up writing this function to wrap up the ResultSet:
private <T> Stream<T> wrapIntoStream(ResultSet rs, RowMapper<T> mapper) {
    CustomSpliterator<T> spliterator = new CustomSpliterator<T>(rs, mapper, Long.MAX_VALUE,
            Spliterator.NONNULL | Spliterator.IMMUTABLE | Spliterator.ORDERED);
    Stream<T> stream = StreamSupport.stream(spliterator, false);
    return stream;
}
private static class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T> {
    // won't put code for constructor or properties here
    // the idea is to pull from the ResultSet and push into the Stream
    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            // you can add some logic to close the Stream/ResultSet automatically
            if (rs.next()) {
                T mapped = mapper.mapRow(rs, rowNumber++);
                action.accept(mapped);
                return true;
            } else {
                return false;
            }
        } catch (SQLException e) {
            // do something with this exception
            return false;
        }
    }
}
You can add some logic to make that Stream auto-closeable; otherwise, don't forget to close it when you are done.

Building a query in PDO with parameters for table name [duplicate]

This is my PHP SQL statement, and it returns false when I var_dump the result:
$sql = $dbh->prepare('INSERT INTO users(full_name, e_mail, username, password) VALUES (:fullname, :email, :username, :password)');
$result = $sql->execute(array(
    ':fullname' => $_GET['fullname'],
    ':email' => $_GET['email'],
    ':username' => $_GET['username'],
    ':password' => $password_hash
));
TL;DR
Always set PDO::ATTR_ERRMODE to PDO::ERRMODE_EXCEPTION in your PDO connection code. It will let the database tell you what the actual problem is, be it with the query, the server, the database, or whatever. Also, make sure you can see PHP errors in general.
Always replace every PHP variable in the SQL query with a question mark, and execute the query using a prepared statement. It will help to avoid syntax errors of all sorts.
Explanation
Sometimes your PDO code produces an error like Call to a member function execute() or similar. Or sometimes there is no error at all, yet the query still doesn't work. Either way, it means that your query failed to execute.
Every time a query fails, MySQL has an error message that explains the reason. Unfortunately, by default such errors are not passed on to PHP, and all you get is silence or the cryptic message mentioned above. Hence it is very important to configure PHP and PDO to report MySQL errors to you. And once you get the error message, fixing the issue is a no-brainer.
In order to get detailed information about the problem, either put the following line in your code right after connecting
$dbh->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );
(where $dbh is the name of your PDO instance variable), or - better - add this parameter as a connection option. After that, all database errors will be translated into PDO exceptions which, if left alone, behave just like regular PHP errors.
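For reference, here is a minimal sketch of the connection-option form mentioned above (the DSN and credentials are placeholders):
$dbh = new PDO(
    'mysql:host=localhost;dbname=test;charset=utf8mb4',
    'db_user',
    'db_password',
    array(
        // report every MySQL error as a PDO exception
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    )
);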
After getting the error message, you have to read and comprehend it. It sounds too obvious, but learners often overlook the meaning of the error message. Yet most of the time it explains the problem quite plainly:
Say, if it says that a particular table doesn't exist, you have to check spelling, typos, and letter case. Also, you have to make sure that your PHP script connects to the correct database.
Or, if it says there is an error in the SQL syntax, then you have to examine your SQL. And the problem spot is right before the query part cited in the error message.
You also have to trust the error message. If it says the number of tokens doesn't match the number of bound variables, then it is so. The same goes for absent tables or columns. Given the choice between it being your own mistake or the error message being wrong, always assume the former. Again, it sounds condescending, but hundreds of questions on this very site prove this advice extremely useful.
Note that in order to see PDO errors, you have to be able to see PHP errors in general. To do so, you have to configure PHP according to the site environment:
on a development server it is very handy to have errors right on the screen, for which displaying errors has to be turned on:
error_reporting(E_ALL);
ini_set('display_errors',1);
while on a live site, all errors have to be logged, but never shown to the client. For this, configure PHP this way:
error_reporting(E_ALL);
ini_set('display_errors', 0);
ini_set('log_errors', 1);
Note that error_reporting should be set to E_ALL all the time.
Also note that, despite the common delusion, no try-catch has to be used for error reporting. PHP will already report PDO errors to you, and in a better form. An uncaught exception is very good for development; if you want to show a customized error page, still don't use try-catch for that, but simply set a custom error handler. In a nutshell, you don't have to treat PDO errors as something special; regard them as any other error in your code.
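To illustrate that last point, here is a minimal sketch of a custom handler (the handler body is just an example; adapt it to your own error page):
set_exception_handler(function ($e) {
    // log the full details for the developer...
    error_log((string) $e);
    // ...and show only a generic page to the visitor
    http_response_code(500);
    echo 'Something went wrong, please try again later.';
});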
P.S.
Sometimes there is no error but no results either. Then it means there is no data to match your criteria. So you have to admit this fact, even if you could swear the data and the criteria are all right. They are not; you have to check them again. I have a short answer that will help you pinpoint the matching issue: Having issue with matching rows in the database using PDO. Just follow that instruction and the linked tutorial step by step, and you will either have your problem solved or have an answerable question for Stack Overflow.
Some time ago I had the same problem of not seeing any error messages from MySQL. After some research, it turned out that the problem had nothing to do with PHP itself, but with the MySQL server configuration. The default value of the lc_messages_dir variable pointed to a non-existent directory. After adding a line to mysqld.cnf and restarting the MySQL server, I was finally able to see the error messages. For me, the following was the right setting:
lc_messages_dir=/usr/share/mysql
It is described in the mysql reference manual: https://dev.mysql.com/doc/refman/5.7/en/error-message-language.html

Postgres vs MySQL: Commands out of sync;

MySQL scenario:
When I execute "SELECT" queries in MySQL using multiple threads, I get the following message: "Commands out of sync; you can't run this command now". I found that this is due to the limitation of having to wait for and "consume" the results before making another query. C++ example:
void DataProcAsyncWorker::Execute()
{
    std::thread(&DataProcAsyncWorker::Run, this).join();
}
void DataProcAsyncWorker::Run()
{
    sql::PreparedStatement *prep_stmt = c->con->prepareStatement(query);
    ...
}
Important:
I can't avoid using multiple threads per query (SELECT, INSERT, etc.) because the module I'm building, which is being integrated with Node.js, "locks" the thread until the result is obtained. For this reason I need to run the query in the background (on a new thread) and resolve the "promise" containing the result obtained from MySQL.
Important:
I am keeping several "connections" (for example, 10), and with each SQL call the function chooses a connection. That is, a connection pool that contains 10 established connections. Example:
for (int i = 0; i < 10; i++) {
    Com *c = new Com;
    c->id = i;
    c->con = openConnection();
    c->con->setSchema("gateway");
    conns.push_back(c);
}
The problem occurs when executing >= 100 SELECT queries per second. I believe that even with connection balancing, 100 queries per second is a high number, and a given connection (e.g. conns.at(10)) may still be in use with its previous results not yet consumed.
My question:
Does PostgreSQL have this limitation as well?
Note:
In the PHP docs about MySQL, the mysqli_free_result command is required after using mysqli_query; if not, I will get a "Commands out of sync" error. In contrast, according to the PostgreSQL documentation, the pg_free_result command is completely optional after using pg_query.
That said, has anyone using PostgreSQL already faced problems related to "commands out of sync"? Maybe there is another name for this error?
Or is PostgreSQL able to deal with this automatically, with free_result effectively being called invisibly by the server, so that I never see this error?
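To make the PHP reference above concrete, this is roughly the mysqli pattern I mean (a minimal sketch; the credentials and queries are placeholders):
$mysqli = new mysqli('localhost', 'db_user', 'db_password', 'gateway');
// Unbuffered query: rows stay on the server until they are fetched
$result = $mysqli->query('SELECT id, name FROM contacts', MYSQLI_USE_RESULT);
while ($row = $result->fetch_assoc()) {
    // consume every row of the current result set
}
$result->free(); // freeing the result before the next command keeps the commands in sync
// Without the free()/full consumption above, this next call is what triggers
// "Commands out of sync; you can't run this command now" in mysqli.
$second = $mysqli->query('SELECT COUNT(*) AS total FROM contacts');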
You need to finish using one prepared statement (or cursor or similar construct) before starting another.
"Commands out of sync" is often cured by adding the closing statement.
"Question:
Does PostgreSQL have this limitation as well?"
No, PostgreSQL does not have this limitation.

Calling MySQL stored procedure in ROR 4

There are a few examples out there, but none of them are very clear (or they target an old version).
I want to call a MySQL procedure and check the return status (in Rails 4.2). The most common method I saw is to call result = ActiveRecord::Base.connection.execute("call example_proc()"), but in some places people wrote that there is a prepared method, result = ActiveRecord::Base.connection.execute_procedure("Stored Procedure Name", arg1, arg2) (however, that didn't work for me).
So what is the correct way to call and get the status for MySQL procedure?
Edit:
And how do I send parameters safely, where the first parameter is an integer, the second a string, and the third a boolean?
Rails 4 ActiveRecord::Base doesn't support an execute_procedure method, though result = ActiveRecord::Base.connection.execute still works, i.e.
result = ActiveRecord::Base.connection.execute("call example_proc('#{arg1}','#{arg2}')")
You can try Vishnu's approach below
or
You can also try
ActiveRecord::Base.connection.exec_query("call example_proc('#{arg1}','#{arg2}')")
here is the document
In general, you should be able to call stored procedures in a regular where or select method for a given model:
YourModel.where("YOUR_PROC(?, ?)", var1, var2)
As for your comment "Bottom line I want the most correct approach with procedure validation afterwards (for warnings and errors)", I guess it always depends on what you actually want to implement and how readable you want your code to be.
For example, if you want to return rows of YourModel attributes, then it probably would be better if you use the above statement with where method. On the other hand, if you write some sql adapter then you might want to go down to the ActiveRecord::Base.connection.execute level.
BTW, there is something about stored procedure performance that should be mentioned here. In several databases, the engine optimizes a stored procedure on its first run. However, the parameters you pass to that first run might not be representative of those it will run with more frequently later on. As a result, your stored procedure might be auto-optimized in a non-optimal way for your case. It may or may not happen this way, but it is something you should consider when using stored procedures with dynamic parameters.
I believe you have tried many other solutions and got one error or another, mostly "out of sync" or "closed connection" errors. These errors occur every second time you try to execute the queries. To overcome this, we need a workaround that treats the connection as new every time. Here is my solution, which didn't throw any errors.
# checkout a connection for the model
conn = ModelName.connection_pool.checkout
# use the new connection to execute the query
@records = conn.execute("call proc_name('params')")
# check the connection back in
ModelName.connection_pool.checkin(conn)
The other approaches failed for me, possibly because ActiveRecord connections are automatically checked out and checked in per thread. When our method tries to check out a connection just to execute the SP, it might conflict, since there is already an active connection when the method starts.
So the idea is to manually checkout a connection for the model from the pool (instead of relying on the per-thread/per-function connection) and checkin once the work is done. This worked great for me.

Backup database(s) using query without using mysqldump

I'd like to dump my databases to a file.
Certain website hosts don't allow remote or command line access, so I have to do this using a series of queries.
All of the related questions say "use mysqldump" which is a great tool but I don't have command line access to this database.
I'd like CREATE and INSERT commands to be generated at the same time - basically, the same output that mysqldump produces. Is SELECT INTO OUTFILE the right road to travel, or is there something else I'm overlooking - or maybe it's not possible?
Use mysqldump-php, a pure-PHP solution that replicates the function of the mysqldump executable for basic to medium complexity use cases. I understand you may not have remote CLI and/or direct MySQL access, but as long as you can execute a script via an HTTP request on the host's web server, this will work.
You should be able to run the following pure-PHP script straight from a secured directory under /www/, have the output file written there, and grab it with wget.
mysqldump-php - Pure PHP mysqldump on GitHub
PHP example:
<?php
require('database_connection.php');
require('mysql-dump.php');
$dumpSettings = array(
    'include-tables' => array('table1', 'table2'),
    'exclude-tables' => array('table3', 'table4'),
    'compress' => CompressMethod::GZIP, /* CompressMethod::[GZIP, BZIP2, NONE] */
    'no-data' => false,
    'add-drop-table' => false,
    'single-transaction' => true,
    'lock-tables' => false,
    'add-locks' => true,
    'extended-insert' => true
);
$dump = new MySQLDump('database','database_user','database_pass','localhost', $dumpSettings);
$dump->start('forum_dump.sql.gz');
?>
With your hands tied by your host, you may have to take a rather extreme approach. Using any scripting option your host provides, you can achieve this with only a little difficulty. You can create a secure web page or a plain-text dump link known only to you and sufficiently secured to prevent unauthorized access. The script that builds the page/text contents could follow these steps:
For each database you want to back up:
Step 1: Run SHOW TABLES.
Step 2: For each table name returned by the above query, run SHOW CREATE TABLE to get the create statement that you could run on another server to recreate the table, and output the results to the web page. You may have to prepend "DROP TABLE IF EXISTS `X`;" before each create statement generated by the results of these queries (not in your query input!).
Step 3: For each table name returned from step 1, run a SELECT * query and capture the full results. You will need to apply a bulk transformation to this query result before outputting it, converting each row into an INSERT INTO tblX statement, and output the final transformed results to the web page/text file download (a rough PHP sketch of these steps follows at the end of this answer).
The final web page/text download would have an output of all create statements with "drop table if exists" safeguards, and insert statements. Save the output to your own machine as a ".sql" file, and execute on any backup host as needed.
I'm sorry you have to go through with this. Note that preserving mysql user accounts that you need is something else entirely.
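A rough PHP sketch of those three steps, assuming a PDO connection and placeholder credentials (a real backup needs more care with quoting, character sets, and binary data):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'db_user', 'db_password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$dump = '';
// Step 1: list all tables in the database
$tables = $pdo->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN);
foreach ($tables as $table) {
    // Step 2: recreate each table, guarded by DROP TABLE IF EXISTS
    $create = $pdo->query("SHOW CREATE TABLE `$table`")->fetch(PDO::FETCH_NUM);
    $dump .= "DROP TABLE IF EXISTS `$table`;\n" . $create[1] . ";\n\n";
    // Step 3: turn every row into an INSERT statement
    foreach ($pdo->query("SELECT * FROM `$table`", PDO::FETCH_NUM) as $row) {
        $values = array_map(function ($v) use ($pdo) {
            return $v === null ? 'NULL' : $pdo->quote($v);
        }, $row);
        $dump .= "INSERT INTO `$table` VALUES (" . implode(', ', $values) . ");\n";
    }
    $dump .= "\n";
}
file_put_contents('backup.sql', $dump);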
Use or install phpMyAdmin on your web server and click Export. Many web hosts already offer it as a pre-configured service, and it's easy to install (pure PHP) if you don't already have it: http://www.phpmyadmin.net/
This allows you to export your database(s), as well as perform other otherwise tedious database operations, very quickly and easily. It also works with older versions of PHP (< 5.3), unlike the mysqldump-php offered in another answer here.
I am aware that the question says "using query", but I believe the point here is that any means necessary is sought when shell access is not available. That is how I landed on this page, and phpMyAdmin saved me!