I have a working Qt application that uses MySQL as its database, but I decided to switch to SQLite in order to deploy on Android. I created the same database in SQLite with the same data and table structure.
I changed the connection to SQLite successfully.
While testing the app (on the desktop) I found that some queries work fine with both SQLite and MySQL, but in other places the query doesn't return any data with SQLite while working fine with MySQL.
It's not a query problem, because I changed the query to a simple SELECT * FROM TABLE_NAME and I still get the same problem.
Here is a simple code snippet:
QSqlQuery qry;
qry.prepare("SELECT * FROM users;");
if (!qry.exec()) {
    qDebug() << qry.lastError().text().toLatin1();
    qDebug() << "data";
}
else if (qry.size() < 1) {
    qDebug() << "There is no users" << qry.size();
}
else {
    qDebug() << "It Works !!" << qry.size();
}
While using SQLite I always get There is no users -1.
But with MySQL it returns the right number of rows in the table.
Any suggestion what the problem might be? Is it related to speed or something?
From Qt's documentation on QSqlQuery:
Returns the size of the result (number of rows returned), or -1 if the size cannot be determined or if the database does not support reporting information about query sizes.
SQLite is actually the one which doesn't provide this information. You can confirm it with:
qDebug() << qry.driver()->hasFeature(QSqlDriver::QuerySize);
In the case of SQLite you need to iterate through all the rows of the result set and count them yourself.
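For example, a minimal sketch of counting that way (assuming the same users table as in the question):

QSqlQuery qry;
if (qry.exec("SELECT * FROM users")) {
    int rowCount = 0;
    while (qry.next())       // step through every row of the result set
        ++rowCount;
    qDebug() << "Number of users:" << rowCount;
}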
If you just need to count the rows in the users table, then it's better to do:
qry.prepare("SELECT count(*) FROM users;");
This will always give you the number of rows as a single-cell result. It needs no special database features, so it will work everywhere (unless some database doesn't support the count(*) function, in which case please correct me in the comments).
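For completeness, a minimal sketch of reading that single-cell result back in Qt (same users table as above; error handling omitted):

QSqlQuery qry;
if (qry.exec("SELECT count(*) FROM users") && qry.next()) {
    int rowCount = qry.value(0).toInt();   // the single cell holds the row count
    qDebug() << "Number of users:" << rowCount;
}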
Related
Our website has a problem: one page takes too long to load. We have found that the page contains an n*n matrix, and for each item in the matrix it queries three tables in the MySQL database. Every item in the matrix runs roughly the same queries.
So I suspect that the large number of MySQL queries is causing the problem, and I want to try to fix it. Here are the two approaches I am comparing:
1.
m = store.execute('SELECT X FROM TABLE1 WHERE I=1')
result = store.execute('SELECT Y FROM TABLE2 WHERE X in m')
2.
r = store.execute('SELECT X, Y FROM TABLE2')
result = []
for each in r:
    i = store.execute('SELECT I FROM TABLE1 WHERE X=%s', each[0])
    if i[0][0] == 1:
        result.append(each)
There are about 200 rows in TABLE1 and more than 400 rows in TABLE2. I don't know which part takes the most time, so I can't make a good decision about how to write my SQL statements.
How can I find out how much time an operation takes in MySQL? Thank you!
Rather than installing a bunch of special tools, you could take a dead-simple approach like this (pardon my Ruby):
start = Time.new
# DB query here
puts "Query XYZ took #{Time.now - start} sec"
Hopefully you can translate that to Python. OR... pardon my Ruby again...
QUERY_TIMES = {}

def query(sql)
  start = Time.now
  result = connection.execute(sql)
  elapsed = Time.now - start
  # keep a list of elapsed times per SQL statement
  QUERY_TIMES[sql] ||= []
  QUERY_TIMES[sql] << elapsed
  result
end
Then run all your queries through this custom method. After doing a test run, you can make it print out the number of times each query was run, and the average/total execution times.
For the future, plan to spend some time learning about "profilers" (if you haven't already). Get a good one for your chosen platform, and spend a little time learning how to use it well.
I use MySQL Workbench for SQL development. It gives response times and can connect remotely to MySQL servers provided you have permission (which in this case will give you a more accurate reading).
http://www.mysql.com/products/workbench/
Also, as you've realized, it appears you have a SQL statement inside a for loop. That can drastically affect performance. You'll want to take a different route to retrieving that data.
I am using the RJDBC package to connect to a MySQL (MariaDB) database in R on a Windows 7 machine, and I am trying a statement like
select a as b
from table
but the column will always continue to be named "a" in the data frame.
This works normally with RODBC and RMySQL but doesn't work with RJDBC. Unfortunately, I have to use RJDBC, as it is the only package that has no problem with the encoding of Chinese, Hebrew, and similar characters (SET NAMES and so on don't seem to work with RODBC and RMySQL).
Has anybody experienced this problem?
I have run into the same frustrating issue. Sometimes the AS keyword would have its intended effect, but other times it wouldn't. I was unable to identify the conditions to make it work correctly.
Short Answer: (Thanks to Simon Urbanek (package maintainer for RJDBC), Yev, and Sebastien! See the Long Answer.) One thing that you may try is to open your JDBC connection using ?useOldAliasMetadataBehavior=true in your connection string. Example:
drv <- JDBC("com.mysql.jdbc.Driver", "C:/JDBC/mysql-connector-java-5.1.18-bin.jar", identifier.quote="`")
conn <- dbConnect(drv, "jdbc:mysql://server/schema?useOldAliasMetadataBehavior=true", "username", "password")
query <- "SELECT `a` AS `b` FROM table"
result <- dbGetQuery(conn, query)
dbDisconnect(conn)
This ended up working for me! See more details, including caveats, in the Long Answer.
Long Answer: I tried all sorts of stuff, including making views, changing queries, using JOIN statements, NOT using JOIN statements, using ORDER BY and GROUP BY statements, etc. I was never able to figure out why some of my queries were able to rename columns and others weren't.
I contacted the package maintainer (Simon Urbanek.) Here is what he said:
In the vast majority of cases this is an issue in the JDBC driver, because there is really not much RJDBC can do other than to call the driver.
He then recommended that I make sure I had the most recent JDBC driver for MySQL. I did have the most recent version. However, it got me thinking "maybe it IS a bug with the JDBC driver." So, I searched Google for: mysql jdbc driver bug alias.
The top result for this query was an entry at bugs.mysql.com. Yev, using MySQL 5.1.22, reported that when he upgraded from driver version 5.0.4 to 5.1.5, his column aliases stopped working, and he asked if it was a bug.
Sebastien replied, "No, it's not a bug! It's a documented change of behavior in all subsequent versions of the driver." and suggested using ?useOldAliasMetadataBehavior=true, citing documentation for the JDBC driver.
Caveat Lector: The documentation for the JDBC driver states that
useColumnNamesInFindColumn is preferred over useOldAliasMetadataBehavior unless you need the specific behavior that it provides with respect to ResultSetMetadata.
I haven't had the time to fully research what this means. In other words, I don't know what all the ramifications of using useOldAliasMetadataBehavior=true are. Use at your own risk. Does someone else have more information?
I don't know RJDBC, but in some cases, when it is necessary to give permanent aliases to columns without renaming them, you can use VIEWs:
CREATE OR REPLACE VIEW v_table AS
SELECT a AS b
FROM table
... and then ...
SELECT b FROM v_table
There is a separate function in the ResultSetMetaData interface for retrieving the column label vs the column name:
String getColumnLabel(int column) throws SQLException;
Gets the designated column's suggested title for use in printouts and displays. The suggested title is usually specified by the SQL AS clause. If a SQL AS is not specified, the value returned from getColumnLabel will be the same as the value returned by the getColumnName method.
Using getColumnLabel should resolve this issue (if not, check that your JDBC driver is following this spec).
e.g.
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();
while (rs.next()) {
    for (int i = 1; i <= columnCount; i++) {
        String label = rsmd.getColumnLabel(i);
        System.out.println(rs.getString(label));
    }
}
This is the work around we use for R and SAP HANA via RJDBC:
names(result)[1]<-"b"
It's not the nicest workaround, but since Aaron's solution didn't work for us, we went with this "solution".
The following is a query generated by Hibernate (except that I replaced the list of fields with *):
select *
from
resource resource0_,
resourceOrganization resourceor1_
where
resource0_.active=1
and resource0_.published=1
and (
resource0_.resourcePublic=1
or resourceor1_.resource_id=resource0_.id
and resourceor1_.organization_id=2
and (
resourceor1_.resource_id=resource0_.id
and resourceor1_.forever=1
or resourceor1_.resource_id=resource0_.id
and (
current_date between resourceor1_.startDate and resourceor1_.endDate
)
)
)
I currently have 200+ records in both the Windows and Linux databases, and for each record the following happens to be true:
active = 1
published = 1
resourcePublic = 1
When I run this directly in a SQL client, this SQL query returns all the matching records on Windows but none on Linux. I have MySQL 5.1 on both Windows and Linux.
If I apply Boolean logic, (true and true and (true or whatever)), I expect the outcome to be true. It is indeed true on Windows but false on Linux!
If I modify the query as the following, it works on both Windows and Linux:
select *
from
resource resource0_
where
resource0_.active=1
and resource0_.published=1
and (
resource0_.resourcePublic=1
)
So, just the presence of the conditions related to resourceOrganization makes the query return 0 results on Linux, even though I expected that, since they form the second part of an OR condition whose first part is true, the outcome should be true.
Any idea why there is this difference in behavior between the two operating systems, and why what should obviously work on Linux doesn't?
Thanks in advance!
Check the case sensitivity and collation settings (collation issues).
Check the table-name case sensitivity. In particular, note that on Windows table names are case-insensitive, while on Linux they are case-sensitive.
Have you tried a simple test case on both systems?
Check that current_date() returns the same format on both platforms.
I notice that the second test query only consults the resource table, not the resourceOrganization table.
I suspect that the resourceOrganization table is populated differently on the two machines, and the corresponding rows may not exist in your Linux MySQL.
What does this query return?
select *
from
resource resource0_,
resourceOrganization resourceor1_
where
resource0_.active=1
and resource0_.published=1
and (
resource0_.resourcePublic=1
or resourceor1_.resource_id=resource0_.id
and resourceor1_.organization_id=2
)
Also, don't forget to check the collation and case sensitivity; if one server uses a different collation from the other, you will have this same issue.
Hey guys, I'm trying to select random data from the database in Ruby on Rails. Unfortunately, SQLite and MySQL use different names for the "random" function: MySQL uses rand(), SQLite uses random(). I've been pretty happy using SQLite in my development environment so far, and I don't want to give it up just for this.
So I have a solution for it, but I'm not very happy with it. First, is there a cleaner abstraction in RoR for getting the random function? And if not, is this the best way to get the "adapter"?
# FIXME: There has to be a better way...
adapter = Rails.configuration.database_configuration[Rails.configuration.environment]["adapter"]
if adapter == "sqlite3"
  # SQLite calls it random()
  random = "random"
else
  # MySQL calls it rand()
  random = "rand"
end
query.push("SELECT *, (" + random + "() * (0.1 * value)) AS weighted_random_value...")
You can effectively alias MySQL's rand() to the standard random() by creating a function:
CREATE FUNCTION random() RETURNS FLOAT NO SQL SQL SECURITY INVOKER RETURN rand();
I wrote a small plugin that handles this problem:
http://github.com/norman/active_record_random
I ran into this problem when developing locally using SQLite. Unfortunately, this is not the only difference between the databases you're going to run into (booleans are also handled differently for instance).
Is it a requirement that you support both SQLite and MySQL? If not I recommend switching to a single database: the one you're deploying on in production.
This takes a bit more time to set up but IMHO in the long run it will save you time, and you will have confidence that your app works well on the database that you'll actually be deploying it with.
We have a lot of queries
select * from tbl_message
that get stuck in the "Writing to net" state. The table has 98k rows.
The thing is... we aren't even executing any query like that from our application, so I guess the question is:
What might be generating the query?
...and why does it get stuck in the "Writing to net" state?
I feel stupid asking this question, but I'm 99.99% sure that our application is not executing a query like that against our database... we are, however, executing a couple of queries against that table using WHERE clauses:
SELECT Count(*) as StrCount FROM tbl_message WHERE m_to=1960412 AND m_restid=948
SELECT Count(m_id) AS NrUnreadMail FROM tbl_message WHERE m_to=2019422 AND m_restid=440 AND m_read=1
SELECT * FROM tbl_message WHERE m_to=2036390 AND m_restid=994 ORDER BY m_id DESC
I have searched our application several times for select * from tbl_message but haven't found anything... yet the query log on our MySQL server is still full of SELECT * FROM tbl_message queries.
Since applications don't magically generate queries on their own, I think it's rather likely that there's a mistake somewhere in your application that's causing this. Here are a few suggestions you can use to track it down. I'm guessing that you're using PHP, since you're using MySQL, so I'll use that for my examples.
Try adding comments in front of all your queries in the application, like this:
$sqlSelect = "/* file.php, class::method() */";
$sqlSelect .= "SELECT * FROM foo ";
$sqlSelect .= "WHERE criteria";
The comment will show up in your query log. If you're using some kind of database API wrapper, you could potentially add these comments automatically:
function query($sql)
{
    $backtrace = debug_backtrace();
    // The function that executed the query
    $prev = $backtrace[1];
    $newSql = sprintf("/* %s */ ", $prev["function"]);
    $newSql .= $sql;
    $result = mysql_query($newSql) or handle_error();
    return $result;
}
In case you're not using a wrapper, but rather executing the queries directly, you could use the runkit extension and the function runkit_function_rename to rename mysql_query (or whatever you're using) and intercept the queries.
There are (at least) two data retrieval modes for MySQL. With the C API you either call mysql_store_result() or mysql_use_result().
mysql_store_result() returns once all result data has been transferred from the MySQL server to your process's memory, i.e. no data has to be transferred for further calls to mysql_fetch_row().
However, with mysql_use_result() each record has to be fetched individually if and when mysql_fetch_row() is called. If your application does some computing between two calls to mysql_fetch_row() that takes longer than the period specified by net_write_timeout, the MySQL server considers your connection to be timed out.
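To illustrate the difference, here is a rough sketch of the streaming mode using the MySQL C API from C++ (the connected handle conn is assumed; error handling is omitted):

#include <mysql/mysql.h>

void stream_messages(MYSQL *conn)
{
    if (mysql_query(conn, "SELECT * FROM tbl_message") != 0)
        return;
    MYSQL_RES *res = mysql_use_result(conn);   // rows are fetched on demand
    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res)) != NULL) {
        // Slow per-row processing here keeps the server in the
        // "Writing to net" state; if the gap between two fetches
        // exceeds net_write_timeout, the server drops the connection.
    }
    mysql_free_result(res);
}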
Temporarily enable the query log by putting
log=
into your my.cnf file, restart MySQL, and watch the query log for those mystery queries (you don't have to give the log a name; it will derive one from the host value).