Getting unexpected row counts in MySQL query

I am executing the following two queries on a MySQL database using a JDBC connection in Java.
1:
SELECT count(*)
FROM transaction
WHERE
(entry_time between STR_TO_DATE('2012-09-24 00:00:00','%Y-%m-%d %k:%i:%s')
AND STR_TO_DATE('2012-09-24 23:59:59','%Y-%m-%d %k:%i:%s'))
2:
SELECT *
FROM transaction
WHERE
(entry_time between STR_TO_DATE('2012-09-24 00:00:00','%Y-%m-%d %k:%i:%s')
AND STR_TO_DATE('2012-09-24 23:59:59','%Y-%m-%d %k:%i:%s'))
When I run the search repeatedly, I get a different row count each time, e.g.:
68
72
58
69
I printed the output to a log file and found that the first query returns the count that is actually in the database. But when the second query is executed while some other process is going on, it returns different values. Why is that?
My second question: the second query is also returning rows for '2012-09-23', even though the specified range should exclude them.
I am using MySQL 5.1 and Java 1.6.0_14. This is a web application now on a production server, where I cannot debug it. :( The problem occurs only on the production server; the test setup works fine. Any help is appreciated.
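A common rewrite for date-range filters like the ones above is a half-open range. It covers every instant of the target day (BETWEEN ... 23:59:59 misses anything after that second if the column ever carries sub-second values), and it makes the STR_TO_DATE calls unnecessary, since MySQL parses 'YYYY-MM-DD HH:MM:SS' literals directly. This is a sketch, not a confirmed fix for the asker's setup:

```sql
-- Half-open range: >= start of the day, < start of the next day.
-- Matches all of 2012-09-24 and cannot match 2012-09-23
-- (assuming entry_time is a DATETIME in the server's time zone).
SELECT *
FROM transaction
WHERE entry_time >= '2012-09-24 00:00:00'
  AND entry_time <  '2012-09-25 00:00:00';
```

If rows for 2012-09-23 still appear, it is worth checking whether entry_time is a TIMESTAMP column and whether the JDBC connection and the server are using different time zones, since TIMESTAMP values are converted per-connection.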

Related

Connection Error Code: 2013 happening only for one specific table in MySQL Workbench, but all other tables work fine

I'm using MySQL Workbench 8.0 on Windows 10.
When I run a simple SELECT query
SELECT * FROM some_table;
It will eventually time out with the error
Error Code: 2013. Lost connection to MySQL server during query
This only happens for one specific table; all other tables work just fine. I've tried setting the timeout longer than the default (30 seconds), but even after waiting a very long time it still gives the same result.
This is different from this similar question because the OP was getting the error when adding an index to a table, but in my situation it happens for any query for only one table.
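Two things may be worth trying here (a sketch, with some_table taken from the question). Since the client-side timeout is already raised, the server-side network timeouts could be the limiting factor; and since only this one table misbehaves, the table itself can be checked for corruption:

```sql
-- Raise server-side network timeouts for this session,
-- in case the Workbench setting is not the one being hit.
SET SESSION net_read_timeout = 600;
SET SESSION net_write_timeout = 600;

-- Only this table fails, so check it for corruption.
CHECK TABLE some_table;
```

If CHECK TABLE reports problems, the fix depends on the storage engine (e.g. rebuilding the table); if it comes back clean, the table may simply be much larger than the others and the query is timing out on volume.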

Too many columns in query, MySQL Error Code: 1117

I very recently started the process of trying to get an undocumented and poorly designed project under control. After struggling to get the thing built locally, I started running into errors when exercising various pieces of functionality.
Most of these problems appear to be the result of MySQL errors caused by the way my product generates Hibernate criteria queries. For example, when doing an autocomplete on the displayName of an object, the resulting criteria query is very large: I end up with around 2200 fields selected from around 50 tables. When Hibernate attempts to execute this query I get an error:
30-Mar-2018 11:43:07.353 WARNING [http-nio-8080-exec-8] org.hibernate.util.JDBCExceptionReporter.logExceptions SQL Error: 1117, SQLState: HY000
30-Mar-2018 11:43:07.353 SEVERE [http-nio-8080-exec-8] org.hibernate.util.JDBCExceptionReporter.logExceptions Too many columns
[ERROR] 11:43:07 pos.services.HorriblyDefinedObjectAjax - could not execute query
org.hibernate.exception.GenericJDBCException: could not execute query
I turned on general logging for MySQL and obtained the criteria query that is being executed. If I attempt to execute it in MySQL Workbench I also get the following result:
Error Code: 1117. Too many columns
I've gone to the QA instances of this application and the autocompletes work there, which seems to indicate there is some way this huge query can execute. Is it possible that I just do not have the right MySQL configuration?
Currently my sql_mode is 'NO_ENGINE_SUBSTITUTION'; is there anything else I might need to do?
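One way to gauge how close the generated query gets to MySQL's column limits is to count the columns the joined tables contribute in total. A rough sketch using information_schema (your_schema and the table names are placeholders for the real ones):

```sql
-- Total column count across the tables the criteria query joins.
-- If this approaches MySQL's limits, error 1117 is expected
-- regardless of server configuration.
SELECT COUNT(*) AS total_columns
FROM information_schema.columns
WHERE table_schema = 'your_schema'
  AND table_name IN ('table_a', 'table_b' /* , ... the ~50 joined tables */);
```

If the count is over the limit, no sql_mode or configuration change will help; the realistic fix is to make Hibernate select fewer columns (e.g. a projection on displayName) rather than hydrating every joined entity.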

What is the size limit for a Query/View table in MySQL?

I am using MySQL and currently have 3 tables in a database. I created a view with relationships between the 3 tables. The view should contain about 200,000 rows of data; I also tested the same query in Access and it works fine, but unfortunately I am not allowed to use Access.
When I build the view in MySQL I get a message saying the view was built successfully. But when I try to actually view the data, I get a message that MySQL ran out of memory. I am not sure what I can do differently to avoid this. Can someone please give some advice?
You can use the mysql client's --quick option when running your query. From the manual:
Do not cache each query result; print each row as it is received. This may slow down the server if the output is suspended. With this option, mysql does not use the history file.
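On the command line that looks like the following; a sketch, where mydb.my_view stands in for the actual view name:

```shell
# Stream rows as they arrive instead of buffering the whole
# result set in client memory before displaying it.
mysql --quick -e "SELECT * FROM mydb.my_view" > my_view.tsv
```

Note this only avoids the client running out of memory; if the server itself exhausts memory materializing the view, the view's underlying query (or the server's buffer settings) needs attention instead.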

Mysql stored proc returning 500 empty rows

I have a table (vendTable) that contains venName and venDesc. Both varchar(30) NOT NULL. Contains 500 rows that get refreshed everyday. The table is truncated and reloaded daily. No indexes.
Doing a "select venName , venDesc from vendTable;" will give me 500 rows.
I wrapped this one SQL statement in a stored procedure, getVendors.
When I do a "call getVendors();" it returns 500 rows of good data.
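For reference, a minimal sketch of what the procedure presumably looks like, reconstructed from the description above (not the actual code):

```sql
DELIMITER //
CREATE PROCEDURE getVendors()
BEGIN
  SELECT venName, venDesc FROM vendTable;
END //
DELIMITER ;
```

Since the table is truncated and reloaded daily, one possibility worth considering is that the proc was called mid-reload, between the truncate and the insert completing.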
It had been running fine for weeks, but today the getVendors() proc returned 500 empty rows. Not NULL, just empty, but 500 of them. No errors, no warnings. I ran the one-line query directly, and it returned rows as expected. I dropped and recreated the proc, but got the same empty result. We created an additional proc with the same code but a different name, with no luck. Table diagnostics and checks came back fine. Repair reported no errors found. Character sets and collations are fine. Nothing in the log files.
After a bit, the procedure was working again, returning good data.
I have 3 similar mysql data bases on 3 different hosts. And they run fine.
Has anyone seen this before? Anything else to look at? Thanks in advance. It's MySQL 5.0.95 running on CentOS 5.8.

SSIS: SQL 2008 R2 to MySQL data loss

I have an SSIS package set up to export data from a SQL Server 2008 R2 table to a MySQL version of that table. The package executes; however, about 1% of the rows fail to export.
My source connection uses the SQL statement
SELECT * FROM Table1
all of the columns are integers. An example of a row which is exported successfully is
2169, 2680, 3532, NULL, 2169
compared to a row which fails
2168, 2679, 3532, NULL, 2168
virtually nothing different that I can ascertain.
Notably, if I change the source query to attempt the transfer of only a single failing row, i.e.
SELECT * FROM Table1 WHERE ID = 2168
then the record is exported fine; it only fails when it is part of a select that returns multiple rows. The same rows fail the export each time. I have redirected error rows to a text file, which shows a -1071610801 error for the failing rows. This apparently translates to:
DTS_E_ADODESTERRORUPDATEROW: "An error has occurred while sending this row to destination data source."
which doesn't really add a great deal to my understanding of the issue!
I am wondering if there is a locking issue or something preventing given rows from being fetched or inserted correctly but if anyone has any ideas or suggestions on what might be causing this or even better how to go about resolving it they would be greatly appreciated. I am currently at a total loss...
Try setting a longer timeout (1 day) on the MySQL (ADO.NET) destination.
Well, after much head-scratching and attempting every workaround I could come up with, I have finally found a solution for this.
In the end I switched out the MySQL connector for a different driver produced by Devart, dotConnect for MySQL, and, with a few minor exceptions (which I think I can resolve), all of my data is now exporting without error.
The driver is a paid-for product, unfortunately, but in the end I'd have taken out a new mortgage to see all those tasks go green!