I'm running MySQL 5.5 with the MySQL Connector/J 5.1.18.
A simple query of the form
select * from my_table where column_a in ('aaa','bbb',...) and column_b = 1;
is executed from within a Java application. The query returns a result set of 25k rows, 8 columns each. While reading the results in a while loop
while (rs.next())
{
    MyObject c = new MyObject();
    c.setA(rs.getString("A"));
    c.setB(rs.getString("B"));
    c.setC(rs.getString("C"));
    ...
}
the following exception is thrown, usually during the first iterations, but never at the same row:
java.lang.NullPointerException
at com.mysql.jdbc.ResultSetImpl.getStringInternal(ResultSetImpl.java:5720)
at com.mysql.jdbc.ResultSetImpl.getString(ResultSetImpl.java:5570)
at com.mysql.jdbc.ResultSetImpl.getString(ResultSetImpl.java:5610)
I took a look at the source code at ResultSetImpl.java:5720 and I see the following:
switch (metadata.getSQLType())
where metadata is
Field metadata = this.fields[internalColumnIndex];
and getSQLType() is a logic-less getter returning an int. What's interesting is that the same metadata object is accessed numerous times a few lines above with other getters, and throws no exceptions.
By the way, there is no problem with the query above when run directly in MySQL.
The application runs in AWS.
Any ideas how to solve this?
Thanks.
I ran into this issue using a Spring Data CrudRepository, retrieving a Stream of results from a MySQL database running on AWS RDS with a moderately convoluted query.
It would also throw at a non-deterministic row, after about 30k rows.
I resolved this issue by annotating the calling method with @Transactional.
Although you're not using JPA, setting up a transaction for your database access may help your issue.
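For illustration, a minimal sketch of that arrangement, assuming Spring Data JPA; the entity, repository and query here are hypothetical stand-ins:

import java.util.stream.Stream;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// MyObjectEntity is a hypothetical JPA entity mapped to my_table.
interface MyObjectRepository extends CrudRepository<MyObjectEntity, Long> {
    @Query("select o from MyObjectEntity o where o.b = :b")
    Stream<MyObjectEntity> streamByB(@Param("b") int b);
}

@Service
public class MyObjectService {
    private final MyObjectRepository repository;

    public MyObjectService(MyObjectRepository repository) {
        this.repository = repository;
    }

    // @Transactional keeps the connection and the underlying result set open
    // for the whole stream, so the driver cannot close them mid-iteration.
    @Transactional(readOnly = true)
    public void processAll(int b) {
        try (Stream<MyObjectEntity> rows = repository.streamByB(b)) {
            rows.forEach(row -> { /* handle each row */ });
        }
    }
}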
In my experience, this error is the result of a locked table when reading from and writing to the same table simultaneously. You need to use a connection pool so every request gets its own connection, or add some wait time between operations against MySQL.
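As a sketch of the pooling idea, assuming HikariCP; the JDBC URL and credentials are placeholders:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PooledAccess {
    private static final HikariDataSource POOL = createPool();

    private static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        config.setUsername("user");                            // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(10);
        return new HikariDataSource(config);
    }

    public static void readRows() throws Exception {
        // Each request borrows its own connection from the pool, so readers
        // and writers no longer share a single handle.
        try (Connection conn = POOL.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select * from my_table where column_b = 1")) {
            while (rs.next()) {
                // handle row
            }
        }
    }
}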
Related answer as well:
Getting java.sql.SQLException: Operation not allowed after ResultSet closed
I was toying around with Azure Data Factory using the Sakila dataset. I set up a MariaDB (5.5.64) on a private CentOS 7.7 VM. I also ran into the same issue when I was using MySQL 8 instead of MariaDB.
I run a parameterized load pipeline in Azure Data Factory and repeatedly get this error inside a ForEach loop, every time with a different source table.
Error from Azure Data Factory:
{
    "errorCode": "2100",
    "message": "'Type=System.InvalidOperationException,Message=Collection was modified; enumeration operation may not execute.,Source=mscorlib,'",
    "failureType": "UserError",
    "target": "GET MAX MySQL",
    "details": []
}
Parameterized query running in the lookup activity:
SELECT MAX(#{item().WatermarkColumn}) as maxd FROM #{item().SRC_tab}
becomes
SELECT MAX(last_update) as maxd FROM sakila.actor
Please note that the error last appeared in the staff and category tables while I was using the MariaDB connector. After I switched to the MySQL connector, the error disappeared. However, in the past, when I used the MySQL connector and then switched to the MariaDB connector, the error persisted as well.
Have any of you experienced a similar behaviour? If yes, what were your workarounds?
Apologies, but we need more clarity here. As I understand it, does this issue occur with both the MariaDB connection and MySQL, or only with MySQL?
Just to let you know, the ADF team regularly deploys changes, so if an issue you experienced is not reproducible at this time, a fix may already have been deployed for it.
I very recently started the process of trying to get an undocumented and poorly designed project under control. After struggling to get the thing built locally, I started running into errors when going through various functionalities.
Most of these problems appear to be the result of MySQL errors caused by the way my product generates Hibernate criteria queries. For example, when doing an autocomplete on the displayName of an object, the resulting criteria query is very large: I end up with around 2200 fields to select from around 50 tables. When Hibernate attempts to execute this query I get an error:
30-Mar-2018 11:43:07.353 WARNING [http-nio-8080-exec-8] org.hibernate.util.JDBCExceptionReporter.logExceptions SQL Error: 1117, SQLState: HY000
30-Mar-2018 11:43:07.353 SEVERE [http-nio-8080-exec-8] org.hibernate.util.JDBCExceptionReporter.logExceptions Too many columns
[ERROR] 11:43:07 pos.services.HorriblyDefinedObjectAjax - could not execute query
org.hibernate.exception.GenericJDBCException: could not execute query
I turned on general logging for MySQL and obtained the criteria query being executed. If I attempt to execute it in MySQL Workbench I also get the following result:
Error Code: 1117. Too many columns
I've gone to the QA instances of this application and the autocompletes work there, which seems to indicate there is a way to make this huge query execute. Is it possible that I just do not have the right MySQL configuration?
Currently my sql_mode='NO_ENGINE_SUBSTITUTION'; is there anything else I might need to do?
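For illustration, one way to avoid such oversized selects is to narrow the Criteria query with a projection so it fetches only the column the autocomplete needs. A minimal sketch, assuming the Hibernate 3 Criteria API; MyEntity and the displayName property stand in for the real mapping:

import java.util.List;
import org.hibernate.Criteria;
import org.hibernate.Session;
import org.hibernate.criterion.Projections;
import org.hibernate.criterion.Restrictions;

public class AutocompleteDao {
    // Select only displayName instead of every mapped column across ~50 tables.
    @SuppressWarnings("unchecked")
    public List<String> suggest(Session session, String prefix) {
        return session.createCriteria(MyEntity.class)          // MyEntity is hypothetical
                .setProjection(Projections.property("displayName"))
                .add(Restrictions.ilike("displayName", prefix + "%"))
                .setMaxResults(20)
                .list();
    }
}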
In my code I use database->last_insert_id(undef,undef,undef,"id"); to get the auto-incremented primary key. This works 99.99% of the time, but once in a while it returns a value of 0.
In such situations, running a SELECT with a WHERE clause matching the values of the INSERT statement shows that the insert was successful, indicating that the last_insert_id method failed to return the proper data.
Is this a known problem with a known fix? Or should I follow up each call to last_insert_id with a check to see whether it is zero and, if so, run a SELECT statement to retrieve the correct ID value?
My version of mysql is
mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64)
Edit 1: Adding the actual failing code.
use Dancer2::Plugin::Database;

# <Rest of the code to create the insert parameter>

eval {
    database->quick_insert("build", $job);
    $job->{idbuild} = database->last_insert_id(undef, undef, undef, "idbuild");
    if ($job->{idbuild} == 0) {
        # Fallback: re-select the row when last_insert_id returns 0.
        my $build = database->quick_select("build", $job);
        $job->{idbuild} = $build->{idbuild};
    }
};
# $@ holds any error raised inside the eval block.
debug("=================Scheduler build Insert=======================*** ERROR :Got Error", $@) if $@;
Note: I am using Dancer's Database plugin, whose description says:
Provides an easy way to obtain a connected DBI database handle by simply calling the database keyword within your Dancer2 application.
Returns a Dancer::Plugin::Database::Core::Handle object, which is a subclass of DBI's DBI::db connection handle object, so it does everything you'd expect to do with DBI, but also adds a few convenience methods. See the documentation for Dancer::Plugin::Database::Core::Handle for full details of those.
I've never heard of this type of problem before, but I suspect your closing note may be the key. Dancer::Plugin::Database transparently manages database handles for you behind the scenes. This can be awfully convenient... but it also means that you could change from using one dbh to using a different dbh at any time. From the docs:
Calling database will return a connected database handle; the first time it is called, the plugin will establish a connection to the database, and return a reference to the DBI object. On subsequent calls, the same DBI connection object will be returned, unless it has been found to be no longer usable (the connection has gone away), in which case a fresh connection will be obtained.
(emphasis mine)
And, as ysth has pointed out in comments on your question, last_insert_id is handle-specific, which suggests that, when you get 0, that's likely to be due to the handle changing on you.
But there is hope! Continuing on in the D::P::DB docs, there is a database_connection_lost hook available which is called when the database connection goes away and receives the defunct handle as a parameter, which would allow you to check and record last_insert_id within the hook's callback sub. This could provide a way for you to get the id without the additional query, although you'd first have to work out a means of getting that information from the callback to your main processing code.
The other potential solution, of course, would be to not use D::P::DB and manage your database connections yourself so that you have direct control over when new connections are created.
For some time we have been receiving an "unknown column" error from our MySQL server.
The errors look like this:
Unknown column 'JOIN search_table��z[.cc.' in 'field list'
Unknown column '(`IX_cfs$order$make$model`) INNER JOIN search_t' in 'field list'
Unknown column 'eated, cp.stat_sales, cp.stat_views, cp.culture_code' in 'field list'
+ more
The strangest part is that it is completely random which methods in our .NET code get the errors. Even methods whose queries do not involve any of the reported tables sometimes report the "unknown column" error, with SQL inside that did not belong to that query... :-(
We are running Windows 2008, MySQL 5.0.45 and MySQL Connector 6.2 on .NET 3.5. We average 250 requests/second with peaks of 750 requests/second. MySQL CPU usage is 10-50% and memory usage is 5-6 GB (8 GB available).
The errors started only a few months ago, but have become more and more frequent, to the point that we now get 500+ errors per day from ELMAH. We suspect it could be something to do with a stressed MySQL server or a mix-up of connections (either in MySQL or in the .NET connection pool).
We have tried to reproduce it locally and on a separate, identical server setup, but so far no luck in regenerating the errors, as it does not happen for all SQL queries. A restart of the MySQL service eliminates the error for a period of time, but as our user base and server load are increasing by 10-15% per month, the error has become more and more frequent.
Any help, ideas, advice is very much appreciated...
Additional info:
We run all external parameters (query string, form post data, web service parameters, and also internal parameters) through a custom function that blocks SQL injection attempts. We do not use "dynamic" SQL; only stored procedures are used.
On top of this, the method that most frequently returns the "unknown column" error is a .NET method that takes only an Int32 as input parameter, and the corresponding MySQL stored procedure also takes only an int as parameter.
Also, we wrap everything in try-catch-finally, and the errors we are getting come from our error handling modules (primarily ELMAH).
It looks like a corrupt query string is getting passed to MySQL.
Your .Net application is almost certainly the culprit.
SUGGESTIONS:
Look again at the code that's making the queries.
If you're lucky, you can easily isolate the actual SQL.
In any case, make sure the relevant code (where you create the query, where you execute the query, and finally where you access the result) is wrapped in a try/catch block.
I'm guessing that some unhandled exception might be taking you out of the control-flow path you expect to be taking, resulting in data corruption.
I'm trying to write a Java app that imports a data file. The process is as follows:
Create Transaction
Delete all rows from datatable
Load data file into datatable
Commit OR Rollback if any errors were encountered.
The data loaded in step 3 is mostly the same as the data deleted in step 2.
The deletion is performed using the following:
DetachedCriteria criteria = DetachedCriteria.forClass(myObject.class);
List<myObject> myObjects = hibernateTemplate.findByCriteria(criteria);
hibernateTemplate.deleteAll(myObjects);
When I then load the data file, I get the following exception:
nested exception is org.hibernate.NonUniqueObjectException:
a different object with the same identifier value was already associated with the session:
The whole process needs to take place in a transaction.
And I don't really want to have to compare the import file / data table and then perform an insert/update/delete to get them into sync.
Any help would be appreciated.
Shortest answer: use session.merge().
Short answer: use plain JDBC; Hibernate is the wrong tool for this job.
Longer answer: see what your database tools support in this regard.
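A minimal sketch of the merge() route, assuming a plain Hibernate SessionFactory; MyRow is a hypothetical entity:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class MergeImporter {
    private final SessionFactory sessionFactory;

    public MergeImporter(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // merge() copies the state of each imported row onto whatever instance with
    // the same identifier is already associated with the session, so it does not
    // trigger NonUniqueObjectException the way saveOrUpdate() would.
    public void importRows(Iterable<MyRow> rows) {
        Session session = sessionFactory.getCurrentSession(); // assumes an active transaction
        for (MyRow row : rows) {
            session.merge(row);
        }
    }
}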
A solution could be to:
rename the existing table to old_table
create a new empty table
import the data into the new table
drop old_table
Your entire table would be locked in your use case, so this should not be a problem.
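A sketch of that swap over plain JDBC; the table name, URL and credentials are placeholders. Note that MySQL DDL statements commit implicitly, so on failure you would rename old_datatable back rather than rely on rollback:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TableSwapImport {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "secret"); // placeholders
             Statement st = conn.createStatement()) {
            st.executeUpdate("RENAME TABLE datatable TO old_datatable");
            st.executeUpdate("CREATE TABLE datatable LIKE old_datatable");
            // ... load the data file into datatable here, e.g. with batched INSERTs ...
            st.executeUpdate("DROP TABLE old_datatable");
        }
    }
}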
First idea: did you try to flush() the Session after step #2?
Second idea: use the StatelessSession interface. You may have to extend HibernateTemplate for that since SPR-6202 and SPR-2495 are unresolved.
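A hedged sketch of the StatelessSession route, again with a hypothetical MyRow entity:

import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

public class StatelessImporter {
    private final SessionFactory sessionFactory;

    public StatelessImporter(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // A StatelessSession keeps no first-level cache, so there is no
    // "already associated with the session" state to collide with.
    public void importRows(Iterable<MyRow> rows) {
        StatelessSession session = sessionFactory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        try {
            for (MyRow row : rows) {
                session.insert(row);
            }
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}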