For some time we have been receiving an "unknown column" error from our MySQL server.
The errors look like this:
Unknown column 'JOIN search_table��z[.cc.' in 'field list'
Unknown column '(`IX_cfs$order$make$model`) INNER JOIN search_t' in 'field list'
Unknown column 'eated, cp.stat_sales, cp.stat_views, cp.culture_code' in 'field list'
+ more
The strangest part is that it's completely random which methods in our .NET code get the errors. Even methods whose queries do not involve any of the reported tables sometimes report the "unknown column" error, with SQL fragments that do not belong to that query... :-(
We are running Windows 2008, MySQL 5.0.45, MySQL Connector 6.2, and .NET 3.5. We average 250 requests/second with peaks of 750 requests/second. MySQL CPU usage is 10-50% and memory usage is 5-6 GB (8 GB available).
The errors started only a few months ago but have become more and more frequent, to the point that we get 500+ errors per day reported by ELMAH. We suspect it could be something with a stressed MySQL server, or a mix-up of connections (either in MySQL or in the .NET connection pool).
We have tried to reproduce it locally and on a separate, identical server setup, but so far without luck; it does not happen for all SQL queries, and a restart of the MySQL service eliminates the error for a period of time. But as our user base and server load increase 10-15% per month, the error has become more and more frequent.
Any help, ideas, advice is very much appreciated...
Additional info:
We run all external parameters (query string, form post data, web-service parameters) and internal parameters through a custom function that removes SQL injection attempts. And we do not use "dynamic" SQL; only stored procedures are used.
On top of this, the method that most frequently returns the "unknown column" error is a .NET method that takes only an Int32 as its input parameter, and the corresponding MySQL stored procedure likewise takes only an INT parameter.
Also, we wrap everything in try-catch-finally, and the errors we are seeing come from our error-handling modules (primarily ELMAH).
It looks like a corrupt query string is being passed to MySQL.
Your .NET application is almost certainly the culprit.
SUGGESTIONS:
Look again at the code that makes the queries.
If you're lucky, you can easily isolate the actual SQL.
In any case, make sure the relevant code (where you create the query, where you execute it, and where you access the result) is wrapped in a try/catch block.
I'm guessing that some unhandled exception is taking you out of the flow-of-control path you expect, resulting in data corruption.
Related
I have a database in SQL Server that I am trying to convert into a MySQL database, so I can host it on AWS and move everything off-premises. From this link, it seems like normally this is no big deal, although that link doesn't seem to migrate from a .bak file so much as from your local instance of SQL Server that is running and contains the database in question. No big deal, I can work with that.
However when I actually use MySQL Workbench to migrate using these steps, it gets to the Bulk Data Transfer step, and then comes up with odd errors.
I get errors like the following:
ERROR: OptionalyticsCoreDB-Prod.UserTokens:Inserting Data: Data too long for column 'token' at row 1
ERROR: OptionalyticsCoreDB-Prod.UserTokens:Failed copying 6 rows
ERROR: OptionalyticsCoreDB-Prod.UserLogs:Inserting Data: Data too long for column 'ActionTaken' at row 1
ERROR: OptionalyticsCoreDB-Prod.UserLogs:Failed copying 244 rows
However, the data should not be "too long." These columns are nvarchar(MAX) in SQL Server, and the data in them is often very short in the specified rows, nothing that approaches the maximum length of an nvarchar.
Links like this and this show that there used to be bugs with nvarchar formats almost a decade ago, but they've been fixed for years now. I have checked, updated, and restarted my software and then my computer; I have up-to-date versions of MySQL and MySQL Workbench. So what's going on?
What is the problem here, and how do I get my database successfully migrated? Surely it's possible to migrate from SQL Server to MySQL, right?
I have answered my own question... Apparently there IS some sort of bug in Workbench when translating SQL Server nvarchar(MAX) columns. I output the schema migration to a script and examined it; it was translating those columns as varchar(0). After replacing all of them with TEXT columns, the migration completed successfully.
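As a sketch of what this looks like (the table is taken from the error messages above, but the exact generated DDL is illustrative): the migration script defined the column with zero length, so every non-empty value triggered "Data too long," and hand-editing it to TEXT resolved it.

```sql
-- What Workbench generated from nvarchar(MAX) (broken):
CREATE TABLE UserTokens (
    id INT PRIMARY KEY,
    token VARCHAR(0)  -- zero length, so any non-empty value is "too long"
);

-- Hand-edited replacement that migrated cleanly:
DROP TABLE UserTokens;
CREATE TABLE UserTokens (
    id INT PRIMARY KEY,
    token TEXT  -- the fix the answer describes for nvarchar(MAX) columns
);
```

Note that TEXT holds up to 64 KB; if any source values were larger, MEDIUMTEXT or LONGTEXT would be the closer analogue of nvarchar(MAX).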
Frustrating lesson.
I very recently started the process of trying to get an undocumented and poorly designed project under control. After struggling to get the thing built locally, I started running into errors when going through various pieces of functionality.
Most of these problems appear to be the result of MySQL errors caused by the way my product generates Hibernate criteria queries. For example, when doing an autocomplete on the displayName of an object, the resulting criteria query is very large: I end up selecting around 2,200 fields from around 50 tables. When Hibernate attempts to execute this query I get an error:
30-Mar-2018 11:43:07.353 WARNING [http-nio-8080-exec-8] org.hibernate.util.JDBCExceptionReporter.logExceptions SQL Error: 1117, SQLState: HY000
30-Mar-2018 11:43:07.353 SEVERE [http-nio-8080-exec-8] org.hibernate.util.JDBCExceptionReporter.logExceptions Too many columns
[ERROR] 11:43:07 pos.services.HorriblyDefinedObjectAjax - could not execute query
org.hibernate.exception.GenericJDBCException: could not execute query
I turned on general logging for MySQL and captured the criteria query being executed. If I attempt to run it in MySQL Workbench I get the same result:
Error Code: 1117. Too many columns
I've gone to the QA instances of this application and the autocompletes work there, which seems to indicate there is a way to make this huge query execute. Is it possible that I just don't have the right MySQL configuration?
Currently my sql_mode='NO_ENGINE_SUBSTITUTION'; is there anything else I might need to do?
I'm a bit stumped by this. I've used MySQL 5.7.x before and have always been able to fix this issue by removing ONLY_FULL_GROUP_BY from the sql_mode in the MySQL config. However, today I appear to be unable to do so; even removing it from the sql_mode setting doesn't stop me from receiving this error.
I know what causes the error, and I know there is a workaround you can add to the SQL, but I do not have time to fix the literally hundreds of queries in our application that trigger it.
SQLMode setting in mysql.cnf:
sql_mode = " ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
This is then confirmed by running the query:
SELECT @@SQL_MODE;
Result:
'ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'
However, when running the stored proc, it still produces the error:
SQLSTATE[42000]: Syntax error or access violation: 1140 In aggregated query without GROUP BY, expression #1 of SELECT list contains nonaggregated column 'schema.emp.company_id'; this is incompatible with sql_mode=only_full_group_by
Does anyone know why MySQL appears not to recognise that the option has been disabled?
I'm running 5.7.21-0ubuntu0.17.10.1 (on Ubuntu 17.10, not surprisingly!)
Well, just to double the bizarreness of the entire issue, it appears that if you drop and recreate the stored proc (with no code changes; it's exactly the same), the issue goes away.
This wasn't an existing schema that MySQL upgraded over: I did a fresh install of Ubuntu 17.10 on my machine only a few weeks ago and restored everything from backups onto it.
I'm at a loss to explain the above; I can only presume MySQL caches sql modes when it compiles(?) stored procs. I don't know if it does compile them when you create them, but it's clearly caching something.
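That presumption matches documented MySQL behavior: a stored program captures the sql_mode in effect when it is created and always executes with that mode, regardless of the current server or session setting, which is why restoring the procedures from backup preserved the old strict mode. A sketch of how to confirm and fix this (the procedure name and body are illustrative, not from the actual schema):

```sql
-- Each routine records the sql_mode it was created under:
SELECT routine_name, sql_mode
FROM information_schema.routines
WHERE routine_schema = 'schema' AND routine_name = 'my_proc';

-- Dropping and recreating the routine makes it capture the
-- current (relaxed) sql_mode, which is why the fix above works:
DROP PROCEDURE IF EXISTS my_proc;
DELIMITER //
CREATE PROCEDURE my_proc()
BEGIN
    SELECT company_id FROM emp;  -- illustrative body, no code change needed
END //
DELIMITER ;
```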
We are running on Google Compute Engine/Debian9/PHP/Lumen/Doctrine2 <-> Google SQL MySQL 2nd Gen 5.7.
Usually it works without hiccups, but we are now getting error messages, similar to the one below, with increasing frequency:
Error while sending QUERY packet. PID=123456
PDOStatement::execute(): MySQL server has gone away
Any idea why this is happening and how I would fix it?
As noted here, there is a list of cases that may cause this error. A few are:
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You can also get these errors if you send a query to the server that is incorrect or too large... An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors.
. . .
Please refer to the link for a complete list.
Also, see this answer on the same problem.
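Two server settings worth checking for this class of error, assuming one of the usual causes above (values shown are illustrative):

```sql
-- How long an idle connection survives before the server drops it;
-- a pooled PHP connection idling past this limit gets "gone away"
-- on its next query:
SHOW VARIABLES LIKE 'wait_timeout';

-- The largest packet (query or row) the server accepts; an oversized
-- INSERT fails with the same error:
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Possible mitigation (requires SUPER privilege; size is illustrative):
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;
```

On Google Cloud SQL, these are typically set as database flags in the console rather than with SET GLOBAL.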
I have been charged with maintaining a legacy classic ASP application. The application uses an ODBC system DSN to connect to a MySQL database.
We recently had to update the servers to satisfy some licensing requirements. We were on Windows with MySQL 4.x and the 3.51 ODBC driver. We moved to a Linux machine running MySQL 5.1.43, with the 5.1.6 ODBC driver on the new IIS server.
Users almost instantly started reporting errors such as:
Row cannot be located for updating.
Some values may have been changed
since it was last read.
This is a ghost error: the same data changes, on the same record, at a different time won't always produce the error. It is also intermittent across records; on some records, no matter what values I plug in, I haven't been able to reproduce the defect at all.
It is happening across 70 of about 120 scripts, many over 1,000 lines long.
The only consistency I can find is that all of the failing scripts read/write floats to the DB. Fields with a NULL value don't seem to crash, but a value like '19' in the database (note: no decimal places) seems to fail, whereas '19.00' does not. Most floats are defined as (11,2).
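If float comparison is indeed the cause, it can be demonstrated directly: a FLOAT column stores a binary approximation, so an equality test against the original decimal literal can miss. A minimal sketch (the table and column are illustrative):

```sql
CREATE TABLE float_demo (price FLOAT(11,2));
INSERT INTO float_demo VALUES (19.95);

-- May return no rows: 19.95 has no exact binary representation, so the
-- stored FLOAT need not compare equal to the literal in the WHERE clause:
SELECT * FROM float_demo WHERE price = 19.95;
```

Declaring the column as DECIMAL(11,2) instead stores an exact decimal value and makes such comparisons reliable.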
The scripts are using ADODB and recordsets. Updates are done with the following pattern:
select * from table where ID = <updated record ID>
update the properties of the record from the form
call RecordSet.Update and RecordSet.Close
The error is generated from the RecordSet.Update command.
I have created a workaround: rather than select/copy/update, I generate a SQL statement that I execute directly. This works flawlessly (obviously, an UPDATE statement with a WHERE clause is more focused and doesn't consider fields that weren't updated), so I have a pretty good feeling it is a rounding issue with the floats causing a mismatch when the record is re-retrieved during the update call.
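The workaround pattern, roughly (table and column names are illustrative): instead of letting the recordset rebuild a WHERE clause from every originally-read field value, issue an UPDATE keyed only on the primary key.

```sql
-- ADODB's optimistic update matches the row on all originally-read
-- values, so a single float that round-trips inexactly through ODBC
-- makes the row "not found". Keying on the ID sidesteps that:
UPDATE orders
SET amount = 19.00,
    status = 'shipped'
WHERE ID = 42;
```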
I really would prefer NOT to rewrite hundreds of these instances (a grep across the source directory finds 280+ update calls).
Can anyone confirm that the issue here is related to floats/rounding?
And if so, is there a global fix I can apply?
Thanks in advance,
-jc
Have a look at MySQL Forums :: ODBC :: Row cannot be located for updating.
They seem to have found some workarounds, and some explanations as well.
I ran into a similar issue with a VBA macro using MySQL 4.1. When we upgraded to 5.x, errors started popping up.
For me the issue was that values returned to VBA from MySQL were in a decimal format unhandled by VBA.
A CAST on the numbers in the query helped fix the issue.
So for your issue, perhaps the ODBC/ASP combination is recording/reading values differently than you expect them to be.
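The CAST workaround looks roughly like this (column and table names are illustrative): convert the float to an exact fixed-point decimal in the query, so the driver receives a type it formats consistently.

```sql
-- Return the float as a DECIMAL so the ODBC layer receives an exact,
-- consistently-formatted value instead of a binary approximation:
SELECT CAST(amount AS DECIMAL(11,2)) AS amount
FROM orders
WHERE ID = 42;
```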