MySQL crash while generating entity data model in Visual Studio 2019

When I try to generate the entity models from an existing database, the mysqld service crashes.
This does not happen with MySQL 8.0.20, only with 8.0.21. I was hoping to use the new JSON features added in that release, but this problem is driving me nuts.
(screenshot: MySQL products installed)
The entity wizard connects to the server fine and shows the tables I want to import, but when the import process begins it throws a "connection lost" exception and I can see that mysqld.exe has stopped.
The MySQL error log shows nothing useful except the stack trace and part of the query generated by the wizard:
15:45:58 UTC - mysqld got exception 0xc0000005 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
Thread pointer: 0x1c9945fcfc0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
7ff7c812f74b mysqld.exe!?get_full_info#Item_aggregate_type##IEAAXPEAVItem###Z()
7ff7c81228d6 mysqld.exe!??0Item_aggregate_type##QEAA#PEAVTHD##PEAVItem###Z()
7ff7c83ab288 mysqld.exe!?prepare#SELECT_LEX_UNIT##QEAA_NPEAVTHD##PEAVQuery_result##_K2#Z()
7ff7c841dd0e mysqld.exe!?resolve_derived#TABLE_LIST##QEAA_NPEAVTHD##_N#Z()
7ff7c83d40d6 mysqld.exe!?resolve_placeholder_tables#SELECT_LEX##QEAA_NPEAVTHD##_N#Z()
7ff7c83d25aa mysqld.exe!?prepare#SELECT_LEX##QEAA_NPEAVTHD###Z()
7ff7c83ab191 mysqld.exe!?prepare#SELECT_LEX_UNIT##QEAA_NPEAVTHD##PEAVQuery_result##_K2#Z()
7ff7c841dd0e mysqld.exe!?resolve_derived#TABLE_LIST##QEAA_NPEAVTHD##_N#Z()
7ff7c83d40d6 mysqld.exe!?resolve_placeholder_tables#SELECT_LEX##QEAA_NPEAVTHD##_N#Z()
7ff7c83d25aa mysqld.exe!?prepare#SELECT_LEX##QEAA_NPEAVTHD###Z()
7ff7c83ab191 mysqld.exe!?prepare#SELECT_LEX_UNIT##QEAA_NPEAVTHD##PEAVQuery_result##_K2#Z()
7ff7c841dd0e mysqld.exe!?resolve_derived#TABLE_LIST##QEAA_NPEAVTHD##_N#Z()
7ff7c83d40d6 mysqld.exe!?resolve_placeholder_tables#SELECT_LEX##QEAA_NPEAVTHD##_N#Z()
7ff7c83d25aa mysqld.exe!?prepare#SELECT_LEX##QEAA_NPEAVTHD###Z()
7ff7c832980c mysqld.exe!?prepare_inner#Sql_cmd_select##MEAA_NPEAVTHD###Z()
7ff7c832942c mysqld.exe!?prepare#Sql_cmd_dml##UEAA_NPEAVTHD###Z()
7ff7c8325ef5 mysqld.exe!?execute#Sql_cmd_dml##UEAA_NPEAVTHD###Z()
7ff7c822d36d mysqld.exe!?mysql_execute_command##YAHPEAVTHD##_N#Z()
7ff7c822dfc9 mysqld.exe!?mysql_parse##YAXPEAVTHD##PEAVParser_state###Z()
7ff7c8226eb2 mysqld.exe!?dispatch_command##YA_NPEAVTHD##PEBTCOM_DATA##W4enum_server_command###Z()
7ff7c8227e6e mysqld.exe!?do_command##YA_NPEAVTHD###Z()
7ff7c80726c8 mysqld.exe!?modify_thread_cache_size#Per_thread_connection_handler##SAXK#Z()
7ff7c93322a1 mysqld.exe!?set_compression_level#Zstd_comp#compression#transaction#binary_log##UEAAXI#Z()
7ff7c8f3739c mysqld.exe!?my_thread_join##YAHPEAUmy_thread_handle##PEAPEAX#Z()
7fff75851542 ucrtbase.dll!_configthreadlocale()
7fff77ab6fd4 KERNEL32.DLL!BaseThreadInitThunk()
7fff77bfcec1 ntdll.dll!RtlUserThreadStart()
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (1c999a99d08): SELECT
`Project7`.`C12` AS `C1`,
`Project7`.`C1` AS `C2`,
`Project7`.`C2` AS `C3`,
`Project7`.`C3` AS `C4`,
`Project7`.`C4` AS `C5`,
`Project7`.`C5` AS `C6`,
`Project7`.`C6` AS `C7`,
`Project7`.`C7` AS `C8`,
`Project7`.`C8` AS `C9`,
`Project7`.`C9` AS `C10`,
`Project7`.`C10` AS `C11`
FROM (SELECT
`UnionAll3`.`SchemaName` AS `C1`,
`UnionAll3`.`Name` AS `C2`,
`UnionAll3`.`ReturnTypeName` AS `C3`,
`UnionAll3`.`IsAggregate` AS `C4`,
`UnionAll3`.`C1` AS `C5`,
`UnionAll3`.`IsBuiltIn` AS `C6`,
`UnionAll3`.`IsNiladic` AS `C7`,
`UnionAll3`.`C2` AS `C8`,
`UnionAll3`.`C3` AS `C9`,
`UnionAll3`.`C4` AS `C10`,
`UnionAll3`.`C5` AS `C11`,
1 AS `C12`
FROM ((SELECT
`Extent1`.`SchemaName`,
`Extent1`.`Name`,
`Extent1`.`ReturnTypeName`,
`Extent1`.`IsAggregate`,
1 AS `C1`,
`Extent1`.`IsBuiltIn`,
`Extent1`.`IsNiladic`,
`UnionAll1`.`Name` AS `C2`,
`UnionAll1`.`TypeName` AS `C3`,
`UnionAll1`.`Mode` AS `C4`,
`UnionAll1`.`Ordinal` AS `C5`
FROM (
SELECT /* Funct
Connection ID (thread ID): 18
Status: NOT_KILLED
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
Is this a bug or some kind of configuration error on my part?

This proved to be a bug in the new query optimizations, but I was able to find a workaround by disabling some of the optimizations on the server.
Here is the command I used:
SET GLOBAL optimizer_switch='derived_merge=off,subquery_to_derived=off,prefer_ordering_index=off,semijoin=off';
Hope it helps until they fix the bug.
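In case it is easier to apply the workaround from application code than from a MySQL client, here is a minimal sketch doing the same thing with mysql-connector-python; the connector choice and the credentials are just placeholders, the SET GLOBAL statement itself is the one above. Keep in mind that SET GLOBAL only affects new connections and does not survive a server restart (on 8.0 you could use SET PERSIST instead).
import mysql.connector

# Placeholder credentials; the account needs privileges to change
# global system variables.
conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

# Disable the optimizations involved in the crash (same switches as above).
cur.execute(
    "SET GLOBAL optimizer_switch="
    "'derived_merge=off,subquery_to_derived=off,"
    "prefer_ordering_index=off,semijoin=off'"
)

# Confirm the current global setting.
cur.execute("SELECT @@GLOBAL.optimizer_switch")
print(cur.fetchone()[0])

cur.close()
conn.close()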

Related

How to Close a Connection When Using Jupyter SQL Magic?

I'm using SQL Magic to connect to a db2 instance. However, I can't seem to find the syntax anywhere on how to close the connection when I'm done querying the database.
You cannot explicitly close a connection using Jupyter SQL Magic. In fact, that is one of the shortcomings of using Jupyter SQL Magic to connect to Db2. You need to end your session to close the Db2 connection. Hope this helps.
This probably isn't very useful, and to the extent it is, it's probably not guaranteed to work in the future. But if you need a really hackish way to close the connection, I was able to do it this way (for a Postgres db; I assume it's similar for Db2):
In[87]: connections = %sql -l
Out[87]: {'postgresql://ngd#node1:5432/graph': <sql.connection.Connection at 0x7effdbcf6b38>}
In[88]: conn = connections['postgresql://ngd#node1:5432/graph']
In[89]: conn.session.close()
In[90]: %sql SELECT 1
...
StatementError: (sqlalchemy.exc.ResourceClosedError) This Connection is closed
[SQL: SELECT 1]
[parameters: [{'__name__': '__main__', '__doc__': 'Automatically created module for IPython interactive environment', '__package__': None, '__loader__': None, '__s ... (123202 characters truncated) ... stgresql://ngd#node1:5432/graph']", '_i28': "conn = connections['postgresql://ngd#node1:5432/graph']\nconn.session.close()", '_i29': '%sql SELECT 1'}]]
A big problem is that reconnecting doesn't seem to work. Even after running %reload_ext sql and trying to connect again, it still thinks the connection is closed when you try to use it. So unless someone knows how to fix that behavior, this is only useful for disconnecting if you don't want to reconnect (to the same db with the same params) before restarting the kernel.
You can also restart the kernel.
This is the simplest way I've found to close all connections at the end of the session. You must restart the kernel to be able to re-establish the connection.
connections = %sql -l
[c.session.close() for c in connections.values()]
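If you do this often, you can wrap those two lines in a small helper. This is just a convenience sketch assuming the same ipython-sql behaviour shown above (%sql -l returning a dict of connections whose .session is a SQLAlchemy connection); a kernel restart is still needed before reconnecting.
from IPython import get_ipython

def close_all_sql_connections():
    # Programmatic equivalent of: connections = %sql -l
    connections = get_ipython().run_line_magic('sql', '-l')
    for url, conn in connections.items():
        conn.session.close()
        print("closed", url)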
Sorry for being late, but I've just started working with SQL Magic and got annoyed by the constant errors appearing. It's a bit of an awkward patch, but this helped me use it.
def multiline_qry(qry):
    try:
        %sql {qry}
    except Exception as ex:
        if str(type(ex).__name__) != 'ResourceClosedError':
            template = "An exception of type {0} occurred. Arguments:\n{1!r}"
            message = template.format(type(ex).__name__, ex.args)
            print(message)
qry = '''DROP TABLE IF EXISTS EMPLOYEE;
CREATE TABLE EMPLOYEE(firstname varchar(50),lastname varchar(50));
INSERT INTO EMPLOYEE VALUES('Tom','Mitchell'),('Jack','Ryan');
'''
multiline_qry(qry)
Log out of the notebook first if you want to close the connection.

MySQLNonTransientConnectionException in PDI

I have a problem with MySQL in PDI (Kettle). This error appears while reading data with a Table Input step. Even though all the data is read from the database successfully, the error still appears and probably doesn't affect the transformation.
Error comitting connection
Communications link failure during commit(). Transaction resolution unknown.
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Communications link failure during commit(). Transaction resolution unknown.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)...
Why does this problem happen?
This is a MySQL error documented in a manual page with a nice title: MySQL server has gone away.
Matt Casters (the main author of Kettle) gives a bunch of solutions on the Pentaho wiki, which has not yet been migrated to the Hitachi Vantara forum.
Matt's first solution is to increase net_write_timeout. The default is 60 and he increased it to 1800, mentioning that a smaller value may be sufficient.
In order to do this, edit the connection and select Options on the left panel.
Then enter net_write_timeout in the Parameters column and 1800 as the value.
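If you would rather raise the timeout on the MySQL server itself instead of per PDI connection, net_write_timeout is an ordinary server system variable, so you can change it globally. A minimal sketch, assuming mysql-connector-python and placeholder admin credentials (1800 is the value Matt suggests):
import mysql.connector

# Placeholder credentials; the account needs privileges to set global variables.
conn = mysql.connector.connect(host="db-host", user="admin", password="secret")
cur = conn.cursor()

# Raise the write timeout server-wide; new sessions pick this up.
cur.execute("SET GLOBAL net_write_timeout = 1800")

# Verify the new global value.
cur.execute("SHOW GLOBAL VARIABLES LIKE 'net_write_timeout'")
print(cur.fetchone())

cur.close()
conn.close()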

Progress SQL error in ssis package: buffer too small for generated record

I have an SSIS package which uses a SQL command to get data from a Progress database. Every time I execute the query, it throws this specific error:
ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver][OPENEDGE]Internal error -1 (buffer too small for generated record) in SQL from subsystem RECORD SERVICES function recPutLONG called from sts_srtt_t:::add_row on (ttbl# 4, len/maxlen/reqlen = 33/32/33) for . Save log for Progress technical support.
I am running the following query:
Select max(ROWID) as maxRowID from TableA
GROUP BY ColumnA,ColumnB,ColumnC,ColumnD
I've had the same error.
After changing the startup parameters -SQLTempStorePageSize and -SQLTempStoreBuff to 24 and 3000 respectively, the problem was solved.
I think for you the values would need to be changed to 40 and 20000.
You can find more information here. The name of the parameter in that article was a bit different than in my database; it depends on the Progress version that is used.

How to get the processes running the sql server

I am getting a deadlock problem on a table in our database.
I am getting the following error message:
3/13/2015 11:37:35 AM
System.Data.SqlClient.SqlException: Transaction (Process ID 143) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at clsdb.clsDB.execcmd(String strsql)
at clspllog.Clspllog.RunXPSPushliveResponse(String pushtype, String journalname, String strvol, String strissue, String articleid, String PushLiveResponseTime, Int32 plstatus, String plstatusmsg, String strerrmsg)
There is a method named RunXPSPushliveResponse that uses the table articleschedule. It tries to update this table, but it gets the above error message.
I am not able to tell which other process is using this table, so I cannot take any action.
Is there some way to find the processes using this table, or any other approach? I am totally blank on this issue; any hint will be much appreciated. I am a fresher, so I don't have many ideas on how to rectify this.
Any help is appreciated.
You can use the queries from the following question to get the info mentioned in the log:
SQL Server: Check for all running processes and kill?
You can also use Activity Monitor, which can help you see all the activity on the server:
Activity Monitor
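If you prefer to query this from code rather than through Activity Monitor, the blocking information is exposed by the standard DMVs sys.dm_exec_requests and sys.dm_exec_sessions. Here is a hedged sketch, assuming pyodbc and a placeholder connection string:
import pyodbc

# Placeholder connection string; adjust server, database and authentication.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cur = conn.cursor()

# List requests that are currently blocked and which session is blocking them.
cur.execute("""
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.command,
           s.login_name,
           s.program_name
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
    WHERE r.blocking_session_id <> 0
""")
for row in cur.fetchall():
    print(row)

conn.close()
Running EXEC sp_who2 in Management Studio gives a similar overview, and the deadlock graph captured by the system_health Extended Events session will show you exactly which statements deadlocked.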

Mysql + django exception: "Commands out of sync; you can't run this command now"

Running Django via gunicorn against RDS (AWS MySQL), I'm seeing this error in my gunicorn logs:
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x690ecd0>> ignored
I can't reliably reproduce it yet, nor can I track down the underlying code that's causing it.
I am using raw cursors in some places, following this pattern:
cursor = connections['read_only'].cursor()
sql = "select username from auth_user;"
cursor.execute(sql)
rows = cursor.fetchall()
usernames = []
for row in rows:
    usernames.append(row[0])
In some places I immediately reuse the cursor for another query execute() / fetchall() pattern. Sometimes I don't.
I also use raw manager queries in some places.
I'm not explicitly closing cursors, but I don't believe that I should.
Other than that: I'm not using any stored procedures, no init_command parameters, nor anything else indicated in the other answers I've seen posted here.
Any ideas or suggestions for how to debug would be appreciated.
Check out https://code.djangoproject.com/ticket/17289
you'll need to do something like:
while cursor.nextset() is not None:
    if verbose:
        print("rows modified %s" % cursor.rowcount)