Query crashing while using pg8000 and SQLAlchemy

I am querying an Amazon Redshift table using pg8000 and SQLAlchemy. Sometimes the query crashes with the following error:
pg8000.core.ProgrammingError: ('ERROR', '42P01', 'Relation with OID 1100223 does not exist.', '/home/ec2-user/padb/src/pg/src/backend/catalog/aclchk.c', '1423', 'pg_class_aclmask')
However, this happens for only one table in the database, and the behaviour is completely random. I am not sure why SQLAlchemy is raising a ProgrammingError, or whether that is correct.
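One possible explanation (an assumption, not something confirmed here) is that the table is periodically dropped and recreated, which changes its OID while pooled connections still hold stale state. Below is a minimal retry sketch based on that assumption; the connection URL and table name are placeholders.

    # Minimal sketch, assuming the OID error comes from the table being
    # dropped and recreated while pooled connections hold stale state.
    from sqlalchemy import create_engine, exc, text

    engine = create_engine("postgresql+pg8000://user:pass@redshift-host:5439/db")

    def run_with_retry(sql, attempts=2):
        for attempt in range(attempts):
            try:
                with engine.connect() as conn:
                    return conn.execute(text(sql)).fetchall()
            except exc.ProgrammingError as err:
                if "does not exist" in str(err) and attempt < attempts - 1:
                    engine.dispose()  # drop pooled connections, retry fresh
                else:
                    raise

    rows = run_with_retry("SELECT * FROM my_table")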

Related

Connection Error Code: 2013 happening only for one specific table in MySQL Workbench, but all other tables work fine

I'm using MySQL Workbench 8.0 on Windows 10.
When I run a simple SELECT query
SELECT * FROM some_table;
It will eventually time out with the error
Error Code: 2013. Lost connection to MySQL server during query
This only happens for one specific table; all other tables work just fine. I've tried setting the timeout longer than the default (30 seconds), but even after a very long wait the result is the same.
This is different from this similar question because the OP there was getting the error when adding an index to a table, whereas in my situation it happens for any query, but only on one table.
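For completeness, the timeouts can also be raised at the session level and the query retried outside Workbench, to rule out a client-side limit. A minimal sketch using PyMySQL; credentials and database name are placeholders.

    # Sanity check outside Workbench: raise the session network timeouts
    # and run the same query through a plain driver connection.
    import pymysql

    conn = pymysql.connect(host="localhost", user="me", password="secret",
                           database="mydb", read_timeout=600)
    with conn.cursor() as cur:
        cur.execute("SET SESSION net_read_timeout = 600")
        cur.execute("SET SESSION net_write_timeout = 600")
        cur.execute("SELECT * FROM some_table")
        print(len(cur.fetchall()))
    conn.close()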

Too many columns in query, MySQL Error Code: 1117

I very recently started the process of trying to get an undocumented and poorly designed project under control. After struggling to get the thing built locally, I started running into errors when going through various functionalities.
Most of these problems appear to be the result of MySQL errors caused by the way the product generates Hibernate criteria queries. For example, when doing an autocomplete on the displayName of an object, the criteria query that results from this action is very large: I end up selecting around 2200 fields from around 50 tables. When Hibernate attempts to execute this query I get an error:
30-Mar-2018 11:43:07.353 WARNING [http-nio-8080-exec-8] org.hibernate.util.JDBCExceptionReporter.logExceptions SQL Error: 1117, SQLState: HY000
30-Mar-2018 11:43:07.353 SEVERE [http-nio-8080-exec-8] org.hibernate.util.JDBCExceptionReporter.logExceptions Too many columns
[ERROR] 11:43:07 pos.services.HorriblyDefinedObjectAjax - could not execute query
org.hibernate.exception.GenericJDBCException: could not execute query
I turned on general logging for MySQL and obtained the criteria query that is being executed. If I attempt to execute it in MySQL Workbench I also get the following result:
Error Code: 1117. Too many columns
I've gone to the QA instances of this application and the autocompletes work there, which seems to indicate that this huge query can execute. Is it possible that I just don't have the right MySQL configuration?
Currently sql_mode='NO_ENGINE_SUBSTITUTION'; is there anything else I might need to set?
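Since the QA instance executes the same query, a configuration diff between the two servers seems like a reasonable first check. A minimal sketch, assuming direct access to both servers; hostnames and credentials are placeholders.

    # Diff the global server variables between the QA server (where the
    # query works) and the failing one.
    import pymysql

    def server_variables(host):
        conn = pymysql.connect(host=host, user="me", password="secret")
        try:
            with conn.cursor() as cur:
                cur.execute("SHOW GLOBAL VARIABLES")
                return dict(cur.fetchall())
        finally:
            conn.close()

    qa = server_variables("qa-db-host")
    local = server_variables("localhost")
    for name in sorted(set(qa) | set(local)):
        if qa.get(name) != local.get(name):
            print(f"{name}: qa={qa.get(name)!r} local={local.get(name)!r}")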

Rails persisting table data after MySQL truncate

I truncated 2 tables in my MySQL database (for a Rails project) so that I could repopulate them with test data. But for some reason the application is still counting how many entries there used to be (250), even though there are only 9 entries now.
I even went into the Rails console (ruby script/rails console) and truncated with:
ActiveRecord::Base.connection.execute("TRUNCATE TABLE bars;")
but that didn't do anything different than running the query through MySQL. I am pretty confused; the only thing I can think of doing is restarting the server. I am just wondering if there is another way to do this without having to reboot everything.
Printing the search results to the logger, I can see that the results for the bars are a bunch of nil values where there used to be a bar_profile, even though I have truncated the tables that referenced bars or bar_profiles.
So I don't get why it would return what the results would have been before the tables were truncated, except that now, instead of actual results, they are just nil.
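One way to pin down which layer is stale is to count the rows outside Rails entirely. A minimal sketch, assuming direct access to the same MySQL database the app uses; credentials and database name are placeholders.

    # If this prints 9 while the app still reports 250, the stale count
    # lives in the application layer, not in MySQL.
    import pymysql

    conn = pymysql.connect(host="localhost", user="me", password="secret",
                           database="app_db")
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM bars")
        print(cur.fetchone()[0])
    conn.close()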

SSIS: SQL 2008 R2 to MySQL data loss

I have an SSIS package set up to export data from a SQL Server 2008 R2 table to a MySQL version of that table. The package executes; however, about 1% of the rows fail to export.
My source connection uses the SQL statement
SELECT * FROM Table1
All of the columns are integers. An example of a row which is exported successfully is
2169, 2680, 3532, NULL, 2169
compared to a row which fails:
2168, 2679, 3532, NULL, 2168
There is virtually nothing different that I can ascertain.
Notably, if I change the source query to attempt the transfer of only a single failing row, i.e.
SELECT * FROM Table1 WHERE ID = 2168
then the record exports fine; it only fails when it is part of a SELECT that returns multiple rows. The same rows fail the export each time. I have redirected error rows to a text file, which shows a -1071610801 error for the failing rows. This apparently translates to:
DTS_E_ADODESTERRORUPDATEROW: "An error has occurred while sending this row to destination data source."
which doesn't really add a great deal to my understanding of the issue!
I am wondering if there is a locking issue or something else preventing given rows from being fetched or inserted correctly. If anyone has any ideas or suggestions on what might be causing this, or better still how to resolve it, they would be greatly appreciated. I am currently at a total loss...
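One way to surface the real per-row error is to replay the transfer outside SSIS, inserting row by row. A rough diagnostic sketch; connection strings, credentials, and the 5-column layout (taken from the sample rows above) are placeholders.

    # Replay the transfer row by row so each failure surfaces its actual
    # error message instead of SSIS's opaque -1071610801.
    import pyodbc
    import pymysql

    src = pyodbc.connect("DRIVER={SQL Server};SERVER=sqlhost;DATABASE=srcdb;"
                         "Trusted_Connection=yes")
    dst = pymysql.connect(host="mysqlhost", user="me", password="secret",
                          database="dstdb")

    rows = src.cursor().execute("SELECT * FROM Table1").fetchall()
    with dst.cursor() as cur:
        for row in rows:
            try:
                cur.execute("INSERT INTO Table1 VALUES (%s, %s, %s, %s, %s)",
                            tuple(row))
            except pymysql.MySQLError as err:
                print("failed:", tuple(row), err)
    dst.commit()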
Try setting a longer timeout (e.g. 1 day) on the MySQL (ADO.NET) destination.
Well, after much head-scratching and attempting every workaround I could come up with, I have finally found a solution for this.
In the end I switched out the MySQL connector for a different driver produced by Devart, dotConnect for MySQL, and, with a few minor exceptions (which I think I can resolve), all of my data now exports without error.
The driver is a paid-for product, unfortunately, but in the end I'd have taken out a new mortgage to see all those tasks go green!

DatabaseLookup hangs on specific values

I use Kettle for some transformations and have run into a problem:
For one specific row, my DatabaseLookup step hangs. It just doesn't give a result, and trying to stop the transformation results in a never-ending "Halting" for the lookup step.
The value given is nothing complicated at all, nor is it different from all the other rows/values. It just won't continue.
Running the same query directly in the database, or in a different database tool (e.g. SQuirreL), works.
I use Kettle/Spoon 4.1; the database is MySQL 5.5.10. It happens both with Connector/J 5.1.14 and with the one bundled with Spoon.
The step initializes flawlessly (it even works for other rows) and I have no idea why it fails. There is no error message in the Spoon logs and nothing on the console/shell.
Weird. What's the table type? Is it MyISAM? Does your transform also perform updates to the same table? Maybe you are inadvertently locking the table at the same time somehow?
Or maybe it's a MySQL 5.5 thing... but I've used this step extensively with MySQL 5.0 and PDI 4.x and it's always been fine. Maybe post the transform?
I just found the culprit:
The lookup takes the id field as its result and gives it a new name, PERSON_ID. This FAILS in some cases! The resulting lookup/prepared statement was something like
select id as PERSON_ID FROM table WHERE ...
SOLUTION:
Don't use an underscore in the "New name" for the field! With a new name of PERSONID, everything works flawlessly for ALL rows!
Stupid error ...
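For anyone wanting to double-check this outside Kettle, the generated statement can be run with both aliases straight through a driver. A minimal sketch; the table name, WHERE clause, and credentials are placeholders standing in for the elided statement above.

    # Run the generated lookup statement with both aliases to see whether
    # the alias alone changes behaviour outside Kettle.
    import pymysql

    conn = pymysql.connect(host="localhost", user="me", password="secret",
                           database="mydb")
    with conn.cursor() as cur:
        for alias in ("PERSON_ID", "PERSONID"):
            cur.execute(f"SELECT id AS {alias} FROM person WHERE id = %s", (1,))
            print(alias, cur.fetchall())
    conn.close()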