MySQL Amazon RDS: 'InnoDB feature disabled' error from application

Edit: the answer to the first question is that the application calls the reader instance of the cluster. I can reproduce the problem in Workbench if I execute the procedure on the reader instance.
I have a stored procedure that uses a temporary table. I am using Amazon AWS RDS (Aurora) MySQL. I create the temporary table like this:
create temporary table if not exists tmpResources(
pkKey varchar(50) NOT NULL, PRIMARY KEY(resource), UNIQUE KEY(resource),
...
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
When I call the procedure from MySQL Workbench, it executes fine. When I call it from my application, I receive the following error:
The 'InnoDB' feature is disabled; you need MySQL built with 'InnoDB'
to have it working
I have an ASP.NET web application using Oracle's C# driver, version 8.0.20. AWS RDS is currently MySQL version 5.7.12.
There are two very perplexing questions:
1. Why does it behave differently when called from Workbench? The error seems to come from the server side.
2. Why do I get an error saying InnoDB is disabled, when it clearly is not?
Thanks for any insight...

Edit: I verified this with AWS technical support - temporary tables on reader instances are MyISAM. My initial problem involves indexes on MyISAM tables, but I will post that as its own question. The response from the AWS RDS/Aurora team follows:
This is by design in Aurora. When a temporary table (or a system-generated internal temp table) is created on the writer with InnoDB, it needs to go to the underlying storage shared by the readers in the cluster; but when you create a temporary table on a reader, it exists only for that particular reader instance and is not propagated to other nodes, so by default reader instances pick up the MyISAM engine. This behavior is attributed to the fact that the variable “innodb_read_only” is set to ON for readers and OFF for writers, thus restricting the creation of InnoDB tables on the reader instances.
Initial response:
It appears the issue is with the readers. Temporary tables created in stored procedures on the reader RDS instances are not InnoDB, even though this is not documented anywhere and the instance type/price is the same for readers as for the writer. Exactly what engine they use, I don't know; I have a question open with AWS about it. If they respond, I'll post here.
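Given the innodb_read_only explanation above, here is a minimal Python sketch (using mysql-connector-python; the endpoint, credentials, and table definition are placeholders) of how an application might check whether it is on a reader and only request ENGINE=InnoDB on the writer, letting a reader fall back to its default engine for the temporary table:
import mysql.connector

conn = mysql.connector.connect(
    host="my-cluster.cluster-ro-xxxxxxxx.us-east-1.rds.amazonaws.com",  # hypothetical reader endpoint
    user="app_user", password="secret", database="mydb")
cur = conn.cursor()

# innodb_read_only is ON on Aurora readers and OFF on the writer
cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_read_only'")
row = cur.fetchone()
on_reader = row is not None and row[1] == "ON"

engine_clause = "" if on_reader else " ENGINE=InnoDB"
cur.execute("CREATE TEMPORARY TABLE IF NOT EXISTS tmpResources ("
            "pkKey varchar(50) NOT NULL, PRIMARY KEY (pkKey))"
            + engine_clause + " DEFAULT CHARSET=utf8")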

Related

MySQL: got no errors, table not adjusted [duplicate]

I am facing a problem where I am trying to add data from a Python script to a MySQL database with the InnoDB engine; it works fine with the MyISAM engine. But the problem with MyISAM is that it doesn't support foreign keys, so I would have to add extra code every place I want to insert or delete records in the database.
Does anyone know why InnoDB doesn't work from Python scripts, and possible solutions for this problem?
InnoDB is transactional. You need to call connection.commit() after inserts/deletes/updates.
Edit: you can call connection.autocommit(True) to turn on autocommit.
Python DB API disables autocommit by default
Pasted from Google (first page, 2nd result), the MySQL 5.0 Reference Manual, section 13.2.8, The InnoDB Transaction Model (dev.mysql.com/.../innodb-transaction-model.html):
By default, MySQL starts the session for each new connection with autocommit ...
However
Apparently Python DB-API connections to MySQL start in NON-autocommit mode, see:
http://www.kitebird.com/articles/pydbapi.html
From the article:
The connection object commit() method commits any outstanding changes in the current transaction to make them permanent in the database. In DB-API, connections begin with autocommit mode disabled, so you must call commit() before disconnecting or changes may be lost.
Bummer, I don't know how to override that, and I don't want to lead you astray by guessing.
I would suggest opening a new question titled:
How to enable autocommit mode in MySQL python DB-API?
Good luck.
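To make the commit/autocommit point concrete, here is a minimal sketch using mysql-connector-python (the table name and connection details are placeholders; with MySQLdb the autocommit call would be conn.autocommit(True), as in the edit above):
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app_user",
                               password="secret", database="testdb")
cur = conn.cursor()

cur.execute("INSERT INTO parent (name) VALUES (%s)", ("example",))
conn.commit()   # without this, the InnoDB transaction is discarded when the connection closes

# Alternatively, switch autocommit on so every statement is committed immediately:
conn.autocommit = True
cur.execute("DELETE FROM parent WHERE name = %s", ("example",))

cur.close()
conn.close()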

How does the MySQL FEDERATED Storage Engine handle column/schema changes of local and remote databases?

The title pretty much says it all.
I was wondering how changes to the remote or local table (e.g. adding a column) would affect the connection between them and could not find any resources about it.
So does this work? (I assume it does not, because otherwise there would not be the requirement that both must have the same schema in the first place.) But if it does work, is it bi-directional, and what steps have to be done?
I would appreciate any help and especially links to resources about this problem.
If you alter the base table on the remote system, you would have to DROP and then re-CREATE the federated table that connects to it.
https://dev.mysql.com/doc/refman/8.0/en/federated-usagenotes.html says:
The FEDERATED storage engine supports SELECT, INSERT, UPDATE, DELETE, TRUNCATE TABLE, and indexes. It does not support ALTER TABLE...
There is no way for the FEDERATED engine to know if the remote table has changed.
https://dev.mysql.com/doc/refman/8.0/en/federated-create.html says:
When you create the local table it must have an identical field definition to the remote table.
So maintaining a federated table is a somewhat manual process, and it's not supported to have continuous access to it if you ALTER TABLE on the remote end.
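As a sketch of that manual process, assuming the FEDERATED engine is enabled and the remote orders table gained a column (all names, hosts, and credentials here are placeholders), the local FEDERATED table would be rebuilt roughly like this:
import mysql.connector

conn = mysql.connector.connect(host="local-server", user="app_user",
                               password="secret", database="localdb")
cur = conn.cursor()

cur.execute("DROP TABLE IF EXISTS remote_orders")
cur.execute("""
    CREATE TABLE remote_orders (
        id INT NOT NULL,
        amount DECIMAL(10,2),
        new_column VARCHAR(50),   -- column that was added on the remote side
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://fed_user:fed_pass@remote-server:3306/remotedb/orders'
""")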
Frankly, I've never found a good use for federated tables. I'd rather code my application to connect to multiple database instances and query the tables directly.

How to fill an SQL database with multiple tables for the first time

I have a general question regarding how to fill a database for the first time. Currently I work on "raw" datasets within R (dataframes that I've built to explore the data and get insights quickly), but I now need to structure and load everything into a relational database.
For the DB design, everything is OK (conceptual, logical, and 3NF). The result is a fairly "complex" (it's all relative) data model with many junction tables and foreign keys within tables.
My question is: what is the easiest way for me to populate this DB?
My approach would be to generate a .csv for each table, starting from my "raw" dataframes in R, and then load them table by table into the DB. Is that a good way to do it, or is there an easier method? Another point: how do I avoid struggling with FK constraints while populating?
Thank you very much for the answers. I realize these are very "methodological" questions, but I can't find any related tutorial/thread.
Notes : I work with R (dplyr, etc.) and MySQL
A serious relational database, such as Postgres for example, will offer features for populating a large database.
Bulk loading
Look for commands that read external data into a table with a matching field structure. The data moves from a file in the OS's file system directly into the table. This is vastly faster than loading individual rows with the usual SQL INSERT. Such commands are not standardized, so you must look for the proprietary commands of your particular database engine.
In Postgres that would be the COPY command.
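Since the question mentions MySQL rather than Postgres, the rough equivalent there is LOAD DATA [LOCAL] INFILE. A minimal Python sketch (file, table, and connection details are placeholders; the server must allow local_infile, and mysql-connector-python needs allow_local_infile=True):
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app_user", password="secret",
                               database="warehouse", allow_local_infile=True)
cur = conn.cursor()

cur.execute("""
    LOAD DATA LOCAL INFILE '/tmp/customers.csv'
    INTO TABLE customers
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
""")
conn.commit()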
Temporarily disabling referential-integrity
Look for commands that defer enforcing the foreign key relationship rules until after the data is loaded.
In Postgres, use SET CONSTRAINTS … DEFERRED to not check constraints during each statement, and instead wait until the end of the transaction.
Alternatively, if your database lacks such a feature, as part of your mass import routine, you could delete your constraints before and then re-establish them after. But beware, this may affect all other transactions in all other database connections. If you know the database has no other users, then perhaps this is workable.
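In MySQL, which the question uses, a lighter-weight alternative to dropping constraints is to switch off foreign key checking just for the loading session. A sketch with placeholder file and table names:
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app_user", password="secret",
                               database="warehouse", allow_local_infile=True)
cur = conn.cursor()

cur.execute("SET FOREIGN_KEY_CHECKS = 0")   # skip FK validation for this session only
cur.execute("""
    LOAD DATA LOCAL INFILE '/tmp/order_items.csv'
    INTO TABLE order_items
    FIELDS TERMINATED BY ','
    IGNORE 1 LINES
""")
cur.execute("SET FOREIGN_KEY_CHECKS = 1")   # re-enable checks once the data is in
conn.commit()
Note that rows loaded with checks disabled are not validated retroactively, so the CSV files themselves must already be referentially consistent.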
Other issues
For other issues to consider, see the Populating a Database in the Postgres documentation (whether you use Postgres or not).
Disable Autocommit
Use COPY (for mass import, mentioned above)
Remove Indexes
Remove Foreign Key Constraints (mentioned above)
Increase maintenance_work_mem (changing the memory allocation of your database engine)
Increase max_wal_size (changing the configuration of your database engine’s write-ahead log)
Disable WAL Archival and Streaming Replication (consider moving a copy of your database to replicant server(s) rather than letting replication move the mass data)
Run ANALYZE Afterwards (remind your database engine to survey the new state of the data, for use by its query planner)
Database migration
By the way, you will likely find a database migration tool helpful in creating the tables and columns, and possibly in loading the data. Consider tools such as Flyway or Liquibase.

Why does InnoDB have its own parser and server connection modules if MySQL has its own modules for completing those tasks?

It is my understanding that MySQL creates an execution plan from a SQL query and then uses InnoDB (or any other storage engine) to execute the plan. If this is the case, then why does the InnoDB storage engine have its own parser, server main program, and user-session modules? It looks as if InnoDB could run on its own as a fully functional DBMS.
InnoDB began as an independent company in 1995. The founder wanted to create a standalone RDBMS server.
It wasn't until 2000 that InnoDB began working closely with MySQL, and by March 2001 they announced the InnoDB Table Handler, which allowed MySQL to delegate work to the storage engine.
But InnoDB wanted to support some features that MySQL did not support:
FOREIGN KEY constraints
Proprietary table options
Transactions
MySQL wanted to allow InnoDB and other storage engines to implement their own features too, so it allowed the storage engine layer to perform its own SQL parsing. There are a number of features (like CHECK constraints) whose syntax is validated by the storage-engine-independent MySQL layer without implementing the semantics; it's up to the storage engine to perform extra parsing and implement those features.
There have also been cases where the InnoDB storage engine wanted to implement features that had no SQL support at the higher level.
For example, the InnoDB monitor, to output periodic troubleshooting data to the server's error log, could be enabled not by sensible syntax like SET ENGINE INNODB MONITOR=ON or something like that, but by creating a table with a special name:
CREATE TABLE innodb_monitor (a INT) ENGINE=INNODB;
It doesn't matter which schema you create this table in, nor what columns you put in it. It doesn't need any rows of data. The name itself is special to InnoDB, and it's a signal to start logging monitor data to the log. Just so they didn't have to implement a new configuration option or SQL syntax!
In later versions of MySQL, you can enable the monitor in a less hacky way with SET GLOBAL innodb_status_output=ON.
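For instance, a short sketch of that newer approach (connection details are placeholders and a privileged account is assumed), which also shows pulling the same monitor text on demand with SHOW ENGINE INNODB STATUS instead of reading the error log:
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()

# Newer, non-hacky way to start periodic monitor output to the error log
cur.execute("SET GLOBAL innodb_status_output = ON")

# Or fetch the monitor text on demand; the result row has Type, Name, Status columns
cur.execute("SHOW ENGINE INNODB STATUS")
for _type, _name, status in cur.fetchall():
    print(status)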

Setting up MySQL 5.6 with Memcache fails without error

I am trying to set up MySQL 5.6 with the memcached plugin enabled. I followed the procedure on the MySQL website and a couple of other tutorials that I found online, and according to those, this should be really simple to set up and test.
I am trying to verify that the setup works as expected using telnet. When I set the value of a key from telnet, I get a return status of STORED. I can even fetch the value immediately from memcache. However, when I log in to the DB, I do not see the new row. I don't see any errors in the logs either. "show plugins" shows that the daemon_memcached plugin is enabled.
[Edited]
Actually, things don't even work in the other direction. I added a new row to the demo_test table and tried fetching it through the memcache interface. That didn't work either.
Any pointers about how to go about identifying what's wrong?
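For reference, here is the telnet check described above in script form, speaking the plain memcached text protocol (a sketch only; host, port, key, and value are placeholders):
import socket

s = socket.create_connection(("localhost", 11211), timeout=5)
# set <key> <flags> <exptime> <bytes>, followed by the data block
s.sendall(b"set mykey 0 0 7\r\nmyvalue\r\n")
print(s.recv(1024).decode())     # expect "STORED"
s.sendall(b"get mykey\r\n")
print(s.recv(1024).decode())     # expect "VALUE mykey ..." then the data and "END"
s.close()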
The memcache integration in MySQL communicates directly with the InnoDB storage engine, not the higher MySQL "server layer." As such, changes to table data through this interface do not invalidate queries against the table that have been stored in the query cache. This is in contrast to normal operations through the SQL interface, where any change to a table's data will immediately evict any and all results held from the query cache for queries against that table, without regard to whether or not the change to the table data actually invalidated each specific query impacted.
Repeat your query, but instead of SELECT, use SELECT SQL_NO_CACHE. If you get the result you expect, this is the explanation.
Once you have established that this is the cause, you will find that any SQL query that inserts, deletes, or updates rows in the table will also have the effect of making memcache-changed data visible to SELECT queries, without the need for the SQL_NO_CACHE directive. This holds true even when the insert, delete, or update does not directly touch the rows in question, as long as it modifies something in that table.
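A quick sketch of the SQL_NO_CACHE check suggested above (connection details and the key value are placeholders; the column names assume the stock demo_test layout from the InnoDB memcached setup script):
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app_user",
                               password="secret", database="test")
cur = conn.cursor()

# Bypass the query cache so a row written through memcached becomes visible
cur.execute("SELECT SQL_NO_CACHE c1, c2 FROM demo_test WHERE c1 = %s", ("mykey",))
print(cur.fetchall())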
Duh!! There was already a memcached instance running on port 11211. Unfortunately, MySQL doesn't error out in this situation. When I was using telnet to connect to port 11211, I was reaching the existing memcached instance. It was storing and retrieving the values it had seen, but it wasn't communicating with MySQL.
I stopped the existing memcached instance and restarted MySQL. I am now able to connect to port 11211. Using telnet, when I do a "get", I get back values from the DB. Also, when I set new values from telnet, they get reflected in the DB (and can be retrieved using SQL).