We have a Cloud SQL instance running MySQL 5.7 with about 30 databases in it. Recently we have been hitting a rare issue that clears all the tables in one of our databases, leaving it looking like an empty DB. Every week a different DB gets emptied.
We previously used MySQL 5.6 in our Cloud SQL instance and we didn't face this problem in the last two years. However, we have faced this problem 3 times in the last month :(.
The following error is thrown:
14:57:53 Error loading schema content Error Code: 1049 Unknown database 'wp-map'
Even though the DB name is visible, we are not allowed to use it.
Could the problem be caused by using upper case in the database name? The DB name is shown in upper case in the Cloud Console.
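For what it's worth, on Linux-backed servers (which Cloud SQL uses) MySQL treats database names case-sensitively unless lower_case_table_names says otherwise, so a case mismatch between what the console shows and what your client sends can surface exactly as error 1049. A quick diagnostic sketch (only the name 'wp-map' is taken from the error above):

    -- 0 means database/table names are case-sensitive (the Linux default)
    SHOW VARIABLES LIKE 'lower_case_table_names';

    -- List the database names exactly as the server stores them
    SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA;

    -- Use the exact stored case; backticks are needed because of the hyphen
    USE `wp-map`;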
I have a .NET 6 application that is currently backed by an MSSQL database, maintained by Entity Framework using a Model First approach. I am trying to migrate it to a MySQL database backend, for a variety of reasons.
I have installed MySQL locally (Windows) to start exploring and getting it working. I can migrate the schema easily enough (with either MySQL Workbench or EF), but migrating the data is proving to be a little tricky.
Around half of the tables migrated fine, but the other half, those containing string data, are failing with errors that look a little like this (the column obviously differs from table to table). The source data is nvarchar in SQL Server, and the destination is type `varchar`.
Statement execution failed: Incorrect string value: '\xF0\x9F\x8E\xB1' for column 'AwayNote'
Does anyone know how I can get the Migration to run successfully?
The research I have read says to ensure the server and table character sets are aligned, as per the below.
I have set up my source as SQL Server using the ODBC FreeTDS driver.
The data import screen is set up like this - the check box doesn't seem to affect things especially.
I have MySQL set up with this too, which I have also read is important:
[mysql]
default-character-set = utf8mb4
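For context, \xF0\x9F\x8E\xB1 is the 4-byte UTF-8 encoding of an emoji; MySQL's 3-byte utf8 (utf8mb3) cannot store it, only utf8mb4 can. If the destination tables were created before the config change above took effect, converting them is one option. A sketch, where the table name is hypothetical (only the AwayNote column comes from the error above):

    -- Confirm what the server and connection are actually using
    SHOW VARIABLES LIKE 'character_set%';

    -- Convert an already-created table (and its AwayNote column) to utf8mb4
    ALTER TABLE Fixtures CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;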
I have a database in SQL Server that I am trying to convert into a MySQL database, so I can host it on AWS and move everything off-premises. From this link, it seems like normally this is no big deal, although that link doesn't migrate from a .bak file so much as from a running local instance of SQL Server that contains the database in question. No big deal, I can work with that.
However when I actually use MySQL Workbench to migrate using these steps, it gets to the Bulk Data Transfer step, and then comes up with odd errors.
I get errors like the following:
ERROR: OptionalyticsCoreDB-Prod.UserTokens:Inserting Data: Data too long for column 'token' at row 1
ERROR: OptionalyticsCoreDB-Prod.UserTokens:Failed copying 6 rows
ERROR: OptionalyticsCoreDB-Prod.UserLogs:Inserting Data: Data too long for column 'ActionTaken' at row 1
ERROR: OptionalyticsCoreDB-Prod.UserLogs:Failed copying 244 rows
However, the data should not be "too long." These columns are nvarchar(MAX) in SQL Server, and the data in them is often very short in the specified rows, nothing that approaches the maximum length of an nvarchar.
Links like this and this show that there used to be bugs with nvarchar formats almost a decade ago, but they've been fixed for years now. I have checked and even updated and restarted my software and then my computer - I have up-to-date versions of MySQL and MySQL Workbench. So what's going on?
What is the problem here, and how do I get my database successfully migrated? Surely it's possible to migrate from SQL Server to MySQL, right?
I have answered my own question... Apparently there IS some sort of bug in Workbench when translating SQL Server nvarchar(MAX) columns. I output the schema migration to a script and examined it: those columns were being translated as varchar(0). After replacing all of them with TEXT columns, the migration completed successfully.
Frustrating lesson.
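For anyone hitting the same thing: the fix described above amounts to editing the generated migration script before running it. A sketch of what the correction looks like, using the table and column names from the errors above (the exact DDL Workbench emits may differ):

    -- Workbench emitted `ActionTaken` VARCHAR(0), which rejects any
    -- non-empty value with "Data too long". Replace with TEXT (or
    -- LONGTEXT, which is closer to nvarchar(MAX)'s capacity):
    ALTER TABLE UserLogs MODIFY ActionTaken TEXT;
    ALTER TABLE UserTokens MODIFY token TEXT;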
I was toying around with Azure Data Factory using the Sakila dataset. I set up MariaDB (5.5.64) on a private CentOS 7.7 VM. I also ran into the same issue when I was using MySQL 8 instead of MariaDB.
I run a parameterized load pipeline in Azure Data Factory and repeatedly get the error below inside a ForEach loop. Each time, the error occurs with a different source table.
Error from Azure Data Factory:
{
    "errorCode": "2100",
    "message": "'Type=System.InvalidOperationException,Message=Collection was modified; enumeration operation may not execute.,Source=mscorlib,'",
    "failureType": "UserError",
    "target": "GET MAX MySQL",
    "details": []
}
Parameterized query running in the lookup activity:
SELECT MAX(#{item().WatermarkColumn}) as maxd FROM #{item().SRC_tab}
becomes
SELECT MAX(last_update) as maxd FROM sakila.actor
Please note that the last time the error appeared it was on the staff and category tables, while I was using the MariaDB connector. After I switched to the MySQL connector, the error disappeared. However, in the past, when I used the MySQL connector and then switched to the MariaDB connector, the error also persisted.
Have any of you experienced a similar behaviour? If yes, what were your workarounds?
Apologies, but we need more clarity here. As I understand it, is this issue occurring with both the MariaDB and MySQL connectors, or only with MySQL?
Just to let you know, the ADF team regularly deploys changes, so if an issue you experienced is not reproducible at this time, a fix may already have been deployed for it.
We are running on Google Compute Engine/Debian 9/PHP/Lumen/Doctrine2 <-> Google Cloud SQL MySQL 2nd Gen 5.7.
Usually it works without hiccups, but we are now getting error messages similar to the one below with increasing frequency:
Error while sending QUERY packet. PID=123456
PDOStatement::execute(): MySQL server has gone away
Any idea why this is happening and how I would fix it?
As noted here, there is a list of cases which may be causing this error. A few are:
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You can also get these errors if you send a query to the server that is incorrect or too large... An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors.
. . .
Please refer to the link for a complete list.
Also, see these answers on the same problem.
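Two of the server-side settings behind those cases can be checked directly against the Cloud SQL instance. A diagnostic sketch:

    -- Idle connections are closed after this many seconds; a Doctrine/PDO
    -- connection reused after the timeout has "gone away"
    SHOW VARIABLES LIKE 'wait_timeout';

    -- Queries or packets larger than this are rejected, which can also
    -- kill the connection mid-query
    SHOW VARIABLES LIKE 'max_allowed_packet';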
I have a MySQL database that tracks our projects and drives our website's display of their info. For ease of updating the database, I have set up an Access database that uses an ODBC connection (MySQL ODBC 5.1) to edit the data. It has been working just fine for the past few months with no hiccups.
However, last night users (2 of 3) experienced Write Conflict errors. The users could only copy the changes to the clipboard or drop the changes. Thinking there was something wrong with the Access database, I created a new Access database and linked the tables through the ODBC connection, but the issue still occurred. I also deleted and recreated the ODBC connection, to no effect.
So where do I go from here? What could have caused this issue to crop up now, not when I was setting this up months ago?
There have been no changes to the database server, database or access database in the last week (+5 days).
We have made sure that only one instance of Access is attempting to modify the database.
All tables have a PK and a timestamp column.
We are not using any forms, just using the Table interface.
The server has not been updated, nor has the ODBC connection.
We are using Access 2007
Nothing is showing up in the server's error log when we try and update rows.
In general, all ODBC databases used from Access need to have PKs in all tables, plus timestamp fields that are updated each time a record is changed. Access uses these in bound forms for handling refreshes of the bound data, and Jet uses them in choosing how to tell the ODBC database what to update.
You may be able to get things to work with some tables lacking a PK and timestamp, but I've found that it's best just to make sure all your tables have them so you don't run into the problem (I never have any tables with no PK, of course).
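As a sketch of what that looks like on the MySQL side (the table name is hypothetical), this gives a table both a PK and a timestamp that MySQL refreshes automatically on every change:

    ALTER TABLE projects
      ADD COLUMN id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      ADD COLUMN last_modified TIMESTAMP NOT NULL
          DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;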
Make sure BIT columns have default values that are not NULL. Any records which have a BIT column set to NULL could get the Write Conflict error.
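A sketch of that fix, again with hypothetical table and column names: backfill the NULLs first, then disallow them going forward with an explicit default.

    -- Replace existing NULLs so no record triggers the Write Conflict
    UPDATE projects SET is_archived = b'0' WHERE is_archived IS NULL;

    -- Then make the column NOT NULL with a non-NULL default
    ALTER TABLE projects MODIFY is_archived BIT(1) NOT NULL DEFAULT b'0';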