Every time I log back in to MySQL Workbench, the tables I have created within my schema have been dropped. This happens roughly every two hours. I can't figure out whether this is some MySQL setting or whether I have misconfigured the database.
I'm accessing the database via Java Spring Data JPA persistence requests.
SQL table creation code:
use storage_app_schema;
CREATE TABLE `StorageItem` (
  `DateStored` tinyblob,
  `Image` tinyblob,
  `Name` varchar(255) DEFAULT NULL,
  `ReferenceCode` varchar(255) DEFAULT NULL,
  `Size` varchar(255) DEFAULT NULL
);
hibernate.hbm2ddl.auto was set to "create". I wasn't aware that this drops existing tables: with "create", Hibernate drops and recreates the schema every time the application starts. It should be set to "update" instead, so that the existing schema is altered rather than recreated (the related "create-drop" value additionally drops the schema when the SessionFactory closes).
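If this is a Spring Boot application, the property is usually set in application.properties; a minimal sketch, assuming that setup:

```properties
# "update" alters the existing schema on startup instead of dropping
# and recreating it, so existing tables and their data are preserved.
spring.jpa.hibernate.ddl-auto=update

# Equivalent Hibernate-native property, if Hibernate is configured directly:
# hibernate.hbm2ddl.auto=update
```

Note that "update" only adds missing tables and columns; it never drops anything, which is usually what you want outside of throwaway development databases.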
I'm scratching my head over this one: I'm upgrading my RDS instance from MySQL 5.7 to 8.0.
However, it returns a very odd error during the PrePatchCompatibility check.
12) Usage of removed functions
Following DB objects make use of functions that have been removed in version 8.0. Please make sure to update them to use supported alternatives before upgrade.
More Information:
https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals
app-website-org-staging.eav_value_point.value_point_value - COLUMN uses removed functions X (consider using ST_X instead), Y (consider using ST_Y instead)
The odd thing here is that I'm NOT using the functions X and Y in this stored generated column. I am ALREADY using ST_X and ST_Y in the generated-column expression. How can I get rid of this error?
What I have tried:
1. In phpMyAdmin, edit the column and save again.
2. In phpMyAdmin, edit the column, change the functions to uppercase, and save again.
3. Used mysqldump to export the table and verified that I was indeed using ST_X and ST_Y.
4. Removed the problematic column.
Steps 1-3 did not work; step 4 worked, but that was not my intention. I want to know the real cause.
Please see my table dump below:
CREATE TABLE `eav_value_point` (
`value_point_id` int(11) UNSIGNED NOT NULL,
`value_point_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`value_point_modified` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`value_point_modified_by` int(11) UNSIGNED DEFAULT NULL,
`value_point_attribute_id` int(11) UNSIGNED NOT NULL,
`value_point_entity_id` int(11) UNSIGNED NOT NULL,
`value_point_value_real` point NOT NULL,
`value_point_value` varchar(255) COLLATE utf8mb4_unicode_520_ci GENERATED ALWAYS AS (concat(st_x(`value_point_value_real`),',',st_y(`value_point_value_real`))) STORED
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci;
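One thing that may be worth trying before dropping the column for good is rebuilding it, so that the server re-parses and re-stores the generated-column expression with the ST_-prefixed names. This is a sketch based on the column definition in the dump above, not a confirmed fix:

```sql
-- Drop and immediately recreate the generated column so MySQL
-- stores a freshly parsed expression using ST_X/ST_Y.
ALTER TABLE `eav_value_point`
  DROP COLUMN `value_point_value`;

ALTER TABLE `eav_value_point`
  ADD COLUMN `value_point_value` varchar(255)
    COLLATE utf8mb4_unicode_520_ci
    GENERATED ALWAYS AS (
      concat(ST_X(`value_point_value_real`), ',', ST_Y(`value_point_value_real`))
    ) STORED;
```

Since the column is GENERATED, no data is lost by dropping it; its values are recomputed from value_point_value_real when it is re-added.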
If anyone can shed some light on this I'd appreciate it. Thanks in advance!
So in this case, I will receive the whole database schema multiple times, but each time the table structure might differ slightly from the previous one. Since I already have data in the tables, is there a way to write a query that compares against the existing table and just adds the new columns?
For example I already have this table in my database.
CREATE TABLE `Ages` (
`AgeID` int(11) DEFAULT NULL,
`AgeName` varchar(32) DEFAULT NULL,
`AgeAbbreviation` varchar(13) DEFAULT NULL,
`YouthAge` varchar(15) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
And the new schema that I receive has the same table, but with different columns.
CREATE TABLE `Ages` (
`AgeID` int(11) DEFAULT NULL,
`AgeName` varchar(32) DEFAULT NULL,
`AgeAbbreviation` varchar(13) DEFAULT NULL,
`YouthAge` varchar(15) DEFAULT NULL,
`AgeLimit` varchar(20) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
In this case the column AgeLimit should be added to the existing table.
You should be able to do it by looking at the table definitions in the metadata tables (information_schema).
1. You can always inspect the existing schema through the information_schema database, which holds the metadata.
2. You can then import your new schema into a temporary database, creating all tables according to the new schema, and inspect its metadata the same way.
3. You might be able to use dynamic SQL inside a stored procedure to build and execute ALTER TABLE statements from those differences at runtime.
That said, I think this is a lot easier from the Node.js backend: steps 1 and 2 are just querying a bunch of tables, which you can easily do from Node.js as well, and there you have far more flexibility to compute the differences and to build and execute the appropriate queries.
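Steps 1 and 2 can be sketched directly in SQL. Here live_db is the existing database and new_db the temporary database created from the new schema; both names are placeholders:

```sql
-- Columns present in the new schema but missing from the live one:
SELECT n.TABLE_NAME, n.COLUMN_NAME, n.COLUMN_TYPE
FROM information_schema.COLUMNS AS n
LEFT JOIN information_schema.COLUMNS AS o
       ON o.TABLE_SCHEMA = 'live_db'
      AND o.TABLE_NAME   = n.TABLE_NAME
      AND o.COLUMN_NAME  = n.COLUMN_NAME
WHERE n.TABLE_SCHEMA = 'new_db'
  AND o.COLUMN_NAME IS NULL;

-- The same join can generate the ALTER TABLE statements to run:
SELECT CONCAT('ALTER TABLE `', n.TABLE_NAME, '` ADD COLUMN `',
              n.COLUMN_NAME, '` ', n.COLUMN_TYPE, ';') AS alter_stmt
FROM information_schema.COLUMNS AS n
LEFT JOIN information_schema.COLUMNS AS o
       ON o.TABLE_SCHEMA = 'live_db'
      AND o.TABLE_NAME   = n.TABLE_NAME
      AND o.COLUMN_NAME  = n.COLUMN_NAME
WHERE n.TABLE_SCHEMA = 'new_db'
  AND o.COLUMN_NAME IS NULL;
```

For the Ages example above, the second query would produce an ALTER TABLE statement adding the AgeLimit column. Note this sketch only covers added columns, not changed types or dropped columns.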
EDIT 1
If you don't have the possibility of creating a temporary database from the new schema, you will have to find some other way to extract information from it. I suspect you have an SQL script with (among other things) a bunch of CREATE TABLE ... statements, because that's typically what mysqldump creates. So you'll have to parse this script. Again, this seems far easier in JavaScript, if it is possible in a MySQL stored procedure at all. If your schema is as well structured as your examples, it's actually just a few lines of code.
EDIT 2
And maybe you can even get some inspiration from here: Compare two MySQL databases. Some of the tools mentioned there synchronize two databases.
I've seen several posts about MySQL error #1210 but I haven't noticed one about errors occurring within phpMyAdmin. Perhaps someone can help.
Using phpMyAdmin, I fill in the GUI form to (for example) drop an obsolete field from an existing table in an existing database. It asks me to confirm that I want to drop the field, and then fails with the error "#1210 - Incorrect arguments to DATA DIRECTORY". There is none of my own code involved, no MySQL queries of mine, just a few boxes ticked and buttons pressed, yet phpMyAdmin gives an error. I get this error on any attempt to alter a table's structure.
For this particular job, the table was created with the following (which was generated by an Export from another phpMyAdmin installation)...
CREATE TABLE `choreovote` (
`id` int(11) NOT NULL,
`compyear` year(4) NOT NULL,
`competition` year(4) NOT NULL,
`memberno` smallint(5) UNSIGNED NOT NULL,
`entry_id` int(11) NOT NULL,
`votes` smallint(5) UNSIGNED NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1 DATA DIRECTORY='./cloggb_db/' INDEX DIRECTORY='./cloggb_db/';
ALTER TABLE `choreovote` ADD PRIMARY KEY (`id`);
ALTER TABLE `choreovote` MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
COMMIT;
And the phpMyAdmin generated query which is not working is...
ALTER TABLE choreovote DROP compyear;
Does anyone have any idea where I should look?
Many thanks!
I'm testing this on my system, and working off some hints from DATA DIRECTORY MySQL, I was able to make it work after:
changing to a full path outside the existing MySQL data directory,
creating the intended directory outside of MySQL (in my case, I have shell access and just used mkdir), and
changing permission on the folder such that my MySQL user had permissions to access the folder and create new files.
Once I did all three of those, your SQL query ran successfully.
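Assuming a layout like mine (the absolute path, OS user, and shell steps below are examples, not details from the question), the adjusted table definition would look something like:

```sql
-- Prerequisites, done outside MySQL as the steps above describe:
--   mkdir -p /var/lib/mysql-external/cloggb_db
--   chown mysql:mysql /var/lib/mysql-external/cloggb_db
-- The relative './cloggb_db/' paths are replaced with an absolute path
-- outside the server's own data directory.
CREATE TABLE `choreovote` (
  `id` int(11) NOT NULL,
  `compyear` year(4) NOT NULL,
  `competition` year(4) NOT NULL,
  `memberno` smallint(5) UNSIGNED NOT NULL,
  `entry_id` int(11) NOT NULL,
  `votes` smallint(5) UNSIGNED NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1
  DATA DIRECTORY='/var/lib/mysql-external/cloggb_db/'
  INDEX DIRECTORY='/var/lib/mysql-external/cloggb_db/';
```

The relative paths exported by the other phpMyAdmin installation appear to be what the server rejects; MySQL expects DATA DIRECTORY and INDEX DIRECTORY to be absolute paths it can write to.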
Installing Mura on a brand new machine and local MySQL 5.7 database. Per the install instructions I browse to the Mura index.cfm file to complete the installation. I enter in the database and DSN info. After a few seconds I get an error message.
Error Executing Database Query.
Datasource: muracms
SQL: CREATE TABLE IF NOT EXISTS tuserremotesessions (
  userID char(35) default NULL,
  authToken char(32) default NULL,
  data text,
  created datetime default NULL,
  lastAccessed datetime default NULL,
  PRIMARY KEY (userID)
)
Code: 42000
Type: 42000
All parts of a PRIMARY KEY must be NOT NULL; if you need NULL in a key, use UNIQUE instead
Refreshing the browser page produces this error again. I can see that some tables have already been created in the database. My attempts to search the internet for a solution have been unsuccessful.
Does anyone have an idea of what I can do to get past this error? I have successfully installed Mura on other servers before so I'm really stumped.
For those who run into this error, it is due to a change in MySQL 5.7 from how MySQL 5.6 behaved. See http://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-3.html. Specifically:
Columns in a PRIMARY KEY must be NOT NULL, but if declared explicitly
as NULL produced no error. Now an error occurs. For example, a
statement such as CREATE TABLE t (i INT NULL PRIMARY KEY) is rejected.
I edited the CREATE TABLE statements for several tables in {murahome}/requirements/mura/dbUpdates/5.2.0.cfm to remove the default NULL clause from the primary-key columns of two tables, and then everything worked fine.
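For reference, applying the same fix to the statement from the error message above would give something like this (only the userID default changes; the table name and columns are copied from the error output):

```sql
-- PRIMARY KEY columns must be NOT NULL in MySQL 5.7, so the
-- "default NULL" on userID is replaced with an explicit NOT NULL.
CREATE TABLE IF NOT EXISTS tuserremotesessions (
  userID char(35) NOT NULL,
  authToken char(32) default NULL,
  data text,
  created datetime default NULL,
  lastAccessed datetime default NULL,
  PRIMARY KEY (userID)
);
```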
I have developed a database system using MySQL to house some testing data.
CREATE TABLE testtable (
TEST_IDX int(11) NOT NULL AUTO_INCREMENT,
PASS_FLAG bit(1) NOT NULL,
RESULT_STRING varchar(500) NOT NULL,
TEST_DATE timestamp NULL DEFAULT NULL,
LAST_MODDATE timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
TESTED_BY varchar(45) NOT NULL,
PRIMARY KEY (TEST_IDX) )
ENGINE=InnoDB AUTO_INCREMENT=31 DEFAULT CHARSET=latin1;
One of the fields is a flag indicating the pass/fail status of a test set. On my development machine I used the BIT data type, developed the database interaction code, and tested the system successfully. I have a second development laptop that I used for bug fixes, and when deployment time came the system worked properly there as well.
When I deployed the system on a production machine, I set up MySQL and imported the database from a dump made from the laptop. When the program, which had run successfully on both of my development machines, attempted to execute its inserts, the error "data too long for column" was raised and the inserts failed. This doesn't make sense to me unless MySQL has a setting that makes BIT/TINYINT/INT(1) behave differently from install to install. I was able to make it work by simply changing the field to an INT (INT(11), I think), but I should not have had to do that, and I would like to know why it happened. Perhaps someone could clarify how the BIT data type works in MySQL.
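A BIT(M) column stores a bit-field of M bits, and whether an oversized value is rejected or silently truncated depends on the server's sql_mode, which can easily differ between installs (STRICT_TRANS_TABLES is part of the default sql_mode on newer MySQL versions but was often absent on older setups). One hedged guess at the cause, illustrated by this sketch: a string literal such as '1' is the byte 0x31, which does not fit into a single bit, while the integer 1 does.

```sql
CREATE TABLE bit_demo (flag BIT(1) NOT NULL);

INSERT INTO bit_demo VALUES (1);     -- OK: the integer 1 fits in one bit
INSERT INTO bit_demo VALUES (b'1');  -- OK: bit-value literal
INSERT INTO bit_demo VALUES ('1');   -- with STRICT_TRANS_TABLES this raises
                                     -- "Data too long for column 'flag'";
                                     -- the character '1' is the byte 0x31
```

If the application (or the dump import) was sending the flag as a string, a production server running in strict mode would reject what a lenient development server merely truncated with a warning. Comparing SELECT @@sql_mode on the three machines would confirm or rule this out.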