How to scope a MySQL jOOQ rename table query to the same database?

I have a scala application that manages multiple MySQL database schemas, which includes modifying (adding, renaming, etc.) tables. The commands are issued over a connection pool that connects to a generic management database in the database server.
Because the application is designed to be cross-database, I use jOOQ to render SQL queries (execution is done via a separate JDBC module).
I'm experiencing issues with jOOQ's alterTable(...).renameTo(...) DSL - consider the following example:
We have a table "TestTable" in database "TestDatabase". Let's say I want to rename that table simply to "Foo", keeping it in "TestDatabase".
This code:
...
val context = DSL.using(SQLDialect.MYSQL_5_7)
val query = context
  .alterTable(table(name("TestDatabase", "TestTable")))
  .renameTo(name("TestDatabase", "Foo"))
...
Generates: ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `Foo`
However, since the connection pool I'm using is connected to my management database, this just renames the table to "Foo" and moves it into my management database. I would have expected the SQL to be: ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `TestDatabase`.`Foo`. I tried a variety of alternatives to invoke the .renameTo method and convince it to use the fully qualified name, to no avail:
.renameTo(table(name(...))) -> same behaviour.
.renameTo("`TestDatabase`.`Foo`") -> escapes the whole string with backticks and treats it as a single name instead of a qualified one.
I'm wondering if I'm missing something, if this is intended behaviour, or maybe even a bug or design shortcoming of JOOQ.
Is there a way to rename the table using fully qualified names?
Thank you!

That's a bug in jOOQ: https://github.com/jOOQ/jOOQ/issues/8042
Your workaround is close. This doesn't work:
.renameTo("`TestDatabase`.`Foo`")
As you've noticed, behind the scenes the DSL.name() API is used to wrap the target name, because the renameTo() method doesn't implement the plain SQL templating API. You can, however, explicitly use plain SQL templating as a workaround:
.renameTo(table("`TestDatabase`.`Foo`"))
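Since plain SQL templating passes the string through verbatim, the statement should then render with the qualified target, i.e. the SQL the question expected (not verified here, just the expected output):
ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `TestDatabase`.`Foo`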

Related

Is there a way for SQLDelight to allow unrecognized expressions?

I use SQLDelight's MySQL dialect on my server. Recently I have been planning to migrate a table to combine many fields into a JSON field so that the server code no longer needs to know the complex data structure. As part of the migration, I need to do something like this at runtime - when the server sees a client with the new version, it knows the client won't access the old table anymore, so it's safe to migrate the record to the new table.
INSERT OR IGNORE INTO new_table SELECT id, a, b, JSON_OBJECT('c', c, 'd', JSON_OBJECT(…)) FROM old_table WHERE id = ?;
The only problem is that, unlike the SQLite dialect, the MySQL dialect doesn't recognize JSON_OBJECT or other JSON expressions, even though in this case it wouldn't have to - no matter how complex the query is, the result is never passed back to Kotlin.
I wish I could add the feature myself, but I'm pretty new to Kotlin. So my question is: is there a way to get around the rigid syntax check? I could also read from the old table, convert the format in Kotlin, and write to the new table, but that would take hundreds of lines of complex code instead of just one INSERT.
I assume from your links that you're on the alpha releases already. In alpha03 you can add currently unsupported behaviour by creating a local SQLDelight module (see this example) and adding JSON_OBJECT to the functionType override. New function types are also one of the easiest things to contribute to SQLDelight, if you want this in the next release.
For the record I ended up using CONCAT with COALESCE as a quick and dirty hack to scrape the fields together as JSON.
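As a rough sketch of that hack (the column names and JSON shape here are made up, quote escaping inside the values is ignored, and MySQL spells it INSERT IGNORE rather than INSERT OR IGNORE):
INSERT IGNORE INTO new_table
SELECT id, a, b,
       CONCAT('{"c":', COALESCE(CONCAT('"', c, '"'), 'null'),
              ',"d":', COALESCE(CONCAT('"', d, '"'), 'null'), '}')
FROM old_table
WHERE id = ?;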

Knex: universal way to get the last inserted id

I'm using Knex, because I'm working on an application that I would like to use with multiple database servers, currently Sqlite3, Postgres and MySQL.
I'm realizing that this might be more difficult than I expected.
On MySQL, it appears that this syntax will return an array with an id:
knex('table').insert({ field: 'value'}, 'id');
On postgres I need something like this:
knex('table').insert({ field: 'value'}, 'id').returning(['id']);
In each case, the structure they return is different. The latter doesn't break MySQL, but on SQLite it throws a fatal error.
The concept of 'insert a record, get an id' seems to exist everywhere though. What am I missing in Knex that lets me write this once and use everywhere?
Way back in 2007, I implemented the database access class for a PHP framework. It was to support MySQL, PostgreSQL, SQLite, Microsoft SQL Server, Oracle, and IBM DB2.
When it came time to support auto-incremented columns, I discovered that all of these implement that feature differently. Some have SERIAL, some have AUTO_INCREMENT (or AUTOINCREMENT), some have SEQUENCE, some have GENERATED, and some support multiple solutions.
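To illustrate how differently "get the last inserted id" looks at the raw SQL level (a rough sketch; the table and column names are placeholders):
-- MySQL: the AUTO_INCREMENT value generated by the last insert on this connection
INSERT INTO t (field) VALUES ('value');
SELECT LAST_INSERT_ID();
-- PostgreSQL: have the INSERT itself hand the id back
INSERT INTO t (field) VALUES ('value') RETURNING id;
-- SQLite: rowid of the most recent insert on this connection
INSERT INTO t (field) VALUES ('value');
SELECT last_insert_rowid();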
The solution was to not try to write one implementation that worked with all of them. I wrote classes using the Adapter Pattern, one for each brand of SQL database, so I could implement each adapter class tailored to the features supported by the respective database. The adapter satisfied an interface that I defined in my framework, to allow the primary key column to be defined and the last inserted id to be fetched in a consistent manner. But the internal implementation varied.
This was the only sane way to develop that code, in my opinion. When it comes to variations of SQL implementations, it's a fallacy that one can develop "portable" code that works on multiple brands.

Entity Framework 4.1 Custom Database Initializer strategy

I would like to implement a custom database initialization strategy so that I can:
generate the database if it does not exist
if the model changes, create only the new tables
if the model changes, create only the new fields, without dropping the table and losing the data
Thanks in advance
You need to implement the IDatabaseInitializer interface, e.g.:
public class MyInitializer : IDatabaseInitializer<MyDbContext>
{
    public void InitializeDatabase(MyDbContext context)
    {
        // your logic here
    }
}
And then set your initializer at your application startup:
Database.SetInitializer<MyDbContext>(new MyInitializer());
Here's an example
You will have to manually execute commands to alter the database.
context.ObjectContext.ExecuteStoreCommand("ALTER TABLE dbo.MyTable ADD NewColumn VARCHAR(20) NULL");
You can use a tool like SQL Compare to script changes.
There is a reason why this doesn't exist yet. It is very complex, and moreover the IDatabaseInitializer interface is not really designed for it (there is no way to make such initialization database agnostic). Your question is "too broad" to be answered to your satisfaction. Judging from your reaction to @Eranga's correct answer, you simply expect somebody to tell you step by step how to do it, but we will not - that would mean writing the initializer for you.
What do you need in order to do what you want?
You must have very good knowledge of SQL Server. You must know how SQL Server stores information about databases, tables, columns and relations - that is, you must understand the sys views and know how to query them to get data about the current database structure.
You must have very good knowledge of EF. You must know how EF stores mapping information and be able to explore its metadata to get information about the expected tables, columns and relations.
Once you have the old database description and the new database description, you must be able to write code that correctly works out the changes and creates the SQL DDL commands to alter your database. Even though this looks like the simplest part of the whole process, it is actually the hardest, because there are many internal rules in SQL Server that your commands must not violate. Sometimes you really need to drop a table to make your changes, and if you don't want to lose data you must first push it to a temporary table and, after recreating the table, push it back. Sometimes changes to constraints require temporarily turning constraints off, etc. There is a good reason why the tools which do this at the SQL level (comparing two databases) are probably all commercial.
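As a rough illustration of the first point, the kind of catalog query involved might look like this (SQL Server sys views; it only lists tables and columns, not constraints or relations):
SELECT t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS data_type,
       c.is_nullable
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
ORDER BY t.name, c.column_id;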
Even the ADO.NET team hasn't implemented this, and they will not implement it in the future. Instead they are working on something called migrations.
Edit:
It is true that ObjectContext can return the script for database creation - that is exactly what the default initializers use. But how would that help you? Are you going to parse the script to see what changed? Are you going to execute it on another connection and use the same code as for the current database to inspect its structure?
Yes, you can create a new database, move the data from the old database to the new one, delete the old one and rename the new one, but that is the most stupid solution you can imagine and no database administrator will ever allow it. Even this solution still requires analysing the changes to create correct data transfer scripts.
Automatic upgrade is the wrong way to go. You should always prepare the upgrade script manually with the help of some tools, test it, and only then execute it manually or as part of an installation script / package. You must also back up your database before making any changes.
The best way to achieve this is probably with migrations:
http://nuget.org/List/Packages/EntityFramework.SqlMigrations
Good blog posts here and here.

Adding a custom column data type in Active Record

On my local machine I develop my Rails application using MySQL, but on deployment I am using Heroku, which uses PostgreSQL. I need to create a new data type - specifically, I want to call it longtext - and it will need to map to different column types in each database.
I have been searching for this. My basic idea is that I am going to need to override some hash inside of the ActiveRecord::ConnectionAdapters::*SQL adapter(s), but I figured I would consult the wealth of knowledge here to see if this is a good approach (and, if possible, get pointers on how to do it) or if there is a quick win some other way.
Right now the data type is "string" and I am getting failed inserts because the data is too long for the column. I want the same functionality on both MySQL and PostgreSQL, but it looks like there is no common data type that gives me an unlimited-length text column.
The idea is that I want to have this application working correctly (with migrations) for both database technologies.
Much appreciated.
Why don't you install PostgreSQL on your dev machine? Download it, click "ok" a few times and you're up and running. It isn't rocket science :-)
http://www.postgresql.org/download/
PostgreSQL doesn't limit you to the built-in data types; you can create anything you want, it's up to your imagination:
CREATE DOMAIN (simple stuff only)
CREATE TYPE (unlimited)
The SQL that Frank mentioned is actually the answer, but I was really looking for a more specific way to do RDBMS-specific Rails migrations, because I want to preserve the fact that my application can run on both PostgreSQL and MySQL.
class AddLongtextToPostgresql < ActiveRecord::Migration
  def self.up
    case ActiveRecord::Base.connection.adapter_name
    when 'PostgreSQL'
      execute "CREATE DOMAIN longtext AS text"
      execute "ALTER TABLE chapters ALTER COLUMN html TYPE longtext"
      execute "ALTER TABLE chapters ALTER COLUMN body TYPE longtext"
    else
      puts "This migration is not supported on this platform."
    end
  end

  def self.down
  end
end
That is effectively what I was looking for.
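For completeness, on the MySQL side no custom type is needed: a hypothetical else branch for the MySQL adapter could simply widen the columns to the built-in LONGTEXT type (table and column names as in the migration above):
ALTER TABLE chapters MODIFY html LONGTEXT;
ALTER TABLE chapters MODIFY body LONGTEXT;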

What is the use of SYNONYM?

What is the use of SYNONYM in SQL Server 2008?
In some enterprise systems, you may have to deal with remote objects over which you have no control. For example, a database that is maintained by another department or team.
Synonyms can help you decouple the name and location of the underlying object from your SQL code. That way you can code against the synonym even if the table you need is moved to a new server/database or renamed.
For example, I could write a query like this:
insert into MyTable
(...)
select ...
from remoteServer.remoteDatabase.dbo.Employee
but then if the server, database, schema, or table changes, it would impact my code. Instead, I can create a synonym for the remote table and use the synonym instead:
insert into MyTable
(...)
select ...
from EmployeeSynonym
If the underlying object changes location or name, I only need to update my synonym to point to the new object.
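The synonym itself might be created, and later re-pointed, like this (the "new" names are hypothetical):
CREATE SYNONYM EmployeeSynonym FOR remoteServer.remoteDatabase.dbo.Employee;
-- if the table later moves, only the synonym changes; queries stay untouched
DROP SYNONYM EmployeeSynonym;
CREATE SYNONYM EmployeeSynonym FOR newServer.newDatabase.dbo.Employee;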
http://www.mssqltips.com/sqlservertip/1820/use-synonyms-to-abstract-the-location-of-sql-server-database-objects/
Synonyms provide a great layer of abstraction, allowing us to use friendly and/or local names for verbosely named or remote tables, views, procedures and functions.
For example:
Suppose you have Server1 with a schema ABC and a table named Employee, and you now need to access that Employee table from Server2 to run a query.
You would have to refer to it as Server1.ABC.Employee, which exposes everything: server name, schema name and table name.
Instead, you can create a synonym: CREATE SYNONYM EmpTable FOR Server1.ABC.Employee
Then you can query it like: SELECT * FROM Peoples p1 INNER JOIN EmpTable emp ON emp.Id = p1.ID
This gives you the advantages of abstraction, ease of change and scalability.
Later on, if you want to change the server name, schema or table name, you only have to change the synonym; there is no need to search for and replace every reference.
Once you use it you will feel the real advantage of synonyms. They can also be combined with linked servers, which offers even more advantages for developers.
An example of the usefulness of this might be if you had a stored procedure on a Users database that needed to access a Clients table on another production server. Assuming you created the stored procedure in the database Users, you might want to set up a synonym such as the following:
USE Users;
GO
CREATE SYNONYM Clients FOR Offsite01.Production.dbo.Clients;
GO
Now when writing the stored procedure, instead of having to write out that entire four-part name every time you access the table, you can just use the alias Clients. Furthermore, if you ever change the location or the name of the production database, all you need to do is modify one synonym instead of having to modify all of the stored procedures which reference the old server.
From: http://blog.sqlauthority.com/2008/01/07/sql-server-2005-introduction-and-explanation-to-synonym-helpful-t-sql-feature-for-developer/
Seems (from here) to create an alias for another table, so that you can refer to it easily. Much like
select * from some_long_table_name as ln
but permanent and pervasive.
Edit: it also works for user-defined functions and for local and remote objects, not only tables.
I've been a long-time Oracle developer and making the jump to SQL Server.
But, another great use for synonyms is during the development cycle. If you have multiple developers modifying the same schema, you can use a synonym to point to your own schema rather than modifying the "production" table directly. That allows you to do your thing and other developers will not be impacted while you are making modifications and debugging.
I am glad to see these in SQL Server 2008...
A synonym is a database object that serves the following purposes:
Provides an alternative name for another database object, referred to as the base object, that can exist on a local or remote server.
Provides a layer of abstraction that protects a client application from changes made to the name or location of the base object.
I have never needed the first one, but the second is rather helpful.
msdn is your friend
You can actually create a synonym in an otherwise empty database and point it at an object in another database, and it will work as expected even though the database contains nothing but the synonym you created.