I'm using Knex because I'm working on an application that I'd like to use with multiple database servers: currently SQLite3, PostgreSQL, and MySQL.
I'm realizing that this might be more difficult than I expected.
On MySQL, it appears that this syntax will return an array with an id:
knex('table').insert({ field: 'value'}, 'id');
On PostgreSQL I need something like this:
knex('table').insert({ field: 'value'}, 'id').returning(['id']);
In each case, the structure they return is different. The latter doesn't break MySQL, but on SQLite it throws a fatal error.
The concept of "insert a record, get an id" seems to exist everywhere, though. What am I missing in Knex that lets me write this once and use it everywhere?
Way back in 2007, I implemented the database access class for a PHP framework. It had to support MySQL, PostgreSQL, SQLite, Microsoft SQL Server, Oracle, and IBM DB2.
When it came time to support auto-incremented columns, I discovered that all of these databases implement that feature differently. Some have SERIAL, some have AUTO_INCREMENT (or AUTOINCREMENT), some have SEQUENCE, some have GENERATED, and some support multiple solutions.
The solution was to not try to write one implementation that worked with all of them. I wrote classes using the Adapter Pattern, one for each brand of SQL database, so I could implement each adapter class tailored to the features supported by the respective database. The adapter satisfied an interface that I defined in my framework, to allow the primary key column to be defined and the last inserted id to be fetched in a consistent manner. But the internal implementation varied.
This was the only sane way to develop that code, in my opinion. When it comes to variations of SQL implementations, it's a fallacy that one can develop "portable" code that works on multiple brands.
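Applied to the Knex question above, the adapter idea might look roughly like this sketch (insertAndGetId and the dialect argument are illustrative names I'm introducing here, not Knex APIs):

// Illustrative sketch only: one small adapter function hides the "get the generated id" quirk.
async function insertAndGetId(knex, dialect, tableName, row) {
  if (dialect === 'pg') {
    // PostgreSQL supports RETURNING, so ask for the id explicitly.
    const rows = await knex(tableName).insert(row).returning('id');
    return typeof rows[0] === 'object' ? rows[0].id : rows[0];
  }
  // MySQL and SQLite resolve the insert with the last generated id as the first array element.
  const [id] = await knex(tableName).insert(row);
  return id;
}

The calling code only ever sees insertAndGetId; each dialect-specific quirk stays inside the adapter.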
I use SQLDelight's MySQL dialect on my server. I'm planning to migrate a table to combine many fields into a single JSON field so the server code no longer needs to know the complex data structure. As part of the migration, I need to do something like this at runtime: when the server sees a client with the new version, it knows the client won't access the old table anymore, so it's safe to migrate the record to the new table.
INSERT OR IGNORE INTO new_table SELECT id, a, b, JSON_OBJECT('c', c, 'd', JSON_OBJECT(…)) FROM old_table WHERE id = ?;
The only problem is that, unlike the SQLite dialect, the MySQL dialect doesn't recognize JSON_OBJECT or other JSON expressions, even though in this case it wouldn't need to: no matter how complex the query is, the result is never passed back to Kotlin.
I wish I could add the feature myself, but I'm pretty new to Kotlin. So my question is: is there a way to get around the rigid syntax check? I could also retrieve from the old table, convert the format in Kotlin, and then write to the new table, but that would take hundreds of lines of complex code instead of a single INSERT.
I assume from your links that you're on the alpha releases already. In alpha03 you can add currently unsupported behaviour by creating a local SQLDelight module (see this example) and adding JSON_OBJECT to the functionType override. New function types are also one of the easiest things to contribute upstream to SQLDelight, so if you want it in the next release, contributing it is an option.
For the record, I ended up using CONCAT with COALESCE as a quick and dirty hack to scrape the fields together as JSON.
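The shape of that hack is presumably something like the following, mirroring the INSERT above (the exact JSON string layout is a guess):

INSERT OR IGNORE INTO new_table
SELECT id, a, b,
       CONCAT('{"c": "', COALESCE(c, ''), '", "d": "', COALESCE(d, ''), '"}')
FROM old_table WHERE id = ?;

Since CONCAT and COALESCE are ordinary functions, the MySQL dialect's syntax check accepts them where JSON_OBJECT is rejected.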
I have a scala application that manages multiple MySQL database schemas, which includes modifying (adding, renaming, etc.) tables. The commands are issued over a connection pool that connects to a generic management database in the database server.
Because the application is designed to be cross-database, I use jOOQ to render SQL queries (execution is done via a separate JDBC module).
I'm experiencing issues with jOOQ's alterTable(...).renameTo(...) DSL. Consider the following example:
We have a table "TestTable" in database "TestDatabase". Let's say I want to rename that table simply to "Foo", keeping it in "TestDatabase".
This code:
...
val context = DSL.using(SQLDialect.MYSQL_5_7)
val query = context
  .alterTable(table(name("TestDatabase", "TestTable")))
  .renameTo(name("TestDatabase", "Foo"))
...
Generates: ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `Foo`
However, since the connection pool I'm using is connected to my management database, it just renames the table to "Foo" and moves it into my management database. I would have expected the SQL to be: ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `TestDatabase`.`Foo`. I tried a variety of alternatives to invoke the .renameTo method and convince it to use the fully qualified name, to no avail:
.renameTo(table(name(...))) -> same behaviour.
.renameTo("`TestDatabase`.`Foo`") -> Escapes the name with backticks, treats it as one name instead of a qualified name.
I'm wondering if I'm missing something, if this is intended behaviour, or maybe even a bug or design shortcoming of JOOQ.
Is there a way to rename the table using fully qualified names?
Thank you!
That's a bug in jOOQ: https://github.com/jOOQ/jOOQ/issues/8042
Your workaround is close. This doesn't work:
.renameTo("`TestDatabase`.`Foo`")
As you've noticed, behind the scenes the DSL.name() API is used to wrap the target name, because the renameTo() method doesn't implement the plain SQL templating API. As a workaround, you can explicitly use plain SQL templating by writing:
.renameTo(table("`TestDatabase`.`Foo`"))
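Put together, the workaround looks roughly like this sketch (same names and dialect as in the question):

val context = DSL.using(SQLDialect.MYSQL_5_7)
val query = context
  .alterTable(table(name("TestDatabase", "TestTable")))
  // Plain SQL templating keeps the qualified target name intact.
  .renameTo(table("`TestDatabase`.`Foo`"))
// Should render: ALTER TABLE `TestDatabase`.`TestTable` RENAME TO `TestDatabase`.`Foo`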
When using Liquibase, is there any way to use existing data to generate some of the data that is to be inserted?
For example, say I want to update a row with id 5, but I don't know up front that the id will be 5, as it is linked to another table from which I will actually be getting the id. Is there any way for me to tell Liquibase to get the id from a SELECT query?
I'm guessing this isn't really possible as I get the feeling Liquibase is really designed for a very structured non-dynamic approach, but it doesn't hurt to ask.
Thanks.
You cannot use the built-in changes to insert data based on existing data, but you can use the <sql> tag with insert statements that contain nested selects.
For example:
<changeSet>
<sql>insert into person (name, manager_id) values ('Fred', (select id from person where name='Ted'))</sql>
</changeSet>
Note: the SQL (and support for insert+select) depends on database vendor.
It is possible to write your own custom refactoring class to generate SQL. The functionality is designed to support the generation of static SQL based on the changeset's parameters.
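A rough sketch of that extension point, assuming the CustomSqlChange interface (the class name and the embedded SQL are illustrative):

// Rough sketch of a Liquibase custom change; class name and SQL are illustrative.
import liquibase.change.custom.CustomSqlChange;
import liquibase.database.Database;
import liquibase.exception.CustomChangeException;
import liquibase.exception.SetupException;
import liquibase.exception.ValidationErrors;
import liquibase.resource.ResourceAccessor;
import liquibase.statement.SqlStatement;
import liquibase.statement.core.RawSqlStatement;

public class InsertWithLookupChange implements CustomSqlChange {

    @Override
    public SqlStatement[] generateStatements(Database database) throws CustomChangeException {
        // The nested SELECT is resolved when the changeset runs, so the changelog stays static
        // while the actual id comes from the data present at upgrade time.
        String sql = "insert into person (name, manager_id) "
                + "values ('Fred', (select id from person where name = 'Ted'))";
        return new SqlStatement[] { new RawSqlStatement(sql) };
    }

    @Override
    public String getConfirmationMessage() {
        return "Inserted person with a looked-up manager_id";
    }

    @Override
    public void setUp() throws SetupException {
        // No setup needed for this sketch.
    }

    @Override
    public void setFileOpener(ResourceAccessor resourceAccessor) {
        // Not needed here.
    }

    @Override
    public ValidationErrors validate(Database database) {
        return new ValidationErrors();
    }
}

The class is then referenced from a changeSet with a customChange element pointing at its fully qualified name.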
So it's feasible to obtain a connection to the database, but the health warning attached to this approach is that the generated SQL is dynamic (your data could change) and tied tightly to your database instance.
An example of the problems this causes is the inability to generate a SQL upgrade script for a DBA to run against a production database.
I've been thinking about this use-case for some time. I still don't know if liquibase is the best solution for this data management problem or whether it needs to be combined with an additional tool like dbunit.
I'm trying to push a brand new Ruby on Rails app to Heroku. Currently, it sits on MySQL. It looks like Heroku doesn't really support MySQL and so we are considering using PostgreSQL, which they DO support.
How difficult should I expect this to be? What do I need to do to make this happen?
Again, please note that my databases (both development and production) are completely empty as of right now.
Common issues:
1. GROUP BY behavior. PostgreSQL has a rather strict GROUP BY: if you use a GROUP BY clause, then every column in your SELECT must either appear in your GROUP BY or be used in an aggregate function.
2. Data truncation. MySQL will quietly truncate a long string to fit inside a char(n) column unless your server is in strict mode; PostgreSQL will complain and make you truncate your string yourself.
3. Quoting is different: MySQL uses backticks for quoting identifiers, whereas PostgreSQL uses double quotes.
4. LIKE is case insensitive in MySQL but not in PostgreSQL. This leads many MySQL users to use LIKE as a case-insensitive string equality operator.
(1) will be an issue if you use AR's group method in any of your queries or GROUP BY in any raw SQL. Do some searching for column "X" must appear in the GROUP BY clause or be used in an aggregate function and you'll see some examples and common solutions.
(2) will be an issue if you use string columns anywhere in your application and your models aren't properly validating the length of all incoming string values. Note that creating a string column in Rails without specifying a limit actually creates a varchar(255) column, so there is an implicit :limit => 255 even though you didn't specify one. An alternative is to use t.text for your strings instead of t.string; this will let you work with arbitrarily large strings without penalty (for PostgreSQL at least). As Erwin notes below (and every other chance he gets), varchar(n) is a bit of an anachronism in the PostgreSQL world.
(3) shouldn't be a problem unless you have raw SQL in your code.
(4) will be an issue if you're using LIKE anywhere in your application. You can fix this one by changing a like b to lower(a) like lower(b) (or upper(a) like upper(b) if you like to shout) or a ilike b but be aware that PostgreSQL's ILIKE is non-standard.
There are other differences that can cause trouble, but those seem like the most common issues; (1) and (4) are shown concretely in the sketch below.
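A hedged illustration of (1) and (4), with made-up table and column names:

-- (1) Fails on PostgreSQL (and on MySQL when ONLY_FULL_GROUP_BY is enabled):
SELECT customer_id, customer_name, count(*) FROM orders GROUP BY customer_id;
-- Works on both: add the column to the GROUP BY (or wrap it in an aggregate).
SELECT customer_id, customer_name, count(*) FROM orders GROUP BY customer_id, customer_name;

-- (4) LIKE is case sensitive on PostgreSQL.
-- Portable rewrite:
SELECT * FROM users WHERE lower(email) LIKE lower('%Example.com');
-- PostgreSQL-specific alternative:
SELECT * FROM users WHERE email ILIKE '%example.com';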
You'll have to review a few things to feel safe:
group calls.
Raw SQL (including any snippets in where calls).
String length validations in your models.
All uses of LIKE.
If you have no data to migrate, it should be as simple as telling your Gemfile to use the pg gem instead, running bundle install, and updating your database.yml file to point to your PostgreSQL databases. Then just run your migrations (rake db:migrate) and everything should work great.
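Concretely, the switch is roughly this (a sketch; the database name and pool size are placeholders):

# Gemfile: swap the MySQL driver for the PostgreSQL one
gem 'pg'   # instead of: gem 'mysql2'

# config/database.yml: point the app at PostgreSQL
development:
  adapter: postgresql
  encoding: unicode
  database: myapp_development
  pool: 5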
Don't feel you have to migrate to Postgres; there are several MySQL add-on providers available on Heroku. The one I've had the most success with is http://addons.heroku.com/cleardb.
It should be simplicity itself: port the DDL from MySQL to PostgreSQL.
Does Heroku have any schema creation scripts? I'd depend on those if they were available.
MySQL and PostgreSQL are different (e.g. AUTO_INCREMENT columns in MySQL, sequences in PostgreSQL). But the port shouldn't be too hard. How many tables? Tens are doable.
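For instance, the auto-increment part of the DDL changes roughly like this (illustrative table):

-- MySQL
CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255)
);

-- PostgreSQL (SERIAL creates and wires up the sequence for you)
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255)
);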
I need to update a C# application that imports data into a database using LINQ. I am new to LINQ. The problem I am trying to solve is that there are two versions of the DB. They have the same table names and are 90% identical in structure, but have one table (out of about 60) which has a different definition.
If LINQ were not involved, I would simply select a different query depending on which version of the application (DB) the user wanted to import to, and leave the remainder of the application as is.
My impression is that LINQ is intended for situations in which the DB structure is cast in stone, and that I cannot have two LINQ table definitions having the same name and simply or easily switch between them (or do so at all).
In this case, must I have (at least) a separate entire Linq.DataContext for each version of the DB? Or have I misunderstood something basic here?
You might be able to make that happen using separate mappings. In that case you would have to hand-code your mappings, as opposed to the attribute-based mapping that the LINQ designer or SqlMetal does for you. I've never done it, but I think it might work. I just googled for "LINQ to SQL POCO mapping" and found this: Achieving POCOs in LINQ to SQL. This person loads his mapping from an XML file at runtime. You could conditionally load one of two different mapping files.
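A minimal sketch of that idea, assuming two hand-written mapping files (the file names and the useNewSchema flag are hypothetical):

// Sketch: pick a LINQ to SQL XML mapping file at runtime instead of attribute-based mapping.
using System.Data.Linq;
using System.Data.Linq.Mapping;

static class DataContextFactory
{
    public static DataContext Create(string connectionString, bool useNewSchema)
    {
        // mapping-v1.xml / mapping-v2.xml are hypothetical, hand-maintained mapping files.
        string mappingFile = useNewSchema ? "mapping-v2.xml" : "mapping-v1.xml";
        MappingSource mapping = XmlMappingSource.FromUrl(mappingFile);
        return new DataContext(connectionString, mapping);
    }
}

The rest of the application can keep working against the returned DataContext; only the mapping chosen at startup differs between the two database versions.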