Configuring index text length for MySQL in Doctrine

I have a text field in my database and an index on its first 10 characters. How do I specify that in my Doctrine entity?
I can't find any information about database-specific options for indexes anywhere :/
This is my "partial" MySQL create statement:
KEY `sourceaddr_index` (`sourceaddr`(10)),
And this is my @Index annotation in Doctrine:
@ORM\Index(name="sourceaddr_index", columns={"sourceaddr"}, options={}),
This doesn't interfere with regular use, but I noticed the problem when deploying development to a new laptop and creating the database based on my entities...
Any help would be appreciated :)

Possible since Doctrine DBAL 2.9, see: https://github.com/doctrine/dbal/pull/2412
@Index(name="slug", columns={"slug"}, options={"lengths": {191}})
Unfortunately, Doctrine seems to be very picky about whitespace location, so e.g. doctrine:schema:update --dump-sql yields:
DROP INDEX slug ON wp_terms;
CREATE INDEX slug ON wp_terms (slug(191));
and even if you execute those statements, the messages will stay there (tested with MariaDB 10.3.14).

I've had very good luck naming the index in Doctrine after manually creating it in MySQL. It's not pretty or elegant, and it's prone to cause errors moving from dev to production if you forget to recreate the index. But Doctrine seems to understand and respect it.
In my entity, I have the following definition. Doctrine ignores the length option - it's wishful thinking on my part.
/**
 * Field
 *
 * @ORM\Table(name="field", indexes={
 *     @ORM\Index(name="field_value_bt", columns={"value"}, options={"length": 100})
 * })
And in MySQL, I execute
CREATE INDEX field_value_bt ON field (value(100))
As far as I've seen, Doctrine just leaves the index alone so long as it's named the same.

In short: you can't set this within Doctrine. Doctrine's ORM is specifically focused on cross-vendor compatibility, and the type of index you're describing, though supported in many modern RDBMSs, is somewhat outside the scope of what Doctrine handles.
Unfortunately there isn't an easy way around this if you use Doctrine's schema updater (in Symfony that would be php app/console doctrine:schema:update --force): if you manually update the database, Doctrine will sometimes revert that change to keep things in sync.
In instances where I've needed something like this, I've just set up a fixture that sends the relevant ALTER TABLE statement via SQL. If you're going to be distributing your code (i.e. it may run on other/older databases), you can wrap the statement in a platform check to be safe.
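For the prefix index from the question, the statement such a fixture would send might look like this (just a sketch; the table name message is an assumption, only the sourceaddr column and the 10-character prefix come from the question):
-- "message" is a hypothetical table name; sourceaddr(10) matches the index from the question
ALTER TABLE message ADD INDEX sourceaddr_index (sourceaddr(10));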
It's not ideal but once your app/software stabilises, issues like this shouldn't happen all that often.

Related

Using Doctrine with multiple MySQL databases

I'm starting a new project from an existing MySQL DB and I would like to use Symfony + Doctrine for that.
The problem is that my current DB has multiple databases in it. For instance, it has database.table names like:
customers.info
customers.orders
items.catalog
items.stock
etc....
I've tried to search online, but I've realized that one of the problems is that the word "database" is used for two very different things: database software, like MySQL, PostgreSQL, MariaDB, etc., and databases as in SQL "CREATE DATABASE".
So when looking at the Symfony docs, I found this page, which states that I cannot use the Doctrine ORM since I have multiple DBs: https://symfony.com/doc/current/doctrine/multiple_entity_managers.html
But the more I read it, the more I have the feeling that what they are saying is "you need one entity manager for MySQL, one for PostgreSQL, etc., and entities cannot define associations across different entity managers", and not "entities cannot define associations across different DBs within the same DB software".
Am I right? And if yes, how can I achieve such a thing, knowing that I need to provide a database name in the connection URL (like mysql://user:pass@127.0.0.1/oneOfMyDb)?
Thanks!
OK, so I finally found the answer, which may be useful for other people in the same situation.
It is possible to use Doctrine with multiple databases/schemas in MySQL. Yes, the problem here is that MySQL kind of mixes the concepts of DB and schema, hence the confusion.
In order to do this, you need to declare the table and schema used for every entity, for instance:
<?php

namespace App\Entity;

use App\Repository\PropertyRepository;
use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity(repositoryClass=PropertyRepository::class)
 * @ORM\Table(name="property", schema="myOtherDB")
 */
class Property
{
    // some stuff here...
}
This way, no matter which DB name you declare in the connection, it will connect to your other DB (schema), and you will be able to fetch data via foreign keys even if that data is stored in a table in a different DB (schema).
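For what it's worth, the reason this works is that on a single MySQL server, tables in different databases (schemas) can be addressed and joined with fully qualified names. A rough sketch using the database/table names from the question (the join and column names are assumptions):
-- customers.orders and items.catalog come from the question; the columns are made up
SELECT o.id, c.label
FROM customers.orders AS o
JOIN items.catalog AS c ON c.id = o.item_id;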
I hope this will help some people!

MySQL search with typo

In my MySQL database I have a user table. I need to perform search-as-you-type with typo tolerance over the user name field. There are a few very old questions on this topic. I tested the built-in full-text search of MySQL, but it didn't work as expected (it does not handle typos) [I knew that, but I tried anyway].
What's my best option? I thought there would be an easy solution nowadays. I'm thinking about replicating the user table in Elasticsearch and doing the instant search from there, but I'd really like to avoid the synchronization nightmare that this would cause.
Thanks!!
You could use SOUNDEX in MySQL. We have tried that, but I can say that it does not work that well and it also makes the search a bit slow.
We had a similar issue and switched to ES.
What we did is as follows:
We created a trigger for the table that will be synced to ES. The trigger writes to a new table whose columns would be:
IdToUpdate, Operation, DateTime, IsSynced
The Operation would be create, update or delete. IsSynced tells whether the update has been pushed to ES.
Then add a cron job that queries this table for all rows that have IsSynced set to, say, 0, pushes those IDs and operations to a queue like RabbitMQ, and sets IsSynced to 1 for those IDs.
The reason to use RabbitMQ is that it makes sure the update is forwarded to ES. In case of failure we can always re-queue the object.
Write a consumer to get the objects from the queue and update ES.
Apart from this, you will also have to create a utility that builds the ES index from the database for first-time use.
You can also look at the fuzzy search feature of ES, which will handle typos, and at the completion suggester, which also supports fuzziness.
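A minimal sketch of the sync table and one of the triggers described above (table and column names are assumptions based on the columns listed; you would need similar triggers for INSERT and DELETE, and the user table is assumed to have an id primary key):
-- sync table matching the columns listed above
CREATE TABLE user_es_sync (
    IdToUpdate  INT NOT NULL,
    Operation   VARCHAR(10) NOT NULL,   -- 'create', 'update' or 'delete'
    `DateTime`  DATETIME NOT NULL,
    IsSynced    TINYINT(1) NOT NULL DEFAULT 0
);

-- record every update so the cron job can push it to ES later
CREATE TRIGGER user_after_update
AFTER UPDATE ON user
FOR EACH ROW
    INSERT INTO user_es_sync (IdToUpdate, Operation, `DateTime`, IsSynced)
    VALUES (NEW.id, 'update', NOW(), 0);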

MySQL Indexes in Doctrine / Zend Framework 2

I have some serious performance problems and I found out that they are caused by a lack of indexes in MySQL. So I added an index to the table definition of the entity:
@ORM\Table(
    name="test",
    indexes={
        @ORM\Index(name="test_idx", columns={"testfield"})
    }
)
These lines are OK (hopefully) but result in simply nothing. When I run doctrine orm:validate-schema it says that the database is in sync. When I add the index manually to MySQL, it says it is no longer in sync and wants to drop the index. I am a little bit confused because the CLI tool does not add the index (but drops it if it exists), and I am not getting any error message. What is wrong?
It should work this way; however, the index support is broken at the moment. Have a look at:
https://github.com/doctrine/DoctrineORMModule/issues/368
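Until that is fixed, a pragmatic workaround (same idea as the manually created index described in the first thread above) is to create the index by hand so it matches the annotation; a sketch using the names from the entity definition above:
CREATE INDEX test_idx ON test (testfield);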

Query to detect MySQL

I'm fixing a bug in a proprietary piece of software, where I have some kind of JDBC Connection (pooled or not, wrapped or not, ...). I need to detect whether it is a MySQL connection or not. All I can use is an SQL query.
What would be an SQL query that succeeds on MySQL each and every time (MySQL 5 and higher is enough) and fails (syntax error) on every other database?
The preferred way, using JDBC Metadata...
If you have access to a JDBC Connection, you can retrieve the vendor of database server fairly easily without going through an SQL query.
Simply check the connection metadata:
String dbType = connection.getMetaData().getDatabaseProductName();
This should give you a string that begins with "MySQL" if the database is in fact MySQL (the string can differ between the Community and Enterprise editions).
If your bug is caused by the lack of support for one particular type of statement (one that MySQL happens not to support), you really should rely on the appropriate metadata method to verify support for that particular feature instead of hard-coding a workaround specifically for MySQL. There are other MySQL-like databases out there (MariaDB, for example).
If you really must pass through an SQL query, you can retrieve the same string using this query:
SELECT @@version_comment AS 'DatabaseProductName';
However, the preferred way is by reading the DatabaseMetaData object JDBC provides you with.
Assuming your interesting preconditions (which other answers try to work around):
Do something like this:
SELECT SQL_NO_CACHE 1;
This gives you a single value in MySQL and fails on other platforms, because SQL_NO_CACHE is a MySQL-specific query modifier, not a column.
Alternatively, if your connection has the appropriate privileges:
SELECT * FROM mysql.db;
This is an information table in a database specific to MySQL, so will fail on other platforms.
The other ways are better, but if you really are constrained as you say in your question, this is the way to do it.
MySQL may be the only DB engine that uses backticks for quoting. That means something like this should work:
SELECT count(*)
FROM `INFORMATION_SCHEMA`.`CHARACTER_SETS`
WHERE 1=3
Note that the backticks go around each identifier separately (`INFORMATION_SCHEMA`.`CHARACTER_SETS`), not around the whole dotted name.

Switching from MySQL to PostgreSQL for Ruby on Rails for the sake of Heroku

I'm trying to push a brand new Ruby on Rails app to Heroku. Currently, it sits on MySQL. It looks like Heroku doesn't really support MySQL and so we are considering using PostgreSQL, which they DO support.
How difficult should I expect this to be? What do I need to do to make this happen?
Again, please note that my databases as of right now (both development and production) are completely empty.
Common issues:
GROUP BY behavior. PostgreSQL has a rather strict GROUP BY. If you use a GROUP BY clause, then every column in your SELECT must either appear in your GROUP BY or be used in an aggregate function.
Data truncation. MySQL will quietly truncate a long string to fit inside a char(n) column unless your server is in strict mode; PostgreSQL will complain and make you truncate your string yourself.
Quoting is different: MySQL uses backticks for quoting identifiers whereas PostgreSQL uses double quotes.
LIKE is case insensitive in MySQL but not in PostgreSQL. This leads many MySQL users to use LIKE as a case insensitive string equality operator.
(1) will be an issue if you use AR's group method in any of your queries or GROUP BY in any raw SQL. Do some searching for column "X" must appear in the GROUP BY clause or be used in an aggregate function and you'll see some examples and common solutions.
(2) will be an issue if you use string columns anywhere in your application and your models aren't properly validating the length of all incoming string values. Note that creating a string column in Rails without specifying a limit actually creates a varchar(255) column, so there is an implicit :limit => 255 even though you didn't specify one. An alternative is to use t.text for your strings instead of t.string; this will let you work with arbitrarily large strings without penalty (for PostgreSQL at least). As Erwin notes below (and every other chance he gets), varchar(n) is a bit of an anachronism in the PostgreSQL world.
(3) shouldn't be a problem unless you have raw SQL in your code.
(4) will be an issue if you're using LIKE anywhere in your application. You can fix this one by changing a like b to lower(a) like lower(b) (or upper(a) like upper(b) if you like to shout) or a ilike b but be aware that PostgreSQL's ILIKE is non-standard.
There are other differences that can cause trouble but those seem like the most common issues.
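To make (1) and (4) concrete, here is roughly how they show up and how they are usually fixed (table and column names are made up for illustration):
-- (1) MySQL (without ONLY_FULL_GROUP_BY) accepts this; PostgreSQL rejects it
--     because "name" is neither grouped nor aggregated:
SELECT customer_id, name, COUNT(*) FROM orders GROUP BY customer_id;
-- PostgreSQL-friendly version: every non-aggregated column appears in the GROUP BY.
SELECT customer_id, name, COUNT(*) FROM orders GROUP BY customer_id, name;

-- (4) Case-insensitive matching: the portable rewrite, then PostgreSQL's non-standard ILIKE.
SELECT * FROM users WHERE lower(email) LIKE lower('%Foo%');
SELECT * FROM users WHERE email ILIKE '%Foo%';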
You'll have to review a few things to feel safe:
group calls.
Raw SQL (including any snippets in where calls).
String length validations in your models.
All uses of LIKE.
If you have no data to migrate, it should be as simple as telling your Gemfile to use the pg gem instead, running bundle install, and updating your database.yml file to point to your PostgreSQL databases. Then just run your migrations (rake db:migrate) and everything should work great.
Don't feel you have to migrate to Postgres - there are several MySQL add-on providers available on Heroku - http://addons.heroku.com/cleardb is the one I've had the most success with.
It should be simplicity itself: port the DDL from MySQL to PostgreSQL.
Does Heroku have any schema creation scripts? I'd depend on those if they were available.
MySQL and PostgreSQL are different (e.g. AUTO_INCREMENT columns in MySQL, sequences in PostgreSQL). But the port shouldn't be too hard. How many tables? Tens are doable.