MySQL 8.0.17 and above support multi-valued indexes for indexing columns that contain a JSON array.
Creating such an index requires specific use of DDL, like this:
INDEX zips( (CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)) )
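For context, a minimal sketch of that DDL inside a full table definition, together with a query such an index can serve (table and column names are illustrative, chosen to match the snippet above; MEMBER OF is one of the functions the optimizer can map onto a multi-valued index):

CREATE TABLE customers (
    id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    custinfo JSON,
    -- multi-valued index over the JSON array at $.zipcode
    INDEX zips( (CAST(custinfo->'$.zipcode' AS UNSIGNED ARRAY)) )
);

-- the zips index can be used for array membership tests like this
SELECT * FROM customers WHERE 94507 MEMBER OF (custinfo->'$.zipcode');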
The question is how to get this declaration done in the Prisma schema.
The documentation for defining indexes seems to have nothing about this. The equivalent in Postgres (a GIN index) is covered and has a specific syntax in the Prisma schema: @@index([value(ops: JsonbPathOps)], type: Gin)
I guess this means that support for multi-valued MySQL indexes doesn't exist, but maybe I'm missing something?
Any other thoughts / ideas on this?
Is this maybe something that's coming soon in Prisma?
Related
I have a web app developed in Java. In one part of my app I need to use MySQL LIKE to search for a string in a MySQL table containing 100,000+ rows. From my research I found that MySQL LIKE only uses an index if the wildcard is at the end of the pattern, for example hello%, but I need %hello%, and LIKE doesn't use an index with that kind of leading wildcard. I also read that there are other technologies such as PostgreSQL which give you the ability to use indexes when searching for a string.
My questions: just because of LIKE, do I need to change my MySQL DB, with all its other features, to a PostgreSQL DB? Do we have any alternative way in MySQL to search for a string that uses indexes? Or do I install them both and use each for its own purpose (if there is no other way)?
All replies are much appreciated.
Do we have any alternative way in MySQL to search for a string
Have you looked into MySQL Full-Text Search? It uses a FULLTEXT index, provided you are using either the InnoDB or MyISAM engine.
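A minimal sketch of that approach (table and column names are illustrative):

-- add a FULLTEXT index; works on InnoDB and MyISAM
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);

-- served by the FULLTEXT index rather than a full scan with LIKE '%hello%'
SELECT * FROM articles
WHERE MATCH(body) AGAINST ('hello' IN NATURAL LANGUAGE MODE);

Note that full-text search matches whole words rather than arbitrary substrings, so the results are not identical to LIKE '%hello%'.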
I have a text field in my database and an index on it for the first 10 characters. How do I specify that in my Doctrine entity?
I can't find any information about database-specific options for indexes anywhere :/
This is my "partial" MySQL create statement:
KEY `sourceaddr_index` (`sourceaddr`(10)),
And this is my @Index in Doctrine:
@ORM\Index(name="sourceaddr_index", columns={"sourceaddr"}, options={}),
This doesn't interfere with regular use, but I noticed the problem when deploying development to a new laptop and creating the database based on my entities...
Any help would be appreciated :)
Possible since Doctrine DBAL 2.9, see: https://github.com/doctrine/dbal/pull/2412
@Index(name="slug", columns={"slug"}, options={"lengths": {191}})
Unfortunately, Doctrine seems to be very picky about whitespace location, so e.g. doctrine:schema:update --dump-sql still yields:
DROP INDEX slug ON wp_terms;
CREATE INDEX slug ON wp_terms (slug(191));
and even if you execute those statements, the messages will stay there (tested with MariaDB 10.3.14).
I've had very good luck naming the index in Doctrine after manually creating it in MySQL. It's not pretty or elegant, and it's prone to cause errors moving from dev to production if you forget to recreate the index. But Doctrine seems to understand and respect it.
In my entity, I have the following definition. Doctrine ignores the length option - it's wishful thinking on my part.
/**
* Field
*
* @ORM\Table(name="field", indexes={
* @ORM\Index(name="field_value_bt", columns={"value"}, options={"length": 100})
* })
And in MySQL, I execute
CREATE INDEX field_value_bt ON field (value(100))
As far as I've seen, Doctrine just leaves the index alone so long as it's named the same.
In short: you can't set this within Doctrine. Doctrine's ORM is specifically focused on cross-vendor compatibility, and the type of index you're describing, though supported in many modern RDBMSs, is somewhat outside the scope of what Doctrine handles.
Unfortunately there isn't an easy way around this if you use Doctrine's schema updater (in Symfony that would be php app/console doctrine:schema:update --force), because if you manually update the database, Doctrine will sometimes regress that change to keep things in sync.
In instances where I've needed something like this I've just set up a fixture that sends the relevant ALTER TABLE statement via SQL. If you're going to be distributing your code (i.e. it may run on other/older databases) you can wrap the statement in a platform check to make sure it only runs where it's supported.
It's not ideal but once your app/software stabilises, issues like this shouldn't happen all that often.
I have a CharField(max_length=260) in a utf8_general_ci MySQL DB. The column is too long to be fully indexed, so I want to use the prefix index feature of MySQL.
"Indexes can be created that use only the leading part of column values, using col_name(length) syntax to specify an index prefix length"
See: http://dev.mysql.com/doc/refman/5.0/en/create-index.html
Is there a way to do this in Django? What is the best way to go?
I don't see any other option except executing an SQL query directly.
Read this tutorial: https://docs.djangoproject.com/en/dev/topics/db/sql/#executing-custom-sql-directly
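As a sketch, the raw statement would be a plain MySQL prefix index, which you could run from a RunSQL migration operation or a database cursor (app, model, field, and index names here are illustrative; by default Django names the table appname_modelname):

-- prefix index on the first 10 characters of the column
CREATE INDEX myfield_prefix_idx ON myapp_mymodel (myfield(10));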
I have a dataset with a lot of columns I want to import into a MySQL database, so I want to be able to create tables without specifying the column headers by hand. Rather, I want to supply a file with the column labels in it to (presumably) the MySQL CREATE TABLE command. I'm using the standard MySQL Query Browser tools in Ubuntu, but I didn't see an option for this in the create table dialog, nor could I figure out how to write a query to do this from the CREATE TABLE documentation page. But there must be a way...
A CREATE TABLE statement includes more than just column names:
Table name*
Column names*
Column data types*
Column constraints, like NOT NULL
Column options, like DEFAULT, character set
Table constraints, like PRIMARY KEY* and FOREIGN KEY
Indexes
Table options, like storage engine, default character set
* mandatory
You can't get all this just from a list of column names. You should write the CREATE TABLE statement yourself.
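To make that concrete, a small hand-written statement that touches most of the items in the list above (all names are illustrative):

CREATE TABLE orders (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT,      -- column name, data type, constraint
    customer_id INT UNSIGNED NOT NULL,
    status      VARCHAR(20) NOT NULL DEFAULT 'new',        -- column option: DEFAULT
    PRIMARY KEY (id),                                      -- table constraint
    FOREIGN KEY (customer_id) REFERENCES customers (id),   -- assumes a customers table exists
    INDEX idx_status (status)                              -- index
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;                   -- table options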
Re your comment: Many software development frameworks support ways to declare tables without using SQL DDL. E.g. Hibernate uses XML files. YAML is supported by Rails ActiveRecord, PHP Doctrine, and Perl's SQLFairy. There are probably other tools that use other formats such as JSON, but I don't know of one offhand.
But eventually, all these "simplified" interfaces are no less complex to learn than SQL, while failing to represent exactly what SQL does. See also The Law of Leaky Abstractions.
Check out SQLFairy, because that tool might already convert from files to SQL in a way that can help you. And FWIW MySQL Query Browser (or under its current name, MySQL Workbench) can read SQL files. So you probably don't have to copy & paste manually.
I read an article about schema-less databases, which sounds cool. (http://bret.appspot.com/entry/how-friendfeed-uses-mysql)
But what isn't clear to me is how they run search queries on this data. Since the data is in JSON format, how do we search it?
Attributes that are needed for filtering/searching are first indexed using a separate table. This makes the data more transparent.
Let me quote what this post says: http://bret.appspot.com/entry/how-friendfeed-uses-mysql
Indexes are stored in separate tables. To create a new index, we create a new table storing the attributes we want to index on all of our database shards.
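As a hedged sketch of that pattern (column names and types here are my own illustration, not copied from the article): the entities live in one table as opaque blobs, and each "index" is simply another table keyed by the indexed attribute, pointing back at the entity id and maintained by application code.

-- main table: schema-less entities stored as serialized blobs
CREATE TABLE entities (
    id   BINARY(16) NOT NULL PRIMARY KEY,
    body MEDIUMBLOB
) ENGINE=InnoDB;

-- "index" on the user attribute
CREATE TABLE index_user (
    user_id   BINARY(16) NOT NULL,
    entity_id BINARY(16) NOT NULL,
    PRIMARY KEY (user_id, entity_id)
) ENGINE=InnoDB;

-- to search by user, query the index table and join back to the entities
SELECT e.* FROM index_user i JOIN entities e ON e.id = i.entity_id
WHERE i.user_id = @user_id;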
I'd imagine they have a separate search engine with its own index - probably not even in MySQL, something like Solr.
They're using Sphinx for that.