When working with JSON in MariaDB, it is possible to index single-point values using virtual columns, e.g.
ALTER TABLE features ADD feature_street VARCHAR(30) AS (JSON_UNQUOTE(feature->"$.properties.STREET"));
ALTER TABLE features ADD INDEX (feature_street);
Does anybody know whether it is possible to index JSON arrays in the same way so that when querying based on the values of the array members, each array does not have to be scanned?
I can't find anything in the docs which suggests this is possible.
Create a "virtual" column from the element of the JSON column and index it.
https://mariadb.com/kb/en/mariadb/virtual-computed-columns/
The elements of an array inside a JSON string -- That is another matter.
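You can, however, pin a virtual column to a specific array position. A minimal sketch, assuming a hypothetical tags array under $.properties (each element needs its own virtual column, so the array as a whole still cannot be indexed this way):
-- index the first element of the assumed tags array
ALTER TABLE features ADD feature_tag0 VARCHAR(30) AS (JSON_UNQUOTE(JSON_EXTRACT(feature, '$.properties.tags[0]')));
ALTER TABLE features ADD INDEX (feature_tag0);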
Given you have a JSON column called slug structured like this:
{
"en_US": "slug",
"nl_NL": "nl/different-slug"
}
This could be indexed by adding generated columns to the table that point to the values of en_US and nl_NL. This works fine but adding a third locale would require a table schema update.
Would it be possible to let MySQL automagically index all the key-value pairs in the JSON without explicitly defining them in the schema?
As the MySQL manual on the JSON data type says:
JSON columns, like columns of other binary types, are not indexed directly; instead, you can create an index on a generated column that extracts a scalar value from the JSON column. See Indexing a Generated Column to Provide a JSON Column Index, for a detailed example.
So the answer is no, MySQL cannot index the contents of a JSON column automatically. You need to define and index generated columns.
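For the slug example above, a minimal sketch of such a generated column plus index; the table name pages and the column size are assumptions:
-- one generated column (and index) per locale key you want to search on
ALTER TABLE pages ADD slug_en_us VARCHAR(255) AS (JSON_UNQUOTE(JSON_EXTRACT(slug, '$.en_US')));
ALTER TABLE pages ADD INDEX idx_slug_en_us (slug_en_us);
-- a new locale such as nl_NL still needs its own column and index, hence the schema update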
I have a 'one to many' relationship in two tables. In that case I want to write a stored procedure in MySQL which can accept a list of child table objects and update the tables. The challenge I am facing is what the data type of the IN parameter for the list of objects should be.
You can try to use VARCHAR(65535) in MySQL.
There is no list data type in MySQL.
Given the info that you are coming from Oracle DB, you might want to know that MySQL does not have a strict concept of objects. And, as answered here, unfortunately, you cannot create a custom data type on your own.
The way to work around it is to imagine a table as a class. Thus, your objects will become records of the said table.
You have to settle for one of the following approaches:
Concatenated IDs: Store the concatenated IDs you want to operate on in a string-equivalent datatype like VARCHAR(5000) or TEXT. This way you can either split and loop over the string or compose a prepared statement dynamically and execute it (a sketch of this follows below).
Use a temporary table: Fetch the child table objects, on the fly, into a temporary table and process them. This way, once you create the temporary table with the fields & constraints that you like, you can populate it with INSERT INTO temp_table_name SELECT ...; the SELECT statement should fetch the properties you need.
Depending on the size of the data, you might want to choose the temp table approach for larger data sets.
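A rough sketch of the concatenated-IDs approach; the child_table name and its columns are hypothetical:
DELIMITER //
CREATE PROCEDURE update_children(IN id_list TEXT)
BEGIN
  DECLARE current_id VARCHAR(20);
  WHILE id_list <> '' DO
    -- take the first comma-separated value, then strip it from the list
    SET current_id = SUBSTRING_INDEX(id_list, ',', 1);
    SET id_list = IF(LOCATE(',', id_list) > 0,
                     SUBSTRING(id_list, LOCATE(',', id_list) + 1),
                     '');
    UPDATE child_table SET processed = 1 WHERE id = current_id;
  END WHILE;
END //
DELIMITER ;
It would then be called as CALL update_children('1,2,3');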
You can use the TEXT data type to store a large amount of data in a single variable.
You can define it in the stored procedure as:
IN variable_name TEXT
I have a schema for BigQuery in which the Record field is JSON-like; however, the keys in the JSON are dynamic, i.e. new keys might emerge with new data, and it is hard to know how many keys there are in total. According to my understanding, it is not possible to use BigQuery for such a table since the schema of the record field type needs to be explicitly defined, or else it will throw an error.
The only other alternative is to use the JSON_EXTRACT function while querying the data, which would parse through the JSON (text) field. Is there any other way we can have dynamic nested schemas in a table in BigQuery?
A fixed schema can be created for common fields, and you can set them as nullable. And a column as type string can be used to store the rest of the JSON and use the JSON Functions to query for data.
We keep a meta column in our tables all the time, which holds additional raw unstructured data as a JSON object.
Please note that currently you can store up to 2 Megabytes in a string column, which is decent for a JSON document.
To make it easier to deal with the data, you can create views from your queries that use JSON_EXTRACT, and reference the view table in some other more simpler query.
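A view of this kind is just a saved query over the raw string column; a sketch, where the table, the meta column, and the JSON path are made up for illustration (JSON_EXTRACT_SCALAR is the scalar-returning variant of JSON_EXTRACT):
SELECT
  id,
  JSON_EXTRACT_SCALAR(meta, '$.user.country') AS user_country
FROM mydataset.events
Saving that query as a view lets other queries select user_country as if it were a real column.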
Also at the streaming insert phase, your app could denormalize the JSON into proper tables.
I want to do a scan on a table in DynamoDB using boto. My problem is that I want to paginate using max_results and exclusive_start_key.
It actually looks like the only way to access the LastEvaluatedKey to pass it as exclusive_start_key is to manually keep track of primary keys and pass the last one as exclusive_start_key.
But that is not my problem. My problem is that I don't know what format (what object type) I should pass to exclusive_start_key; it does not accept an int even when the table has an integer hash_key.
According to the documentation, the Layer2 implementation of Scan expects either a list or a tuple as a representation of the primary key:
(hash_key,) for a single key table
(hash_key, range_key) for a composed key table
Please note that there is also a (tricky) way to read the exclusive start key directly from the Scan generator in Boto.
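A minimal sketch with the legacy boto.dynamodb (Layer2) API, assuming a hash-key-only table; the table name and the accessed attributes are made up for illustration:
import boto

conn = boto.connect_dynamodb()
table = conn.get_table('my-table')

# first page of up to 10 items
page = list(table.scan(max_results=10))

# pass the primary key of the last item as a tuple:
# (hash_key,) for a hash-only table, (hash_key, range_key) for a composed key
if page:
    last = page[-1]
    next_page = list(table.scan(max_results=10,
                                exclusive_start_key=(last.hash_key,)))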
I'm sure this is either totally impossible or really easy:
If I'm creating a table and I want one of the columns to have limited options, it seems that I use either the ENUM or SET value type. But I have to define the possible values at that moment. What if I have another table which has two columns, a primary key column and a data column, and I want the ENUM for my new table to be set to the values of the primary key column of that already existing table?
I'm sure I can just write in the values long-hand, but ideally what I need is for new values to be entered into the list table and for the table with the enum column to just accept that the value choices will include anything new added to that list table.
Is this possible without needing to manipulate the structure of the new table each time something is added to the list?
I think this link helps:
http://dev.mysql.com/doc/refman/5.0/en/enum.html
There is a discussion of it in the user comments, starting with:
"In MySQL 5.0, you can convert an enum's values into a dynamically-defined table of values, which then provides effectively a language-neutral method to handle this kind of conversion (rather than relying on PHP, Tcl, C, C++, Java, etc. specific code)."
The commenter does it with a stored PROCEDURE.
The easiest way is to use a regular column without constraints. If you're interested in all the current values, use DISTINCT to query them:
select distinct YourColumn from YourTable
That way, you don't have any maintenance and can store whatever you like in the table.
The foreign key table you mention is also a good option. The foreign key will limit the original column. Before you do the actual insert, you run a query to expand the "enum" table:
insert into EnumTable (name)
select 'NewEnumValue' from dual
where not exists (select * from EnumTable where name = 'NewEnumValue')
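For the foreign key option, a minimal sketch of the two tables; the names and column types are assumptions:
create table EnumTable (
  name varchar(50) not null primary key
) engine=InnoDB;
create table YourTable (
  id int not null auto_increment primary key,
  YourColumn varchar(50) not null,
  -- only values present in EnumTable are accepted; new choices are added
  -- by inserting rows into EnumTable, not by altering this table
  foreign key (YourColumn) references EnumTable (name)
) engine=InnoDB;
New values then go into EnumTable first (for example with the insert ... where not exists statement above), after which they are valid for YourColumn.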
Not sure what exactly you're trying to achieve btw; limit the column, but automatically expand the choices when someone breaks the limit?