I'm trying to use the MySQL SET type in PostgreSQL, but I found only arrays, which have quite similar functionality but don't meet my requirements.
Does PostgreSQL have a similar data type?
You can use the following workarounds:
1. BIT strings
You can define your set of maximum N elements as simply BIT(N).
It is a little awkward to populate and retrieve: you will have to use bit masks as set members. But bit strings really shine for set operations: intersection is simply &, union is |.
This type is stored very efficiently: one bit per element, with a small overhead for the length.
Also, the length is not really limited (though you have to decide it up front).
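For illustration, a minimal sketch of the BIT(N) approach; the element-to-bit mapping is an assumption you define yourself:
-- A 3-element set; assume bit 1 = 'red', bit 2 = 'green', bit 3 = 'blue'.
CREATE TABLE bit_set_demo (flags BIT(3) NOT NULL DEFAULT B'000');
INSERT INTO bit_set_demo VALUES (B'101');          -- {'red', 'blue'}
-- Membership test via a bit mask:
SELECT (flags & B'001') = B'001' AS has_blue FROM bit_set_demo;
-- Set operations on two values:
SELECT B'101' & B'011' AS intersection,            -- B'001'
       B'101' | B'011' AS union_result;            -- B'111'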
2. HSTORE
The HSTORE type is an extension, but it is very easy to install. Simply executing
CREATE EXTENSION hstore;
on most installations (9.1+) will make it available. Rumor has it that PostgreSQL 9.3 will have HSTORE as a standard type.
It is not really a set type, but more like a Perl hash or Python dictionary: it keeps an arbitrary set of key => value pairs.
With that, it is not very efficient (certainly not BIT-string efficient), but it does provide the functions essential for sets: || for union, while intersection is a little awkward; use
slice(a,akeys(b)) || slice(b,akeys(a))
You can read more about HSTORE in the PostgreSQL documentation.
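For instance, a minimal sketch treating the keys as set members (the values are just dummy '1's):
SELECT 'a=>1, b=>1'::hstore || 'b=>1, c=>1'::hstore AS union_result;
-- "a"=>"1", "b"=>"1", "c"=>"1"
SELECT slice('a=>1, b=>1'::hstore, akeys('b=>1, c=>1'::hstore))
    || slice('b=>1, c=>1'::hstore, akeys('a=>1, b=>1'::hstore)) AS intersection;
-- "b"=>"1"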
What about an array with a check constraint:
create table foobar
(
   myset text[] not null,
   constraint check_set
      check ( array_length(myset, 1) <= 2
              and (myset = array[''] or 'one' = ANY(myset) or 'two' = ANY(myset))
            )
);
This would match the definition of SET('one', 'two') as explained in the MySQL manual.
The only thing this would not do is "normalize" the array. So
insert into foobar values (array['one', 'two']);
and
insert into foobar values (array['two', 'one']);
would be displayed differently than in MySQL (where both would wind up as 'one','two').
The check constraint will, however, get messy with more than 3 or 4 elements.
Building on a_horse_with_no_name's answer above, I would suggest something just a little more complex:
CREATE FUNCTION set_check(in_value anyarray, in_check anyarray)
RETURNS bool LANGUAGE sql IMMUTABLE AS
$$
WITH basic_check AS (
    -- For each distinct element: is it an allowed member, and how often
    -- does it occur (to reject duplicates)?
    SELECT bool_and(v = ANY($2)) AS condition, count(*) AS ct
    FROM unnest($1) v
    GROUP BY v
), length_check AS (
    -- An empty array is a valid (empty) set.
    SELECT count(*) = 0 AS test FROM unnest($1)
)
SELECT bool_and(condition AND ct = 1)
FROM basic_check
UNION
SELECT test FROM length_check WHERE test;
$$;
Then you should be able to do something like:
CREATE TABLE set_test (
    my_set text[] CHECK (set_check(my_set, array['one'::text, 'two']))
);
This works:
postgres=# insert into set_test values ('{}');
INSERT 0 1
postgres=# insert into set_test values ('{one}');
INSERT 0 1
postgres=# insert into set_test values ('{one,two}');
INSERT 0 1
postgres=# insert into set_test values ('{one,three}');
ERROR: new row for relation "set_test" violates check constraint "set_test_my_set_check"
postgres=# insert into set_test values ('{one,one}');
ERROR: new row for relation "set_test" violates check constraint "set_test_my_set_check"
Note this assumes that for your set every value must be unique (we are talking about sets here). The function should perform very well and should meet your needs, and it has the advantage of handling sets of any size.
Storage-wise it is completely different from MySQL's implementation. It will take up more space on disk, but it should handle sets with as many members as you like, provided you aren't running up against storage limits. So this should offer a superset of the functionality of MySQL's implementation. One significant difference, though, is that this does not collapse the array into distinct values; it just prohibits them. If you need that too, look at a trigger (a sketch follows below).
This solution also leaves the ordinality of the input data intact, so '{one,two}' is distinct from '{two,one}'. If you need that behavior changed, you may want to look into exclusion constraints on PostgreSQL 9.2.
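For completeness, a hedged sketch of such a normalizing trigger for the set_test table above; the function and trigger names are hypothetical:
CREATE FUNCTION normalize_set() RETURNS trigger LANGUAGE plpgsql AS
$$
BEGIN
    -- Sort and de-duplicate, so '{two,one}' and '{one,one,two}' become '{one,two}'.
    SELECT coalesce(array_agg(DISTINCT v ORDER BY v), '{}')
      INTO NEW.my_set
      FROM unnest(NEW.my_set) v;
    RETURN NEW;
END;
$$;

CREATE TRIGGER set_test_normalize
    BEFORE INSERT OR UPDATE ON set_test
    FOR EACH ROW EXECUTE PROCEDURE normalize_set();
Note that with this in place, duplicates are collapsed before the check constraint runs, so '{one,one}' would be stored as '{one}' rather than rejected.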
Are you looking for enumerated data types?
PostgreSQL 9.1 Enumerated Types
From reading the page referenced in the question, it seems like a SET is a way of storing up to 64 named boolean values in one column. PostgreSQL does not provide a way to do this. You could use independent boolean columns, or some size of integer and twiddle the bits directly. Adding two new tables (one for the valid names, and the other to join names to detail rows, as sketched below) could make sense, especially if there is the possibility of needing to associate any other data with individual values.
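For illustration, a hedged sketch of that two-table layout; all names are hypothetical:
CREATE TABLE set_member (
    id   int PRIMARY KEY,
    name text NOT NULL UNIQUE            -- the valid names
);
CREATE TABLE detail_set_member (
    detail_id int NOT NULL,              -- references your detail table
    member_id int NOT NULL REFERENCES set_member(id),
    PRIMARY KEY (detail_id, member_id)   -- each member at most once per detail row
);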
Some time ago I wrote a similar extension:
https://github.com/okbob/Enumset
but it is not complete.
Functionality that is more complete and closer to MySQL is available in pltoolkit:
http://okbob.blogspot.cz/2010/12/bitmapset-for-plpgsql.html
http://pgfoundry.org/frs/download.php/3203/pltoolbox-1.0.2.tar.gz
http://postgres.cz/wiki/PL_toolbox_%28en%29
The function find_in_set can be emulated via arrays:
http://okbob.blogspot.cz/2009/08/mysql-functions-for-postgresql.html
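For example, a minimal sketch of such an emulation (not the blog's exact implementation; unlike MySQL it returns NULL rather than 0 when the element is missing):
CREATE FUNCTION find_in_set(needle text, haystack text)
RETURNS int LANGUAGE sql IMMUTABLE AS
$$
    SELECT i
      FROM generate_subscripts(string_to_array($2, ','), 1) i
     WHERE (string_to_array($2, ','))[i] = $1
     LIMIT 1;
$$;

SELECT find_in_set('b', 'a,b,c');   -- 2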
I've tried to dig through the documentation and can't seem to find what I'm looking for. Are there equivalent functions for the various JSON/B operators (->, ->>, #>, ?, etc.) in PostgreSQL?
Edit: To clarify, I would like to know if it is possible to have the following grouped queries return the same result:
SELECT '{"foo": "bar"}'::json->>'foo'; -- 'bar'
SELECT json_get_value('{"foo": "bar"}'::json, 'foo'); -- 'bar'
SELECT '{"foo": "bar"}'::jsonb ? 'foo'; -- t
SELECT jsonb_key_exists('{"foo": "bar"}'::jsonb, 'foo'); -- t
You can use the system catalogs to discover the function equivalent to each operator.
select * from pg_catalog.pg_operator where oprname = '?';
This shows that the function is named "jsonb_exists". Some operators are overloaded and will give more than one function; you have to look at the argument types to distinguish them.
Every operator has a function 'behind' it. That function may or may not be documented in its own right.
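For example, this variant of the catalog query also shows the backing function and the argument types:
SELECT oprname,
       oprcode::regproc AS function_name,
       oprleft::regtype AS left_type,
       oprright::regtype AS right_type
  FROM pg_catalog.pg_operator
 WHERE oprname = '?';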
AFAIK there are two functions which are equivalent to the #> and #>> operators.
These are:
#> json_extract_path
#>> json_extract_path_text
->> json_extract_path_text -- credit: a_horse_with_no_name
Discover more in the docs
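For example, these two return the same value:
SELECT '{"foo": {"bar": "baz"}}'::json #>> '{foo,bar}';                        -- baz
SELECT json_extract_path_text('{"foo": {"bar": "baz"}}'::json, 'foo', 'bar');  -- baz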
Other than that, you could expand the JSON into a set of rows and take the values you need with a regular SQL query, using json_each or json_each_text.
Similarly, to check whether a key exists in a JSON value, use json_object_keys and query the rows it returns, as in the example below.
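A minimal example of both:
-- Expand a JSON object into rows:
SELECT key, value
  FROM json_each_text('{"foo": "bar", "baz": "qux"}'::json);
-- Key-existence check via json_object_keys:
SELECT EXISTS (
    SELECT 1 FROM json_object_keys('{"foo": "bar"}'::json) k WHERE k = 'foo'
);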
If you need to wrap things up in a different language or an ORM, then what you could do is move the data-retrieval logic into a PL/pgSQL function, execute it, and obtain the prepared data from it.
Obviously, you could also build your own functions implementing the behaviour of the aforementioned operators, but the question is: is it really worth it? It will definitely be slower.
The SQLite JSON1 extension has some really neat capabilities. However, I have not been able to figure out how I can update or insert individual JSON attribute values.
Here is an example
CREATE TABLE keywords
(
id INTEGER PRIMARY KEY,
lang INTEGER NOT NULL,
kwd TEXT NOT NULL,
locs TEXT NOT NULL DEFAULT '{}'
);
CREATE INDEX kwd ON keywords(lang,kwd);
I am using this table to store keyword searches, recording in the object locs the locations from which the search was initiated. A sample entry in this database table would be like the one shown below:
id:1,lang:1,kwd:'stackoverflow',locs:'{"1":1,"2":1,"5":1}'
The location object attributes here are indices to the actual locations stored elsewhere.
Now imagine the following scenarios
A search for stackoverflow is initiated from location index "2". In this case I simply want to increment the value at that index so that after the operation the corresponding row reads
id:1,lang:1,kwd:'stackoverflow',locs:'{"1":1,"2":2,"5":1}'
A search for stackoverflow is initiated from a previously unknown location index "7" in which case the corresponding row after the update would have to read
id:1,lang:1,kwd:'stackoverflow',locs:'{"1":1,"2":1,"5":1,"7":1}'
It is not clear to me that this can in fact be done. I tried something along the lines of
UPDATE keywords json_set(locs,'$.2','2') WHERE kwd = 'stackoverflow';
which gave the error message "error near json_set". I'd be most obliged to anyone who might be able to tell me how/whether this should/can be done.
It is not necessary to create such complicated SQL with subqueries to do this.
The SQL below would solve your needs.
UPDATE keywords
SET locs = json_set(locs,'$.7', IFNULL(json_extract(locs, '$.7'), 0) + 1)
WHERE kwd = 'stackoverflow';
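Against the sample row from the question, both scenarios then work with the same statement shape:
-- Scenario 1: increment the existing location "2".
UPDATE keywords
SET locs = json_set(locs, '$.2', IFNULL(json_extract(locs, '$.2'), 0) + 1)
WHERE kwd = 'stackoverflow';
-- locs becomes '{"1":1,"2":2,"5":1}'

-- Scenario 2: previously unknown location "7" starts at 1.
UPDATE keywords
SET locs = json_set(locs, '$.7', IFNULL(json_extract(locs, '$.7'), 0) + 1)
WHERE kwd = 'stackoverflow';
-- locs becomes '{"1":1,"2":2,"5":1,"7":1}'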
I know this is old, but it is one of the first links when searching, and it deserves a better solution.
I could have just deleted this question, but given that the SQLite JSON1 extension appears to be relatively poorly understood, I felt it would be more useful to provide an answer here for the benefit of others. What I have set out to do is possible, but the SQL syntax is rather more convoluted.
UPDATE keywords SET locs =
   (SELECT json_set(json(keywords.locs), '$.N',   -- N stands for the location index
           ifnull(
               (SELECT json_extract(keywords.locs, '$.N') FROM keywords WHERE id = '1'),
               0)
           + 1)
    FROM keywords WHERE id = '1')
WHERE id = '1';
will accomplish both of the updates I have described in my original question above. Given how complicated this looks, a few explanations are in order:
- The UPDATE keywords part does the actual updating, but it needs to know what to update.
- The SELECT json_set part is where we establish the value to be updated.
- If the relevant value does not exist in the first place, we do not want to do a + 1 on a null value, so we do an IFNULL test.
- The WHERE id = '1' bits ensure that we target the right row.
Having now worked with JSON1 in SQLite for a while, I have a tip to share with others going down the same road. It is easy to waste your time writing extremely convoluted and hard-to-maintain SQL in an effort to perform in-place JSON manipulation. Consider using SQLite temporary tables (CREATE TEMP TABLE ...) to store intermediate results, and write a sequence of SQL statements instead. This makes the code a whole lot easier to understand and to maintain.
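A hedged sketch of that staging approach, reusing the keywords table and the '$.7' path from above; the temp-table name is hypothetical:
-- Stage the new counts in a temp table.
CREATE TEMP TABLE loc_counts AS
SELECT id, IFNULL(json_extract(locs, '$.7'), 0) + 1 AS new_count
  FROM keywords
 WHERE kwd = 'stackoverflow';

-- Apply them in a second, much simpler statement.
UPDATE keywords
   SET locs = json_set(locs, '$.7',
              (SELECT new_count FROM loc_counts WHERE loc_counts.id = keywords.id))
 WHERE id IN (SELECT id FROM loc_counts);

DROP TABLE loc_counts;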
Hi guys, I have the following code in my trigger (an AFTER INSERT trigger):
DECLARE Interv int;
At some point I use the following code:
SELECT myfield FROM TABLE WHERE interv=new.interv
But apparently MySQL confuses the values of the two variables (NEW.INTERV and INTERV) and doesn't return the correct value from the query.
But if I use DECLARE Interv_Value int instead of DECLARE Interv int, the SELECT in question works fine.
Any ideas? I'm using MySQL 5.1.68.
What is confusing about this?
When you have the query:
SELECT myfield
FROM TABLE
WHERE interv = new.interv;
Then MySQL has to figure out where all unqualified names come from. It has some rules. In this case:
- First, it looks at the columns of the tables in the `FROM` clause.
- Then it looks in the environment.
(These are actually scoping rules and also involve subqueries.) So, your query is interpreted as:
SELECT myfield
FROM TABLE
WHERE table.interv = new.interv;
As a general rule, write your queries using table aliases to specify where columns come from. So the above query could be written as:
SELECT myfield
FROM TABLE t
WHERE t.interv = new.interv;
Although you don't really want this version.
If you want to use variables, use user-defined variables with an @, or give the variables names that are unlikely to conflict with column names (I use v_ prefixes for this).
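A hedged sketch of that fix inside a trigger; the table names are hypothetical, while myfield and interv come from the question:
DELIMITER //
CREATE TRIGGER demo_after_insert AFTER INSERT ON base_table
FOR EACH ROW
BEGIN
    -- The v_ prefix guarantees no clash with the interv column.
    DECLARE v_interv INT;
    SELECT t.myfield INTO v_interv
      FROM other_table t
     WHERE t.interv = NEW.interv
     LIMIT 1;
END//
DELIMITER ;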
In PostgreSQL 9.3, there are multiple ways to build an expression that points to a JSON field's nested property:
data->'foo'->>'bar'
data#>>'{foo,bar}'
json_extract_path_text(data, 'foo', 'bar')
PostgreSQL supports indexes on such expressions, but it only uses these indexes if the query's expression is an exact match for the index's expression.
CREATE TABLE json_test_index1(data json);
CREATE TABLE json_test_index2(data json);
CREATE TABLE json_test_index3(data json);
CREATE INDEX ON json_test_index1((data->'foo'->>'bar'));
CREATE INDEX ON json_test_index2((data#>>'{foo,bar}'));
CREATE INDEX ON json_test_index3((json_extract_path_text(data, 'foo', 'bar')));
-- these queries use an index, while all other combinations do not:
EXPLAIN SELECT * FROM json_test_index1 WHERE data->'foo'->>'bar' = 'baz';
EXPLAIN SELECT * FROM json_test_index2 WHERE data#>>'{foo,bar}' = 'baz';
EXPLAIN SELECT * FROM json_test_index3 WHERE json_extract_path_text(data, 'foo', 'bar') = 'baz';
My questions are:
Is this behaviour intended? I thought the query optimizer should (at least) use the index with the #>> operator when the query contains the appropriate call to json_extract_path_text(), and vice versa.
If I want to use more of these expressions in my application (not just one, e.g. sticking to the -> and ->> operators), which indexes should I build? (I hope not all of them.)
Is there any chance that some future Postgres version's optimizer will understand the equivalence of these expressions?
EDIT:
When I create an additional operator for that:
CREATE OPERATOR ===> (
PROCEDURE = json_extract_path_text,
LEFTARG = json,
RIGHTARG = text[]
);
This query (against the table from the previous example) still does not use its index:
EXPLAIN SELECT * FROM json_test_index3 WHERE data ===> '{foo,bar}' = 'baz';
Bonus question:
Since Postgres expands operators into function calls behind the scenes, why does this still not use the index?
You can use a GIN index for the jsonb data type.
You can choose an operator class suited to your planned queries.
Examples:
CREATE INDEX idx_tbl_example ON tbl_example USING GIN(your_jsonb_field);
If you are planning to use only the @> containment operator, you can create the index with the jsonb_path_ops operator class:
CREATE INDEX idx_tbl_example ON tbl_example USING GIN(your_jsonb_field jsonb_path_ops);
Other choices are documented on the PostgreSQL site.
I think you can also index the result of the extraction function directly; that has to be an expression index rather than GIN (note jsonb_extract_path_text is the jsonb variant of the function):
CREATE INDEX idx_tbl_example ON tbl_example ((jsonb_extract_path_text(your_jsonb_field, 'foo', 'bar')));
I have this UNION statement. When I try to take parameters from a form and pass them to the UNION SELECT statement, it says "too many parameters". This is using MS Access.
SELECT Statement FROM table 1 where Date = Between [Forms]![DateIN]![StartDate]
UNION
SELECT Statement FROM table 2 where Date = Between [Forms]![DateIN]![StartDate]
This is the first time I am using a Windows database application to build a database app. I am a Linux type of person and always use MySQL for my projects, but for this one I have to use MS Access.
Is there another way to pass parameters to a UNION statement? This method of defining values in a form works with single SELECT statements, but I don't know why this problem exists.
Between "Determines whether the value of an expression falls within a specified range of values" like this ...
expr [Not] Between value1 And value2
But your query only gives it one value: Between [Forms]![DateIN]![StartDate]
So you need to add And plus another date value:
Between [Forms]![DateIN]![StartDate] And some_other_date
Also Date is a reserved word. If you're using it as a field name, enclose it in brackets to avoid confusing the db engine: [Date]
If practical, rename the field to avoid similar problems in the future.
And as Gord pointed out, you must also bracket table names which include a space. The same applies to field names.
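Putting those pieces together, the query from the question would look something like this; the EndDate control is a hypothetical second form value:
SELECT Statement FROM [table 1]
WHERE [Date] Between [Forms]![DateIN]![StartDate] And [Forms]![DateIN]![EndDate]
UNION
SELECT Statement FROM [table 2]
WHERE [Date] Between [Forms]![DateIN]![StartDate] And [Forms]![DateIN]![EndDate];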
I am still getting problems when using this method of pulling the values or dates from the form into the UNION statement. Here is the actual query that I am trying to use (shown below, after some thoughts on alternatives).
I don't want to reinvent the wheel, but I was thinking: if Date() can be used, as in Between Date() And Date()-6, to represent a 7-day range, then I might have to write a module that takes the values from the form and returns them. That way I could define something like Sdate() and Edate() and use Between Sdate() And Edate().
I have not tried this yet, and it may be my last option; I don't even know if it will work, but it is worth a try. Before I do that, though, I want to try all the resources with which Access can make my life easy, such as the OO stuff it has for helping DB programmers.
SELECT
"Expenditure" as [TransactionType], *
FROM
Expenditures
WHERE
(((Expenditures.DateofExpe) Between [Forms]![Form1]![Text0] and [Forms]![Form1]![Text11]))
UNION
SELECT
"Income" as [TransactionType], *
FROM
Income
WHERE
(((Income.DateofIncom) Between [Forms]![Form1]![Text0] and [Forms]![Form1]![Text11]));
Access VBA has great power, but I don't want to use it just yet, as changes would be hard to make for a user who does not know how to program. I am trying to keep this DB app as simple as possible so that a non-technical user can fully operate it.
Any comments are much appreciated.