I was searching on the Internet but found nothing.
I want to know if there is an equivalent in MongoDB for this MySQL command:
insert into table1 select variables from table2;
What I want to do is select a document and insert it into another document as its subdocument.
Thanks for any help!
There is no way to do this in a single database command.
Whether you do this in the shell or in your scripting language of choice using a MongoDB driver, you will have to query for the document from collection1 and then do an appropriate update on collection2 using the result of the query.
For example:
var doc = db.coll1.findOne( {_id:12345} );
db.coll2.update( {_id:98765}, {$push: {subs:doc} }, {upsert:1} )
The update says: for the document matching the first argument, apply the update in the second argument (push the value in doc onto the subs array); the last argument says to create the document if it does not already exist (i.e. if _id: 98765 isn't already in collection2, it will be created).
Related
I'll start this off by saying I know that there are more practical ways to solve this. It's more of an intellectual curiosity than anything else.
I've inherited a MySQL database where some columns are stored as varchar(5) but actually contain the literals "True" or "False". Changing the structure of the data is not an option right now due to other issues. I'm mapping the columns to an ORM (SQLAlchemy), and I want the column to be mapped to a Boolean data type in the supporting codebase using a type adapter. (I've written this adapter already; it's not the problem.)
To help make the mapping process faster, I'm writing a small query to look at the INFORMATION_SCHEMA table and build a line of Python code defining the column using the ORM's syntax. I cannot assume that the data type varchar(5) is a Boolean column - I need to inspect the contents of that column to see if there are values contained in it besides True and False.
Can I write a query that will both get the column type from INFORMATION_SCHEMA and check the actual values stored in that column?
Here is the query I have so far:
SELECT CONCAT(
"Column(""",
col.column_name,
""", ",
(CASE
WHEN col.DATA_TYPE = "int" THEN "Integer"
-- Code in question
WHEN
col.DATA_TYPE = "varchar"
AND col.CHARACTER_MAXIMUM_LENGTH = 5
AND NOT EXISTS(
-- Doesn't seem to work
SELECT DISTINCT col.COLUMN_NAME
FROM col.TABLE_NAME
WHERE col.COLUMN_NAME NOT IN ("True", "False")
)
THEN "BoolStoredAsVarchar"
WHEN col.DATA_TYPE = "varchar" THEN CONCAT("String(", col.CHARACTER_MAXIMUM_LENGTH, ")")
-- Default if it's not a recognized column type
ELSE col.DATA_TYPE
END),
"),"
) AS alchemy
FROM information_schema.columns AS col
WHERE
col.TABLE_SCHEMA = "my_schema"
AND col.TABLE_NAME = "my_table"
ORDER BY col.ORDINAL_POSITION;
Running this code gives me a permissions error: Error Code: 1142. SELECT command denied to user 'user'@'host' for table 'table_name'. Presumably it's trying to use col.TABLE_NAME as a literal table name instead of resolving it to the table it refers to.
I've also tried creating a simple stored procedure and making table_name into a variable. However, replacing the FROM clause inside the EXISTS with a variable name gives me a syntax error instead.
Again, it's easy enough to run the query myself to see what's in that column. I'd just like to know if this is possible, and if so, how to do it.
You can't do what you're trying to do in a single query.
The reason is that table names (or any other identifier) must be fixed in the query at the time it is parsed, which is before it has read any values from tables. Thus you can't read the name of a table as a string from information_schema and also read from the table with that name in the same query.
You must read the table name from information_schema and then use that result to format a second query.
This isn't a problem specific to MySQL. It's true of any SQL implementation.
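The two-step pattern the answer describes can be sketched like this, using Python's sqlite3 with SQLite's sqlite_master catalog as a stand-in for information_schema (the table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (flag VARCHAR(5))")
conn.execute("INSERT INTO my_table VALUES ('True'), ('False')")

# Step 1: read the table name from the catalog as ordinary data.
(table_name,) = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchone()

# Step 2: format a second query that uses the name as an identifier.
# (Identifiers cannot be bound as parameters, for the same parse-time reason.)
query = (
    f'SELECT COUNT(*) FROM "{table_name}" '
    "WHERE flag NOT IN ('True', 'False')"
)
(non_bool_count,) = conn.execute(query).fetchone()
print(non_bool_count)  # 0 -> the column only holds 'True'/'False'
```

The same shape applies with a MySQL driver: one query against information_schema, then a second, separately formatted query against the discovered table.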
Is it possible to achieve something like this?
Suppose name and plural_name are fields of the Animal table.
Suppose pluralise_animal is a helper function which takes a string and returns its plural literal.
I cannot loop over the animal records for technical reasons.
This is just an example
Animal.update_all("plural_name = ?", pluralise_animal("I WANT THE ANIMAL NAME HERE, the `name` column's value"))
I want something similar to how you can use functions in MySQL while modifying column values. Is this out-of-scope or possible?
UPDATE animals SET plural_name = CONCAT(name, 's') -- just an example to explain what I mean by referencing a column. I'm aware of the problems in this example.
Thanks in advance
I cannot loop over the animal records for technical reasons.
Sorry, this cannot be done with this restriction.
If your pluralizing helper function is implemented in the client, then you have to fetch data values back to the client, pluralize them, and then post them back to the database.
If you want the UPDATE to run against a set of rows without fetching data values back to the client, then you must implement the pluralization logic in an SQL expression, or a stored function or something.
UPDATE statements run in the database engine. They cannot call functions in the client.
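The fetch-transform-update round trip described above can be sketched like this (Python with sqlite3; the naive pluralizer is just a placeholder for the real helper):

```python
import sqlite3

def pluralise_animal(name):
    # Placeholder for the real helper; actual English pluralization is messier.
    return name + "s"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE animals (id INTEGER PRIMARY KEY, name TEXT, plural_name TEXT)")
conn.executemany("INSERT INTO animals (name) VALUES (?)", [("cat",), ("dog",)])

# 1. Fetch the values back to the client...
rows = conn.execute("SELECT id, name FROM animals").fetchall()
# 2. ...apply the client-side function...
updates = [(pluralise_animal(name), row_id) for row_id, name in rows]
# 3. ...and post the results back to the database.
conn.executemany("UPDATE animals SET plural_name = ? WHERE id = ?", updates)
```

The database engine only ever sees plain values; the function runs entirely on the client.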
Use a Ruby script to generate a SQL script that INSERTs the plural values into a temp table:
File.open(filename, 'w') do |file|
file.puts "CREATE TEMPORARY TABLE pluralised_animals(id INT, plural varchar(50));"
file.puts "INSERT INTO pluralised_animals(id, plural) VALUES"
Animal.find_each do |animal|
file.puts("( #{animal.id}, '#{pluralise_animal(animal.name)}'),")
end
end
Note: replace the trailing comma (,) on the last line with a semicolon (;).
Then run the generated SQL script in the database to populate the temp table.
Finally run a SQL update statement in the database that joins the temp table to the main table...
UPDATE animals a
INNER JOIN pluralised_animals pa
ON a.id = pa.id
SET a.plural_name = pa.plural;
I have a trigger in postgresql 12 that fires like so:
WHEN (OLD.some_jsonb_object_column IS DISTINCT FROM NEW.some_jsonb_object_column)
I would like to only run this trigger when values have changed, and not run them if only keys have changed. In this use case, it can be guaranteed that we are not adding and removing keys at the same time. I do not know what the object keys will be ahead of time, so I cannot get the values via ->>.
I have tried something akin to:
WHEN (jsonb_each(OLD.some_jsonb_object_column) IS DISTINCT FROM jsonb_each(NEW.some_jsonb_object_column))
which results in the error:
set-returning functions are not allowed in trigger WHEN conditions
Is there a way to get the values of a jsonb object without using a set-returning function?
To test if a new key has been added:
key_added := EXISTS (SELECT *
FROM jsonb_object_keys(NEW.some_jsonb_object_column) AS n
EXCEPT
SELECT *
FROM jsonb_object_keys(OLD.some_jsonb_object_column) AS o
);
Similarly, you can check if a key was removed.
That should solve your problem.
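The distinction the question is after (values changed vs. only keys changed) amounts to comparing the values as a multiset, ignoring which keys they hang off. A Python sketch of that check, independent of any trigger machinery, assuming scalar (hashable) values:

```python
from collections import Counter

def values_changed(old, new):
    # Compare the values as a multiset; renamed keys alone don't count.
    return Counter(old.values()) != Counter(new.values())

print(values_changed({"a": 1, "b": 2}, {"x": 1, "y": 2}))  # False: only keys changed
print(values_changed({"a": 1, "b": 2}, {"a": 1, "b": 3}))  # True: a value changed
```

In the trigger, the equivalent would be comparing the two sides' value sets rather than their key sets.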
I've created a MySQL sproc which returns 3 separate result sets. I'm implementing the npm mysql package downstream to exec the sproc and get a result structured in json with the 3 result sets. I need the ability to filter the json result sets that are returned based on some type of indicator in each result set. For example, if I wanted to get the result set from the json response which deals specifically with Suppliers then I could use some type of js filter similar to this:
var supplierResultSet = mySqlJsonResults.filter(x => x.ResultType === 'SupplierResults');
I think SQL Server provides the ability to include a hard-coded column value in a SQL result set like this:
select
'SupplierResults',
*
from
supplier
However, this approach appears to be invalid in MySQL: MySQL Workbench reports that the sproc syntax is invalid and won't let me save the changes. Is something like this possible in MySQL? If not, can you recommend an alternative approach that would achieve my ultimate goal of including some type of fixed indicator in each result set, as a handle for downstream filtering of the JSON response?
If I followed you correctly, you just need to prefix * with the table name or alias:
select 'SupplierResults' hardcoded, s.* from supplier s
As far as I know, this is the SQL standard: select * is valid only when no other expression is added to the select clause. SQL Server is lax about this, but most other databases follow the standard.
It is also a good idea to assign a name to the column that contains the hardcoded value (I named it hardcoded in the above query).
In MySQL you can simply put the * first:
SELECT *, 'SupplierResults'
FROM supplier
To be more specific, in your case you would need to do this:
select
'SupplierResults',
supplier.* -- <-- this
from
supplier
Try this
create table a (f1 int);
insert into a values (1);
select 'xxx', f1, a.* from a;
Basically, if there are other fields in the select list, prefix * with the table name or an alias.
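The end-to-end idea (a hard-coded indicator column, then client-side filtering as in the JavaScript snippet in the question) can be sketched with Python's sqlite3 standing in for the MySQL driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE supplier (id INTEGER, name TEXT)")
conn.execute("INSERT INTO supplier VALUES (1, 'Acme')")

# Prefixing * with the table alias makes the extra constant column legal.
rows = [dict(r) for r in conn.execute(
    "SELECT 'SupplierResults' AS ResultType, s.* FROM supplier s"
)]

# Downstream, filter on the indicator column.
supplier_results = [r for r in rows if r["ResultType"] == "SupplierResults"]
```

With the npm mysql package the shape is the same: each row of each result set carries the ResultType value to filter on.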
This question already has an answer here:
convert MySQL SET data type to Postgres
(1 answer)
Closed 9 years ago.
I'm trying to use the MySQL SET type in PostgreSQL, but I found only arrays, which have quite similar functionality but don't meet my requirements.
Does PostgreSQL have a similar data type?
You can use following workarounds:
1. BIT strings
You can define your set of maximum N elements as simply BIT(N).
It is a little awkward to populate and retrieve - you will have to use bit masks as set members. But bit strings really shine for set operations: intersection is simply &, union is |.
This type is stored very efficiently - bit for bit, with a small overhead for the length.
Also, it is nice that the length is not really limited (but you do have to decide on it upfront).
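The bit-mask arithmetic is the same in any language; a quick Python illustration, treating each (made-up) set member as a bit position:

```python
# Assign each set member its own bit position.
RED, GREEN, BLUE = 1 << 0, 1 << 1, 1 << 2

a = RED | GREEN   # the set {red, green}
b = GREEN | BLUE  # the set {green, blue}

print(bin(a & b))      # 0b10  -> intersection: {green}
print(bin(a | b))      # 0b111 -> union: {red, green, blue}
print(bool(a & BLUE))  # False -> membership test: blue not in a
```

In PostgreSQL the same operators work directly on BIT(N) values.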
2. HSTORE
The HSTORE type is an extension, but a very easy one to install. Simply executing
CREATE EXTENSION hstore
for most installations (9.1+) will make it available. Rumor has it that PostgreSQL 9.3 will have HSTORE as standard type.
It is not really a set type, but more like a Perl hash or a Python dictionary: it keeps an arbitrary set of key => value pairs.
With that said, it is not very efficient (certainly not BIT-string efficient), but it does provide the operations essential for sets: || for union. Intersection is a little more awkward; use
slice(a,akeys(b)) || slice(b,akeys(a))
You can read more about HSTORE here.
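For intuition, the slice/akeys trick above computes the intersection on keys, with the right-hand side's value winning on conflicts. The same operation on Python dicts:

```python
def hstore_intersection(a, b):
    # slice(a, akeys(b)): pairs of a whose key also appears in b...
    left = {k: v for k, v in a.items() if k in b}
    # ...|| slice(b, akeys(a)): merged with b's pairs, so b's value wins.
    right = {k: v for k, v in b.items() if k in a}
    return {**left, **right}

print(hstore_intersection({"x": 1, "y": 2}, {"y": 9, "z": 3}))  # {'y': 9}
```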
What about an array with a check constraint:
create table foobar
(
myset text[] not null,
constraint check_set
check ( array_length(myset,1) <= 2
and (myset = array[''] or 'one'= ANY(myset) or 'two' = ANY(myset))
)
);
This would match the definition of SET('one', 'two') as explained in the MySQL manual.
The only thing this would not do is "normalize" the array. So
insert into foobar values (array['one', 'two']);
and
insert into foobar values (array['two', 'one']);
would be displayed differently than in MySQL (where both would wind up as 'one','two')
The check constraint will however get messy with more than 3 or 4 elements.
Building on a_horse_with_no_name's answer above, I would suggest something just a little more complex:
CREATE FUNCTION set_check(in_value anyarray, in_check anyarray)
RETURNS BOOL LANGUAGE SQL IMMUTABLE AS
$$
WITH basic_check AS (
select bool_and(v = any($2)) as condition, count(*) as ct
FROM unnest($1) v
GROUP BY v
), length_check AS (
SELECT count(*) = 0 as test FROM unnest($1)
)
SELECT bool_and(condition AND ct = 1)
FROM basic_check
UNION
SELECT test from length_check where test;
$$;
Then you should be able to do something like:
CREATE TABLE set_test (
my_set text[] CHECK (set_check(my_set, array['one'::text,'two']))
);
This works:
postgres=# insert into set_test values ('{}');
INSERT 0 1
postgres=# insert into set_test values ('{one}');
INSERT 0 1
postgres=# insert into set_test values ('{one,two}');
INSERT 0 1
postgres=# insert into set_test values ('{one,three}');
ERROR: new row for relation "set_test" violates check constraint "set_test_my_set_check"
postgres=# insert into set_test values ('{one,one}');
ERROR: new row for relation "set_test" violates check constraint "set_test_my_set_check"
Note this assumes that every value in your set must be unique (we are talking about sets here). The function should perform well and should meet your needs, with the added advantage of handling sets of any size.
Storage-wise it is completely different from MySQL's implementation. It will take up more space on disk but should handle sets with as many members as you like, provided that you aren't running up against storage limits.... So this should have a superset of functionality in comparison to MySQL's implementation. One significant difference though is that this does not collapse the array into distinct values. It just prohibits them. If you need that too, look at a trigger.
This solution also leaves the order of the input data intact, so '{one,two}' is distinct from '{two,one}'. If you need to change that behavior, you may want to look into exclusion constraints on PostgreSQL 9.2.
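For intuition, the constraint enforced by set_check boils down to "every element is allowed, and no element appears more than once". A compact Python rendering of the same rule:

```python
def set_check(value, allowed):
    # Every element must come from the allowed set, with no duplicates.
    return set(value) <= set(allowed) and len(value) == len(set(value))

print(set_check([], ["one", "two"]))                # True  (the empty set is fine)
print(set_check(["one", "two"], ["one", "two"]))    # True
print(set_check(["one", "three"], ["one", "two"]))  # False (disallowed member)
print(set_check(["one", "one"], ["one", "two"]))    # False (duplicate)
```

These are exactly the cases exercised in the psql transcript above.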
Are you looking for enumerated data types?
PostgreSQL 9.1 Enumerated Types
From reading the page referenced in the question, it seems like a SET is a way of storing up to 64 named boolean values in one column. PostgreSQL does not provide a way to do this. You could use independent boolean columns, or some size of integer and twiddle the bits directly. Adding two new tables (one for the valid names, and the other to join names to detail rows) could make sense, especially if there is the possibility of needing to associate any other data to individual values.
Some time ago I wrote a similar extension:
https://github.com/okbob/Enumset
but it is not complete.
Functionality that is more complete and closer to MySQL's is available in pltoolkit:
http://okbob.blogspot.cz/2010/12/bitmapset-for-plpgsql.html
http://pgfoundry.org/frs/download.php/3203/pltoolbox-1.0.2.tar.gz
http://postgres.cz/wiki/PL_toolbox_%28en%29
The find_in_set function can be emulated via arrays:
http://okbob.blogspot.cz/2009/08/mysql-functions-for-postgresql.html