How to assign a JSON variable with row_to_json in PostgreSQL

I'm trying to create an anonymous function in PostgreSQL to create mock data for an application. I would like to do a SELECT query first (to get data from a random charter), convert the row into JSON with row_to_json, and then assign the result to a variable of type JSON.
I need this charter information so I can add it into the bookings table.
This is not working; I think I don't know how to associate the result of the SELECT with the previously declared variable. I'm getting an error that charterData is null, and I would like to know how I can achieve this.
This is the anonymous function in SQL:
BEGIN;
DO $$
DECLARE
    charterData JSON;
    bookingId INTEGER;
BEGIN
    SELECT row_to_json(t) INTO charterData
    FROM (SELECT charter_id, name FROM charters) t
    WHERE charter_id = 1;
    INSERT INTO bookings (charter, yacht, email, date, guests, total, start_hour, end_hour, hotel, arrival_date)
    VALUES (charterData, '{"test":1}', 'a', '12/10/1995', 8, '78', '123', '123', '123', '123')
    RETURNING booking_id INTO bookingId;
END $$;
COMMIT;
Charters table:
Table "public.charters"
Column | Type | Collation | Nullable | Default
-------------+-------------------+-----------+----------+----------------------------------------------
charter_id | integer | | not null | nextval('charters_charter_id_seq'::regclass)
name | character varying | | not null |
description | character varying | | not null |
sail_hours | integer | | not null |
Indexes:
"charters_pk" PRIMARY KEY, btree (charter_id)
"name_charter" UNIQUE CONSTRAINT, btree (name)
Referenced by:
TABLE "bookings" CONSTRAINT "charters_bookings_fk" FOREIGN KEY (charter) REFERENCES charters(name) ON DELETE CASCADE
TABLE "pricing" CONSTRAINT "charters_pricing_fk" FOREIGN KEY (charter_id) REFERENCES charters(charter_id) ON DELETE CASCADE
Bookings table:
Table "public.bookings"
Column | Type | Collation | Nullable | Default
----------------+-------------------+-----------+----------+----------------------------------------------
booking_id | integer | | not null | nextval('bookings_booking_id_seq'::regclass)
charter | json | | not null |
yacht | json | | not null |
email | character varying | | not null |
date | date | | not null |
guests | integer | | not null |
total | numeric | | not null |
start_hour | character varying | | not null |
end_hour | character varying | | not null |
alcohol | character varying | | |
transportation | character varying | | |
others | character varying | | |
arrival_date | character varying | | |
hotel | character varying | | |
Indexes:
"bookings_pk" PRIMARY KEY, btree (booking_id)
"end_hour" UNIQUE CONSTRAINT, btree (end_hour)
"start_hour" UNIQUE CONSTRAINT, btree (start_hour)
Foreign-key constraints:
"charters_bookings_fk" FOREIGN KEY (charter) REFERENCES charters(name) ON DELETE CASCADE
"yachts_bookings_fk" FOREIGN KEY (yacht) REFERENCES yachts(name) ON DELETE CASCADE
Referenced by:
TABLE "bookings_extra" CONSTRAINT "bookings_extra_fk" FOREIGN KEY (booking_id) REFERENCES bookings(booking_id) ON DELETE CASCADE

Okay, I have found the answer... It was kind of silly, but maybe this answer will help someone.
BEGIN;
DO $$
DECLARE
    charter JSON;
    bookingId INTEGER;
BEGIN
    -- Assign the scalar subquery result directly to the variable.
    -- (A DO block takes no parameters, so the id is a literal here.)
    charter := (SELECT row_to_json(t)
                FROM (SELECT charter_id, name FROM charters WHERE charter_id = 1) t);
    INSERT INTO bookings
        (charter, yacht, email, date, guests, total, start_hour, end_hour, hotel, arrival_date)
    VALUES (charter, '{"test":1}', 'a', '12/10/1995', 8, '78', '123', '123', '123', '123')
    RETURNING booking_id INTO bookingId;
END $$;
COMMIT;
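For what it's worth, the SELECT ... INTO form from the question also works in PL/pgSQL; a minimal sketch, assuming a charters row with charter_id = 1 exists:
DO $$
DECLARE
    charterData JSON;
BEGIN
    -- SELECT ... INTO stores the single result row in the variable.
    SELECT row_to_json(t) INTO charterData
    FROM (SELECT charter_id, name FROM charters WHERE charter_id = 1) t;
    RAISE NOTICE 'charterData = %', charterData;
END $$;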

Related

MySQL: Adding a column to an existing table. I have the values ready; how do I enter them all at once?

I have table with data on old game characters. I'd like to add a gender column.
If I do
ALTER TABLE characters
ADD gender ENUM('m','f') AFTER char_name
then I get a column full of NULLs. How do I get the values in?
Using an INSERT statement adds them all as new rows instead of replacing the NULLs.
Using an UPDATE statement requires a new statement for every single entry.
Is there any way to just drop a "VALUES ('m'),('f'),('f'),('m'),('f'), etc." into the ALTER statement, or anything else that updates them all efficiently?
There is no way to fill in specific values during ALTER TABLE. The value will be NULL or else a default value you define for the column.
You may find INSERT...ON DUPLICATE KEY UPDATE is a convenient way to fill in the values.
Example:
CREATE TABLE characters (
id serial primary key,
char_name TEXT NOT NULL
);
INSERT INTO characters (char_name) VALUES
('Harry'), ('Ron'), ('Hermione');
SELECT * FROM characters;
+----+-----------+
| id | char_name |
+----+-----------+
| 1 | Harry |
| 2 | Ron |
| 3 | Hermione |
+----+-----------+
Now we add the gender column. It will add the new column with NULLs.
ALTER TABLE characters
ADD gender ENUM('m','f') AFTER char_name;
SELECT * FROM characters;
+----+-----------+--------+
| id | char_name | gender |
+----+-----------+--------+
| 1 | Harry | NULL |
| 2 | Ron | NULL |
| 3 | Hermione | NULL |
+----+-----------+--------+
Now we update the rows:
INSERT INTO characters (id, char_name, gender) VALUES
(1, '', 'm'), (2, '', 'm'), (3, '', 'f')
ON DUPLICATE KEY UPDATE gender = VALUES(gender);
It looks strange to use '' for char_name, but it is ignored anyway because we don't set it in the ON DUPLICATE KEY clause, so the original char_name is preserved. Specifying a value in the INSERT is necessary only because the column is defined NOT NULL and has no DEFAULT.
SELECT * FROM characters;
+----+-----------+--------+
| id | char_name | gender |
+----+-----------+--------+
| 1 | Harry | m |
| 2 | Ron | m |
| 3 | Hermione | f |
+----+-----------+--------+
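If the ids are known up front, a single UPDATE with a CASE expression achieves the same result without the dummy char_name values; a hedged sketch against the same three rows:
UPDATE characters
SET gender = CASE id
    WHEN 1 THEN 'm'
    WHEN 2 THEN 'm'
    WHEN 3 THEN 'f'
END
WHERE id IN (1, 2, 3);  -- guard: rows not listed in the CASE would otherwise be set to NULL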

MySQL auto_increment does not increment

I have this table:
mysql> desc Customers;
+------------+------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+------------+------------------+------+-----+---------+-------+
| CustomerID | int(10) unsigned | NO | PRI | NULL | |
| Name | char(50) | NO | | NULL | |
| Address | char(100) | NO | | NULL | |
| City | char(30) | NO | | NULL | |
+------------+------------------+------+-----+---------+-------+
Now, If I want to insert sample data:
mysql> insert into Customers values(null, 'Julia Smith', '25 Oak Street', 'Airport West');
ERROR 1048 (23000): Column 'CustomerID' cannot be null
I know I cannot make the ID null, but it should be MySQL's job to assign the numbers and increment them. So I try simply not specifying the id:
mysql> insert into Customers (Name, Address, City) values('Julia Smith', '25 Oak Street', 'Airport West');
Field 'CustomerID' doesn't have a default value
Now I am trapped. I cannot make the id NULL (which tells MySQL "increment my ID"), and I cannot omit it, because there is no default value. So how can I make MySQL handle the ids for me on new insertions?
PRIMARY KEY means that every CustomerID has to be unique, and you defined it as NOT NULL, so an INSERT of NULL is not permitted. What the column is missing is the AUTO_INCREMENT attribute.
Instead of
| CustomerID | int(10) unsigned | NO | PRI | NULL | |
make it
CustomerID INT(10) UNSIGNED NOT NULL AUTO_INCREMENT
and you can enter your data without a problem. Since the column is already the primary key, modify it in place, without repeating PRIMARY KEY (which would fail with "Multiple primary key defined"):
ALTER TABLE Customers MODIFY CustomerID INT(10) UNSIGNED NOT NULL AUTO_INCREMENT;
@Milan,
Delete the CustomerID field from the table, then add the field again with the following details:
Field: CustomerID,
Type: BIGINT(10),
Default: None,
Auto_increment: tick the checkbox.
Click the SAVE button to save the new field in the table. Hopefully it will now work when inserting new records. Thanks.
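A minimal sketch of the fix and both insert styles from the question (assuming the Customers table shown above):
ALTER TABLE Customers MODIFY CustomerID INT(10) UNSIGNED NOT NULL AUTO_INCREMENT;

-- Omitting the column now works; MySQL assigns the next id:
INSERT INTO Customers (Name, Address, City)
VALUES ('Julia Smith', '25 Oak Street', 'Airport West');

-- Passing NULL to an AUTO_INCREMENT column also works:
INSERT INTO Customers VALUES (NULL, 'Julia Smith', '25 Oak Street', 'Airport West');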

Implicitly insert unique records into foreign key table on insert

Is it possible to add type_name entries to (or remove them from) the type table when a record is inserted into the language table?
I am receiving a #1452 (foreign key constraint) error:
Cannot add or update a child row:
a foreign key constraint fails (`test`.`language`,
CONSTRAINT `language_ibfk_1`
FOREIGN KEY (`type_name`)
REFERENCES `type` (`type_name`)
ON DELETE CASCADE
ON UPDATE CASCADE)
Table Schema
CREATE TABLE IF NOT EXISTS `type` (
`type_name` VARCHAR(128) NOT NULL UNIQUE,
PRIMARY KEY(`type_name`)
);
CREATE TABLE IF NOT EXISTS `language` (
`language_id` INT NOT NULL AUTO_INCREMENT,
`language_name` VARCHAR(256) NOT NULL,
`type_name` VARCHAR(128) NOT NULL,
PRIMARY KEY(`language_id`),
FOREIGN KEY (`type_name`)
REFERENCES `type`(`type_name`)
ON DELETE CASCADE
ON UPDATE CASCADE
);
Insert Statements
INSERT INTO `language`(`language_name`, `type_name`) VALUES
('C', 'programming'),
('Java', 'programming'),
('Python', 'scripting'),
('PHP', 'scripting'),
('HTML', 'markup'),
('XML', 'markup');
These are the values I expected to be implicitly inserted due to CASCADE:
INSERT INTO `type`(`type_name`) VALUES
('programming'),
('scripting'),
('markup');
If you really have to use such a solution, you could use a BEFORE INSERT trigger on the language table:
CREATE TRIGGER trigger_name BEFORE INSERT ON language FOR EACH ROW
BEGIN
INSERT IGNORE INTO type
(type_name)
VALUES
(NEW.type_name);
END
So, using piotrgajow's response, I have come up with a way to normalize my tables by adding a type_id to the type table.
I converted his trigger to a MariaDB/MySQL trigger:
DELIMITER $$
CREATE TRIGGER `insert_language_type`
BEFORE INSERT ON `language`
FOR EACH ROW
BEGIN
INSERT IGNORE INTO `type`
(`type_name`)
VALUES
(NEW.`type_name`);
END$$
DELIMITER ;
I took this a step further and re-indexed the types by creating a type_id column and associating it with the language table, dropping the type_name column from the language table in the process.
/** Add the new type_id index to both tables. */
ALTER TABLE `type` ADD `type_id` INT FIRST;
ALTER TABLE `language` ADD `type_id` INT;
/** Index the type_id values. */
SET @i = 0;
UPDATE `type` SET `type_id` = (@i := @i + 1);
/** Apply the new type_id values to the languages. */
UPDATE `language` L, `type` T
SET L.`type_id` = T.`type_id`
WHERE L.`type_name` = T.`type_name`;
/** Remove all constraints and drop the type_name column. */
ALTER TABLE `type` DROP PRIMARY KEY;
ALTER TABLE `language` DROP FOREIGN KEY `language_ibfk_1`;
ALTER TABLE `language` DROP COLUMN `type_name`;
/** Set primary key for type and add constraint to the language. */
ALTER TABLE `type` MODIFY COLUMN `type_id` INT AUTO_INCREMENT PRIMARY KEY;
ALTER TABLE `language` ADD FOREIGN KEY (`type_id`) REFERENCES `type`(`type_id`);
/** Remove the trigger, because it is meaningless. */
DROP TRIGGER `insert_language_type`;
Before
+-------------------------------------------+ +-------------+
| language | | type |
+-------------+---------------+-------------+ +-------------+
| language_id | language_name | type_name | | type_name |
+-------------+---------------+-------------+ +-------------+
| 1 | C | programming | | markup |
| 2 | Java | programming | | programming |
| 3 | Python | scripting | | scripting |
| 4 | PHP | scripting | +-------------+
| 5 | HTML | markup |
| 6 | XML | markup |
+-------------+---------------+-------------+
After
+---------------------------------------+ +-----------------------+
| language | | type |
+-------------+---------------+---------+ +---------+-------------+
| language_id | language_name | type_id | | type_id | type_name |
+-------------+---------------+---------+ +---------+-------------+
| 1 | C | 2 | | 1 | markup |
| 2 | Java | 2 | | 2 | programming |
| 3 | Python | 3 | | 3 | scripting |
| 4 | PHP | 3 | +---------+-------------+
| 5 | HTML | 1 |
| 6 | XML | 1 |
+-------------+---------------+---------+
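A quick join to verify that the migration produced the "After" state (table and column names as above):
SELECT L.language_id, L.language_name, T.type_name
FROM `language` L
JOIN `type` T ON T.type_id = L.type_id
ORDER BY L.language_id;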

SQL: Update a table from another

I have a table as follows:
dev=> \d statemachine_history
Table "public.statemachine_history"
Column | Type | Modifiers
---------------+--------------------------+-------------------------------------------------------------------
id | bigint | not null default nextval('statemachine_history_id_seq'::regclass)
schema_name | character varying | not null
event | character varying | not null
identifier | integer | not null
initial_state | character varying | not null
final_state | character varying | not null
triggered_at | timestamp with time zone | not null default statement_timestamp()
triggered_by | text |
command | json |
flag | json |
created_at | timestamp with time zone |
created_by | json |
updated_at | timestamp with time zone |
updated_by | json |
Indexes:
"statemachine_log_pkey" PRIMARY KEY, btree (id)
"unique_statemachine_log_id" UNIQUE, btree (id)
"statemachine_history_identifier_idx" btree (identifier)
"statemachine_history_schema_name_idx" btree (schema_name)
AND
dev=> \d booking
Table "public.booking"
Column | Type | Modifiers
----------------+--------------------------+------------------------------------------------------
id | bigint | not null default nextval('booking_id_seq'::regclass)
pin | character varying |
occurred_at | timestamp with time zone |
membership_id | bigint |
appointment_id | bigint |
created_at | timestamp with time zone |
created_by | json |
updated_at | timestamp with time zone |
updated_by | json |
customer_id | bigint |
state | character varying | not null default 'booked'::character varying
Indexes:
"booking_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"booking_appointment_id_fkey" FOREIGN KEY (appointment_id) REFERENCES appointment(id)
"booking_customer_id_fkey" FOREIGN KEY (customer_id) REFERENCES customer(id)
"booking_membership_id_fkey" FOREIGN KEY (membership_id) REFERENCES membership(id)
Referenced by:
TABLE "booking_decline_reason" CONSTRAINT "booking_decline_reason_booking_id_fkey" FOREIGN KEY (booking_id) REFERENCES booking(id)
I am trying to update booking.updated_at from statemachine_history.updated_at.
Note that there is a one-to-many relationship between the two tables, so I want MAX(statemachine_history.updated_at).
My try is:
UPDATE booking SET updated_at=
(
SELECT MAX(updated_at)
FROM statemachine_history
WHERE schema_name='Booking'
AND identifier=id
GROUP BY identifier
);
However, booking.updated_at becomes NULL.
All you really need to do is make sure id references booking.id by qualifying it explicitly; unqualified, id resolves to statemachine_history.id inside the subquery:
UPDATE booking SET updated_at=
(
SELECT MAX(updated_at)
FROM statemachine_history
WHERE schema_name='Booking'
AND identifier = booking.id
GROUP BY identifier
);
If there are performance requirements for the query, you'll want to look into the join in TomH's answer below, though.
This should do what you need, using an UPDATE ... FROM join:
UPDATE booking B
SET updated_at = SQ.max_updated_at
FROM
(
    SELECT
        identifier,
        MAX(updated_at) AS max_updated_at
    FROM statemachine_history
    WHERE schema_name = 'Booking'
    GROUP BY identifier
) AS SQ
WHERE SQ.identifier = B.id;
Because only rows with a match in SQ are updated, this form also leaves bookings without history rows untouched.
[Solved] PSQL Query:
UPDATE booking SET updated_at=
(
SELECT MAX(updated_at)
FROM statemachine_history
WHERE schema_name='Booking'
AND identifier=booking.id
GROUP BY identifier
) WHERE exists(SELECT id FROM statemachine_history WHERE schema_name='Booking' AND identifier=booking.id);
This part:
WHERE exists(SELECT id FROM statemachine_history WHERE schema_name='Booking' AND identifier=booking.id);
avoids updating booking.updated_at when there is no matching row in statemachine_history; without it, the scalar subquery returns NULL for those bookings and overwrites their updated_at.
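As a sanity check, a hypothetical query to list the bookings the guard protects (those with no statemachine_history rows):
SELECT b.id
FROM booking b
WHERE NOT EXISTS (
    SELECT 1
    FROM statemachine_history sh
    WHERE sh.schema_name = 'Booking'
      AND sh.identifier = b.id
);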

MySQL update to other tables

I want to be able to insert data into t1 and have data get populated in t2, with t1's primary key as a foreign key in t2.
Basically, in my current setup, when I INSERT INTO t1 (first_name, last_name) VALUES ('blah', 'blah'); and then do SELECT * FROM t2;, why does it say Empty set (0.00 sec) for t2? Shouldn't it at least show the default id of 1?
t1:
+------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+------------------+------+-----+---------+----------------+
| first_name | varchar(20) | NO | | NULL | |
| last_name | varchar(20) | NO | | NULL | |
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
+------------+------------------+------+-----+---------+----------------+
t2:
+-----------+------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+------------------+------+-----+---------+-------+
| address | varchar(50) | NO | | NULL | |
| id | int(10) unsigned | NO | MUL | NULL | |
| last_name | varchar(20) | YES | | NULL | |
+-----------+------------------+------+-----+---------+-------+
In a relational database, a FOREIGN KEY is a declaration that you intend to insert values into T2 that must match an already existing value in T1, and that you want the database to refuse to perform any action that would break this relationship.
It does not mean that the database will create records on its own in order to satisfy a relationship. If you try to insert a value into T2 that does not exist in T1, the command will fail; it will not add the required record to T1.
That is the opposite of what you're suggesting, however: you want the foreign key values to be generated automatically. But there is no requirement that a primary key value actually be referenced, and no limit on the number of times it can be referenced, so how would the database guess what should be created in T2?
That said, if you want some of your own code to execute automatically when data is added to T1, code which can do whatever you want, you can create a trigger on T1.
No, tables won't propagate automatically (you can, however, do it with triggers); you will have to insert into t2 yourself.
You can create a trigger on table t1 so that it inserts a row into t2 with the correct id and placeholder values for the other fields.
Foreign keys will not insert records for you.
DELIMITER ;;
CREATE TRIGGER insert_addr_rec AFTER INSERT ON t1
FOR EACH ROW BEGIN
    -- AFTER (not BEFORE) insert: the AUTO_INCREMENT value in NEW.id is only
    -- populated once the t1 row has actually been inserted.
    -- t2.address is NOT NULL with no default, so give it a placeholder.
    INSERT INTO t2 SET id = NEW.id, last_name = NEW.last_name, address = '';
END ;;
DELIMITER ;
NB untested code
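A quick sketch of the trigger in action (assuming the t1/t2 definitions shown above):
INSERT INTO t1 (first_name, last_name) VALUES ('blah', 'blah');
SELECT * FROM t2;
-- now returns one row: address = '', id = 1, last_name = 'blah'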