Postgres Output id of dup rows between 2 tables - json

I have 2 tables: a master table and a master_staging table. I want to know which rows in the master_staging table also appear in the master table. Below is the structure of master_staging:
    Column    |          Type          |
--------------+------------------------+----------------------------
 id           | integer                |
 firstname    | character varying(255) |
 lastname     | character varying(255) |
 email        | text                   |
 address      | character varying(255) |
 country      | character varying(255) |
 phones       | json                   |
 twitters     | json                   |
 linkedin     | character varying(255) |
 urls         | json                   |
 source       | character varying(255) |
 notes        | json                   |
 conflict_id  | integer                | **************************
 businessname | character varying(255) |
 warnings     | json                   | **************************
 has_warning  | boolean                | **************************
 deleted      | boolean                | **************************
The structure of the master table is exactly the same, except that it does not contain the columns marked with '**************************' in the third column of the table above.
One solution is to loop over all rows in the master_staging table, query the master table for rows that have the same firstname, lastname, and email, and from there 'manually' check whether all the other fields are identical.
I am hoping there is a more elegant solution that lets me run one SQL statement which returns the ids of all rows in master_staging that have duplicates in master. The main problem that is stumping me is that I'm not sure how to do deep json equality checks.

You can JOIN two tables on whatever fields make a record unique. For example,
SELECT MS.id
FROM master M
JOIN master_staging MS
  ON M.email = MS.email
 AND M.firstname = MS.firstname
 AND M.lastname = MS.lastname
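If the json columns also have to match, one option is to cast them to jsonb, which, unlike json, has an equality operator that compares the parsed values rather than the raw text. A minimal sketch, assuming PostgreSQL 9.4+ (for jsonb) and using IS NOT DISTINCT FROM so that two NULLs count as a match:
SELECT MS.id
FROM master_staging MS
JOIN master M
  ON M.firstname = MS.firstname
 AND M.lastname = MS.lastname
 AND M.email = MS.email
 AND M.address IS NOT DISTINCT FROM MS.address
 AND M.country IS NOT DISTINCT FROM MS.country
 AND M.linkedin IS NOT DISTINCT FROM MS.linkedin
 AND M.source IS NOT DISTINCT FROM MS.source
 AND M.businessname IS NOT DISTINCT FROM MS.businessname
 -- deep json comparison: jsonb equality ignores key order and whitespace
 AND M.phones::jsonb IS NOT DISTINCT FROM MS.phones::jsonb
 AND M.twitters::jsonb IS NOT DISTINCT FROM MS.twitters::jsonb
 AND M.urls::jsonb IS NOT DISTINCT FROM MS.urls::jsonb
 AND M.notes::jsonb IS NOT DISTINCT FROM MS.notes::jsonb;
If the stored json text has to match byte for byte instead, comparing the columns cast to ::text would do that, since the json type keeps the original text verbatim.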

Related

How to update specific value without updating a whole value in MySQL

I have a table like this
+-----+------------------+
| id | name |
+-----+------------------+
| 1 | John;Black;Mike |
+-----+------------------+
| 2 | White;Mike;John |
+-----+------------------+
| 3 | Jacob;Mike |
+-----+------------------+
| 4 | Will;Mason;Mike |
+-----+------------------+
as the result of
SELECT * FROM people WHERE name LIKE '%Mike%';
Is there any query to update the specific name Mike to Michael without updating the whole value, like John;Black;Mike to John;Black;Michael, in all rows automatically?
You could use REPLACE:
update people
set name = replace(name, 'Mike', 'Michael')
where name LIKE '%Mike%';
Anyway, you should avoid storing delimiter-separated values like this; you should consider a properly normalized table for this data.
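One caveat: REPLACE matches substrings, so it would also turn a name like Mikey into Michaely. A delimiter-aware sketch, assuming the values are always separated by ';' with no surrounding spaces:
-- pad the list with ';' so every name is fully delimited,
-- swap only the exact item, then strip the padding again
update people
set name = trim(both ';' from replace(concat(';', name, ';'), ';Mike;', ';Michael;'))
where concat(';', name, ';') like '%;Mike;%';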

mysql copy many records with one change

Here is a table Evact:
+--------------+-----------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-----------------------+------+-----+---------+-------+
| EvActMas | char(10) | NO | PRI | | |
| EvActSub | char(10) | NO | PRI | | |
| EvActCode | char(10) | NO | PRI | | |
| EvActIncOutg | enum('I','O','B','N') | YES | | NULL | |
| EvActBudAct | enum('B','A','O') | YES | | NULL | |
...other columns ...
and here are some records:
EvActMas EvActSub EvActCode EvActIncOutg EvActBudAct ..other..
Bank-2017 Incoming mth01 I A
Bank-2017 Incoming mth02 I A
Bank-2017 Incoming mth03 I A
Bank-2017 Incoming mth04 I A
Bank-2017 Incoming mth05 I A
Bank-2017 Incoming mth06 I A
I want to add six new records to the table where 'Incoming' is changed to 'Outgoing' and 'I' is changed to 'O'.
I did it the hard way by creating a new table from the old one; updating the new table and then inserting back into Evact:
Create table btemp like Evact;
insert into btemp select * from Evact where EvActSub = 'Incoming';
update btemp set EvActSub = 'Outgoing', EvActIncOutg = 'O';
insert into Evact select * from btemp;
That worked, but I want to get better at SQL. What I wish for is a way to do this in one step by joining Evact to itself in some way. Does anyone have a suggestion?
If you want to insert a bunch of rows that are part copies of existing rows:
INSERT INTO evact
SELECT evactmas, 'Outgoing', evactcode, 'O', evactbudact, ...other..
FROM evact
WHERE evactsub = 'Incoming'
You make a SELECT statement that produces exactly the data you want to insert: some columns in the select are the existing values as-is, other columns are the new values.
If you aren't specifying all the columns in the select, you'll have to put a list of column names in brackets after the table name so MySQL knows which columns are to get what data. You can only omit the column list if your select returns the same number of columns as the table has (in which case the selected columns must be in the same order as the table's columns).
If your table has a generated primary key (auto_increment, for example), specify the value to insert as 0 or NULL to have MySQL calculate a new value for it, or name all the columns except that one after the table name and omit it from the select list.
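As a sketch of the explicit-column-list form for this table, using the column names from the description above (the columns that are not listed must be nullable or have defaults for this to work):
-- copy only the Incoming rows, swapping in the new sub and direction
INSERT INTO Evact (EvActMas, EvActSub, EvActCode, EvActIncOutg, EvActBudAct)
SELECT EvActMas, 'Outgoing', EvActCode, 'O', EvActBudAct
FROM Evact
WHERE EvActSub = 'Incoming';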

SQL comparing strings from two tables with different data formats

I have two tables with different columns that I would like to compare. There is an issue in our system with serial numbers, and I want to make sure that all of the serial numbers in Table B (CMMTTEXT, in comma-delimited form) are being transferred to Table A (SERLTNUM, where each individual serial number has its own row).
Basically, what I would like to do is take the SOPNUMBERs from the last 3 months (which I would get from Table C), then get all rows from Table B and Table A with those SOPNUMBERs, and then somehow make sure all serial numbers in CMMTTEXT in Table B are in Table A as SERLTNUM.
I know how to get all of the data, but I'm not sure how to compare the two columns in SQL when they have different data formats. I am trying to figure out whether there is some way I can just use substr() to search CMMTTEXT, but I don't know how I could then display rows where no match was found.
The LNITMSEQ column is an ID that corresponds to different line items in an order.
Table A
+-----------+----------+----------+---------------+
| SOPNUMBER | LNITMSEQ | SERLTNUM | ITEMNMBR |
+-----------+----------+----------+---------------+
| I327478 | 16384 | ABC123 | someItem |
+-----------+----------+----------+---------------+
| I327478 | 32768 | DEF123 | someOtherItem |
+-----------+----------+----------+---------------+
Table B
+-----------+----------+-----------------------------+
| SOPNUMBER | LNITMSEQ | CMMTTEXT |
+-----------+----------+-----------------------------+
| I327478 | 16384 | ABC123,ABC124,ABC125,ABC126 |
+-----------+----------+-----------------------------+
| I327478 | 32768 | DEF123,DEF124,DEF125,DEF126 |
+-----------+----------+-----------------------------+
Table C
+-----------+-----------+
| SOPNUMBER | DATE |
+-----------+-----------+
| I327478 | 5/20/2017 |
+-----------+-----------+
| I327479 | 5/21/2017 |
+-----------+-----------+
I have commented above, but a clearer answer can be found here for what you need:
SQL split values to multiple rows
You can use the FIND_IN_SET function as follows:
SELECT * FROM TableA INNER JOIN TableB
ON FIND_IN_SET(TableA.SERLTNUM, TableB.CMMTTEXT) > 0
The FIND_IN_SET function returns a value in the range of 1 to N if the string str is in the string list strlist consisting of N substrings. For more detail, see the manual.
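To flag the lines where serials are missing rather than list the matches, one sketch (sticking with FIND_IN_SET and assuming the TableA/TableB/TableC names from the question, and that DATE is a real date column) is to compare, per line, the number of matched serials in Table A against the number of comma-separated items in CMMTTEXT:
SELECT B.SOPNUMBER, B.LNITMSEQ, B.CMMTTEXT
FROM TableB B
JOIN TableC C
  ON C.SOPNUMBER = B.SOPNUMBER
 AND C.DATE >= DATE_SUB(CURDATE(), INTERVAL 3 MONTH)  -- last 3 months only
LEFT JOIN TableA A
  ON A.SOPNUMBER = B.SOPNUMBER
 AND A.LNITMSEQ = B.LNITMSEQ
 AND FIND_IN_SET(A.SERLTNUM, B.CMMTTEXT) > 0
GROUP BY B.SOPNUMBER, B.LNITMSEQ, B.CMMTTEXT
HAVING COUNT(DISTINCT A.SERLTNUM) <
       LENGTH(B.CMMTTEXT) - LENGTH(REPLACE(B.CMMTTEXT, ',', '')) + 1;  -- item count in the list
FIND_IN_SET is MySQL-specific; if these tables actually live in SQL Server, the same idea would need STRING_SPLIT (2016+) or a delimiter-padded LIKE instead.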

MySQL Moving table from varchar to int

I am moving an old Mantis table that had a varchar(64) category_id column to a new Mantis table that has an int(10) category_id column.
The simplified structure is as follows
bug_table (Old DB)
+----+-------------+-------------+--------+
| id | project_id | category_id | report |
+----+-------------+-------------+--------+
| 1 | 0 | Server | crash |
| 2 | 0 | Database | error |
| 3 | 1 | Server | bug |
| 4 | 1 | Server | crash |
+----+-------------+-------------+--------+
category_table (New DB)
+----+------------+----------+
| id | project_id | name |
+----+------------+----------+
| 0 | 1 | Server |
| 1 | 1 | Database |
| 2 | 2 | Server |
| 3 | 2 | Database |
+----+------------+----------+
I need a magical query that will replace category_id in the bug_table with the numerical id from the category_table. Thankfully I am able to match rows by project_id and categories by name.
Here is the query I am working on, but I have gotten stuck on the complexity:
UPDATE bug_table b SET b.category_id = c.id USING category_table WHERE b.category_id = c.name
I like to approach such a task a little differently than you do, using a new lookup/reference table:
1. To me, the new category table would only have id and name columns. There are only two distinct names based on the sample data: Server and Database. Yes, I realize there could be other names, but those can easily be added, and should be added, before proceeding, to maximize the id matching that follows.
2. Next I would add a new column to the bug table that could be called category_new, with the data type that will store the new category id. Alternatively, you could rename the existing category_id column to category, and the new column for the ids could then be category_id.
3. After all that is done, you can update the new column by joining the category table on names and setting the id that matches (note this assumes the non-alternative approach mentioned in step 2):
UPDATE bug_table JOIN category_table ON bug_table.category_id = category_table.name
SET bug_table.category_new = category_table.id
4. After that runs, check the new column to verify the updated ids.
5. Finally, after a successful update, the old category_id column (with the names) can be dropped from the bug_table, and the category_new column can be renamed to category_id.
Note that if you decide to go with the alternative column approach mentioned, the query will be similar but differ slightly, and only a column drop is needed at the end.
If there are other tables that need the same category changes, the operation (basically steps 2 through 5) would be similar for those tables too.
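Put together as concrete statements, a sketch of steps 2 through 5 might look like this (the int(10) type is taken from the question; check the verification query before dropping anything):
-- step 2: add the column that will hold the numeric id
ALTER TABLE bug_table ADD COLUMN category_new INT(10);
-- step 3: fill it by matching on the category name
UPDATE bug_table
JOIN category_table ON bug_table.category_id = category_table.name
SET bug_table.category_new = category_table.id;
-- step 4: verify, e.g. look for rows that found no match
SELECT * FROM bug_table WHERE category_new IS NULL;
-- step 5: drop the old name column and rename the new one
ALTER TABLE bug_table DROP COLUMN category_id;
ALTER TABLE bug_table CHANGE category_new category_id INT(10);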

INSERT / UPDATE SQL random & unique VARCHAR

I need to be able to INSERT/UPDATE a unique, random, UTF-8 alphanumerical VARCHAR(55) into a table field called 'key'.
I can't find any good query example; can anyone show me or link me something?
This answer is based on MySQL.
This select will create 55-character-long random strings:
select substr(concat(md5(rand()),md5(rand())),1,55);
to fill your table column you might want to try out:
create table example (keycol varchar(55));
insert into example (keycol) values (substr(concat(md5(rand()),md5(rand())),1,55));
The result will be:
select keycol from example;
+---------------------------------------------------------+
| keycol |
+---------------------------------------------------------+
| 4517f4be669301a4a529b53fc18d646dec42d4d07d911d33a67c863 |
| 3caa1c98f0f9ee39515aa6f4ddb3f84fa41abd5392f610c5d24bcd9 |
| 8e52cb4ce29e58514671c9b68f19832f26ddf53f277621ac420bd2e |
| 3adcccfb6cb729ce1c0a14fb75f6fd54f58992dc0751527c969e007 |
| c28c5879589dc90f4fb0963673e5668fa5789d325423ba043e0243b |
| 8f7a2af97d73261008f0d0d7249480fde56a3a91f2ce6e8bf0b0070 |
| ff4f74f25b92da3eaab282218c23a75d4cfa77c8f8bfdf74d7ebdf9 |
+---------------------------------------------------------+
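One caveat: md5(rand()) makes collisions unlikely but does not guarantee uniqueness. If the column really must be unique, a sketch is to let the database enforce it (keycol is the example column from above; the question's own column is called key, which would need backticks because KEY is a reserved word in MySQL):
create table example (
  keycol varchar(55) not null,
  unique (keycol)  -- a duplicate value now makes the insert fail instead of slipping through
);
insert into example (keycol)
values (substr(concat(md5(rand()),md5(rand())),1,55));
If an insert does hit a duplicate, it can simply be retried with a new random value.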