MySQL Query Update Multiple Tables - mysql

I have a contact management system and an SQL dump of my contacts, with five or six columns of data in it, that I want to import into three specific tables. Wondering what the best way to go about this is. I have already uploaded the SQL dump...it's a single table now in my contact management database.
The tables in the CRM require, in the companies table, only the contactID...and in the songs table:
companyID,
contactID,
date added (not required) and
notes (not required)
Then there is the third table, the contact table, which only requires contactname.
I have already uploaded data to each of the three tables (not sure if my order is correct on this) but now need to match the data in the fourth table (originally the SQL dump) against the other three and update everything with its unique identifier.
Table Structures:
+DUMP_CONTACTS
id <<< I don't need this ID, the IDs given to each row in the CRM are the important ones.
contact_name
company
year
event_name
event_description
====Destination Tables====
+++CONTACTS TABLE+++
*contactID < primary key
*contact_name
+++COMPANIES TABLE+++
*companyID < primary key
*name
*contact_ID
*year
++++Events++++
*EventID < primary key
*companyID
*contactID
*eventname
*description

There are parts of your post that I still don't understand, so I'm going to give you some SQL statements that you can run in a testing environment; we can take it from there and/or go back and start again:
-- Populate CONTACTS_TABLE with contact_name from the uploaded dump
INSERT INTO CONTACTS_TABLE (contact_name)
SELECT contact_name FROM DUMP_CONTACTS;

-- Populate COMPANIES with information from both CONTACTS_TABLE + dump
INSERT INTO COMPANIES (name, contact_ID, year)
SELECT d.company, c.contactID, d.year
FROM DUMP_CONTACTS AS d
INNER JOIN CONTACTS_TABLE AS c
    ON d.contact_name = c.contact_name;

-- Populate SONGS_TABLE with info from COMPANIES
INSERT INTO SONGS_TABLE (companyID, contactID)
SELECT cm.companyID, cm.contact_ID
FROM COMPANIES AS cm;

-- Populate Events with info from COMPANIES + dump
INSERT INTO Events (companyID, contactID, eventname, description)
SELECT cm.companyID, cm.contact_ID, d.event_name, d.event_description
FROM DUMP_CONTACTS AS d
INNER JOIN COMPANIES AS cm
    ON d.company = cm.name;
I first populate CONTACTS_TABLE and then, since the contactID is required for records in COMPANIES, insert records from CONTACTS_TABLE joined with the dump. SONGS_TABLE takes data directly from COMPANIES, and lastly the Events table gets its data by joining COMPANIES and the dump.
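One thing worth noting before running these: both joins match on names rather than IDs, so repeated names in the dump would multiply rows in COMPANIES and Events. A quick sanity check along these lines (a sketch using the table names above) may be worth running first:
-- Contact names that appear more than once in the dump
SELECT contact_name, COUNT(*) AS occurrences
FROM DUMP_CONTACTS
GROUP BY contact_name
HAVING COUNT(*) > 1;

-- Company names that appear more than once in the dump
SELECT company, COUNT(*) AS occurrences
FROM DUMP_CONTACTS
GROUP BY company
HAVING COUNT(*) > 1;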

Related

How to get different columns from different table in single join in gorm

Hi, I have 2 tables, companies and user (user_id is a foreign key in the companies table).
I want to get the company details of all users holding a particular phone number (the phone number is present in the user table).
For that I am executing a gorm query like this (db is my *gorm.DB handle):
db.Table("companies").Joins("inner join users ON companies.user_id = users.id AND users.phone_number IN (?)", []string{"1234", "3456"}).Find(&out)
Here out is my struct that contains all columns of the companies table; along with that I added a field phone_number which comes from the users table.
My problem is that I am getting all the companies data in out, but I am not getting phone_number, as it is from the users table.
I can do this by selecting all columns of the companies table plus the phone number from the users table in Select(), but companies has 50 columns, so that is very tedious to do.
Can anyone help me achieve this in a single query without selecting all the columns by name?
I want a gorm query, as I am writing this in Golang.

How to populate a new column in a new database when transferring data from an old database

I have an old database (using MariaDB) and I have to make a new one that's close to the same but has a few differences, and I have to insert all the data from the old one into the new one. I've populated the new one with all the unchanged data, but I'm stuck on getting the 'new' data into it.
The change is that in the old database there was a column in multiple tables containing a country name, but in the new database Country has its own table, so instead of a country name in a column, there is just the foreign key CountryID from the country table.
So the issue is that I have to populate the new CountryID columns with whatever the country's CountryID is. For example, if the country field in the customers table in the old database was USA, then when I translate the data, instead of putting USA it has to go to the new Country table, find the CountryID for USA, and put that ID in the field instead. (Something like this:)
Old Customers Table
-------------------
Country
USA
Canada

New Customers Table
-------------------
CountryID
3
7

CountryTable
---------------------
CountryID   CountryName
3           USA
7           Canada
I know it's probably just a simple insert into with some condition but can't figure out the proper syntax for it.
I've tried different insert into statements similar to the following but keep getting errors:
insert into newDatabase.customers(CountryID)
select oldDatabase.customers.Country
from oldDatabase.customers
where oldDatabase.customers.country = newDatabase.countryTable.CountryName;
insert into newDatabase.customers(CountryID)
select oldDatabase.customers.Country
from oldDatabase.customers
inner join newDatabase.countryTable as c on c.countryName = oldDatabase.customers.Country
where oldDatabase.customers.country = newDatabase.countryTable.countryName;
The end goal is that you want to insert the id from CountryTable into your new customers table, which means you are going to need that table in the query. You are inserting the data from the old customers table, so it'll look like this:
INSERT INTO newdb.customers(CountryID)
SELECT ct.CountryID
FROM olddb.customers as oldc
INNER JOIN newdb.country_table as ct
ON ct.CountryName = oldc.Country;
You don't need a WHERE clause because you aren't trying to filter the rows from the old customers table. You just need the ID from the country table to be mapped to your old customers table. For that reason you JOIN to the country table on the country's name to get that extra information.
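If the rows in newdb.customers already exist (as the question suggests) and only the CountryID column needs filling in, a multi-table UPDATE is an alternative sketch. It assumes both customers tables share a key column, here called CustomerID, which is not shown in the question:
UPDATE newdb.customers AS newc
INNER JOIN olddb.customers AS oldc
        ON oldc.CustomerID = newc.CustomerID      -- assumed shared customer key
INNER JOIN newdb.country_table AS ct
        ON ct.CountryName = oldc.Country
SET newc.CountryID = ct.CountryID;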

Import from Excel to SQL with conditional check for duplicates

I have a huge amount of data stored in PDF files which I would like to convert into a SQL database. I can extract the tables from the PDF files with some online tools. I also know how to import this into MySQL. BUT:
The list contains users with names, birth dates and some other properties. A user may exist in other PDF files too. So when I'm about to convert the next file into Excel and import it to MySQL, I want to check whether that user already exists in my table. And this should be done based on several properties: we may have the same user name but with a different date of birth, and that can be a new record. But if all the selected properties match, then that specific user would be a duplicate and shouldn't be imported.
I guess this is something I can do with a copy from a temporary table, but I'm not sure what the selection should be. Let's say the user name is stored in column A, the date of birth in column B and the city in column C. What would be the right script to check these against the existing table and skip the copy if all three match an existing record?
Thanks!
1- Create a permanent table
CREATE TABLE UploadData
(
    id int NOT NULL AUTO_INCREMENT,
    name varchar(50),
    dob datetime,
    city varchar(30),
    PRIMARY KEY (id)
);
2- Import your Excel data into your SQL DB. The steps below are for SQL Server; I'm not sure about MySQL, but it should be something similar. You said in your question that you already know how to do this, which is why I am not spelling out each step for MySQL.
Right-click your DB, go to Tasks -> Import Data, From: Microsoft Excel, To: your DB name, select the UploadData table (check Edit Columns to make sure the columns match), and finish uploading from Excel to your SQL DB.
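Since the question targets MySQL, one common equivalent is to save the Excel sheet as CSV and load it into the staging table with LOAD DATA (this may require local_infile to be enabled). The file path, delimiter and header handling below are assumptions to adapt:
LOAD DATA LOCAL INFILE '/path/to/upload.csv'   -- hypothetical file path
INTO TABLE UploadData
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES                                 -- skip the header row, if any
(name, dob, city);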
3- Check if the data already exists in your main table; if not, add it.
CREATE TEMPORARY TABLE matchingData (id int, name varchar(50), dob datetime, city varchar(30));

INSERT INTO matchingData
SELECT u.id, u.name, u.dob, u.city
FROM main_table m
INNER JOIN UploadData u ON u.name = m.name
                       AND u.dob = m.dob
                       AND u.city = m.city;

INSERT INTO main_table (name, dob, city)
SELECT name, dob, city
FROM UploadData
WHERE id NOT IN (SELECT id FROM matchingData);
4- The UploadData table is not needed anymore, so: DROP TABLE UploadData;
Add a composite primary key (or unique) constraint across Column A, Column B and Column C.
It will prevent duplicate rows while still allowing duplicate values within a single column.
Note: a table can have only one primary key, though that key may span multiple columns (MySQL allows up to 16 columns in one key).
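As a sketch of that approach, reusing the main_table/UploadData names from the answer above (and a unique key rather than a primary key, in case the table already has one): once the constraint is in place, rows from the staging table that would duplicate an existing (name, dob, city) combination are simply skipped by INSERT IGNORE.
ALTER TABLE main_table
    ADD UNIQUE KEY uq_name_dob_city (name, dob, city);

INSERT IGNORE INTO main_table (name, dob, city)
SELECT name, dob, city
FROM UploadData;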

Search and update on INSERT

A client needs to migrate a large volume of data and I feel this question could be generic enough for SO.
Legacy system
Student profiles contain fields like names, emails, etc., as well as a university name. The university name is stored as a string and as such is repeated, which is wasteful and slow.
Our new form
A more efficient solution is to have a table called university that stores each university name only once, with a foreign key (university_id), and the HTML dropdown just POSTs the university_id to the server. This makes things much faster for GROUP BY queries, for example. New form data going into the database works fine.
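As a rough sketch of the layout being described (the exact table and column names and types here are assumptions, not the real schema):
CREATE TABLE university (
    university_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    university_name VARCHAR(255) NOT NULL UNIQUE    -- each name stored once
);

CREATE TABLE students_new (
    student_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    first_name VARCHAR(100),
    last_name VARCHAR(100),
    email VARCHAR(255),
    university_id INT NULL,                         -- FK instead of a repeated string
    FOREIGN KEY (university_id) REFERENCES university (university_id)
);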
The problem
How can we write a query that will INSERT all the other columns (first_name, last_name, email, ...) but, rather than inserting the university string, look up its university_id in the university table and INSERT the corresponding int instead of the original string? (Scenario: the data is in a CSV file that we will manipulate into INSERT INTO syntax.)
Many thanks.
Use INSERT INTO ... SELECT with a LEFT JOIN. LEFT is chosen so that a student record won't get discarded if it has a null value for university_name.
INSERT INTO students_new (first_name, last_name, email, university_id)
SELECT s.first_name, s.last_name, s.email, u.university_id
FROM students_old s
LEFT JOIN university u ON s.university_name = u.university_name;
Table and column names are to be replaced with real ones. The above assumes that your new students table holding the foreign key to university is students_new, while the old one (from before normalisation) is students_old.
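Since the legacy data is arriving as a CSV, one way to avoid hand-building INSERT statements is to load it into students_old (or a staging copy of it) and then run the INSERT ... SELECT above. The file path and CSV options below are assumptions, and the final statement is a sketch for adding any university names the lookup table doesn't have yet:
LOAD DATA LOCAL INFILE '/path/to/legacy_students.csv'   -- hypothetical path
INTO TABLE students_old
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(first_name, last_name, email, university_name);

-- Add any university names from the legacy data that are missing from the lookup table
INSERT INTO university (university_name)
SELECT DISTINCT s.university_name
FROM students_old s
LEFT JOIN university u ON u.university_name = s.university_name
WHERE u.university_id IS NULL
  AND s.university_name IS NOT NULL;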

Database table copying

I am trying to rectify a previous database design whose tables contain data that needs to be kept. Rather than recreating a completely new database, since some of the tables are still reusable, I need to split an existing table into 2 new tables, which I have done. Now I am trying to insert the data into the 2 new tables, and because of duplicate data in the old table I am having a hard time doing this.
Old table structure:

ClientProjects
    clientId PK
    clientName
    clientProj
    hashkey (MD5 of clientName and clientProj)

New table structures:

client
    clientId PK
    clientName

projects
    queryId PK
    clientId PK
    projectName
I hope this makes sense. The problem is that in the old table, for example, you have clients with multiple clientIds.
Supposing your clientName is unique, you could do something like:
INSERT INTO client (clientId, clientName)
SELECT MAX(clientId), clientName FROM oldTable GROUP BY clientName;

INSERT INTO projects (clientId, projectName)
SELECT n.clientId, o.clientProj
FROM client n
INNER JOIN oldTable o ON o.clientName = n.clientName;
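Before running the second insert, it may also be worth checking the old table for repeated (clientName, clientProj) pairs, which would produce duplicate rows in projects. A quick check along these lines (oldTable stands in for your ClientProjects table):
SELECT clientName, clientProj, COUNT(*) AS occurrences
FROM oldTable
GROUP BY clientName, clientProj
HAVING COUNT(*) > 1;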