I have a table with suppliers that has the following structure:
Pay attention to the provider field: it is currently a VARCHAR. I took over this system from another developer, and now we need a list of providers and to store additional info about them, so I created another table that actually stores the providers.
It has the following structure: id, name, margin, outer_name, etc.
I plan to change the type of provider to INT(32) so that it points to the provider table.
The problem is that MySQL doesn't support transactions for changes to the database structure (DDL statements commit implicitly).
If I change the field's type from string to integer, I lose all the previous data, and if something goes wrong in the middle, I'm lost.
Would it be OK to dump the data to a file using serialisation and read it back from there?
Are there any better ways to do it?
Please follow the steps below to migrate the data to the new table and alter the column.
1. Insert all the distinct provider values into the new table (provider table):
INSERT INTO providerTable (NAME)
SELECT DISTINCT provider
FROM suppliers;
2. Update the provider ids in the main (supplier) table:
UPDATE suppliers s
INNER JOIN providerTable p ON s.provider = p.name
SET s.provider = p.id;
Before altering the table, please verify the data in the supplier table.
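As a quick sanity check (a sketch using the table and column names above), you can look for supplier rows whose provider value does not match any provider id; any rows returned here would be mangled by the ALTER:
SELECT s.*
FROM suppliers s
LEFT JOIN providerTable p ON s.provider = p.id
WHERE p.id IS NULL;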
3. Then alter the column datatype in the main (supplier) table:
ALTER TABLE suppliers CHANGE provider provider INT(4) NOT NULL;
Using this approach you don't need to take a backup of the table, and you won't lose any data.
I have an interesting problem. I have a relational database for which I can use custom scripts to create the tables. The scripting is pseudo-SQL and doesn't use the standard CREATE syntax; rather, it's fairly limited. What I want to do is store my schema in a MySQL database.
In my custom relational database I have a table called Person with the following fields:
id as NUMBER not nullable PK
name as TEXT (64) characters max
year of birth as DATE
So in order to generate the create scripts I thought of using a MySQL database to store the schema. For example:
I have a MySQL table called custom_table with id and name
e.g. 1, Person would be the first record in it
I have another MySQL table called custom_fields with the following:
field_id as int, not null, pk
table_name_id, foreign key to custom_table
field_name as varchar(255)
field_type as varchar(255)
is_primary_key as tinyint(1)
is_nullable as tinyint(1)
The data set would look like:
field_id | table_name_id | field_name | field_type | is_primary_key | is_nullable
1        | 1             | id         | NUMBER     | 1              | 0
2        | 1             | name       | TEXT       | 0              | 1
3        | 1             | year       | DATE       | 0              | 1
The part that I am stuck on is how/where do I store the length of the TEXT field. I have other field types, such as decimal, which accept additional parameters or default values as well.
I was thinking of maybe having tables called field_date, field_number and field_text, which would relate back to the custom_fields table via a foreign key relationship, but I am unsure how to enforce the fact that each field_id should exist at most once in any of these tables. Any insight or direction for research would be appreciated. My challenge is that I haven't been able to find anything on Stack Overflow or other sites related to something like this.
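To make the idea concrete, here is a sketch of what I have in mind (the detail columns are just placeholders): each detail table reuses field_id as its primary key, so a field can have at most one row per detail table.
CREATE TABLE field_text (
    field_id INT NOT NULL PRIMARY KEY,
    max_length INT NOT NULL,
    FOREIGN KEY (field_id) REFERENCES custom_fields (field_id)
);
CREATE TABLE field_number (
    field_id INT NOT NULL PRIMARY KEY,
    numeric_precision INT,
    numeric_scale INT,
    default_value DECIMAL(30,10),
    FOREIGN KEY (field_id) REFERENCES custom_fields (field_id)
);
This stops a field from appearing twice in the same detail table, but it still doesn't stop a field from having a row in both field_text and field_number at once, which is the part I don't know how to enforce.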
Yes, database tables can and do store the schema. It's called the 'catalog'. Every SQL database should have one; it's maintained by every CREATE TABLE, etc., and you can query it just like any other table.
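In MySQL, for example, the catalog is exposed through information_schema, and a query like this (substitute your own schema name) lists a table's columns along with their lengths, nullability and key status:
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE, COLUMN_KEY
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_database'
  AND TABLE_NAME = 'Person';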
If your (rather mysterious) "pseudo SQL" DBMS doesn't do that, get a proper DBMS. Don't try to re-invent the wheel, because trying to maintain a 'shadow' of the actual schema will lead to anomalies.
I have a huge amount of data stored in PDF files which I would like to convert into a SQL database. I can extract the tables from the PDF files with some online tools, and I also know how to import this into MySQL. BUT:
The list contains users with names, birth dates and some other properties. A user may exist in other PDF files too. So when I'm about to convert the next file into Excel and import it into MySQL, I want to check whether that user already exists in my table. This should be done based on several properties: we may have the same user name but a different date of birth, and that can be a new record. But if all the selected properties match, then that specific user is a duplicate and shouldn't be imported.
I guess this is something I can do with a copy from a temporary table, but I'm not sure what the selection should be. Let's say the user name is stored in column A, the date of birth in column B and the city in column C. What would be the right script to verify these against the existing table and skip the copy if all three match an existing record?
Thanks!
1- Create a permanent table
Create table UploadData
(
id int not null AUTO_INCREMENT PRIMARY KEY,
name varchar(50),
dob datetime,
city varchar(30)
);
2- Import your data from Excel into your SQL DB. The steps below are for SQL Server; I'm not sure about MySQL, but it should be something similar. You said in your question that you already know how to do this, which is why I am not spelling out each step for MySQL.
Right-click your DB, go to Tasks -> Import Data, From: Microsoft Excel, To: your DB name, select the UploadData table (check Edit Columns to make sure the columns match), and finish uploading from Excel to your SQL DB.
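For MySQL, one common route is to save the sheet as a CSV file and load it with LOAD DATA INFILE; a sketch, assuming a comma-separated file with a header row (the file path is a placeholder):
LOAD DATA LOCAL INFILE '/path/to/upload.csv'
INTO TABLE UploadData
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(name, dob, city);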
3- Check whether the data already exists in your main table; if not, add it.
CREATE TEMPORARY TABLE matchingData (id int, name varchar(50), dob datetime, city varchar(30));
INSERT INTO matchingData
SELECT u.id, u.name, u.dob, u.city
FROM main_table m
INNER JOIN UploadData u ON u.name = m.name
AND u.dob = m.dob
AND u.city = m.city;
INSERT INTO main_table (name, dob, city)
SELECT name, dob, city
FROM UploadData
WHERE id NOT IN (SELECT id FROM matchingData);
4- The UploadData table is not needed anymore, so: DROP TABLE UploadData;
Add a composite primary key constraint covering column A, column B and column C (see the sketch below).
It will prevent duplicate rows while still allowing duplicate values within a single column.
Note: a table can have only one primary key, although it may span several columns (MySQL allows up to 16 columns in an index).
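A sketch of that idea, reusing the table and column names from the answer above; a UNIQUE key is used here instead of a primary key so it doesn't clash with any existing primary key on main_table, but it prevents duplicates in the same way:
-- Treat the combination (name, dob, city) as the identity of a row
ALTER TABLE main_table ADD UNIQUE KEY uq_person (name, dob, city);
-- Rows from UploadData that would violate the key are silently skipped
INSERT IGNORE INTO main_table (name, dob, city)
SELECT name, dob, city FROM UploadData;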
I am creating a demo application and I am stuck on a scenario: I can't figure out the right query to fetch data from the SQL database in the following situation.
I have a table named RegistrationTable, this table has a column RegistrationId as its primary key. There is another table named ApplicationDetails, this table has a column ApplicationId as its primary key.
I have referenced ApplicationId as a foreign key to the RegistrationId column.
My requirement is that a single user can apply to multiple jobs; the job details will be present in the ApplicationDetails table.
How can I check how many jobs the user has applied to, based on his email id stored in the registration table?
I have a column Status in the ApplicationDetails table; as soon as a user applies to a job I update the status column.
I am trying the following query but it's not working:
SELECT Status FROM ApplicationDetails
INNER JOIN RegistrationTable ON ApplicationTable.ApplicationId = RegistrationTable.RegistrationId
WHERE RegistrationTable.EmailId = "abc#xyz.com";
Can anyone please suggest how I can go about this? I am a beginner at SQL. Thanks in advance.
You need to change the table name in your query to ApplicationDetails, which is the name you mentioned in your post:
SELECT Status FROM ApplicationDetails
JOIN RegistrationTable ON ApplicationDetails.ApplicationId = RegistrationTable.RegistrationId
WHERE RegistrationTable.EmailId = "abc#xyz.com";
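Since the requirement is to know how many jobs the user has applied to, a COUNT over the same join (a sketch reusing the names from your post) gives the number directly:
SELECT COUNT(*) AS jobs_applied
FROM ApplicationDetails
JOIN RegistrationTable ON ApplicationDetails.ApplicationId = RegistrationTable.RegistrationId
WHERE RegistrationTable.EmailId = "abc#xyz.com";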
I have two tables in one database:
the customer table has the customer coordinate info
the customer type table has info about the type of each customer
I want to have a destination customer table that has:
key
name
address
...
type
I did create a database view of the customer table joined to the customer type table, but the resulting query only showed me the rows where the customer table key matches the customer foreign key in the customer type table,
and there are also rows in the customer table that have no type.
How do I solve this issue?
What kind of transformations did you use in your data flow task?
I would rather do it on the SQL Server side, using an INNER JOIN and something like SELECT INTO. If you want to join two tables in SSIS, use the Merge Join Transformation. However, the inputs to the Merge Join Transformation must be sorted, so you can either use the Sort Transformation or just use ORDER BY when you fetch the data with the OLE DB Source adapters.
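Roughly like this on the SQL Server side (only a sketch; the table and column names are placeholders since I don't know your actual schema). Note that it uses a LEFT JOIN rather than an INNER JOIN, so customers that have no type row are still kept, which seems to be what you need:
SELECT c.[key], c.name, c.address, t.type
INTO destination_customer
FROM customer c
LEFT JOIN customer_type t ON t.customer_key = c.[key];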
Hope this helps.
I need to run an application that collects news feeds and adds new entries to my database, so I planned to create two tables: one as the source and the other as the target.
My plan is to first load all the information into the source table and later update the target table with the unique data (newly updated news or new records).
But the issue is that some feeds are repeated on other websites, so the application breaks immediately after reading a duplicate entry.
I have attached my MySQL query below
create table table1 (
DateandTime datetime,
Name tinytext,
Title varchar(255),
Content longtext,
unique(Title)
);
I know that this sounds too basic, but I don't have a solution.
I appreciate your feedback and ideas. Thank you.
A few solutions:
A unique column should prevent duplicate data
INSERT ... WHERE NOT EXISTS (see the sketch after this list)
Use the MERGE storage engine in MySQL (http://dev.mysql.com/doc/refman/5.1/en/merge-storage-engine.html)
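A sketch of the second option, using the table1 columns from the question and a hypothetical staging table incoming_feeds that holds the freshly collected items:
INSERT INTO table1 (DateandTime, Name, Title, Content)
SELECT i.DateandTime, i.Name, i.Title, i.Content
FROM incoming_feeds i
WHERE NOT EXISTS (
    SELECT 1 FROM table1 t WHERE t.Title = i.Title
);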
I modified my query based on Marcus Adams's suggestion.
INSERT IGNORE INTO table1 (
DateandTime,
Name,
Title,
Content)
VALUES
(.......
);
I think a single table is sufficient to address my issue. Thank you.