I recently started working with SQL in the Visual Studio environment. I have created the following two tables and populated them with values. These are the commands for creating the users and photos tables:
CREATE TABLE users(
id INTEGER AUTO_INCREMENT PRIMARY KEY,
username VARCHAR(255) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE photos(
id INTEGER AUTO_INCREMENT PRIMARY KEY,
image_url VARCHAR(255) NOT NULL,
user_id INTEGER NOT NULL,
created_at TIMESTAMP DEFAULT NOW(),
FOREIGN KEY(user_id) REFERENCES users(id)
);
Now these are the statements that I ran to populate the tables
INSERT INTO users(username) VALUES
('Colton'),('Ruben');
INSERT INTO photos(image_url,user_id) VALUES
('/alskjd76',1),
('/lkajsd98',2);
Now if I run the statement
SELECT *
FROM photos
JOIN users;
I get the tables:
Now if I run the command:
SELECT *
FROM users
JOIN photos;
I get this table:
Here are the users table and the photos table.
Now my question is: why is the "id" column in the second result 4, 4, 5, 5 when the actual "id" column of the users table only contains the values 1 and 2? The first query seems to respect this; why doesn't the second?
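For what it's worth, a JOIN with no ON clause in MySQL behaves as a cross join: every photos row is paired with every users row, so the result has (photos × users) rows and the id columns repeat. To pair each photo with its owner, the join condition on the foreign key can be stated explicitly; a sketch against the tables above:

```sql
-- Match each photo to its owning user via the foreign key
SELECT photos.id, photos.image_url, users.id AS user_id, users.username
FROM photos
JOIN users ON photos.user_id = users.id;
```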
EDIT: It now seems to display the following when I run:
SELECT *
FROM photos
JOIN users;
and when I run:
SELECT *
FROM users
JOIN photos;
Edit: this seems to be correct now. It appears to have been solved by deleting and recreating the tables entirely. I think Visual Studio might have mistakenly treated the table as already containing photos with ids 1-3.
I need to load data into a DB using Sequelize on first application load. The initial excel data was given in the following format:
Car group fields: title | group_code
Car group data:
('Mercedes','M'),
('Volkswagen','VW');
Car Fields: car_code | owner | group_code
Car data:
('11-1135','Fred','M'),
('11-1146','Bob','VW');
--
Ideally what I want to end up with in the DB is the following:
Car group fields: group_id | title | group_code
Car group data:
(1, 'Mercedes','M'),
(2, 'Volkswagen','VW');
Car Fields: car_id | car_code | owner | group_id (refers to the group id created above)
Car data:
(1, '11-1135','Fred', 1),
(2, '11-1146','Bob', 2);
--
What is the best approach to doing this in Sequelize? In SQL I did the following to get around this problem:
1. Converted my Excel file into a bunch of SQL statements.
2. Created the following script using those statements (and then I added my own code to fill in the group_id):
CREATE TABLE CarGroup(
group_id INT NOT NULL AUTO_INCREMENT,
title VARCHAR(100) NOT NULL,
group_code VARCHAR(5) NOT NULL,
PRIMARY KEY (`group_id`),
CONSTRAINT UN_car_group_code UNIQUE (group_code)
) ENGINE=InnoDB;
INSERT INTO CarGroup(title,group_code) VALUES ('Mercedes','M');
INSERT INTO CarGroup(title,group_code) VALUES ('Volkswagen','VW');
CREATE TABLE IF NOT EXISTS Car(
car_id INT NOT NULL AUTO_INCREMENT,
car_code VARCHAR(10),
owner VARCHAR(100) NOT NULL,
group_id INT, -- populated after insert; INT to match CarGroup.group_id
group_code VARCHAR(10) NOT NULL, -- deleted after insert
PRIMARY KEY (car_id),
CONSTRAINT `UN_car_code` UNIQUE (`car_code`),
CONSTRAINT `FK_car_group_id` FOREIGN KEY (`group_id`) REFERENCES `CarGroup` (`group_id`)
) ENGINE=InnoDB;
INSERT INTO Car(car_code,owner,group_code) VALUES ('11-1135','Fred','M');
INSERT INTO Car(car_code,owner,group_code) VALUES ('11-1146','Bob','VW');
-- GENERATE GROUP ID'S BASED ON GROUP CODE AND DROP GROUP CODE COLUMN --
update Car INNER JOIN CarGroup ON Car.group_code = CarGroup.group_code
SET Car.group_id = CarGroup.group_id;
alter table Car drop column group_code;
I can't see how the above can be achieved using migrations and seeding, as I need to create the model, then do the seeding, and then run the alteration. Is it easier to just run plain SQL statements in Sequelize in this case? Or should I just use the data as it is and link the two tables via group_code as the foreign key (which is a string, so not the best performance compared to a plain INT id)?
Any direction on this is much appreciated!
Not sure if this is the best approach, but since no one answered, I decided to do the following:
1. Create two tables, OriginalCars and Cars. OriginalCars has the original fields that the Excel file has (i.e. car_code). The Cars table has the car_id and the other fields.
2. Create the models.
3. Sync the models.
4. Check manually whether there is any data in the tables; if not, populate the OriginalCars table with data. I then do an inner join of OriginalCars with the group table, and the resulting data is parsed and added to the Car table with car_id.
5. Delete the original table, as it's no longer needed.
Feels a tad hacky, but it only has to do this on the initial load of the app to populate the initial data.
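For reference, the temporary group_code column (and the staging table) can also be avoided entirely with an INSERT ... SELECT that resolves the code at insert time. This is only a sketch against the CarGroup/Car schema above, with the raw Excel rows inlined as a derived table:

```sql
-- Resolve group_code to group_id at insert time; no staging column needed
INSERT INTO Car (car_code, owner, group_id)
SELECT t.car_code, t.owner, g.group_id
FROM (
  SELECT '11-1135' AS car_code, 'Fred' AS owner, 'M' AS group_code
  UNION ALL
  SELECT '11-1146', 'Bob', 'VW'
) AS t
JOIN CarGroup g ON g.group_code = t.group_code;
```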
I need help solving this problem...
I have created a users table which has following columns
Create table users
(
uid int(10) PRIMARY KEY AUTO_INCREMENT,
uname varchar(50),
password varchar(50),
email varchar(50)
);
When I insert values with uid, it executes successfully:
Insert into users values(1,'ABC','Helloworld','ABC#gmail.com');
but when I try without uid:
Insert into users values('SDC','Helloworld','SDC#gmail.com');
it does not execute successfully and gives an error:
ERROR 1136 (21S01): Column count doesn't match value count at row 1
My uid column has AUTO_INCREMENT, so it should increase automatically.
Of course auto_increment is working correctly. You just need to learn a best practice for INSERT: always list all the columns (unless you really, really know what you are doing):
Insert into users (uname, password, email)
values('SDC', 'Helloworld', 'SDC#gmail.com');
The id column will be auto-incremented. If you don't list the columns, then MySQL expects values for all columns, including the auto-incremented one.
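If you do want a positional VALUES list with no column names, you can instead supply a placeholder for the auto-increment column so the value count matches the column count; for example:

```sql
-- NULL (or DEFAULT) in the uid position lets AUTO_INCREMENT fill it in
INSERT INTO users VALUES (NULL, 'SDC', 'Helloworld', 'SDC#gmail.com');
```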
I have a main database and am moving data from that database to a second data warehouse on a periodic schedule.
Instead of migrating an entire table each time, I want to migrate only the rows that have changed since the process last ran. This is easy enough to do with a WHERE clause. However, suppose some rows have been deleted in the main database. I don't have a good way to detect which rows no longer exist, so that I can delete them in the data warehouse too. Is there a good way to do this (as opposed to reloading the entire table each time, since the table is huge)?
It can be done in the following steps; in this example I am using a customers table:
CREATE TABLE CUSTOMERS(
ID INT NOT NULL,
NAME VARCHAR (20) NOT NULL,
AGE INT NOT NULL,
ADDRESS CHAR (25) ,
LAST_UPDATED DATETIME,
PRIMARY KEY (ID)
);
Create a CDC table:
CREATE TABLE CUSTOMERS_CDC(
ID INT NOT NULL,
LAST_UPDATED DATETIME,
PRIMARY KEY (ID)
);
Create a trigger on the source table for the delete event, like below:
CREATE TRIGGER TRG_CUSTOMERS_DEL
ON CUSTOMERS
FOR DELETE
AS
INSERT INTO CUSTOMERS_CDC (ID, LAST_UPDATED)
SELECT ID, getdate()
FROM DELETED
In your ETL process, where you query the source for changes, add the deleted-record information through a UNION, or create a separate process like below:
SELECT ID, NAME, AGE, ADDRESS, LAST_UPDATED, 'I/U' STATUS
FROM CUSTOMERS
WHERE LAST_UPDATED > #lastpulldate
UNION
SELECT ID, null, null, null, LAST_UPDATED, 'D' STATUS
FROM CUSTOMERS_CDC
WHERE LAST_UPDATED > #lastpulldate
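Once a pull has completed successfully, the CDC table can also be trimmed so it does not grow without bound; a possible cleanup step, assuming the same #lastpulldate bookkeeping:

```sql
-- Remove delete-markers that the ETL has already picked up
DELETE FROM CUSTOMERS_CDC
WHERE LAST_UPDATED <= #lastpulldate;
```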
If you just fire an UPDATE query, it won't update the deleted rows.
The way I see it: say you use your WHERE clause as part of an UPDATE query, unless you are doing a CSV export. If you take a mysqldump of the rows you wish to update and load them into a new tempTable in the main database, then:
UPDATE mainTable
SET ... -- the columns to refresh
WHERE id IN (SELECT id FROM tempTable WHERE id > 0 AND id < 1000);
If there is no corresponding match, no update gets run and no error occurs, by using the id limits as parameters.
I have created a table named users, as follows:
CREATE TABLE users (
u_id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
u_namefirst VARCHAR(100),
u_namelast VARCHAR(100),
u_email VARCHAR(100),
FULLTEXT (u_namefirst, u_namelast, u_email)
) ENGINE=MyISAM;
And populated it with data:
INSERT INTO users (u_namefirst, u_namelast, u_email) VALUES
('Michael','Williams','williams#williams.com'),
('Jon','Test','test#test.com'),
('Jane','Smith','smith#smith.com'),
('Fred','Francis','fred#fred.com'),
('Mike','Williams','mike#williams.com'),
('Michael','Burke','michael#burke.com');
Yet, when I run:
SELECT *
FROM users
WHERE MATCH ( u_namelast ) AGAINST ( 'williams');
I get: "Error Code: 1191. Can't Find FULLTEXT index matching the column list". The index definitely exists and its type is definitely Fulltext. The correct columns have been selected for that index.
I have tried both the InnoDB and MyISAM engines and I get the same result. As a test, when I exchange the last line for WHERE u_namefirst = 'michael' I get a correct result, so I don't believe it is a problem with the existing data.
I'm running MySQL 5.6.16 (x86_64) on Windows 7 Ultimate.
Any help appreciated!
The Problem
Your problem seems to be (from running the code you've been posting) a mix-up of column names. You added a FULLTEXT index on the firstname column:
ALTER TABLE user ADD FULLTEXT INDEX search ( firstname ASC );
Then when you query, you're searching against the surname field, which has no index:
SELECT *
FROM users
WHERE MATCH ( u_namelast ) AGAINST ( 'williams');
The solution
Either indexing the correct column or querying the correct column should work just fine. Be sure to use the correct engine (MyISAM). You can read more about FULLTEXT indexes with MyISAM on the MySQL documentation site.
Example
Using the example code below, I've given you a few options that will work. I've included the ALTER TABLE line so you can see that that method works fine as well.
CREATE TABLE users (
u_id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
u_namefirst VARCHAR(100),
u_namelast VARCHAR(100),
u_email VARCHAR(100),
FULLTEXT (u_namefirst, u_namelast, u_email)
) ENGINE=MyISAM;
ALTER TABLE users ADD FULLTEXT INDEX search ( u_namelast ASC );
INSERT INTO users (u_namefirst, u_namelast, u_email) VALUES
('Michael','Williams','williams#williams.com'),
('Jon','Test','test#test.com'),
('Jane','Smith','smith#smith.com'),
('Fred','Francis','fred#fred.com'),
('Mike','Williams','mike#williams.com'),
('Michael','Burke','michael#burke.com');
So now we have an index on (u_namefirst, u_namelast, u_email) and one on just u_namelast.
You can either query against the individual index:
SELECT *
FROM users
WHERE MATCH (u_namelast) AGAINST ('Williams');
Or query against the one with multiple fields:
SELECT *
FROM users
WHERE MATCH (u_namefirst, u_namelast, u_email) AGAINST ('Williams');
Both should give you the following result:
1 Michael Williams williams#williams.com
5 Mike Williams mike#williams.com
I have a mysql table that stores a mapping from an ID to a set of values:
CREATE TABLE `mapping` (
`ID` bigint(20) unsigned NOT NULL,
`Value` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
This table is a list of values; the ID of a row selects the set that the value belongs to. So the column ID is unique per set, but not unique per row.
I insert data into the table using the following statement:
INSERT INTO `mapping`
SELECT 5, `value` FROM `set1`;
In this example I calculated the ID manually and set it to 5.
It would be great if MySQL could set this ID automatically. I know about the auto-increment feature, but using it will not work, because all rows inserted with the same INSERT statement should have the same ID.
So each insert statement should generate a new ID and then use it for all inserted rows.
Is there a way to accomplish this?
I am not convinced by it (I'm not sure whether locking the table is a good idea; I think it's not), but this might help:
lock tables `mapping` as m write, `mapping` as m1 read, `set1` read;
insert into m
select (select coalesce(max(id), 0) + 1 from m1), `value` from `set1`;
unlock tables;
One option is to have an additional table with an auto-generated key, one row per set. Insert into that table (with any other data that is necessary or appropriate), thus generating the new ID, and then use the generated key to insert into the mapping table.
This moves you to a world where the non-unique id is a foreign-key reference to a truly unique key, much more in keeping with typical relational-database thinking.
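A minimal sketch of that approach, assuming a hypothetical key table named mapping_set: each insert into it mints one new set id, and LAST_INSERT_ID() (which is per-connection in MySQL) carries that id into the bulk insert:

```sql
-- Key table: one row per set, existing only to mint unique ids
CREATE TABLE mapping_set (
  id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB;

-- Mint a new set id, then reuse it for every row inserted from set1
INSERT INTO mapping_set () VALUES ();
INSERT INTO `mapping` (`ID`, `Value`)
SELECT LAST_INSERT_ID(), `value` FROM `set1`;
```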