I have duplicated a table to create an archive table, and for some reason I can't get the append query to work.
This is the SQL code:
INSERT INTO tblArc
SELECT tblCostumer.*
FROM tblCostumer, tblArc
WHERE (((tblArc.num)=[Enter Client Number you'd like to move to the archive]));
When I enter the customer number, it says "You are about to append 0 row(s)" instead of appending 1 row.
That FROM clause gives you a cross join, which is probably not what you want. Because tblArc presumably doesn't yet contain a row with that num (and the WHERE clause tests tblArc.num rather than tblCostumer.num), the join produces nothing, which is why you see "append 0 row(s)" ...
FROM tblCostumer, tblArc
Instead, SELECT only from tblCostumer based on its primary key. For example, if the primary key is tblCostumer.num ...
INSERT INTO tblArc
SELECT tblCostumer.*
FROM tblCostumer
WHERE tblCostumer.num=[Enter Client Number you'd like to move to the archive];
And if the structures of the two tables are not the same, list the specific fields instead of ...
INSERT INTO tblArc
SELECT tblCostumer.*
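For example, if the shared columns were num, ClientName, and Balance (hypothetical names for illustration; substitute your real fields), the append would look like this:
INSERT INTO tblArc (num, ClientName, Balance)
SELECT tblCostumer.num, tblCostumer.ClientName, tblCostumer.Balance
FROM tblCostumer
WHERE tblCostumer.num=[Enter Client Number you'd like to move to the archive];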
So I have a SQL database and I want to duplicate all rows from a table named "searches" that have the word "google" in the column named "search_url". The table holds more than 300,000 records, so I don't want to duplicate them all, just the ones that have "google" in "search_url".
The new rows' ID should automatically increment, and all the rest of the columns must be the same, except for the column named "search_url" where "google" has to become "search.yahoo"
My SQL is not that good and I would appreciate all the help I can get! Thank you :)
EDIT:
Sample data:
ID  SEARCH_URL                                SEARCHABLE_ID  SEARCHABLE_TYPE
1   https://www.google.com/search?q=camping   1              App\News
2   https://www.google.com/search?q=hiking    2              App\News
So after the duplication, I want this to become:
ID  SEARCH_URL                                SEARCHABLE_ID  SEARCHABLE_TYPE
1   https://www.google.com/search?q=camping   1              App\News
2   https://www.google.com/search?q=hiking    2              App\News
3   https://search.yahoo.com/?q=camping       1              App\News
4   https://search.yahoo.com/?q=hiking        2              App\News
Here's a step by step. Follow this process so you don't end up inserting a load of bad data.
1) Select the rows you want, except the auto-increment primary key id - MySQL will manage that itself:
SELECT SEARCH_URL, SEARCHABLE_ID, SEARCHABLE_TYPE FROM searches -- where .. ?
Use a WHERE clause if you don't want all rows.
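For this question's data, a filter like the following would keep only the Google rows (a sketch; tighten the pattern if other URLs could contain 'google'):
-- '%google%' is a loose match; adjust it to your real URLs
SELECT SEARCH_URL, SEARCHABLE_ID, SEARCHABLE_TYPE
FROM searches
WHERE SEARCH_URL LIKE '%google%';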
2) Modify the data as needed. Here we use the REPLACE() string function to change google into search.yahoo:
SELECT REPLACE(SEARCH_URL, 'google', 'search.yahoo'), SEARCHABLE_ID, SEARCHABLE_TYPE FROM searches
Check that it looks good. (You said you wanted to change google -> search.yahoo, but strictly speaking your sample data demands something like www.google.com/search? -> search.yahoo.com/? to reproduce the desired output exactly.) CHECK YOUR DATA BEFORE YOU INSERT IT.
3) Then add an INSERT above it, to insert into your table:
INSERT INTO searches (SEARCH_URL, SEARCHABLE_ID, SEARCHABLE_TYPE)
SELECT REPLACE(SEARCH_URL, 'google', 'search.yahoo'), SEARCHABLE_ID, SEARCHABLE_TYPE FROM searches
You need to specify the full set of columns you're inserting, and the INSERT column count must match the SELECT column count.
If you have an auto-incrementing primary key, you can't do an insert into table select * from table, because that would select the ID column too and then try to insert a bunch of key values that already exist.
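Note that the plain google -> search.yahoo replacement will not reproduce your desired output exactly (the path would still be /search?). A sketch that matches your two sample URLs; the replacement strings here are my assumption derived only from that sample:
-- replacement strings derived from the sample rows above; verify against your real data
INSERT INTO searches (SEARCH_URL, SEARCHABLE_ID, SEARCHABLE_TYPE)
SELECT REPLACE(SEARCH_URL, 'www.google.com/search?', 'search.yahoo.com/?'), SEARCHABLE_ID, SEARCHABLE_TYPE
FROM searches
WHERE SEARCH_URL LIKE '%www.google.com/search?%';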
Please use the query below to get the desired output:
INSERT INTO searches (search_url, searchable_id, searchable_type)
SELECT REPLACE(search_url, 'google', 'search.yahoo'), searchable_id, searchable_type FROM searches;
Hope it works for you.
I have a table (People) with columns (id, name, year). Is there any way I can get all the ids (SELECT id FROM people) and use them to create new rows in another table (people_id, additional_info)? I mean that for every id in table People I want to add a new row in the other table containing the id and some additional information. Maybe something like a for-loop is needed here?
Well, usually you have information on a person and write one record with this information. It makes little sense to write empty records for all persons. Moreover, when it's just one record per person, why not simply add an information column to the people table?
Anyway, you can create mock records thus:
insert into people_info (people_id, additional_info)
select id, null from people;
Insert into targetTable (people_id, additional_info)
select id, 'additionalinfo' from PEOPLE;
No for-loop is needed; just use the above query.
You can use INSERT ... SELECT syntax for MySQL to insert the rows into people_info table (documentation here).
e.g.
insert into people_info(...)
select (...) from people; <and possibly other tables>
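For example, a minimal sketch assuming the other table is people_info (people_id, additional_info) and that the extra information is derived from the year column (a hypothetical choice):
-- 'Born in ...' is just a placeholder for whatever additional info you need
insert into people_info (people_id, additional_info)
select id, concat('Born in ', year) from people;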
I have a table with a field (Name) I'd like to create a unique index on, however it seems there are existing duplicates. I don't want to just get rid of the dupes, since some might have information in other fields that I need. Essentially I have:
ID
ParentID
Name
Code
RelatedID
So Goal 1 is I want to keep the record that has values in the secondary fields other than ID and Name. In most cases only one of the dupes will have such values.
Goal 2 is, in case two identical Names both have values but in different fields, I want to 'merge' those, since it is remotely possible one duplicate will have values in one key field and the other in another.
Finally, Goal 3 is, in the case that two names both have values in the same key field, I'd probably want to manually review those first.
It seems to me my first step as I read this would be Goal 3: manually review duplicates where the Name field is identical and more than one record has a non-null/non-empty value in a key field.
Once I address this, the goal would be to 'merge' the remaining records, i.e. keep one record with Name and any non-null/non-empty key fields from the others.
Any thoughts much appreciated.
Sounds like a solid plan - hope you have a development environment you can dry run it in.
Here is some code that may help you along
Starting with Goal 3.
This statement should help you find which records need to be reviewed.
SELECT *
FROM (
SELECT name,
GROUP_CONCAT(DISTINCT parentID) AS parentID,
GROUP_CONCAT(DISTINCT code) AS code,
GROUP_CONCAT(DISTINCT RelatedID) AS RelatedID
FROM foo
GROUP BY name
HAVING COUNT(*)>1) as summarized
WHERE parentID LIKE '%,%'
OR code LIKE '%,%'
OR RelatedID LIKE '%,%';
Anything that comes up in that query you will probably have to manually fix after figuring out why there are multiple values for the same field.
Once those fixes are in place, it's time for the merge. I would create a holding / temporary table with the correct values. MAX should take care of the logic to choose non-null values:
CREATE TABLE foo_values
SELECT name, MAX(parentID) AS parentID, MAX(code) AS code, MAX(RelatedID) AS RelatedID
FROM foo
GROUP BY name
HAVING COUNT(*)>1;
In theory, now you have the merged values. You can remove the duplicate name rows using whatever technique you are most comfortable with (see here) while adding your unique index. Finally, update the secondary fields by JOINing back to foo_values, as sketched below.
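A sketch of that last step, assuming the duplicates have already been removed so that name is unique in foo:
-- assumes duplicate names have already been deleted from foo
UPDATE foo
JOIN foo_values ON foo.name = foo_values.name
SET foo.parentID = foo_values.parentID,
    foo.code = foo_values.code,
    foo.RelatedID = foo_values.RelatedID;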
I need to add data to a MySQL database like this:
Person:
pId, nameId, titleId, age
Name:
nameId, name
Title:
titleId, title
I don't want to have any name or title more than once in the db, so I didn't see a solution with LAST_INSERT_ID().
My approach looks like that:
INSERT IGNORE INTO Name(name) VALUES ("Peter");
INSERT IGNORE INTO Title(title) VALUES ("Astronaut");
INSERT INTO Person(nameId, titleId, age) VALUES ((SELECT nameId FROM Name WHERE name = "Peter"), (SELECT titleId FROM Title WHERE title = "Astronaut"), 33);
But I guess that's a quite dirty approach!?
If possible I want to add multiple persons with one query, and without having anything more than once in the db.
Is this possible in a nice way? Thanks!
You could put title and name as two columns of your table and then:
set one UNIQUE index on each column if you don't want two identical names or two identical titles in the DB
or set a UNIQUE index on (title, name) if you don't want two entries having both the same name and the same title.
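For example, a minimal sketch, assuming the combined table is named Person as in your schema:
-- assuming the combined table is named Person
ALTER TABLE Person ADD UNIQUE (name);
ALTER TABLE Person ADD UNIQUE (title);
-- or one combined constraint instead:
ALTER TABLE Person ADD UNIQUE (title, name);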
If you really want to have separate tables, you could do as you suggested in your post, but wrapping all your insert statements in a TRANSACTION to allow rollback if you detect a duplicate somewhere.
See Design dilemma: If e-mail address already used, send e-mail "e-mail address already registered", but can't because can't add duplicate to table which appear to be exactly the same problem, but having name & email instead of name & titles.
START TRANSACTION;
INSERT INTO title(value) VALUES ("Prof.");
SELECT LAST_INSERT_ID() INTO @title_id;
-- Instead of using user-defined variable,
-- you should be able to use the last_insert_id
-- equivalent from the host language MySQL driver.
INSERT INTO username(value) VALUES ("Sylvain");
SELECT LAST_INSERT_ID() INTO @username_id;
INSERT INTO account(username_id, title_id) VALUES (@username_id, @title_id);
COMMIT;
See LAST_INSERT_ID()
A third solution would be to SELECT before doing your insert, to see if the entries are already present. But personally I wouldn't push the check-before-set approach: at the very least, it requires an extra query, which is mostly superfluous if you use indexes correctly.
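A fourth option, if the name and title columns carry UNIQUE indexes: INSERT ... ON DUPLICATE KEY UPDATE combined with the LAST_INSERT_ID(expr) trick makes LAST_INSERT_ID() return the existing row's id when the row is already there, so no extra SELECT is needed. A sketch, assuming nameId and titleId are AUTO_INCREMENT primary keys:
-- assumes nameId / titleId are AUTO_INCREMENT and name / title have UNIQUE indexes
INSERT INTO Name (name) VALUES ("Peter")
ON DUPLICATE KEY UPDATE nameId = LAST_INSERT_ID(nameId);
SELECT LAST_INSERT_ID() INTO @name_id;
INSERT INTO Title (title) VALUES ("Astronaut")
ON DUPLICATE KEY UPDATE titleId = LAST_INSERT_ID(titleId);
SELECT LAST_INSERT_ID() INTO @title_id;
INSERT INTO Person (nameId, titleId, age) VALUES (@name_id, @title_id, 33);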
I'm using MySQL 4.1. Some tables have duplicate entries that go against the constraints.
When I try to group rows, MySQL doesn't recognise the rows as being similar.
Example:
Table A has a column "Name" with the UNIQUE property.
The table contains one row with the name 'Hach?' and one row with the same name but a square at the end instead of the '?' (which I can't reproduce in this text field).
A "GROUP BY" on these 2 rows returns 2 separate rows.
This causes several problems, including the fact that I can't export and reimport the database. On reimporting, an error mentions that an INSERT has failed because it violates a constraint.
In theory I could try to import, wait for the first error, fix the import script and the original DB, and repeat. In practice, that would take forever.
Is there a way to list all the anomalies or force the database to recheck constraints (and list all the values/rows that go against them) ?
I can supply the .MYD file if it can be helpful.
To list all the anomalies:
SELECT name, count(*) FROM TableA GROUP BY name HAVING count(*) > 1;
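To see the complete rows rather than just the names and counts, you can join that back to the table; a sketch, assuming the table is TableA with the name column as above:
-- shows the full duplicate rows, not just the names
SELECT a.*
FROM TableA a
JOIN (SELECT name FROM TableA GROUP BY name HAVING count(*) > 1) d
  ON a.name = d.name;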
There are a few ways to tackle deleting the dups and your path will depend heavily on the number of dups you have.
See this SO question for ways of removing those from your table.
Here is the solution I provided there:
-- Setup for example
create table people (fname varchar(10), lname varchar(10));
insert into people values ('Bob', 'Newhart');
insert into people values ('Bob', 'Newhart');
insert into people values ('Bill', 'Cosby');
insert into people values ('Jim', 'Gaffigan');
insert into people values ('Jim', 'Gaffigan');
insert into people values ('Adam', 'Sandler');
-- Show table with duplicates
select * from people;
-- Create table with one version of each duplicate record
create table dups as
select fname, lname, count(*) as occurrences
from people group by fname, lname
having count(*) > 1;
-- Delete all matching duplicate records
delete people from people inner join dups
on people.fname = dups.fname AND
people.lname = dups.lname;
-- Insert single record of each dup back into table
insert into people select fname, lname from dups;
-- Show Fixed table
select * from people;
Create a new table, select all rows grouped by the unique key (in the example, the column Name), and insert them into the new table.
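A minimal sketch of that approach, assuming the table is TableA and relying on MySQL's loose GROUP BY handling (the default in 4.1) to pick one row per name:
-- relies on MySQL's loose GROUP BY picking one row per name
CREATE TABLE TableA_clean
SELECT * FROM TableA GROUP BY Name;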
To find out what that character is, run the following query:
SELECT HEX(Name) FROM TableName WHERE Name LIKE 'Hach%'
You will see the hex codes of that 'square'.
If that character is 'x', you could update like this (but if that column is UNIQUE you will get some errors):
UPDATE TableName SET Name=TRIM(TRAILING 'x' FROM Name);
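If the stray character can't be typed, CHAR() lets you reference it by the code the HEX query revealed; for example, assuming (hypothetically) the trailing byte turned out to be 0x92:
-- 0x92 is a hypothetical code; substitute whatever the HEX query showed
UPDATE TableName SET Name = TRIM(TRAILING CHAR(0x92) FROM Name);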
I'll assume this is a MySQL 4.1 random bug. Some values are just changing on their own for no particular reason, even if they violate some MySQL constraints, and MySQL is simply ignoring those violations.
To solve my problem, I will write a program that tries to reinsert every row of data in the same table (to be precise: another table with the same characteristics) and log every failure.
I will leave the incident open for a while in case someone gets the same problem and someone else finds a more practical solution.