Suppose we have a table named SMALLER, with columns num_1 and num_2, both of integer type, and some data in it.
It looks like this:
`num_1` `num_2`
1 2
2 3
2 8
3 4
4 5
... (many more rows)
What I'm trying to do is expand this table and collect all "smaller" relations, so that the result table looks like this:
`num_1` `num_2`
1 2
1 3
1 4
1 5
1 8
2 3
2 4
2 5
2 8
3 4
3 5
4 5
I appreciate any help!
Furthermore, what if, instead of a "smaller" relation, this table just holds a "connected" relation? For instance, '1' connected to '2', '2' connected to '3', and '2' connected to '4', so that we would say 1-2, 1-3, 1-4, 2-3, 2-4.
A good place to start would be:
SELECT
A.num_1, B.num_2
FROM
Smaller AS A JOIN Smaller AS B ON (A.num_1 < B.num_2)
ORDER BY A.num_1, B.num_2;
Inside your stored procedure, put this into a cursor, iterate over the cursor, and for each row do an INSERT IGNORE, i.e.:
DECLARE num1, num2 INT;
DECLARE done INT DEFAULT 0;
DECLARE mycursor CURSOR FOR       # the SELECT from above
    SELECT A.num_1, B.num_2
    FROM Smaller AS A JOIN Smaller AS B ON (A.num_1 < B.num_2)
    ORDER BY A.num_1, B.num_2;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN mycursor;
my_loop: LOOP
    FETCH mycursor INTO num1, num2;
    IF done THEN
        LEAVE my_loop;
    END IF;
    # INSERT IGNORE needs a UNIQUE key on (num_1, num_2) to skip pairs that already exist
    INSERT IGNORE INTO Smaller VALUES (num1, num2);
END LOOP;
CLOSE mycursor;
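Note that the cursor isn't strictly required. A hedged set-based sketch of the same idea, run repeatedly until no new rows are inserted, is below; it assumes a UNIQUE key on (num_1, num_2) so INSERT IGNORE can skip existing pairs, and it uses the chaining condition A.num_2 = B.num_1 so it also covers the "connected" case from your update:
INSERT IGNORE INTO Smaller (num_1, num_2)
SELECT A.num_1, B.num_2
FROM Smaller AS A
JOIN Smaller AS B ON A.num_2 = B.num_1;
Re-run the statement until ROW_COUNT() reports 0 inserted rows; at that point the table holds the full closure.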
To answer your updated question: I'm not entirely sure whether you mean "connected" as relations between unique rows (you would need two columns to store this relation, so it would be quite similar), or whether you mean you have one table containing all numbers and another two-column table containing relations between the rows of the first table.
Or, finally, you might want a table just containing strings like "1-2", "1-3", etc. If that's the case, I would keep it as two individual columns and just output them as strings using CONCAT when you query the table :)
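For example, a minimal sketch of that CONCAT output, using the column names from your question:
SELECT CONCAT(num_1, '-', num_2) AS pair
FROM Smaller
ORDER BY num_1, num_2;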
I have "table1" in which master data store like -
id name create_date validity expire_date
1 A 2015-08-01 3 2015-11-01
2 B 2015-09-01 12 2016-08-01
3 C 2015-09-15 1 2015-10-15
Now I want to insert rows into "table2" with expire_date values according to the validity period, without changing anything in the front end. I want to achieve this using a trigger or a procedure, so that table2 ends up like this:
id parent_id expire_date
1 1 2015-09-01
2 1 2015-10-01
3 1 2015-11-01
How can I achieve this using a procedure or a trigger?
It's hard to be specific because your question is not specific.
In general, here's the procedure to follow to design a query to insert stuff from one table into another.
First, write a SELECT query yielding a resultset containing the rows and columns you want inserted into your table. Use a list of columns to get the right columns, and appropriate WHERE clauses to get the right rows. Eyeball that query and that resultset to make sure it contains the correct information.
Second, prepend that debugged SELECT query with
INSERT INTO target_tablename (col, col, col)
Test this to make sure the correct rows are being inserted into your target table.
Third, create yourself an EVENT in your MySQL server to run the query you have just written. The event will, at the appropriate times of day, run your query.
If you take these steps out of order, you'll probably be very confused.
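For instance, here is a hedged sketch of steps two and three using the question's table1/table2 and a deliberately simplified column mapping (your real SELECT would expand each row into one record per month of validity; table2.id is assumed to be auto-increment):
-- steps 1 and 2: the debugged SELECT, prepended with INSERT INTO
INSERT INTO table2 (parent_id, expire_date)
SELECT id, expire_date
FROM table1
WHERE expire_date >= CURDATE();
-- step 3: have the server run the same statement on a schedule
CREATE EVENT IF NOT EXISTS copy_expirations
ON SCHEDULE EVERY 1 DAY
DO
  INSERT INTO table2 (parent_id, expire_date)
  SELECT id, expire_date
  FROM table1
  WHERE expire_date >= CURDATE();
The event scheduler must be enabled (SET GLOBAL event_scheduler = ON) for the event to run.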
You can achieve the task using a stored procedure:
CREATE DEFINER=`root`@`localhost` PROCEDURE `addexpire`(IN uname VARCHAR(50), IN cdate DATE, IN vm INT)
BEGIN
    INSERT INTO table1 (name, create_date, validity) VALUES (uname, cdate, vm);
    BEGIN
        DECLARE uparent_id INT;
        DECLARE v_val INT DEFAULT 0;
        SET uparent_id = LAST_INSERT_ID();
        WHILE v_val < vm DO
            BEGIN
                DECLARE expire_date DATE;
                SET expire_date = DATE_ADD(cdate, INTERVAL v_val + 1 MONTH);
                INSERT INTO table2 (parent_id, expire_date) VALUES (uparent_id, expire_date);
                SET v_val = v_val + 1;
            END;
        END WHILE;
    END;
END
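A hypothetical usage example (the argument values are made up):
CALL addexpire('D', '2015-10-01', 2);
-- inserts one master row into table1 and, for a validity of 2, two rows into
-- table2 with expire_date 2015-11-01 and 2015-12-01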
I am fairly new to SQL and BigQuery.
I have a dataset, and I want to retrieve the values in column 2 corresponding to the values in column 1 when they satisfy certain conditions. I want to know how to do that. I am using the BigQuery platform.
Example Dataset D :
Col 1 ; Col 2
A ; 1
B ; 2
C ; 3
D ; 4
E ; 5
Query to retrieve values of col1, col2 such that col2 >2
Expected Output :
C ; 3
D ; 4
E ; 5
As I understand it,
SELECT col1,col2
FROM [D]
WHERE col2>2
will give col1 and col2 as output where col2 > 2, but I'm worried that the values in col2 may or may not be the ones corresponding to col1 in the same row.
Am I wrong? If so, please suggest a query to get the necessary output.
If you don't have a row A;5 in the table, it will never appear in your result: a WHERE clause only filters whole rows, so the col1 and col2 values you get back always come from the same row. The only time you need to worry about a mismatch is if you're doing a join between one data set of {A, B, C, D, E} and another of {1, 2, 3, 4, 5}. Then every possible combination from A;1, A;2... to ...E;4, E;5 would be output, and filtering on col2 > 2 would produce A;3, B;3, C;3, ..., etc. But that isn't how your data is set up in your question, so don't worry. If you wonder how a SELECT query will work, it's usually okay to just run it, unless it will take hours and consume tons of resources and you have a budget... but it seems more like you're doing homework.
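To illustrate the join case described above in generic SQL (letters and numbers are hypothetical tables, not part of your dataset):
-- a plain filter never mixes values across rows:
SELECT col1, col2 FROM D WHERE col2 > 2;   -- C;3, D;4, E;5
-- only a join between two separate sets produces every combination:
SELECT l.col1, n.col2
FROM letters AS l
CROSS JOIN numbers AS n
WHERE n.col2 > 2;                          -- A;3, A;4, ..., E;5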
Also, don't ask for homework help on Stack Overflow.
I have a table with columns like (PROPERTY_ID, GPSTIME, STATION_ID, PROPERTY_TYPE, VALUE) where PROPERTY_ID is primary key and STATION_ID is foreign key.
This table records state changes; each row represents the property value of some station at a given time. However, its data was converted from an old table where each property was a column (like (STATION_ID, GPSTIME, PROPERTY1, PROPERTY2, PROPERTY3, ...)). Because usually only one property changed at a time, I have lots of duplicates.
I need to remove all successive rows with same values.
Example. Old table contained values like
time stn prop1 prop2
100 7 red large
101 7 red small
102 7 blue small
103 7 red small
The converted table is
(order by time,type) (order by type,time)
time stn type value time stn type value
100 7 1 red 100 7 1 red
100 7 2 large 101 7 1 red
101 7 1 red 102 7 1 blue
101 7 2 small 103 7 1 red
102 7 1 blue 100 7 2 large
102 7 2 small 101 7 2 small
103 7 1 red 102 7 2 small
103 7 2 small 103 7 2 small
should be changed to
time stn type value
100 7 1 red
100 7 2 large
101 7 2 small
102 7 1 blue
103 7 1 red
The table contains about 22 million rows.
My current approach is to use procedure to iterate over the table and remove duplicates:
BEGIN
    DECLARE done INT DEFAULT FALSE;
    DECLARE id INT;
    -- p* variables hold the previous row's values, n* the current row's
    DECLARE psid, nsid INT DEFAULT null;
    DECLARE ptype, ntype INT DEFAULT null;
    DECLARE pvalue, nvalue VARCHAR(50) DEFAULT null;
    DECLARE cur CURSOR FOR
        SELECT station_property_id, station_id, property_type, value
        FROM station_property
        ORDER BY station_id, property_type, gpstime;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO id, nsid, ntype, nvalue;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- delete the current row if it repeats the previous row's (station, type, value)
        IF (psid = nsid AND ptype = ntype AND pvalue = nvalue) THEN
            DELETE FROM station_property WHERE station_property_id = id;
        END IF;
        SET psid = nsid;
        SET ptype = ntype;
        SET pvalue = nvalue;
    END LOOP;
    CLOSE cur;
END
However, it is too slow. On a test table with 20,000 rows it takes 6 minutes to remove 10,000 duplicates. Is there a way to optimize the procedure?
P.S. I still have my old table intact, so maybe it is better to try and convert it without the duplicates rather than dealing with duplicates after conversion.
UPDATE.
To clarify which duplicates I want to allow and which not.
If a property changes, then changes back, I want all 3 records to be saved, even though the first and the last contain the same station_id, type, and value.
If there are several successive (by GPSTIME) records with same station_id, type, and value, I want only the first one (which represents the change to that value) to be saved.
In short, a -> b -> b -> a -> a should be optimized to a -> b -> a.
SOLUTION
As @Kickstart suggested, I've created a new table populated with filtered data. To refer to previous rows, I've used an approach similar to the one used in this question.
rename table station_property to station_property_old;
create table station_property like station_property_old;
set #lastsid=-1;
set #lasttype=-1;
set #lastvalue='';
INSERT INTO station_property(station_id,gpstime,property_type,value)
select newsid as station_id,gpstime,newtype as type,newvalue as value from
-- this subquery adds columns with previous values
(select station_property_id,gpstime,#lastsid as lastsid,#lastsid:=station_id as newsid,
#lasttype as lasttype,#lasttype:=property_type as newtype,
#lastvalue as lastvalue,#lastvalue:=value as newvalue
from station_property_old
order by newsid,newtype,gpstime) sub
-- we filter the data, removing unnecessary duplicates
where lastvalue != newvalue or lastsid != newsid or lasttype != newtype;
drop table station_property_old;
Possibly create a new table, populated with a select from the existing table using a GROUP BY. Something like this (not tested, so excuse any typos):
INSERT INTO station_property_new
SELECT station_property_id, station_id, property_type, value
FROM (SELECT station_property_id, station_id, property_type, value, COUNT(*)
      FROM station_property
      GROUP BY station_property_id, station_id, property_type, value) Sub1
Regarding changing properties, can't you put a unique constraint in place to ensure the combination of the station/type/value columns is unique? That way you will not be able to change it to a value which would result in a duplication.
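If you did want to enforce that, a minimal sketch using the table and column names from the question would be the statement below; note that, per your update, the same value can legitimately recur over time, so this constraint may be too strict for your data:
ALTER TABLE station_property
ADD UNIQUE KEY uq_station_type_value (station_id, property_type, value);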
I have a table (MySQL) with different snippets and their tags in a column:
table snippets
---------------
Id title source tag
1 "title" "srouce code" "Zend, Smarty"
2 "title2" "srouce code2" "Zend jquery"
3 "title3" "srouce code3" "doctrine"
I want to write a SELECT statement so that I can build a tag cloud on my site, e.g.:
Zend(2), smarty(1), jquery(1), doctrine(1)
Note that tags are not always separated by a space; some tags are separated by a comma (,).
I also need a query to fetch all records with a specific tag, for which I think I can use something like the following until there is a better solution:
Select * from snippets where tag like "%ZEND%"
I'm looking for an optimized solution, please.
Create three tables!
table snippets
id | title | source_code
1 "title" "srouce code"
2 "title2" "srouce code2"
3 "title3" "srouce code3"
table tags
id | tag
1 "zend"
2 "smarty"
3 "doctrine"
4 "jquery"
table snippets_tags
id | snippet_id | tag_id
1 1 1
2 1 2
3 2 1
4 2 4
5 3 3
Tip: lower-case your tags because "Zend" and "zend" are the same
Now your tag cloud query should look like
SELECT tags.tag, COUNT(snippets_tags.id) AS snippet_count
FROM tags
LEFT JOIN snippets_tags ON snippets_tags.tag_id = tags.id
GROUP BY tags.id
Gives you a result like
tag | snippet_count
zend 2
smarty 1
doctrine 1
jquery 1
To select all snippets belonging to a certain tag:
SELECT snippets.*
FROM snippets
JOIN snippets_tags ON snippets_tags.snippet_id = snippets.id
JOIN tags ON snippets_tags.tag_id = tags.id
WHERE tags.tag = 'zend'
Have you thought about separating the source code and tags into separate tables?
Source Table
ID, Title, Source
1 "t1" "sc"
2 "t2" "sc"
3 "t3" "sc"
Tag Table
ID, Tag
1 "Zend"
2 "Smarty"
3 "jquery"
4 "doctrine"
SourceTagLink Table
SourceID, TagID
1 1
1 2
2 1
2 3
3 4
That way you have a unique list of tags that you can choose from, or add to.
You won't be doing any string parsing, so your selects will be much faster. It's sort of similar to how you assign tags to your post on this site.
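A minimal DDL sketch of that layout (the names, types, and keys here are assumptions, not taken from your existing schema; InnoDB is assumed so the foreign keys are enforced):
CREATE TABLE source (
    id     INT AUTO_INCREMENT PRIMARY KEY,
    title  VARCHAR(255) NOT NULL,
    source TEXT
) ENGINE=InnoDB;
CREATE TABLE tag (
    id  INT AUTO_INCREMENT PRIMARY KEY,
    tag VARCHAR(100) NOT NULL,
    UNIQUE KEY uq_tag (tag)            -- one row per distinct tag
) ENGINE=InnoDB;
CREATE TABLE source_tag_link (
    source_id INT NOT NULL,
    tag_id    INT NOT NULL,
    PRIMARY KEY (source_id, tag_id),   -- a tag is linked to a snippet at most once
    FOREIGN KEY (source_id) REFERENCES source (id),
    FOREIGN KEY (tag_id)    REFERENCES tag (id)
) ENGINE=InnoDB;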
EDIT
This is a function that I used to convert a multi-value string into a single-column table. It is written in MSSQL (T-SQL), but you should be able to convert it to MySQL.
CREATE FUNCTION [dbo].[ParseString](@String NVARCHAR(4000), @Delimiter CHAR(1) = ',')
RETURNS @Result TABLE(tokens NVARCHAR(4000))
AS
BEGIN
    -- We will be searching for the index of each occurrence of the given
    -- delimiter in the string provided, and will be extracting the characters
    -- between them as tokens.
    DECLARE @delimiterIndex INT
    DECLARE @token NVARCHAR(4000)
    -- Try to find the first delimiter, and continue until no more can be found.
    SET @delimiterIndex = CHARINDEX(@Delimiter, @String)
    WHILE (@delimiterIndex > 0)
    BEGIN
        -- We have found a delimiter, so extract the text to the left of it
        -- as a token, and insert it into the resulting table.
        SET @token = LEFT(@String, @delimiterIndex - 1)
        INSERT INTO @Result(tokens) VALUES (LTRIM(RTRIM(@token)))
        -- Chop the extracted token and this delimiter from our search string,
        -- and look for the next delimiter.
        SET @String = RIGHT(@String, LEN(@String) - @delimiterIndex)
        SET @delimiterIndex = CHARINDEX(@Delimiter, @String)
    END
    -- We have no more delimiters, so place the remainder of the string
    -- into the result as our last token.
    SET @token = @String
    INSERT INTO @Result(tokens) VALUES (LTRIM(RTRIM(@token)))
    RETURN
END
Basically you call it like
SELECT * FROM dbo.ParseString('this be a test', ' ')
It will return a single-column table:
this
be
a
test
SELECT * FROM dbo.ParseString('this:be a test', ':')
returns
this
be a test
You could add a call to the function in an update trigger to populate the new tables to help you make selects much easier. Once the trigger is built, just do a simple update like the following
UPDATE yourTable
SET Title = Title;
That will fire the trigger, populate the new tables, and make everything much easier for you without affecting existing code. Of course, you'll need to replace all known delimiters with a single one for it to work.
Any new records added or modified will automatically fire the trigger.
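If you stay in MySQL rather than porting the function, here is a hedged sketch of the same idea as an AFTER INSERT trigger (an AFTER UPDATE version would be analogous). It assumes your existing snippets table keeps its delimited tag column, that the tag and source_tag_link tables sketched above exist with a UNIQUE key on tag.tag, and that source_tag_link.source_id holds the snippet's id (repoint or drop its foreign key accordingly):
DELIMITER //
CREATE TRIGGER snippets_split_tags
AFTER INSERT ON snippets
FOR EACH ROW
BEGIN
    DECLARE remaining TEXT;
    DECLARE one_tag VARCHAR(100);
    DECLARE t_id INT;
    SET remaining = REPLACE(NEW.tag, ' ', ',');    -- normalize separators to ','
    WHILE remaining <> '' DO
        SET one_tag = LOWER(TRIM(SUBSTRING_INDEX(remaining, ',', 1)));
        IF one_tag <> '' THEN
            INSERT IGNORE INTO tag (tag) VALUES (one_tag);    -- add the tag if it is new
            SELECT id INTO t_id FROM tag WHERE tag = one_tag;
            INSERT IGNORE INTO source_tag_link (source_id, tag_id) VALUES (NEW.id, t_id);
        END IF;
        -- drop the token just handled (and its delimiter, if any) and continue
        IF LOCATE(',', remaining) > 0 THEN
            SET remaining = SUBSTRING(remaining, LOCATE(',', remaining) + 1);
        ELSE
            SET remaining = '';
        END IF;
    END WHILE;
END//
DELIMITER ;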
First you have to replace all separator characters like ',' and space with a single fixed "separator" character like '#'. You could use a temporary table for this.
Then you have to create a function that loops over the fields and searches for and counts the individual tags.
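For completeness, a hedged sketch of that counting approach run directly against the original delimited column (table and column names taken from the question; separators are normalized on the fly rather than via a temporary table):
-- how many snippets carry the tag 'zend'?
SELECT COUNT(*) AS zend_count
FROM snippets
WHERE CONCAT(',', REPLACE(LOWER(tag), ' ', ','), ',') LIKE '%,zend,%';
You would still need one such query per tag, which is why the normalized three-table layout above scales better.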
I have a strange query to perform from a website. I have sets of arrays that contain pertinent ids from many tables - one table per array. For example (the array name is the name of the table):
Array Set 1:
array "q": 1,2,3
array "u": 1,5
array "k": 7
Array Set 2:
array "t": 2,12
array "o": 8, 25
Array Set 3 (not really a set):
array "e": 5
I have another table, Alignment, which is not represented by the arrays. It implements a one-to-many relationship, allowing records from tables q, u, and k (array set 1, recorded as relType/relID in the table) to be linked to records from t and o (array set 2, recorded as keyType/keyID) and e (array set 3, also recorded as keyType/keyID). Example below:
Table: Alignment
id keyType keyID relType relID
1 e 5 q 1
2 o 8 q 1
3 o 8 u 1
4 t 2 q 2
5 t 2 k 7
6 t 12 q 1
So, in record 6, a record with an id of 12 from table t is being linked to a record with an id of 1 from table q.
I have to find missing links. The ideal state is that each of the ids from array set 1 have a record in the alignment table linking them to at least 1 record from array set 2. In the example, alignment record 1 does not count towards this goal, because it aligns a set 1 id to a set 3 id (instead of set 2).
Scanning the table, you can quickly see that there are some missing ids from array set 1: "q"-3 and "u"-5.
I've been doing this with a script, by looping through each set 1 array and looking for a corresponding record, which generates a whole bunch of SQL calls and really kills any page that calls this function.
Is there some way I could accomplish this in a single sql statement?
What I would like the results to look like (ideally) is a recordset consisting, magically, of data that doesn't exist in the table:
relType | relID
q 3
u 5
However, I would be elated with even a binary answer from the database - were all the proper ids found, true or false? (The missing-records array is required for other functions, but at least I'd be able to choose between the fast and slow options.)
Oh, MySQL 5.1.
User Damp gave me an excellent answer using a temporary table, a join, and an IS NULL test. But it was before I added the wrinkle that there is a third array set that needs to be excluded from the results, which also ruins the IS NULL part. I edited his SQL statement to look like this:
SELECT *
FROM k2
LEFT JOIN alignment
USING ( relType, relID )
HAVING alignment.keyType IS NULL
OR alignment.keyType = "e"
I've also tried it with a GROUP BY relID (I always thought that was a requirement of the HAVING clause). The problem is that my result set includes "q"-1, which is linked to all three types of records ("o", "t", and "e"). I need this result excluded, but I'm not sure how.
Here's the SQL I ended up with:
SELECT *
FROM k2
LEFT JOIN (
SELECT *
FROM alignment
WHERE keyType != 'e' and
(
(relType = 'q' AND relID IN ( 1, 2, 3 ))
OR
(relType = 'u' AND relID IN ( 1, 5 ))
OR
(relType = 'k' AND relID IN ( 7 ))
)
)A
USING ( relType, relID )
HAVING keyType Is Null
I have to dump the values for the IN qualifiers with script. The key was not to join to the alignment table directly.
You can try to go this route:
DROP TABLE IF EXISTS k2;
CREATE TEMPORARY TABLE k2 (relType varchar(10),relId int);
INSERT INTO k2 VALUES
('q',1),
('q',2),
('q',3),
('u',1),
('u',5),
('k',7);
SELECT * FROM k2
LEFT JOIN Alignment USING(relType,relId)
HAVING Alignment.keyType IS NULL
This should work well for small tables. Not sure about very large ones though...
EDIT
If you wanted to add a WHERE clause, the query would be as follows:
SELECT * FROM k2
LEFT JOIN Alignment USING(relType,relId)
WHERE Alignment.keyType != 'e'
HAVING Alignment.keyType IS NULL
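A hedged alternative sketch for larger tables, or for excluding the set-3 ('e') links without defeating the IS NULL test, is an anti-join written with NOT EXISTS (this assumes the same k2 temporary table and the Alignment table from the question, and that Alignment(relType, relID) is indexed):
SELECT k2.relType, k2.relId
FROM k2
WHERE NOT EXISTS (
    SELECT 1
    FROM Alignment a
    WHERE a.relType = k2.relType
      AND a.relID   = k2.relId
      AND a.keyType <> 'e'      -- links to set-3 tables don't count
);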