I have created the following table in CQL:
CREATE TABLE new_table (
    idRestaurant INT,
    restaurant map<text, varchar>,
    InspectionDate date,
    ViolationCode VARCHAR,
    ViolationDescription VARCHAR,
    CriticalFlag VARCHAR,
    Score INT,
    GRADE VARCHAR,
    PRIMARY KEY (InspectionDate)
);
Then I inserted the data via a jar, and the restaurant column ended up holding a JSON/dictionary-like value, so select restaurant from new_table; shows the column as a map of key/value pairs.
In regular SQL, selecting a key from a JSON column would be something like select json_col.key from table, but that does not work in CQL. How can I select a key's value from the map as a column, or use it for filtering in a WHERE condition?
Thank you so much
Instead of using a map, I would change the table's schema to the following:
CREATE TABLE new_table (
idRestaurant INT,
restaurant_key text,
restaurant_value text,
InspectionDate date,
ViolationCode VARCHAR,
ViolationDescription VARCHAR,
CriticalFlag VARCHAR,
Score INT,
GRADE VARCHAR,
PRIMARY KEY (idRestaurant, restaurant_key));
then you can select an individual row based on the restaurant_key with a query like:
select * from new_table where idRestaurant = ? and restaurant_key = ?
or select everything for a restaurant with:
select * from new_table where idRestaurant = ?
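For example (a sketch; the attribute names and all sample values below are made up), each entry of the old restaurant map becomes its own row, and any single attribute can then be selected or filtered because restaurant_key is part of the primary key:

INSERT INTO new_table (idRestaurant, restaurant_key, restaurant_value, InspectionDate,
                       ViolationCode, ViolationDescription, CriticalFlag, Score, GRADE)
VALUES (1, 'name', 'Some Restaurant', '2016-01-01', 'V01', 'Some violation', 'Critical', 12, 'A');

INSERT INTO new_table (idRestaurant, restaurant_key, restaurant_value, InspectionDate,
                       ViolationCode, ViolationDescription, CriticalFlag, Score, GRADE)
VALUES (1, 'cuisine', 'Pizza', '2016-01-01', 'V01', 'Some violation', 'Critical', 12, 'A');

-- fetch a single attribute of restaurant 1
SELECT restaurant_value FROM new_table WHERE idRestaurant = 1 AND restaurant_key = 'name';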
I have a table with the fields (id, brand, model, os).
id is the primary key.
The table has ~6000 rows.
Now I want to insert a new row with id = 4012 (that id already exists) and shift the id of every existing row from 4012 onward up by one.
The silliest way:
make a backup of the table
remove the entries with id >= 4012
insert the new entry with id = 4012
restore the table from the backup
Stupid, but it works ))
Looking for a more elegant solution.
Thx
Table structure:
CREATE TABLE IF NOT EXISTS `mobileslist` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`brand` text NOT NULL,
`model` text NOT NULL,
`os` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=14823 ;
I tried:
UPDATE mobileslist SET id = id + 1
WHERE id IN (SELECT id FROM mobileslist WHERE id >= 4822 ORDER BY id);
but got the error:
1093 - You can't specify target table 'mobileslist' for update in FROM clause
1) Create a temporary table, with descending order by ID.
2) Perform an UPDATE query on the temporary table which sets ID = ID + 1 WHERE ID >= 4012
3) Drop the temporary table
4) Perform your insert operation on the original table.
Hope I understood it right: you want to insert a new entry at position 4012, moving all entries currently at id = 4012 or more to a new id incremented by 1.
Hope this helps.
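One way to read those steps in MySQL (a sketch, not tested; tmp_ids and the brand/model/os values are placeholders) is to use the temporary table only to hold the affected ids, so the UPDATE no longer reads from the table it modifies, and to shift the highest ids first so the primary key never collides:

-- 1) temporary table with the ids that have to move
CREATE TEMPORARY TABLE tmp_ids
SELECT id FROM mobileslist WHERE id >= 4012;

-- 2) shift those ids up by one, highest first, to avoid duplicate-key errors
UPDATE mobileslist
SET id = id + 1
WHERE id IN (SELECT id FROM tmp_ids)
ORDER BY id DESC;

-- 3) drop the helper table
DROP TEMPORARY TABLE tmp_ids;

-- 4) insert the new row at the freed position
INSERT INTO mobileslist (id, brand, model, os)
VALUES (4012, 'NewBrand', 'NewModel', 'NewOS');

Since this is a single-table UPDATE, MySQL also accepts the ORDER BY without any temporary table at all: UPDATE mobileslist SET id = id + 1 WHERE id >= 4012 ORDER BY id DESC;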
Try this:
UPDATE <TableName>
SET id = id + 1
WHERE id IN (SELECT id FROM <TableName> WHERE id >= 4012 ORDER BY id);

INSERT INTO <TableName> (id, brand, model, os)
VALUES (4012, '<BrandName>', '<Model>', '<OS>');
Updated Answer:
The UPDATE above runs into the same 1093 error, because its subquery reads from the table being updated. Shifting the ids out of the way with an offset avoids both that error and duplicate-key collisions:

-- grab the current maximum id; adding it as an offset guarantees the shifted
-- ids cannot collide with any existing id
SELECT @MaxId := MAX(id) FROM mobileslist;

-- move every row from id 4012 upwards far out of the way
UPDATE mobileslist
SET id = id + @MaxId
WHERE id >= 4012;

-- insert the new row at the freed position
INSERT INTO mobileslist (id, brand, model, os)
VALUES (4012, 'TestBrand', 'TestModel', 'TestOS');

-- bring the moved rows back, one position higher than they were
UPDATE mobileslist
SET id = id - @MaxId + 1
WHERE id > @MaxId;
How do I use the SET datatype in MySQL? I have a table Train with the following fields:
trainno int
Weekdays set data type
Stops set data type
train name
How do I write a SELECT query that checks whether the Stops set contains a particular value like 'Mumbai'?
Create a table like:
CREATE TABLE cl_db.Train
(
trainno INT PRIMARY KEY AUTO_INCREMENT,
Stops set('aaa','bbb','ccc') NOT NULL
)
and you can query it like
select * from cl_db.Train where Stops like '%bbb%'
or, more reliably (LIKE also matches members that merely contain 'bbb' as a substring), like
select * from cl_db.Train where FIND_IN_SET('bbb',Stops)>0;
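A fuller version of the table, closer to the one in the question (a sketch; the trainname column name, the SET members, and the station names are assumptions), and the Mumbai query would then be:

CREATE TABLE cl_db.Train
(
    trainno   INT PRIMARY KEY AUTO_INCREMENT,
    trainname VARCHAR(50) NOT NULL,
    Weekdays  SET('Mon','Tue','Wed','Thu','Fri','Sat','Sun') NOT NULL,
    Stops     SET('Mumbai','Delhi','Chennai','Kolkata') NOT NULL
);

-- all trains that stop at Mumbai
SELECT * FROM cl_db.Train WHERE FIND_IN_SET('Mumbai', Stops) > 0;

-- all trains that stop at Mumbai and run on Sundays
SELECT * FROM cl_db.Train
WHERE FIND_IN_SET('Mumbai', Stops) > 0
  AND FIND_IN_SET('Sun', Weekdays) > 0;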
Please look at the following staging table. It has multiple rows for the same policy.
Data is loaded into this table from a flat file I receive from external sources.
Column values can change from one row to the next; see ColA. Only a few columns may be populated in the first row, with more populated in later rows; see ColB and ColC, which are NULL at first and get populated in the second and third rows.
CREATE TABLE #Stg
(
    PolicyNum    VARCHAR(10),
    ColA         VARCHAR(10),
    ColB         VARCHAR(10),
    ColC         VARCHAR(10),
    TimeStampKey VARCHAR(100)
)

INSERT #Stg
    ( PolicyNum, ColA, ColB, ColC, TimeStampKey )
VALUES
    ( 'MDT1000', 'SomeVal_A1', NULL, NULL, '2013041113033140MDT1000ZA' ),
    ( 'MDT1000', 'SomeVal_A2', 'SomeVal_B', NULL, '2013041113051756MDT1000ZA' ),
    ( 'MDT1000', 'SomeVal_A3', 'SomeVal_B', 'SomeVal_C', '2013041113115418MDT1000ZA' )
From this staging table I need to load data into a destination table while maintaining history. The destination table is basically a type 2 slowly changing dimension. In other words, I have to load the first row from staging because the policy doesn't exist yet, then update it with the second row, and update it again with the third row.
Following is an example of the destination schema:
CREATE TABLE #Dst
(
PolicyKey INT IDENTITY(1,1) PRIMARY KEY
, PolicyNum VARCHAR(10)
, ColA VARCHAR(10)
, ColB VARCHAR(10)
, ColC VARCHAR(10)
, IsActive BIT
, RowStartDate DATETIME
, RowEndDate DATETIME
)
Normally I'd write a MERGE statement or an SSIS package to handle incremental loads and SCD dimensions, but since the original record and its change records arrive in the same file, the standard approach doesn't work.
I'd appreciate it if you could shed some light on how to approach this. I'm trying to avoid row-by-row operations.
Thanks,
Sam.
try this:
SELECT
Stg.*
FROM Stg
INNER JOIN
(
SELECT PolicyNum, MAX (TimeStampKey) AS MAX_TimeStampKey
FROM Stg
GROUP BY PolicyNum
) T
ON T.PolicyNum = Stg.PolicyNum
AND T.MAX_TimeStampKey = Stg.TimeStampKey
The result:
PolicyNum   ColA        ColB        ColC        TimeStampKey
----------  ----------  ----------  ----------  -------------------------
MDT1000     SomeVal_A3  SomeVal_B   SomeVal_C   2013041113115418MDT1000ZA
Please let us know if this helped you.
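If you also need the earlier versions loaded as history rows (the type 2 behaviour described in the question), a set-based sketch could look like the following. It assumes SQL Server 2012+ for LEAD(), treats the TimeStampKey order as the version order, and the parsing of the first 14 characters of TimeStampKey as yyyymmddhhmmss is an assumption about that column's format:

;WITH Parsed AS
(
    SELECT  PolicyNum, ColA, ColB, ColC,
            -- assumed: TimeStampKey starts with yyyymmddhhmmss
            CAST(STUFF(STUFF(STUFF(SUBSTRING(TimeStampKey, 1, 14), 13, 0, ':'),
                             11, 0, ':'), 9, 0, ' ') AS DATETIME) AS VersionDate
    FROM    #Stg
),
Ordered AS
(
    SELECT  *,
            LEAD(VersionDate) OVER (PARTITION BY PolicyNum
                                    ORDER BY VersionDate) AS NextVersionDate
    FROM    Parsed
)
INSERT #Dst (PolicyNum, ColA, ColB, ColC, IsActive, RowStartDate, RowEndDate)
SELECT  PolicyNum, ColA, ColB, ColC,
        CASE WHEN NextVersionDate IS NULL THEN 1 ELSE 0 END,  -- latest version stays active
        VersionDate,
        NextVersionDate
FROM    Ordered;

The earlier query (latest row per policy) then corresponds to the destination rows where RowEndDate is NULL and IsActive = 1.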
Say I have a table with three columns: primaryNum, secondaryNum, and chosenNum. primaryNum and secondaryNum are both number values, but chosenNum's value can be "primaryNum", "secondaryNum", or "both".
The chosenNum column tells me which field a number should be searched for in.
For example, I might want to find all rows where the value 10 appears in the column named by chosenNum. If chosenNum is "both", the row should be returned if either column (primaryNum or secondaryNum) has a value of 10.
What might my select statement look like?
Perhaps a better way to put it: I would like to do a select statement like:
SELECT * FROM aTable WHERE (SELECT bVal FROM bTable WHERE aVal = #varField ) = 0;
where #varField is the value of the field named by chosenNum, or of either field if chosenNum = "both".
This should return the rows with id 1, 2, 3, 4, 6, 7, 14, 15, 16, 19, 20, 21, 23, 24, 27.
Table A: Create
CREATE TABLE `test`.`aTable` (
`id` INT NOT NULL AUTO_INCREMENT ,
`primaryNum` INT NULL ,
`secondaryNum` INT NULL ,
`chosenNum` CHAR(12) NULL ,
PRIMARY KEY (`id`) );
Table B: Create
CREATE TABLE `test`.`bTable` (
`aVal` INT NULL ,
`bVal` INT NULL );
Table A: Data
INSERT INTO test.aTable VALUES (1,8,7,'secondaryNum'),(2,2,9,'secondaryNum'),(3,7,9,'both'),(4,5,1,'both'),(5,10,3,'secondaryNum'),(6,10,6,'both'),(7,7,8,'both'),(8,10,2,'primaryNum'),(9,2,1,'secondaryNum'),(10,7,2,'secondaryNum'),(11,2,2,'secondaryNum'),(12,5,1,'secondaryNum'),(13,1,6,'primaryNum'),(14,6,6,'both'),(15,4,9,'both'),(16,9,7,'primaryNum'),(17,8,3,'secondaryNum'),(18,10,7,'primaryNum'),(19,8,5,'secondaryNum'),(20,1,7,'both'),(21,7,9,'both'),(22,8,3,'primaryNum'),(23,6,2,'primaryNum'),(24,5,7,'both'),(25,2,1,'both'),(26,5,2,'secondaryNum'),(27,7,8,'primaryNum');
Table B: Data
INSERT INTO test.bTable VALUES (1,1),(2,1),(3,1),(4,1),(5,0),(6,0),(7,0),(8,1),(9,0),(10,1);
You can do something like this:
select *
from aTable
where (chosenNum in ('both', 'primaryNum') and primaryNum = 10)
or (chosenNum in ('both', 'secondaryNum') and secondaryNum = 10)
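To fold in the bTable lookup from the question as well (a sketch along the same lines; EXISTS is used so that "both" rows are not duplicated when both of their numbers match), checking for bVal = 0:

SELECT a.*
FROM aTable a
WHERE EXISTS (
    SELECT 1
    FROM bTable b
    WHERE b.bVal = 0
      AND (   (a.chosenNum IN ('both', 'primaryNum')   AND b.aVal = a.primaryNum)
           OR (a.chosenNum IN ('both', 'secondaryNum') AND b.aVal = a.secondaryNum))
);

Against the sample data above, this returns the ids 1, 2, 3, 4, 6, 7, 14, 15, 16, 19, 20, 21, 23, 24, 27 listed in the question.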