Insert multiple values in the same row of a MySQL table

I need to insert multiple values in the same row. For example, I need to store the different referrers that came to a site. The table looks like this:
Date | Ref            | Uri
-----+----------------+-----
28/9 | ref1 ref2 ref3 | url1
In the table above, the same date and URL received three different referrers.
How can I store the referrers in the same row for a particular date, and retrieve each referrer individually?
I hope that explains my requirement.

You can do this, but you shouldn't. It contradicts the database normalization rules, which you can read about here: https://en.wikipedia.org/wiki/Database_normalization.
Use an additional table that contains the primary key of your table above and connects it with each ref key. Example:
Existing Table:
T-Id |Date |Uri
--------------------------
1 | 28/9 |url1
2 | 28/9 |url2
New Table:
Id | Ref-Id | T-Id
--------------------------
1 | 1 | 1
2 | 2 | 1
3 | 3 | 1
4 | 1 | 2
5 | 3 | 2
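If it helps, here is a minimal sketch of what those two tables and a retrieval query could look like; the table and column names (page_visit, visit_referrer) and the date value are only illustrative:
CREATE TABLE page_visit (
  t_id INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- "T-Id" above
  visit_date DATE NOT NULL,                   -- "Date"
  uri VARCHAR(255) NOT NULL,                  -- "Uri"
  PRIMARY KEY (t_id)
) ENGINE=InnoDB;

CREATE TABLE visit_referrer (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  t_id INT UNSIGNED NOT NULL,                 -- points back to page_visit
  ref VARCHAR(255) NOT NULL,                  -- one referrer per row
  PRIMARY KEY (id),
  FOREIGN KEY (t_id) REFERENCES page_visit (t_id)
) ENGINE=InnoDB;

-- Retrieve each referrer individually for a given date and URL
SELECT v.visit_date, v.uri, r.ref
FROM page_visit v
JOIN visit_referrer r ON r.t_id = v.t_id
WHERE v.visit_date = '2016-09-28' AND v.uri = 'url1';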

First of all, you should not do that.
You should not save data in MySQL like that. No row should have a column in which more than one value is stored, separated by commas, spaces, or anything else. Instead, split such data into multiple rows; that way you can easily retrieve, update, and delete individual values.
But if you do want to save the data like that, you should go for the JSON data type.
As of MySQL 5.7.8, MySQL supports a native JSON data type that enables efficient access to data in JSON (JavaScript Object Notation) documents.
The values can be stored as a JSON array.
A JSON array contains a list of values separated by commas and enclosed within [ and ] characters:
["abc", 10, null, true, false]
Create table example:
CREATE TABLE `book` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(200) NOT NULL,
`tags` json DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
Insert data example:
INSERT INTO `book` (`title`, `tags`)
VALUES (
'ECMAScript 2015: A SitePoint Anthology',
'["JavaScript", "ES2015", "JSON"]'
);
There are many native functions in MySQL for handling the JSON data type:
How to Use JSON Data Fields in MySQL Databases
Mysql JSON Data Type
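For example, here is a quick sketch of pulling individual values back out of the tags column of the book table above (JSON_EXTRACT, JSON_CONTAINS and JSON_SEARCH are all native MySQL 5.7 functions):
-- First tag of every book
SELECT title, JSON_EXTRACT(tags, '$[0]') AS first_tag FROM book;

-- Books that have a given tag anywhere in the array
SELECT title FROM book WHERE JSON_CONTAINS(tags, '"ES2015"');

-- Path of a tag inside the array (e.g. "$[2]")
SELECT title, JSON_SEARCH(tags, 'one', 'JSON') AS tag_path FROM book;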
If the referrer is an entity with many attributes, then you can do as suggested by #rbr94. If the referrer has no more than one attribute, then splitting the data into multiple rows or using the JSON data type will do the job.
In the end it comes down to which solution you prefer.

Related

How to tackle data truncation error when copying rows from one table to another, both tables having same schema?

The database has two tables with the same schema:
date VARCHAR(20),
track VARCHAR(150),
race_number INT,
horse_number INT,
early_last5 FLOAT(10,1),
PRIMARY KEY (track, race_number, horse_number, date)
One is named sectional_table and the other window_sectional_table.
I want to copy all contents of sectional_table to window_sectional_table.
I do the most logical thing possible:
INSERT INTO window_sectional_table SELECT * FROM sectional_table;
Unfortunately, I am terrorized by this error.
Data truncated for column 'early_last5' at row 1
I investigate row 1. Row 1 looks like this.
+------------+----------+-------------+--------------+-------------+
| date       | track    | race_number | horse_number | early_last5 |
+------------+----------+-------------+--------------+-------------+
| 2021-05-03 | GUNNEDAH |           1 |            1 |         0.0 |
+------------+----------+-------------+--------------+-------------+
How do I proceed? I believe the value 0.0 should have been auto-filled for a null value.
The Data truncated for column 'early_last5' at row 1 error is referring to your INSERT statement rather than which specific value in the database is being truncated.
I believe the value 0.0 should have been auto filled for null value.
Nope. That’s not defined in your INSERT and it’s not defined in the table creation statement.
How do I proceed?
The simplest method would be to convert the FLOAT value to a DECIMAL for the INSERT, complete with an IFNULL if you want NULL to be converted to 0.0. The FLOAT data type has long been a thorn in the side of many people who expect it to work like a typical decimal.
Here’s a quick SQL statement that should get you moving again:
INSERT INTO `window_sectional_table` (`date`, `track`, `race_number`, `horse_number`, `early_last5`)
SELECT `date`, `track`, `race_number`, `horse_number`,
CAST(IFNULL(`early_last5`, 0) AS DECIMAL(10,1))
FROM `sectional_table`
ORDER BY `track`, `race_number`, `horse_number`, `date`;
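If you would rather fix this once at the schema level instead of casting on every copy, another option (assuming you are free to change the column type, and following the same FLOAT-vs-DECIMAL reasoning as above) is to switch early_last5 to DECIMAL in both tables and then run the plain statement again:
ALTER TABLE sectional_table        MODIFY early_last5 DECIMAL(10,1);
ALTER TABLE window_sectional_table MODIFY early_last5 DECIMAL(10,1);

INSERT INTO window_sectional_table SELECT * FROM sectional_table;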

Writing JSON data into Cassandra using python client, issue with primary key choice

So I want to write data, which is encoded as a JSON string, into a Cassandra table. I did the following steps:
Create a Cassandra table containing columns with all the attributes of my JSON string. Here is the CQL for that:
CREATE TABLE on_equipment (
ChnID varchar,
StgID varchar,
EquipID varchar,
SenID varchar,
value1 float,
value2 float,
value3 float,
electric_consumption float,
timestamp float,
measurement_location varchar,
PRIMARY KEY ((timestamp))
) WITH comment = 'A table for the on equipment readings';
Write a python Cassandra client to write the data into Cassandra from a JSON payload.
Here is the code snippet that makes the INSERT query (msg.value is the JSON string):
session.execute('INSERT INTO ' + table_name + ' JSON ' + "'" + msg.value + "';")
I get no writing errors when doing this.
However, I ran into a problem:
The JSON data I have is from IoT sources, and one of the attributes is a Unix timestamp. An example of a JSON record is as follows (notice the timestamp attribute):
{'timestamp': 1598279069.441547, 'value1': 0.36809349674042857, 'value2': 18.284579388599308, 'value3': 39.95615809003724, 'electric_consumption': 1.2468644044844224, 'SenID': '1', 'EquipID': 'MID-1', 'StgID': '1', 'ChnID': '1', 'measurement_location': 'OnEquipment'}
In order to insert many records, I have defined the timestamp value as the primary key of the data in the Cassandra table. The problem is that not all records are being written into Cassandra, only records whose timestamps fall into a certain group. I know this because I have produced around 100 messages and received zero write errors, yet the table contains only 4 rows:
timestamp  | chnid | electric_consumption | equipid | measurement_location | senid | stgid |   value1 |   value2 |   value3
-----------+-------+----------------------+---------+----------------------+-------+-------+----------+----------+---------
1.5983e+09 |     1 |             0.149826 |   MID-1 |          OnEquipment |     1 |     1 | 0.702309 | 19.92813 | 21.47207
1.5983e+09 |     1 |              1.10219 |   MID-1 |          OnEquipment |     1 |     1 | 0.141921 |  5.11319 | 78.17094
1.5983e+09 |     1 |              1.24686 |   MID-1 |          OnEquipment |     1 |     1 | 0.368093 | 18.28458 | 39.95616
1.5983e+09 |     1 |              1.22841 |   MID-1 |          OnEquipment |     1 |     1 | 0.318357 |  16.9013 |  71.5506
In other words, Cassandra is updating the values of these four rows, when it should be writing all 100 messages.
My guess is that I am using the Cassandra primary key incorrectly. The timestamp column is of type float.
My questions:
Does this behaviour make sense? Can you explain it?
What can I use as the primary key to solve this?
Is there a way to make the primary key a Cassandra writing or arrival time?
Thank you in advance for your help!
You have defined the primary key as just the timestamp - if you insert data into a Cassandra table, and the data you are writing has the same primary key as data already in the table, you will overwrite it. All inserts are in effect insert/update, so when you use the same primary key value a 2nd time, it will update.
As to the solution - this is trickier - the primary key has to hold true to its name - it is primary, i.e. unique. Even if it were a timestamp instead of a float, you should have at least one other field (such as the IoT device's unique identifier) within the primary key, so that two readings from two different devices made at the exact same time do not clash.
In Cassandra you model the data and the keys based on how you intend to access the data - without knowing that it would not be possible to know what the primary key (Partition + Clustering key) should be. You also ideally need to know something about the data cardinality and selectivity.
Identify and define the queries you intend to run against the data, that should guide your partition key and clustering key choices - which together make the primary key.
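As a minimal sketch under those caveats, assuming readings are usually queried per device and then by time (using EquipID and SenID as the partition key is only an assumption about your access pattern, and the timestamp is stored as a double for the precision reason explained below):
CREATE TABLE on_equipment (
    EquipID varchar,
    SenID varchar,
    ChnID varchar,
    StgID varchar,
    value1 float,
    value2 float,
    value3 float,
    electric_consumption float,
    timestamp double,
    measurement_location varchar,
    PRIMARY KEY ((EquipID, SenID), timestamp)
) WITH CLUSTERING ORDER BY (timestamp DESC)
  AND comment = 'One partition per device, one row per reading';
A query such as SELECT * FROM on_equipment WHERE EquipID = 'MID-1' AND SenID = '1'; then returns that device's readings newest first, and readings from different devices can never clash on the key.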
The specific issue to add to the above is that the data exceeds the precision a float can store - in effect capping the values and making them all identical. If you change the float to a double, the values are stored without being capped into the same value, so you get new rows instead of upserts. (The JSON insert part is not relevant to the issue, as it happens.)
Recreating the issue as follows:
CREATE TABLE on_equipment (
ChnID varchar,
timestamp float,
PRIMARY KEY ((timestamp))
) ;
insert into on_equipment(timestamp, chnid) values (1598279061,'1');
insert into on_equipment(timestamp, chnid) values (1598279062,'2');
insert into on_equipment(timestamp, chnid) values (1598279063,'3');
insert into on_equipment(timestamp, chnid) values (1598279064,'4');
select count(*) from on_equipment;
1
select timestamp from on_equipment;
1.59827904E9
You can see the value has been rounded and capped, and all 4 values were capped to the same value. If you use smaller numbers for the timestamps it works, but that isn't very useful.
Changing it to a double:
CREATE TABLE on_equipment (
ChnID varchar,
timestamp double,
PRIMARY KEY ((timestamp))
) ;
insert into on_equipment(timestamp, chnid) values (1598279061,'1');
insert into on_equipment(timestamp, chnid) values (1598279062,'2');
insert into on_equipment(timestamp, chnid) values (1598279063,'3');
insert into on_equipment(timestamp, chnid) values (1598279064,'4');
select count(*) from on_equipment;
4

MySQL convert multiple columns into JSON

I'm trying to convert multiple columns from one table into a single JSON column in another table in a MySQL database (version 5.7.16). I want to use a SQL query.
The first table looks like this:
CREATE TABLE `log_old` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`temperature` DECIMAL(5,2) NULL DEFAULT NULL,
`heating_requested` BIT(1) NULL DEFAULT NULL,
PRIMARY KEY (`id`)
) COLLATE='utf8_general_ci'
ENGINE=InnoDB;
The second table looks like this:
CREATE TABLE `log_new` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`data` JSON,
PRIMARY KEY (`id`)
) COLLATE='utf8_general_ci'
ENGINE=InnoDB;
The data JSON has the same format in all rows of the log_new table; it should look like this:
{
temperature: value,
heatingRequested: false
}
For example, log_old looks like this:
+--+-----------+-----------------+
|id|temperature|heating_requested|
+--+-----------+-----------------+
|1 | 12 | true |
+--+-----------+-----------------+
|2 | 14 | true |
+--+-----------+-----------------+
|3 | 20 | false |
+--+-----------+-----------------+
and I want log_new to look like this:
+--+-----------------------------------------+
|id| data |
+--+-----------------------------------------+
|1 |{temperature:12, heatingRequested: true} |
+--+-----------------------------------------+
|2 |{temperature:14, heatingRequested: true} |
+--+-----------------------------------------+
|3 |{temperature:20, heatingRequested: false}|
+--+-----------------------------------------+
I tried to use JSON_INSERT()
SELECT JSON_INSERT((SELECT data FROM log_new ), '$.temperature',
(SELECT temperature FROM log_old));
but this throws the error "subquery returns more than 1 row".
The only working solution I came up with is to use a WHILE loop and do it row by row, but this can take a long time:
DELIMITER //
CREATE PROCEDURE doLog()
BEGIN
  SELECT COUNT(*) INTO @length FROM log_old;
  SET @selectedid = 1;
  WHILE @selectedid <= @length DO
    SELECT temperature, heating_requested INTO @temperature, @heating_requested FROM log_old WHERE id = @selectedid;
    SELECT JSON_OBJECT('temperature', @temperature, 'heatingRequested', @heating_requested) INTO @data_json;
    SET @selectedid = @selectedid + 1;
    INSERT INTO log_new (data) VALUES (@data_json);
  END WHILE;
END;
//
DELIMITER ;
CALL doLog();
As all your data is available on single rows, you don't need subqueries or loops to build the JSON object.
You can try something like:
INSERT INTO log_new (data)
SELECT json_object('temperature',log_old.temperature,'heatingRequested',log_old.heating_requested)
FROM log_old
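If it helps, here is a quick way to sanity-check the result afterwards (JSON_EXTRACT is a native 5.7 function; table and column names as above):
SELECT id,
       JSON_EXTRACT(data, '$.temperature')      AS temperature,
       JSON_EXTRACT(data, '$.heatingRequested') AS heating_requested
FROM log_new
ORDER BY id;
Depending on how the BIT(1) column is serialized, you may also want to wrap heating_requested in CAST(heating_requested AS UNSIGNED) inside the JSON_OBJECT, so it lands in the JSON as 0/1 rather than a binary string.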
Use a programming language or BI tool. Your question is well thought out, but what I am missing is: why does this have to be done in MySQL?
An RDBMS, although many have reporting add-ons, is not intended to provide this kind of low-level manipulation. You are entering the reporting realm and may need to focus on a view of your data outside of the database. You would be best served using Node, PHP, Python, or just about any programming language with strong MySQL support (which is just about every modern language out there). BI tools include several free options like Pentaho/Kettle and Google's Data Studio, and countless commercial options like Tableau and the like.
It is my strong belief that stored procedures, although they have their place, should not be responsible for application logic.

Why doesn't ENUM store multiple values in MySQL?

I want to use the ENUM feature in a MySQL table.
I have created a table tbl_test with id as the primary key and an enum_col field of the ENUM data type.
CREATE TABLE tbl_test(
id INT NOT NULL AUTO_INCREMENT,
enum_col ENUM('a','b','c') NOT NULL,
PRIMARY KEY ( id )
);
When I try to store a single enum value it gets inserted, but when I try to store multiple enum values it throws a SQL error.
ERROR:
Data truncated for column 'enum_col' at row 1
Single ENUM value (CORRECT):
INSERT INTO tbl_test(id, enum_col) values(1, 'a');
Multiple ENUM values (FAILED):
INSERT INTO tbl_test(id, enum_col) values(2, 'a,b');
Any idea how to store multiple values in an ENUM column?
You should use the SET data type instead of ENUM if you want to store multiple values:
http://dev.mysql.com/doc/refman/5.7/en/set.html
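A minimal sketch of the same table using SET instead (the member list is taken from your ENUM definition; the table name tbl_test_set is just for illustration):
CREATE TABLE tbl_test_set (
  id INT NOT NULL AUTO_INCREMENT,
  set_col SET('a','b','c') NOT NULL,
  PRIMARY KEY (id)
);

-- Several members can now be stored in one row, comma separated
INSERT INTO tbl_test_set (id, set_col) VALUES (2, 'a,b');

-- Rows containing a particular member can be found with FIND_IN_SET
SELECT * FROM tbl_test_set WHERE FIND_IN_SET('b', set_col) > 0;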
That is because you can only store one value in it, and in fact you should only ever store one value in a column, whatever its type.
Use a separate table. Then you can store as many values as you like, using multiple records. Example:
tbl_test
--------
id | name
1 | test_X
2 | test_Y
3 | test_Z
tbl_test_enums
--------------
test_id | enum_value
1 | a
1 | b
2 | a
3 | c
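To read the values back out of such a layout, here is a sketch of the kind of queries you could use (table and column names as in the example above):
-- All values for one test, one row per value
SELECT enum_value FROM tbl_test_enums WHERE test_id = 1;

-- Or one row per test, with its values collapsed into a list
SELECT t.id, t.name, GROUP_CONCAT(e.enum_value) AS enum_values
FROM tbl_test t
LEFT JOIN tbl_test_enums e ON e.test_id = t.id
GROUP BY t.id, t.name;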

How to get the maximum element from a table in the DynamoDB AWS Console?

I am trying to create a database table in NoSQL in order to be able to retrieve the element with a maximum value in one of its columns.
Suppose the SQL schema looks like this:
Table_Page
PageId: int(10) - PK
Name: varbinary(255)
RevisionId: int(10) - FK
Table_Revision
RevisionId: int(10) - PK
Text: varbinary(255)
Rev_TimeStamp: binary(14)
How could I design the schema in Amazon DynamoDB Console such that it supports a query to retrieve the page with the latest Revision? Thanks!
I suppose the query you want to do is: given a page_id, find the revision of that page with the largest timestamp (instead of finding the revision with the largest timestamp regardless of page_id).
You can design your table in DynamoDB like this:
Table_Page_Revision
HashKey: PageId
RangeKey: Rev_TimeStamp
Attribute 1: RevisionId
Attribute 2: Text
Then another table just to store the name of a page:
Table_Page_Name
HashKey: PageId
Attribute: Name
To do your query, you can use this pseudo code:
Table_Page_Revision.query(HashKey="Your Page Id", ScanIndexForward=False, Limit=1)
We set the "scan forward" parameter to false, meaning it will go from the item with the largest range key to the smallest (DESC). We also set the limit to 1, which means we are only interested in getting one item back. Combined, this gives you the item with the largest Rev_TimeStamp.
I faced the same problem, and I think you are looking for a solution from the relational world that doesn't match the freedom of data modeling that DynamoDB gives you.
I managed to save a completely separate record that is the "latest version" of the data; with a suitable sort key you can get the latest data directly, in a single query and blazingly fast (:D).
Simply add a chunk of code after every put command (or update, or upsert) that also updates the single "latestVersion" record.
In your example you don't show which attributes are the partition key and the sort key, but I imagine a model like this:
HashKey (int, partition key) | version (int, sort key) | data (json) | created (date)
If you reserve the lowest sort-key value (00000) for the latest version, you can lay out the items like this:
HashKey | RevisionId | data | created
01 | 00000 | {latestData} | ...latestTimeStamp
01 | 00001 | { someData } | ...someTimeStamp
01 | 00002 | { someData } | ...someTimeStamp
02 | 00000 | {latestData} | ...latestTimeStamp
02 | 00001 | { someData } | ...someTimeStamp
When you insert the 01 - 00003 data, you overwrite the 00000 record too.
Doing this, whenever you get HashKey + 00000 you always have the latest data for that HashKey.
Hope it helps.