I want to use the ENUM feature in a MySQL table.
I have created a table tbl_test with id as the primary key and an enum_col field using the ENUM data type.
CREATE TABLE tbl_test(
id INT NOT NULL AUTO_INCREMENT,
enum_col ENUM('a','b','c') NOT NULL,
PRIMARY KEY ( id )
);
When I try to store a single enum value it gets inserted, but when I try to store multiple enum values it throws a SQL error.
ERROR:
Data truncated for column 'enum_col' at row 1
Single ENUM value (CORRECT):
INSERT INTO tbl_test(id, enum_col) values(1, 'a');
Multiple ENUM values (FAILED):
INSERT INTO tbl_test(id, enum_col) values(2, 'a,b');
Any idea how to store multiple values in an ENUM column?
You should use the SET data type instead of ENUM if you want to store multiple values:
http://dev.mysql.com/doc/refman/5.7/en/set.html
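For example, a minimal sketch of the same table using SET (it reuses the question's table layout; the failing 'a,b' insert from the question then works):
CREATE TABLE tbl_test(
id INT NOT NULL AUTO_INCREMENT,
set_col SET('a','b','c') NOT NULL, -- SET accepts any combination of the listed members
PRIMARY KEY ( id )
);
INSERT INTO tbl_test(id, set_col) VALUES (2, 'a,b'); -- stored as one comma-separated SET value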
That is because ENUM can only store one value - and in fact you should store only one value per column, whatever the column's type.
Use a separate table. Then you can store as many values as you like across multiple records. Example:
tbl_test
--------
id | name
1 | test_X
2 | test_Y
3 | test_Z
tbl_test_enums
--------------
test_id | enum_value
1 | a
1 | b
2 | a
3 | c
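A hedged sketch of that layout in SQL (the name column length and the FOREIGN KEY are assumptions, not from the question):
CREATE TABLE tbl_test (
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(50) NOT NULL,
PRIMARY KEY ( id )
);
CREATE TABLE tbl_test_enums (
test_id INT NOT NULL,
enum_value ENUM('a','b','c') NOT NULL,
PRIMARY KEY ( test_id, enum_value ), -- one row per value, duplicates impossible
FOREIGN KEY ( test_id ) REFERENCES tbl_test ( id )
);
-- All values for one record come back as separate rows:
SELECT t.name, e.enum_value
FROM tbl_test t
JOIN tbl_test_enums e ON e.test_id = t.id
WHERE t.id = 1;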
Database has two tables with the same schema:
date VARCHAR(20),
track VARCHAR(150),
race_number INT,
horse_number INT,
early_last5 FLOAT(10,1),
PRIMARY KEY (track, race_number, horse_number, date)
One is named sectional_table and another is window_sectional_table.
I want to copy all contents of sectional_table to window_sectional_table.
I do the most logical thing possible:
INSERT INTO window_sectional_table SELECT * FROM sectional_table;
Unfortunately, I am terrorized by this error:
Data truncated for column 'early_last5' at row 1
I investigate row 1. It looks like this:
+------------+----------+-------------+--------------+-------------+
| date | track | race_number | horse_number | early_last5 |
+------------+----------+-------------+--------------+-------------+
| 2021-05-03 | GUNNEDAH | 1 | 1 | 0.0 |
+------------+----------+-------------+--------------+-------------+
How do I proceed? I believe the value 0.0 should have been auto-filled for a null value.
The Data truncated for column 'early_last5' at row 1 error refers to row 1 of your INSERT statement, not to a specific value already stored in the database.
I believe the value 0.0 should have been auto filled for null value.
Nope. That’s not defined in your INSERT and it’s not defined in the table creation statement.
How do I proceed?
The simplest method would be to convert the FLOAT value to a DECIMAL for the INSERT, complete with an IFNULL if you want NULL converted to 0.0. The FLOAT data type stores binary approximations, and it has long been a thorn in the side of people who expect it to behave like an exact decimal type.
Here’s a quick SQL statement that should get you moving again:
INSERT INTO `window_sectional_table` (`date`, `track`, `race_number`, `horse_number`, `early_last5`)
SELECT `date`, `track`, `race_number`, `horse_number`,
CAST(IFNULL(`early_last5`, 0) AS DECIMAL(10,1))
FROM `sectional_table`
ORDER BY `track`, `race_number`, `horse_number`, `date`;
So I want to write data, which is encoded as a JSON string, into a Cassandra table. I took the following steps:
Create a Cassandra table containing columns for all the attributes of my JSON string. Here is the CQL for that:
CREATE TABLE on_equipment (
ChnID varchar,
StgID varchar,
EquipID varchar,
SenID varchar,
value1 float,
value2 float,
value3 float,
electric_consumption float,
timestamp float,
measurement_location varchar,
PRIMARY KEY ((timestamp))
) WITH comment = 'A table for the on equipment readings';
Write a Python Cassandra client to write the data into Cassandra from a JSON payload.
Here is the code snippet that makes the INSERT query (msg.value is the JSON string):
session.execute('INSERT INTO ' + table_name + ' JSON ' + "'" + msg.value + "';")
I get no write errors when doing this.
However, I ran into a problem:
The JSON data I have comes from IoT sources, and one of the attributes is a Unix timestamp. An example of a JSON record is as follows (notice the timestamp attribute):
{'timestamp': 1598279069.441547, 'value1': 0.36809349674042857, 'value2': 18.284579388599308, 'value3': 39.95615809003724, 'electric_consumption': 1.2468644044844224, 'SenID': '1', 'EquipID': 'MID-1', 'StgID': '1', 'ChnID': '1', 'measurement_location': 'OnEquipment'}
In order to insert many records, I have defined the timestamp value as the primary key in the Cassandra table. The problem is that not all records are being written into Cassandra, only records whose timestamps fall into a certain group. I know this because I have produced around 100 messages and received zero write errors, yet the table contains only 4 rows:
timestamp | chnid | electric_consumption | equipid | measurement_location | senid | stgid | value1 | value2 | value3
------------+-------+----------------------+---------+----------------------+-------+-------+----------+----------+----------
1.5983e+09 | 1 | 0.149826 | MID-1 | OnEquipment | 1 | 1 | 0.702309 | 19.92813 | 21.47207
1.5983e+09 | 1 | 1.10219 | MID-1 | OnEquipment | 1 | 1 | 0.141921 | 5.11319 | 78.17094
1.5983e+09 | 1 | 1.24686 | MID-1 | OnEquipment | 1 | 1 | 0.368093 | 18.28458 | 39.95616
1.5983e+09 | 1 | 1.22841 | MID-1 | OnEquipment | 1 | 1 | 0.318357 | 16.9013 | 71.5506
In other words, Cassandra is updating the values of these four rows when it should be writing all 100 messages.
My guess is that I am using the Cassandra primary key incorrectly. The timestamp column is of type float.
My questions:
Does this behaviour make sense? Can you explain it?
What can I use as the primary key to solve this?
Is there a way to make the primary key a Cassandra writing or arrival time?
Thank you in advance for your help!
You have defined the primary key as just the timestamp - if you insert data into a Cassandra table and the data you are writing has the same primary key as data already in the table, you will overwrite it. All inserts are in effect insert/update (upserts), so when you use the same primary key value a second time, it updates the existing row.
As to the solution - this is trickier. The primary key has to hold true to its name: it is primary, i.e. unique. Even if it were a timestamp instead of a float, you should include at least one other field (such as the IoT unique identifier) in the primary key so that two readings from two different devices made at the exact same time do not clash.
In Cassandra you model the data and the keys based on how you intend to access the data - without knowing that it would not be possible to know what the primary key (Partition + Clustering key) should be. You also ideally need to know something about the data cardinality and selectivity.
Identify and define the queries you intend to run against the data, that should guide your partition key and clustering key choices - which together make the primary key.
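As a hedged sketch (assuming readings are queried per sensor/equipment pair in time order; the column name reading_time is an assumption, not from the question):
CREATE TABLE on_equipment (
SenID varchar,
EquipID varchar,
reading_time timeuuid, -- unique per write and carries a timestamp
value1 double,
PRIMARY KEY ((SenID, EquipID), reading_time)
) WITH CLUSTERING ORDER BY (reading_time DESC);
-- now() generates a timeuuid at write time, so two inserts never collide:
INSERT INTO on_equipment (SenID, EquipID, reading_time, value1)
VALUES ('1', 'MID-1', now(), 0.368);
A timeuuid generated with now() also speaks to the arrival-time question: it captures the coordinator's write time while staying unique.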
The specific issue here, to add to the above, is that the data exceeds the precision at which a float can be stored - in effect capping the values and making them all identical. If you change the float to a double, the values are stored without being capped to the same number, so each insert creates a new row instead of upserting the same one. (The JSON insert part is not relevant to the issue, as it happens.)
Recreating the issue as follows:
CREATE TABLE on_equipment (
ChnID varchar,
timestamp float,
PRIMARY KEY ((timestamp))
) ;
insert into on_equipment(timestamp, chnid) values (1598279061,'1');
insert into on_equipment(timestamp, chnid) values (1598279062,'2');
insert into on_equipment(timestamp, chnid) values (1598279063,'3');
insert into on_equipment(timestamp, chnid) values (1598279064,'4');
select count(*) from on_equipment;
1
select timestamp from on_equipment;
1.59827904E9
You can see the value has been rounded and capped - all 4 values capped to the same number. If you use smaller numbers for the timestamps it works, but that isn't very useful.
Changing it to a double:
CREATE TABLE on_equipment (
ChnID varchar,
timestamp double,
PRIMARY KEY ((timestamp))
) ;
insert into on_equipment(timestamp, chnid) values (1598279061,'1');
insert into on_equipment(timestamp, chnid) values (1598279062,'2');
insert into on_equipment(timestamp, chnid) values (1598279063,'3');
insert into on_equipment(timestamp, chnid) values (1598279064,'4');
select count(*) from on_equipment;
4
Need to insert multiple values in the same row. For example, I need to store the different referrers that came to a site. The table looks like:
|Date |Ref |Uri
--------------------------
28/9 |ref1 ref2 ref3 |url1
In the above table, the same date and link got 3 different referrers.
How can I store the referrers in the same row for a particular date and still retrieve each referrer individually?
I hope you understand my requirement.
You can do this, but you shouldn't. It contradicts the database normalization rules, which you can see here: https://en.wikipedia.org/wiki/Database_normalization
Use a further table which contains the primary key from your table above and connects it with each ref key. Example:
Existing Table:
T-Id |Date |Uri
--------------------------
1 | 28/9 |url1
2 | 28/9 |url2
New Table:
Id | Ref-Id | T-Id
--------------------------
1 | 1 | 1
2 | 2 | 1
3 | 3 | 1
4 | 1 | 2
5 | 3 | 2
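A hedged sketch of reading the referrers back out of this layout (existing_table and new_table stand in for the two tables above):
-- One row per referrer for a given page:
SELECT e.`Date`, e.`Uri`, n.`Ref-Id`
FROM existing_table e
JOIN new_table n ON n.`T-Id` = e.`T-Id`
WHERE e.`T-Id` = 1;
-- Or one comma-separated string per page, for display only:
SELECT e.`Date`, e.`Uri`, GROUP_CONCAT(n.`Ref-Id`) AS refs
FROM existing_table e
JOIN new_table n ON n.`T-Id` = e.`T-Id`
GROUP BY e.`T-Id`, e.`Date`, e.`Uri`;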
First of all, you should not do that.
You should not save data in MySQL like that. No column should hold more than one value separated by commas, spaces, or anything else. Instead, split such data into multiple rows; that way you can easily retrieve, update, and delete any row.
But if you do want to save data like that, then you should go for the JSON data type.
As of MySQL 5.7.8, MySQL supports a native JSON data type that enables efficient access to data in JSON (JavaScript Object Notation) documents.
It can be saved using a JSON array.
A JSON array contains a list of values separated by commas and enclosed within [ and ] characters:
["abc", 10, null, true, false]
Create table example:
CREATE TABLE `book` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(200) NOT NULL,
`tags` json DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
Insert data example:
INSERT INTO `book` (`title`, `tags`)
VALUES (
'ECMAScript 2015: A SitePoint Anthology',
'["JavaScript", "ES2015", "JSON"]'
);
There are many native functions in MySQL for handling the JSON data type:
How to Use JSON Data Fields in MySQL Databases
Mysql JSON Data Type
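For instance, a small hedged example of querying the tags column from the book table above (JSON_EXTRACT, JSON_UNQUOTE, and JSON_CONTAINS exist as of MySQL 5.7.8):
-- Find books tagged "JSON" and pull out the first tag:
SELECT `id`, `title`, JSON_UNQUOTE(JSON_EXTRACT(`tags`, '$[0]')) AS first_tag
FROM `book`
WHERE JSON_CONTAINS(`tags`, '"JSON"');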
In the case when a referrer is an entity with many attributes, you can do as suggested by @rbr94. When a referrer has no more than one attribute, splitting the data into multiple rows or using the JSON data type will do the job.
In the end it depends on your choice of solution.
I have a table with columns
merchant_id | phone
1 | 879182782
2 | 324239324
Now what I want is a query to insert multiple values for the phone field:
merchant_id | phone
1 | 879182782,989838273
2 | 324239324,849238420,349289393
Can anyone help me with an example query? I tried UPDATE and more, but couldn't get it to work.
I agree with Gordon, but if you want it anyway, you can use CONCAT. Here is an example:
create table tabla(
merchant_id int AUTO_INCREMENT PRIMARY KEY,
phone text
);
insert into tabla(merchant_id,phone)
VALUES (1,'809-541-8935');
insert into tabla(merchant_id,phone)
VALUES (2,'809-541-8935');
insert into tabla(merchant_id,phone)
VALUES (3,'809-541-8935');
UPDATE tabla
SET phone = concat(phone, ',809-537-7791')
where merchant_id = 1;
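One caveat worth noting: CONCAT returns NULL if any argument is NULL, so appending to a NULL phone would wipe the value. A hedged variant using CONCAT_WS, which skips NULL arguments:
UPDATE tabla
SET phone = CONCAT_WS(',', phone, '809-537-7791')
WHERE merchant_id = 1;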
My problem is: I have a table with an auto_increment column. When I insert some values, all is right:
Insert first row : ID 1
Insert second row : ID 2
Now I want to insert a row at ID 10.
My problem is that after this, new rows are only inserted after ID 10 (which is the normal behaviour).
But I want the database to first fill up IDs 3-9 before doing that.
Any suggestions?
EDIT:
To clarify: this is for a URL shortener I want to build for myself.
I convert the ID to a word (a-zA-Z0-9) for the short link, and for saving in the database I convert the word back to a number, which is the ID of the table.
The problem is now:
I shorten the first link (without a name) -> the ID is 1 and the automatically generated name is 1 converted to a-zA-Z0-9, which is a.
Next the same happens -> the ID is 2 and the name is b, which is 2 converted.
Now it gets interesting: somebody wants to name the link test -> the ID is 4597691, which is test converted.
Next, somebody adds another link with no name -> the ID is 4597692, which would be tesu, because that number is converted back.
I want new rows to be automatically inserted at the last gap that was made (here 3).
You could have another integer column for URL IDs.
Your process then might look like this:
If a default name is generated for a link, then you simply insert a new row, fill the URL ID column with the auto-increment value, then convert the result to the corresponding name.
If a custom name is specified for a URL, then, after inserting a row, the URL ID column would be filled with the number obtained from converting the chosen name to an integer.
And so on. When looking up integer IDs, you would then use the URL ID column, not the table auto-increment column.
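As a rough sketch of that idea (every name here is hypothetical, not taken from the question):
CREATE TABLE links (
id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, -- internal row id, gaps do not matter
url_id INT UNSIGNED NOT NULL UNIQUE, -- the number the short name encodes to and from
target VARCHAR(2048) NOT NULL
);
-- Custom name: url_id = the number decoded from the chosen name.
-- Default name: url_id = a generated value, e.g. copied from id after the insert.
SELECT target FROM links WHERE url_id = 4597691; -- lookups use url_id, not id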
If I'm missing something, please let me know.
You could do 6 dummy inserts and delete/update them later as you need. Auto-increment is, by design, meant to limit the application's or user's control over the number, to ensure a unique value for every single record entered into the table.
You can reset the auto-increment counter so that the next inserted row gets ID 3:
ALTER TABLE MY_TABLE AUTO_INCREMENT = 3;
You would have to find the first unused ID, store it in a user variable, and use it as the ID for the insert:
SELECT @id := t1.id + 1
FROM sometable t1 LEFT JOIN sometable t2
ON t2.id = t1.id + 1
WHERE t2.id IS NULL
ORDER BY t1.id
LIMIT 1;
INSERT INTO sometable(id, col1, col2, ... ) VALUES(@id, 'aaa', 'bbb', ... );
You will have to run both queries for every insert while you still have gaps; it's up to you to decide whether that is worth doing.
Not 100% sure what you're trying to achieve, but something like this might work:
drop table if exists foo;
create table foo
(
id int unsigned not null auto_increment primary key,
row_id tinyint unsigned unique not null default 0
)
engine=innodb;
insert into foo (row_id) values (1),(2),(10),(3),(7),(5);
select * from foo order by row_id;
+----+--------+
| id | row_id |
+----+--------+
| 1 | 1 |
| 2 | 2 |
| 4 | 3 |
| 6 | 5 |
| 5 | 7 |
| 3 | 10 |
+----+--------+
6 rows in set (0.00 sec)