I have the following table:

ID | Name | Currencies     | Aliases
1  | User | ["USD","EURO"] | {"User2":["3"]}
I want to write SQL that returns the following result, based on Currencies and Aliases, showing which people are an alias of a user and which users share the same currencies:

NAME  | Currencies | Aliases
User  | 1          | NULL
User2 | NULL       | 1
My initial SQL is the following:
SELECT NAME
FROM `table`
WHERE JSON_CONTAINS(Currencies, '"EURO"', '$')
OR JSON_CONTAINS(Aliases, '"3"', '$');
The problem with the code above is that I can't differentiate whether those users share the same aliases or the same currencies. The result doesn't need to be in the exact format shown above; anything that lets me tell the difference is fine.
By the way, I am using MariaDB 10.5.10 (10.5.10-MariaDB-1:10.5.10+maria~bionic).
Samples:
CREATE TABLE IF NOT EXISTS `table` (
ID INT,
NAME VARCHAR(255),
Currencies LONGTEXT,
Aliases LONGTEXT,
PRIMARY KEY(ID)
);
Insert data:
REPLACE INTO `table` (ID, NAME, Currencies, Aliases) VALUES (:id, :name, :currencies, :aliases);
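For reference, inserting the sample row from the table above means substituting the placeholders with the JSON strings as literals, e.g.:
-- The sample row from the question, with literal values in place of the placeholders.
REPLACE INTO `table` (ID, NAME, Currencies, Aliases)
VALUES (1, 'User', '["USD","EURO"]', '{"User2":["3"]}');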
If you want flags, just move the expressions to the SELECT:
SELECT NAME, JSON_CONTAINS(Currencies, '"EURO"', '$'),
JSON_CONTAINS(Aliases, '"3"', '$')
FROM `table`
WHERE JSON_CONTAINS(Currencies, '"EURO"', '$') OR
JSON_CONTAINS(Aliases, '"3"', '$');
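If you also want the output in the exact NULL/1 layout shown in the question, one option is to wrap each flag in NULLIF so a 0 becomes NULL; a minimal sketch against the same sample table:
-- Sketch only: NULLIF(flag, 0) turns a non-match (0) into NULL, mirroring the desired layout.
SELECT NAME,
       NULLIF(JSON_CONTAINS(Currencies, '"EURO"', '$'), 0) AS Currencies,
       NULLIF(JSON_CONTAINS(Aliases, '"3"', '$'), 0) AS Aliases
FROM `table`
WHERE JSON_CONTAINS(Currencies, '"EURO"', '$')
   OR JSON_CONTAINS(Aliases, '"3"', '$');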
Related
I have created the following table in CQL:
CREATE TABLE new_table (
idRestaurant INT,
restaurant map<text,varchar>,
InspectionDate date,
ViolationCode VARCHAR,
ViolationDescription VARCHAR,
CriticalFlag VARCHAR,
Score INT,
GRADE VARCHAR,
PRIMARY KEY (InspectionDate)
);
Then I inserted the data via a jar, and the restaurant column value looks like a JSON dictionary.
Running select restaurant from new_table; shows each restaurant value as a map of key/value pairs.
In normal SQL, selecting a JSON column's key would be select json_col.key from table, but that does not work in CQL. How can I select a JSON key's value as a column, or use it in a WHERE condition for filtering?
Thank you so much.
Instead of using a map, it would be better to change the table's schema to the following:
CREATE TABLE new_table (
idRestaurant INT,
restaurant_key text,
restaurant_value text,
InspectionDate date,
ViolationCode VARCHAR,
ViolationDescription VARCHAR,
CriticalFlag VARCHAR,
Score INT,
GRADE VARCHAR,
PRIMARY KEY (idRestaurant, restaurant_key));
Then you can select an individual row based on the restaurant_key with a query like:
select * from new_table where idRestaurant = ? and restaurant_key = ?
or select everything for a restaurant with:
select * from new_table where idRestaurant = ?
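As a hedged illustration of how the remodeled table is populated, each key of the original map becomes its own row (all literal values below are invented):
-- Illustration only: one row per map key; the values are made-up samples.
INSERT INTO new_table (idRestaurant, restaurant_key, restaurant_value,
                       InspectionDate, ViolationCode, ViolationDescription,
                       CriticalFlag, Score, GRADE)
VALUES (1, 'name', 'Some Restaurant', '2016-05-18', 'V001',
        'Sample violation', 'N', 10, 'A');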
I tried the CONCAT function to combine two columns and I got the output, but
my question is: why don't I see a new column being added to the table? Is the concatenation just a temporary result?
SELECT CONCAT(Name, ',', Continent) AS new_address FROM Country
If you want to add a column to the table, you need to alter the table:
alter table country add new_address varchar(255);
Then you can set the value using update:
update country
set new_address = concat_ws(' ', name, continent);
I prefer concat_ws() for this type of operation because it does not return NULL if one of the columns is NULL.
Note: The table has the "correct" values after the update. But, subsequent changes to the table might require that you re-run the update or that you use a trigger to maintain consistency.
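As a rough sketch of the trigger option (covering inserts only; an analogous BEFORE UPDATE trigger would be needed for updates), assuming the country table above:
-- Sketch: keep new_address in sync for newly inserted rows.
CREATE TRIGGER country_new_address_bi
BEFORE INSERT ON country
FOR EACH ROW
SET NEW.new_address = CONCAT_WS(' ', NEW.name, NEW.continent);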
One best practice is to define a view to do the calculation:
create view v_country as
    select c.*, concat_ws(' ', c.name, c.continent) as new_address
    from country c;
When you access the data through the view, the new_address field will always be correct.
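For example, a query against the view always computes the value on the fly:
-- new_address is calculated at query time, so it never goes stale.
select name, new_address from v_country;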
Yes, this creates a column that only exists in your SELECT query.
It certainly does not alter the underlying table.
If you wanted to add this computation to the underlying table, you could add a generated column (available as of MySQL 5.7.6):
CREATE TABLE Country
(
Name VARCHAR(100) NOT NULL,
Continent VARCHAR(100) NOT NULL
);
INSERT INTO Country
VALUES ('France', 'Europe'),
('Nigeria','Africa');
ALTER TABLE Country
ADD new_address VARCHAR(201) AS (CONCAT(Name,',',Continent));
SELECT *
FROM Country;
I've got a SQL 2008 R2 table defined like this:
CREATE TABLE [dbo].[Search_Name](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](300) NULL,
CONSTRAINT [PK_Search_Name] PRIMARY KEY CLUSTERED ([Id] ASC))
Performance querying the Name field using CONTAINS and FREETEXT works well.
However, I'm trying to keep the values of my Name column unique. Searching for an existing entry in the Name column is unbelievably slow for a large number of names (usually batches of 1,000), even with an index on the Name field. Query plans indicate I'm using the index as expected.
To search for an existing value, my query looks like this:
SELECT TOP 1 Id, Name from Search_Name where Name = 'My Name Value'
I've tried duplicating the Name column to another column and searching on the new column, but the net effect was the same.
At this point, I'm thinking I must be mis-using this feature.
Should I just stop trying to prevent duplication? I'm using a linking table to join these search name values to the underlying data. It seems somehow 'dirty' to just store a whole bunch of duplicate values...
...or is there a faster way to take a list of 1,000 names and see which ones are already stored in the database?
The first change to make is to get the entire list to SQL Server at one time. Regardless of how you add the names to the existing table, doing it as a set operation will make a big difference in performance.
Passing the List as a table-valued parameter (TVP) is a clean way to handle it. Have a look here for an example. You can still use an OUTPUT clause to track which rows did or didn't make the cut, for example:
-- Some sample existing names.
declare @Search_Name as Table ( Id Int Identity, Name VarChar(32) );
insert into @Search_Name ( Name ) values ( 'Bob' ), ( 'Carol' ), ( 'Ted' ), ( 'Alice' );
select * from @Search_Name;
-- Some (prospective) new names.
declare @New_Names as Table ( Name VarChar(32) );
insert into @New_Names ( Name ) values ( 'Ralph' ), ( 'Alice' ), ( 'Ed' ), ( 'Trixie' );
select * from @New_Names;
-- Add the unique new names.
declare @Inserted as Table ( Id Int, Name VarChar(32) );
insert into @Search_Name ( Name )
  output inserted.Id, inserted.Name into @Inserted
  select New.Name
    from @New_Names as New left outer join
      @Search_Name as Old on Old.Name = New.Name
    where Old.Id is NULL;
-- Results.
select * from @Search_Name;
-- The names that were added and their id's.
select * from @Inserted;
-- The names that were not added.
select New.Name
  from @New_Names as New left outer join
    @Inserted as I on I.Name = New.Name
  where I.Id is NULL;
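The demo above uses table variables so it runs on its own; an actual table-valued parameter, as mentioned earlier, might look roughly like this (the type and procedure names are invented for illustration):
-- Sketch only: a TVP type plus a procedure that inserts the names not already present
-- and returns the newly added rows to the caller.
CREATE TYPE dbo.NameList AS TABLE ( Name NVarChar(300) NOT NULL );
GO
CREATE PROCEDURE dbo.Add_Search_Names ( @Names dbo.NameList READONLY )
AS
BEGIN
    INSERT INTO dbo.Search_Name ( Name )
    OUTPUT inserted.Id, inserted.Name
    SELECT New.Name
    FROM @Names AS New
    LEFT OUTER JOIN dbo.Search_Name AS Old ON Old.Name = New.Name
    WHERE Old.Id IS NULL;
END;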
Alternatively, you could use a MERGE statement and OUTPUT the names that were added, those that weren't, or both.
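A rough sketch of that MERGE variant, reusing the table variables from above (OUTPUT here only reports the inserted names, since rows that merely matched are not affected):
-- Sketch: $action tags each affected row; with only a WHEN NOT MATCHED branch, that means inserts.
merge @Search_Name as Old
using @New_Names as New
   on Old.Name = New.Name
when not matched by target then
    insert ( Name ) values ( New.Name )
output $action as MergeAction, inserted.Id, New.Name;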
I want to create a table employee with id, name, dept, and username attributes.
The id column values are auto_increment. I want to concatenate the values of id with dept to generate the username.
Following is the query that I wrote:
CREATE TABLE employee (emp_id MEDIUMINT NOT NULL AUTO_INCREMENT, name CHAR(30) NOT NULL, dept CHAR(6) NOT NULL, username VARCHAR(255) NOT NULL, PRIMARY KEY (emp_id));
How do I insert values into this table properly? Please help.
If your usernames will always be the concatenation of id and dept, you don't need to create a separate column for it; you can just select it using the MySQL CONCAT function, like this:
SELECT id, name, dept, CONCAT(id, '-', dept) AS username FROM employee
This will return results for username like 13-sales.
If you want to actually store the username (maybe because you want the ability to customize it later), you'll have to first insert the row to get the id, and then update the row setting the username to the concatenated id and dept.
You can insert NULL (or simply omit the column) for the auto_increment id and MySQL will fill in the next id; then update the row you just inserted via LAST_INSERT_ID():
INSERT INTO employee (name, dept) VALUES (?, ?);
UPDATE employee SET username = CONCAT(emp_id, dept) WHERE emp_id = LAST_INSERT_ID();
Simply asking: is there any function available in MySQL to split a single row's elements into multiple columns?
I have a table with the fields user_id, user_name, user_location.
A user can add multiple locations. I am imploding the locations and storing them in the table as a single row using PHP.
When I show the user records in a grid view, I run into a pagination problem because I display the records by splitting user_locations. So I need to split user_locations (a single row into multiple columns).
Is there any function available in MySQL to split and count the records by a character (%)?
For example, user_location contains US%UK%JAPAN%CANADA.
How can I split this record into 4 columns?
I also need to check the count value (4). Thanks in advance.
First normalize the string, removing empty locations and making sure there's a % at the end:
select replace(concat(user_location,'%'),'%%','%') as str
from YourTable where user_id = 1
Then we can count the number of entries with a trick. Replace '%' with '% ', and count the number of spaces added to the string. For example:
select length(replace(str, '%', '% ')) - length(str)
as LocationCount
from (
select replace(concat(user_location,'%'),'%%','%') as str
from YourTable where user_id = 1
) normalized
Using substring_index, we can add columns for a number of locations:
select length(replace(str, '%', '% ')) - length(str)
as LocationCount
, substring_index(substring_index(str,'%',1),'%',-1) as Loc1
, substring_index(substring_index(str,'%',2),'%',-1) as Loc2
, substring_index(substring_index(str,'%',3),'%',-1) as Loc3
from (
select replace(concat(user_location,'%'),'%%','%') as str
from YourTable where user_id = 1
) normalized
For your example US%UK%JAPAN%CANADA, this prints:
LocationCount Loc1 Loc2 Loc3
4 US UK JAPAN
So you see it can be done, but parsing strings isn't one of SQL's strengths.
The "right thing" would be splitting the locations off to another table and establish a many-to-many relationship between them.
create table users (
id int not null auto_increment primary key,
name varchar(64)
);
create table locations (
id int not null auto_increment primary key,
name varchar(64)
);
create table users_locations (
id int not null auto_increment primary key,
user_id int not null,
location_id int not null,
unique index user_location_unique_together (user_id, location_id)
);
Then, ensure referential integrity either using foreign keys (and InnoDB engine) or triggers.
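A hedged sketch of the foreign-key option (constraint names are made up; the tables must use the InnoDB engine):
-- Sketch: every link row must point at an existing user and an existing location.
ALTER TABLE users_locations
  ADD CONSTRAINT fk_ul_user FOREIGN KEY (user_id) REFERENCES users (id),
  ADD CONSTRAINT fk_ul_location FOREIGN KEY (location_id) REFERENCES locations (id);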
This should do it:
DELIMITER $$
DROP PROCEDURE IF EXISTS `CSV2LST`$$
CREATE DEFINER=`root`@`%` PROCEDURE `CSV2LST`(IN csv_ TEXT)
BEGIN
SET @s=CONCAT('select \"',REPLACE(csv_,',','\" union select \"'),'\";');
PREPARE stmt FROM @s;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
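It would be called like this; note the procedure splits on commas, so the question's %-separated string would need the separator replaced first:
-- Usage sketch: each location comes back as its own row.
CALL CSV2LST(REPLACE('US%UK%JAPAN%CANADA', '%', ','));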
You should do this in your client application, not in the database.
When you make an SQL query, you must statically specify the columns you want to get; that is, you tell the DB which columns you want in your result set BEFORE executing it. For instance, if you have a datetime stored, you may do something like select month(birthday), year(birthday) from ..., so in this case we split the birthday column into two other columns, but the columns we will get are specified in the query.
In your case, you would have to fetch exactly that US%UK%JAPAN%CANADA string from the database and then split it later in your software, i.e.:
/* get data from database */
/* ... */
$user_location = ... /* extract the field from the resultset */
$user_locations = explode("%", $user_location);
This is a bad design. If you can change it, store the data in 2 tables:
table users: id, name, surname ...
table users_location: user_id (fk), location
users_location would have a foreign key to users through the user_id field.
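A minimal sketch of that layout (column names and types here are assumptions):
-- Sketch: one row per user per location instead of an imploded string.
CREATE TABLE users (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(64),
  surname VARCHAR(64)
);
CREATE TABLE users_location (
  user_id INT NOT NULL,
  location VARCHAR(64) NOT NULL,
  FOREIGN KEY (user_id) REFERENCES users (id)
);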