A JobID goes as follows: ALC-YYYYMMDD-001. The first three characters are a company's initials, and the last three are an incrementing number that resets daily and increments throughout the day as jobs are added, for a maximum of 999 jobs in a day; it is these last three that I am trying to work with.
I am trying to get a before-insert trigger to look up the max JobID of the day and add one, so that the trigger can derive the proper JobID. For the first job of the day it will of course return NULL. So here is what I have so far.
With the following I can get a result of '000':
SET @maxjobID = (
    SELECT SUBSTRING(
        (SELECT MAX(
            SUBSTRING(
                (SELECT JobID FROM jobs
                 WHERE SUBSTRING(JobID, 5, 8) = DATE_FORMAT(CURDATE(), '%Y%m%d')),
                4, 12)
        )),
        14, 3)
);

SELECT LPAD((SELECT IFNULL(@maxjobID, 0)), 3, '0')
But I really need to add one to this, keeping the leading zeros, to increment the first and subsequent jobs of the day. My problem is that as soon as I try to add 1, I get a return of 'BLOB'. That is:
SELECT LPAD((SELECT IFNULL(@maxjobID, 0) + 1), 3, '0')
returns 'BLOB'
I need it to return '001' so I can concatenate that result with the company initials and the current date.
Try casting the VARCHAR back to an INTEGER:
SELECT LPAD(CAST(COALESCE(@maxjobID, 0) AS SIGNED) + 1, 3, '0');
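As a quick sanity check (the starting value here is just a made-up example):

-- hypothetical value for the day's current maximum sequence
SET @maxjobID := '005';

-- cast before adding, then re-pad to three digits
SELECT LPAD(CAST(COALESCE(@maxjobID, 0) AS SIGNED) + 1, 3, '0');  -- returns '006'
-- with @maxjobID still NULL, the same expression returns '001'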
If you're using the MyISAM storage engine, you can implement exactly this with AUTO_INCREMENT, without denormalising your data into a delimited string:
For MyISAM tables, you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
In your case:
Normalise your schema:
ALTER TABLE jobs
ADD initials CHAR(3) NOT NULL FIRST,
ADD date DATE NOT NULL AFTER initials,
ADD seq SMALLINT(3) UNSIGNED NOT NULL AFTER date
;
Normalise your existing data:
UPDATE jobs SET
initials = SUBSTRING_INDEX(JobID, '-', 1),
date = STR_TO_DATE(SUBSTRING(JobID, 5, 8), '%Y%m%d'),
seq = SUBSTRING_INDEX(JobID, '-', -1)
;
Set up the AUTO_INCREMENT:
ALTER TABLE jobs
DROP PRIMARY KEY,
DROP JobID,
MODIFY seq SMALLINT(3) UNSIGNED NOT NULL AUTO_INCREMENT,
ADD PRIMARY KEY(initials, date, seq)
;
You can then recreate your JobID as required on SELECT (or even create a view from such a query):
SELECT CONCAT_WS(
'-',
initials,
DATE_FORMAT(date, '%Y%m%d'),
LPAD(seq, 3, '0')
) AS JobID,
-- etc.
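For instance, a view along these lines would give you a ready-made JobID on every read (a sketch; the view name is just illustrative):

CREATE VIEW jobs_with_jobid AS
SELECT CONCAT_WS(
           '-',
           initials,
           DATE_FORMAT(date, '%Y%m%d'),
           LPAD(seq, 3, '0')
       ) AS JobID,
       jobs.*
FROM jobs;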
If you're using InnoDB, whilst you can't generate sequence numbers in this fashion I'd still recommend normalising your data as above.
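One hedged sketch of that for InnoDB: keep seq as a plain (non-AUTO_INCREMENT) column and fill it from a BEFORE INSERT trigger (the trigger name here is illustrative). Note that MAX(seq) + 1 computed this way is subject to race conditions under concurrent inserts unless you serialise them:

DELIMITER //
CREATE TRIGGER jobs_seq_bi BEFORE INSERT ON jobs
FOR EACH ROW
BEGIN
    -- next sequence number within the (initials, date) group
    SET NEW.seq = (
        SELECT COALESCE(MAX(j.seq), 0) + 1
        FROM jobs AS j
        WHERE j.initials = NEW.initials
          AND j.date = NEW.date
    );
END//
DELIMITER ;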
So, I found a query that works (thus far).
DECLARE maxjobID VARCHAR(16);
DECLARE jobincrement SMALLINT;

SET maxjobID = (
    SELECT MAX(
        IFNULL(SUBSTRING(
            (SELECT JobID FROM jobs
             WHERE SUBSTRING(JobID, 5, 8) = DATE_FORMAT(CURDATE(), '%Y%m%d')),
            5,
            12), 0)
    )
);

IF maxjobID = 0 THEN
    SET jobincrement = 1;
ELSE
    SET jobincrement = (SELECT SUBSTRING(maxjobID, 10, 3)) + 1;
END IF;

SET NEW.JobID = CONCAT(
    NEW.AssignedCompany, '-',
    DATE_FORMAT(CURDATE(), '%Y%m%d'), '-',
    (SELECT LPAD(jobincrement, 3, '0'))
);
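For completeness, the body above sits inside a BEFORE INSERT trigger, roughly like this (a sketch; the trigger name is illustrative):

DELIMITER //
CREATE TRIGGER jobs_jobid_bi BEFORE INSERT ON jobs
FOR EACH ROW
BEGIN
    DECLARE maxjobID VARCHAR(16);
    DECLARE jobincrement SMALLINT;
    -- ... the SET / IF / SET NEW.JobID logic shown above ...
END//
DELIMITER ;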
Thanks for the responses! Especially eggyal for pointing out the auto_increment capabilities in MyISAM.
This is my first question on this website, so I'm sorry if I used any wrong keywords. I have been stuck on one problem for quite a few days.
The problem is, I have a MySQL table named property to which I wanted to add a reference number that will be a unique, six-digit, non-incremental number, so I altered the table to add a new column named uniqueIdentifier with a default value of 1.
ALTER TABLE property ADD uniqueIdentifier INT DEFAULT 1;
Then I wrote a script that first generates a number, checks against the database whether it already exists, and if it does not exist, updates the row with the random number.
Here is the snippet I tried:
with cte as (
    select subIdentifier, id from (
        SELECT id, LPAD(FLOOR(RAND() * (999999 - 100000) + 100000), 6, 0) AS subIdentifier
        FROM property as p1
        WHERE "subIdentifier" NOT IN (SELECT uniqueIdentifier FROM property as p2)
    ) as innerTable group by subIdentifier
)
UPDATE property SET uniqueIdentifier = (
    select subIdentifier from cte as c where c.id = property.id
) where property.id != ''
This query fills in a value for almost all the rows, but the table has 20,000 entries in total and only about 19,000 of them get filled; the rest of the rows are left null.
Here is the current output:
[current result picture]
If anyone can help, I would be extremely thankful.
Thanks
Instead of trying to randomly generate unique numbers that do not exist in the table, I would try the approach of randomly generating numbers using the ID column as a seed; as long as the ID number is unique, the new number will be unique as well. This is not technically fully "random" but it may be sufficient for your needs.
https://www.db-fiddle.com/f/iqMPDK8AmdvAoTbon1Yn6J/1
update Property set
UniqueIdentifier = round(rand(id)*1000000)
where UniqueIdentifier is null
SELECT id, round(rand(id)*1000000) as UniqueIdentifier FROM test;
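If the value must always be six digits, a small variation of the same seeded-RAND idea keeps the result between 100000 and 999999 (a sketch, not tested against your table):

-- RAND(id) is seeded by the unique id, so each row gets a repeatable value;
-- scaling into 100000-999999 keeps it at exactly six digits
UPDATE Property SET
    UniqueIdentifier = FLOOR(100000 + RAND(id) * 900000)
WHERE UniqueIdentifier IS NULL;

As with the answer above, this is deterministic per id rather than truly random, and collisions remain theoretically possible, so a UNIQUE index is still worth adding if the value has to be strictly unique.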
I am converting data from one database to another. The target db has a table called provider with primary key provider_no varchar(6).
I'm writing an insert to copy from the source table to the target table, and I need an incremented key for provider_no. Is there a function that can return an incrementing value for each row of a single insert statement?
There are a lot more columns, but the basic problem I'm trying to solve is:
INSERT INTO `target`.`provider`
(`provider_no`,
`lastUpdateDate`)
SELECT
'', -- incremented value
now()
from `source`.`provider`;
AUTO_INCREMENT only works for integer values, but I'm not at liberty here to change the data type.
Also, the source table doesn't have a usable primary key value that I can use for this copy.
To increment the varchar, first cast it to a number (either signed or unsigned), like so:
INSERT INTO `target`.`provider`
(`provider_no`,
`lastUpdateDate`)
SELECT
cast(the_varchar_field as signed) + 1, -- incremented value
now()
from `source`.`provider`;
Example:
mysql> select cast("001" as unsigned) + 1;
+-----------------------------+
| cast("001" as unsigned) + 1 |
+-----------------------------+
| 2 |
+-----------------------------+
Sorry, I thought you wanted to increment the varchar field from the source table.
To 'emulate' an AUTO_INCREMENT field as you want to do, we can do it with user variables like this:
insert into provider
select @cnt := @cnt + 1, now()
from sourceprovider, (select @cnt := 0) q;
And here's a little demo: http://sqlfiddle.com/#!9/09fd4/1
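Since provider_no is a VARCHAR(6), you may also want to zero-pad the generated value; here is a sketch of the same statement with LPAD:

insert into provider
select lpad(@cnt := @cnt + 1, 6, '0'), now()
from sourceprovider, (select @cnt := 0) q;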
Here's a solution I just implemented. There's a unique column called did on the source table, which I can use indirectly. There's probably a more elegant way to do it, but:
create table temp_key (
key_id int auto_increment primary key,
did varchar(4));
insert into temp_key (did) select did from source.provider;
INSERT INTO target.provider
(provider_no,
lastUpdateDate)
SELECT
k.key_id,
now()
FROM source.provider d
JOIN temp_key k on d.did = k.did;
drop table temp_key;
I have a table called t_home_feature with the following columns: id, type, sort_order. I then executed the following MySQL statement:
INSERT INTO t_home_feature (SELECT news_id, 'news', ( SELECT max(sort_order) + 1 FROM t_home_feature ) FROM t_news )
I then did a SELECT * FROM t_home_feature, but the sort_order for all rows has a value equal to the number of rows that were in t_home_feature prior to the insert statement, instead of increasing by one for each row.
How can I modify my insert query to achieve a previous row + 1 output?
You could turn the sort_order into an AUTO_INCREMENT field, which means the database will automatically increment it and you need not refer to it in your insert. The column has to be a key, but it does not have to be the primary key. Here's an example from:
http://forums.mysql.com/read.php?22,264498,264967#msg-264967
The link there shows a workaround:
create table ai (
id int auto_increment not null,
xx varchar(9),
key(id),
primary key (xx));
You may have to do something fancy with user variables. Perhaps something like this:
SELECT COUNT(1) INTO @maxval FROM t_home_feature;
INSERT INTO t_home_feature
SELECT news_id, 'news', @maxval := @maxval + 1 FROM t_news;
No need for any auto-increment values from tables.
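Since the original goal was max(sort_order) + 1 rather than the row count, a variation that seeds the variable from MAX() may match the intent more closely (a sketch; it assumes sort_order is numeric):

SELECT COALESCE(MAX(sort_order), 0) INTO @maxval FROM t_home_feature;
INSERT INTO t_home_feature
SELECT news_id, 'news', @maxval := @maxval + 1 FROM t_news;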
I have done things like this when answering questions before. Here are four (4) examples of questions I answered on the DBA StackExchange using user variables:
https://dba.stackexchange.com/questions/29007/update-ranking-on-table/29009#29009
https://dba.stackexchange.com/questions/29016/selecting-without-repititions/29018#29018
https://dba.stackexchange.com/questions/18987/update-rank-on-a-large-table/18990#18990
https://dba.stackexchange.com/questions/10251/whats-wrong-with-this-update-rank-query/10320#10320
I am currently working on a reporting project. In my data warehouse I need a dimension table "Time" containing all dates (since 01-01-2011, maybe?) which grows automatically every day, in the format yyyy-mm-dd.
I'm using MySQL on Debian by the way.
thanks
JT
You can add a DATE field and use a query like this:
INSERT INTO table(date_column, column1, column2)
VALUES(DATE(NOW()), 'value1', 'value2');
Also, you can add a TIMESTAMP column with ON UPDATE CURRENT_TIMESTAMP; in that case the date-time value will be updated automatically. See:
Automatic Initialization and Updating for TIMESTAMP
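For example, a column defined like this is initialised and refreshed automatically (a sketch; the table and column names are just placeholders):

-- hypothetical table/column names; the DEFAULT/ON UPDATE clauses are the point
ALTER TABLE your_table
    ADD updated_at TIMESTAMP NOT NULL
        DEFAULT CURRENT_TIMESTAMP
        ON UPDATE CURRENT_TIMESTAMP;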
See this answer
Or This one
There are a number of suggestions there. If your date range is going to be moderate, perhaps a year or two, and assuming your report uses a stored procedure to return the results, you could just create a temporary table on the fly using a rownum technique with limit to get you all of the dates in the range. Then join with your data as required.
Failing that, the union trick in the second answer seems to perform well according to the comments, and can be extended to whatever maximum range you will need. It's very messy though!
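For reference, one common form of that union trick looks roughly like this (a sketch assuming a 2011-01-01 start date; the three cross-joined digit tables cover up to 1000 days, and you can add another for longer ranges):

SELECT DATE_ADD('2011-01-01', INTERVAL u.n + t.n * 10 + h.n * 100 DAY) AS dt
FROM (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
      UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) u
CROSS JOIN (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
      UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) t
CROSS JOIN (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
      UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) h
WHERE DATE_ADD('2011-01-01', INTERVAL u.n + t.n * 10 + h.n * 100 DAY) <= CURDATE()
ORDER BY dt;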
This article seems to cover what you want. See also this question for another example of the columns you might want to have on your table. You should definitely generate a large amount of dates in advance instead of updating the table daily; it saves a lot of work and complications. 100 years are only ~36500 rows, which is a small table.
Temporary tables or procedural code are not good solutions for a data warehouse, because you want your reporting tool to be able to access the dimension tables. And if your RDBMS has optimizations for star schema queries (I don't know if MySQL does or not) then it would need to see the dimension too.
Here is what I am using to create and populate time dimension table:
DROP TABLE IF EXISTS time_dimension;
CREATE TABLE time_dimension (
    id INTEGER PRIMARY KEY,            -- year*10000 + month*100 + day
    db_date DATE NOT NULL,
    year INTEGER NOT NULL,
    month INTEGER NOT NULL,            -- 1 to 12
    day INTEGER NOT NULL,              -- 1 to 31
    quarter INTEGER NOT NULL,          -- 1 to 4
    week INTEGER NOT NULL,             -- 1 to 52/53
    day_name VARCHAR(9) NOT NULL,      -- 'Monday', 'Tuesday'...
    month_name VARCHAR(9) NOT NULL,    -- 'January', 'February'...
    holiday_flag CHAR(1) DEFAULT 'f' CHECK (holiday_flag IN ('t', 'f')),
    weekend_flag CHAR(1) DEFAULT 'f' CHECK (weekend_flag IN ('t', 'f')),
    UNIQUE td_ymd_idx (year, month, day),
    UNIQUE td_dbdate_idx (db_date)
) Engine=MyISAM;
DROP PROCEDURE IF EXISTS fill_date_dimension;
DELIMITER //
CREATE PROCEDURE fill_date_dimension(IN startdate DATE, IN stopdate DATE)
BEGIN
    DECLARE currentdate DATE;
    SET currentdate = startdate;
    WHILE currentdate <= stopdate DO
        INSERT INTO time_dimension VALUES (
            YEAR(currentdate)*10000 + MONTH(currentdate)*100 + DAY(currentdate),
            currentdate,
            YEAR(currentdate),
            MONTH(currentdate),
            DAY(currentdate),
            QUARTER(currentdate),
            WEEKOFYEAR(currentdate),
            DATE_FORMAT(currentdate, '%W'),
            DATE_FORMAT(currentdate, '%M'),
            'f',
            CASE DAYOFWEEK(currentdate) WHEN 1 THEN 't' WHEN 7 THEN 't' ELSE 'f' END
        );
        SET currentdate = ADDDATE(currentdate, INTERVAL 1 DAY);
    END WHILE;
END
//
DELIMITER ;
TRUNCATE TABLE time_dimension;
CALL fill_date_dimension('1800-01-01','2050-01-01');
OPTIMIZE TABLE time_dimension;
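Once populated, reporting queries simply join to the dimension; for example (a sketch assuming a hypothetical fact table sales_fact with a date_id column matching time_dimension.id):

-- sales_fact and its amount column are hypothetical; only time_dimension comes from above
SELECT td.year, td.month_name, SUM(sf.amount) AS total_amount
FROM sales_fact AS sf
JOIN time_dimension AS td ON td.id = sf.date_id
GROUP BY td.year, td.month, td.month_name
ORDER BY td.year, td.month;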
Simply asking: is there any function available in MySQL to split single-row elements into multiple columns?
I have a table with the fields user_id, user_name, user_location.
A user can add multiple locations. I am imploding the locations and storing them in the table as a single row using PHP.
When I show the user records in a grid view, I run into problems with pagination because I display the records by splitting user_location. So I need to split user_location (a single row into multiple columns).
Is there any function available in MySQL to split and count the values by a character (%)?
For example, user_location might hold US%UK%JAPAN%CANADA.
How can I split this record into 4 columns?
I also need to check the count value (4). Thanks in advance.
First normalize the string, removing empty locations and making sure there's a % at the end:
select replace(concat(user_location,'%'),'%%','%') as str
from YourTable where user_id = 1
Then we can count the number of entries with a trick. Replace '%' with '% ', and count the number of spaces added to the string. For example:
select length(replace(str, '%', '% ')) - length(str)
as LocationCount
from (
select replace(concat(user_location,'%'),'%%','%') as str
from YourTable where user_id = 1
) normalized
Using substring_index, we can add columns for a number of locations:
select length(replace(str, '%', '% ')) - length(str)
as LocationCount
, substring_index(substring_index(str,'%',1),'%',-1) as Loc1
, substring_index(substring_index(str,'%',2),'%',-1) as Loc2
, substring_index(substring_index(str,'%',3),'%',-1) as Loc3
from (
select replace(concat(user_location,'%'),'%%','%') as str
from YourTable where user_id = 1
) normalized
For your example US%UK%JAPAN%CANADA, this prints:
LocationCount  Loc1  Loc2  Loc3
4              US    UK    JAPAN
So you see it can be done, but parsing strings isn't one of SQL's strengths.
The "right thing" would be splitting the locations off to another table and establish a many-to-many relationship between them.
create table users (
    id int not null auto_increment primary key,
    name varchar(64)
);
create table locations (
    id int not null auto_increment primary key,
    name varchar(64)
);
create table users_locations (
    id int not null auto_increment primary key,
    user_id int not null,
    location_id int not null,
    unique index user_location_unique_together (user_id, location_id)
);
Then, ensure referential integrity either using foreign keys (and InnoDB engine) or triggers.
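With that schema the grid view can paginate over plain rows; for example (a sketch using the tables above, with arbitrary page-size values):

-- one row per (user, location); LIMIT/OFFSET handles the pagination
SELECT u.id, u.name AS user_name, l.name AS location
FROM users AS u
JOIN users_locations AS ul ON ul.user_id = u.id
JOIN locations AS l ON l.id = ul.location_id
ORDER BY u.name, l.name
LIMIT 20 OFFSET 0;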
This should do it:
DELIMITER $$
DROP PROCEDURE IF EXISTS `CSV2LST`$$
CREATE DEFINER=`root`@`%` PROCEDURE `CSV2LST`(IN csv_ TEXT)
BEGIN
    SET @s = CONCAT('select \"', REPLACE(csv_, ',', '\" union select \"'), '\";');
    PREPARE stmt FROM @s;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END$$
DELIMITER ;
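It splits on commas, so for the %-delimited value from the question you could call it roughly like this (a sketch); note that it returns one row per value rather than one column per value:

-- translate the % delimiters to commas before calling the procedure
CALL CSV2LST(REPLACE('US%UK%JAPAN%CANADA', '%', ','));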
You should do this in your client application, not in the database.
When you make an SQL query you must statically specify the columns you want to get; that is, you tell the DB which columns you want in your result set before executing it. For instance, if you have a datetime stored, you may do something like select month(birthday), year(birthday) from ..., so in this case we split the column birthday into 2 other columns, but the query itself specifies which columns we will have.
In your case, you would have to get exactly that US%UK%JAPAN%CANADA string from the database, and then split it later in your software, i.e.
/* get data from database */
/* ... */
$user_location = ... /* extract the field from the resultset */
$user_locations = explode("%", $user_location);
This is a bad design. If you can change it, store the data in 2 tables:
table users: id, name, surname ...
table users_location: user_id (fk), location
users_location would have a foreign key to users through the user_id field.
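To get the location count per user under that design (a sketch using the two tables just described):

-- returns 4 for the US%UK%JAPAN%CANADA example
SELECT u.id, u.name, COUNT(ul.location) AS location_count
FROM users AS u
JOIN users_location AS ul ON ul.user_id = u.id
GROUP BY u.id, u.name;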