Say I have a simple MySQL table containing a name field which is just a varchar. The name field contains a string of the following format: "channelname,unix_timestamp,unix_timestamp", e.g. "bbc1,123456789,123456889". I need to select all rows where the channel name matches and where a given timestamp falls within the range of the two timestamps. For example, given the timestamp 123456800 and the channel name 'bbc1', the above record should be selected.
How I would accomplish this is to first select all records with "name like 'bbc1,%'", split out the two timestamp fields in the calling code, and filter the results there to those containing the given timestamp. Is there a better, more efficient way? My DB could have a very large number of records which match "name like 'bbc1,%'", and it's only expected to grow as time goes on.
I unfortunately don't have the ability to alter the table to add the two timestamp fields; the only thing I have to go on is that single name field. It's also possible that the name field may contain some arbitrary string not of the given format for some records; however, all records which start with the given channel name should match this format.
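One approach that avoids post-filtering in the calling code is to split the two timestamps out of the name field directly in SQL with SUBSTRING_INDEX. A minimal sketch, assuming the table is called programme (a hypothetical name) and the inputs are 'bbc1' and 123456800; the LIKE prefix keeps rows that don't follow the format out of the comparison:
SELECT *
FROM programme
WHERE name LIKE 'bbc1,%'
  AND CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(name, ',', 2), ',', -1) AS UNSIGNED) <= 123456800
  AND CAST(SUBSTRING_INDEX(name, ',', -1) AS UNSIGNED) >= 123456800;
This still examines every 'bbc1,%' row, but the filtering happens inside the database, and an index on name can at least be used for the LIKE prefix.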
I want to update a column for the rows matched by a WHERE clause, but in a way that extracts the number part of the string in each matched field, multiplies it by a number that I will specify, and writes the numeric result back into each of those fields.
For example, assume I want to update all the fields in a column which look like (5.6 AUD/1000, 4.5 AUD/1000, 9.7 AUD/1000). I want to first identify the fields ending with /1000 and update only those fields in the column by multiplying the number part of the string (which is 5.6, 4.5, 9.7) by some number (let's say 10). I want the other fields in the column to remain unchanged.
SELECT * from sorted WHERE Column8 REGEXP '/1000$';
gives me all the specific fields that I wish to update. But I want to update them in the way I specified above: extract the number part from the string, multiply it by a number, and update only those fields.
I am able to extract all the fields with the condition I mentioned; I'm facing difficulty in updating these fields in the column.
SELECT * FROM sorted WHERE Column8 REGEXP '/1000$';
SELECT CAST(Column8 AS UNSIGNED) * 10 FROM sorted
WHERE Column8 REGEXP '/1000$';
The above code gives me the required updated values, but I want them reflected in the column.
I expect my output to be a column where only the fields ending with '/1000' are updated, such that the number part of the string is multiplied by 10.
I have cast the varchar field named string to a decimal type and multiplied it by the static value 10. I have checked this in SQL Server.
DECLARE @temp TABLE
(
    string NVARCHAR(50)
);

INSERT INTO @temp (string)
VALUES
    ('5.6 AUD/1000'),
    ('4.5 AUD/1000'),
    ('9.7 AUD/1000');

-- take the leading numeric part (digits and dot) and multiply it by 10
SELECT CAST(LEFT(string, PATINDEX('%[^0-9./]%', string) - 1) AS DECIMAL(18,2)) * 10
FROM @temp;
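To get those values written back rather than just selected, the same idea can be used in an UPDATE. A minimal sketch against the question's sorted table and Column8, assuming MySQL (matching the REGEXP in the question, where casting a string such as '5.6 AUD/1000' takes its leading number and raises only a truncation warning) and assuming the goal is to replace each matching value with the multiplied number:
UPDATE sorted
SET Column8 = CAST(Column8 AS DECIMAL(18,2)) * 10
WHERE Column8 REGEXP '/1000$';
If the ' AUD/1000' suffix should be kept instead, the new number can be wrapped in CONCAT(..., ' AUD/1000').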
Is it mandatory to know the name of the table from which you want to fetch data based on some criteria? For example, here I have a database with 20 tables containing the same columns, only different row data.
And I want to fetch some data from the application side, based on some criteria, without knowing the table name. Is there some sort of query available, something like:
SELECT * FROM SOME TABLE WHERE ID = [randomnumber]
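There is no query form that leaves the table unnamed, but one workaround worth sketching (not part of the question; the view and table names here are hypothetical, and 42 stands in for the ID value) is a view that UNIONs the identically-structured tables, so the caller only ever has to know the view's name:
CREATE VIEW all_rows AS
    SELECT 'table01' AS source_table, t.* FROM table01 t
    UNION ALL
    SELECT 'table02', t.* FROM table02 t
    -- ...repeat for the remaining tables
;
SELECT * FROM all_rows WHERE ID = 42;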
I have a table full of traffic accident data with column headers such as 'Vehicle_Manoeuvre', which contains integers; for example, 13 means the manoeuvre which caused the accident was 'overtaking moving vehicle'.
I know the mappings from integers to text, as I have a (quite large) Excel file with this data.
An example of what I want to know is the percentage of accidents that involved this type of manoeuvre, but I don't want to have to open the Excel file and look up the mappings of integers to text every time I write a query.
I could manually change the integers in all the columns (write a query with all the possible mappings of each column, add them as new columns, then delete the original columns), but this would take a long time.
Is it possible to create some type of variable (like an array with the integers in the first column and the mapped text in the second) that SQL could use to understand how the text relates to the integers, allowing me to write the query below:
SELECT COUNT(Vehicle_Manoeuvre) FROM traffictable WHERE Vehicle_Manoeuvre='overtaking moving vehicle';
rather than:
SELECT COUNT(Vehicle_Manoeuvre) FROM traffictable WHERE Vehicle_Manoeuvre=13;
even though the data in the table is still in integer form?
You would do this with a Manoeuvres reference table:
create table Manoeuvres (
ManoeuvreId int primary key,
Name varchar(255) unique
);
insert into Manoeuvres(ManoeuvreId, Name)
values (13, 'Overtaking');
You might even have such a table already, if you know that 13 has a special meaning.
Then use a join:
SELECT COUNT(*)
FROM traffictable tt JOIN
Manoeuvres m
ON tt.Vehicle_Manoeuvre = m.ManoeuvreId
WHERE m.name = 'Overtaking';
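For the percentage mentioned in the question, the same join can feed a conditional aggregate. A sketch reusing the reference table above, assuming MySQL (where a boolean comparison evaluates to 0/1); a LEFT JOIN would be needed if some codes have no mapping yet:
SELECT 100.0 * SUM(m.Name = 'Overtaking') / COUNT(*) AS pct_overtaking
FROM traffictable tt JOIN
     Manoeuvres m
     ON tt.Vehicle_Manoeuvre = m.ManoeuvreId;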
I want to perform a random function, rand(100001, 1000000), which has to generate random numbers that are unique across two different fields.
To explain it clearly:
I have two tables, say table A, which has records with status Submitted or Approved, and table B, which has only records with status Rejected.
I have a field called ackno in table A as well as table B, which needs to be a random number and unique across both tables' fields.
Is this possible? Can anybody give a solution?
A random number won't be unique. In this case you have 900,000 possible numbers (100001 to 1000000); based on the number of records you can calculate the probability that two numbers are the same.
I see two ways to ensure a unique value.
First of all you could use auto-increment. Let table A start at 1 and table B start at 1,000,000. As long as table A has fewer than a million rows, you're good.
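A minimal sketch of that first option, assuming ackno is an AUTO_INCREMENT primary key in both tables (the table names A and B are taken from the question):
ALTER TABLE A AUTO_INCREMENT = 1;
ALTER TABLE B AUTO_INCREMENT = 1000000;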
Second is to use a UUID. A UUID is a 128-bit number represented as a utf8 string of five groups of hexadecimal digits in aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee format.
Note that both methods deliver guessable values. If that's a problem, you could add a random number to it and encode it, like:
SELECT SHA2(CONCAT(UUID(), RAND()), 256);
I have a table which has a bigint column used for storing a timestamp. The timestamp value which we are getting from our application is a 13-digit number like 1280505757693. And there are many rows in this table right now, probably more than half a million entries. Should I use an index on the timestamp column or not? Any suggestions?
Are the numbers contiguous or do they contain encoded information in some form? By this I mean is 1280505757693 + 1 one tick beyond 1280505757693? If so, then you can create an index and it will be useful for equal to matches and range matches; otherwise, only for equal to matches.
If you are keeping timestamps in your database you may wish to consider MySQL's TIMESTAMP and DATETIME types; see here http://dev.mysql.com/doc/refman/5.1/en/datetime.html . You can certainly create indexes on those.
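If the values are plain milliseconds since the epoch (so range comparisons are meaningful), a simple secondary index is enough. A sketch with hypothetical table and column names:
CREATE INDEX idx_event_ts ON events (event_ts);
-- a range query that can use the index
SELECT * FROM events
WHERE event_ts BETWEEN 1280505757693 - 60000 AND 1280505757693;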