I have a table with auction lots:
mysql> desc lots;
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int(10) | NO | PRI | NULL | auto_increment |
| account_id | int(10) | YES | | NULL | |
| item_id | int(10) | YES | | NULL | |
| bid | int(10) | YES | | NULL | |
| buyout | int(10) | YES | | NULL | |
| leader | int(13) | YES | | NULL | |
| left | int(10) | YES | | NULL | |
+------------+--------------+------+-----+---------+----------------+
left is a field holding a Unix timestamp, e.g. '1391143424'; it is the time after which the lot should be considered expired.
I need to develop an efficient algorithm so that:
- when a customer clicks the SEARCH button, the results show only non-expired lots
- once a lot expires, it is automatically moved into a completed_lots table
I have some ideas, but I want to check whether there are better approaches:
1: add an additional search criterion, like:
WHERE '$currtime' <= `left`;
2: set up an asynchronous task via crontab that checks the lots table for expired lots every 5 minutes and moves them into the completed_lots table (a sketch follows below).
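A minimal sketch of what such a cron job could run, assuming completed_lots has exactly the same columns as lots (left is backquoted because LEFT is a reserved word in MySQL):
SET @now = UNIX_TIMESTAMP();
START TRANSACTION;
INSERT INTO completed_lots SELECT * FROM lots WHERE `left` < @now;
DELETE FROM lots WHERE `left` < @now;
COMMIT;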
Are there any better ways? Maybe using other MySQL field types for Unix time, or creating some MySQL functions that will do that job automatically?
I am doing a side project to help me learn SQL.
I have set up 2 different tables:
computers
+------------------+-----------+------+-----+---------+-------+
| Field            | Type      | Null | Key | Default | Extra |
+------------------+-----------+------+-----+---------+-------+
| serial_number    | char(25)  | NO   | PRI | NULL    |       |
| operating_system | char(10)  | YES  |     | NULL    |       |
| purchase_year    | int(4)    | YES  |     | NULL    |       |
| assigned_to      | char(100) | YES  |     | NULL    |       |
+------------------+-----------+------+-----+---------+-------+
employees
+------------+-----------+------+-----+---------+-------+
| Field      | Type      | Null | Key | Default | Extra |
+------------+-----------+------+-----+---------+-------+
| email      | char(100) | NO   | PRI | NULL    |       |
| first_name | char(25)  | NO   |     | NULL    |       |
| last_name  | char(25)  | NO   |     | NULL    |       |
| office     | char(5)   | NO   |     | NULL    |       |
| assigned   | char(25)  | YES  |     | NULL    |       |
+------------+-----------+------+-----+---------+-------+
Both have a few entries while I am testing, but in trying to write a search function based on the employee email, I've hit a snag with the SQL queries. I'm poring through the documentation, but not understanding it well, and can't find a good example of what I am trying to do to follow along with.
Here is what I am attempting to do with the query:
I want to grab the employee row matching the email address provided, and if the "employees.assigned" field is set (not NULL; I think EXISTS is used for this?), I also want to grab the computers row whose serial_number matches that column value.
I can do what I want with 2 separate queries, but I want to see if it is possible with only one to clean up code and make the query as fast as possible. Any further documentation you think is worthwhile for this project is very welcome as well!
For those people finding this on Google, here's what worked for my needs:
SELECT * FROM employees LEFT JOIN computers ON employees.assigned=computers.serial_number WHERE email='email@example.com';
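A variation of the same query that returns only the columns of interest (the LEFT JOIN still returns the employee even when nothing is assigned, with serial_number coming back as NULL):
SELECT employees.email, employees.first_name, employees.last_name, computers.serial_number
FROM employees
LEFT JOIN computers ON employees.assigned = computers.serial_number
WHERE employees.email = 'email@example.com';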
Here's my table.
+-------------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+----------------+
| ID | int(11) | NO | PRI | NULL | auto_increment |
| Postcode | varchar(255) | YES | | NULL | |
| Town | varchar(255) | YES | | NULL | |
| Region | varchar(255) | YES | | NULL | |
| Company Name | varchar(255) | YES | | NULL | |
| Fee | double | YES | | NULL | |
| Company Benefits | varchar(255) | YES | | NULL | |
| Date Updated | date | YES | | NULL | |
| Website | mediumtext | YES | | NULL | |
| Updated By | varchar(255) | YES | | NULL | |
| Notes | varchar(255) | YES | | NULL | |
| LNG | varchar(255) | YES | | NULL | |
| LAT | varchar(255) | YES | | NULL | |
+-------------------+--------------+------+-----+---------+----------------+
You can see we have an "Updated by" column.
How can I make it so that, when a user updates a row (or inserts a new one), the "Updated By" column is automatically set to the currently logged-in user's name?
Many Thanks
You will have to handle this explicitly: whenever an UPDATE happens, you need to set that column as well, as shown below. The most reliable way is to have your application logic fill in the column whenever a record is updated, using the currently logged-in user's principal or claim:
UPDATE tbl1
SET ...,
    `Updated By` = <logged in user name>
WHERE Id = <some val>
You can use USER() or CURRENT_USER() in UPDATE or INSERT statements to achieve the needed effect.
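A minimal sketch (the table name and WHERE clause are illustrative; note that USER() and CURRENT_USER() return the MySQL account name, not an application-level login):
UPDATE my_table
SET `Updated By` = CURRENT_USER()
WHERE ID = 1;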
From my side, the only really secure way is to create stored procedures that perform the inserts or updates on the desired table.
Indeed, this problem was discussed here:
mysql Set default value of a column as the current logged in user
Something like this!
CREATE TRIGGER `updater`.`tableName_BEFORE_INSERT` BEFORE INSERT ON `tableName`
FOR EACH ROW
BEGIN
    SET NEW.Updated_By = CURRENT_USER();
END
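Since the question also covers rows being updated, a matching BEFORE UPDATE trigger (same hypothetical schema and column name as above) would be:
CREATE TRIGGER `updater`.`tableName_BEFORE_UPDATE` BEFORE UPDATE ON `tableName`
FOR EACH ROW
BEGIN
    SET NEW.Updated_By = CURRENT_USER();
END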
I have a webapp that I'm building. This webapp will take as input some products (cars, motos, boats, houses, etc...) and each product will have one or more photos associated with it. The id of each of photo is generated by the uniqid() function of php.
My problem is: I can't seem to fit more than two photo ids into the same id_photos column.
+-----------+------------------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+------------------------------------------+------+-----+---------+----------------+
| carid | int(11) | NO | MUL | NULL | auto_increment |
| brand | enum('Alfa Romeo','Aston Martin','Audi') | NO | | NULL | |
| color | varchar(20) | NO | | NULL | |
| type | enum('gasoline','diesel','eletric') | YES | | NULL | |
| price | mediumint(8) unsigned | YES | | NULL | |
| mileage | mediumint(8) unsigned | YES | | NULL | |
| model | text | YES | | NULL | |
| year | year(4) | YES | | NULL | |
| id_photos | varchar(30) | YES | | NULL | |
+-----------+------------------------------------------+------+-----+---------+----------------+
What I would like to happen is something like this: INSERT INTO cars(id_photos) values ('id_1st_photo', 'id_2nd_photo')
Ending up having something like this:
| 60 | Audi | Yellow | diesel | 252352 | 1234112 | R8 | 1990 | id_1st_photo id_2nd_photo |
Eventually I would have to grab those photos from the folders they are stored in, something like /var/www/website/$login/photos/id_of_photo, using the query select id_photos from cars where carid=$id.
You may find some data types that are not properly suited to the data the server will receive, but I'm one week into MySQL and I'll worry about data types later on.
First of all, I don't know if that is possible; if it's not, how can I design something to work like that?
I have found this question, which is quite similar to mine, but I can't seem to implement something like it: add multiple values in one column
You can insert the concatenated values into a single field, but it is not good practice. Better to create another table with a foreign key holding the id of the parent table.
You can easily adapt the approach from the linked question and even drop one of the tables it needs:
Your first table stays almost the same, but has the id_photos column removed:
+-----------+------------------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+------------------------------------------+------+-----+---------+----------------+
| carid | int(11) | NO | MUL | NULL | auto_increment |
| brand | enum('Alfa Romeo','Aston Martin','Audi') | NO | | NULL | |
| color | varchar(20) | NO | | NULL | |
| type | enum('gasoline','diesel','eletric') | YES | | NULL | |
| price | mediumint(8) unsigned | YES | | NULL | |
| mileage | mediumint(8) unsigned | YES | | NULL | |
| model | text | YES | | NULL | |
| year | year(4) | YES | | NULL | |
+-----------+------------------------------------------+------+-----+---------+----------------+
Then you'll add a second table to store the links to the photo ids:
+-----------+------------------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+------------------------------------------+------+-----+---------+----------------+
| carid | int(11) | NO | MUL | NULL | |
| id_photos | varchar(30) | NO | | NULL | |
+-----------+------------------------------------------+------+-----+---------+----------------+
Both tables are linked by the field carid (You should even make carid in the second table a foreign key pointing to the one in the first table).
Each id_photos then results in a new row in the second table.
To query the data you will probably need a JOIN between both tables and maybe a GROUP BY to reduce the result back to one row per carid, but this depends on your other use cases.
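A rough sketch, assuming the second table is called car_photos (the name is illustrative) and using GROUP_CONCAT to collapse the photo ids back into one row per car:
CREATE TABLE car_photos (
    carid     INT(11)     NOT NULL,
    id_photos VARCHAR(30) NOT NULL,
    FOREIGN KEY (carid) REFERENCES cars (carid)
);

INSERT INTO car_photos (carid, id_photos) VALUES (60, 'id_1st_photo'), (60, 'id_2nd_photo');

SELECT cars.carid, GROUP_CONCAT(car_photos.id_photos SEPARATOR ' ') AS id_photos
FROM cars
LEFT JOIN car_photos ON car_photos.carid = cars.carid
WHERE cars.carid = 60
GROUP BY cars.carid;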
You can insert a string containing multiple photo names:
INSERT INTO cars(id_photos) values ('id_1st_photo, id_2nd_photo')
This way you don't have a well-normalized database structure, so you will have problems when retrieving the individual photo names.
I suggest you normalize the id_photos column into a separate table with a reference to the master table, storing each photo in its own row.
I am posting this thread in order to get some advice regarding the performance of my SQL query.
I have 2 tables: one called HGVS_SNP with about 44,657,169 rows, and another, run, which has around 2,000 rows.
When I try to update the Comment field of my run table, the query takes a very long time. I was wondering if there is any way to speed it up.
Structure of HGVS_SNP Table:
+-----------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| snp_id | int(11) | YES | MUL | NULL | |
| hgvs_name | text | YES | | NULL | |
| source | varchar(8) | NO | | NULL | |
| upd_time | varchar(32) | NO | | NULL | |
+-----------+-------------+------+-----+---------+-------+
My run table has the following structure:
+----------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+--------------+------+-----+---------+-------+
| ID | varchar(7) | YES | | NULL | |
| Reference | varchar(7) | YES | MUL | NULL | |
| HGVSvar2 | varchar(120) | YES | MUL | NULL | |
| Comment | varchar(120) | YES | | NULL | |
| Compute | varchar(20) | YES | | NULL | |
+----------------------+--------------+------+-----+---------+-------+
Here's my query:
UPDATE run
INNER JOIN HGVS_SNP
    ON run.HGVSvar2 = HGVS_SNP.hgvs_name
SET run.Comment = CONCAT('rs', HGVS_SNP.snp_id)
WHERE run.Compute NOT LIKE 'tron'
I'm guessing, since you JOIN a TEXT column against a VARCHAR(120) column, that you don't really need a TEXT column. Make it a VARCHAR so you can index it:
ALTER TABLE `HGVS_SNP` modify hgvs_name VARCHAR(120);
ALTER TABLE `HGVS_SNP` ADD KEY idx_hgvs_name (hgvs_name);
This will take a while on large tables.
Now your JOIN should be much faster. Also add an index on the Compute column:
ALTER TABLE `run` ADD KEY idx_compute (compute);
And the LIKE is unnecessary; change it to:
WHERE run.Compute != 'tron'
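Putting it together, the rewritten query (same logic as before, now able to use the new indexes) would look roughly like:
UPDATE run
INNER JOIN HGVS_SNP ON run.HGVSvar2 = HGVS_SNP.hgvs_name
SET run.Comment = CONCAT('rs', HGVS_SNP.snp_id)
WHERE run.Compute != 'tron';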
I'm working on an e-commerce website that has 2 database tables in MySQL: products and taxonomies. Products and taxonomies have a many-to-many relationship, and taxonomies form a tree structure, meaning there is a parent_id field in the taxonomies table identifying the parent of each taxonomy.
When a user selects a taxonomy, I want to get all the products that belong to that taxonomy and all of its descendant taxonomies. I did this by first finding all the descendants of the selected taxonomy and then getting a paginated products result from there, but my site has 5000 taxonomies in total and this solution makes it slow as a dog... Any advice on how I could do this with better performance?
products table:
+-------------------+----------------------+------+-----+---------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+----------------------+------+-----+---------------------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| code | bigint(20) | NO | UNI | NULL | |
| SKU | varchar(255) | NO | | NULL | |
| name | varchar(100) | NO | | NULL | |
| description | varchar(2000) | NO | | NULL | |
| short_description | varchar(200) | NO | | NULL | |
| price | decimal(8,2) | NO | | 0.00 | |
| discounted_price | decimal(8,2) | NO | | 0.00 | |
| stock | smallint(5) unsigned | NO | | 0 | |
| sales | smallint(5) unsigned | NO | | 0 | |
| num_reviews | smallint(6) | NO | | 0 | |
| weight | decimal(5,2) | NO | | 0.00 | |
| overall_rating | decimal(3,2) | NO | | 5.00 | |
| activity_id | int(10) unsigned | YES | MUL | NULL | |
| created_at | timestamp | NO | | 0000-00-00 00:00:00 | |
| updated_at | timestamp | NO | | 0000-00-00 00:00:00 | |
+-------------------+----------------------+------+-----+---------------------+----------------+
taxonomies table:
+--------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| name | varchar(100) | YES | UNI | NULL | |
| parent_id | int(10) unsigned | YES | MUL | NULL | |
| num_products | smallint(6) | NO | | 0 | |
+--------------+------------------+------+-----+---------+----------------+
product_taxonomy table:
+-------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| product_id | int(10) unsigned | NO | MUL | NULL | |
| taxonomy_id | int(10) unsigned | NO | MUL | NULL | |
+-------------+------------------+------+-----+---------+----------------+
In case the depth is only a single level, one can use the following query:
SELECT * FROM `product_taxonomy`
INNER JOIN (SELECT * FROM `taxonomies` WHERE `id` = 100 OR `parent_id` = 100) `taxonomies`
ON `product_taxonomy`.`taxonomy_id` = `taxonomies`.`id`
LEFT JOIN `products` ON `product_taxonomy`.`product_id` = `products`.`id`
You can add limit, offset to the above query for pagination.
100 in the above query represents the taxonomy id requested by the user.
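For example, a paginated version of the same query (the page size of 20 and offset of 0 are purely illustrative):
SELECT `products`.* FROM `product_taxonomy`
INNER JOIN (SELECT * FROM `taxonomies` WHERE `id` = 100 OR `parent_id` = 100) `taxonomies`
ON `product_taxonomy`.`taxonomy_id` = `taxonomies`.`id`
LEFT JOIN `products` ON `product_taxonomy`.`product_id` = `products`.`id`
LIMIT 20 OFFSET 0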
Apart from this, I would suggest:
1) Rename id in your products table, if possible, to product_id as referenced in your product_taxonomy table (and, I presume, in other tables); similarly for taxonomy_id.
This way the column names will be the same when you join.
2) I hope product_taxonomy.product_id and product_taxonomy.taxonomy_id are indexed for faster querying.
Update:
What you mentioned in the comment below is a hierarchical data problem, which is not what a relational database is ideally intended for.
Solution 1
If you know for sure that you will only ever have 4 levels / generations, then you can do it with 4 self-join queries; a sketch follows below.
I can elaborate on this if you need to.
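A rough sketch of the fixed-depth idea, assuming at most 4 levels and using 100 as the selected taxonomy id: gather the id of the taxonomy plus its descendants with self-joins, then join that id set against product_taxonomy for the product listing.
SELECT t1.id FROM taxonomies t1 WHERE t1.id = 100
UNION
SELECT t2.id FROM taxonomies t2
    JOIN taxonomies t1 ON t2.parent_id = t1.id AND t1.id = 100
UNION
SELECT t3.id FROM taxonomies t3
    JOIN taxonomies t2 ON t3.parent_id = t2.id
    JOIN taxonomies t1 ON t2.parent_id = t1.id AND t1.id = 100
UNION
SELECT t4.id FROM taxonomies t4
    JOIN taxonomies t3 ON t4.parent_id = t3.id
    JOIN taxonomies t2 ON t3.parent_id = t2.id
    JOIN taxonomies t1 ON t2.parent_id = t1.id AND t1.id = 100;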
Solution 2
If you are not too deep into or committed to the architecture of this project, I would recommend restructuring it so that the recursion is taken care of by the server-side scripting. That is, you change your CMS/taxonomy management so that whenever you add/remove/modify a taxonomy, the script updates a table called taxonomy_childs with all possible offspring of a given category, so that you have flat data at your disposal when you need it.
Personally I would prefer this. I always like my database to match my business logic requirement.
I can elaborate on this if you need to.
Solution 3
As mentioned earlier, hierarchical data is not a strong point of a relational database. Having said that, you can implement something called the Nested Set Model.
Please read more at http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
You would need to add 3 columns to your taxonomies table: level_depth, lft, rht.
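With those columns maintained, fetching a whole subtree becomes a single range query instead of a recursion; a minimal sketch using the column names above (100 again being the selected taxonomy id):
SELECT DISTINCT p.*
FROM taxonomies parent
JOIN taxonomies child ON child.lft BETWEEN parent.lft AND parent.rht
JOIN product_taxonomy pt ON pt.taxonomy_id = child.id
JOIN products p ON p.id = pt.product_id
WHERE parent.id = 100;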
Please let me know which solution you would like me to elaborate on.