I'm pretty sure this particular quirk isn't a duplicate so here goes.
I have a table of services. In this table, I have about 40 rows with the following columns:
Services:
id_Services -- primary key
Name -- name of the service
Cost_a -- for variant a of service
Cost_b -- for variant b of service
Order -- order service is displayed in
The user can go into an admin tool and update any of this information - including deleting multiple rows, adding a row, editing info, and changing the order they are displayed in.
My question is this: since I will never know how many rows will be coming in from a submission (there could be one more than before, or none at all), how should I address this in my query?
Upon submission, every value is resubmitted. I'd hate to do it this way, but the easiest way I can think of is to truncate the table and reinsert everything... but that seems a little... uhhh... bad! What is the best way to accomplish this?
RE-EDIT: For example: I start with 40 rows and update with 36. I still have to do something about rows 37-40. How can I do this? Are there any MySQL tricks or functions that will do this for me?
Thank you very much for your help!
You're slightly limited by the use case; you're doing insertion/update/truncation that's presented to the user as a batch operation, but in the back-end you'll have to do these in separate statements.
Watch out for concurrency: use transactions if you can.
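For instance, a minimal sketch of that batch, assuming id_Services is the primary key and using made-up values for the submitted rows: upsert everything the user sent with INSERT ... ON DUPLICATE KEY UPDATE, then delete any ids that were not part of the submission, all inside one transaction. (Order is a reserved word in MySQL, hence the backticks.)

START TRANSACTION;

-- Upsert every submitted row (placeholder values shown).
INSERT INTO Services (id_Services, Name, Cost_a, Cost_b, `Order`)
VALUES (1, 'Haircut', 10.00, 15.00, 1),
       (2, 'Coloring', 40.00, 55.00, 2)
ON DUPLICATE KEY UPDATE
  Name    = VALUES(Name),
  Cost_a  = VALUES(Cost_a),
  Cost_b  = VALUES(Cost_b),
  `Order` = VALUES(`Order`);

-- Remove rows the user deleted (ids missing from the submission),
-- which also covers the "started with 40, submitted 36" case.
DELETE FROM Services WHERE id_Services NOT IN (1, 2);

COMMIT;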
Example tables (not the actual database): Person (ID, SecurityCode) and Times (PersonID, Time).
In this example, I would have the SecurityCode (unique) and the Time. My current solution involves attempting to add a new Person using the security code, then querying the ID, then adding to the Times table. This is 3 separate statements and could likely be made a lot faster. Any advice on how to optimise this?
Thanks.
Edit: I previously forgot to mention that this is normally done in a batch of 30-40 records.
I am also considering using SecurityCode as the foreign key in Times.
I think there are many ways to achieve this; the easiest:
Try using IF: you only need it for the first step of your statement, since the last two are independent of the result of this evaluation.
Plus, save your security code in a variable; then you will save one table scan (you already have it).
**please note it's just pseudo-code**
IF NOT EXISTS (SELECT 1 FROM Person WHERE SecurityCode = @securityCode) THEN
    Step 1 -- insert the new Person
END IF;
Step 2 -- look up the ID
Step 3 -- insert into Times
Can you try it?
The fastest way turned out to be to batch INSERT IGNORE all the security codes, then batch-insert all the Times rows with a subquery to select the correct ID from Person.
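A sketch of that approach, with made-up column names (Person(ID, SecurityCode) with SecurityCode unique, Times(PersonID, Time)) and made-up values:

-- Batch-insert the security codes; already-known codes are skipped
-- thanks to the UNIQUE index.
INSERT IGNORE INTO Person (SecurityCode)
VALUES ('code1'), ('code2'), ('code3');

-- Batch-insert the times, resolving each PersonID with a subquery.
INSERT INTO Times (PersonID, `Time`)
VALUES ((SELECT ID FROM Person WHERE SecurityCode = 'code1'), '2013-05-01 10:00:00'),
       ((SELECT ID FROM Person WHERE SecurityCode = 'code2'), '2013-05-01 10:05:00');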
I would like to store random numbers in one MySQL table, randomly retrieve one and insert it into another table's column each time a new record is created, and delete each retrieved number from the random-number table as it is used.
The random numbers are 3 digits, and there are 900 of them.
I have read several posts here that describe the problems using unique random numbers and triggering their insertion. I want to use this method as it seems to be reliable while generating few problems.
Can anyone here give me an example of an SQL query that will accomplish the above? (If an SQL query is not the recommended way to do this, please feel free to recommend a better method.)
Thank you for any help you can give.
I put together the two suggestions here and tried this trigger and query:
CREATE TRIGGER rand_num BEFORE INSERT ON uau3h_users
FOR EACH ROW
  INSERT INTO uau3h_users (member_number)
  SELECT random_number FROM uau3h_rand900
  WHERE random_number NOT IN (SELECT member_number FROM uau3h_users)
  ORDER BY random_number
  LIMIT 1;
But it seems that there is already a trigger attached to that table, so the new one caused a conflict; things stopped working until I removed it. Any ideas about how to accomplish the same thing using another method?
You are only dealing with 900 records, so performance is not a major issue.
If you are doing a single insert into a table, you can do something like the following:
insert into t(rand)
select rand
from rand900
where rand not in (select rand from t)
order by rand()
limit 1
In other words, you don't have to continually delete from one table and move to the other. You can just choose to insert values that don't already exist. If performance is a concern, then indexes will help in this case.
More than likely you need to take a look at triggers. They let you do things automatically, for instance after inserting a record into a table. Refer to this link for more details:
http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html
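As a side note on the trigger conflict above: MySQL 5.0/5.1 allows only one trigger per table for each event and timing (e.g. one BEFORE INSERT), which is why adding a second one fails. The logic would have to be merged into the existing trigger, and the trigger must not INSERT into its own table; instead it can assign the value via SET NEW. A sketch only, using the column names from the question:

DELIMITER //
CREATE TRIGGER assign_member_number BEFORE INSERT ON uau3h_users
FOR EACH ROW
BEGIN
  -- Give the incoming row an unused random number instead of running
  -- a second INSERT against the same table (which MySQL forbids).
  SET NEW.member_number = (
    SELECT random_number FROM uau3h_rand900
    WHERE random_number NOT IN (SELECT member_number FROM uau3h_users)
    ORDER BY RAND()
    LIMIT 1
  );
END//
DELIMITER ;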
I am pretty new to this so sorry for my lack of knowledge.
I set up a few tables which I have successfully written to and accessed via a Perl script using the CGI and DBI modules, thanks to advice here.
This is a member list for a local band newsletter. Yeah I know, tons of apps out there but, I desire to learn this.
1- I wanted to avoid updating or inserting a row if a piece of my input matches existing data in one particular column/field.
When creating the table in phpMyAdmin, I clicked the "U" (unique) on that column's name in structure view.
That seemed to work, and no dupes are inserted, but I'd like a hard-coded Perl solution so I understand the mechanics of this.
I read up on "insert ignore" / "update ignore" and searched all over, but nothing I found seems to simply skip a dupe.
The column is not a key or autoinc, just a plain old field with an email address. (Mistake?)
2- When I write to the database, I want to do NOTHING if the incoming email address matches one in that field.
I desire the fastest method so I can loop through the export data from their existing lists (they cannot figure out the software), with no racing/locking issues or whatever other conditions I am obviously ignorant of.
Since I am creating this from scratch, 1 and 2 may be in fact partially moot. If so, what would be the best approach?
I would still like an auto-increment ID so I can access rows via the ID number or loop through with some kind of count++ foreach.
My stone-knife approach may be laughable to the gurus here, but I need to start somewhere.
Thanks in advance for your assistance.
With the email address column declared UNIQUE, INSERT IGNORE is exactly what you want for insertion. Sounds like you already know how to do the right thing!
(You could perform the "don't insert if it already exists" functionality in perl, but it's difficult to get right, because you have to wrap the test and update in a transaction. One of the big advantages of a relational database is that it will perform constraint checks like this for you, ensuring data integrity even if your application is buggy.)
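For the record, a minimal sketch (the table and column names are made up; assumes the UNIQUE index on the email column):

-- The duplicate row is silently skipped instead of raising an error.
INSERT IGNORE INTO members (email, firstname)
VALUES ('sue@example.com', 'Sue');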
For updating, I'm not sure what an "update ignore" would look like. What is in the WHERE clause that is limiting your UPDATE to only affect the 1 desired row? Perhaps that auto_increment primary key you mentioned? If you are wanting to write, for example,
UPDATE members SET firstname='Sue' WHERE member_id = 5;
then I think this "update ignore" functionality you want might just be something like
UPDATE members SET firstname='Sue' WHERE member_id = 5
AND email != 'sue@example.com';
which is an odd thing to do, but that's my best guess for what you might mean :)
Just do the insert; if the data would make the unique column non-unique, you'll get an SQL error. You should be able to trap this and do whatever is appropriate (e.g. ignore it, log it, alert the user, ...).
I know of two ways to delete data from a database table
DELETE it forever
Use a flag like isActive/isDeleted
Now the problem with isActive is that I have to track everywhere in my SQL queries whether the record is active or not. Using DELETE, however, gets rid of the data forever.
What would be the best way to back up this data?
Assuming I have multiple tables in a database, should I have a common function which just backs everything up and stores it in another table (in XML, probably?), or is there some other way?
I am using MySQL but am curious about techniques used in other DBs as well.
Replace the table with a view that hides the inactive items.
Or write a trigger on DELETE that backs up the row to an archive table.
You could use a trigger that fires on deleting records to back them up into some kind of graveyard table.
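A sketch of such a trigger, with made-up table and column names (assume members(member_id, email)):

-- One-time setup: an archive table mirroring the original, plus a timestamp.
CREATE TABLE members_graveyard LIKE members;
ALTER TABLE members_graveyard ADD COLUMN deleted_at DATETIME;

DELIMITER //
CREATE TRIGGER members_bury BEFORE DELETE ON members
FOR EACH ROW
BEGIN
  -- Copy the doomed row into the graveyard before it disappears.
  INSERT INTO members_graveyard (member_id, email, deleted_at)
  VALUES (OLD.member_id, OLD.email, NOW());
END//
DELIMITER ;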
You could use an isDeleted column and define a view which selects all columns except isDeleted, with the condition isDeleted = false. Then have all your apps work only with the view.
You could maintain a history table, where you back up each record with a timestamp.
One of the biggest reasons for not deleting data is that it may be required for a relation - for example, the user may decide to delete an old customer from the database, but you still need the customer record because it is referenced by old invoices (which may have a much longer lifespan).
Based on this the best solution is often the "IsDeleted" type of column, combined with a view (Quassnoi has mentioned partitioning, which can help with performance issues that might pop up due to a lot of invisible data).
You can partition your tables on the DELETED column and define the views which would include the condition:
… AND deleted = 0
This will make the queries over the active data just as simple and efficient.
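A sketch of that setup (table and column names are illustrative; note that MySQL requires the partitioning column to be part of every unique key, hence the composite primary key):

CREATE TABLE posts (
  id      INT NOT NULL,
  title   VARCHAR(255),
  deleted TINYINT NOT NULL DEFAULT 0,
  PRIMARY KEY (id, deleted)
)
PARTITION BY LIST (deleted) (
  PARTITION p_active  VALUES IN (0),
  PARTITION p_deleted VALUES IN (1)
);

CREATE VIEW active_posts AS
  SELECT id, title FROM posts WHERE deleted = 0;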
Well, if you were using SQL Server you could use triggers, which would allow you to move the record to a deleted table.
I have a model Post which has an expiry_date. I want to know the best way to manage scalability in this case. Two options:
1- Whenever I want to SELECT from the table, I have to include WHERE expiry_date > NOW(). If the table Post grows like a monster, I will be in trouble. Imagine after 3 years or more. Indexes will be huge too.
2- Have a trigger, cron job, or a plugin (if it exists) that goes through the table and moves expired items to a new table Post_Archive. That way, I maintain only current Posts in my main table, which means that after 3 years I won't be as badly off as with option 1.
If you need to archive data on a continuous basis (your #2), then a good option is Maatkit.
http://www.maatkit.org/
It can "nibble" away at data in chunks rather than running mass queries which consume lots of resources (and avoiding polluting your key cache).
So yes, you would run a Maatkit job from cron.
In the meantime, if you want to do #1 as well, you could implement a view which conveniently wraps up the WHERE expiry_date > NOW() condition so you don't have to include it in all your code.
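Something like this, for instance (the view name is arbitrary):

CREATE VIEW current_posts AS
  SELECT * FROM Post WHERE expiry_date > NOW();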
A cron job sounds good to me, and it can be done by feeding a simple script directly to the mysql command, e.g., roughly:
CREATE TEMPORARY TABLE Moving
  SELECT * FROM Post WHERE expiry_date <= NOW();
INSERT INTO Post_Archive
  SELECT * FROM Moving;
DELETE FROM Post
  WHERE id IN (SELECT id FROM Moving);
DROP TEMPORARY TABLE Moving;
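Hooked up to cron, that could look something like this (the user, password, database, path, and schedule are all assumptions):

# Run nightly at 3 AM; archive_posts.sql contains the statements above.
0 3 * * * mysql -u archiver -pSECRET blogdb < /path/to/archive_posts.sql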