If I am building a multi-shop e-commerce solution and want the orders table to maintain a shop based sequential ID, what is the best way of doing this?
For instance, imagine these order IDs in sequence:
UK0001
UK0002
UK0003
DE0001
UK0004
DE0002
etc.
Through a grouped PK ID (MySQL / MyISAM). MySQL will manage this automatically if a country field and an auto-incrementing ID field are used together as a composite key. But MyISAM has some inherent problems, such as table locking, and this feature seems to be available only in MyISAM, so moving to another database engine would not be possible with this solution.
Programmatically. Let's say we have two fields: order_id (a global auto-increment PK column managed by the DB) and order_number (a country-specific sequential ID field maintained through code); the table also has a shop_id column to associate orders with shops.
So, after the new order record has been created, the DB engine has assigned an ID to it, and that ID has been retrieved in code as the variable $newID:
SELECT order_number + 1 AS new_order_number FROM orders WHERE order_id < $newID AND shop_id = 'UK' ORDER BY order_id DESC LIMIT 1
(this is pseudo code / sql btw)
Questions:
Is this a feasible solution, or is there a better, more efficient way to do this?
When the table has 1 million + records in it, will the additional query overhead per order submission cause problems, or not?
It seems there'd be a chance of order_number clashes if two orders are placed for the same country and they are processed simultaneously. Is this a possibility? If so, is there a way of protecting against it (perhaps a unique index and a transaction)?
Look forward to your help!
Thanks
Yes, you are definitely on the right track. Put a UNIQUE index on the (shop_id, order_number) pair and do exactly what you were planning. Then catch the duplicate-key error: if the insert fails because a concurrent order grabbed the same number, recompute the number and retry the same statement. The unique index guarantees two orders for the same shop can never end up with the same order number, even when they are processed simultaneously.
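The retry-on-unique-violation approach above can be sketched as follows. This is a minimal illustration using SQLite in place of MySQL (the table and column names are made up for the example; in MySQL you would catch the duplicate-key error code 1062 instead of sqlite3.IntegrityError):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    order_id     INTEGER PRIMARY KEY AUTOINCREMENT,  -- global ID
    shop_id      TEXT NOT NULL,
    order_number INTEGER NOT NULL                    -- per-shop sequence
);
-- the unique index is what makes the retry loop safe
CREATE UNIQUE INDEX ux_shop_order ON orders (shop_id, order_number);
""")

def place_order(conn, shop_id, retries=5):
    """Insert an order, computing the next per-shop number; retry on a clash."""
    for _ in range(retries):
        try:
            cur = conn.execute(
                "INSERT INTO orders (shop_id, order_number) "
                "SELECT ?, COALESCE(MAX(order_number), 0) + 1 "
                "FROM orders WHERE shop_id = ?",
                (shop_id, shop_id))
            conn.commit()
            return cur.lastrowid
        except sqlite3.IntegrityError:
            conn.rollback()  # another writer took the number; try again
    raise RuntimeError("could not allocate an order number")

for shop in ["UK", "UK", "DE", "UK"]:
    place_order(conn, shop)

print(conn.execute(
    "SELECT shop_id, order_number FROM orders ORDER BY order_id").fetchall())
# → [('UK', 1), ('UK', 2), ('DE', 1), ('UK', 3)]
```

Computing the number inside the INSERT ... SELECT avoids a window between reading the maximum and writing the row; the unique index catches whatever races remain.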
Related
My MySQL table is set up with an auto-incrementing primary key. Whenever I
SELECT * FROM [MYTABLE]
The record with the highest primary key value is not displayed last. Is there a reason for this? I can
SELECT * FROM [MYTABLE] ORDER BY [MYTABLE].ID ASC
and the highest is displayed last. Sorry, I am not at liberty to share anything from the database. Perhaps there is some sort of underlying data field (like a record number), not contained in the table, that is being used for the sort. This is MySQL 5.7.19 on a Windows server. It seems to me that sorting by the primary key makes more sense than what it's doing.
What you want is not something MySQL guarantees. MySQL inherently returns rows in whatever order is convenient for how they are stored, not necessarily insert order. Your options are to physically re-sort the table every time it changes, or simply to sort on every query with an ORDER BY on the primary key.
select with ordering
SELECT * FROM [MYTABLE] ORDER BY [MYTABLE].ID ASC
Keep in mind that you can never guarantee the order of an auto-incremented column once you start inserting and removing data, so you shouldn't rely on any order except the one you specify using ORDER BY in your queries. Physically re-sorting the table, by contrast, is an expensive operation, as it requires the indexes to be completely rebuilt, so I wouldn't suggest doing it often.
Can I somehow make MySQL insert data in a way that the database keeps a predefined order?
Let's say I make a highscore list. I could always use ORDER BY when selecting the data, but sorting 1,000,000+ rows costs a lot of performance every time a user browses the highscore list.
My idea is that I want to insert the data in such a way that the table is always ordered by score DESC, so the ORDER BY work doesn't have to happen when users browse the list.
Tables have no inherent order that you should rely upon (they may, by coincidence, return rows in the same order that they were inserted, or by primary key sort order), so if you need a particular order, you should always use an ORDER BY clause.
However, the system can perform the ordering much more cheaply if it has an index available on the column named in your ORDER BY clause, so add an index:
CREATE INDEX IX_Table_Scores ON `Table` (score DESC);
(Note that before MySQL 8.0, the DESC keyword in an index definition is parsed but ignored; the ascending index can still be scanned in reverse to satisfy ORDER BY score DESC.)
To put the same solution another way: keeping that many records physically stored in order would be tedious, but if you create an index on the high-score column, MySQL maintains that ordering internally, so fetching the records with ORDER BY performs much better than an unindexed query. So: create an index on the high-score column.
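The effect of that index can be checked directly from the query plan. A small sketch using SQLite as a stand-in for MySQL (in MySQL you would use EXPLAIN instead; table and index names here are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE highscores (user_id INTEGER PRIMARY KEY, score INTEGER NOT NULL);
CREATE INDEX ix_highscores_score ON highscores (score DESC);
""")
# 1000 rows of sample data
conn.executemany("INSERT INTO highscores VALUES (?, ?)",
                 [(i, i * 7 % 1000) for i in range(1, 1001)])

# With the index in place, the engine walks the index instead of sorting:
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT user_id, score FROM highscores ORDER BY score DESC LIMIT 50"
).fetchall()
print(plan)
```

The plan shows the index being scanned with no temporary sort step, which is exactly the cost saving the answer describes: the ORDER BY work is paid once per insert (index maintenance) rather than on every page view.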
My database knowledge is reasonable, I would say; I'm using MySQL (InnoDB) for this and have done some Postgres work as well. Anyway...
I have a large amount of Yes or No questions.
A large amount of people can contribute to the same poll.
A user can choose either option and this will be recorded in the database.
User can change their mind later and swap choices which will require an update to the data stored.
My current plan for storing this data:
POLLID, USERID, DECISION, TIMESTAMP
Obviously user data is in another table.
To record a choice, I would have to query to see whether they have voted before, and update if so, otherwise insert.
If I want to see the poll results, I would need to iterate through all the decisions (albeit indexed portions of them) every time someone views the poll.
My questions are
Is there any more efficient way to store/query this?
Would I have an index on POLLID, or on POLLID & USERID (maybe just a unique constraint)? Or something else?
An additional side question: why don't I have the option to choose HASH vs. BTREE indexes on my tables like I would in Postgres?
The design sounds good, a few ideas:
A table for polls: poll id, question.
A table for choices: choice id, text.
A table to link polls to choices: poll id->choice ids.
A table for users: user details, user ids.
A votes table: (user id, poll id), choice id, time stamp. (brackets are a unique pair)
Inserting/updating for a single user will work fine, as you can just check if an entry exists for the user id and the poll id.
You can view the results much easier than iterating through by using COUNT.
e.g.: SELECT COUNT(*) FROM votes WHERE pollid = id AND decision = choiceid
That would tell you how many people voted for "choiceid" in the poll "pollid".
Late Edit:
This is a way of inserting if it doesn't exist and updating if it does. In MySQL, with the unique (user id, poll id) pair in place, it can be done atomically in a single statement rather than a separate check:
INSERT INTO votes (pollid, userid, decision) VALUES ('pollid', 'uid', 'choice')
ON DUPLICATE KEY UPDATE decision = VALUES(decision);
(An IF EXISTS ... UPDATE ... ELSE INSERT block would also work, but that is SQL Server syntax, and it leaves a race between the check and the write.)
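The insert-or-update pattern can be demonstrated end to end. This sketch uses SQLite, whose ON CONFLICT ... DO UPDATE clause plays the role of MySQL's INSERT ... ON DUPLICATE KEY UPDATE (column names are simplified from the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE votes (
    user_id  INTEGER NOT NULL,
    poll_id  INTEGER NOT NULL,
    decision TEXT NOT NULL,
    UNIQUE (user_id, poll_id)   -- one vote per user per poll
);
""")

def vote(conn, user_id, poll_id, decision):
    # Atomic upsert: inserts a new vote, or overwrites the existing one
    # when the (user_id, poll_id) unique constraint would be violated.
    conn.execute(
        "INSERT INTO votes (user_id, poll_id, decision) VALUES (?, ?, ?) "
        "ON CONFLICT (user_id, poll_id) DO UPDATE SET decision = excluded.decision",
        (user_id, poll_id, decision))

vote(conn, 1, 10, "yes")
vote(conn, 2, 10, "yes")
vote(conn, 1, 10, "no")   # user 1 changes their mind

yes_count = conn.execute(
    "SELECT COUNT(*) FROM votes WHERE poll_id = 10 AND decision = 'yes'"
).fetchone()[0]
print(yes_count)
# → 1
```

The COUNT query at the end is the same aggregation suggested above for viewing results, so no per-row iteration is needed in application code.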
I have two tables, each one has a primary ID column as key. I want the two tables to share one increasing key counter.
For example, when the two tables are empty, and counter = 1. When record A is about to be inserted to table 1, its ID will be 1 and the counter will be increased to 2. When record B is about to be inserted to table 2, its ID will be 2 and the counter will be increased to 3. When record C is about to be inserted to table 1 again, its ID will be 3 and so on.
I am using PHP as the outside language. Now I have two options:
Keep the counter in the database as a single-row-single-column table. But every time I add things to table A or B, I need to update this counter table.
I can keep the counter as a global variable in PHP. But then I would need to initialize the counter from the maximum key of the two tables when Apache starts, and I have no idea how to do that.
Any suggestion for this?
The background is, I want to display a mix of records from the two tables in either ASC or DESC order of the creation time of the records. Furthermore, the records will be displayed in page-style, say, 50 records per page. Records are only added to the database rather than being removed. Following my above implementation, I can just perform a "select ... where key between 1 and 50" from two tables and merge the select datasets together, sort the 50 records according to IDs and display them.
Is there any other idea of implementing this requirement?
Thank you very much
Well, you will gain next to nothing with this setup; if you just keep the datetime of the insert you can easily do
SELECT * FROM
(
SELECT columnA, columnB, inserttime
FROM table1
UNION ALL
SELECT columnA, columnB, inserttime
FROM table2
) AS merged
ORDER BY inserttime
LIMIT 0, 50
And it will perform decently.
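The merged-and-ordered query can be exercised with a few sample rows. A minimal sketch in SQLite (which, unlike MySQL, lets ORDER BY apply directly to the compound SELECT, so no derived-table alias is needed; the column and table names follow the query above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (columnA TEXT, inserttime TEXT);
CREATE TABLE table2 (columnA TEXT, inserttime TEXT);
INSERT INTO table1 VALUES ('a1', '2024-01-01'), ('a3', '2024-01-03');
INSERT INTO table2 VALUES ('b2', '2024-01-02'), ('b4', '2024-01-04');
""")

# Interleave the two tables purely by insert time, first page of 50:
rows = conn.execute("""
    SELECT columnA, inserttime FROM table1
    UNION ALL
    SELECT columnA, inserttime FROM table2
    ORDER BY inserttime
    LIMIT 50 OFFSET 0
""").fetchall()
print([r[0] for r in rows])
# → ['a1', 'b2', 'a3', 'b4']
```

Rows from the two tables come back correctly interleaved without any shared counter at all, which is the answer's point.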
Alternatively (if chasing the last drop of performance), the fact that you are merging the results may be an indicator that you should merge the tables (why have two tables at all if you always merge the results?).
Or model it as SQL subtyping: one table maintains the IDs and the other common attributes, and the other two tables reference that common ID sequence as a foreign key.
If you need the creation time, wouldn't it be easier to add a timestamp field to your DB and sort by that field?
I believe using IDs as a reference for creation order is bad practice.
If you really must do this, there is a way. Create a one-row, one-column table to hold the last-used row number, and set it to zero. On each of your two data tables, create an AFTER INSERT trigger that reads that table, increments it, and sets the newly inserted row's number to that value. I can't remember the exact syntax because I haven't created a trigger for years; see http://dev.mysql.com/doc/refman/5.0/en/triggers.html
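The counter-table-plus-trigger idea can be sketched concretely. This uses SQLite triggers to mirror what the MySQL triggers would do (all table, column, and trigger names here are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE counter (last_id INTEGER NOT NULL);
INSERT INTO counter VALUES (0);

CREATE TABLE t1 (shared_id INTEGER, payload TEXT);
CREATE TABLE t2 (shared_id INTEGER, payload TEXT);

-- After each insert: bump the shared counter, then stamp the new row with it.
CREATE TRIGGER t1_seq AFTER INSERT ON t1 BEGIN
    UPDATE counter SET last_id = last_id + 1;
    UPDATE t1 SET shared_id = (SELECT last_id FROM counter)
    WHERE rowid = NEW.rowid;
END;
CREATE TRIGGER t2_seq AFTER INSERT ON t2 BEGIN
    UPDATE counter SET last_id = last_id + 1;
    UPDATE t2 SET shared_id = (SELECT last_id FROM counter)
    WHERE rowid = NEW.rowid;
END;
""")

conn.execute("INSERT INTO t1 (payload) VALUES ('A')")
conn.execute("INSERT INTO t2 (payload) VALUES ('B')")
conn.execute("INSERT INTO t1 (payload) VALUES ('C')")

t1_ids = [r[0] for r in conn.execute("SELECT shared_id FROM t1 ORDER BY rowid")]
t2_ids = [r[0] for r in conn.execute("SELECT shared_id FROM t2 ORDER BY rowid")]
print(t1_ids, t2_ids)
# → [1, 3] [2]
```

The IDs interleave across the two tables exactly as the question describes (A→1, B→2, C→3), and the application code never touches the counter table.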
I use one table with some usual columns such as id, name, email, etc. I'm also inserting a variable number of records in each transaction. To be more efficient, I need one unique id, let's call it a transaction id, that is the same for each group of records inserted in one transaction, and it should increment between transactions. What is the best approach for doing that?
I thought of using
select max(transaction_id) from users
and incrementing that value on the server side, but that seems like an old-fashioned solution.
You could have another table, usergroups, with an auto-incrementing primary key; you first insert a record there (maybe including some other useful information about the group), then get the group's unique id generated by that insert using mysql_insert_id(), and use it as the groupid for your inserts into the first table.
This way you're still using MySQL's auto-numbering, which guarantees you a unique groupid. Doing select max(transaction_id) from users and incrementing it isn't safe, since it's non-atomic (another thread may read the same max(transaction_id) before you've had a chance to increment it, and will start inserting records with a conflicting groupid).
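That flow can be sketched end to end. A minimal illustration using SQLite and its lastrowid in place of mysql_insert_id() (table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE usergroups (
    group_id   INTEGER PRIMARY KEY AUTOINCREMENT,
    created_at TEXT DEFAULT (datetime('now'))
);
CREATE TABLE users (
    id       INTEGER PRIMARY KEY AUTOINCREMENT,
    group_id INTEGER NOT NULL REFERENCES usergroups(group_id),
    name     TEXT NOT NULL
);
""")

def insert_batch(conn, names):
    # One row in usergroups hands out one atomic, unique group id;
    # reading it back via lastrowid is the mysql_insert_id() equivalent.
    cur = conn.execute("INSERT INTO usergroups DEFAULT VALUES")
    group_id = cur.lastrowid
    conn.executemany("INSERT INTO users (group_id, name) VALUES (?, ?)",
                     [(group_id, n) for n in names])
    conn.commit()
    return group_id

g1 = insert_batch(conn, ["alice", "bob"])
g2 = insert_batch(conn, ["carol"])
print(g1, g2)
# → 1 2
```

Because the id comes from the database's own auto-increment, two concurrent batches can never receive the same group id, unlike the max()+1 approach.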
Add new table with auto_increment column
You can create a new table with an auto_increment column, so you'll be able to generate unique integers in a thread-safe way. It'll work like this:
DB::insert_into_transaction_table()
transaction_id = DB::mysql_last_insert_id() ## this is integer value
for each record:
DB::insert_into_table(transaction_id, ...other parameters...)
And you don't require mysql transactions for this.
Generate unique string on server side before inserting
You can generate a unique id (for example a GUID) on the server side and use it for all the records you insert. But your transaction_id field has to be long enough to store values generated this way (some char(...) type). It'll work like this:
transaction_id = new_GUID() ## this is usually a string value
for each record:
DB::insert_into_table(transaction_id, ...other parameters...)
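The GUID variant is straightforward to make concrete. A small sketch using Python's uuid module and SQLite (the table layout is invented for the example; in PHP you might use something like com_create_guid() or a UUID library instead):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (transaction_id CHAR(32), payload TEXT)")

def insert_transaction(conn, payloads):
    # A GUID minted on the application side: no database round trip is
    # needed to reserve the id, at the cost of a wider string column.
    transaction_id = uuid.uuid4().hex   # 32 hex characters
    conn.executemany(
        "INSERT INTO records (transaction_id, payload) VALUES (?, ?)",
        [(transaction_id, p) for p in payloads])
    conn.commit()
    return transaction_id

t1 = insert_transaction(conn, ["a", "b", "c"])
t2 = insert_transaction(conn, ["d"])
print(len(t1), t1 != t2)
# → 32 True
```

Note the trade-off against the auto-increment table above: GUIDs avoid a write to a counter table, but they are not sequential, so they can't be used to order transactions by creation time.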