Long story short: I'm dealing with a legacy ticket system where the 'id' field (primary key, auto-increment) originally incremented by 1, but at some point the global auto-increment variable was changed to 10. So I have a mess of tickets where the id increments by 1 for a while and then starts incrementing by 10. I've been asked to cover this up.
I've created a new int column called "ticket_id" (not a primary key and not auto-incrementing), and made some web-app changes to read the last ticket_id number, add 1, and save that with each new ticket record. On the surface the web app now uses the new 'ticket_id' field, while still using the old 'id' field behind the scenes.
For the existing ticket records: is there an easy way to quickly populate the new column with an incrementing value (1, 2, 3, ...), ordering by the id field?
Try something like the following:
$result = mysql_query("SELECT * FROM my_table ORDER BY id");
$counter = 1;
while ($row = mysql_fetch_assoc($result)) {
    // $row['id'] is your current primary key
    mysql_query("UPDATE my_table SET ticket_id=$counter WHERE id=" . $row['id']);
    $counter++;
}
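For a one-off backfill you can also do this in a single pass (MySQL allows something like `SET @c := 0; UPDATE my_table SET ticket_id = (@c := @c + 1) ORDER BY id;`). Here is a runnable sketch of the same idea using Python's sqlite3, with made-up data standing in for the real table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, ticket_id INTEGER)")
# Simulate the legacy mess: ids increment by 1, then jump by 10
conn.executemany("INSERT INTO my_table (id) VALUES (?)", [(1,), (2,), (3,), (13,), (23,)])

# Backfill ticket_id as a dense 1..N sequence, ordered by the old primary key
rows = conn.execute("SELECT id FROM my_table ORDER BY id").fetchall()
conn.executemany(
    "UPDATE my_table SET ticket_id = ? WHERE id = ?",
    [(n, row[0]) for n, row in enumerate(rows, start=1)],
)
conn.commit()
print(conn.execute("SELECT id, ticket_id FROM my_table ORDER BY id").fetchall())
# [(1, 1), (2, 2), (3, 3), (13, 4), (23, 5)]
```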
I am using the code below to clone records from one table to another using Entity Framework:
List<MasterTableRecord> _records = context.MasterTableRecords
    .Where(c => ......).ToList();
_records?.ForEach(record =>
{
    var _recordValues = context.Entry(record).CurrentValues.Clone();
    var recordM = new HistoryMaster();
    context.Entry(recordM).CurrentValues.SetValues(_recordValues);
    context.HistoryMaster.Add(recordM);
    context.MasterTableRecords.Remove(record);
});
At a later time the application might re-insert the record back into the master table from history, and at that point the auto-increment ID causes trouble. Over time, other records will have consumed that auto-increment id on the master table, so saving the changes fails with Duplicate entry 'xxx' for key 'xx.PRIMARY'.
Example: ID 10 is moved to history, and two days later, when we move the record from history back to master, some other record may already have been inserted into master with ID 10.
So when moving records back from history to master, how can we tell the system to take the next available auto-increment ID?
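A common way out is to let the database assign a fresh key on re-insert instead of reusing the archived one: after SetValues, reset the entity's ID property to its default (0) before Add, assuming the column is an identity column. The underlying idea, sketched with Python's sqlite3 for illustration (table and data are made up; the real fix would of course live in the C# code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.execute("INSERT INTO master (name) VALUES ('archived row')")   # gets id 1
conn.execute("DELETE FROM master WHERE id = 1")                     # moved to history
conn.execute("INSERT INTO master (name) VALUES ('newer row')")      # consumes id 2

# Restoring from history: pass NULL (i.e. "no value") for the key,
# so the database hands out the next free auto-increment value
# instead of colliding with whatever was inserted in the meantime
conn.execute("INSERT INTO master (id, name) VALUES (NULL, 'archived row')")
new_id = conn.execute(
    "SELECT id FROM master WHERE name = 'archived row'").fetchone()[0]
print(new_id)  # 3, not the original 1
```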
@TableGenerator(name = "Emp_Gen", table = "ID_GEN", pkColumnName = "GEN_NAME", pkColumnValue = "Employee_GEN", valueColumnName = "GEN_VAL", initialValue = 1000, allocationSize = 100)
Everything is OK, but initialValue has no effect.
Below is the table named "employee" (note: MySQL and Hibernate JPA are used).
I think the first row's 'id' should be 1000, not 1, right? But if it's 1, the second should be 101...
Can anyone help me?
As for the first value being 1 instead of 1001: that is Hibernate bug HHH-4228, with status "Won't Fix". The correct first value in your case is 1001, not 1000, because initialValue initializes the column that stores the last value returned (not the next value to be returned).
Using the following in persistence.xml (as also suggested in the bug report) will fix the problem with the first value:
<property name="hibernate.id.new_generator_mappings" value="true"/>
The meaning of allocationSize is likely misunderstood in the question. It is not the step to increment by; it is how many values are allocated from the table with one database query. This is an optimization that avoids an extra query every time an id value is needed for a new entity.
A side effect is that restarting the application often leaves holes in the sequence. With initialValue = 1000 and allocationSize = 100:
Use value 1001 (the value in valueColumn is updated to 1100).
Shut down and restart the application.
The next value will be 1101, not 1002.
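That restart behavior can be reproduced with a toy allocator. This is a simplified sketch of what a table generator does (not Hibernate's actual code), with GEN_VAL storing the last value handed out:

```python
class TableGenerator:
    """Simplified table-based id allocator: one 'database' read per block."""
    def __init__(self, table, initial_value=1000, allocation_size=100):
        self.table = table  # dict simulating the ID_GEN row
        self.table.setdefault("GEN_VAL", initial_value)
        self.allocation_size = allocation_size
        self.next_id = None
        self.block_end = -1

    def next(self):
        if self.next_id is None or self.next_id > self.block_end:
            # One database round trip: reserve allocation_size values
            start = self.table["GEN_VAL"]
            self.table["GEN_VAL"] = start + self.allocation_size
            self.next_id = start + 1
            self.block_end = start + self.allocation_size
        value = self.next_id
        self.next_id += 1
        return value

db = {}
gen = TableGenerator(db)
first = gen.next()                 # 1001 (with the new generator mappings)
restarted = TableGenerator(db)     # simulate an application restart
after_restart = restarted.next()   # 1101, not 1002 -- the hole described above
print(first, after_restart)
```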
I have a site where users can log-in and add items to a list.
The user logs in and the session stores their e-mail, which I use to identify them in a user table.
Then, they can input a list item and add to a list table that contains their ID and their list items.
Sometimes the ID is added and sometimes it comes up null (however, the list item text is always added). This seems erratic because most of the time the ID is included.
Any ideas? The table type is MyISAM. I'm new to programming, btw.
Here's an example of my code:
<?php
session_start();
$item = $_REQUEST['item'];
$email = $_SESSION['email'];
if ($item)
{
    mysql_connect("localhost","root","") or die("We couldn't connect!");
    mysql_select_db("table");
    $query = mysql_query("SELECT ID FROM users WHERE email='".$_SESSION['email']."'");
    $user_id = mysql_result($query,0);
    mysql_query("INSERT INTO items (user_ID,item_name) VALUES('$user_id','$item')");
}
So every time I test it by logging into my site myself, no problems. But increasingly, I have users who try to add items and it creates a record where item_name shows up correctly but user_ID is set to 0 (default).
First off, read what I said about SQL injection attacks in the comments to your question.
It would be a good idea to store the user_id in the $_SESSION so you wouldn't have to query for it every time based on the email... but if you insist on just having the email in the $_SESSION, then you actually only need one query. Adjusted code:
<?php
session_start();
if (!empty($_REQUEST['item']) && isset($_SESSION['email']))
{
    mysql_connect("localhost","root","") or die("We couldn't connect!");
    mysql_select_db("table");
    // escape only after connecting: mysql_real_escape_string needs an open connection
    $item = mysql_real_escape_string($_REQUEST['item']);
    $email = mysql_real_escape_string($_SESSION['email']);
    mysql_query("INSERT INTO items (user_ID, item_name) VALUES ((SELECT ID FROM users WHERE email='$email'), '$item')");
}
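The single-statement pattern here (an INSERT whose value comes from an inline subquery) can be shown in runnable form; this sketch uses Python's sqlite3 with made-up table contents, just to show the shape of the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (ID INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE items (user_ID INTEGER, item_name TEXT);
    INSERT INTO users VALUES (5, 'alice@example.com');
""")
# One statement: look up the user's ID and insert the item together
conn.execute("""
    INSERT INTO items (user_ID, item_name)
    VALUES ((SELECT ID FROM users WHERE email = ?), ?)
""", ("alice@example.com", "milk"))
print(conn.execute("SELECT user_ID, item_name FROM items").fetchall())
# [(5, 'milk')]
```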
Like Jeff Watkins said, the session could be timed out, so it would be a good idea to first check if it's set using isset().
Otherwise if you stored the userid in the session, you could just reference it directly as $_SESSION['user_id'] instead of doing a subquery in the insert.
I have a table with 2 columns, ID and name. I set Identity to 'Yes' for the ID column.
I want to insert data into the table with LINQ. I only want to get the name column from the user in my application; the ID column should then be filled in automatically by the database, without me supplying a value for it.
What should I do?
I write in C# and work with LINQ.
So your database field already is an INT IDENTITY - right?
Next, in your LINQ to SQL designer, you need to make sure your column is set to:
Auto Generated Value = True
Auto-Sync = OnInsert
Now if you add an entry to your database, the ID will be automatically updated from the value assigned to it by the database:
YourClass instance = new YourClass();
instance.Name = "some name";
YourDataContext.YourClasses.InsertOnSubmit(instance);
YourDataContext.SubmitChanges();
After the SubmitChanges call, your instance.ID should now contain the ID from the database.
If you used the DBML designer, that should be setup correctly.
The 'id' will be populated after you call SubmitChanges.
Make sure your database table is set up correctly, with ID as the primary key field and set to auto-increment. Regenerate your DBML if you need to by deleting the object and re-adding it. It should know that the ID is not needed on insert and then auto-fill it once the insert has succeeded.
I have a site with about 30,000 members to which I'm adding a functionality that involves sending a random message from a pool of 40 possible messages. Members can never receive the same message twice.
One table contains the 40 messages and another table maps the many-to-many relationship between messages and members.
A cron script runs daily, selects a member from the 30,000, selects a message from the 40 and then checks to see if this message has been sent to this user before. If not, it sends the message. If yes, it runs the query again until it finds a message that has not yet been received by this member.
What I'm worried about now is that this m-m table will become very big: at 30,000 members and 40 messages we already have 1.2 million rows through which we have to search to find a message that has not yet been sent.
Is this a case for denormalisation? In the members table I could add 40 columns (message_1, message_2 ... message_40) in which a flag is set each time a message is sent. If I'm not mistaken, this would make the queries in the cron script run much faster?
I know that doesn't answer your original question, but wouldn't it be way faster if you selected all the messages that haven't yet been sent to a user and then picked one of those randomly?
See this pseudo-MySQL:
SELECT
    GROUP_CONCAT(messages.id) AS unsent_messages,
    user.id AS user
FROM
    messages,
    user
WHERE
    messages.id NOT IN (
        SELECT
            sent_messages.id
        FROM
            sent_messages
        WHERE
            user.id = sent_messages.user
    )
GROUP BY user.id
You could also append the ids of the sent messages to a varchar field in the members table.
Good manners aside, this makes it easy to use a single statement to get a message that has not yet been sent to a specific member.
Like this (if you surround the ids with '-'):
SELECT message.id
FROM member, message
WHERE member.id = 2321
AND member.sentmessages NOT LIKE CONCAT('%-', message.id, '-%')
1.2M rows at 8 bytes (+ overhead) per row is not a lot. It's so small I doubt it even needs indexing (but of course you should add an index anyway).
Normalization reduces redundancy, and it is what you do when you have a large amount of data, which seems to be your case. You need not denormalize. Let there be an M-to-M table between members and messages.
You can archive the old data as your M-to-M data grows. I don't even see any conflicts, because your cron job runs daily and accounts only for the current day's data, so you could archive the M-to-M table data every week.
I believe there will be maintenance issues if you denormalize by adding additional columns to the members table, so I don't recommend it. Archiving old data can save you from trouble.
You could store only the available (unsent) messages. This implies extra maintenance when you add or remove members or message types (nothing that can't be automated with foreign keys and triggers) but simplifies delivery: pick a random row for the user, send the message, and remove the row. Also, your database gets smaller as messages are sent ;-)
You can achieve the effect of sending random messages by preallocating a random string in your m-m table plus a pointer to the offset of the last message sent.
In more detail, create a table MemberMessages with columns:
memberId,
messageIdList char(80) or varchar,
lastMessage int,
with memberId as the primary key.
Pseudo-code for the cron job then looks like this...
ONE. Select the next message for a member. If no row exists in MemberMessages for this member, go to step TWO. The SQL to select the next message looks like:
select substr(messageIdList, 2*lastMessage + 1, 2) as nextMessageId
from MemberMessages
where memberId = ?
Send the message identified by nextMessageId, then update lastMessage, incrementing it by 1 (wrapping back to zero after 39):
update MemberMessages
set lastMessage = MOD(lastMessage + 1, 40)
where memberId = ?
TWO. Create a random list of message ids as a string of two-digit couplets, like 2117390740... This is your random list of message IDs as an 80-char string. Insert a row into MemberMessages for the member, setting messageIdList to the 80-char string and lastMessage to 1.
Send the message identified by the first couplet from the list to the member.
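The couplet scheme can be sketched as follows (Python, with message ids 0..39 for simplicity; a real list would hold your 40 actual ids):

```python
import random

NUM = 40  # pool size from the question

def new_member_row():
    """Build the 80-char messageIdList plus the initial lastMessage pointer."""
    ids = list(range(NUM))
    random.shuffle(ids)
    return "".join("%02d" % i for i in ids), 0  # (messageIdList, lastMessage)

def next_message(message_id_list, last_message):
    """Read the couplet at the pointer, then advance the pointer modulo 40."""
    couplet = message_id_list[2 * last_message : 2 * last_message + 2]
    return int(couplet), (last_message + 1) % NUM

msg_list, last = new_member_row()
sent = set()
for _ in range(NUM):
    msg, last = next_message(msg_list, last)
    sent.add(msg)
print(len(sent))  # 40: one full cycle delivers every message exactly once
```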
You can create a kind of queue / heap.
ReceivedMessages
UserId
MessageId
then:
Pick up a member and select message to send:
SELECT * FROM Messages WHERE MessageId NOT IN (SELECT MessageId FROM ReceivedMessages WHERE UserId = #UserId) ORDER BY RAND() LIMIT 1
then insert the MessageId and UserId into ReceivedMessages
and do the send logic there.
I hope that helps.
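A runnable sketch of that queue approach using Python's sqlite3 (schema from the answer, data made up; here user 7 has already received 39 of the 40 messages):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Messages (MessageId INTEGER PRIMARY KEY, Body TEXT);
    CREATE TABLE ReceivedMessages (UserId INTEGER, MessageId INTEGER);
""")
conn.executemany("INSERT INTO Messages VALUES (?, ?)",
                 [(i, "message %d" % i) for i in range(1, 41)])
conn.executemany("INSERT INTO ReceivedMessages VALUES (7, ?)",
                 [(i,) for i in range(1, 40)])

# Pick one random message this user has not received yet
row = conn.execute("""
    SELECT MessageId FROM Messages
    WHERE MessageId NOT IN
        (SELECT MessageId FROM ReceivedMessages WHERE UserId = ?)
    ORDER BY RANDOM() LIMIT 1
""", (7,)).fetchone()
print(row)  # (40,) -- the only message left for this user
conn.execute("INSERT INTO ReceivedMessages VALUES (7, ?)", (row[0],))
```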
There are potential easier ways to do this, depending on how random you want "random" to be.
Consider that at the beginning of the day you shuffle an array A, [0..39] which describes the order of the messages to be sent to users today.
Also, consider that you have at most 40 cron jobs used to send messages to the users. Given the Nth cron job and ID, the selected user's numeric ID, you can choose M, the index of the message to send:
M = (A[N] + ID) % 40.
This way, a given ID will not receive the same message twice on the same day (because A[N] differs for each N), and two randomly selected users have a 1/40 chance of receiving the same message. If you want more "randomness" you can use multiple arrays.
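A sketch of that scheme (the cron job index and the member id below are illustrative):

```python
import random

NUM_MESSAGES = 40

# Shuffle once at the beginning of the day
A = list(range(NUM_MESSAGES))
random.shuffle(A)

def message_index(n, member_id):
    """Message for cron job n and this member, today."""
    return (A[n] + member_id) % NUM_MESSAGES

# Because A is a permutation, A[n] differs for every n, so one member
# never gets the same message twice from the day's 40 jobs
seen = {message_index(n, member_id=2321) for n in range(NUM_MESSAGES)}
print(len(seen))  # 40 distinct messages
```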