Using UNIQUE indices with NULL fields in MySQL?

I can periodically check for a list of users that are currently online. I want to turn this into something more useful, like a list of entries per user with login / logout times. There is no other way to determine this information apart from checking who is currently online.
After some thinking I came up with something like this:
CREATE TABLE onlineActivity (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
name CHAR (32) NOT NULL,
login_time DATETIME NOT NULL,
logout_time DATETIME NOT NULL,
time SMALLINT (3) NOT NULL DEFAULT 0,
online BOOL DEFAULT NULL,
UNIQUE (name, online),
PRIMARY KEY (id)
) ENGINE = MyISAM;
I run this query every few minutes to add/update names in the activity list:
INSERT INTO onlineActivity (name, login_time, logout_time, online)
SELECT name, now(), now(), true FROM onlineList ON DUPLICATE KEY UPDATE logout_time = now()
And this query is run for every user that has logged out:
(the names are determined by comparing two adjacent online lists, the current one and the previous one)
UPDATE onlineActivity SET online = NULL WHERE name = ? AND online = 1
The questions:
I'm worried that using a NULL field (online) in a UNIQUE index is a bad idea and will hurt performance. I figure that MySQL might have to do a full scan of all the online values (instead of using an index) for each name to find one that is not NULL. Could someone clarify whether that is the case here? I couldn't find any information on how MySQL deals with this sort of situation (a quick test of the uniqueness behaviour is shown after this list).
Do other database systems (PostgreSQL, SQLite) behave differently than MySQL in this regard?
Instead of the first query, should I run two queries for each name, to check whether a given user is currently online, and act accordingly?
I thought of this design because I wanted to minimize the number of queries used; is that a flawed idea in itself?
This table will be getting around 300-500k new records per day. Is there something else I can do to lessen the performance impact?
I want to store a full history of user activity, not a single entry.
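For reference, a quick throwaway test (table and values made up) shows how MySQL treats NULLs in a UNIQUE index; the design above relies on the duplicate NULLs being allowed:
CREATE TABLE uniqueNullTest (
  name CHAR(32) NOT NULL,
  online BOOL DEFAULT NULL,
  UNIQUE (name, online)
) ENGINE = MyISAM;
-- Both inserts succeed: a UNIQUE index does not treat NULL as equal to NULL.
INSERT INTO uniqueNullTest (name, online) VALUES ('alice', NULL);
INSERT INTO uniqueNullTest (name, online) VALUES ('alice', NULL);
-- The first of these succeeds, the second fails with a duplicate-key error.
INSERT INTO uniqueNullTest (name, online) VALUES ('alice', 1);
INSERT INTO uniqueNullTest (name, online) VALUES ('alice', 1);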

I am not sure why you have a unique key on (name, online), since what you are trying to do is build a list of online activity. A unique key as you have specified only means a name can appear at most once with online = true and once with online = false; MySQL's UNIQUE indexes allow multiple NULLs, so a name can appear any number of times with online = NULL.
What you are effectively doing is creating a history table. In that case, to keep your current method of populating the table, put a unique key on (name, logout_time), with a NULL logout_time indicating a currently logged-in user (the idea being that there should only be one row per name without a logout time).
Something like this:
CREATE TABLE onlineActivity (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
name CHAR (32) NOT NULL,
login_time DATETIME NOT NULL,
logout_time DATETIME NULL,
time SMALLINT (3) NOT NULL DEFAULT 0,
online BOOL NOT NULL DEFAULT FALSE,
UNIQUE (name, logout_time),
PRIMARY KEY (id)
) ENGINE = MyISAM;
Then run this on a schedule to update the table:
INSERT IGNORE INTO onlineActivity (name, login_time, logout_time, online)
SELECT name, now(), null, true FROM onlineList
And this on user logout:
UPDATE onlineActivity SET online = false, logout_time = now() WHERE name = ? AND logout_time IS NULL

Related

MySQL conditional INSERT based on user-provided data and existing data

Let's say I have a few hypothetical tables: User, Item, and Sale.
CREATE TABLE User (
id INT PRIMARY KEY NOT NULL AUTO_INCREMENT,
name VARCHAR(255) NOT NULL,
email VARCHAR(255) NOT NULL,
password VARCHAR(255) NOT NULL
);
CREATE TABLE Item (
id INT PRIMARY KEY NOT NULL AUTO_INCREMENT,
upc VARCHAR(255) NOT NULL,
description VARCHAR(255) NOT NULL,
price DECIMAL(5,2) NOT NULL,
userId INT NOT NULL,
FOREIGN KEY(userId) REFERENCES User(id)
);
CREATE TABLE Sale (
id INT PRIMARY KEY NOT NULL AUTO_INCREMENT,
quantity INT NOT NULL,
total DECIMAL(5,2) NOT NULL,
itemId INT NOT NULL,
FOREIGN KEY(itemId) REFERENCES Item(id)
);
Each user can add multiple items, and can sell multiple quantities of each item. A record of each sale is going to go into the Sale table, but I have to make sure that the Item ID being entered into the Sale table is actually owned by the User who is "creating" the entry.
I've thought about a couple of ways of doing this from an application layer (e.g., JDBC).
Do a SELECT to make sure the User owns that Item.
SELECT id FROM Item WHERE id = ? AND userId = ?
If there is a match (i.e., rows returned), the User owns that Item and can insert a Sale record for that Item. This method seems a bit inefficient, however, since I have to do multiple, separate queries in order to accomplish one task. Having a connection pool (and thus reusing the same connection for each query) will help performance a little, but I'm not sure by how much.
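Spelled out, option (1) is two round trips, roughly like this (the Sale values are bound only after the ownership check passes):
SELECT id FROM Item WHERE id = ? AND userId = ?;
-- application code checks that a row came back, then:
INSERT INTO Sale (quantity, total, itemId) VALUES (?, ?, ?);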
Do a "conditional INSERT" via INSERT ... SELECT:
INSERT INTO Sale(quantity, total, itemId)
SELECT 4, 5.00, 3 FROM Dual
WHERE EXISTS(SELECT id FROM Item WHERE id = ? AND userId = ?);
I really like the idea of this option, but there's an outstanding issue that I haven't been able to work around:
The query itself would be done from an application. Parts of the INSERT statement are raw values that are not known until the last second. And since the only way to do a "conditional insert" is to SELECT data from some table (dummy or otherwise), you can't use ? placeholders in a prepared statement for column names.
In other words, the 4, 5.00, and 3 in the above statement are raw values, and the only way I know of to get them into the SQL string is to do concatenation:
String sql = "INSERT INTO Sale(quantity, total, itemId) SELECT "
+ quantity + ", " + total + ...;
Which leaves the door wide open to potential SQL injection attacks. It's a bit trickier to exploit if the Java variables quantity and total are numeric data types (i.e., no quotes involved), but it's still a loophole that I don't want to leave open.
Is there a good way to accomplish what I'm trying to do efficiently in one SQL statement? Or is the best way option (1) above?
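One thing worth noting: the ? markers in a prepared statement stand in for values, not column names, and the 4, 5.00, and 3 above are values in the SELECT list, so they can be bound as parameters. A sketch of the fully parameterized form (the parameter order shown is just one choice):
INSERT INTO Sale (quantity, total, itemId)
SELECT ?, ?, ? FROM DUAL
WHERE EXISTS (SELECT id FROM Item WHERE id = ? AND userId = ?);
-- Bound in order: quantity, total, itemId, then the same itemId again for the
-- ownership check, and finally userId. The row is inserted only when the
-- EXISTS check finds a matching Item owned by that user.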

How to efficiently update values without a primary key in MySQL?

I am currently facing an issue with designing a database table and updating/inserting values into it.
The table is used to collect and aggregate statistics that are identified by:
the source
the user
the statistic
an optional material (e.g. item type)
an optional entity (e.g. animal)
My main issue is, that my proposed primary key is too large because of VARCHARs that are used to identify a statistic.
My current table is created like this:
CREATE TABLE `Statistics` (
`server_id` varchar(255) NOT NULL,
`player_id` binary(16) NOT NULL,
`statistic` varchar(255) NOT NULL,
`material` varchar(255) DEFAULT NULL,
`entity` varchar(255) DEFAULT NULL,
`value` bigint(20) NOT NULL)
In particular, the server_id is configurable, the player_id is a UUID, statistic is the representation of an enumeration that may change, material and entity likewise. The value is then aggregated using SUM() to calculate the overall statistic.
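For reference, the aggregation I run looks roughly like this (the grouping columns depend on the report):
SELECT player_id, statistic, SUM(`value`) AS total
FROM `Statistics`
WHERE server_id = ?
GROUP BY player_id, statistic;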
So far it works, but I have to use DELETE and INSERT statements whenever I want to update a value, because I have no primary key and I can't figure out how to create one within the constraints of MySQL.
My main question is: How can I efficiently update values in this table and insert them when they are not currently present without resorting to deleting all the rows and inserting new ones?
The main issue seems to be the restriction MySQL puts on the primary key length. I don't think adding an id column would solve this.
Simply add an auto-incremented id:
CREATE TABLE `Statistics` (
statistics_id int auto_increment primary key,
`server_id` varchar(255) NOT NULL,
`player_id` binary(16) NOT NULL,
`statistic` varchar(255) NOT NULL,
`material` varchar(255) DEFAULT NULL,
`entity` varchar(255) DEFAULT NULL,
`value` bigint(20) NOT NULL
);
Voila! A primary key. But you probably want an index. One that comes to mind:
create index idx_statistics_server_player_statistic on Statistics(server_id, player_id, statistic);
Depending on what your code looks like, you might want additional or different keys in the index, or more than one index.
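If the aim is the INSERT ... ON DUPLICATE KEY UPDATE pattern mentioned in the question, note that it needs a UNIQUE key over the identifying columns, not just a plain index. A sketch of one way to get there, assuming prefix lengths keep the key within MySQL's index-size limits (the prefix lengths and the overwrite-vs-add choice are illustrative):
ALTER TABLE `Statistics`
  ADD UNIQUE KEY uq_statistics_identity
    (server_id(32), player_id, statistic(32), material(32), entity(32));

INSERT INTO `Statistics` (server_id, player_id, statistic, material, entity, `value`)
VALUES (?, ?, ?, ?, ?, ?)
ON DUPLICATE KEY UPDATE `value` = VALUES(`value`);
-- Caveat: because UNIQUE indexes allow repeated NULLs, rows where material or
-- entity is NULL are not deduplicated; storing '' instead of NULL avoids that.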
Follow the steps below; hopefully they will solve your problem:
- First, add a numeric column to your table, let's call it "detailed", to act as a manually maintained row number.
- In your application, before running an INSERT, get the next number with SELECT MAX(detailed)+1 AS maxid FROM table_name and store it with the new row; that number then lets you fetch or delete the record.
- You can also update using it, but for an update you don't need to get the maximum of "detailed" again.
Hope this helps.
I have dug a bit more through the internet and optimized my code a lot.
I asked this question because of bad performance, which I assumed was because of the DELETE and INSERT statements following each other.
I was thinking that I could try to reduce the load by doing INSERT IGNORE statements followed by UPDATE statements, or INSERT ... ON DUPLICATE KEY UPDATE statements. But those require keys to be useful, which I didn't have, because of the key-length constraints in MySQL.
I have fixed the performance issues though:
By reducing the number of statements generated asynchronously (I know JDBC is blocking, but it worked; it just blocked thousands of threads) and disabling auto-commit (sketched below), I was able to improve the performance by 600 times (from 60 seconds down to 0.1 seconds).
Next steps are to improve the connection string and gain even more performance.
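Concretely, disabling auto-commit just means the whole batch runs inside one explicit transaction; at the SQL level the pattern is roughly this (the individual statements are placeholders):
START TRANSACTION;
-- the application queues up its INSERT / UPDATE statements here, e.g.
INSERT INTO `Statistics` (server_id, player_id, statistic, material, entity, `value`)
VALUES (?, ?, ?, ?, ?, ?);
-- ...repeated for each collected statistic...
COMMIT;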

Auto-add days to a datetime as each day passes in MySQL

I want a column in a MySQL table that stores the datetime of a user's first login to the system. Then, in another column, the number of days since that registration date should increase as each day passes: 1, 2, 3, and so on. Is there any way I can achieve this result? Please guide me.
You can do this with just one column (to hold the registration / first login date) and the DATEDIFF function:
CREATE TABLE users (
ID int(11) NOT NULL AUTO_INCREMENT,
name varchar(20) NOT NULL,
registered_at datetime NOT NULL,
PRIMARY KEY (ID)
);
INSERT INTO users SET
name = 'myname',
registered_at = NOW();
SELECT registered_at, DATEDIFF(NOW(), registered_at) AS days_since
FROM users
WHERE name = 'myname';
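If you want the day count to show up as a column of its own, one option is a view over the same expression, so nothing has to be updated as days pass (a sketch; the view and column names are arbitrary):
CREATE VIEW users_with_days AS
SELECT ID, name, registered_at,
       DATEDIFF(NOW(), registered_at) AS days_since_registration
FROM users;

-- days_since_registration is recomputed every time the view is queried.
SELECT name, days_since_registration FROM users_with_days WHERE name = 'myname';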

Values within primary key occasionally change over time

I have some data defined in a table in a MySQL database like this:
CREATE TABLE `T_dev` (
id VARCHAR(20) NOT NULL,
date DATETIME NOT NULL,
amount VARCHAR(9) DEFAULT NULL,
PRIMARY KEY (id,date)
);
I then insert a record, for example
INSERT INTO T_dev VALUES
('10000','2009-08-05 23:00:00','35')
However, one month later I get a report that tells me that this exact record should have amount equal to 30, thus
INSERT INTO T_dev VALUES
('10000','2009-08-05 23:00:00','30')
However, that can't be done because of the primary key I've defined. I would like to overwrite the old record with the new one, but not really change my primary key. Any suggestions?
Thanks.
Alexander
Since the record already exists, you don't use the INSERT statement. Instead use an UPDATE statement to change the value to 30 for that specific id and date combination:
UPDATE T_dev SET amount = '30'
WHERE id = '10000' AND date = '2009-08-05 23:00:00'
Just an observation: your table is a little out of the norm. Typically primary keys are of type INT, and your amount would probably be better off as a DECIMAL.
Use an UPDATE statement:
UPDATE T_dev
SET amount = 30
WHERE id=10000 AND date = '2009-08-05 23:00:00'
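If the corrected figures keep arriving as inserts and you simply want the newest value to win, MySQL's upsert form is another option; a sketch, relying on the existing primary key on (id, date):
INSERT INTO T_dev (id, date, amount)
VALUES ('10000', '2009-08-05 23:00:00', '30')
ON DUPLICATE KEY UPDATE amount = VALUES(amount);
-- When (id, date) already exists, the amount is overwritten instead of the
-- statement failing with a duplicate-key error.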

mysql selects with joins with foreign keys when using a supplied id

Also, if this isn't the right place then I'll move it to Stack Overflow. I posted it here since, when I had a question about MySQL over there, they said that this is where such questions should be asked.
The problem that I'm having is with my chat system. I have a normal chat table, and then I also have one that is used for private messages. As such, the private message chat table only contains the id of the message that was private, and also the person who was supposed to receive it.
The table schema is something like this:
chat(
id int unsigned not null,
sayer_id int unsigned not null,
message_type tinyint unsigned not null,
message varchar(150) not null,
timesent timestamp on update current_timestamp
);
The indexes are on id and timesent: the primary key is on id, and there is also a normal index on timesent.
The second table is just:
chat_private(
message_id int unsigned not null,
target_id int unsigned not null
)
The primary key is on message_id, which is also a foreign key. target_id doesn't have an index at the moment, but I may put a normal one on it.
Now then, the query that I'm attempting is something like SELECT message, timesent FROM chat WHERE timesent >= last_update AND target_id = $userid;
last_update is a session variable recording the last time updates were fetched.
$userid is the id from the session variable on the server side.
Also, I don't know how to do something like that easily, since I want the join with the chat_private table in it, but I also don't want to have to deal with all of the ids from that table, since it only holds the target_id/message_id and returning those would be rather pointless.
What's the easiest way to work with the data as it is? If it's impossible to work with, then I'll just move the private chat and other subsequent ones into their own table and adjust the data sizes accordingly.
You want something like
SELECT chat.message, chat.timesent FROM chat LEFT JOIN chat_private ON chat.id=chat_private.message_id WHERE chat.timesent>='$timestamp' AND chat_private.target_id=$target_id
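A note on that query: filtering on chat_private.target_id in the WHERE clause already discards chat rows without a matching private entry, so an INNER JOIN expresses the same thing; with placeholders instead of interpolated variables it would look roughly like this:
SELECT c.message, c.timesent
FROM chat AS c
INNER JOIN chat_private AS p ON p.message_id = c.id
WHERE c.timesent >= ? AND p.target_id = ?;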