EJB Timer for deleting database entries - mysql

I am currently working on a j2ee web application. The application features a way for users to reset their passwords if they forget them.
I have a database table with 3 columns: username, key, and timestamp.
When the user requests a password change, I add an entry in that table with their username and a random key (making sure that there are no duplicate keys in the table and that a user can only appear once in it). I also add the current time. I then send them an e-mail with a link to the application that contains their key, something like:
mysite.com/app/reset?key=abcxyz123
The servlet that handles this request looks at the key in the URL to find the matching entry in the reset table and determine which user the key belongs to. If the key doesn't match an entry, I show an error page; if it does, I show the password reset screen. Once the user changes their password, I manually delete the entry from the reset table.
I am trying to implement the equivalent of a time to live for the password reset links, so that I don't have entries loitering in the table unnecessarily, and I thought of 2 options, the first of which I have implemented:
1) Create an EJB Timer that fires every minute and deletes entries in the reset table whose timestamp is older than 30 minutes. This is a manual process: I use Hibernate as my JPA implementation, so I retrieve all the entries from the table, examine their timestamps, and delete the old ones.
2) Create a database job that deletes rows over a certain age?
My question is: does anyone see any drawbacks to the first approach, and is the second option even possible with MySQL? I figure that if I can use the second approach, I can get rid of the timer and let the database handle the time-to-live aspect of the password reset links, which may be more efficient.
I haven't been doing j2ee development for that long, but based on the knowledge that I have, these seemed like 2 logical approaches. I welcome any input.

3) Create a script that connects to the database, executes the delete, and disconnects. You can then schedule this script via the operating system, e.g. crontab.
Regarding option 1 - the drawback of that solution is that it uses application server resources for work that can be done entirely in the database and that doesn't depend on or use any application logic.
The benefit of option 1 is that the whole app is self-contained and you don't need any additional installation/setup on the database as with options 2 and 3.
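For what it's worth, option 2 is possible in MySQL through its event scheduler. A minimal sketch, assuming the table is called password_reset and the timestamp column created_at (your actual names will differ):

-- Assumed names: password_reset(username, reset_key, created_at).
-- The event scheduler is off by default on older MySQL versions.
SET GLOBAL event_scheduler = ON;

CREATE EVENT purge_expired_reset_keys
ON SCHEDULE EVERY 1 MINUTE
DO
    DELETE FROM password_reset
    WHERE created_at < NOW() - INTERVAL 30 MINUTE;

With something like this in place the EJB timer could be dropped entirely, at the cost of the cleanup logic living in the database rather than in the application.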

Related

Node.js with express.js. Users can register using the same data twice if they send post requests fast enough

I am using Node.js with the express.js framework on top of it and a MySQL database.
I have an endpoint for registration that takes 3 params:
Email, username, and password
It then queries the database using SELECT to see if the username or email is taken. If not, it continues to hash the password, create a new row in the database, send a confirmation email, and so on.
The problem is that when someone submits two POST requests quickly, since it takes some time to process the insertion, the requests let two users have the same username/email.
Basically what is happening is that the second request queries the database before the first request even inserts the data (the new user), and therefore the result the second request gets is that the username and the email are free.
I was wondering how I can prevent issues like this in the future.
In a race condition like this, the place where the buck should stop is the database itself. So, you should add a unique constraint on the username field, if you don't already have one:
ALTER TABLE users ADD CONSTRAINT username_unique UNIQUE (username);
What will happen now if two requests come in at almost the same time is that each will work its way through the code, but only one will obtain a lock to write the new user record to the table. The other request will fail with a database error, which your Node application should be able to catch.
Note that you might also want a unique constraint on the email field.
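To round that out, here is a sketch with assumed names (users, email, password_hash); the second INSERT is the one a racing request would issue:

ALTER TABLE users ADD CONSTRAINT email_unique UNIQUE (email);

-- First registration succeeds.
INSERT INTO users (username, email, password_hash)
VALUES ('alice', 'alice@example.com', '...');

-- A racing duplicate fails with ERROR 1062 (ER_DUP_ENTRY),
-- which the Node application catches instead of trusting its earlier SELECT.
INSERT INTO users (username, email, password_hash)
VALUES ('alice2', 'alice@example.com', '...');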
Rather than making modifications to the schema, I suggest you deal with this in your Node app.
Using mutex locks is the ideal approach to solving these kinds of problems. There are packages like redlock to solve these issues, although redlock needs Redis to work; there are other modules that don't require Redis.
Also have a read: Mutex locks in Node.

How to prevent polling duplicated data from a MySQL database

I have a large amount of data in a MySQL database. I want to poll data from the database and push it to ActiveMQ with Camel. The connection between the database and the queue is lost every 15 minutes, and some messages are lost during the interruption. I need to know which messages were lost so I can poll them again from the database. Messages should not be sent more than once, and this should be done without any changes to the database schema (I cannot add a Boolean status field to my database).
Any suggestion is welcome.
Essentially, you need to have some unique identifier in the data you pull from the source database. Maybe it is whatever has already been defined as the primary key. Or, maybe the table has some timestamp field. Or, maybe some combination of fields will be unique.
Once you identify that, when you are putting the data into the target, reject any key that is already in the target. You could use Camel's "idempotency" features, but if you are able to check for the key in the target database, you probably won't need anything else.
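If the target is also MySQL, that check can even be folded into the insert itself. A sketch, assuming the source's primary key travels with each message as id and the target table is called target_table:

-- Duplicate deliveries are silently skipped because id is the primary key.
INSERT IGNORE INTO target_table (id, payload)
VALUES (42, '...');

-- Or, if a re-delivered row should overwrite the earlier copy:
INSERT INTO target_table (id, payload)
VALUES (42, '...')
ON DUPLICATE KEY UPDATE payload = VALUES(payload);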
If you have to make the decision about what to send, but do not have access to your remote database from App #1, you'll need to keep a record on the other side of the firewall.
You would need to do this, even if the connection did not break every 15 minutes...because you could have failures for other reasons.
If you can have an Idempotency database for App#1, another approach could be to transfer data from the local database to some other local table, and read from this. Then you poll this other table, and delete whenever the send is successful.
Example:
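A minimal sketch of that staging table, with assumed names (source_table is whatever you currently poll and id is its primary key):

-- One-time setup: a staging table with the same structure, including the primary key.
CREATE TABLE outbox LIKE source_table;

-- Each polling cycle: stage any rows that are not already staged.
INSERT IGNORE INTO outbox
SELECT * FROM source_table;

-- After ActiveMQ confirms a send, remove just that row from the staging table.
DELETE FROM outbox WHERE id = 42;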
It looks like you're using MySQL. If both databases are on MySQL, you could look into MySQL data replication, rather than using your own app with Camel.

Source and time of update of a column in MySQL

I have a column Quantity in a MySQL table Inventory that gets updated from multiple sources. I need to keep track, in another column called QuantityLog, of the last time Quantity was updated and the source that did it. Something like this should be the content of the QuantityLog column (TEXT type); only the latest update's details are required:
<Log>
<UpdateTime>2015-02-23 12:00:01 PM</UpdateTime>
<Source> Feeder application</Source>
</Log>
I am aware of how to do this with a trigger if only the update time is required. However, with the trigger approach, is there any mechanism to get the source and use that too?
Please note that I am trying to do this via triggers only, as any mechanism that relies on my applications would require changes in every application that makes this update, and I am not inclined to do that.
There is no way MySql can know the "feeder application", unless there is a variable or table filled with that value. If you have this, it is easy to create a trigger that updates this info into the Inventory table on each change of the Quantity field.
However, if your applications use distinct MySQL users to connect to the database, you can of course use the CURRENT_USER() built-in function inside your TRIGGER. Alternatively, CONNECTION_ID() might be helpful when tracking who did what. For example, you could create a new table that logs the connection id of your application; in that table you could write the application name, the PID and other details. Of course this would mean changing your application a bit by adding the appropriate insert statement after a connection is established. The overhead should be small, since connections are usually held in pools and do not get re-created all the time.
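A sketch of that trigger, assuming each application connects as its own MySQL account and that the Quantity change drives the log (the date format mirrors the XML in the question):

DELIMITER $$

CREATE TRIGGER trg_inventory_quantity_log
BEFORE UPDATE ON Inventory
FOR EACH ROW
BEGIN
    -- CURRENT_USER() identifies the source only because each application
    -- uses a distinct MySQL account, as described above.
    IF NEW.Quantity <> OLD.Quantity THEN
        SET NEW.QuantityLog = CONCAT(
            '<Log>',
            '<UpdateTime>', DATE_FORMAT(NOW(), '%Y-%m-%d %r'), '</UpdateTime>',
            '<Source>', CURRENT_USER(), '</Source>',
            '</Log>');
    END IF;
END$$

DELIMITER ;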

My way around sql injection

I'm not an expert, but I do have a web front end processing orders whose data needs to be input for later logins. Instead of using that database, I created another one with an extra column called status. Initially, when orders are processed, they are set to 0. A cron job runs every 3 minutes, polling this database for all users with status 0. When it runs, the cron job sets the status of all currently processed users to 1 (so if any are input while the script is running, they will be processed next time, which is only 3 minutes later).
After the status of all new users is set to 1, just the password and email fields are dumped to a file and then loaded via "LOAD DATA INFILE" back into the real database that users log in to with their client. There is no web login form; it is for email, using an IMAP client. However, I do use the root account for the cron job, since I realized I needed to grant all privileges to a user for the dumping of data, and if that is going to be the case, I might as well just use root to update the status column first, then dump the new data to a file, then load it into the new db and go back and delete all users with status 1. It is a simple 4-line script running mysql from the command line.
Is this a safe bet or am I risking something by running a root cron job every 3 minutes? I don't see how I can possibly have an issue, since I never use root to process the web stuff. I use a separate MySQL user with only INSERT privileges for the web front end to process new orders. Any comments? I feel like this way I can avoid SQL injection, but even though my MySQL user has limited privileges, there always might be something I don't know about.
Is this a safe bet or am I risking something
As long as it's a simple LOAD DATA INFILE query - no. However,
Instead of using that database, I created another one with an extra column called status.
Such a flying circus is absolutely unnecessary.
It doesn't protect you from injection anyway.
Instead, you have to use prepared statements for ALL the queries in your application.
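For illustration only, here is the idea in MySQL's own server-side syntax, with placeholder table and column names; in practice you would use your driver's placeholder API (PDO, mysqli, JDBC, and so on) rather than typing this by hand:

-- The statement text and the user-supplied value travel separately,
-- so the value is never parsed as SQL.
PREPARE find_user FROM 'SELECT id FROM users WHERE email = ?';
SET @email = 'user@example.com';
EXECUTE find_user USING @email;
DEALLOCATE PREPARE find_user;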

Send an Email when new data is inserted into the table

I just want to send an email when a table is populated with a new row in the database. My database is MySQL.
I have two relations in MySQL: job(job_id, title, user_id) and user(user_id, user_name, email).
I want to send an email when a new record is inserted in the job table.
I don't know how to do it, and my front end is in PHP.
You could possibly use a trigger to do what you want, but MySQL can't make an external call from a trigger function - only internal things (like changing another row).
I think you must default to polling the database. You might find SELECT COUNT(*) FROM table; helpful, to count the records in a table to find out if anything has changed. Most DBs run such queries very fast, so it would be ok to poll the server using it if there was only one client polling. Once you have identified a change, then use other SQL to identify whether it is a significant change (i.e. one requiring an email) and remember you might have more than one email to send :-)
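A sketch of that polling, using the two relations from the question and assuming job_id is an auto-increment key; @last_seen_job_id stands for a value the PHP side remembers between polls:

-- Cheap check for any change at all.
SELECT COUNT(*) FROM job;

-- Fetch only the rows that arrived since the last poll, with the addresses to mail.
SELECT j.job_id, j.title, u.user_name, u.email
FROM job AS j
JOIN user AS u ON u.user_id = j.user_id
WHERE j.job_id > @last_seen_job_id
ORDER BY j.job_id;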