N1QL query resetting document expiry time in Couchbase

I have saved a document with an expiry time of 20 seconds, as shown below in my Java code.
@Document(expiryExpression = "20", expiryUnit = TimeUnit.SECONDS)
public class Myclass {
The document is deleted after 20 seconds, which is fine.
But if I execute a N1QL update within those 20 seconds, the document no longer gets deleted.
The N1QL statement itself takes only about 1 second to execute.
UPDATE Delivery d SET VehicleTrip.tripStatus = 'ENDED' WHERE META(d).id = 'DD_1111_145469_2017-07-11'
The query works fine, but the problem is that the document is no longer deleted once the 20 seconds are up.

Expiration time means that the document will no longer be available from memory storage after 20 seconds; it does not guarantee the same timing for all persistent indexes, which might lag a bit.

N1QL DML statements do not preserve the expiration set through the SDK. If you modify a document through N1QL, you need to set the expiry again.
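One way to handle this with the Couchbase Java SDK 2.x is to re-apply the TTL through the key-value API right after the N1QL update. A minimal sketch, assuming a cluster on localhost, a bucket named Delivery, and the document ID from the question (touch() sets a new expiry without rewriting the document body):

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.query.N1qlQuery;

public class ReapplyExpiry {
    public static void main(String[] args) {
        Bucket bucket = CouchbaseCluster.create("localhost").openBucket("Delivery");
        String id = "DD_1111_145469_2017-07-11";

        // The N1QL UPDATE rewrites the document and drops the TTL that was
        // originally set via the SDK's @Document annotation...
        bucket.query(N1qlQuery.simple(
                "UPDATE Delivery d SET VehicleTrip.tripStatus = 'ENDED' " +
                "WHERE META(d).id = '" + id + "'"));

        // ...so re-apply the 20-second expiry explicitly through the KV API.
        bucket.touch(id, 20);
    }
}
```

This requires a running cluster, so treat it as a sketch of the call sequence rather than something to paste verbatim.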


Neo4j MySql Benchmark

I tested the performance of both Neo4j and MySQL on a simple CRUD process, and I wonder why it takes longer on Neo4j than on MySQL. On the select process I also see the same result: Neo4j takes quite a bit longer than MySQL. I wonder if I'm not doing things properly.
-----Neo4j-----
profile match (n:User{name:"kenlz"}) set n.updated = "2016-04-18 10:00:00" using index n:User(name)
Total update time for specific user (3 records found): 3139 milliseconds
profile match (n:User{enabled:1}) set n.updated = "2016-04-18 10:00:00" using index n:User(name)
Total update time for any users limit 1116961 : 27563 milliseconds
-----MySql-----
update tbl_usr set updated = now() where name = 'kenlz';
Total update time for specific user (3 records found): 1170 milliseconds
update tbl_usr set updated = now() where enabled = 1;
Total update time for any users limit 1116961 : 5579 milliseconds
Your operations look reasonable.
But please consider that the power of a graph database like Neo4j shows with increasing locality of data, i.e. with so-called graph traversals (e.g. visiting consecutive edges and nodes along a path), which perform really badly in a relational DBMS like MySQL. Simple bulk updates like yours play to the strengths of a relational store, not a graph store.

How does the web server communicate with the database?

This is just to explain how I think it probably works:
Let's say the web server needs data from 10 tables. The data that will finally be displayed on the client needs some kind of formatting, which can be done either on the database or on the web server. Let's say the time to fetch the raw data for one table is 1 sec and the time to fetch formatted data for one table is 2 sec (it takes one second to format the data for one table, and the formatting can be done equally easily on the web server or the database).
Let's consider the following cases for communication:
Case 1:
for(i = 0; i < 10; i++)
{
table[i].getDataFromDB(); //2 sec - gets formatted data from DB, call is completed before control goes to next statement
table[i].sendDataToClient(); //0 sec - control doesn't wait for completion of this step
}
Case 2:
for(i = 0; i < 10; i++)
{
table[i].getDataFromDB(); //1 sec - gets raw data from DB, Call is completed before control goes to next statement
table[i].formatData(); //0 sec - will be executed as a parallel process which takes 1 sec to complete (control moves to next statement before completion)
}
formatData()
{
//format the data which takes 1 sec
sendDataToClient(); //0 sec - control doesn't wait for completion of this step
}
Assume it takes no time (0 sec) to send the data from the web server to the client since it will be constant for both cases.
In case 1, the data for each table will be displayed at intervals of 2 seconds on the client, and the complete data will be on the client after 20 seconds.
In case 2, the data for the first table will be displayed after 2 sec, but the data for the next 9 tables will then be displayed at seconds 3, 4, ..., 11.
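The two cases above can be sketched in runnable Java with simulated delays (the class and the millisecond timings are illustrative stand-ins for the 1-second figures): case 1 blocks on a fetch that includes formatting, while case 2 hands formatting to a background thread so it overlaps with the next fetch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PipelineDemo {
    static final int TABLES = 10;
    static final long FETCH_MS = 20;   // stand-in for the 1-sec raw fetch
    static final long FORMAT_MS = 20;  // stand-in for the 1-sec formatting

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    // Case 1: each fetch returns already-formatted data, so every table
    // costs fetch + format before the loop can continue.
    static long sequential() {
        long start = System.nanoTime();
        for (int i = 0; i < TABLES; i++) {
            sleep(FETCH_MS + FORMAT_MS);
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Case 2: fetch raw data, then format in the background while the loop
    // already fetches the next table.
    static long pipelined() {
        ExecutorService pool = Executors.newCachedThreadPool();
        long start = System.nanoTime();
        List<CompletableFuture<Void>> formats = new ArrayList<>();
        for (int i = 0; i < TABLES; i++) {
            sleep(FETCH_MS);                                      // raw fetch blocks the loop
            formats.add(CompletableFuture.runAsync(() -> sleep(FORMAT_MS), pool));
        }
        formats.forEach(CompletableFuture::join);                 // last format ends ~FORMAT_MS after last fetch
        long elapsed = (System.nanoTime() - start) / 1_000_000;
        pool.shutdown();
        return elapsed;
    }

    public static void main(String[] args) {
        System.out.println("sequential=" + sequential() + "ms pipelined=" + pipelined() + "ms");
    }
}
```

With these numbers, case 1 costs roughly TABLES * (fetch + format) while case 2 costs roughly TABLES * fetch + one format, which is the interval difference described above.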
Which is the correct way and how is it achieved between popular web server and databases ?
Popular web servers and databases can work either way, depending on how the application is written.
That said, unless you have an extreme situation, you will likely find that the performance impact is small enough that your primary concern should instead be code maintainability. From this point of view, formatting the data in the application (which runs on the web server) is usually preferred, as business logic implemented at the database level is usually harder to maintain.
Many web application frameworks will do much of the formatting work for you, as well.

Couchbase document expiration performance

I have a 6-node Couchbase cluster with about 200 million documents in one bucket. I need to delete about 100 million documents from that bucket. I'm planning to have a view that gives me an index of the documents I need to delete, and then do a touch operation to set the expiry of those documents to the next day.
I understand that Couchbase runs a background expiry pager operation at regular intervals to delete expired documents. Will running the expiry pager over 100 million documents have an impact on cluster performance?
If you set them all to expire around the same time, maybe. Whether it affects performance depends on your cluster's sizing. If it were me, unless you have some compelling reason to get rid of them all right this moment, I would play it safe and set a random TTL for a time between now and a few days from now. The server will then take care of it for you, and you do not have to worry about this.
Document expiration in Couchbase is specified in seconds or as UNIX epoch time. If it is more than 30 days, it has to be UNIX epoch time.
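The two points above can be sketched in plain Java (the helper names are mine, not SDK API): pick a randomized TTL so the expiry pager deletes documents gradually instead of in one burst, and convert any TTL longer than 30 days into an absolute epoch timestamp, which is the form the server expects for long expiries.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class ExpirySpread {
    static final long THIRTY_DAYS_S = TimeUnit.DAYS.toSeconds(30);

    // Couchbase interprets values up to 30 days as a relative TTL in seconds,
    // and anything larger as an absolute UNIX epoch timestamp.
    static long asCouchbaseExpiry(long ttlSeconds, long nowEpochSeconds) {
        return ttlSeconds <= THIRTY_DAYS_S ? ttlSeconds : nowEpochSeconds + ttlSeconds;
    }

    // Pick a TTL uniformly between 1 hour and maxDays days out, spreading
    // the deletions so the expiry pager never faces one giant batch.
    static long randomTtlSeconds(int maxDays) {
        long min = TimeUnit.HOURS.toSeconds(1);
        long max = TimeUnit.DAYS.toSeconds(maxDays);
        return ThreadLocalRandom.current().nextLong(min, max + 1);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis() / 1000;
        for (int i = 0; i < 3; i++) {
            long ttl = randomTtlSeconds(3);
            System.out.println("doc-" + i + " -> expiry value " + asCouchbaseExpiry(ttl, now));
        }
    }
}
```

The resulting value is what you would pass to the touch operation for each document found by the view.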

How is Notification.ProcessAfter set ? (SSRS 2008R2)

We've got some data driven subscriptions running on SSRS.
Sometimes they take an unusually long time to complete; when I check the activity on the server, I find that things are relatively quiet.
What I did notice is that in the ReportServer database on the Notification table there's a column called ProcessAfter.
Sometimes this value is set about 15 minutes into the future, and the subscription only completes after the time stated in that column.
What is setting this value, given that this behaviour is relatively rare?
A few days after I posted this question here, I got an answer:
When a subscription runs, several things happen: the SQL Server Agent job fires and puts a row in the Event table in the RS catalog with the settings necessary to process the subscription. The RS server service has a limited number of threads (2 per CPU) that poll the Event table every few seconds looking for subscriptions to process. When it finds an event, it puts a row in the Notifications table and starts processing the subscription.
The only reason that rows would stay in the Notification table is that the RS service event-processing threads are not processing the events.
As per my understanding, the NotificationEntered column stores the time when the notification enters. The delivery extension provides settings that specify the number of times a report server will retry a delivery if the first attempt does not succeed (the MaxRetries property) and the interval of time (in seconds) between each retry attempt (the SecondsBeforeRetry property). The default value for SecondsBeforeRetry is 900 seconds, i.e. 15 minutes. When the delivery fails, it retries every 15 minutes.
Reference: Monitoring and Troubleshooting Subscriptions Delivery
Extension(s) General configuration
If there are any other questions, please feel free to let me know.
Thanks, Katherine Xiong
I found the Extension(s) General Configuration link especially helpful.

Updating the same MySql table each second using AJAX

I have a lobby page that goes to the MySQL database every second and checks each timestamp variable (belonging to the users) in a table; if the timestamp is older than (NOW() - 3) seconds, it sets the 'connection' (bool) variable to false. Basically it checks all currently connected users.
I haven't tested on a real server yet, but I have a feeling that it's going to be a really intensive process, because every user has access to the lobby area and each user will send a request to the MySQL database and update the table. That means if I have 1000 users in the lobby area, that's 1000 requests per second.
My question is: is there any other way to do the same thing without sending so many requests? I looked into cron jobs, but cron doesn't let you run a specific script every 1 second; I think the minimum is 1 minute.
I think this will help you run your script twice every 1 minute:
function for_cron() {
    //database update code
}
function check_up() {
    //assuming you don't have anything to echo
    //call the function
    for_cron();
    sleep(30); //sleep 30 seconds
    for_cron();
}
Then set up your check_up function to run on cron every 1 minute.
Hope it helps.
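An alternative to having every client update the table is a single scheduled sweep that marks all stale users in one pass; in MySQL that could be one UPDATE ... WHERE last_seen < NOW() - INTERVAL 3 SECOND run by one job, regardless of how many users are in the lobby. A minimal in-memory Java sketch of that sweep logic (the class and field names are illustrative, not part of any framework):

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class LobbySweep {
    static final long STALE_SECONDS = 3;

    // Marks every user whose last heartbeat is older than STALE_SECONDS as
    // disconnected and returns how many users were flipped. One sweep
    // replaces per-user update requests.
    static int sweep(Map<String, Instant> lastSeen, Map<String, Boolean> connected, Instant now) {
        int flipped = 0;
        for (Map.Entry<String, Instant> e : lastSeen.entrySet()) {
            boolean stale = e.getValue().isBefore(now.minusSeconds(STALE_SECONDS));
            if (stale && Boolean.TRUE.equals(connected.get(e.getKey()))) {
                connected.put(e.getKey(), false);
                flipped++;
            }
        }
        return flipped;
    }

    public static void main(String[] args) {
        Map<String, Instant> lastSeen = new HashMap<>();
        Map<String, Boolean> connected = new HashMap<>();
        Instant now = Instant.now();
        lastSeen.put("alice", now);                 connected.put("alice", true);
        lastSeen.put("bob", now.minusSeconds(10));  connected.put("bob", true);
        System.out.println("flipped: " + sweep(lastSeen, connected, now)); // bob goes stale
    }
}
```

Clients then only need to refresh their own heartbeat; the sweep runs once per interval, so the load no longer scales with the number of users in the lobby.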