retrieve timestamp value inside muc_filter_message hook - ejabberd

Is it possible to get the timestamp of a message inside the muc_filter_message hook? I need to send notifications for MUC messages, and the notification payload must include the timestamp of each message.
muc_filter_message(#message{from = From, body = Body} = Pkt,
                   #state{config = Config, jid = RoomJID} = MUCState,
                   FromNick) ->
    ?INFO_MSG("~p.", [From#jid.lserver]),
    PostUrl = gen_mod:get_module_opt(From#jid.lserver, ?MODULE, post_url,
                                     fun(S) -> iolist_to_binary(S) end,
                                     list_to_binary("")),
Is there a field I can extract from Pkt that indicates the timestamp?
On the client side, I see a frame where the archived element's id attribute matches the timestamp stored in the archive table of the ejabberd database.

What timestamp? A groupchat message, as described in XEP-0045 (https://xmpp.org/extensions/xep-0045.html), does not contain any element or attribute carrying a timestamp, so Pkt does not contain any time information.

XMPP messages (including MUC) are not timestamped when they are delivered in real time. All timestamps you see in the client application and in logs are simply taken from the local clock when a message is received - this is why the chat log and your local application tend to show different timestamps.
In your use case, I think this means you should just generate your timestamp from the current time on the server.
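Since the stanza itself carries no time information, generating the timestamp at the moment the hook fires is the usual approach. As a language-neutral sketch (ejabberd modules are Erlang, so this Python is only illustrative), the payload timestamp could be formatted in the XEP-0082 DateTime profile, which is what XMPP archives and delayed-delivery stanzas use:

```python
from datetime import datetime, timezone

def server_timestamp():
    # Current UTC wall-clock time in the XEP-0082 DateTime profile,
    # e.g. "2015-07-02T15:09:42.123456Z"
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
```

Generating the value server-side also keeps all notified clients consistent, since none of their local clocks are involved.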

Related

Wavefront Alerting when no data sent

I have wavefront alerts set up with the following alert condition:
ts(mytimeseries)<20000
Recently the data source stopped sending data to Wavefront, but I did not receive an alert, and I cannot figure out why. Do I need to set up a separate alert for when no data is sent? Thanks.
Yes, in scenarios where no data is sent you explicitly need to define a condition for that. The cleanest approach is to create a separate availability alert, but if you want to handle it within the same condition, wrap the series in default() so that missing data is replaced by a fallback value that satisfies the alert condition, for example:
default(0, ts(mytimeseries)) < 20000
When no data is found, the series reports the fallback value instead of going empty, so the condition still evaluates and raises the alert. Note that the fallback must itself satisfy the condition (here, be below 20000) for the alert to fire on missing data.
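The behavior of default() can be sketched in Python (a toy model of the query semantics, not Wavefront itself). One caveat worth modeling: for a `< 20000` condition to fire on missing data, the fallback value must itself satisfy the condition, so a fallback below the threshold (e.g. 0) is used here:

```python
def default(fallback, series):
    # Toy model of Wavefront's default(): fill gaps (None) with the fallback
    return [fallback if v is None else v for v in series]

def alert_fires(series, threshold=20000):
    # The alert condition ts(mytimeseries) < threshold, evaluated point by point
    return any(v is not None and v < threshold for v in series)

healthy = [25000, 26000, 24000]
gap = [25000, None, None]            # data source stopped reporting

alert_fires(gap)                     # False: missing points never compare
alert_fires(default(0, gap))         # True: the fallback trips the condition
```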

Select data based on time in timezone

I am trying to write a script that sends emails at a specific time, based on the timezone saved in the users table.
I have two tables:
users -> id, firstname, lastname, email, ... , timezone (VARCHAR; America/New_York, Europe/Berlin, ...)
settings -> id, user_id, email_daily_events (TIME), email_daily_task (TIME)
I want to retrieve all rows from the settings table based on the current time in each user's timezone.
settings.email_daily_events and settings.email_daily_task store the times when the mail should be sent. The script is executed with cron every minute.
When the script retrieves matching rows, it sends some other data to the user's email address.
The whole CMS runs on NodeJS and Sequelize.
I really have no idea how to do this; any idea would be very appreciated.
I suggest the following modules :
node-cron
var cron = require('node-cron');
cron.schedule('* * * * *', function(){
  console.log('running a task every minute');
});
nodemailer
Simple example: sending a text/html email using node
timezone
Format time in JavaScript using the IANA time zone database.
var tz = require('timezone/loaded'),
    equal = require('assert').equal,
    utc;

// Get POSIX time in UTC.
utc = tz('2012-01-01');

// Convert UTC time to local time in a localized language.
equal(tz(utc, '%c', 'fr_FR', 'America/Montreal'),
      'sam. 31 déc. 2011 19:00:00 EST');
Combine node-cron with mysql to perform your query every minute, then use nodemailer to send the emails.
The timezone module might come in handy for converting each user's local send time.
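The core check, whatever the language, is: convert the current UTC time into each user's timezone and compare it with the stored send time. A minimal Python sketch of that logic (the production version would be the equivalent Sequelize/JS query; the column semantics follow the question):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def due_now(user_tz, send_time, now_utc):
    # True when the wall-clock time in the user's timezone matches the
    # configured "HH:MM" send time (the cron job runs once per minute)
    local = now_utc.astimezone(ZoneInfo(user_tz))
    return local.strftime("%H:%M") == send_time

now = datetime(2024, 1, 15, 14, 30, tzinfo=timezone.utc)
due_now("America/New_York", "09:30", now)  # 14:30 UTC is 09:30 in New York (EST)
due_now("Europe/Berlin", "09:30", now)     # 14:30 UTC is 15:30 in Berlin, so no match
```

Running this check once per cron tick against every (user, settings) pair selects exactly the users whose local send time has arrived.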

Preventing duplicate INSERTS MySQL

I have a table that receives INSERTs storing integers for a date. These integers are then summed per date with SUM(column) and the total is used. So date cannot be unique, as there are many inserts per date, and the integer value itself cannot be unique either.
I use the system to count entries (for instance at a restaurant, club, whatever).
A person holds an iPad at the door and sends an INSERT command for how many people entered (a group of 5 would be a row with an integer value of 5 and the current date).
If there is a bad connection and the iPad sends the request but doesn't receive an answer, the user will retry the insert, causing duplicates.
Would it be sensible to add a column such as IDENTIFIER with a random string/number/hash etc. that would then be unique, so that if the user retries the insert and the server already has the row, it gives the same reply as if the insert had succeeded?
I'm having trouble navigating the logic of handling errors like these. If it were an UPDATE on a unique column this wouldn't be an issue, but the way I built this, that's not really possible.
What about the following approach?
Client side:
1. Create a GUID for the insert-request
2. Send the insert-request (value + date + GUID)
3. Wait for the response
4. Response received --> show confirmation to user ("Completed successfully")
5. No response received --> request the insert-response (incl. GUID)
6. Insert-response received --> show confirmation to user ("Completed successfully")
7. No insert-response received --> repeat step 5
8. Response == "not inserted" --> show error message to user ("Error, try again")
Server side:
1. Receive insert-request --> insert data (value, date, GUID) into the table
2. Send confirmation --> GUID, ok
3. OR: receive status request ("GUID inserted?") --> send response: GUID inserted yes/no
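The server side of this scheme can be sketched with SQLite standing in for MySQL (in MySQL you would put a UNIQUE key on the GUID column and use INSERT IGNORE, or ON DUPLICATE KEY UPDATE). The table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE entries (
        guid  TEXT PRIMARY KEY,   -- client-generated GUID makes retries idempotent
        value INTEGER NOT NULL,
        day   TEXT NOT NULL
    )
""")

def idempotent_insert(conn, guid, value, day):
    # Insert once; a retry with the same GUID is silently absorbed,
    # so the client can always be told "ok"
    conn.execute(
        "INSERT OR IGNORE INTO entries (guid, value, day) VALUES (?, ?, ?)",
        (guid, value, day),
    )
    return "ok"

idempotent_insert(conn, "abc-123", 5, "2024-01-15")
idempotent_insert(conn, "abc-123", 5, "2024-01-15")  # retry after a lost response

total = conn.execute(
    "SELECT SUM(value) FROM entries WHERE day = ?", ("2024-01-15",)
).fetchone()[0]
# total is 5, not 10: the duplicate was absorbed
```

Because the GUID is the primary key, the retry changes nothing and the SUM per date stays correct.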

RESTful API scenario

I am asking about a RESTful service scenario in a particular case. Assume it is a file delivery service. Users submit an order, then after a period of time (1-10 min) a PDF file is ready for them to download. So the basics I came up with:
user submits an order using the GET method to the webservice (edit: OR POST)
webservice returns an orderid via JSON or XML
some background and human process takes place (1-10 mins)
user checks the status of the order by passing the orderid to the webservice
if the order is ready, a statusCode and a pdfLink are returned to the user
else only the statusCode is returned (i.e. still processing, failed, etc.)
Now, the question about this scenario is: how often should the user (another website) try to fetch the status of one specific order?
Do we need to establish two-sided webservices? Like:
server A submits the order to B
B informs A that the order is ready to fetch
A requests the pdfLink from B
A transfers the PDF file from server B to A
When server A submits an order to B, it could also specify a URL on which it expects a call once the order is ready. That way service B does not need to know the specifics of service A; it just calls the URL specified by service A.
The response service B gives to service A could also contain a URL from which to download the order.
This prevents polling from server A to server B, which significantly reduces the load on service B.
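The callback flow can be sketched as a toy in-process model (in production these would be two HTTP services, and the callback would be a POST to the URL service A registered; all endpoint and class names here are hypothetical):

```python
class ServiceB:
    # The file-delivery service: accepts orders, fires the callback when done
    def __init__(self):
        self.orders = {}

    def submit_order(self, callback):
        # Stands in for POST /orders with a callback_url field
        order_id = len(self.orders) + 1
        self.orders[order_id] = {"callback": callback, "status": "processing"}
        return order_id

    def finish_order(self, order_id):
        # Called when the PDF is ready (1-10 min later); notify A directly
        order = self.orders[order_id]
        order["status"] = "ready"
        order["callback"](order_id, f"https://b.example/orders/{order_id}.pdf")

class ServiceA:
    # The consumer: registers a callback instead of polling for status
    def __init__(self):
        self.links = {}

    def on_ready(self, order_id, pdf_link):
        # The "webhook": B pushes the link, so A never has to poll
        self.links[order_id] = pdf_link

a = ServiceA()
b = ServiceB()
oid = b.submit_order(a.on_ready)
b.finish_order(oid)   # a.links[oid] now holds the download URL
```

The status-polling endpoint can still exist as a fallback, but with the callback in place A only calls B twice: once to submit and once to download.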

Anyway to get dkims records for verifying ses domain in boto?

Tinkering around with verifying a couple of domains, I found the manual process rather tedious. My DNS controller offers API access, so I figured why not script the whole thing.
The trick is I can't figure out how to access the required TXT & CNAME records for DKIM verification from boto. When I punch in
dkims = conn.verify_domain_dkim('DOMAIN.COM')
it adds DOMAIN.COM to the list of domains pending verification but doesn't provide the needed records; the returned value of dkims is
{'VerifyDomainDkimResponse': {
    'ResponseMetadata': {'RequestId': 'REQUEST_ID_STRING'},
    'VerifyDomainDkimResult': {'DkimTokens': {
        'member': 'DKIMS_TOKEN_STRING'}}}}
Is there some undocumented way to use the REQUEST_ID or TOKEN_STRING to pull up these records?
UPDATE
If you have an aws account you can see the records I'm after at
https://console.aws.amazon.com/ses/home?region=us-west-2#verified-senders:domain
tab Details: Record Type: TXT (Text)
tab DKIM: DNS Record 1, 2, 3
These are the records that must be added to the DNS controller to validate the domain & allow DKIM signing to take place.
This is how I do it with python.
from boto3 import Session

DOMINIO = 'mydomain.com'

session = Session(
    aws_access_key_id=MY_AWS_ACCESS_KEY_ID,
    aws_secret_access_key=MY_AWS_SECRET_ACCESS_KEY,
    region_name=MY_AWS_REGION_NAME)

client = session.client('ses')

# gets the VerificationToken for the domain; it will be used to add a TXT record to the DNS
result = client.verify_domain_identity(Domain=DOMINIO)
txt = result.get('VerificationToken')

# gets the DKIM tokens that will be used to add 3 CNAME records
result = client.verify_domain_dkim(Domain=DOMINIO)
dkim_tokens = result.get('DkimTokens')  # this is a list
At the end of the code, you will have "txt" and "dkim_tokens" variables, a string and a list respectively.
You will need to add a TXT record to your DNS, where the host name is "_amazonses" and the value is the value of the "txt" variable.
You will also need to add 3 CNAME records to your DNS, one for each token in the "dkim_tokens" list, where the host name of each record is of the form [dkimtoken]._domainkey and the target is [dkimtoken].dkim.amazonses.com.
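Since the goal was to script the DNS side as well, the tokens can be turned into provider-ready records; a sketch (the actual push-to-DNS call depends on your provider's API, and the record shapes follow the description above):

```python
def build_dns_records(domain, txt_token, dkim_tokens):
    # Build the SES verification records from the API results:
    # one _amazonses TXT record plus one [token]._domainkey CNAME per DKIM token
    records = [{"type": "TXT",
                "host": f"_amazonses.{domain}",
                "value": txt_token}]
    for token in dkim_tokens:
        records.append({"type": "CNAME",
                        "host": f"{token}._domainkey.{domain}",
                        "value": f"{token}.dkim.amazonses.com"})
    return records

build_dns_records("mydomain.com", "abc123", ["t1", "t2", "t3"])
# returns 1 TXT record and 3 CNAME records ready to push to the DNS provider's API
```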
After adding the DNS records, within some minutes (maybe a couple of hours) Amazon will detect and verify the domain and send you an email notification. After that, you can enable DKIM signing with this call:
client.set_identity_dkim_enabled(Identity=DOMINIO, DkimEnabled=True)
The methods used here are verify_domain_identity, verify_domain_dkim and set_identity_dkim_enabled.
You may also want to take a look at get_identity_verification_attributes and get_identity_dkim_attributes.
I think the get_identity_dkim_attributes method will return the information you are looking for: you pass in the domain name(s) you are interested in, and it returns the verification status for each identity as well as the DKIM tokens.