Dynamic query as an inbound connection to Mule ESB

I am trying to create an SQL query inbound connection to my Mule server but I want the query itself to be dynamic (meaning I want to add a value such as: SELECT * FROM SOME_TABLE WHERE TimeStamp > SomeDynamicVariable).
How would I go about creating such an inbound connection, considering that I want to poll the database every so often?

In Mule, what you want to achieve is called "requesting" (i.e. consuming an endpoint with a custom expression) and is not handled with inbound endpoints.
To achieve your goal you need:
A global JDBC endpoint whose query uses a Mule expression for the timestamp value, for example:
<jdbc:query key="myQuery" value="SELECT * FROM SOME_TABLE WHERE TimeStamp > #[payload]"/>
A Quartz inbound endpoint to generate an event whose payload contains the timestamp to be used in the query,
A message enricher to request from the endpoint and set the resulting value as the current payload (target = #[payload]).
D.

Related

Aggregate JSON events into an array in Azure Stream Analytics

I'm new to Azure Stream Analytics and its query language. I have an ASA job which reads JSON data coming from my IoT Hub and feeds it to different functions based on one of the values. This is what I have now:
SELECT * INTO storage FROM iothub

SELECT * INTO storageQueueFunction FROM iothub WHERE recType LIKE '3'

SELECT * INTO deviceTwinD2CFunctionApp FROM iothub WHERE recType LIKE '50'

SELECT * INTO heartbeatD2CFunctionApp FROM iothub WHERE recType LIKE '51'

SELECT * INTO ackC2D FROM iothub WHERE recType LIKE '54'
I'm pretty sure this could be done more efficiently but it's working for now.
My problem is that when a large number of events come in with recType 54, I think it is overloading my Function App "ackC2D".
My idea is to batch these types of events into a JSON array using something like a rolling window of 5 seconds, then send that array to the output, where I can parse through the array event by event.
I haven't been able to find anything like this online; the closest I can find is aggregating data and then outputting a calculation on the aggregate.
Is what I'm trying to do possible?
Thanks!
When configuring the Azure Function output, you can specify the 'Max batch size' and 'Max batch count' properties. If a lot of input events arrive rapidly, keeping a high value for these properties will result in fewer calls to your Azure Function output (by automatically batching many output events into a single HTTP request).
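If you additionally want the rolling-window batching described in the question, Stream Analytics can also group events into an array with the Collect() aggregate; a minimal sketch over a 5-second tumbling (non-overlapping) window, with the input/output names taken from the question and default arrival-time timestamping assumed:

-- Batch all recType 54 events in each 5-second window into a
-- single array-valued event for the ackC2D output.
SELECT
    Collect() AS events
INTO
    ackC2D
FROM
    iothub
WHERE
    recType LIKE '54'
GROUP BY
    TumblingWindow(second, 5)

The Function App then receives one payload per window and can iterate over the events array instead of being invoked once per device message.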

Retrieve timestamp value inside muc_filter_message hook

Is it possible to get the timestamp of a message inside the muc_filter_message hook? I need to send notifications for MUC messages, and the notification payload must include the timestamp of each message.
muc_filter_message(#message{from = From, body = Body} = Pkt,
                   #state{config = Config, jid = RoomJID} = MUCState,
                   FromNick) ->
    ?INFO_MSG("~p.", [From#jid.lserver]),
    PostUrl = gen_mod:get_module_opt(From#jid.lserver, ?MODULE, post_url,
                                     fun(S) -> iolist_to_binary(S) end,
                                     list_to_binary("")),
Is there a field that I can extract from Pkt which indicates the timestamp?
On the client side, I get a frame where archived -> id matches the timestamp stored in the archive table of the ejabberd database.
What timestamp? A groupchat message, as described in https://xmpp.org/extensions/xep-0045.html does not contain any element or attribute about the timestamp. So, Pkt does not contain any time information.
XMPP messages (including MUC) are not timestamped when they are delivered in real time. All timestamps you see in the client application and in logs are simply taken from the local clock when a message is received - this is why the chat log and your local application tend to show different timestamps.
In your use case, I think this means you should just generate your timestamp from the current time on the server.

Analyze data volume of API calls with Invantive SQL

The SQL engine hides all the nifty details of which API calls are being made. However, some cloud solutions price per API call.
For instance:
select *
from transactionlines
retrieves all Exact Online transaction lines of the current company, but:
select *
from transactionlines
where financialyear = 2016
effectively filters on the REST API of Exact Online to just that year, reducing data volume. And:
select *
from gltransactionlines
where year_attr = 2016
retrieves all data, since the where-clause is not forwarded to this XML API of Exact.
Of course I can attach Fiddler or Wireshark and try to analyze the data volume, but is there an easier way to analyze the data volume of API calls with Invantive SQL?
First of all, all calls handled by Invantive SQL are logged in the Invantive Cloud together with:
the time
data volume in both directions
duration
to enable consistent API use monitoring across all supported cloud platforms. The actual data is not logged and travels directly.
You can query the same numbers from within your session, for instance:
select * from exactonlinerest..projects where code like 'A%'
retrieves all projects with a code starting with 'A'. And then:
select * from sessionios#datadictionary
shows you the API calls made.
You can also put a query like the following at the end of your session before logging off:
select main_url
,      sum(bytes_received) bytes_received
,      sum(duration_ms) duration_ms
from   ( select regexp_replace(url, '\?.*', '') main_url
         ,      bytes_received
         ,      duration_ms
         from   sessionios#datadictionary
       )
group
by     main_url
with a result of one row per main_url, showing the total bytes received and total duration for that API endpoint.
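Along the same lines, a quick sketch (reusing the same sessionios#datadictionary columns shown above) gives the overall data volume consumed by the whole session:

select sum(bytes_received) total_bytes_received
,      sum(duration_ms)    total_duration_ms
from   sessionios#datadictionary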

RESTful API scenario

I am asking about the scenario of a RESTful service in a particular case. Assume that this is a file delivery service. Users submit an order, then after a period of time (1-10 min) a PDF file is ready for them to download. So the basics I came up with:
user submits an order using the GET method to the webservice (edit: or POST)
webservice returns an orderid via JSON or XML
some background and human process takes place (1-10 mins)
user checks the status of the order by passing the orderid to the webservice
if the order is ready then a statusCode and a pdfLink are returned to the user
else only the statusCode is returned (i.e. still processing, failed, etc.)
Now, the question about this scenario is: how often should the user (another website) try to fetch the status of one specific order?
Do we need to establish two-way webservices? Like:
server A submits the order to B
B informs A that the order is ready to be fetched
A requests B for the pdfLink
A transfers the PDF file from server B to A
When server A submits an order to B, it could also specify a URL on which it expects a callback once the order is ready. This way service B does not need to know the specifics of service A; it just calls the URL specified by service A.
The response service B gives to service A could also contain a URL from which to download the order.
This prevents polling from server A to server B, which significantly reduces the load on service B.

SSIS: Overwrite the data source/server in a connection string read from a package configuration value

I have looked at other posts and questions but I couldn't find what I needed.
I am relatively new to SSIS Package creation so bear with me please.
Basically, I need the package to connect to multiple servers based on a list of IPs read from a table. I have a connection string that I am reading from a config table. The connection string is generic in that the data source is simply 255.255.255.255, and I want to replace the data source with the IPs read from the table as I loop through them during package execution.
I am using IPs since the servers I am connecting to are not on our domain. I have set up the server name as a variable within the connection manager expressions. Thus what I am hoping is that the package config is read to obtain the entire connection string. Then, as I loop through the IPs, the server name variable will be dynamically substituted into the data source value. I hope this makes sense.
So the connstring is: (generic within config table)
Data Source=255.255.255.255,65000;User ID=test;Password=test;Initial Catalog=myDB;Provider=SQLNCLI10.1;Integrated Security=SSPI;Auto Translate=False;Application Name=SSIS-myApp;
Then, as I obtain the list of IPs, I want it to change and connect as:
Data Source=1.1.1.1,1000;User ID=test;Password=test;Initial Catalog=myDB;Provider=SQLNCLI10.1;Integrated Security=SSPI;Auto Translate=False;Application Name=SSIS-myApp;
Then the next IP and connect as:
Data Source=2.2.2.2,1000;User ID=test;Password=test;Initial Catalog=myDB;Provider=SQLNCLI10.1;Integrated Security=SSPI;Auto Translate=False;Application Name=SSIS-myApp;
How can I do this using SSIS?
Create a connection manager connection to a valid database.
Right click on the newly created connection manager connection, select Properties, and copy the value of the connection string.
Create a string variable and paste the connection string from your clipboard into the value of the newly created variable.
Add an Execute SQL Task with a statement similar to this:
SELECT TOP 1
    'Data Source=' + [IPAddress] +
    ';User ID=' + [Username] +
    ...
FROM dbo.IPTable
Pass the result set to the string variable you previously created
Right click on your connection manager connection and click the ellipsis next to Expressions
In the dialog that pops out, under Property, select ConnectionString and click the ellipsis next to the blank value for Expression.
In the Expression Builder, enter the name of the variable you created. Ex: @[User::CreatedVariableName]
And you're finished. This is a basic concept and you can tweak it from there.
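For reference, a hedged completion of the elided SELECT above, following the connection-string template from the question; the [Port] and [Password] column names are assumptions about dbo.IPTable:

-- Build the full connection string for one server from dbo.IPTable.
-- [Port] and [Password] are assumed column names; adjust to your schema.
SELECT TOP 1
    'Data Source=' + [IPAddress] + ',' + CAST([Port] AS VARCHAR(5)) +
    ';User ID=' + [Username] +
    ';Password=' + [Password] +
    ';Initial Catalog=myDB;Provider=SQLNCLI10.1;Integrated Security=SSPI;' +
    'Auto Translate=False;Application Name=SSIS-myApp;' AS ConnectionString
FROM dbo.IPTable;

Map ConnectionString from the result set to the string variable, and the expression on the connection manager will pick it up on each loop iteration.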