Can I create a custom REST API on ejabberd?

I want to send data to the ejabberd server using my own custom REST API, just like ejabberd's built-in REST API /api/send_message. Is that possible? Or can I call my custom module directly using hooks?

send_message is implemented in mod_admin_extra.erl
You can:
A) Edit the source code of that command, so your custom send_message code will do what you want.
B) Or copy the code of that command within the same file and add a new command called send_message_faaiq that does what you want.
C) Or you can create your own module, mod_faaiq.erl, copy that code, and change it to suit your needs.
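Once the new command is registered (options B or C), mod_http_api can expose it over REST just like the built-in send_message. As a rough illustration, a Python call to such a hypothetical endpoint might look like the sketch below; the host, port, credentials, auth scheme and argument names are assumptions based on the built-in send_message command, so adjust them to your listener and api_permissions configuration.

```python
import requests

# Hypothetical call to the custom command exposed by mod_http_api.
resp = requests.post(
    "https://localhost:5443/api/send_message_faaiq",
    json={
        "type": "chat",
        "from": "admin@localhost",
        "to": "user@localhost",
        "subject": "",
        "body": "hello from my custom command",
    },
    auth=("admin@localhost", "password"),  # HTTP basic auth, if enabled for the API
    verify=False,  # only for a local self-signed certificate
)
print(resp.status_code, resp.text)
```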


Which is the best way of parsing CSV-data in a logic app without using a custom connector?

I have an SFTP trigger in a logic app which fires when a file is added to a certain file area. It is a CSV-formatted file and I want the rows to be parsed and converted into JSON. Which is the best way to convert CSV data into JSON without using any custom connectors?
I cannot find any built-in connectors doing this job. And as far as I know there are no logic apps functions doing the job either.
Right now there is no connector/action in Logic Apps that provides an out-of-the-box solution for this requirement. You could loop through the array and perform the conversion yourself, but I would not suggest leveraging the loop and variable actions, as that takes time and will cost you more.
The alternative would be leveraging the Inline Code action (JavaScript) to do the conversion. Please note that you will need an Integration Account to run your inline code.
Refer to the JavaScript code and modify it as needed for your requirements; I have used '_' to differentiate the nested objects. For more details you can refer to the previous discussion here.
For complex conversions you can offload this functionality to an Azure Function, write your code in one of the supported languages, and call the Azure Function from the logic app.
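As a rough sketch of that approach, an HTTP-triggered Azure Function in Python could convert the CSV body to JSON as shown below; the function name and column handling are assumptions, not part of the original answer.

```python
import csv
import io
import json

import azure.functions as func  # Azure Functions Python programming model


def main(req: func.HttpRequest) -> func.HttpResponse:
    # The logic app posts the raw CSV text as the request body.
    csv_text = req.get_body().decode("utf-8")

    # csv.DictReader uses the first row as field names and yields one dict per data row.
    rows = list(csv.DictReader(io.StringIO(csv_text)))

    return func.HttpResponse(json.dumps(rows), mimetype="application/json")
```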
1. Created the logic app.
2. Created a container in a storage account and uploaded a CSV file to the container.
3. Used a Compose action to split the contents of the CSV file on every new line into an array.
a. Here is the expression used in the SplitLines compose action:
split(body('Get_blob_content_(V2)'),decodeUriComponent('%0D%0A'))
b. See the Microsoft documentation on Logic Apps workflow expressions for how to write such expressions.
4. Removed the last (empty) line from the previous output using another Compose action:
take(outputs('SplitLines'),add(length(outputs('SplitLines')),-1))
5. Separated the field names using a Compose action:
split(first(outputs('SplitLines')), ',')
6. Formed the JSON using a Select action, as shown below:
**From:** `skip(outputs('RemoveLastLine'), 1)`
**Map:**
`outputs('SplitFieldName')[0]` → `split(item(), ',')?[0]`
`outputs('SplitFieldName')[1]` → `split(item(), ',')?[1]`
Tested the logic app and it runs successfully: the CSV content is converted to JSON as expected.
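For clarity, the chain of actions above is roughly equivalent to the following Python sketch; the sample CSV content is hypothetical.

```python
# A rough Python illustration of what the chain of actions computes.
csv_text = "Name,Age\r\nAlice,30\r\nBob,25\r\n"   # hypothetical sample content

lines = csv_text.split("\r\n")        # SplitLines: decodeUriComponent('%0D%0A') is CRLF
lines = lines[:-1]                    # RemoveLastLine: drop the trailing empty entry
field_names = lines[0].split(",")     # SplitFieldName: header row gives the keys
records = [                           # Select: map each remaining row to an object
    dict(zip(field_names, row.split(",")))
    for row in lines[1:]              # skip(..., 1) skips the header row
]
print(records)  # [{'Name': 'Alice', 'Age': '30'}, {'Name': 'Bob', 'Age': '25'}]
```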
Reference: Use data operations in Power Automate (contains video) — Power Automate | Microsoft Docs
Credit: #Iason Koulas

How to create a Node.js application in Bluemix

I built my workspace, so I have the JSON file, and I want to create an application like the car dashboard example. I don't want to waste time writing HTML and CSS code, so I want to use the code of the car dashboard example and just change the JSON file. How can I do that?
If you mean the Car Dashboard example here, you could follow the instructions on deploying the app (there are some listed), then change the car_workshop.json file and use the Import Workspace option to upload your version of the file.

CasperJS and MySQL

I want to save data retrieved with CasperJS to a MySQL DB.
I have not been able to find any way to do this directly.
Is there a way to DIRECTLY connect the two?
Will node-mysql work from within Casper?
No, there is no way to do it directly.
You will need to do it indirectly. Remember that CasperJS is built on top of PhantomJS, which has a different execution environment than node.js. Very few node.js modules actually work inside of PhantomJS/CasperJS without change. You will have to write a script (e.g. a node.js script) which has the ability to read a file and write to MySQL. The workflow would be:
1. The CasperJS script scrapes data and stores it in some (temporary) file (see PhantomJS' fs module),
2. the external script is called with the scraped data file (see PhantomJS' child_process module), and
3. if necessary, the temporary data file is deleted either in CasperJS (see PhantomJS' fs module) or in the external script.
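As a rough sketch of such an external script (shown here in Python rather than node.js, purely for illustration), it could look like the following; the file name, table and column names are assumptions, and it relies on the mysql-connector-python package.

```python
import json

import mysql.connector

# Read the data the CasperJS script wrote to a temporary file.
with open("scraped_data.json", "r") as f:
    rows = json.load(f)  # e.g. a list of {"title": ..., "url": ...} objects

conn = mysql.connector.connect(
    host="localhost", user="scraper", password="secret", database="scraping"
)
cursor = conn.cursor()
cursor.executemany(
    "INSERT INTO pages (title, url) VALUES (%s, %s)",
    [(row["title"], row["url"]) for row in rows],
)
conn.commit()
cursor.close()
conn.close()
```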

Google Cloud Deployment Manager: Passing variables into templates

I'm using Google Cloud Deployment Manager and I am trying to get external input into my template. Namely, I want to set a metadata variable on my instance (when creating the instance) but provide this value at execution time.
I've tried:
gcloud deployment-manager deployments create test-api-backend --config test-api-backend.yaml --properties 'my_value=hello'
Which fails (The properties flag should only be used when passing in a template as your config file.)
I've tried:
my_value=hello gcloud deployment-manager deployments create test-api-backend --config test-api-backend.yaml
And used {{env['my_value']}}, but the value isn't picked up.
I guess I could add the property in a .jinja file and rewrite this file before I run everything, but it feels like a hack. That, or my idea of passing a variable from the shell into Deployment Manager is a hack. I'm honestly not sure.
As the error message indicates, the command line properties can only be used with a template. They are essentially meant to replace the config yaml file.
The easiest thing to do is to just rename your yaml file to a .py or .jinja file. Then use that template as the file in the gcloud command instead of the yaml file.
In that new template file, add any defaults you would like if you don't pass them in on the command line.
For python, something like:
if 'myparam' in context.properties:
    valuetouse = context.properties['myparam']
else:
    valuetouse = mydefaultvalue
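As a fuller (hypothetical) sketch, a template named test-api-backend.py could look like the following; the machine type, zone, image and resource name are illustrative only.

```python
def GenerateConfig(context):
    # Properties supplied via --properties end up in context.properties;
    # fall back to a default if my_value was not supplied.
    props = context.properties or {}
    my_value = props.get('my_value', 'default-value')
    project = context.env['project']
    zone = 'us-central1-a'

    return {
        'resources': [{
            'name': 'test-api-backend-vm',
            'type': 'compute.v1.instance',
            'properties': {
                'zone': zone,
                'machineType': 'projects/%s/zones/%s/machineTypes/f1-micro' % (project, zone),
                'disks': [{
                    'boot': True,
                    'autoDelete': True,
                    'initializeParams': {
                        'sourceImage': 'projects/debian-cloud/global/images/family/debian-11',
                    },
                }],
                'networkInterfaces': [{'network': 'global/networks/default'}],
                # The externally supplied value becomes instance metadata.
                'metadata': {'items': [{'key': 'my_value', 'value': my_value}]},
            },
        }],
    }
```

The deployment can then be created against the template instead of the yaml config, for example:
gcloud deployment-manager deployments create test-api-backend --template test-api-backend.py --properties my_value=hello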
If the template uses another template, then you'll also need to create a schema file for the new top-level template so you can do the imports there instead of in the yaml file.
See the schema file in this github example.
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/blob/master/examples/v2/igm-updater/ha-service.py.schema
If you want, you can ignore all the properties and just do the imports section.

How to set an environment variable programmatically in Jenkins/Hudson?

I have two scripts in the pre-build step of a Jenkins job, the first a Perl script, the second a system Groovy script using the Groovy plugin. I need information from the first Perl script in my second Groovy script. I think the best way would be to set some environment variable, and I was wondering how that can be realized.
Or any other better way.
Thanks for your time.
The way to propagate environment variables among build steps is via EnvInject Plugin.
Here are some previous answers that show how to do it:
How to set environment variables in Jenkins?
Jenkins : Report results of intermediate [windows batch] build steps in email body
In your case, however, it may be simpler just to write to a file in one build step and read that file in another. To make sure you do not accidentally read from a previous version of the file you can incorporate BUILD_ID in the file name.
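As a rough illustration of that file-based hand-off (shown here in Python; in the actual job the writer would be the Perl step and the reader the Groovy step or EnvInject's properties file, and the file and variable names are made up):

```python
import os

# Build step 1: write the value to pass along, with BUILD_ID in the file name
# so a stale file from an earlier build is never picked up by mistake.
props_file = "build_vars.%s.properties" % os.environ["BUILD_ID"]
with open(props_file, "w") as f:
    f.write("MY_VALUE=hello\n")

# Build step 2: read the file back and use the value.
with open(props_file) as f:
    values = dict(line.strip().split("=", 1) for line in f if "=" in line)
print(values["MY_VALUE"])
```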
Using the EnvInject Plugin, in the job configuration you should use Inject environment variables to the build process / Evaluated Groovy script.
Depending on the setup, you may execute a Groovy or shell command and save the result in a map containing the environment variables:
Example
Either by getting the command result with the execute method:
return [DATE: 'date'.execute().text]
or with the Groovy equivalent, if one exists:
return [DATE: new Date()]