Raspberry Pi server - GPIO port status as JSON response

I've been struggling with this for a couple of days. The question is simple: is there a way to create a server on a Raspberry Pi that returns the current status of the GPIO ports in JSON format?
Example:
http://192.168.1.109:3000/led
{
  "Name": "green led",
  "Status": "on"
}
I found the Adafruit gpio-stream library useful, but I don't know how to output the data in JSON format.
Thank you

There are a variety of GPIO libraries for Node.js. One issue is that you might need to run your app as root to have access to GPIO, unless you can adjust the read access for those devices. This is supposed to be fixed in the latest version of Raspbian, though.
I recently built a node.js application that was triggered by a motion sensor, in order to activate the screen (and deactivate it after a period of time). I tried various GPIO libraries, but the one that I ended up using was "onoff" (https://www.npmjs.com/package/onoff), mainly because it seemed to use an appropriate way to identify changes on the GPIO pins (interrupts).
Now, you say that you want to send data, but you don't specify how that is supposed to happen. If we assume you want to send the data as a JSON body in an HTTP POST request, you would initialize the GPIO pins you have connected and then attach event handlers to them (to listen for changes).
Upon a change, you would invoke the HTTP request and serialize the JSON from a JavaScript object (there are libraries that take care of this as well). You would need to keep a name reference yourself, since you can only address the GPIO pins by number.
Example:
var GPIO = require('onoff').Gpio;
var request = require('request');

var x = new GPIO(4, 'in', 'both');

function exit() {
  x.unexport(); // free the pin for other processes
  process.exit();
}

x.watch(function (err, value) {
  if (err) {
    console.error(err);
    return;
  }
  request({
    uri: 'http://example.org/',
    method: 'POST',
    json: true,
    body: { x: value } // This is the actual JSON data that you are sending
  }, function () {
    // this is the callback for when the request has finished
  });
});

process.on('SIGINT', exit);
I'm using the npm modules onoff and request; request is used to simplify the JSON serialization over an HTTP request.
As you can see, I only set up one GPIO pin here. If you need to track multiple pins, make sure to initialize them all, distinguish them with some sort of name, and remember to unexport them in the exit callback (see the sketch below). I'm not sure what happens if you don't, but you might lock the pins for other processes.
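For instance, a minimal sketch of tracking several named pins (the pin numbers and names here are invented for illustration):
var GPIO = require('onoff').Gpio;

var pins = {
  'green led': new GPIO(4, 'in', 'both'),
  'red led': new GPIO(17, 'in', 'both')
};

Object.keys(pins).forEach(function (name) {
  pins[name].watch(function (err, value) {
    if (err) { console.error(err); return; }
    // attach the human-readable name before serializing
    console.log(JSON.stringify({ Name: name, Status: value ? 'on' : 'off' }));
  });
});

process.on('SIGINT', function () {
  // unexport every pin so other processes can use them
  Object.keys(pins).forEach(function (name) { pins[name].unexport(); });
  process.exit();
});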

Thank you, this was very helpful. I did not express myself well, sorry for that. I don't want to send data (for now); I just want to enter a web address like 192.168.1.109/led and receive a JSON response. This is what I've managed to do so far. I don't know if this is the right way. Please can you review this or suggest a better method?
var http = require('http');
var url = require('url');
var Gpio = require('onoff').Gpio;

var led = new Gpio(23, 'out');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  var command = url.parse(req.url).pathname.slice(1);
  switch (command) {
    case "on":
      //led.writeSync(1);
      var x = led.readSync();
      res.write(JSON.stringify({ msgId: x }));
      //res.end("It's ON");
      res.end();
      break;
    case "off":
      led.writeSync(0);
      res.end("It's OFF");
      break;
    default:
      res.end('Hello? yes, this is pi!');
  }
}).listen(8080);
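A minimal sketch of such a /led endpoint that returns the exact JSON shape from the original question, assuming the LED is wired to GPIO 23 and that readSync() reflects its last written state:
var http = require('http');
var url = require('url');
var Gpio = require('onoff').Gpio;

var led = new Gpio(23, 'out');

http.createServer(function (req, res) {
  if (url.parse(req.url).pathname === '/led') {
    // serve JSON with the matching content type so clients can parse it
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({
      Name: 'green led',
      Status: led.readSync() === 1 ? 'on' : 'off'
    }));
  } else {
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Hello? yes, this is pi!');
  }
}).listen(8080);
One detail worth noting: serving the body as application/json rather than text/html (as in the snippet above) lets clients parse the response automatically.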

Related

How to link a Node.js POST script to an HTML form?

I have created a RESTful API, which works as expected when I test it with Postman. I run it from an index.js file, which has the routes set up as in the file below.
const config = require('config');
const mongoose = require('mongoose');
const users = require('./routes/users');
const auth = require('./routes/auth');
const express = require('express');
const app = express();

//mongoose.set();
if (!config.get('jwtPrivateKey')) {
  console.log('Fatal ERROR: jwtPrivateKey key is not defined');
  process.exit(1);
}

// NOTE: uri was undefined in the original snippet; it presumably comes
// from config, e.g. a key like this (adjust to your setup):
const uri = config.get('db');

mongoose.connect(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useCreateIndex: true
})
  .then(() => console.log('Connected to MongoDB...'))
  .catch(err => console.log('Not Connected, bad ;(', err));

app.use(express.json());

// This is only for posting the user, e.g. registering them
app.use('/api/users', users);
app.use('/api/auth', auth);

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on port ${port}...`));
The real code is happening here. Testing this in Postman, I could establish that the values are saved in MongoDB.
// routes/users.js (the imports below are assumed; the original snippet omitted them)
const express = require('express');
const router = express.Router();
const bcrypt = require('bcrypt');
const _ = require('lodash');
const { User, validate } = require('../models/user'); // model path assumed

router.post('/', async (req, res) => {
  // validates the request
  const { error } = validate(req.body);
  if (error) return res.status(400).send(error.details[0].message);

  let user = await User.findOne({ email: req.body.email });
  if (user) return res.status(400).send('User already registered, try again!');

  user = new User(_.pick(req.body, ['firstName', 'lastName', 'email', 'password', 'subscription']));
  const salt = await bcrypt.genSalt(15);
  user.password = await bcrypt.hash(user.password, salt);

  // Here the user is being saved in the database.
  await user.save();

  //const token = jwt.sign({_id: user._id}, config.get('jwtPrivateKey'));
  const token = user.generateAuthToken();

  // We send the authentication token in the header, and the user info back to the client
  res.header('x-auth-token', token).send(_.pick(user, ['_id', 'firstName', 'lastName', 'email', 'subscription']));
});

module.exports = router;
Now my questions are:
How can I call the second code block from a form in one particular HTML file? When using action="path to the users.js", the browser opens the js file code but doesn't do anything.
Do I need to rewrite the POST block so that it also includes the connection details to the DB? And would this mean keeping the connection to MongoDB open once I insert, read, etc.? Wouldn't this eat a lot of resources if multiple users were to, e.g., log in at the same time?
Or is there a way to use the index.js together with the users.js that is referenced in the index.js file?
All of these are theoretical questions, as I am not quite sure how to use the created API in HTML, which I created while walking through a tutorial.
Do I need to change the approach here?
After some long hours I finally understood my own issue and question.
What I wanted to achieve is to post data from an HTML page into MongoDB through the API (this, I assume, is the best way to describe it).
In order to do this I needed to:
Start the server for the API, e.g. nodemon index.js, which has the information regarding the API.
Opened VS Code, opened the terminal, and started the API server (if I can call it that).
Opened CMD and started the local host for index.html by navigating to its folder and running http-server; now I could access it on http://127.0.0.1:8080.
For the form in register.html, I needed to post to that API address.
This is the part which I didn't understand, but now it makes sense. Basically, I start the API server separately, and once it is started I can use e.g. Postman and other apps that can access this link. I somehow thought HTML needed some more direct calls.
So after the localhost is started, register.html knows where to post via the API.
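A sketch of what the client-side post might look like; the field names and endpoint are taken from the route above, but treat the details as assumptions:
// script in register.html: post the form fields as JSON to the API
document.querySelector('form').addEventListener('submit', async function (e) {
  e.preventDefault();
  const res = await fetch('http://localhost:3000/api/users', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      firstName: this.firstName.value,
      lastName: this.lastName.value,
      email: this.email.value,
      password: this.password.value
    })
  });
  console.log('x-auth-token:', res.headers.get('x-auth-token'));
});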
Now I have a Joi validation issue, though on a different, simpler case this worked, so I just need to fix the code there.
Thank you for reading through, and apologies if I was not clear; I'm still learning the terminology!

Slack webhooks cause cls-hooked request context to orphan mysql connections

The main issue:
We have a lovely little express app, which has been crushing it for months with no issues. We manage our DB connections by opening a connection on demand, then caching it "per request" using the cls-hooked library. When the request ends, we release the connection so our connection pool doesn't run out. Classic. Over the course of months and many connections, we've never "leaked" connections. Until now! Enter... Slack! We are using the slack event handler as follows:
app.use('/webhooks/slack', slackEventHandler.expressMiddleware());
and we sort of think of it like any other request; however, slack requests seem to play weirdly with our cls-hooked usage. For example, we use ts-node and nodemon to run our app locally (e.g. you change code, the app restarts automatically). Every time the app restarts locally on our dev machines and you try to play with slack events, suddenly, when our middleware that releases the connection tries to do so, it thinks there is nothing in the session. When you then use a normal endpoint... it works fine, and that essentially seems to reset slack to working okay again. We are now scared to go to prod with our slack integration, because we're worried our slack "requests" are going to starve our connection pool.
Background
Relevant subset of our package.json:
{
  "@slack/events-api": "^2.3.2",
  "@slack/web-api": "^5.8.0",
  "express": "~4.16.1",
  "cls-hooked": "^4.2.2",
  "mysql2": "^2.0.0"
}
The middleware that makes the cls-hooked session
import { session } from '../db';
const context = (req, res, next) => {
  session.run(() => {
    session.bindEmitter(req);
    session.bindEmitter(res);
    next();
  });
};
export default context;
The middleware that releases our connections
export const dbReleaseMiddleware = async (req, res, next) => {
  res.on('finish', async () => {
    const conn = session.get('conn');
    if (conn) {
      incrementConnsReleased();
      await conn.release();
    }
  });
  next();
};
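For context, a sketch (not from the original post) of how these pieces are presumably wired up; ordering matters so the cls session exists before anything that reads it:
import express from 'express';
import { createEventAdapter } from '@slack/events-api';
import context from './middleware/context'; // path assumed
import { dbReleaseMiddleware } from './middleware/dbRelease'; // path assumed

const slackEventHandler = createEventAdapter(process.env.SLACK_SIGNING_SECRET);
const app = express();

app.use(context); // establishes the cls-hooked session per request
app.use(dbReleaseMiddleware); // releases the cached connection on res 'finish'
app.use('/webhooks/slack', slackEventHandler.expressMiddleware());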
the code that creates the connection on demand and stores it in "session"
const poolConn = await pool.getConnection();
if (session.active) {
  session.set('conn', poolConn);
}
return poolConn;
the code that sets up the session in the first place
export const session = clsHooked.createNamespace('our_company_name');
If you got this far, congrats. Any help appreciated!
Side note: you couldn't pay me to write a more confusing title...
Figured it out! We seem to have identified the following behavior in the node version of slack's API (it seems to only happen on Mac computers... sometimes).
The issue is that this is in the context of an express app, so Slack is managing the interface between its own event handler system + the http side of things with express (e.g. returning 200, or 500, or whatever). So what seems to happen is...
// you have some slack event handler
slackEventHandler.on('message', async (rawEvent: any) => {
  let i = 0;
  i = i + 1;
  // at this point, the http request has not returned 200; it is "pending" from express's POV
  await myService.someMethod();
  // ^^ while this was doing its async thing, the express request returned 200,
  // so things like res.on('finish') all fired and all your middleware ran,
  // but your event handler code is still going
});
So we ended up creating a manual call to release connections in our slack event handlers. Weird!
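The workaround looked roughly like this (a sketch; the pool and service names are carried over from the snippets above, and the exact release point is a matter of taste):
slackEventHandler.on('message', async (rawEvent) => {
  // acquire the connection explicitly instead of relying on the cls session
  const conn = await pool.getConnection();
  try {
    await myService.someMethod();
  } finally {
    // release manually: the res 'finish' middleware has already fired by now
    conn.release();
  }
});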

Accessing indexedDB in ServiceWorker. Race condition

There aren't many examples demonstrating indexedDB in a ServiceWorker yet, but the ones I saw were all structured like this:
const request = indexedDB.open( 'myDB', 1 );
var db;
request.onupgradeneeded = ...
request.onsuccess = function() {
  db = this.result; // Average 8ms
};

self.onfetch = function(e) {
  const requestURL = new URL( e.request.url ),
        path = requestURL.pathname;
  if( path === '/test' ) {
    const response = new Promise( function( resolve ) {
      console.log( performance.now(), typeof db ); // Average 15ms
      db.transaction( 'cache' ).objectStore( 'cache' ).get( 'test' ).onsuccess = function() {
        resolve( new Response( this.result, { headers: { 'content-type': 'text/plain' } } ) );
      }
    });
    e.respondWith( response );
  }
}
Is this likely to fail when the ServiceWorker starts up, and if so what is a robust way of accessing indexedDB in a ServiceWorker?
Opening the IDB every time the ServiceWorker starts up is unlikely to be optimal; you'll end up opening it even when it isn't used. Instead, open the db when you need it. A singleton is really useful here (see https://github.com/jakearchibald/svgomg/blob/master/src/js/utils/storage.js#L5), so you don't need to open IDB twice if it's used twice in its lifetime.
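A minimal sketch of that on-demand singleton pattern (the database, version, and store names are assumed from the question's example):
let dbPromise;
function getDb() {
  dbPromise = dbPromise || new Promise(function (resolve, reject) {
    const request = indexedDB.open('myDB', 1);
    request.onupgradeneeded = function () {
      request.result.createObjectStore('cache');
    };
    request.onsuccess = function () { resolve(request.result); };
    request.onerror = function () { reject(request.error); };
  });
  return dbPromise;
}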
The "activate" event is a great place to open IDB and let any "onupdateneeded" events run, as the old version of ServiceWorker is out of the way.
You can wrap a transaction in a promise like so:
var tx = db.transaction(scope, mode);
var p = new Promise(function(resolve, reject) {
  tx.onabort = function() { reject(tx.error); };
  tx.oncomplete = function() { resolve(); };
});
Now p will resolve/reject when the transaction completes/aborts. So you can do arbitrary logic in the tx transaction, and p.then(...) and/or pass a dependent promise into e.respondWith() or e.waitUntil() etc.
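Putting the two ideas together in a fetch handler, a sketch that reuses the getDb() singleton from above and the store/key names from the question:
self.onfetch = function (e) {
  e.respondWith(getDb().then(function (db) {
    return new Promise(function (resolve, reject) {
      const tx = db.transaction('cache');
      const req = tx.objectStore('cache').get('test');
      tx.onabort = function () { reject(tx.error); };
      tx.oncomplete = function () {
        // the get request has completed by the time the transaction does
        resolve(new Response(req.result, { headers: { 'content-type': 'text/plain' } }));
      };
    });
  }));
};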
As noted by other commenters, we really do need to promisify IndexedDB. But the composition of its post-task autocommit model and the microtask queues that Promises use makes it... nontrivial to do so without basically completely replacing the API. But (as an implementer and one of the spec editors) I'm actively prototyping some ideas.
I don't know of anything special about accessing IndexedDB from the context of a service worker versus accessing IndexedDB from a controlled page.
Promises obviously make your life much easier within a service worker, so I've found using something like, e.g., https://gist.github.com/inexorabletash/c8069c042b734519680c to be useful instead of the raw IndexedDB API. But it's not mandatory, as long as you create and manage your own promises to reflect the state of the asynchronous IndexedDB operations.
The main thing to keep in mind when writing a fetch event handler (and this isn't specific to using IndexedDB), is that if you call event.respondWith(), you need to pass in either a Response object or a promise that resolves with a Response object. As long as you're doing that, it shouldn't matter whether your Response is constructed from IndexedDB entries or the Cache API or elsewhere.
Are you running into any actual problems with the code you posted, or was this more of a theoretical question?

webrtc: failed to send arraybuffer over data channel in chrome

I want to send streaming data (as sequences of ArrayBuffer) from a Chrome extension to a Chrome App. Since the Chrome message APIs (including chrome.runtime.sendMessage, postMessage...) do not support ArrayBuffer, and JS arrays have poor performance, I have to try other methods. Eventually, I found that WebRTC over RTCDataChannel might be a good solution in my case.
I have succeeded in sending a string over an RTCDataChannel, but when I tried to send an ArrayBuffer I got:
code: 19
message: "Failed to execute 'send' on 'RTCDataChannel': Could not send data"
name: "NetworkError"
It seems that it's not a bandwidth-limit problem, since it failed even when I sent a single byte of data. Here is my code:
pc = new RTCPeerConnection(configuration, { optional: [ { RtpDataChannels: true } ]});
//...
var dataChannel = pc.createDataChannel("mydata", {reliable: true});
//...
var ab = new ArrayBuffer(8);
dataChannel.send(ab);
Tested on OS X 10.10.1 with Chrome M40 (Stable) and M42 (Canary), and on a Chromebook with M40.
I have filed a bug for WebRTC here.
I modified my code, and now everything works amazingly:
Removed the RtpDataChannels option when creating the RTCPeerConnection. (Yes, remove the RtpDataChannels option if you want a data channel; what a magical world!)
On the receiver side: there is no need to call createDataChannel; instead, handle onmessage and the other events using event.channel from the pc.ondatachannel callback:
pc.ondatachannel = function(event) {
  var receiveChannel = event.channel;
  receiveChannel.onmessage = function(event) {
    console.log("Got Data Channel Message:", event.data);
  };
};
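On the sending side, a sketch of the working setup based on the fix described above (the configuration object is the same as before); it simply drops the constraint:
var pc = new RTCPeerConnection(configuration); // no RtpDataChannels option
var dataChannel = pc.createDataChannel("mydata", {reliable: true});

dataChannel.onopen = function() {
  dataChannel.send(new ArrayBuffer(8)); // binary data now goes through
};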

How to write and immediately read a file in Node.js

I have to obtain a JSON that is embedded inside a script tag on a certain page... so I can't use regular scraping techniques, like cheerio.
The easy way out: write the file (download the page) to the server and then read it, using string manipulation to extract the JSONs (there are several), work on them, and save them to my db happily.
The thing is that I'm too new to Node.js and can't get the code to work. I think I'm trying to read the file before it is fully written, and if I read it too early I obtain [object Object]...
Here's what I have so far...
var http = require('http');
var fs = require('fs');
var request = require('request');

var localFile = 'tmp/scraped_site_.html';
var url = "siteToBeScraped.com/?searchTerm=foobar";

// writing
var file = fs.createWriteStream(localFile);
var request = http.get(url, function(response) {
  response.pipe(file);
});

// reading
var readedInfo = fs.readFileSync(localFile, function (err, content) {
  callback(url, localFile);
  console.log("READING: " + localFile);
  console.log(err);
});
So first of all, I think you should understand what went wrong.
The http request operation is asynchronous. This means that the callback code in http.get() will run sometime in the future, but fs.readFileSync, due to its synchronous nature, will execute and complete even before the http request is actually sent to the background thread that will execute it, since they are both invoked in what is commonly known as the same tick. Also, fs.readFileSync returns a value and does not take a callback.
Even if you replace fs.readFileSync with fs.readFile, the code still might not work properly, since the readFile operation might execute before the http response has been fully read from the socket and written to disk.
I strongly suggest reading: stackoverflow question and/or Understanding the node.js event loop
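To make the ordering concrete, here is a tiny sketch (the URL is a placeholder):
var http = require('http');

http.get('http://example.org/', function (response) {
  console.log('2: response callback runs on a later tick');
});
console.log('1: this line runs first, in the same tick that queued the request');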
The correct place to invoke the file read is when the response stream has finished writing to the file, which would look something like this:
var request = http.get(url, function(response) {
  response.pipe(file);
  file.once('finish', function () {
    fs.readFile(localFile, 'utf8' /* encoding to suit your data */, function(err, data) {
      // do something with the data if there is no error
    });
  });
});
Of course this is a very raw and not recommended way to write asynchronous code, but that is another discussion altogether.
Having said that, if you download a file, write it to disk and then read it all back into memory for manipulation, you might as well forgo the file part and just read the response into a string right away. Your code would then look something like this (this can be implemented in several ways):
var request = http.get(url, function(response) {
  var data = '';
  function read() {
    var chunk;
    while ( chunk = response.read() ) {
      data += chunk;
    }
  }
  response.on('readable', read);
  response.on('end', function () {
    console.log('[%s]', data);
  });
});
What you really should do, IMO, is create a transform stream that strips away the data you need from the response while not consuming too much memory, yielding this more elegant-looking code:
var request = http.get(url, function(response) {
  response.pipe(yourTransformStream).pipe(file);
});
Implementing this transform stream, however, might prove slightly more complex. So if you're a node beginner and you don't plan on downloading big files or lots of small files, then maybe loading the whole thing into memory and doing string manipulations on it is simpler.
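For illustration only, a simplified sketch of such a transform stream; it buffers the whole body for simplicity (a real chunk-boundary-aware matcher, which is what actually saves memory, is more involved), and the pageData marker is hypothetical:
var Transform = require('stream').Transform;

var extractJson = new Transform({
  transform: function (chunk, encoding, callback) {
    // accumulate the body; a real implementation would match across
    // chunk boundaries instead of holding everything in memory
    this.buffered = (this.buffered || '') + chunk.toString();
    callback();
  },
  flush: function (callback) {
    // emit just the embedded JSON once the response has ended
    var match = /var pageData = (\{[\s\S]*?\});/.exec(this.buffered);
    if (match) this.push(match[1]);
    callback();
  }
});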
For further information about transformation streams:
node.js stream api
this wonderful guide by substack
this post from strongloop
Lastly, see if you can use any of the million node.js crawlers already out there :-) take a look at these search results on npm
According to the http module docs, 'get' does not return the response body.
This is modified from the request example on the same page.
What you need to do is process the response within the callback (function) passed into http.request, so it can be called when it is ready (async).
var http = require('http')
var fs = require('fs')

var localFile = 'tmp/scraped_site_.html'
var file = fs.createWriteStream(localFile)

var req = http.request('http://www.google.com.au', function(res) {
  res.pipe(file)
  // wait for the write stream to flush before reading the file back
  file.on('finish', function() {
    fs.readFile(localFile, function(err, buf) {
      console.log(buf.toString())
    })
  })
})

req.on('error', function(e) {
  console.log('problem with request: ' + e.message)
})

req.end();
EDIT
I updated the example to read the file after it is created. This works by listening for the write stream's finish event, which fires once the pipe has closed the file; the file can then be reopened for reading. Alternatively you can use
res.on('data', function(chunk){...})
to process the data as it arrives, without putting it into a temporary file.
My impression is that you are trying to extract a JSON-serialized js object by reading it from a stream that's downloading an HTML file. This is doable, yet hard. It's difficult to know when your search expression has been found, because if you parse the chunks as they come in, you never know whether you received only part of the surrounding context; what you're looking for may have been split into two or more parts that are never analyzed as a whole.
You could try something like this:
var req = http.request('u/r/l', function(res) {
  res.on('data', function(data) {
    // parse data as it comes in
  });
});
req.end();
This allows you to read the data as it comes in. You can save it to disk or a db, or even parse it, if you accumulate the contents of the script tags into a single string and then parse the objects out of that.