I am trying to connect to a BLE device (a LEGO Powered Up device) using Web Bluetooth. I have already done this using .NET/WinRT on the same notebook (it works nicely), and now I am trying to write an adapter for using it in Blazor. The LEGO Powered Up device implements a communication protocol on top of a BLE GATT characteristic with notifications and WriteValue.
As soon as the device connects, it immediately sends a series of notifications (as a kind of response to the connect) exposing information I need. In .NET/WinRT I am able to set up the notification receiver fast enough. With Chrome's Web Bluetooth, however, I only receive - depending on the timing/iteration - between the last 3 and all 9 messages (9 messages are expected). I guess this is just a regular race condition.
My question: Is this expected? Can I do something about it?
Below is a minimal test (which should log 9 messages when connected to a LEGO Technic Control+ Hub).
function writeToLog(x) {
    console.log(x.target.value.buffer);
}

async function connectToLwpDevice() {
    const serviceUuid = "00001623-1212-EFDE-1623-785FEABCD123".toLowerCase();
    const characteristicUuid = "00001624-1212-EFDE-1623-785FEABCD123".toLowerCase();

    const device = await navigator.bluetooth.requestDevice({ filters: [{ services: [serviceUuid] }] });
    const connectedDevice = await device.gatt.connect();
    const service = await connectedDevice.getPrimaryService(serviceUuid);
    const characteristic = await service.getCharacteristic(characteristicUuid);

    // Subscribe before enabling notifications; anything the hub sends
    // between connect() and startNotifications() may still be lost.
    characteristic.addEventListener('characteristicvaluechanged', writeToLog);
    await characteristic.startNotifications();
}
I don't believe there is anything that can be done about this at the moment. I've filed an issue against the Web Bluetooth specification to track the changes I believe are necessary in order to enable the reception of these notifications.
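If losing the initial burst is acceptable to work around rather than fix, one hedged mitigation (not part of the answer above) is to explicitly ask the hub to resend the information once the subscription is active. This sketch assumes the LEGO Wireless Protocol 3.0 Hub Property message layout (message type 0x01, operation 0x05 = Request Update); verify the message format and property IDs against the LWP documentation before relying on them.
function requestHubProperty(characteristic, propertyId) {
    // [length, hub id, msg type 0x01 (Hub Property), property id, 0x05 (Request Update)]
    const message = Uint8Array.of(0x05, 0x00, 0x01, propertyId, 0x05);
    return characteristic.writeValue(message);
}

// After startNotifications(), re-request each property you would otherwise
// have missed, e.g. (0x03 is an assumed property id for firmware version):
// await requestHubProperty(characteristic, 0x03);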
I am facing an issue. I want to forward a call to an agent and, if it is not answered, transfer the call to the next agent. The problem is that I don't have the first agent's number; I have to call an IVR and send keys to connect with the agent, which works fine. But if the agent does not answer after 4 rings, the call should go to another agent.
The call does not time out, because it appears to be answered by the IVR, and when it hangs up the status is "completed".
Is there a way to do call forwarding like that?
Here is the code:
const twiml = new Twilio.twiml.VoiceResponse();
const functionPath = '';

if (event.reason === "dialStatus") {
    console.log(event.DialCallStatus);
    if (event.DialCallStatus === "no-answer" || event.DialCallStatus === "busy" || event.DialCallStatus === "completed") {
        console.log('Duration' + event.DialCallDuration);
        return callback(null, twiml);
    } else {
        console.log(event.DialCallDuration);
        return callback(null, twiml);
    }
}

var phonenumber = ph.split('-');
const dialedPartyNumber = ph;
var digit = 'www3';
console.log(dialedPartyNumber);
console.log(digit);

const dial = twiml.dial({ timeout: 5, action: `${functionPath}?reason=dialStatus`, hangupOnStar: true });
dial.number({ sendDigits: digit }, dialedPartyNumber);
callback(null, twiml);
How I've done this before is to put the original call in a conference room. Then call the first agent and ask them to press X to join the conference. If they do not, then go to the second agent and repeat.
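A hedged sketch of that idea using Twilio's Node.js TwiML helper; the endpoint paths, conference name, and prompt are hypothetical placeholders, not a definitive implementation:
// Park the original caller in a named conference room.
const twiml = new Twilio.twiml.VoiceResponse();
twiml.dial().conference('support-room');

// Separately, call an agent; <Gather> asks them to press a key to accept.
const agentTwiml = new Twilio.twiml.VoiceResponse();
const gather = agentTwiml.gather({
    numDigits: 1,
    action: '/join-agent?room=support-room', // keypress: drop them into the room
    timeout: 10                              // no keypress: fall through below
});
gather.say('Press any key to accept the call.');
agentTwiml.redirect('/next-agent');          // try the next agent and repeat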
The problem you are describing isn't really a Twilio issue; it's actually a question of state management. From your description, it sounds like you are trying to implement an "inbound queue" solution, where multiple agents are "logged in" to a queue and receive calls accordingly.
If that is what you are trying to achieve, then I would recommend something like the below (see the sketch after this list):
1. A call comes into your system and queries a remote script for the first agent.
2. The remote script returns TwiML to route the call to the relevant agent, with a dial timeout of 8 seconds.
3. If the call is not answered, the remote script is re-invoked. Your server should recognize that the new invocation belongs to an existing session and respond with the next agent.
4. Upon answering the call, the answering agent marks the call as answered, making sure step 1 doesn't return that agent while they are on the phone.
Remember, Twilio (and other CPaaS platforms as well) is asynchronous, which means that you will need to manage your call-routing and call-control state yourself.
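A hedged sketch of that loop as a Twilio Function; getNextAgent and markAnswered are hypothetical helpers backed by whatever shared store holds your session and agent state:
exports.handler = function(context, event, callback) {
    const twiml = new Twilio.twiml.VoiceResponse();

    // Re-invocation after a <Dial> attempt: if the agent answered, we are done.
    if (event.DialCallStatus === 'completed') {
        markAnswered(event.CallSid); // hypothetical: ends the session, flags the agent busy
        return callback(null, twiml);
    }

    // First invocation, or the previous agent didn't pick up: dial the next one.
    const agent = getNextAgent(event.CallSid); // hypothetical: next free agent for this session
    if (!agent) {
        twiml.say('No agents are available right now.');
        return callback(null, twiml);
    }

    // 8-second timeout; on completion Twilio re-invokes this handler via `action`.
    const dial = twiml.dial({ timeout: 8, action: '/agent-hunt' });
    dial.number(agent);
    return callback(null, twiml);
};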
We are having a hard time understanding the meaning of the different hierarchy levels provided by the IoT Agent. There is the fiware-service, the fiware-servicepath, and underneath them a bunch of services that in turn have a bunch of devices associated.
We understand how to query for all devices and all services underneath a given fiware-service and fiware-servicepath. We also understand how to query for all fiware-servicepaths given a certain fiware-service. But how do we query for all of the "top level" fiware-services?
Our goal is to create a device management user interface which enables an end user to provision and unprovision the devices they are managing. Maybe we have a misconception of the fiware-service here, but since one can add such services with a certain POST request, our expectation was that we could somehow query for all those services. What are we missing?
If there really is no way to query the top-level services, I'd like to ask for the reasoning behind this, as I cannot find it in the docs.
Under NGSI-v2, all context brokers are implicitly multitenant. Using a different fiware-service for your provisioned devices implies that the devices and their data are owned by a separate business concern, so there should be no need to retrieve and combine provisioned devices across separate concerns.
When using the MongoDB option with an IoT Agent, the fiware-service helps to provide a unique database name for each tenanted service.
There should be no need to combine the IoT Agent data (services and devices). However, there may be a valid use case for combining context data coming from separate tenants (after securing a legal agreement from each party, of course). In that case you could create a simple proxy handler which is capable of handling the /v2/op/query and/or /v2/op/update endpoints and forwarding the request with amended headers:
const express = require('express');
const router = express.Router();
const request = require('request-promise');

const BASE_PATH =
    process.env.CONTEXT_BROKER || 'http://localhost:1026/v2';

function forwardRequest(req, res) {
    // Add necessary validation
    const headers = req.headers;
    headers['fiware-service'] = 'XXXX';
    headers['fiware-servicepath'] = 'YYYY';
    headers['accept'] = 'application/json';

    const options = {
        url: BASE_PATH + req.path,
        method: req.method,
        headers,
        json: true
    };

    request(options)
        .then(function(cbResponse) {
            return res.send(cbResponse);
        })
        .catch(function(err) {
            return res.send(err);
        });
}

router.post('/op/query', forwardRequest);
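To try the proxy, mount the router on an Express app; the mount path and port here are assumptions, not prescribed values:
const app = express();
app.use('/v2', router); // POST /v2/op/query is now forwarded with the amended headers
app.listen(3000);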
I have a Line-of-Business (LoB) Windows 8.1 Store application I developed for a client. The client side-loads it on several Windows 10 tablets. They use it in an environment where WiFi is spotty at best, and they would like some sort of notification inside the app, regardless of what page they are on, that lets them know they have lost connectivity to the network.
I have created a method on my Web API that does not hit the repository (database). Instead, it quickly returns some static information about the Web API, such as its version, the date and time of the invocation, and some trademark text that I'm required to return. I thought of calling this method at regular intervals and, when there is no response, assuming that Web API connectivity is lost. In my main page, the first one displayed when the application is started, I have the following in the constructor of my view model:
_webApiStatusTimer = new DispatcherTimer();
_webApiStatusTimer.Tick += OnCheckWebApiStatusEvent;
_webApiStatusTimer.Interval = new TimeSpan(0, 0, 30);
_webApiStatusTimer.Start();
Then, the event handler is implemented like this:
private async void OnCheckWebApiStatusEvent(object sender, object e)
{
    // stop the timer
    _webApiStatusTimer.Stop();

    // refresh the search
    var webApiInfo = await _webApiClient.GetWebApiInfo();

    // no response: assume connectivity to the Web API is lost
    if (webApiInfo == null)
    {
        var messageDialog = new MessageDialog(@"The application has lost connection with the back-end Web API!");
        await messageDialog.ShowAsync();

        // restart the timer
        _webApiStatusTimer.Start();
    }
}
When the Web API connection is lost, I get a nice popup message informing me that the Web API is no longer available. The problem I have is that after a while, especially (but not necessarily) if I navigate away from the first page, I get an UnauthorizedAccessException in my application.
I use the DispatcherTimer since my understanding is that it is compatible with UI threads, but obviously I am still doing something wrong. Does anyone care to set me on the right path?
Also, if you did something similar and found a much better approach, I'd love to hear about your solution.
Thanks in advance,
Eddie
First, if you are building a Windows Store app, you could use a background task to poll for the status of the Web API instead of putting this responsibility on your view model; it is not the view model's concern.
Second, when you connect from your Windows Store app to your API and authenticate/authorize successfully for the first time, how and where do you store the token (assuming you are using token authentication)? If you are (and ideally you should be), do you start a timer set to the token's expiration time? Is your local storage getting flushed somehow and losing the authorization data?
Need more information.
We have been using U2F directly on our auth web app, with its hostname as our App ID (https://auth.company.com), and that works fine. However, we'd like to be able to authenticate with the auth server from other apps (and hostnames, e.g. https://customer.app.com) that communicate with the auth server via an HTTP API.
I can generate the sign requests and whatnot through API calls and return them to the client apps, but it fails server-side (on the auth server) because the App ID doesn't validate (the clients are using their own hostnames as App IDs). This is understandable, but how should I handle it? I've read about facets but I cannot get them to work at all.
The client app JS is like:
var registerRequests = // ...
var signRequests = // ...
u2f.register('http://localhost:3000/facets', registerRequests, signRequests, function(registerResponse) {
if (registerResponse.errorCode) {
return alert("Registration error: " + registerResponse.errorCode);
}
// etc.
});
This gives me an error code 5 (timeout error) after a while. I don't see any request to /facets. Is there a way around this, or am I barking up the wrong tree (or a different forest)?
Update:
Okay, so after a few hours of researching this, I'm pretty sure this fiendish bit of the Firefox U2F plugin is the source of some of my woes:
if (u.scheme == "http")
if (url2str(u, true) == url2str(ou, true))
return resolve(challenge);
else
return reject("Not matching appID");
https://github.com/prefiks/u2f4moz/blob/master/ext/appIdValidator.js#L106-L110
It essentially says: if the appID's scheme is http, only allow it if it is exactly the same as the page's host (it goes on to implement the behaviour of fetching the trusted-facets JSON, but only for https).
Still not sure if I'm on the right track though in how I'm trying to design this.
I didn't need to worry about facets for my particular situation. In the end I just pass the client app hostname through to the Auth server via the secure API interface and it uses that as the App ID. Seems to work okay so far.
The issue I was having with facets was due to using http in dev and the Firefox U2F plugin not permitting that with JSON facets.
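For illustration, the shape of that flow might look like the sketch below; the endpoint and field names are hypothetical, and server-side validation of which hostnames are allowed is assumed but essential:
// Client app: tell the auth server which origin to use as the U2F App ID
// (the /api/u2f/sign-request endpoint is a made-up name).
const response = await fetch('https://auth.company.com/api/u2f/sign-request', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ appId: window.location.origin })
});
const { appId, challenge, registeredKeys } = await response.json();

// The auth server built the sign request with the submitted origin as App ID,
// so validation succeeds against the client app's own hostname.
u2f.sign(appId, challenge, registeredKeys, function(signResponse) {
    // forward signResponse back to the auth server for verification
});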
I have a Node.js server that takes JSON from three websites and sends it to be displayed on my website (as JSON). The JSON on the websites I'm taking from is constantly updated, every 10 seconds. How can I make my Node.js server constantly update so it has the most up-to-date data?
I'm assuming this isn't possible without refreshing the page, but it would be ideal if the page didn't have to be refreshed.
If this is impossible to do with Node.js and there is a different method of accomplishing it, I would be extremely appreciative if you told me.
Code:
router.get("/", function(req, res) {
    var request = require('request-promise');
    var data1;
    var data2;
    var data3;

    request("website1.json").then(function(body) {
        data1 = JSON.parse(body);
        return request("website2.json");
    }).then(function(body) {
        data2 = JSON.parse(body);
        return request("website3.json");
    }).then(function(body) {
        data3 = JSON.parse(body);
        res.render("app.ejs", { data1: data1, data2: data2, data3: data3 });
    });
});
Here are some general guidelines:
Examine the APIs you have available from your external service. Find out if there is anything it offers that lets you make a webSocket connection or some other continuous TCP connection so you can get realtime (or close to realtime) notifications when things change. If so, have your server use that.
If there is no realtime notification API from the external server, then you are just going to have to poll it every xx seconds. To decide how often to poll, you need to consider: a) how often you really need new data in your web pages (for example, data that is current within 5 minutes may be fine), b) what the terms of service and rate limits of the 3rd party service are (i.e. how often they will let you poll it), and c) how much polling your server can afford (from a server-load point of view).
Once you figure out how often you're going to poll the external service, build yourself a recurring polling mechanism. The simplest way is a setInterval() set to your polling interval. I have a Raspberry Pi node.js server that uses setInterval() to repeatedly check several temperature sensors. That mechanism works fine as long as you pick an interval time appropriate for your situation.
Then, for communicating new information back to a connected web page, the best way to get near-realtime updates from the server is for the web page to make a webSocket or socket.io connection to your server. This is a continuously connected socket over which messages can be sent in either direction. Using this mechanism, the client makes a socket.io connection to your server, and that connection stays open for the lifetime of the web page. Then, any time your server has new data for that web page, it can just send a message over the socket.io connection. The web page receives the message and updates its contents accordingly based on the data in the message. No page refresh is needed.
Here's an outline of the server code:
// start up socket.io on your existing web server
// (app here must be the http.Server instance, not just the Express app)
var io = require('socket.io')(app);
// recurring interval to poll several external web sites.
setInterval(function () {
var results = {};
request("website1.json").then(function (body) {
results.data1 = JSON.parse(body);
return request("website2.json");
}).then(function (body) {
results.data2 = JSON.parse(body);
return request("website3.json");
}).then(function (body) {
results.data3 = JSON.parse(body);
// decide if anything has actually changed on external service data
// and whether anything needs to be sent to connected clients
io.emit("newData", results);
}).catch(function(err) {
// decide what to do if external service causes an error
});
}, 10000);
The client code would then be generally like this:
<script src="/socket.io/socket.io.js"></script>
<script>
var socket = io();
socket.on("newData", function(data) {
// process new data here and update the web page
});
</script>