getUser does not work in 4.0.4 master webchat client - smooch

Today I downloaded the latest release of the webchat client (4.0.4, released yesterday) from GitHub and deployed it on my website.
I have noticed that Smooch.getUser() returns undefined for a new user until that user sends their first message; this does not happen for returning users.
<script>
    Smooch.on('ready', function () {
        console.log('the init has completed!');
    });

    var skPromise = Smooch.init({ appId: 'myAppId' });
    skPromise.then(function () {
        var u = Smooch.getUser();
        console.log(u._id);
    });
</script>
smooch_local.html:26 Uncaught (in promise) TypeError: Cannot read property '_id' of undefined
at smooch_local.html:26
at anonymous
But if I send any message after the promise has resolved and then try to read the userId, the variable is defined. This was not the behaviour in previous 3.x.x releases of the Web Messenger chat.
This code returns a valid userId:
<script>
    Smooch.on('ready', function () {
        console.log('the init has completed!');
    });

    var skPromise = Smooch.init({ appId: 'myAppId' });
    skPromise.then(function () {
        Smooch.sendMessage({ type: 'text', text: 'x' }).then(function () {
            var u = Smooch.getUser();
            console.log(u._id);
        });
    });
</script>
This is the console output:
12:21:20.165 the init has completed!
12:21:22.947 smooch_local.html:28 1102fdee2b7d3c2abb639cbe
Does anyone know whether this is a bug or intended new behaviour in the v4.x releases?
Thanks

This is expected behaviour for Web Messenger 4.x: users are no longer automatically created at init time. Instead, user creation is deferred until the user sends their first message. This was mentioned in the release notes for v4.0.0:
Web Messenger now uses a new optimized initialization sequence. This new sequence alters the timing of key events such as creating a new user or establishing a websocket connection.
Alternatively, you can pre-create a user with a userId before the Web Messenger is initialized, and use the login method to initialize as that user, but this may or may not be appropriate depending on your use case.
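A minimal sketch of that approach, assuming the user has already been created on your server and that your build exposes Smooch.login(userId, jwt) (the id and token below are placeholders):
Smooch.init({ appId: 'myAppId' })
    .then(function () {
        // placeholder id and JWT issued by your backend
        return Smooch.login('pre-created-user-id', 'jwt-from-your-server');
    })
    .then(function () {
        var u = Smooch.getUser();
        console.log(u._id); // defined, because the user already exists
    });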

Related

Slack webhooks cause cls-hooked request context to orphan mysql connections

The main issue:
We have a lovely little express app, which has been crushing it for months with no issues. We manage our DB connections by opening a connection on demand, but then caching it "per request" using the cls-hooked library. Upon the request ending, we release the connection so our connection pool doesn't run out. Classic. Over the course of months and many connections, we've never "leaked" connections. Until now! Enter... slack! We are using the slack event handler as follows:
app.use('/webhooks/slack', slackEventHandler.expressMiddleware());
and we sort of think of it like any other request; however, Slack requests seem to play weirdly with our cls-hooked usage. For example, we use ts-node and nodemon to run our app locally (i.e. you change code, the app restarts automatically). Every time the app restarts locally on our dev machines and you then play with Slack events, our middleware that releases the connection suddenly finds nothing in the session when it tries to do so. When you then hit a normal endpoint, it works fine and essentially seems to reset Slack to working okay again. We are now scared to go to prod with our Slack integration, because we're worried our Slack "requests" are going to starve our connection pool.
Background
Relevant subset of our package.json:
{
    "@slack/events-api": "^2.3.2",
    "@slack/web-api": "^5.8.0",
    "express": "~4.16.1",
    "cls-hooked": "^4.2.2",
    "mysql2": "^2.0.0"
}
The middleware that makes the cls-hooked session
import { session } from '../db';

const context = (req, res, next) => {
    session.run(() => {
        session.bindEmitter(req);
        session.bindEmitter(res);
        next();
    });
};

export default context;
The middleware that releases our connections
export const dbReleaseMiddleware = async (req, res, next) => {
    res.on('finish', async () => {
        const conn = session.get('conn');
        if (conn) {
            incrementConnsReleased();
            await conn.release();
        }
    });
    next();
};
the code that creates the connection on demand and stores it in "session"
const poolConn = await pool.getConnection();
if (session.active) {
    session.set('conn', poolConn);
}
return poolConn;
the code that sets up the session in the first place
export const session = clsHooked.createNamespace('our_company_name');
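For context, these pieces are wired into express roughly like this (the ordering is our assumption; the Slack line is the one quoted above):
// assumed wiring order: cls context first, then the release middleware, then routes
app.use(context);
app.use(dbReleaseMiddleware);
app.use('/webhooks/slack', slackEventHandler.expressMiddleware());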
If you got this far, congrats. Any help appreciated!
Side note: you couldn't pay me to write a more confusing title...
Figured it out! We seem to have identified the following behavior in the Node version of Slack's API (it seems to only happen on Mac computers... sometimes).
The issue is that this is in the context of an express app, so Slack is managing the interface between its own event handler system + the http side of things with express (e.g. returning 200, or 500, or whatever). So what seems to happen is...
// you have some slack event handler
slackEventHandler.on('message', async (rawEvent: any) => {
    let i = 0;
    i = i + 1;
    // at this point, the http request has not returned 200, it is "pending" from express's POV
    await myService.someMethod();
    // ^^ while this was doing its async thing, the express request returned 200.
    // so things like res.on('finish') all fired and all your middleware happened
    // but your event handler code is still going
});
So we ended up creating a manual call to release connections in our slack event handlers. Weird!
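Roughly what our fix looks like (a sketch, assuming the cls-hooked context is still active inside the handler; otherwise keep your own reference to the connection):
slackEventHandler.on('message', async (rawEvent) => {
    try {
        await myService.someMethod();
    } finally {
        // res.on('finish') has long since fired, so release manually here
        const conn = session.get('conn');
        if (conn) {
            incrementConnsReleased();
            await conn.release();
        }
    }
});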

net::ERR_CONNECTION_RESET with service worker in Chrome

I have a very simple service worker to add offline support. The fetch handler looks like
self.addEventListener("fetch", function (event) {
var url = event.request.url;
event.respondWith(fetch(event.request).then(function (response) {
//var cacheResponse: Response = response.clone();
//caches.open(CURRENT_CACHES.offline).then((cache: Cache) => {
// cache.put(url, cacheResponse).catch(() => {
// // ignore error
// });
//});
return response;
}).catch(function () {
// check the cache
return getCachedContent(event.request);
}));
});
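getCachedContent is not shown above; assume it is a plain Cache API lookup, something like:
// assumed helper: look the request up in the cache, fall back to a network error
function getCachedContent(request) {
    return caches.match(request).then(function (cached) {
        return cached || Response.error();
    });
}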
Intermittently we are seeing a net::ERR_CONNECTION_RESET error for a particular script we load into the page while online. The error is not coming from the server, as the service worker is picking the file up from the browser cache. Chrome's network tab shows that the service worker has successfully fetched the file from the disk cache, but the request from the browser to the service worker shows as (failed).
Does anyone know the underlying issue causing this? Is there a problem with my service worker implementation?
This is likely due to a bug in Chrome (and potentially other browsers as well) that could result in a garbage collection event removing a reference to the response stream while it's still being read.
Its fix in Chrome is being tracked at https://bugs.chromium.org/p/chromium/issues/detail?id=934386.

Chrome Push Notification: This site has been updated in the background

While implementing Chrome push notifications, we fetch the latest change from our server when a push arrives. While doing so, the service worker shows an extra notification with the message
This site has been updated in the background
We have already tried the suggestion posted here:
https://disqus.com/home/discussion/html5rocks/push_notifications_on_the_open_web/
but could not find anything useful so far. Are there any suggestions?
Short answer: use event.waitUntil and pass it a promise that eventually resolves with the result of showNotification. (If you have any other nested promises, you should return them as well.)
I was experiencing the same issue, but after a lot of research I learned that this happens because of the delay between the push event and self.registration.showNotification(). I had simply missed the return keyword before self.registration.showNotification().
So you need to implement the following code structure to get the notification:
var APILINK = "https://xxxx.com";

self.addEventListener('push', function (event) {
    event.waitUntil(
        fetch(APILINK).then(function (response) {
            return response.json().then(function (data) {
                console.log(data);
                var title = data.title;
                var body = data.message;
                var icon = data.image;
                var tag = 'temp-tag';
                var urlOpen = data.URL;
                return self.registration.showNotification(title, {
                    body: body,
                    icon: icon,
                    tag: tag
                });
            });
        })
    );
});
Minimal scenario:
self.addEventListener('push', event => {
    const data = event.data.json();
    event.waitUntil(
        // here we pass showNotification directly, but if you pass a promise
        // such as fetch, you should return showNotification inside of it,
        // as in the example above.
        self.registration.showNotification(data.title, {
            body: data.content
        })
    );
});
I've run into this issue in the past. In my experience the cause is generally one of three things:
1. You're not showing a notification in response to the push message. Every time you receive a push message on the device, a notification must be left visible when you finish handling the event. This is a consequence of subscribing with the userVisibleOnly: true option (note that this is not optional, and not setting it will cause the subscription to fail).
2. You're not calling event.waitUntil() when handling the event. A promise should be passed into this function to indicate to the browser that it should wait for the promise to resolve before checking whether a notification is left showing.
3. You're resolving the promise passed to event.waitUntil before a notification has been shown. Note that self.registration.showNotification() returns a promise, so you should be sure it has resolved before the promise passed to event.waitUntil resolves.
Generally, as soon as you receive a push message from GCM (Google Cloud Messaging) you have to show a notification in the browser. This is mentioned in the third point here:
https://developers.google.com/web/updates/2015/03/push-notificatons-on-the-open-web#what-are-the-limitations-of-push-messaging-in-chrome-42
So it may be that you are somehow skipping the notification even though you received a push message from GCM, in which case you get a notification with the default message "This site has been updated in the background".
This works; just copy, paste, and modify. Replace the return self.registration.showNotification() with the code below. The first part handles the notification, the second part handles the notification's click. But don't thank me, unless you're thanking my hours of googling for answers.
Seriously though, all thanks go to Matt Gaunt over at developers.google.com
self.addEventListener('push', function (event) {
    console.log('Received a push message', event);

    var title = 'Yay a message.';
    var body = 'We have received a push message.';
    var icon = 'YOUR_ICON';
    var tag = 'simple-push-demo-notification-tag';
    var data = {
        doge: {
            wow: 'such amaze notification data'
        }
    };

    event.waitUntil(
        self.registration.showNotification(title, {
            body: body,
            icon: icon,
            tag: tag,
            data: data
        })
    );
});

self.addEventListener('notificationclick', function (event) {
    var doge = event.notification.data.doge;
    console.log(doge.wow);
});
I was trying to understand why Chrome has this requirement that the service worker must display a notification when a push notification is received. I believe the reason is that push notification service workers continue to run in the background even after a user closes the tabs for the website. So in order to prevent websites from secretly running code in the background, Chrome requires that they display some message.
What are the limitations of push messaging in Chrome?
...
You have to show a notification when you receive a push message.
...
and
Why not use Web Sockets or Server-Sent Events (EventSource)?
The advantage of using push messages is that even if your page is closed, your service worker will be woken up and be able to show a notification. Web Sockets and EventSource have their connection closed when the page or browser is closed.
If you need more things to happen when the push event is received, note that showNotification() returns a Promise, so you can use classic chaining.
const itsGonnaBeLegendary = new Promise((resolve, reject) => {
    self.registration.showNotification(title, options)
        .then(() => {
            console.log("other stuff to do");
            resolve();
        });
});

event.waitUntil(itsGonnaBeLegendary);
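The same thing can be written without the explicit Promise constructor, since waitUntil only needs the chained promise. A minimal equivalent sketch:
// chain directly on the promise that showNotification() returns
event.waitUntil(
    self.registration.showNotification(title, options).then(() => {
        console.log("other stuff to do");
    })
);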
I was pushing the notification twice: once in FCM's onBackgroundMessage()
click_action: "http://localhost:3000/"
and once in self.addEventListener('notificationclick', ...)
event.waitUntil(clients.matchAll({
    type: "window"
}).then...
So I commented out click_action, pressed Ctrl+F5 to refresh the browser, and now it works normally.

webrtc: failed to send arraybuffer over data channel in chrome

I want to send streaming data (as sequences of ArrayBuffer) from a Chrome extension to a Chrome App. Since the Chrome message API (including chrome.runtime.sendMessage and postMessage) does not support ArrayBuffer, and JS arrays have poor performance, I had to try other methods. Eventually I found that WebRTC over an RTCDataChannel might be a good solution in my case.
I have succeeded in sending strings over an RTCDataChannel, but when I tried to send an ArrayBuffer I got:
code: 19
message: "Failed to execute 'send' on 'RTCDataChannel': Could not send data"
name: "NetworkError"
It does not seem to be a bandwidth limit problem, since it failed even when I sent a single byte of data. Here is my code:
pc = new RTCPeerConnection(configuration, { optional: [{ RtpDataChannels: true }] });
//...
var dataChannel = pc.createDataChannel("mydata", { reliable: true });
//...
var ab = new ArrayBuffer(8);
dataChannel.send(ab);
Tested on OS X 10.10.1 with Chrome M40 (Stable) and M42 (Canary), and on a Chromebook with M40.
I have filed a bug for WebRTC here.
I modified my code and now everything works:
1. Remove the RtpDataChannels option when creating the RTCPeerConnection. (Yes, remove the RtpDataChannels option if you want a data channel; what a magic world!)
2. On the receiver side there is no need to call createDataChannel. Instead, handle onmessage (and the other handlers) on event.channel obtained from the pc.ondatachannel callback:
pc.ondatachannel = function (event) {
    var receiveChannel = event.channel;
    receiveChannel.onmessage = function (event) {
        console.log("Got Data Channel Message:", event.data);
    };
};
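For completeness, the sender side under the same fix might look roughly like this (a sketch; the configuration object and the signalling are assumed to be set up elsewhere):
// sender side sketch: no RtpDataChannels constraint; send once the channel opens
var pc = new RTCPeerConnection(configuration);
var dataChannel = pc.createDataChannel("mydata");
dataChannel.onopen = function () {
    var ab = new ArrayBuffer(8);
    dataChannel.send(ab); // binary data works on a regular (SCTP) data channel
};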

Race-condition with web workers when setting onmessage handler?

Please consider the following code and the explanation from this Mozilla tutorial "Using web workers":
var myWorker = new Worker('my_worker.js');
myWorker.onmessage = function(event) {
    print("Called back by the worker!\n");
};
Line 1 in this example creates and starts running the worker thread. Line 2 sets the onmessage handler for the worker to a function that is called when the worker calls its own postMessage() function.
The thread is started the moment the Worker constructor is called. I wonder if there might be a race condition on setting the onmessage handler, for example if the web worker posts a message before onmessage is set.
Does someone know more about this?
Update:
Andrey pointed out that the web worker should start its work when it receives a message, as in the Fibonacci example in the Mozilla tutorial. But doesn't that create a new race condition on setting the onmessage handler in the web worker?
For example:
The main script:
var myWorker = new Worker('worker.js');
myWorker.onmessage = function(evt) {..};
myWorker.postMessage('start');
The web worker script ('worker.js')
var result = [];
onmessage = function(evt) {..};
And then consider the following execution path:
main thread                                  web worker
var myWorker = new Worker("worker.js");
                                             var result = [];
myWorker.onmessage = ..
myWorker.postMessage('start');
                                             onmessage = ..
The "var result = []" line can be left out, it will still be the same effect.
And this is a valid execution path, I tried it out by setting a timeout in the web worker! At the moment I can not see, how to use web workers without running into race-conditions?!
The answer is that both the main script and the web worker have an implicit MessagePort with a message queue that holds incoming messages: on the worker side, messages are buffered until the initial worker script has finished running, and on the main-thread side they are buffered until the onmessage handler is assigned, so messages posted "too early" are not lost.
For details, see this thread on the WHATWG help mailing list:
http://lists.whatwg.org/pipermail/help-whatwg.org/2010-August/000606.html
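A small way to convince yourself of this (a sketch; the file name and delay are just for illustration): even if the main thread attaches its handler long after the worker has posted, the message is still delivered.
// worker.js (illustrative file name): post immediately on startup
postMessage('hello from the worker');

// main script: attach the handler only after a delay; the message still arrives,
// because it waits in the port's message queue until onmessage is assigned
var myWorker = new Worker('worker.js');
setTimeout(function () {
    myWorker.onmessage = function (event) {
        console.log(event.data); // "hello from the worker"
    };
}, 1000);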