Keeping sync between two devices accessing the same account (different/same session) - mysql

I'm curious how people code and set up their session handling when many devices access a single account, as I'm aiming for a stateless setup.
I work with Node.js currently, but pseudocode is appreciated.
This is how my sessions currently look; ID is a unique value. (JSON stored in Redis by key)
{"cookie": {
"originalMaxAge": null,
"expires": null,
"secure": true,
"httpOnly": true,
"domain": "",
"path": "/",
"sameSite": "strict"
},
"SameSite": "7e5b3108-2939-4b4b-afdc-39ed5dbd00d0",
"loggedin": 1,
"validated": 1,
"username": "Tester12345",
"displayself": 1,
"avatar": "{ \"folder\": \"ad566c0b-aeac-4db8-9f54-36529c99ef15/\", \"filetype\": \".png\" }",
"admin": 0,
"backgroundcolor": "#ffffff",
"namebackgroundcolor": "#000000",
"messagetextcolor": "#5d1414"}
I have no issues with this setup until a user is logged in on two different devices and one of them adjusts their colors or avatar; one session is up to date and the other is completely stale.
I do my best, where possible, to call out to the database to ensure the information is up to date when it matters most, but for this small slip-up I'm curious what I should be doing. I'd hate to hit the database on every request just for this information, but I suspect most setups do exactly that.
I could think of a hundred different ways to go about this, but I was hoping someone who has dealt with it has some good ideas. I'd like to be efficient and not make my database work harder than it needs to, but I know session handling already makes a call on each request, so I'm trying to settle on a final approach.
Open to all ideas, and my example above is a JSON insert into Redis; I'm open to changing to MySQL or another store.

One way to notify devices and keep them up-to-date about changes made elsewhere is with a webSocket or socket.io connection from device to the server. When the device logs in as a particular user and then makes a corresponding webSocket or socket.io connection to the server, the server keeps track of what user that connection belongs to. The connection stays connected for the duration of the user's presence.
Then, if a client changes something (let's use a background color as an example), and tells the server to update its state to that new color, the server can look in its list of connections to see if there are any other connections for this same account. If so, the server sends that other connection a message indicating what change has been made. That client will then receive that notification and can update their view immediately. This whole thing can happen in milliseconds without any polling by the client.
If you aren't familiar with socket.io, it is a layer on top of webSocket that offers some additional features.
In socket.io, you can add each device that connects on behalf of a specific account to a socket.io room that has a unique name derived from the account (often an email address or username). Upon login:
// join this newly connected socket to a room with the name
// of the account it belongs to
socket.join(accountName);
Then, you can easily broadcast to all devices connected to that room with one simple socket.io API call:
// send a message to all currently connected devices using this account
io.to(accountName).emit('someEvent', msg);
When socket.io connections are disconnected, they are automatically removed from any rooms that they have been placed in.
A room is a lightweight collection of currently connected sockets so it works quite well for a use like this.
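A minimal sketch of the room-per-account pattern described above (the `profile-update` event name and the account-to-room mapping are illustrative choices, not from the original answer):

```javascript
// Derive a stable room name from the account so every device logged in
// to the same account lands in the same socket.io room.
function roomFor(accountName) {
  return 'account:' + accountName;
}

// On login (server side), join the connecting socket to its account's room:
//   io.on('connection', (socket) => {
//     socket.on('login', (accountName) => socket.join(roomFor(accountName)));
//   });

// When one device changes a setting, broadcast it to the whole room so the
// user's other devices can update their local state immediately.
function broadcastProfileChange(io, accountName, change) {
  io.to(roomFor(accountName)).emit('profile-update', change);
}
```

Here `io` is the socket.io server instance; each client would listen with `socket.on('profile-update', applyChange)` and patch its local copy of the session data.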


Catch 22? Blocked by CORS policy: Same server, internal/external IP, no SSL

My apologies if this is a duplicate. I can find a million results about CORS policy issues, but not about this specific one:
I developed a simple "speed test" site for my users (wfh employees of my company) to access. It tests speeds across the public net to different datacenters we utilize, and via the users' VPN connection to one of our DCs.
There are more complicated elements, but for a basic round-trip "ping" I have an extremely simple PHP script on the server that contains:
<?php
header('Access-Control-Allow-Origin: *');
header('Access-Control-Allow-Headers: *');
if (isset($_GET['simple']) && $_GET['simple'] == '1')
    die('{ }');
?>
It is called like this:
$.ajax({
    type: 'GET',
    url: sURL,
    data: { ignore: (pingCounter.start = new Date().getTime()) },
    dataType: 'text',
    timeout: iTimeout
})
.done(function(ret) {
    pingCounter.end = new Date().getTime();
    [...] (additional code omitted for brevity)
(I know this has additional overhead other than the raw round-trip network traffic timing, but I don't need sub-ms accuracy. I just need to be able to tell users "the problem is on your end" or "ah yes, the problem is the latency between your house and this particular DC".)
The same server running that PHP code is addressable at the following URLs at the DC wherein our VPN server lies:
http://speedtest-int.mycompany.com/ping.php
http://speedtest-ext.mycompany.com/ping.php
Public DNS resolves like this:
speedtest-ext.mycompany.com IN A 1.1.1.1 (Actual public IP redacted)
speedtest-int.mycompany.com IN A 10.1.1.1 (Actual internal IP redacted)
If I access either URL from my browser directly, it loads fine (which is to say it responds with { }).
When loading via the JS snippet above, the call to http://speedtest-ext.mycompany.com/ping.php works fine.
The call to http://speedtest-int.mycompany.com/ping.php fails with "The request client is not a secure context and the resource is in more-private address space 'private'".
Fair enough, the solution is to add Access-Control-Allow-Private-Network: *, right?
EXCEPT that apparently can only be used with SSL:
https://developer.chrome.com/blog/private-network-access-update/
I have a self-signed cert on there, but that obviously fails by policy for that reason.
I could just get a LetsEncrypt cert for multiple subdomains. EXCEPT it will never validate the URL http://speedtest-int.mycompany.com because the LetsEncrypt servers won't be able to reach that to validate ownership, as it's a private IP.
I have no control over most of my users' machines, so I can't necessarily install trusted internal certs or change browser options. Most users use Chrome.
So is my solution to buy a UCC or wildcard cert?
I feel like I'm in a catch-22, and I don't want to spend however-much on a UCC cert for an internal app that will be very very very occasionally used by one of our 25 home-based employees when I want to prove that their home "internet is bad" and not the corp network.
Thanks in advance; I'm sure there's a stupidly obvious solution I'm not seeing.
(I'm considering pushing a /32 route to my VPN users for another real public IP to be used in place of the internal IP. Then I can have the "internal" test run against an otherwise publicly accessible IP which could be validated by LetsEncrypt, but VPN users would hit it via the VPN. Is that silly?)
Edit: If anyone is curious -- or it helps to clarify my goal here -- this is the output when accessing the speedtest page:
http://s.co.tt/wp-content/uploads/2021/12/Internal_Speedtest_Example-Redacted.png
It repeats for 20 cycles (or until stopped) and runs each element a varying number of times per cycle, collecting the average time for each. It ain't pretty, but it work(ed).

REST API which needs multiple different resources?

I'm designing a REST API for running jobs on virtual machines in different domains (Active Directory domains; virtual machines with the same name can exist in different domains).
/domains
/domains/{dname}
/domains/{dname}/vms
/domains/{dname}/vms/{cname}
And for jobs, which will be stored in a database
/jobs
/jobs/{id}
Now I need to add a new API for the following user stories.
As a user, I want to run a job (just job definition, not the stored job) on an existing VM.
As a user, I want to run a job (just job definition, not the stored job) on VM named x, which may or may not exist. The system should create the VM if x doesn't exist.
How should the api be designed?
Approach 1:
PUT /domains/{dname}
{ "state": "running_job", "vm": "vm_name", "job_definition": { .... } }
Approach 2:
PUT /domains/{dname}/vms/{vm_name}
{ "state": "running_job", "job_definition": { .... } }
Approach 3:
PUT /jobs
{ "state": "running", "domain": "name", "vm": "vm_name", "job_definition": { .... } }
Approach 4: create a new resource, say scheduler:
PUT /scheduler
{ "domain": "name", "vm": "vm_name", "job_definition": { .... } }
(what if I need to update some attributes of scheduler in the future?)
In general, how should one design a REST API URL that involves multiple resources?
How should the api be designed?
How would you design this on the web?
There would be an HTML form, right? With a bunch of input controls to collect information from the operator about what job to use, and which VM to target, and so on. Operator would fill in the details, submit the form. The browser would then use the form to create the appropriate HTTP request to send to the server (the request-target being computed from the form meta data).
Since the server gets to decide what the request-target should be (benefits of using hypertext), it can choose any resource identifier it wants. In HTTP, a successful unsafe request happens to invalidate previously cached responses with the same request target, so one possible strategy is to consider which is the most important resource changed by successfully handling the request, and use that resource as the target.
In this specific case, we might have a resource that represents the job queue (e.g. /jobs), and what we are doing here is submitting a new entry to the queue, so we might expect:
POST /jobs HTTP/1.1
....
If the server, in its handling of the request, also creates new resources for the specific job, then those would be indicated in the response:
HTTP/1.1 201 Created
Location: /jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef
....
Could you instead just use PUT?
PUT /jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef HTTP/1.1
???
Yes, if (a) the client knows what spelling to use for the request-target and (b) the client knows what the representation of the resource should look like.
Which unsafe HTTP method you use in the messages that trigger your business activities doesn't actually matter very much. You do need to use the methods correctly (so that general-purpose HTTP connectors don't get misled).
In particular, the important thing to remember about PUT is that the request body should be a complete representation of the resource - in other words, the request body for a PUT should match the response body of a GET. Think "save file"; we've made local edits to our copy of a resource, and we send back a copy of the entire document.

Couchbase Sync-Gateway Multiple Clients

I'm currently playing around with the Couchbase Sync Gateway and have built a demo app.
What is the intended behavior if a user logs in with the same username on a different device (which has an empty database), or if they delete the local database?
I'm expecting that all the data from the server should get synced back to the clients.
Is this correct?
My problem is that if I delete the database or log in from a different device, nothing gets synced.
OK, I figured it out, and it's exactly how I thought it would be.
If I log in from a different device, I get all the data synced automatically.
My problem was the missing sync function. I thought it would use a default and route all documents to the public channel automatically.
I'm now using the following simple sync function:
"sync": `function (doc, oldDoc) {
channel('!');
access('demo#example.com', '*');
}`
This will simply route all documents to the public channel and grant my demo-user access to it.
I think this shouldn't be used in production but it's a good starting point for playing around.
Now everything is working fine.
Edit: I've now found the missing info:
https://docs.couchbase.com/sync-gateway/current/configuration-properties.html#databases-this_db-sync
If you don't supply a sync function, Sync Gateway uses the following default sync function
...
The channels property is an array of strings that contains the names of the channels to which the document belongs. If you do not include a channels property in a document, the document does not appear in any channels.
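For anything beyond a demo, the same config slot can hold a sync function that routes documents into per-user channels instead of the public one. A sketch, assuming each document carries an `owner` field (a hypothetical property; `requireUser`, `channel`, and `access` are the standard sync-function helpers):

```
"sync": `function (doc, oldDoc) {
  // only the document's owner may write it
  requireUser(doc.owner);
  // route the document to that user's private channel
  channel('user-' + doc.owner);
  // grant the owner access to their channel
  access(doc.owner, 'user-' + doc.owner);
}`
```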

FIWARE subscription does not notify

I have multiple subscriptions to different entities, and at some random time one of the subscriptions stops sending notifications.
This is because the lastNotification attribute is set in the future. Here is an example:
curl 'http://localhost:1026/v2/subscriptions/xxxxxxxxxxxxxxx'
{
  "id": "xxxxxxxxxxxxxxx",
  ...
  "status": "active",
  ...
  "notification": {
    "timesSent": 1316413,
    "lastNotification": "2021-01-20T18:33:39.000Z",
    ...
    "lastFailure": "2021-01-18T12:11:26.000Z",
    "lastFailureReason": "Timeout was reached",
    "lastSuccess": "2021-01-20T17:12:09.000Z",
    "lastSuccessCode": 204
  }
}
In this example, lastNotification is ahead of the current time. The subscription will not be notified again until 2021-01-20T18:33:39.000Z.
I tried to modify lastNotification in the mongo database, but that does not change anything. It looks like the value 2021-01-20T18:33:39.000Z is cached.
The lastFailureReason field specifies the reason for the last notification failure associated with that subscription. The "diagnose notification reception problems" section in the documentation explains possible causes of "Timeout was reached". It makes sense for this to fail randomly if your network connection is somehow unstable.
With regard to timestamps in the future (no matter whether it happens for lastNotification, lastFailure or lastSuccess): that is pretty weird and probably not related to Orion Context Broker operation. Orion takes timestamps from the internal clock of the system where it is running, so maybe your system clock is set in the future.
Unless you run Orion Context Broker with -noCache (which in general is not recommended), subscriptions are cached. Thus, if you "hack" them in the DB you will not see the effect until the next cache refresh (refreshes take place at regular intervals, defined by the -subCacheIval parameter).

How to find out the availability status of a Web API from a Windows Store application

I have a Line-of-Business (LoB) Windows 8.1 Store application I developed for a client. The client side-loads it on several Windows 10 tablets. They use it in an environment where WiFi is spotty at best, and they would like to get some sort of notification inside the app, regardless of what page they are on, that lets them know they've lost connectivity to the network.
I have created a method on my Web API that does not hit the repository (database). Instead, it quickly returns some static information about the Web API, such as version, date and time of the invocation, and some trademark stuff I'm required to return. I thought of calling this method at precise intervals and, when there's no response, assuming that Web API connectivity is lost.
In my main page, the first one displayed when the application starts, I have the following in the constructor of my view model:
_webApiStatusTimer = new DispatcherTimer();
_webApiStatusTimer.Tick += OnCheckWebApiStatusEvent;
_webApiStatusTimer.Interval = new TimeSpan(0, 0, 30);
_webApiStatusTimer.Start();
Then, the event handler is implemented like this:
private async void OnCheckWebApiStatusEvent(object sender, object e)
{
    // stop the timer
    _webApiStatusTimer.Stop();

    // refresh the search
    var webApiInfo = await _webApiClient.GetWebApiInfo();

    // warn the user if the Web API did not respond
    if (webApiInfo == null)
    {
        var messageDialog = new MessageDialog(@"The application has lost connection with the back-end Web API!");
        await messageDialog.ShowAsync();

        // restart the timer
        _webApiStatusTimer.Start();
    }
}
When the Web API connection is lost, I get a nice popup message informing me that the Web API is no longer available. The problem is that after a while, especially (but not only) when I navigate away from the first page, I get an UnauthorizedAccessException in my application.
I use the DispatcherTimer since my understanding is that it is compatible with UI threads, but obviously I'm still doing something wrong. Would anyone care to set me on the right path?
Also, if you did something similar and found a much better approach, I'd love to hear about your solution.
Thanks in advance,
Eddie
First, if you are building a Windows Store app, you could use a background task to poll for the status of the Web API instead of putting this responsibility on your view model; it's not the view model's concern.
Second, if you are connecting from your Windows Store app to your API, then after the first successful authentication/authorization, how and where do you store the token (assuming you are using token authentication)? If you are (and ideally you should be), is there a timer you start that is set to the token expiration time? Is your local storage getting flushed somehow, losing the authorization data?
Need more information.