How feasible/difficult is it to run an application on a router? - language-agnostic

In my example, I want to build an application that sends some kind of interface to users who join a network, managed from a central station (possibly the router, or a central server). The new user's input to this interface would be sent back to the central station and handled there.
How plausible is this? Is sending something to a newly discovered IP realistic?

As long as you control the DNS server, you can send them to any web server you like.
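For illustration, once the DNS server resolves every name to a machine you control, the web server on that machine can hand out whatever interface you like. A rough sketch of that server in Node.js (the portal path and port are just placeholders):
const http = require('http');

// Every DNS name resolves to this box, so every HTTP request lands here.
http.createServer((req, res) => {
  if (req.url === '/portal') {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<h1>Welcome - please sign in</h1>');
  } else {
    // Redirect whatever page the user originally asked for to the portal.
    res.writeHead(302, { Location: '/portal' });
    res.end();
  }
}).listen(80);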

Completely plausible, but you'll need a router with open-source firmware, you'll need to program in the language of that source code, and you'll need the toolchain to build the binary for the firmware.

The only thing I can think of is NoCatAuth and friends. The user has to use their web browser, but most are accustomed to that.

Are you trying to FORCE the users to use your application (e.g. by selling these routers via an ISP), or are you expecting users to co-operate (e.g. inside an organisation's WAN)?
If the latter, it may be sufficient to set the DHCP server inside the router to serve the address of an HTTP proxy. That will get picked up by most OS/browsers. The proxy can then be used to control web-traffic - which pages they can see, and which ones are redirected to your own web-app.
If the user is considered an adversary, it would be trivial for them to override the proxy settings. In a LAN/WAN situation, you need to make sure nothing is connecting them to the outside world, except through the proxy.

Related

How to login once across multiple subdomains on a custom domain using Auth0?

We are developing a suite of separate SPA applications where each one lives on a separate subdomain with a common parent domain.
app1.domain.com
app2.domain.com
We want to avoid customers having to sign in to each app separately. So we want to be able to sign in once to one app and remaining signed in when visiting the other apps. Similarly, signing out of one would sign out in the others.
Based on my research of the Auth0 docs, this seemed to be possible with a custom domain. So we changed our plan, added the custom domain auth.domain.com, and created a test SPA app that could log in. I then created two subdomains and pointed them both at the app.
Logging in to app1.domain.com worked. The auth0.{clientid}.is.authenticated cookie was created. However, the cookie’s domain was app1.domain.com, not .domain.com as I’d hoped.
I then tried visiting app2.domain.com and confirmed that the cookie definitely wasn’t there and I wasn’t logged in.
Is there any way to configure Auth0 to keep a user logged in across all the subdomains?
(I posted this on the Auth0 Forums here but got no replies)
I opened a pull request on the auth0-spa-js package to add an option to specify the cookie domain. It was accepted and merged, and is available in version 1.21.0 and later.
In your client configuration, add a cookieDomain option.
const auth0 = await createAuth0Client({
domain: '<AUTH0_DOMAIN>',
client_id: '<AUTH0_CLIENT_ID>',
redirect_uri: '<MY_CALLBACK_URL>',
audience: '<MY_AUDIENCE>',
cookieDomain: '.example.com',
})
NOTE: Top-level cookie domains always start with a period (.).
Use this configuration on each app/subdomain under the same top level domain and you'll notice the auth0.{clientid}.is.authenticated cookie will exist on both. Signing in on one will result in you being signed in when you visit any others.
The auth0-spa-js package is used by most of the other auth0 plugins for Vue, React, etc. so client configuration should be virtually identical.

Magento Multi-Store Setup / Store Codes Setting

I'm running Magento on a shared server with a single IP. I originally set it up as a single store with no plans to do multi-stores. Do I need to have store codes trailing each domain in Magento to get this to work correctly? They will all check out at the main store URL. I have done this in the past and it has worked fine for me, but I was using store codes, and with this instance I am not.
Will it completely jack up my SEO?
So I have store1.com (main store) and store2.com, which needs to check out at store1.com.
Any help or a link to a how-to would be great. I have not been able to find a straightforward answer.
Your proposed setup of having store1.com and store2.com with a shared checkout URL of store1.com will work with a bit of work on your part, but it's not clean or ideal in my opinion. Magento will append an SID every time it switches domain to try to reload the customer's session data (the URLs will have ?SID=something). You would also need to change the checkout URL in your templates to only use the one domain, which would require hard-coding the full URL to the checkout and cart pages in the store2.com templates.
Personally I would simply have separate checkouts for each domain, which is supported straight out of the box in Magento without really doing anything. Why the need to have the checkout always under one domain? If it's because of SSL and single-IP limitations, then buy a UCC SSL certificate for multiple domains and have all the domains required to run on the server set up as SANs on the certificate. Cheap and simple. This way there is no need for store codes in URLs or SIDs in domain switching, and the user will always stay on the same domain without any funny switching business or complications.
As a customer I would also be a little surprised to shop on one domain and then check out on another these days, especially if one of the domains is international, and this will ultimately affect your conversion rate.
You seem to be familiar with store views, so once you have set up your secondary store view, simply go into the admin and override the base URLs for the secondary domain. Point the store2.com domain to the same IP address you are using for store1.com. Set up a vhost on the server so store2.com effectively replicates the vhost for store1.com. You can use vhost directives so that Magento loads the correct store view for the relevant domain name in your new vhost.
SetEnv MAGE_RUN_CODE yourstorecode
SetEnv MAGE_RUN_TYPE store
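For illustration, the full store2.com vhost might look something like this (the document root and store code are placeholders for your own values):
<VirtualHost *:80>
    ServerName store2.com
    ServerAlias www.store2.com
    # Same Magento document root as store1.com
    DocumentRoot /var/www/magento

    # Tell Magento which store view to load for this domain
    SetEnv MAGE_RUN_CODE store2_code
    SetEnv MAGE_RUN_TYPE store
</VirtualHost>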
You should now be able to have multiple sites/domains running on one Magento instance, each with an individual checkout URL, e.g. store1.com/checkout/onepage/ and store2.com/checkout/onepage/.
By using a UCC SSL certificate, the SSL will be valid for both domains and won't cause you issues, so there is no need for multiple IPs.

Using a completely decoupled frontend with user authentication

I'm playing with the idea of having a completely decoupled HTML5 frontend while still having user authentication for a web app. Is this possible, or will I run into some heavy browser security issues?
The idea is to have all static content delivered through a CDN on, say, example.com, and have it fetch dynamic data (and handle user authentication) through a separate subdomain, like api.example.com.
This would speed up the loading time of the site, and I could keep the frontend stuff in a completely separate repo so that the developers don't have to worry about setting up the backend to develop and test new features.
Is this already possible in some JS framework, perhaps Backbone.js, Angular.js, Ember.js, or Knockout.js?
It definitely is, but I think it is more about approach than technology. I have implemented what you describe for a project (it's online, but I don't want to do a shameless plug here; if you're interested in checking it out I can post the link). My stack is Java on the backend exposing a REST API for both authentication and business logic. The client is a Backbone.js application. I explicitly decided NOT to use sessions at all. It is completely stateless. This of course means that the user must be re-authenticated on every request.
When the user logs in through a slightly modified OAuth endpoint, they get a token that must be passed on every request. A cookie works well here, as cookies are handled automatically by the browser; if it is not passed as a cookie, the backend expects it as a parameter. The frontend communicates using the REST endpoints. It's a single-page application, fully client-side, which means the backend serves a page that is basically empty and includes a few JS files that are the application itself. No other page load occurs. Logout is done by simply deleting the cookie or not sending the authToken; the server cannot and doesn't have to "forget" about the user. Tokens are nice as they can be invalidated, either explicitly or by changing the password. I've chosen this approach as it made it easy to develop a desktop app and a browser plugin for my webapp without touching a single line of backend code.
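As a rough sketch of the client side of such a setup (the endpoint names and the use of localStorage here are illustrative assumptions, not the actual project's code):
// Log in once through the OAuth-style endpoint and keep the returned token.
async function login(username, password) {
  const res = await fetch('https://api.example.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password }),
  });
  const { authToken } = await res.json();
  localStorage.setItem('authToken', authToken);
}

// Every later call re-sends the token; the stateless backend
// re-authenticates it on each request (here passed as a query parameter).
function apiGet(path) {
  const token = localStorage.getItem('authToken');
  return fetch('https://api.example.com' + path + '?authToken=' + encodeURIComponent(token))
    .then((res) => res.json());
}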

Securing an API on the same domain/server as the website making the calls?

If your API and Website making ajax calls to that API are on the same server (even domain), how would you secure that API?
I only want requests from the same server to be allowed! No remote requests from any other domain. I already have SSL installed; does this mean I am safe?
I think you have some confusion that I want to help you clear up.
By the very fact that you are talking about "making Ajax calls" you are talking about your application making remote requests to your server. Even if your website is served from the same domain you are making a remote request.
I only want requests from the same server to be allowed!
Therein lies the problem. You are not talking about making a request from server-to-server. You are talking about making a request from client-to-server (Ajax), so you cannot use IP restrictions (unless you know the IP address of every client that will access your site).
Restricting Ajax requests does not need to be any different than restricting other requests. How do you keep unauthorized users from accessing "normal" web pages? Typically you would have the user authenticate, create a user session on the server, and pass a session cookie back to the client that is then submitted on every request, right? All of that works for Ajax requests too.
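For example, a same-origin fetch call carries that session cookie automatically, so the server can authorise it exactly like a normal page request (the endpoint name is made up):
// The browser sends the session cookie with same-origin requests by default.
fetch('/api/orders', { credentials: 'same-origin' })
  .then((res) => res.json())
  .then((orders) => console.log(orders));
// The server checks the session behind that cookie before returning data,
// just as it would for a regular page view.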
If your API is exposed on the internet there is nothing you can do to stop others from trying to make requests against it (again, unless you know all of the IPs of allowed clients). So you have to have server-side control in place to authorize remote calls from your allowed clients.
Oh, and having TLS in place is a step in the right direction. I am always amazed by the number of developers that think they can do without TLS. But TLS alone is not enough.
Look at the Referer header in the HTTP request headers. That tells you which page the request came from, though keep in mind the header is supplied by the client, so it can be forged.
It depends what you want to secure it from.
Third parties getting their visitors to request data from your API using the credentials those visitors have on your site
Browsers will protect you automatically (via the same-origin policy) unless you take steps to disable that protection.
Third parties getting their visitors to request changes to your site using your API and the visitors' credentials
Nothing Ajax specific about this. Implement the usual defences against CSRF (a minimal sketch follows at the end of this answer).
Third parties requesting data using their own client
Again, nothing Ajax specific about this. You can't prevent the requests being made. You need authentication/authorisation (e.g. password protection).
I already have SSL installed does this mean I am safe
No. That protects data from being intercepted en route. It doesn't prevent other people requesting the data, or accessing it from the endpoints.
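To illustrate the CSRF point above: one common defence is the double-submit cookie pattern, where the server issues a random token as a cookie, the page's JavaScript echoes it back in a custom header, and the server rejects any state-changing request where the two don't match. A minimal Node.js sketch (all names are illustrative):
const http = require('http');
const crypto = require('crypto');

http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/') {
    // Issue a random CSRF token as a cookie the page's own JavaScript can read.
    const token = crypto.randomBytes(16).toString('hex');
    res.writeHead(200, {
      'Set-Cookie': 'csrf=' + token + '; SameSite=Strict; Path=/',
      'Content-Type': 'text/html',
    });
    res.end('<p>app shell</p>');
  } else if (req.method === 'POST' && req.url === '/api/update') {
    // Reject the write unless the custom header echoes the cookie value;
    // a cross-site form post cannot set this header.
    const cookie = (req.headers.cookie || '').match(/csrf=([^;]+)/);
    const header = req.headers['x-csrf-token'];
    if (!cookie || cookie[1] !== header) {
      res.writeHead(403);
      return res.end('CSRF check failed');
    }
    res.end('ok');
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
On the client, the page reads the csrf cookie from document.cookie and sends its value as the X-CSRF-Token header with every Ajax write.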
You can check the IP address: if you want to accept requests only from the same server, place an .htaccess file in the API directory, or use a virtual host configuration directive, to allow only 127.0.0.1 or localhost. The exact configuration depends on which web server you have.
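For example, with Apache 2.4 the .htaccess file in the API directory could be as small as this (assuming the calling code really does run on the same machine):
# Apache 2.4: only allow requests that originate from this machine
Require ip 127.0.0.1 ::1

# Apache 2.2 equivalent:
# Order deny,allow
# Deny from all
# Allow from 127.0.0.1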

Is it possible to capture an outgoing http call from an ActionScript (Flex) module?

I'm trying to develop a test framework for some ActionScript code we're developing (Flex 3.5). What's happening is this:
As part of a Web Analytics function we are calling a track method in a class, providing the relevant information as part of the call. This method is provided in a library (SWC), and we have no access to the code.
Ultimately the track method sends an outgoing HTTP request to the tracking server. We can see this quite happily in HttpFox.
I was hoping to be able to capture this outgoing request and interrogate it in my test class, allowing us to a) run tests in a more standalone fashion, and b) programmatically determine that the correct information is being tracked.
No problem: just run this developer tool, which displays all requests leaving your machine.
http://www.charlesproxy.com/
Unless you're going to use a sniffing tool, which would probably be hard to use for a programmatic evaluation, I would recommend using a proxy to channel your request. You could let the track method send the request to a PHP script on the proxy server, have it evaluate the request content, and then forward it to the actual tracking server. I suppose with a tracking system you won't need to worry about the response, so it shouldn't be too hard to implement.
You could run a web server on localhost (or anywhere, really) and just make sure the DNS entry the code is trying to access points to the server you are running.
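For example, a minimal capture server in Node.js could record every tracking call it receives so the test code can fetch and inspect them afterwards (the port and the /captured endpoint are assumptions for this sketch):
const http = require('http');

const captured = [];

http.createServer((req, res) => {
  if (req.url === '/captured') {
    // Test code can fetch this endpoint and assert on what was tracked.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify(captured));
  }
  // Record the method, path and query string of each incoming tracking call.
  captured.push({ method: req.method, url: req.url });
  res.writeHead(200);
  res.end();
}).listen(80); // the tracking hostname's DNS points here; port 80 may need admin rights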