Automatically Host Content When Specific Nameservers Are Used - subdomain

Basically, this is probably going to be an incredibly generic and poorly crafted question. I apologise in advance for that and hope you can look past it and potentially offer some solutions/help.
I am looking at starting a new project which, I guess, functions similarly to Shopify in a way. Users will pay a monthly fee and then get their own website which has a store-style thing on it.
I am comfortable with most aspects of making this; however, the one thing I'm not completely sure how to handle is when they want a custom domain (which I assume most customers would). Based on my experience with services such as Spotify and Tictail, to do this I am going to have to get them to change their nameservers to my nameservers. After that, I'm not completely sure how it will function or how to set it up. All of the files for the sites are going to be pretty much exactly the same, so I don't need much to change there.
So basically my main question is: how would I develop it to automatically host certain content when someone sets their nameservers to my nameservers? I would like it to be completely automatic if possible, but I don't mind if there is a little manual input.
I'm super sorry if the question isn't worded properly or if it's confusing, as I've never developed something like this before. A simple point in the right direction would be much appreciated, as I'm not too sure where to start.
Thanks

It depends on the server technology you use to provide such user-specific sites. If I understand you properly, you are looking to get something like:
# Your service runs here:
http://yourdomain.com
# For the user account (user specific application)
http://{unique_username}.yourdomain.com/
# Then you would have (for N users)
http://username0.yourdomain.com/
http://username1.yourdomain.com/
...
http://usernameN.yourdomain.com/
The way to avoid DNS hijacking is to explicitly specify the server name (hostname) in the server's virtual host configuration. There are many servers around which provide virtual host configurations to allow many different domain names on the same IP address.
As an example, in nginx this can be done using virtual hosts. In your case you would need to create them programmatically. In order to do so, a file must be added to the folder /etc/nginx/sites-available. This could be a file called /etc/nginx/sites-available/username0 with this content:
server {
    listen 80;
    server_name username0.yourdomain.com;
    root /path/to/app/;
    ...
}
So for your solution, you would create one file per customer. In order to activate a newly created virtual host (server block), symlink it into the folder /etc/nginx/sites-enabled:
ln -s /etc/nginx/sites-available/username1 /etc/nginx/sites-enabled/username1
sudo service nginx reload
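To automate this per customer, here is a rough sketch of a hypothetical helper script (the script name, paths and domain are my assumptions; it assumes the Debian-style sites-available/sites-enabled layout used above):
#!/usr/bin/env bash
# add_customer.sh <username> -- hypothetical helper, adjust paths to your setup
set -e
USERNAME="$1"
VHOST="/etc/nginx/sites-available/${USERNAME}"

# Write a minimal server block for the customer's subdomain
cat > "$VHOST" <<EOF
server {
    listen 80;
    server_name ${USERNAME}.yourdomain.com;
    root /path/to/app/;
}
EOF

# Enable it and reload nginx (nginx -t guards against a broken config)
ln -s "$VHOST" "/etc/nginx/sites-enabled/${USERNAME}"
nginx -t && service nginx reload
Your signup backend could invoke something like this whenever a new customer is provisioned.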
Read more about DNS hijacking here and have a look at virtual host implementations, like the nginx server blocks shown above or the Apache virtual hosts.
Good luck!

Related

What should I use if xip.io is not an option in local install of Openshift?

The Openshift 'all-in-one' Vagrant box uses xip.io. The security team at my company has relayed to us that using 'xip.io' for a wildcard DNS could cause some security concerns. So given that 'xip.io' is not an option, how can I get this set up?
We ran into a similar issue at my company. The best answer, which is a bit dire, is that you'll need to set up your own custom DNS. Sorry to say, because it is a bit annoying to do, but it's not all that bad. Use this link; it should give you some guidance.
Basically, you'll need your /etc/dnsmasq.conf file to look like:
# Reverse DNS record for master
host-record=master.example.com,192.168.1.100
# Wildcard DNS for OpenShift Applications - Points to Router
address=/apps.example.com/192.168.1.100
The article goes into great detail. I'm not sure how network savvy you are, but if you're not, then I'd suggest roping in one of your ops guys to assist with this. Without a relatively good understanding of networking, setting this up would be quite difficult.
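As a quick sanity check (a hedged example; the hostnames and IP are just the placeholders from the snippet above, and the restart command varies by distro), restart dnsmasq and query it directly:
sudo systemctl restart dnsmasq
# Both should return 192.168.1.100 if the wildcard entry is working
dig +short master.example.com @<ip-of-the-dnsmasq-host>
dig +short anything.apps.example.com @<ip-of-the-dnsmasq-host>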
I understand that you can do this without requiring your own DNS.
The quickest way is to manually add the required entries to /etc/hosts on your host system, mapping them to the IP address that the xip.io address would have mapped to.
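For example (the hostnames are made up for illustration; note that /etc/hosts does not support wildcards, so each route hostname needs its own line):
# /etc/hosts on the host system
10.2.2.2   master.mylocal.test
10.2.2.2   myapp.apps.mylocal.test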
Alternatively, set up dnsmasq with something like the following.
$ cat /usr/local/etc/dnsmasq.conf
address=/.10.2.2.2.xip.io/10.2.2.2
address=/.ain1/10.2.2.2
address=/.10.1.2.2.xip.io/10.1.2.2
address=/.cdk/10.1.2.2
$ ls /etc/resolver/
ain1 ain1-xip cdk cdk-xip ddns
$ cat /etc/resolver/cdk-xip
domain 10.1.2.2.xip.io
nameserver 127.0.0.1
This is cut and pasted from elsewhere and I don't use CDK myself, so I'm not sure about the IP addresses here, but I understand this shouldn't require you to set up a separate DNS. The 'ain1' entry is for the OpenShift Origin all-in-one VM, which is the equivalent of CDK for OpenShift, but using the latest Origin upstream version.

maximising static IPs in google-compute-hosted microservices

This is my first time asking a question on here.
I have an expanding set of services hosted on google compute platform.
The initial round was set up in a very stressed situation, and I am now refactoring.
I currently have 3 (EDIT: no, that's 4) microservice VM hosts, which will all be HTTPS soon (and so each need their own IP). In addition there is a list of test boxes, as we are developing bits. Test boxes do not need HTTPS.
Question 1) Does anyone have a workaround to get multiple static IPs per host? This is why I have a large number of hosts.
Question 2) How can I have more than a /29 of static IPs (e.g. 8 or more)? This is corporate work; we will pay for services.
Question 3) According to the Google API, I may deallocate static IPs, but I cannot find an implementation for this. Do you know of one? As I have built systems like this in the past, I know there is no technical reason why there should not be an API for this.
Bonus question, Question 4) Is there a mechanism to serialise a saved hard disk out of Google Cloud? This would make my CEO happy.
An ideal response is a relevant "Fine Manual" to read.
I work on GMT time. All Linux hosts, though that's probably not relevant. Although a developer, I can admin most things Linux.
UPDATE: if you delete an IP via gcloud compute addresses delete $name --region europe-west1 but don't delete the IF inside the box, the address becomes non-static, which is the objective of Q3.
You can find the answers to your questions below (see the command sketch after the list for concrete examples):
1) It is not directly possible to assign multiple external IPs to an instance. One workaround is to create multiple forwarding rules pointing to the same target pool containing that instance.
2) It is currently not possible to reserve a whole block of IP addresses, as addresses are randomly assigned to instances from the pool of available IPs.
3) If you have reserved static IPs in your project, you can release an IP from one instance and assign it to another.
4) There is no direct way to do that; however, one workaround I can think of is to use the dd tool to clone your disk as .raw and save it to Cloud Storage. This clone can be used to create other disks outside your project.
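A hedged sketch of those workarounds (instance, pool, address, region, zone and bucket names are placeholders, and exact flags can differ between gcloud SDK versions):
# 1) Extra IPs for one instance: reserve addresses and point forwarding rules at a target pool containing it
gcloud compute addresses create extra-ip-1 --region europe-west1
gcloud compute target-pools create web-pool --region europe-west1
gcloud compute target-pools add-instances web-pool --instances my-vm --instances-zone europe-west1-b
gcloud compute forwarding-rules create https-extra-1 --region europe-west1 --address extra-ip-1 --target-pool web-pool --ports 443

# 3) Move a reserved static IP from one instance to another
gcloud compute instances delete-access-config my-vm-a --access-config-name "external-nat"
gcloud compute instances add-access-config my-vm-b --address <reserved-ip> --access-config-name "external-nat"

# 4) Get a disk out of GCP: attach it to a throwaway VM, dump it raw, copy to Cloud Storage
sudo dd if=/dev/sdb of=/tmp/disk-copy.raw bs=4M
gsutil cp /tmp/disk-copy.raw gs://my-bucket/disk-copy.raw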
I hope that helps.

Sourcegear Vault Client: Working on multiple machines

Say I use SourceGear Vault client on my desktop at work and check out a few files to a network folder. But when I am working from home and login to a terminal server (Windows RDP), Vault thinks that someone else has checked out the files and so I can't access/edit them.
Is there a way to set things up such that I can check out a file to a common network location and keep working on it from multiple machines?
Thanks
What you are seeing is normal, because the Vault cache is specific to each client.
Here are the options I could see for how to deal with this:
1) The best way is to shelve your code changes. Then you can pull your shelved changes down when you get home and continue where you left off. If you need to check in from home, then when you initially shelve your changes, you should also undo your check out so that you can check out again from home.
2) You could use a network location for yourself, but you are likely to run into the same situation when you go to check in. What this would give you is just the ability to have only one location for the code you are editing. Also, some of the statuses you see as you are switching between clients won't look right. You still would get best results by undoing your check out before leaving work, but in this case, you'd choose the option to leave your changes instead of reverting them back.
3) You can perform an additional check in. That way your code is in Vault. Then you can check it out again and continue from where you left off. Some places don't want partially completed code checked in though, so you will have to decide if this is in line with your workplace requirements.
4) You could perform a non-exclusive check out. That way you can check out twice. You will get a warning, but it will still allow you to continue. To get your changes from your work computer, you still will be well served by using Shelve.
Feel free to email me at support@sourcegear.com if you need additional help.
Thanks,
Beth Kieler
Technical Support
SourceGear LLC

How to obtain amount of transferred data through Wi-Fi from other applications?

I'm facing this problem: I have spent a lot of time searching for some API or "something" with which it's possible to obtain the amount of data transferred over Wi-Fi by other applications (as the screenshot below shows).
Does someone know a way to do it? Has anyone tried (or already done) something similar? Or, put a little differently: is it even possible to do this?
Currently I think that this is not possible, because I think a non-system application cannot retrieve data about other application(s) installed on the device. But maybe I am missing something, so I posted this question and will be glad for whatever suggestions you have.
Thanks in advance!
TrafficStats
Class that provides network traffic statistics. These statistics include bytes transmitted and received and network packets transmitted and received, over all interfaces, over the mobile interface, and on a per-UID basis.
This means you can use getUidTxBytes to get the total transmitted bytes and, likewise, getUidRxBytes to get the total received bytes.
And you can get an application's UID with:
getApplicationInfo().uid
or, for other applications, refer to THIS
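A minimal Java sketch of the idea (the class, method and log-tag names are mine; note these per-UID counters cover all interfaces since boot, not Wi-Fi alone, and some devices return UNSUPPORTED):
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;
import android.net.TrafficStats;
import android.util.Log;

public final class TrafficHelper {
    // pm can come from getPackageManager() inside any Activity or Service
    public static void logAppTraffic(PackageManager pm, String packageName) {
        try {
            ApplicationInfo info = pm.getApplicationInfo(packageName, 0);
            long tx = TrafficStats.getUidTxBytes(info.uid); // bytes sent by that UID since boot
            long rx = TrafficStats.getUidRxBytes(info.uid); // bytes received by that UID since boot
            if (tx == TrafficStats.UNSUPPORTED || rx == TrafficStats.UNSUPPORTED) {
                Log.w("Traffic", "Per-UID stats not supported on this device");
                return;
            }
            Log.d("Traffic", packageName + " tx=" + tx + " rx=" + rx);
        } catch (PackageManager.NameNotFoundException e) {
            Log.w("Traffic", packageName + " is not installed");
        }
    }
}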
While Sercan's answer is correct, I must warn you that TrafficStats is not always guaranteed to give you correct stats. Basically, TrafficStats checks files in a directory such as /proc/uid_stat/1094/, reading various files like tcp_snd, tcp_rcv, etc. under that directory. On some devices, these (pseudo)files are not updated, so you should always check for a return value of UNSUPPORTED (-1): http://developer.android.com/reference/android/net/TrafficStats.html#UNSUPPORTED
Also, these stats have typically not included UDP data, so the numbers you report will be wrong for apps that use UDP (like VoIP apps). For more details, look at
https://code.google.com/p/android/issues/detail?id=32410
On recent Android versions, there is another /proc file that gives you a lot of detail: /proc/self/net/xt_qtaguid/stats. But this pseudo-file will only show the stats of the app reading it; if an Android app tries to read this file, it will not get stats for any other Android app.
Why not try reading the proc files containing network information?
try this:
adb shell
cd /proc/uid_stat/XXXX   # XXXX = the app's UID
cat tcp_rcv tcp_snd

How can I investigate these mystery Django crashes?

A Django site (hosted on Webfaction) that serves around 950k pageviews a month is experiencing crashes that I haven't been able to figure out how to debug. At unpredictable intervals (averaging about once per day, but not at the same time each day), all requests to the site start to hang/timeout, making the site totally inaccessible until we restart Apache. These requests appear in the frontend access logs as 499s, but do not appear in our application's logs at all.
In poring over the server logs (including those generated by django-timelog) I can't seem to find any pattern in which pages are hit right before the site goes down. For the most recent crash, all the pages that are hit right before the site went down seem to be standard render-to-response operations using templates that seem pretty straightforward and work well the rest of the time. The requests right before the crash do not seem to take longer according to timelog, and I haven't been able to replicate the crashes intentionally via load testing.
Webfaction says that it isn't a case of overrunning our allowed memory usage, or else they would have notified us. One thing to note is that the database is not being restarted (just the app/Apache) when we bring the site back up.
How would you go about investigating this type of recurring issue? It seems like there must be a line of code somewhere that's hanging - do you have any suggestions about a process for finding it?
I once had some issues like this, and it basically boiled down to my misunderstanding of thread-safety within Django middleware. Django middleware is, I believe, instantiated once and shared among all threads, and my threads were thrashing the values set on a custom middleware class I had. My solution was to rewrite my middleware so it did not use instance or class attributes that changed, and to switch the critical parts of my application away from threads entirely in my uWSGI server, as threading seemed to be an overall performance downside for my app. Threaded uWSGI setups seem to work best when you have views that may complete at different intervals (some long-running views and some fast ones).
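To illustrate the kind of change I mean, here is a simplified sketch (old-style middleware with made-up names, not my actual code): the broken version stores per-request state on the shared middleware instance, while the fixed version keeps it on the request.
import time

# Problematic: middleware objects are created once and shared by every worker thread,
# so a mutable instance attribute becomes a race between concurrent requests.
class BadTimingMiddleware(object):
    def process_request(self, request):
        self.start_time = time.time()              # shared across threads!

    def process_response(self, request, response):
        response['X-Duration'] = str(time.time() - self.start_time)
        return response


# Safer: keep per-request state on the request object itself.
class TimingMiddleware(object):
    def process_request(self, request):
        request._start_time = time.time()          # private to this request

    def process_response(self, request, response):
        start = getattr(request, '_start_time', None)
        if start is not None:
            response['X-Duration'] = str(time.time() - start)
        return response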
Since you can't really describe what the failure conditions are until you can replicate the crash, you may need to force the situation with ab (Apache Benchmark). If you don't want to do this against your production site, you might replicate the site on a subdomain. Warning: ab can beat the ever-loving crap out of a server, so RTM. You might also want to give the WF admins a heads-up about what you are going to do.
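For example (the URL and numbers are placeholders; start small and ramp up):
# 2000 requests total, 50 concurrent, against a staging copy of the site
ab -n 2000 -c 50 http://staging.yoursite.example/some-heavy-page/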
Update for comment:
I was suggesting using the exact same machine so that the subdomain name was the only difference. Given that you used a different machine there are a large number of subtle (and not so subtle) environmental things that could tweak you away from getting the error to manifest. If the new machine is OK, and if you are willing to walk away from the problem without actually solving it, you might simply make it your production machine and be happy. Personally I tend to obsess about stuff like this, but then again I'm also retired and have plenty of time to play with my toes. :-)