This is the first time I have asked a question on here.
I have an expanding set of services hosted on Google Cloud Platform.
The initial round was set up in a very stressed situation, and I am now refactoring.
I currently have 3 (EDIT: no, that's 4) microservice VM hosts, which will all be serving HTTPS soon (and so each need their own IP). In addition there is a list of test boxes, as we are developing pieces; the test boxes do not need HTTPS.
Question 1) Does anyone have a workaround to get multiple static IPs per host? This is why I have large numbers of hosts.
Question 2) How can I have more than a /29 of static IPs (e.g. 8 or more)? This is corporate work; we will pay for services.
Question 3) According to the Google API docs, I may deallocate static IPs, but I cannot find an implementation for this. Do you know of one? Having built systems like this in the past, I know there is no technical reason why there should not be an API for this.
Bonus Q, Question 4) Is there a mechanism to serialise a saved hard disk out of Google Cloud? This would make my CEO happy.
An ideal response is a relevant "Fine Manual" to read.
I work on GMT time. All hosts are Linux, which is probably not relevant. Although a developer, I can admin most things Linux.
UPDATE: if you delete an address via gcloud compute addresses delete $name --region europe-west1 but don't remove the IP from the instance itself, the address keeps working but is no longer static, which is the objective of Q3.
You can find the answers to your questions below:
It is not directly possible to assign multiple external IPs to a single instance. One workaround is to create multiple forwarding rules pointing to the same target pool containing that instance.
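If it helps, here is a rough sketch of that workaround with gcloud (the names my-second-ip, my-pool, my-instance and the zone are hypothetical; check the flags against your gcloud version):

# Reserve an extra static address
gcloud compute addresses create my-second-ip --region europe-west1
# Create a target pool and put the existing instance in it
gcloud compute target-pools create my-pool --region europe-west1
gcloud compute target-pools add-instances my-pool --instances my-instance --zone europe-west1-b
# Point a forwarding rule at the pool using the reserved address
gcloud compute forwarding-rules create my-second-rule --region europe-west1 --address my-second-ip --target-pool my-pool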
It is currently not possible to reserve a whole block of IP addresses, as addresses are assigned to instances randomly from the pool of available IPs.
If you have reserved static IPs in your project, you can release an IP from one instance and assign it to another.
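As a sketch, assuming the default access config name and a documentation address (instance names, zone and address here are hypothetical):

# Detach the external address from the old instance
gcloud compute instances delete-access-config old-instance --access-config-name "external-nat" --zone europe-west1-b
# Attach the reserved address to the new instance
gcloud compute instances add-access-config new-instance --access-config-name "external-nat" --address 203.0.113.10 --zone europe-west1-b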
There is no direct way to do that; however, one workaround I can think of is to use the dd tool to clone your disk as a .raw image and save it to Cloud Storage. This clone can then be used to create disks outside your project.
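A minimal sketch of that workaround, assuming the source disk is attached as /dev/sdb and a bucket named my-export-bucket exists (both are assumptions):

# Clone the (unmounted) disk to a raw image
sudo dd if=/dev/sdb of=disk.raw bs=4M conv=sparse
# Compress and copy it out to Cloud Storage
tar -czf disk.raw.tar.gz disk.raw
gsutil cp disk.raw.tar.gz gs://my-export-bucket/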
I hope that helps.
I am setting up AWS CDK for a new stack on AWS, and the docs essentially say "use the root account to start up, but then set up a policy for a new account".
However, using their recommended assume/* policy almost immediately leads to errors when trying to cdk deploy. So what is a mechanism for determining a policy that is useful and applicable to deploying a full CloudFormation stack?
For one example use case: when setting up continuous integration to deploy multiple stacks, how can we avoid giving it the keys to the kingdom?
Since I am part of the AWS Community Builders community, I asked there as well. Suffice it to say that this is a known problem, and not a trivial one to solve. I will try to distill what I learned into an answer here in broad strokes:
Set up permission boundaries. These can forbid an agent from creating new users and privilege escalation. https://aws.amazon.com/blogs/devops/secure-cdk-deployments-with-iam-permission-boundaries/
Walk your shots / walk your permissions. In other words, grant very few permissions, try to deploy, find where additional permissions are needed, add those, and try to deploy again; rinse and repeat. This is most applicable if you expect the services of a stack to rarely change.
Draft a permission policy of "allow all, then deny in particular". In other words, set a policy on the deploying agent that allows * access to all services, and then deny permission to create users, change other users, etc. (a rough sketch follows this list). Contained within this approach is: bootstrap, then customize https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html#bootstrapping-customizing
Consider a multi-account strategy, where you would add a new AWS account for a different project. Because AWS billing is usage-based, they allow multi-accounting in this way, where other services might have policies against multiple accounts. Control Tower can help with this.
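Here is a rough sketch of the allow-all-then-deny approach mentioned above (the policy name and the exact list of denied actions are assumptions; tune them to your own threat model):

# Hypothetical deployer policy: allow everything, then deny user creation
# and the obvious privilege-escalation paths.
cat > deployer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "*", "Resource": "*" },
    { "Effect": "Deny",
      "Action": ["iam:CreateUser", "iam:DeleteUser", "iam:PutUserPolicy",
                 "iam:AttachUserPolicy", "iam:CreateAccessKey"],
      "Resource": "*" }
  ]
}
EOF
aws iam create-policy --policy-name cdk-deployer --policy-document file://deployer-policy.json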
Basically, this is probably going to be an incredibly generic and poorly crafted question. I apologise in advance for that and hope you can look past it and potentially offer some solutions/help.
I am looking at starting a new project which, I guess, functions similarly to Shopify. Users will pay a monthly fee and then get their own website which has a store-style thing on it.
I am comfortable with most aspects of making this; however, the one thing I'm not completely sure about is what to do if they want a custom domain (which I assume most customers would). Based on my experience with services such as Shopify and Tictail, to do this I am going to have to get them to change their nameservers to my nameservers. After that, I'm not completely sure how it will function or how to set it up. The files for all of the sites are going to be pretty much exactly the same, so I don't need much to change there.
So basically my main question is: how would I develop it to automatically host certain content when someone sets their nameservers to my nameservers? I would like it to be completely automatic if possible, but I don't mind if there is a little manual input.
I'm super sorry if the question isn't worded properly or if it's confusing as I've never developed something like this. A simple point in the right direction would be much appreciated as I'm not too sure where to start with this.
Thanks
It depends on the server technology you use to serve such user-specific hosts. If I understand you properly, you are looking to get something like:
# Your service runs here:
http://yourdomain.com
# For the user account (user specific application)
http://{unique_username}.yourdomain.com/
# Then you would have (for N users)
http://username0.yourdomain.com/
http://username1.yourdomain.com/
...
http://usernameN.yourdomain.com/
The way to avoid DNS hijacking is to explicitly specify the server name in the server's host configuration. There are many servers around which provide virtual host configurations to allow many different domain names on the same IP address.
As an example, in nginx this can be done using virtual hosts (server blocks). In your case you would need to create them programmatically. In order to do so, a file must be added to the folder /etc/nginx/sites-available. This could be a file called /etc/nginx/sites-available/username0 with this content:
server {
listen 80;
server_name username0.yourdomain.com;
root /path/to/app/;
...
}
So for your solution, you would create a file per customer user. In order to activate a newly created virtual host (server block), link it into the folder /etc/nginx/sites-enabled:
ln -s /etc/nginx/sites-available/username1 /etc/nginx/sites-enabled/username1
sudo service nginx reload
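Putting it together, a minimal provisioning sketch (the script name, paths and domain are placeholders for your own setup):

#!/bin/bash
# add_customer.sh -- hypothetical sketch: generate and enable a server
# block for one customer. Usage: ./add_customer.sh username0
USERNAME="$1"
cat > "/etc/nginx/sites-available/${USERNAME}" <<EOF
server {
    listen 80;
    server_name ${USERNAME}.yourdomain.com;
    root /path/to/app/;
}
EOF
ln -s "/etc/nginx/sites-available/${USERNAME}" "/etc/nginx/sites-enabled/${USERNAME}"
sudo service nginx reload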
Read more about DNS hijacking here, and have a look at virtual host implementations, like the nginx server blocks shown above or Apache virtual hosts.
Good luck!
The Openshift 'all-in-one' Vagrant box uses xip.io. The security team at my company has relayed to us that using 'xip.io' for a wildcard DNS could cause some security concerns. So given that 'xip.io' is not an option, how can I get this set up?
We ran into a similar issue at my company. The best answer, which is a bit dire, is that you'll need to set up your own custom DNS. Sorry to say, it is a bit annoying to do, but not all that bad. Use this link; it should give you some guidance.
Basically, you'll need your /etc/dnsmasq.conf file to look like:
# Reverse DNS record for master
host-record=master.example.com,192.168.1.100
# Wildcard DNS for OpenShift Applications - Points to Router
address=/apps.example.com/192.168.1.100
The article goes into great detail. I'm not sure how network savvy you are, but if you're not, then I'd suggest roping in one of your ops guys to assist with this. Without a relatively good understanding of networking, setting this up would be quite difficult.
I understand that you can do this without requiring your own DNS.
The quickest way is to manually add the required entries to /etc/hosts on your host system, mapping them to the IP address that the xip.io address would map to.
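For example (the IP address and hostnames below are only illustrative; use whatever your xip.io names actually resolve to):

# /etc/hosts
10.2.2.2   master.10.2.2.2.xip.io
10.2.2.2   myapp.apps.10.2.2.2.xip.io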
Alternatively, set up dnsmasq with something like the following:
$ cat /usr/local/etc/dnsmasq.conf
address=/.10.2.2.2.xip.io/10.2.2.2
address=/.ain1/10.2.2.2
address=/.10.1.2.2.xip.io/10.1.2.2
address=/.cdk/10.1.2.2
$ ls /etc/resolver/
ain1 ain1-xip cdk cdk-xip ddns
$ cat /etc/resolver/cdk-xip
domain 10.1.2.2.xip.io
nameserver 127.0.0.1
This is cut and pasted from elsewhere, and I don't use the CDK myself, so I'm not sure about the IP addresses here, but I understand this shouldn't require you to set up a separate DNS. The 'ain1' entry is for the OpenShift Origin all-in-one VM, which is equivalent to the CDK for OpenShift but uses the latest Origin upstream version.
I would like to expand/shrink the number of nodes (kubelets) used by the Kubernetes cluster based on resource usage. I have been looking at the code and have some idea of how to implement it at a high level.
I am stuck on 2 things:
What would be a good way to access the cluster metrics (via Heapster)? Should I try to use kube-dns to find the Heapster endpoint and query the API directly, or is there some other way? Also, I am not sure how to use kube-dns to get the Heapster URL in the former case.
The rescheduler which expands/shrinks the number of nodes will need to kick in every 30 minutes. What would be the best way to do this? Is there some interface in the code which I can use for it, or should I write a code segment which gets called every 30 minutes and put it in the main loop?
Any help would be greatly appreciated :)
Part 1:
What you said about using kubedns to find heapster and querying that REST API is fine.
You could also write a client interface that abstracts the interface to heapster -- that would help with unit testing.
Take a look at this metrics client:
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/podautoscaler/metrics/metrics_client.go
It doesn't do exactly what you want: it gets per-Pod stats instead of per-cluster or per-node stats. But you could modify it.
In the function getForPods, you can see the code that resolves the heapster service and connects to it here:
resultRaw, err := h.client.Services(h.heapsterNamespace).
ProxyGet(h.heapsterService, metricPath, map[string]string{"start": startTime.Format(time.RFC3339)}).
DoRaw()
where heapsterNamespace is "kube-system" and heapsterService is "heapster".
That metrics client is part of the "horizontal pod autoscaler" implementation. It is solving a slightly different problem, but you should take a look at it if you haven't already. It is described here: https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md
FYI: The Heapster REST API is defined here:
https://github.com/kubernetes/heapster/blob/master/docs/model.md
You should poke around and see if there are node-level or cluster-level CPU metrics that work for you.
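One quick way to poke around is through the apiserver proxy. A sketch, assuming Heapster runs as the "heapster" service in kube-system (the model paths and the metric name cpu-usage are taken from the docs above and may differ by Heapster version):

# Open a local proxy to the apiserver
kubectl proxy --port=8001 &
# List the nodes known to the Heapster model
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/nodes/
# Node-level CPU usage for a (hypothetical) node name
curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/nodes/node-1/metrics/cpu-usage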
Part 2:
There is no standard interface for shrinking nodes. It is different for each cloud provider. And if you are on-premises, then you can't shrink nodes.
Related discussion:
https://github.com/kubernetes/kubernetes/issues/11935
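As one provider-specific illustration: on GCE, if your nodes live in a managed instance group (group name, size and zone here are hypothetical), resizing it is a single command:

# Grow or shrink the node pool by resizing the managed instance group
gcloud compute instance-groups managed resize my-node-group --size 5 --zone europe-west1-b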
Side note: among Kubernetes developers, we typically use the term "rescheduler" for something that rebalances pods across machines by removing a pod from one machine and creating the same kind of pod on another machine. That is a different thing from what you are talking about building. We haven't built a rescheduler yet, but there is an outline of how to build one here:
https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/rescheduler.md
I use some XBee (S2) modules with the ZB stack for mesh networking evaluation, so a multi-hopping environment has to be created. The problem is that the firmware handles association by itself, and there is no way deeper into the stack than the API provides. To force the path of the data without disturbing the routing mechanism I was trying to measure, I had to put the modules out of each other's reach. Getting only the next hop to associate isn't that easy. I used the lowest output power level, but the distance for the test setup is too large and the RF characteristics of the environment change unpredictably.
Hence my question: does anyone have experience with this issue?
Regards, Toby
I don't think it's possible through software with coordinators/routers. You could change the Node Join Time (ATNJ) to force a new router to join through a particular router (disable Node Join on all nodes except one), but that would only affect joining. Once joined to the network, the router will discover the other nodes within range.
You could possibly do it with sleepy end devices. You can use the ATNJ trick to force an end device to join through a single router, and it will always send its messages to that router. But you won't get that many hops -- end device sends to its parent router, which sends to the target's parent router, which sends to the target end device.
You'll likely need to physically limit the range of the radios to force hopping, as demonstrated in the video you linked of Digi's K-Node test equipment with a network of over 1000 radios. They're putting the radios in RF-shielded boxes and using wired antenna connections with software-controlled attenuators to connect the modules to each other.
If you have XBee modules with the U.fl or RPSMA connector, and don't connect an antenna, it should significantly reduce the range of the module. Otherwise, with a wire whip or integrated PCB antenna, you need to put each radio in some sort of box that attenuates the signal. Perhaps someone else can offer advice on materials that will reduce the signal's range without completely blocking it.
ZigBee nodes try to automatically form an ad-hoc network. That is why they join the network through the strongest connection (best network coverage) available at that moment. These modules are designed in such a way that you do not have to care much about establishing reliable communication; they will solve networking problems most of the time.
What you want to do is somehow force a different situation: you want to create a specific topology in order to get some multi-hopping. That is not the normal behavior of the nodes, but you can still get what you want with some of the AT commands.
The mentioned command NJ should work for you. This command locks joining after a certain time (in seconds). Consider a simple ZigBee network with three nodes: one Coordinator, one Router and one End Device. Switch on the Coordinator with NJ set to, say, two minutes. Then quickly switch on the Router, so it can associate with the Coordinator within those two minutes. After the two minutes, the Coordinator will be locked and will not accept more joins. At that moment you can start the End Device, which will necessarily have to associate with the Router. This way, you will see that messages between the End Device and the Coordinator go through the Router, as you wanted.
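For reference, a rough serial-terminal session for setting NJ on the Coordinator (values are illustrative; NJ takes a hexadecimal number of seconds, so 0x78 = 120 seconds = two minutes):

+++        <- enter AT command mode, wait for "OK"
ATNJ 78    <- allow joins for 0x78 = 120 seconds
ATWR       <- optionally write the setting to flash
ATCN       <- exit command mode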
You can get a bigger network by applying this idea several times, without needing to play with the modules' antennas. You can control the AT parameters remotely (i.e. from a computer connected to the Coordinator), so you can use some code to help you initialize the network.