Is there a way I can use more instances than the number of external IPs I have? - google-compute-engine

I have more than enough CPUs and memory to launch 100 instances, but only 30 external IP addresses. Is there a way I can launch more instances despite that?

Chances are you don't need that many IPs at all. Only in very specific scenarios would you need all of your nodes to be publicly accessible.
If you need that many instances, simply create them without public IPs. Then create a NAT gateway so your instances can use it to reach the outside world from your private network (see the sketch below for the classic route-based setup).
You will be able to cover 99% of usage scenarios this way. If you really do need more external IPs and you have used all of your ephemeral ones, you can request an increase through the quota request form.
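For reference, here is a rough sketch of the classic route-based NAT setup with gcloud; the instance name, zone and tags are placeholders, and the details may differ for your network:
# A small instance that forwards traffic for the others (configure IP
# masquerading on it, e.g. iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE).
gcloud compute instances create nat-gateway \
    --zone us-central1-a \
    --can-ip-forward \
    --tags nat-gateway
# Send internet-bound traffic from instances tagged "no-ip" through it.
gcloud compute routes create no-ip-internet-route \
    --network default \
    --destination-range 0.0.0.0/0 \
    --next-hop-instance nat-gateway \
    --next-hop-instance-zone us-central1-a \
    --tags no-ip \
    --priority 800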

I guess it depends on what you want to do, but the gcloud compute instances create tool has a --no-address flag which will let you launch an instance with no external IP address. Have a look at gcloud compute instances create --help to see if you think that would be useful.
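For example, something along these lines launches an instance with no external IP (the instance name and zone here are just placeholders):
# create an instance without an external IP address
gcloud compute instances create my-instance \
    --zone us-central1-a \
    --no-address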
If you wanted to use the API or instance templates, I think just leaving out the accessConfigs part of the networking section of the request body will do what you need. Compare this:
"networkInterfaces": [
{
"network": "https://www.googleapis.com/compute/v1/projects/your-project-here/global/networks/default",
"accessConfigs": [
{
"name": "External NAT",
"type": "ONE_TO_ONE_NAT"
}
]
}
Where I used the default option of "Ephemeral" for the external IP in the Google Cloud Developers console, with this:
"networkInterfaces": [
{
"network": "https://www.googleapis.com/compute/v1/projects/your-project-here/global/networks/default"
}
]
Where I selected "None" as the External IP.
To see what the API request body would look like, there is a "View Equivalent REST" link just below the Create button in the console; it can be really useful for building templates and the like.
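If you end up using instance templates, the same flag should work there as well; a minimal sketch (the template name is a placeholder):
# instance template whose instances get no external IP
gcloud compute instance-templates create no-ip-template --no-address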

Related

MySQL InnoDB Cluster config - configure node address

I'm setting up an InnoDB Cluster using mysqlsh. This is in Kubernetes, but I think this question applies more generally.
When I use cluster.configureInstance() I see messages that include:
This instance reports its own address as node-2:3306
However, the nodes can only find each other through DNS at an address like node-2.cluster:3306. The problem comes when adding instances to the cluster; they try to find the other nodes without the qualified name. Errors are of the form:
[GCS] Error on opening a connection to peer node node-0:33061 when joining a group. My local port is: 33061.
It is using node-n:33061 rather than node-n.cluster:33061.
If it matters, the "DNS" is set up as a headless service in Kubernetes that provides consistent addresses as pods come and go. It's very simple, and I named it "cluster" to create addresses of the form node-n.cluster. I don't want to cloud this question with details I don't think matter, though; surely other configurations also require the instances in the cluster to find each other through DNS.
I thought that setting localAddress when creating the cluster and adding the nodes would solve the problem. Indeed, after I added that to the createCluster options, I can look in the database and see
| group_replication_local_address | node-0.cluster:33061 |
After I create the cluster and look at the topology, it seems that the local address setting has no effect whatsoever:
{
  "clusterName": "mycluster",
  "defaultReplicaSet": {
    "name": "default",
    "primary": "node-0:3306",
    "ssl": "REQUIRED",
    "status": "OK_NO_TOLERANCE",
    "statusText": "Cluster is NOT tolerant to any failures.",
    "topology": {
      "node-0:3306": {
        "address": "node-0:3306",
        "memberRole": "PRIMARY",
        "mode": "R/W",
        "readReplicas": {},
        "replicationLag": null,
        "role": "HA",
        "status": "ONLINE",
        "version": "8.0.29"
      }
    },
    "topologyMode": "Single-Primary"
  },
  "groupInformationSourceMember": "node-0:3306"
}
And adding more instances continues to fail with the same communication errors.
How do I convince each instance that the address it needs to advertise is different? I will try other permutations of the localAddress setting, but it doesn't look like it's intended to fix the problem I'm having. How do I reconcile the address the instance reports for itself with the address that's actually useful for other instances to find it?
Edit to add: Maybe it is a Kubernetes thing? Or a Docker thing at any rate. There is an environment variable set in the container:
HOSTNAME=node-0
Does the containerized MySQL use that? If so, how do I override it?
Apparently this value has to be set at startup. For my setup, passing
--report-host=${HOSTNAME}.cluster
when starting the MySQL instances resolved the issue.
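For illustration, in a shell entrypoint that boils down to something like this (other mysqld options omitted; HOSTNAME is set by the container runtime, e.g. node-0):
# advertise the DNS name that other members can actually resolve
exec mysqld --report-host="${HOSTNAME}.cluster"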
Specifically for Kubernetes, an example is at https://github.com/adamelliotfields/kubernetes/blob/master/mysql/mysql.yaml

REST API which needs multiple different resources?

I'm designing a REST API for running jobs on virtual machines in different domains (Active Directory domains; virtual machines with the same name can exist in different domains). So far I have:
/domains
/domains/{dname}
/domains/{dname}/vms
/domains/{dname}/vms/{cname}
And for jobs, which will be stored in a database
/jobs
/jobs/{id}
Now I need to add a new API for the following user stories.
As a user, I want to run a job (just job definition, not the stored job) on an existing VM.
As a user, I want to run a job (just job definition, not the stored job) on VM named x, which may or may not exist. The system should create the VM if x doesn't exist.
How should the api be designed?
Approach 1:
PUT /domains/{dname}
{ "state": "running_job", "vm": "vm_name", "job_definition": { .... } }
Approach 2:
PUT /domains/{dname}/vms/{vm_name}
{ "state": "running_job", "job_definition": { .... } }
Approach 3:
PUT /jobs
{ "state": "running", "domain": "name", "vm": "vm_name", "job_definition": { .... } }
Approach 4: create a new resource, saying scheduler,
PUT /scheduler
{ "domain": "name", "vm": "vm_name", "job_definition": { .... } }
(what if I need to update some attributes of scheduler in the future?)
In general, how should you design a REST API URL that involves multiple resources?
How should the api be designed?
How would you design this on the web?
There would be an HTML form, right? With a bunch of input controls to collect information from the operator about which job to use, which VM to target, and so on. The operator would fill in the details and submit the form. The browser would then use the form to create the appropriate HTTP request to send to the server (the request-target being computed from the form metadata).
Since the server gets to decide what the request-target should be (benefits of using hypertext), it can choose any resource identifier it wants. In HTTP, a successful unsafe request happens to invalidate previously cached responses with the same request target, so one possible strategy is to consider which is the most important resource changed by successfully handling the request, and use that resource as the target.
In this specific case, we might have a resource that represents the job queue (e.g. /jobs), and what we are doing here is submitting a new entry to the queue, so we might expect:
POST /jobs HTTP/1.1
....
If the server, in its handling of the request, also creates new resources for the specific job, then those would be indicated in the response:
HTTP/1.1 201 Created
Location: /jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef
....
Could you instead just use PUT?
PUT /jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef HTTP/1.1
???
Yes, if (a) the client knows what spelling to use for the request-target and (b) the client knows what the representation of the resource should look like.
Which unsafe HTTP method you use in the messages that trigger your business activities doesn't actually matter very much. You need to use the methods correctly (so that general-purpose HTTP connectors don't get misled).
In particular, the important thing to remember about PUT is that the request body should be a complete representation of the resource - in other words, the request body for a PUT should match the response body of a GET. Think "save file"; we've made local edits to our copy of a resource, and we send back a copy of the entire document.
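As a rough sketch with curl (the host name is made up; the job id is the one from the example above), the PUT flow is a read-modify-write of the whole document:
# fetch the complete representation
curl https://api.example.com/jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef -o job.json
# ... edit job.json locally ...
# send the entire edited document back
curl -X PUT https://api.example.com/jobs/931a8a02-1a87-485a-ba5b-dd6ee716c0ef \
     -H "Content-Type: application/json" \
     --data @job.json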

ARM template to create a VM using an existing VNet and subnet

Recently I started learning/working with ARM templates and JSON, so I'm a complete newbie to this. I've been asked to make a template that creates a virtual machine, selecting an existing virtual network and subnet within a subscription.
Everything works fine, except that whenever I run the deployment, the template creates a new VNet and subnet with randomized names instead of letting me pick an existing one (the VM itself is created correctly, though).
I used the https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-rhel/azuredeploy.json quickstart template as a base and added a few lines (below) to let me type the name of my VNet and subnet, as the template already does for the VM name, but it keeps creating new ones even though I type the names correctly.
The lines I added to the code in the Parameters section are:
"virtualNetworkName": {
"type": "string",
"metadata": {
"description": "VNet to which the VM will connect."
}
},
"subnetName": {
"type": "string",
"metadata": {
"description": "Subnet to which the VM will connect."
}
}
Thank you in advance for your time!
To create a VM with an existing VNet based on the quickstart template you used, you only need to delete the virtual network resource from the resources block, the dependency on it, and all the variables about the VNet and subnet except the variable subnetRef. Then change that variable to use your parameters like this, if the VNet is in the same resource group as the VM:
"subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
If the existing VNet is in another resource group but in the same subscription, then the variable subnetRef should be changed like this:
"subnetRef": "[resourceId('otherResourceGroup', 'Microsoft.Network/virtualNetworks/subnets', parameters('virtualNetworkName'), parameters('subnetName'))]",
With these changes, the template will use the existing VNet and subnet instead of creating new ones.
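If you deploy with the Azure CLI, for example, passing the existing names as parameters might look like this (the resource group, template file and names are placeholders):
# deploy the template against an existing VNet and subnet
az deployment group create \
    --resource-group my-rg \
    --template-file azuredeploy.json \
    --parameters virtualNetworkName=existing-vnet subnetName=existing-subnet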
Take a look at this sample:
https://github.com/Azure/azure-quickstart-templates/tree/master/100-marketplace-sample
It shows how you can use a pattern for new/existing/none on resources in a template.

Consul 0.8 ACL migration - how to migrate

TL;DR
How do I migrate my existing 0.7.3 ACL permissions to the new pre-0.8 ACL behavior (acl_enforce_version_8)?
Current setup
I am currently running an ACL-enabled Consul 0.7.3 stack.
With Consul 0.8, ACLs will finally also cover services and nodes, so that nodes/services are no longer shown to anonymous users. This is exactly what I need. Today I tried to enable the new "pre-0.8" ACL behavior using https://www.consul.io/docs/agent/options.html#acl_enforce_version_8
After doing so, my nodes could no longer authenticate against the master (if authentication is the problem at all).
I run the Consul network with gossip enabled, and I have configured an acl_master_token:
{"acl_master_token": "<token>"}
and a token for the agents:
{"acl_token": "<token>"}
which all agents use / are configured with.
I have these ACL defaults:
{
  "acl_datacenter": "stable",
  "acl_default_policy": "deny",
  "acl_down_policy": "deny"
}
and my Consul config looks like this:
{
  "datacenter": "stable",
  "data_dir": "/consul/data",
  "ui": true,
  "dns_config": {
    "allow_stale": false
  },
  "log_level": "INFO",
  "node_name": "dwconsul",
  "client_addr": "0.0.0.0",
  "server": true,
  "bootstrap": true,
  "acl_enforce_version_8": true
}
What happens
When I boot, I cannot see my nodes/services using my token at all, and the nodes/agents can no longer register with the master.
Question
What exactly is needed to get the following:
All agents can see all nodes and all services and all KVs
Anonymous users see nothing: no KV, no services, no nodes (that's what becomes possible with 0.8)
I looked at https://www.consul.io/docs/internals/acl.html "ACL Changes Coming in Consul 0.8" but I could not wrap my head around it. Should I now use https://www.consul.io/docs/agent/options.html#acl_agent_master_token instead of acl_token?
Thank you for any help. I guess I will not be the only one on this migration path with this particular interest; a lot of people will be interested in this. You would be helping all of them :)
It looks like the new node policy is preventing the nodes from registering properly. This should fix things:
Configure your Consul servers with an acl_agent_token that has a policy that can write to any node, like this: node "" { policy = "write" }.
Configure your Consul agents with a similar token to keep things open, or give them a token with a more specific policy that only lets them write to some allowed node-name prefix.
Note this gets set as the acl_agent_token which is used for internal registration operations. The acl_agent_master_token is used as kind of an emergency token to use the /v1/agent APIs if there's something wrong with the Consul servers, but it only applies to the /v1/agent APIs.
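If you manage tokens through the legacy ACL HTTP API, creating such an agent token looks roughly like this (the master token and agent address are placeholders):
# returns {"ID": "<new acl_agent_token>"}
curl -X PUT "http://127.0.0.1:8500/v1/acl/create?token=<acl_master_token>" \
     -d '{"Name": "agent token", "Type": "client", "Rules": "node \"\" { policy = \"write\" }"}'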
For "all agents can see all nodes and all services and all KVs" you'd add node read privileges to whatever token you are giving to your agents via the acl_token, so you'd add a policy like:
node "" { policy = "read" }
service "" { policy = "read" }
key "" { policy = "read" }
Note that this allows anyone with access to the agent's client interface to read all these things, so you want to be careful with what you bind to (usually only loopback). Or don't set acl_token at all and make callers pass in a token with each request.
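For that last option, callers just attach their own token to each request, for example:
# query the catalog with an explicit token instead of the agent default
curl "http://127.0.0.1:8500/v1/catalog/nodes?token=<your-acl-token>"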

Receiving JSON POST requests from HPKP error respondents

I'm experimenting with setting up HPKP (https://scotthelme.co.uk/hpkp-http-public-key-pinning/) on my web server. One of its options is to specify an error-reporting URI in the header, to which clients send error notices in the form of a JSON POST request structured like this:
{
  "date-time": date-time,
  "hostname": hostname,
  "port": port,
  "effective-expiration-date": expiration-date,
  "include-subdomains": include-subdomains,
  "noted-hostname": noted-hostname,
  "served-certificate-chain": [
    pem1, ... pemN
  ],
  "validated-certificate-chain": [
    pem1, ... pemN
  ],
  "known-pins": [
    known-pin1, ... known-pinN
  ]
}
My question is how can I set something up within Linux to listen for the JSON POSTs on port 80 (or 443)?
Does anything exist for this already? Thanks, everyone, for your help.
Scott Helme, whose link you included, also runs this service, which takes care of it for you:
https://report-uri.io
Alternatively, if you want to try it out yourself, any web scripting language (CGI via Perl, PHP, etc.) should be able to listen for a POST request and dump it out to a log file. Personally I use a NodeJS service, but anything will do. I'm not aware of any scripts people have shared, but that's probably because there's no need, as it's so simple (listen for the POST request, print out the results).
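For a quick throwaway experiment you can even get away with netcat just to see what arrives (traditional netcat syntax; BSD netcat drops the -p, and no HTTP response is sent back, so clients may retry):
# append every incoming request, JSON body included, to a log file
while true; do nc -l -p 8080 >> hpkp-reports.log; done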
Also you cannot listen on port 443 on the same domain as the site you are monitoring as the report also uses HPKP so won't be able to connect, since the only time you want to report is when you can't connect! Would work fine in report only mode though.
I know you're only experimenting but I would caution to be very careful with HPKP as its very easy to brick your site with this, and it adds a lot of extra considerations to certificate renewal. Personally I don't think it's that great as the risk it introduces, to me anyway, far out weigh the risk it mitigates for most sites. More thoughts of that from me here: https://www.tunetheweb.com/security/http-security-headers/hpkp/#downsides