I'm trying to create a GCE instance "with a container" (as supported by gcloud CLI) by using POST https://www.googleapis.com/compute/v1/projects/{project}/zones/{zone}/instances.
How can I pass the container image url in the request payload?
If you are trying to set the "Machine Type", then you can specify the URL following the syntax mentioned in this document.
From the document:
Full or partial URL of the machine type resource to use for this instance, in the format: zones/zone/machineTypes/machine-type. This is provided by the client when the instance is created. For example, the following is a valid partial url to a predefined machine type:
zones/us-central1-f/machineTypes/n1-standard-1
To create a custom machine type, provide a URL to a machine type in the following format, where CPUS is 1 or an even number up to 32 (2, 4, 6, ... 24, etc), and MEMORY is the total memory for this instance. Memory must be a multiple of 256 MB and must be supplied in MB (e.g. 5 GB of memory is 5120 MB):
zones/zone/machineTypes/custom-CPUS-MEMORY
For example: zones/us-central1-f/machineTypes/custom-4-5120. For a full list of restrictions, read the Specifications for custom machine types.
If, instead, you want to create a container cluster, then the API method mentioned in this link can help you.
It doesn't seem that there's an equivalent REST API for gcloud compute instances create-with-container ...; however, as suggested by @user10880591's comment, Terraform can help. Specifically, the container-vm module handles the generation of the metadata required for this kind of action.
A usage example can be found here.
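For reference, what that module generates boils down to setting a gce-container-declaration metadata item on a Container-Optimized OS instance. Below is a minimal TypeScript sketch of the instances.insert call under that assumption; the metadata key and YAML shape are modeled on the module's output rather than a documented API, and the project, zone, names, and token are placeholders.

const project = "my-project";      // hypothetical project id
const zone = "us-central1-f";      // hypothetical zone
const accessToken = "ya29....";    // OAuth 2.0 token with the compute scope

// Container spec as YAML, in the shape the container-vm module emits
// (an assumption, not an official schema).
const containerDeclaration = [
  "spec:",
  "  containers:",
  "    - name: my-app",
  "      image: gcr.io/my-project/my-app:latest",
  "  restartPolicy: Always",
].join("\n");

const body = {
  name: "containervm-test",
  machineType: `zones/${zone}/machineTypes/n1-standard-1`,
  disks: [{
    boot: true,
    initializeParams: {
      // The declaration is only honored by Container-Optimized OS images.
      sourceImage: "projects/cos-cloud/global/images/family/cos-stable",
    },
  }],
  networkInterfaces: [{ network: "global/networks/default" }],
  metadata: {
    items: [{ key: "gce-container-declaration", value: containerDeclaration }],
  },
};

const res = await fetch(
  `https://www.googleapis.com/compute/v1/projects/${project}/zones/${zone}/instances`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  },
);
console.log(res.status); // 200 with an Operation resource on success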
I need to get the response code to use in scripts. For example, when I run this command:
oci compute instance update --instance-id ocid.of.instance --shape-config '{"OCPU":"2"}' --force
I get this message:
ServiceError:
{
"code": "InternalError",
"message": "Out of host capacity.",
"opc-request-id": "3FF4337F4ECE43BBB4B8E52524E80247/37CB970D371A9C6BB01DFB23E754FE5B/18DFE9AE75B88A77AB3A1FBEBD3B191B",
"status": 500
}
In this case, I got the error message and a status code of 500.
But if the command works, it outputs a full JSON of my instance's parameters, and I can only see a line with response code 200 in debug mode.
Is there a way to show only the response code?
Currently the OCI CLI does not provide the HTTP response code directly in the response. The response will contain either the service response on success or a service error message on failure.
Can you explain how you are using the HTTP response code in your script? Could you not use the command's exit code (non-zero on error) to determine the error case?
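If the exit code is enough, here is a minimal sketch of checking it from Node.js; it assumes the CLI exits non-zero when a ServiceError occurs, and reuses the placeholder OCID from the question.

import { execFile } from "node:child_process";

execFile(
  "oci",
  [
    "compute", "instance", "update",
    "--instance-id", "ocid.of.instance",
    "--shape-config", '{"OCPU":"2"}',
    "--force",
  ],
  (error, stdout, stderr) => {
    if (error) {
      // Non-zero exit: stderr carries the ServiceError JSON shown above.
      console.error(`update failed (exit code ${error.code}):`, stderr);
    } else {
      // Success: stdout holds the full JSON of the instance's parameters.
      console.log("update succeeded");
    }
  },
);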
The error "Out of host capacity" means the selected shape does not have any available servers in the selected region and availability domain (AD). Virtual machines (VMs) are dynamically provisioned. If an AD has reached a minimum threshold, new hypervisors (physical servers) will be automatically provisioned.
There may be some occasions where the additional capacity has not finished provisioning before the existing capacity is exhausted, but when retrying in 15 minutes the customer may find the shape they want is available.
Alternatively, a different shape, AD, or region will almost certainly have the capacity needed.
Bare metal instances: Host capacity is ordered on a proactive basis guided by the growth rate of a region. Specialized shapes such as DenseIO do not have as much spare overhead and may be more likely to run out of capacity. Customers may need to try another AD or region.
I'm using the Web Serial API to connect to two different scales. They send the weight data in different ways, so I'm trying to get the serial port metadata (vendorId etc.) from them, because I want to detect which scale is connected. The getInfo() method does not work because it is undefined on the SerialPort object.
[Exposed=(DedicatedWorker,Window), SecureContext]
interface SerialPortInfo {
maplike<DOMString, DOMString?>;
};
This is the interface for the metadata, but I don't even know how to use it.
My source: https://wicg.github.io/serial/#dom-serialportinfo
The method is declared a little strangely in that version of the specification. You can treat the return value as a plain object. If the port is a USB device then it will have usbVendorId and usbProductId properties, which are the metadata you are interested in.
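For example, a minimal sketch of telling the two scales apart; the vendor ids below are made-up placeholders, so log getInfo() once per scale to find the real values (with TypeScript you may need the w3c-web-serial type definitions for navigator.serial):

const SCALE_A_VENDOR_ID = 0x0922; // hypothetical value
const SCALE_B_VENDOR_ID = 0x1446; // hypothetical value

const port = await navigator.serial.requestPort();
const info = port.getInfo(); // plain object; empty for non-USB ports

if (info.usbVendorId === SCALE_A_VENDOR_ID) {
  console.log("Scale A connected");
} else if (info.usbVendorId === SCALE_B_VENDOR_ID) {
  console.log("Scale B connected");
} else {
  console.log("Unknown or non-USB serial port", info);
}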
I'm building an application that stores files into the FIWARE Object Storage. I don't quite understand what the correct way of storing files into the storage is.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows two ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The first version confuses me, because when I first looked at the only Node.js module that works with the Object Storage (repo: fiware-object-storage), it seemed to use the first version. Since the module was making calls to the old (v1.1) API version instead of the presumably newest (v2.0) referenced by the Python example, I'm not sure whether that way of doing it is outdated or not.
As I played more with the module, I realised it didn't work and its code was a total mess. So I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage which I sent to fiware-lab-help@lists.fiware.org, but haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Do I need to worry about the auth token expiring? I presume it isn't necessary to re-authenticate every time we interact with the storage; the authentication should happen once when the server starts up (we create an instance) and the instance keeps the token internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant id change? From the quote below I presume that getting a tenant is just a one-time deal, and that later you can use it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section
describes how to get a valid token assuming an identity management
system compatible with OpenStack Keystone is being used. If the
username, password and tenant details are known, only step 3 is
required. source
During the authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, similar to what the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2. Ignore your version 1. It is commented out in the example. It should be removed from the documentation.
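For the Node.js rewrite, version 2 boils down to a raw PUT of the object body with the token in the X-Auth-Token header. Here is a minimal TypeScript sketch, where the function and parameter names are hypothetical and the URL layout follows the Python example above:

async function storeObject(
  token: string,
  storageUrl: string,    // the "auth" base URL from the Python example
  containerName: string,
  objectName: string,
  objectBody: string | Uint8Array,
): Promise<number> {
  const res = await fetch(`${storageUrl}/${containerName}/${objectName}`, {
    method: "PUT",
    headers: { "X-Auth-Token": token },
    body: objectBody,
  });
  return res.status; // Swift replies 201 Created on success
}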
(1) The token will be valid for some period of time; this could be an hour or a day, depending on the setup. This period should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed (a refresh sketch follows after this list).
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
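To illustrate point (1), here is a minimal sketch of a token cache that refreshes shortly before expiry. The endpoint, credential fields, and response shape are assumptions modeled on a Keystone v2-style identity service; adapt them to the service you actually authenticate against.

interface CachedToken {
  value: string;
  expiresAt: number; // epoch milliseconds
}

let cached: CachedToken | null = null;

async function getToken(): Promise<string> {
  // Refresh a minute early so in-flight requests don't hit an expired token.
  if (cached && Date.now() < cached.expiresAt - 60_000) {
    return cached.value;
  }
  // Hypothetical Keystone-style endpoint and payload.
  const res = await fetch("https://identity.example.org/v2.0/tokens", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      auth: {
        passwordCredentials: { username: "user", password: "pass" },
        tenantId: "my-tenant-id", // reusable, since the tenant id does not change
      },
    }),
  });
  const json = await res.json();
  cached = {
    value: json.access.token.id,
    expiresAt: Date.parse(json.access.token.expires),
  };
  return cached.value;
}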
Solved my troubles and created an NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge
I see that AWS posts a JSON file with all their IP ranges here (actual JSON here).
I was thinking of using this JSON file to check against every incoming connection in my node app, but firstly I was wondering if it would be far too much overhead to loop through it for every request.
Secondly, I wasn't sure exactly how to go about this, as many IP ranges are formatted differently, e.g.
43.250.192.0/24
46.51.128.0/18
27.0.0.0/22
I'm not too sure what those suffixes mean.
Has anyone done something similar?
Your first concern is correct - it's a lot of overhead to loop through Amazon's IPs for each request. This should be handled at the firewall.
The suffixes are CIDR notation: /24 means the first 24 bits of the address are the network prefix, so 43.250.192.0/24 covers 43.250.192.0 through 43.250.192.255.
Nevertheless, the ip_prefix field Amazon provides can be used to test whether an IP address falls within that subnet. The node-ip module can help with this: it has a cidrSubnet function that can be used to test a prefix against a user's IP. See the CoffeeScript below.
ip = require 'ip'                      # the node-ip package is published as "ip"
amazonIPs = require './amazonIPs.json' # the downloaded AWS ip-ranges file

someUsersIP = '192.168.1.190'

# Each entry in the "prefixes" array holds its CIDR block in ip_prefix.
for entry in amazonIPs.prefixes
  if ip.cidrSubnet(entry.ip_prefix).contains(someUsersIP)
    console.log "#{someUsersIP} is within the #{entry.ip_prefix} range"
Ansible has the gce_pd module: http://docs.ansible.com/gce_pd_module.html. According to the documentation you can specify the size and mode (READ, READ-WRITE) but not the type (SSD vs. standard). Is it possible to use the gce_pd module to create an SSD disk?
As of right now, https://github.com/ansible/ansible-modules-core/blob/devel/cloud/google/gce_pd.py has no mention of SSD at all, so it seems like it's not supported. If this is something that you really need, consider submitting a feature request.
This is now available in Ansible.
According to the updated official docs, disk_type was added in Ansible 1.9
disk_type can have these possible values:
pd-standard
pd-ssd
Here's an example:
# Simple attachment action to an existing instance
- local_action:
    module: gce_pd
    name: mongodata
    instance_name: www1
    size_gb: 30
    disk_type: pd-ssd