No available shapes in this compartment and availability domain - oracle-cloud-infrastructure

I'm using the OCI San Jose region and trying to create a capacity reservation under Compute -> Capacity Reservation, but whenever I try, the Shape dropdown is greyed out with the message:
No available shapes in this compartment and availability domain
Does this mean that this region is at capacity and cannot take reservations at this time? Or is there a limit or restriction I'm not aware of?
I've been trying each day this week, but the message remains the same.

It does mean that, for that specific shape, the region is at capacity and cannot take reservations. You can certainly file a support request to ask about the capacity situation. If anything, this seems to confirm that your use of reservations is sound practice.
You can also use the command line interface to check the region for available shapes:
$ oci compute shape list --compartment-id <compartment-OCID>
or, better:
$ oci compute shape list --compartment-id <compartment-OCID> | jq -r '.data[].shape' | sort | uniq
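If you'd rather check per availability domain (which is what the greyed-out dropdown is scoped to), here is a minimal sketch using the OCI Python SDK; it assumes a configured ~/.oci/config profile, and the compartment OCID and availability domain name below are placeholders:

import oci  # pip install oci

# Assumes a configured ~/.oci/config; the compartment OCID and
# availability domain here are placeholders, not real values.
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

shapes = compute.list_shapes(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="XXXX:US-SANJOSE-1-AD-1",
).data

# De-duplicate and print the shape names, mirroring the jq pipeline above.
for name in sorted({s.shape for s in shapes}):
    print(name)

If a shape you need is missing from this per-AD listing, that matches what the console dropdown is telling you.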

OpenAI Gym stepping in an externally controlled environment

I have a simulation that ticks the time every 5 seconds. I want to use OpenAI Gym and its baselines algorithms to perform learning in this environment. For that, I'd like to adapt the simulation by writing some adapter code that corresponds to the OpenAI Env API. But there is a problem: in the OpenAI setting, the flow of control is defined by the agent, whereas in my world the environment steps independently of the agent. If the agent doesn't decide, or is not fast enough, the world just keeps going without it. How would one achieve this reversal of who triggers the next step?
In short: an OpenAI Env gets stepped by the agent. My environment gives my agent about 2-3 seconds to decide, then just tells it what's new, again offering the choice to act or not.
As an example: my environment is rather similar to a real-world stock trading market. The agent gets 24 chances to buy/sell products at a certain limit price to accumulate a certain volume by the target time; at time step 24, the reward is given to the agent and the slot is completed. The reward is based on the average price paid per item compared to the average price paid by all market participants.
At any given moment, 24 slots are traded in parallel (24x parallel trading of futures). I believe for this I need to create 24 environments, which leads me to believe A3C would be a good choice.
After re-reading the question, it seems like OpenAI Gym is not a great fit for what you're trying to do. It is designed for running rapid experiments, which cannot be done efficiently if you are waiting on live events to occur. If you have no historical data and can only train on incoming live data, there is no point in using OpenAI Gym. You can write your own code to represent the environment from that data, and that would be easier than trying to morph it into another framework, although OpenAI Gym's API does provide a good model for how your environment should work.
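That said, if you do want to keep the Env interface, the usual trick is to invert control inside the adapter: step() submits the action and then blocks until the external world produces its next tick. Here is a minimal, illustrative sketch; the observation/action spaces, the 5-second tick, and the dummy reward are assumptions standing in for the real market simulation:

import queue
import threading
import time

import gym
import numpy as np
from gym import spaces


class ExternallyTickedEnv(gym.Env):
    """Adapter that inverts control: the world ticks on its own schedule,
    and step() simply waits for the next tick instead of driving it."""

    TICK_SECONDS = 5.0  # the simulation's own pace, per the question

    def __init__(self):
        # Placeholder spaces; the real ones depend on the market state.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(3)  # e.g. buy / sell / do nothing
        self._ticks = queue.Queue()
        self._pending_action = None
        threading.Thread(target=self._world_loop, daemon=True).start()

    def _world_loop(self):
        # Stand-in for the real simulation: it advances every 5 seconds
        # whether or not the agent has decided anything.
        while True:
            time.sleep(self.TICK_SECONDS)
            acted = self._pending_action is not None
            self._pending_action = None
            obs = np.random.randn(4).astype(np.float32)  # dummy state
            reward = 1.0 if acted else 0.0               # dummy reward
            self._ticks.put((obs, reward, False))

    def step(self, action):
        self._pending_action = action          # offer the decision to the world
        obs, reward, done = self._ticks.get()  # block until the next tick
        return obs, reward, done, {}

    def reset(self):
        obs, _, _ = self._ticks.get()  # wait for the first tick
        return obs

The design point is that the world thread never waits on the agent: if the agent is too slow, its action simply misses the tick and the world counts it as "no action", which matches your 2-3 second decision window.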

OBDII Based Lock/Unlock and Engine Start/Stop

I want to know what it takes to build a device that can lock/unlock the doors and start/stop the engine of a vehicle using OBDII. Is it possible? The idea is to make it app-connected using Bluetooth Low Energy or 3G to communicate with the car.
If it's not possible via OBDII, then what is the best way to do it?
I did some searching to see if there is an off-the-shelf device that can do such a thing and be controlled using APIs/SDKs, but all were proprietary and not open for integration. Any suggestions?
Keep in mind that OBDII only standardizes the emissions-related data of a vehicle. But since the CAN bus is a shared serial network, you may be able to see all of the traffic through the OBDII port, not just the standardized OBD data.
The lock/unlock system is usually part of the LIN network rather than CAN; LIN is a cheaper, simpler version of CAN (I think!).
The CAN bus has no encryption, so it is possible to read all of the CAN bus traffic, but the meaning of each ECU data packet is not public, and manufacturers keep those definitions under heavy control for security reasons. So if you want start/stop, you would probably have to hack the system or otherwise find the translation of each ECU data packet.
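Reading the raw frames really is the easy part; the hard part is the proprietary meaning of each frame. A minimal sniffing sketch with the python-can library, assuming a Linux SocketCAN interface exposed by your OBDII/CAN adapter (the channel name and bitrate are assumptions):

import can  # pip install python-can

# Assumes a SocketCAN interface, e.g. brought up with:
#   sudo ip link set can0 up type can bitrate 500000
bus = can.interface.Bus(channel="can0", bustype="socketcan")

for msg in bus:  # a Bus is iterable and yields can.Message objects
    # The arbitration ID tells you which message this is; the payload's
    # meaning is manufacturer-specific - that is the reverse-engineering gap.
    print(f"ID={msg.arbitration_id:#05x} data={msg.data.hex()}")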
In the end, if you are not a hacker with some network and vehicle knowledge, the odds of writing the app you have in mind are not great!

What's the free bandwidth given with Google compute engine instances

I'm unable to understand how much free bandwidth/traffic is allowed per Google Compute Engine instance. I'm currently using DigitalOcean, where every server includes free bandwidth/transfer; e.g. the $0.015/hr droplet (1 GB RAM / 1 CPU) includes 2 TB of transfer.
Hence, is there any free bandwidth per Compute Engine instance, or will Google charge for every bit transferred to/from the VM?
As documented on our Network Pricing page, traffic prices depend on the source and destination. There is no free "bucket of bits up to x GB" like a cellphone plan or something. Rather, certain types of traffic are always free and other types are charged: for example, anything coming in from the internet is free, as is anything sent to another VM in the same zone (using internal IPs).
If you are in the Free Trial, then of course we give you usage credits, so you can use up to that total amount, in dollars, "for free."

Autoscaling GCE Instance groups based on Cloud pub/sub queue

Can GCE instance groups be scaled up/down based on Google Cloud Pub/Sub queue counts or other asynchronous task queues such as PSQ?
Yes!
The feature is now in alpha: https://cloud.google.com/compute/docs/autoscaler/scaling-queue-based
I haven't tried this myself but looking at the documentation, it looks possible to set up autoscaling against Pub/Sub message queue counts.
This page [0] explains how to setup autoscaler to scale based on a standard metric provided by the Cloud Monitoring service.
This page [1] explains what metrics you can use with the autoscaler. These two look useful:
pubsub.googleapis.com/subscription/num_outstanding_messages
pubsub.googleapis.com/subscription/num_undelivered_messages
[0] https://cloud.google.com/compute/docs/autoscaler/scaling-cloud-monitoring-metrics
[1] https://cloud.google.com/monitoring/api/metrics
You can't use pubsub metrics (pubsub.googleapis.com/subscription/num_outstanding_messages or pubsub.googleapis.com/subscription/num_undelivered_messages) for that purpose.
According to the docs:
A valid utilization metric for scaling meets the following criteria:
The standard metric has a label for resource_id, and the value of that label for each stream is the ID of an instance.
The standard metric describes how busy an instance is, and the metric value increases or decreases proportionally to the number of virtual machine instances in the group.
Pub/Sub metrics don't meet those criteria.
However, there are two ways you can do Pub/Sub-based autoscaling:
Write your own custom metric - you can use the Cloud Monitoring API to get your Pub/Sub time series data, then use it to calculate your own custom monitoring metric - for example, the last time series value divided by your average/desired latency (see the sketch after this list).
You can use this method with any async queue solution you are using.
Still in alpha, there is a gcloud API for subscriber-based autoscaling: https://cloud.google.com/compute/docs/autoscaler/scaling-queue-based. This solution applies to Google Cloud Pub/Sub only, and you can't use it with other async queue solutions.
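To make the first option concrete, here is a minimal sketch using the google-cloud-monitoring Python client: it reads num_undelivered_messages for a subscription and republishes backlog divided by a desired latency as a custom metric. The project, subscription, custom metric name, and 30-second target are all placeholder assumptions:

import time

from google.cloud import monitoring_v3  # pip install google-cloud-monitoring

PROJECT = "projects/my-project"  # placeholder
SUBSCRIPTION = "my-sub"          # placeholder

client = monitoring_v3.MetricServiceClient()

def read_backlog():
    # Fetch recent num_undelivered_messages samples for the subscription.
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": now}, "start_time": {"seconds": now - 300}})
    results = client.list_time_series(request={
        "name": PROJECT,
        "filter": ('metric.type = '
                   '"pubsub.googleapis.com/subscription/num_undelivered_messages" '
                   f'AND resource.labels.subscription_id = "{SUBSCRIPTION}"'),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    })
    points = [p for series in results for p in series.points]
    return points[0].value.int64_value if points else 0  # newest point first

def write_custom_metric(backlog, desired_latency_s=30.0):
    # Republish backlog / desired latency as a custom metric for the autoscaler.
    series = monitoring_v3.TimeSeries()
    series.metric.type = "custom.googleapis.com/pubsub/backlog_per_latency"
    series.resource.type = "global"
    now = time.time()
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}})
    series.points = [monitoring_v3.Point(
        {"interval": interval,
         "value": {"double_value": backlog / desired_latency_s}})]
    client.create_time_series(name=PROJECT, time_series=[series])

write_custom_metric(read_backlog())

Note that a global custom metric like this pairs with group-level autoscaling; to satisfy the strict per-instance criteria quoted above, you would instead attach the metric to each gce_instance resource.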

Google Compute Engine auto scaling based on queue length

We host our infrastructure on Google Compute Engine and are looking into Autoscaling for groups of instances. We do a lot of batch processing of binary data from a queue. In our case, this means:
When a worker is processing data, the CPU is always at 100%
When the queue is empty, we want to terminate all workers
Depending on the length of the queue, we want a certain number of workers
However, I'm finding it hard to figure out a way to auto-scale this on Google Compute Engine, because the autoscaler appears to work on instance-only metrics such as CPU. From the documentation:
Not all custom metrics can be used by the autoscaler. To choose a valid custom metric, the metric must have all of the following properties:
The metric must be a per-instance metric.
The metric must be a valid utilization metric, which means that data from the metric can be used to proportionally scale up or down the number of virtual machines.
If I'm reading the documentation correctly, this makes it hard to use autoscaling on a global queue length?
Backup solutions
Write a simple auto-scale handler using the Google Cloud API to create or destroy workers using the Instances API
Write a simple auto-scale handler using instance groups and then manually insert/remove instances using InstanceGroups: insert
Write a simple auto-scaling handler using InstanceGroupManagers: resize (a sketch of this option follows the list)
Create a custom per-instance metric which measures len(queue)/len(workers) on all workers
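For the third backup option, a minimal sketch of a hand-rolled scaler using the google-api-python-client; the project, zone, group name, and one-worker-per-10-items policy are placeholder assumptions, and authentication is assumed to come from Application Default Credentials:

from googleapiclient.discovery import build  # pip install google-api-python-client

# Placeholders - substitute your own project, zone, and managed group name.
PROJECT, ZONE, GROUP = "my-project", "us-central1-b", "worker-group"

compute = build("compute", "v1")  # uses Application Default Credentials

def scale_to_queue(queue_length):
    # Example policy: one worker per 10 queued items, capped at 20 workers;
    # an empty queue resizes the group to zero, terminating all workers.
    target = min((queue_length + 9) // 10, 20)
    compute.instanceGroupManagers().resize(
        project=PROJECT,
        zone=ZONE,
        instanceGroupManager=GROUP,
        size=target,
    ).execute()

A cron job or small daemon that polls the queue and calls this covers the "terminate all workers when the queue is empty" requirement, since resizing a managed group to zero deletes its instances.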
As of February 2018 (Beta), this is possible via "Per-group metrics" in Stackdriver.
Per-group metrics allow autoscaling with a standard or custom metric that does not export per-instance utilization data. Instead, the group scales based on a value that applies to the whole group and corresponds to how much work is available for the group or how busy the group is. The group scales based on the fluctuation of that group metric value and the configuration that you define.
More information at https://cloud.google.com/compute/docs/autoscaler/scaling-stackdriver-monitoring-metrics#per_group_metrics
The how-to is too long to post here.
As far as I understand, this is not implemented yet (as of January 2016). At the moment, autoscaling is only targeted at web-serving scenarios, where you want to serve web pages or other web services from your machines and keep some reasonable headroom (e.g. in terms of CPU or other metrics) for spikes in traffic. The system then adjusts the number of instances/VMs to match your target.
You are looking for autoscaling for batch processing scenarios, and this is not catered for at the moment.