I was trying to run the CartPole-v0 example on a Google Compute Engine VM.
https://gym.openai.com/docs
import gym

env = gym.make('CartPole-v0')
env.reset()
for _ in range(1000):
    env.render()
    env.step(env.action_space.sample())  # take a random action
I have XQuartz installed, and I ssh in with -X.
First, I was getting
pyglet.canvas.xlib.NoSuchDisplayException: Cannot connect to "None"
Then, following http://www.gitterforum.com/discussion/openai-gym?page=28, I used `xvfb-run -s "-screen 0 1400x900x24" /bin/bash`.
Then it ran fine with the result:
Making new env: CartPole-v0
You are calling 'step()' even though this environment has already returned done = True. You should always call 'reset()' once you receive 'done = True' -- any further steps are undefined behavior
But there is nothing rendered...
Use the code from the Observations section of https://gym.openai.com/docs.
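That section's loop looks roughly like this (the episode and step counts are arbitrary); note that it calls reset() again once done is returned instead of stepping a finished episode:
import gym

env = gym.make('CartPole-v0')
for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        env.render()
        action = env.action_space.sample()  # still a random action
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(t + 1))
            break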
I have the following lines of Python code:
import os

def hello_world():
    r = os.system("curl ipinfo.io/ip")
    print(r)

hello_world()
This shows the desired output when executed from the command line in Google Cloud Shell, but there seems to be a 0 at the end of the IP address output:
$ python3 main2.py
34.X.X.2490
When I deployed the same code as a Google Cloud Function, it shows OK as the output.
I had to replace the first line of the code as follows to make it deploy in GCF:
def hello_world(self):
Any suggestion so that GCF displays the desired output which is the output of curl command?
Your function won't work for two reasons:
Firstly, you don't respect the HTTP Cloud Function Python function signature:
def hello_world(request):
....
Secondly, you shouldn't rely on system calls. Strictly speaking you can perform them, but because you don't know which packages and binaries are installed in the runtime image, you can't rely on them. It's serverless: you don't manage the underlying infrastructure or runtime environment.
Here you made the assumption that curl is installed in the runtime image. Maybe it is, maybe it isn't, and even if it is today, it may be removed in the future. You can't rely on that.
If you want to manage your runtime environment, you can use Cloud Run. There you manage the runtime environment yourself, so you can install whatever you want on it and be sure of what is available.
Last remarks:
Note: instead of shelling out to curl, you can perform an HTTP GET request to the same URL to get the IP (see the sketch after these remarks).
Why do you want to know the outgoing IP? It's serverless, so you also don't manage the network. You reach the internet through Google's IPs, which can change at any time, and other Cloud Functions (or Cloud Run services), from your projects or from other people's projects (like mine), can use the same IPs. They are Google's IPs, not yours! If this is a hard requirement, let me know; there are solutions for that.
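Putting both points together, a minimal sketch of the corrected function could look like the following; it assumes the requests package is declared in requirements.txt, and the URL is just the one from the question:
import requests

def hello_world(request):
    # HTTP Cloud Functions receive the incoming request as the first argument
    r = requests.get("https://ipinfo.io/ip", timeout=5)
    # Return the response body instead of printing the exit code of a shell command
    return r.text.strip()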
I need to import an existing Aurora cluster into Terraform. I tried the `terraform import aws_rds_cluster.sample_cluster cluster` statement.
I got the state file ready, and I could also run `terraform show`. However, when I try to destroy the cluster, Terraform tries to delete the cluster without the instances under it, so the destroy command fails:
`Error: error deleting RDS Cluster (test): InvalidDBClusterStateFault: Cluster cannot be deleted, it still contains DB instances in non-deleting state.status code: 400, request id: 15dfbae8-aa13-4838-bc42-8020a2c87fe9`
Is there a way I can import the entire cluster, including its instances? I need a single state file that can be used to manage the entire cluster (including the underlying instances).
Here is the main.tf that is used for the import:
provider "aws" {
  access_key = "***"
  secret_key = "*****"
  region     = "us-east-1"
}

resource "aws_rds_cluster" "test" {
  engine               = "aurora-postgresql"
  engine_version       = "11.9"
  instance_class       = "db.r5.2xlarge"
  name                 = "test"
  username             = "user"
  password             = "******"
  parameter_group_name = "test"
}
Based on the comments.
Importing just the aws_rds_cluster into TF is not enough. One must also import all the aws_rds_cluster_instance resources which are part of the cluster, as sketched below.
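As a sketch for the cluster above (the instance identifiers here are hypothetical; each one needs a matching aws_rds_cluster_instance block in main.tf and must match the real DB instance identifiers in the cluster):
# Import the cluster itself
terraform import aws_rds_cluster.test test
# Import every DB instance that belongs to the cluster
terraform import aws_rds_cluster_instance.instance_1 test-instance-1
terraform import aws_rds_cluster_instance.instance_2 test-instance-2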
If the existing infrastructure is complex, then instead of fully manual development of TF config files for the import procedure, an open-source third-party tool called former2 could be considered. The tool can generate TF config files from existing resources:
Former2 allows you to generate Infrastructure-as-Code outputs from your existing resources within your AWS account.
TF is one of the outputs supported.
I've been using the Google apiclient library in Python for various Google Cloud APIs - mostly for Google Compute - with great success.
I want to start using the library to create and control the Google Logging mechanism offered by the Google Cloud Platform.
However, this is a beta version, and I can't find any real documentation or example on how to use the logging API.
All I was able to find are high-level descriptions such as:
https://developers.google.com/apis-explorer/#p/logging/v1beta3/
Can anyone provide a simple example on how to use apiclient for logging purposes?
for example creating a new log entry...
Thanks for the help
Shahar
I found this page:
https://developers.google.com/api-client-library/python/guide/logging
Which states you can do the following to set the log level:
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
However it doesn't seem to have any impact on the output which is always INFO for me.
I also tried setting httplib2 to debuglevel 4:
import httplib2
httplib2.debuglevel = 4
Yet I don't see any HTTP headers in the log :/
I know this question is old, but it is getting some attention, so I guess it might be worth answering it, in case someone else comes here.
Stackdriver Logging Client Libraries for Google Cloud Platform are not in beta anymore, as they hit General Availability some time ago. The link I shared contains the most relevant documentation for installing and using them.
After running the command pip install --upgrade google-cloud-logging, you will be able to authenticate with your GCP account, and use the Client Libraries.
Using them is as easy as importing the library with a command such as from google.cloud import logging, then instantiating a new client (with the defaults, or even passing the Project ID and Credentials explicitly) and finally working with logs as you want.
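For instance, writing a new log entry (which is what the original question asked about) can look roughly like this; the project ID and log name are placeholders:
from google.cloud import logging

# Instantiate a client; credentials are picked up from the environment
logging_client = logging.Client(project="my-project")

# Get a logger for a named log and write a simple text entry
logger = logging_client.logger("my-log")
logger.log_text("Hello from the Cloud Logging client library", severity="INFO")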
You may also want to visit the official library documentation, where you will find all the details of how to use the library, which methods and classes are available, and how to do most of the things, with lots of self-explanatory examples, and even comparisons between the different alternatives on how to interact with Stackdriver Logging.
As a small example, let me also share a snippet of how to retrieve the five most recent logs which have a severity higher than "warning":
# Import the Google Cloud Python client library
from google.cloud import logging
from google.cloud.logging import DESCENDING

# Instantiate a client
logging_client = logging.Client(project=<PROJECT_ID>)

# Set the filter to apply to the logs; this one retrieves GAE logs from the default service with a severity higher than "warning"
FILTER = 'resource.type:gae_app and resource.labels.module_id:default and severity>=WARNING'

i = 0
# List the entries in DESCENDING order, applying the FILTER, and stop after five entries
for entry in logging_client.list_entries(order_by=DESCENDING, filter_=FILTER):  # API call
    print('{} - Severity: {}'.format(entry.timestamp, entry.severity))
    i += 1
    if i >= 5:
        break
Bear in mind that this is just a simple example, and that many things can be achieved using the Logging Client Library, so you should refer to the official documentation pages that I shared in order to get a more deep understanding of how everything works.
"However it doesn't seem to have any impact on the output which is always INFO for me."
Add a logging handler (and make sure the root logger's level lets the records through), e.g.:
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s %(process)d %(levelname)s: %(message)s')
consoleHandler = logging.StreamHandler()
consoleHandler.setLevel(logging.DEBUG)
consoleHandler.setFormatter(formatter)
logger.addHandler(consoleHandler)
I am trying to automate the deployment of an AWS VPN (IPsec) instance using Python boto. I am launching a new instance using 'ec2.run_instances':
reservations = ec2.run_instances(
    image_id,
    subnet_id=subnet_id,
    instance_type=instance_type,
    instance_initiated_shutdown_behavior='stop',
    key_name=key_name,
    security_group_ids=[security_group])
For this script to work, I need to disable the source/destination check for this instance. I couldn't find a way to do this using Python boto. As per the boto documentation, I can do this using 'modify_instance_attribute':
http://boto.likedoc.net/en/latest/ref/ec2.html
However I couldn't find any sample script using this attribute. Please give me some examples so that I can complete this.
Thanks in advance.
From the boto3 documentation, the way you would do this is:
import boto3
import requests

# Look up this instance's ID via the EC2 instance metadata service
response = requests.get('http://169.254.169.254/latest/meta-data/instance-id')
instance_id = response.text
ec2_client = boto3.client('ec2')
result = ec2_client.modify_instance_attribute(InstanceId=instance_id, SourceDestCheck={'Value': False})
You would have to use the modify_instance_attribute method after you have launched the instance with run_instances. In boto 2, run_instances returns a Reservation object; assuming it contains a single instance:
instance = reservations.instances[0]
ec2.modify_instance_attribute(instance.id, attribute='sourceDestCheck', value=False)
I am intermittently getting this error when calling build on a drive service. I am able to reproduce this with a simple program which has the JSON credentials stored to a file.
#!/usr/bin/python
import httplib2
import sys
from apiclient.discovery import build
from oauth2client.client import Credentials
json_creds = open('creds.txt', 'r').read()
creds = Credentials.new_from_json(json_creds)
http = httplib2.Http()
http = creds.authorize(http)
try:
    drive_service = build('drive', 'v2', http=http)
except Exception:
    sys.exit(-1)
When I run this in a loop, I am seeing a rather high number of errors; this code fails 15-25% of the time for me:
i=0; while [ $i -lt 100 ]; do python jsoncred.py || echo FAIL ; i=$(( $i + 1 )); done | grep FAIL | wc -l
Now when I take this same code and just replace 'drive' with 'oauth2', the code runs without problems.
I have confirmed that the OAuth token that I am using is valid and has the correct scopes:
"expires_in": 2258,
"scope": "https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/userinfo.email",
Looking at my app logs, this seems to have started 2/14/2013 1PM PST. I did not push any new code, so I wonder if this is a problem with the API. Is there a bug in the API causing this?
Google is seeing some reports of increased error rates for the discovery document. Please just retry on a 500 error for now, and you should be successful.
One could argue that you should have retry logic for this call anyway, since it is good practice, but the current levels are too high, so, sorry about that.
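A minimal sketch of such retry logic, reusing the http object from the snippet above (the retry count and backoff delays are arbitrary):
import time

drive_service = None
for attempt in range(5):  # arbitrary number of retries
    try:
        drive_service = build('drive', 'v2', http=http)
        break
    except Exception:
        time.sleep(2 ** attempt)  # simple exponential backoff
if drive_service is None:
    sys.exit(-1)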
Update: this should now be fixed.