How to send a JSON message via Python Requests/URL Module to a Pi Django server from a Pizero W on the same router/LAN?

I have a Raspberry Pi 3+ and a Raspberry Pizero W connected to the same home wireless network.
The Pi 3+ is hosting a Django server, and the IP address of the Pi 3+ is static on the home network at 192.384.5.767.
The Pizero W has a static IP address on the home network of 192.384.5.343.
When I ping the Pi 3+ Server from the Pizero W, I can see that it is up:
pi@PIZEROW:~$ ping 192.384.5.767
64 bytes from 192.384.5.767: icmp_seq=1 ttl=64 time=10.5 ms
64 bytes from 192.384.5.767: icmp_seq=2 ttl=64 time=30.0 ms
64 bytes from 192.384.5.767: icmp_seq=3 ttl=64 time=32.1 ms
Additionally, when I access the website hosted by the Django server on a tablet, there are no issues.
However, I need two-way communication between the Pizero W and the Django server, so I would like to test sending a super simple JSON message from the Pizero W to the Pi 3+ server and receiving an acknowledgement from the server that the message arrived. I don't need the server to do anything with the message other than receive it, discard it, and send a confirmation that it was received.
I tried doing this from the Pizero W using:
pi@PIZEROW:~$ python3
>>> import requests
>>> import json
>>> url = "http://192.384.5.767:8000"
>>> data = {'msg':'test'}
>>> r = requests.post(url, data)
And received the following error:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 80, in create_connection
raise err
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 70, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.7/http/client.py", line 1229, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1275, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1224, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1016, in _send_output
self.send(msg)
File "/usr/lib/python3.7/http/client.py", line 956, in send
self.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 181, in connect
conn = self._new_conn()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0xb5a2b610>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='192.384.5.767', port=8000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xb5a2b610>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/requests/api.py", line 116, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.384.5.767', port=8000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xb5a2b610>: Failed to establish a new connection: [Errno 111] Connection refused'))
Not exactly sure what's going on, but it seems like despite the fact that the Django server is up and running, as evidenced by my tablet and the successful ping, I still can't send and receive messages from the Pizero.
Any help / advice would be appreciated! Thank you!

This could be one of three things:
(1) your Django server is not bound to that IP address or to 0.0.0.0. By default, the runserver command starts Django listening only on 127.0.0.1 (localhost), so connections from other machines are refused. To get it to bind to the correct IP address / port, use
python manage.py runserver 192.384.5.767:8000
In production environments, Django applications are usually served with gunicorn or uwsgi and listen on port 80. If you are running with gunicorn / uwsgi, update the appropriate configuration setting.
(2) a firewall on the Pi is blocking access. Depending on the operating system, you may need to allow remote access to the TCP port in question (8000 for the development server, 80 in production). Depending on the version of Linux, you may also have to add rules for AppArmor and/or SELinux. For example:
iptables -A INPUT -p tcp --dport 8000 -j ACCEPT
(3) finally, the call using requests is slightly off. To send the payload as a JSON body rather than form-encoded data, it should be
requests.post(url, json=data)
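Since the question title mentions both the Requests and URL modules, here is a minimal standard-library sketch of the same test message using urllib. The helper name `build_json_post` is made up for illustration; the URL and payload are the ones from the question. Nothing is sent until `urlopen` is called, so you can inspect the request first.

```python
import json
import urllib.request

def build_json_post(url, payload):
    # Serialize the payload and prepare (but not yet send) a POST
    # request carrying a JSON body with the matching Content-Type.
    body = json.dumps(payload).encode('utf-8')
    return urllib.request.Request(
        url,
        data=body,
        headers={'Content-Type': 'application/json'},
        method='POST',
    )

req = build_json_post('http://192.384.5.767:8000/', {'msg': 'test'})
# Once the server is bound correctly, send it with:
#   with urllib.request.urlopen(req, timeout=5) as resp:
#       print(resp.status, resp.read())
```

The same pattern with requests is just `requests.post(url, json={'msg': 'test'}, timeout=5)`.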


ValueError: Unknown URI - must be a path to an IPC socket, a websocket beginning with 'ws' or a URL beginning with 'http'

I am using Brownie and added a Ganache local network using the following command:
brownie networks add Development ganache-local host=HTTP://127.0.0.1:7545 cmd=ganache-cli
After compiling, I try to deploy the smart contracts with brownie run token.py --network ganache-local. Then I get the following error:
TokenProject is the active project.
File "brownie/_cli/__main__.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "brownie/_cli/run.py", line 45, in main
network.connect(CONFIG.argv["network"])
File "brownie/network/main.py", line 40, in connect
web3.connect(host, active.get("timeout", 30))
File "brownie/network/web3.py", line 68, in connect
raise ValueError(
ValueError: Unknown URI - must be a path to an IPC socket, a websocket beginning with 'ws' or a URL beginning with 'http'
I did some digging and found this in the source code:
if self.provider is None:
    if uri.startswith("ws"):
        self.provider = WebsocketProvider(uri, {"close_timeout": timeout})
    elif uri.startswith("http"):
        self.provider = HTTPProvider(uri, {"timeout": timeout})
    else:
        raise ValueError(
            "Unknown URI - must be a path to an IPC socket, a websocket "
            "beginning with 'ws' or a URL beginning with 'http'"
        )
Does this mean I have to set a provider (like Alchemy) for my local network? Does that even make sense?
After 4 days of struggling, passing the network_id=5777 argument solved my problem. Apparently, Brownie cannot recognize the local network created in the Ganache app without this argument, i.e.:
brownie networks add Development ganache-local host=HTTP://127.0.0.1:7545 cmd=ganache-cli network_id=5777

How to connect MySQL instance from GCP project to AWs Lambda function?

I've hosted my MySQL instance in a GCP project and I want to use its database in an AWS Lambda function. I've tried all the ways to connect to the DB in the MySQL instance on GCP, but the Lambda function gives me a timeout error even though I've set the timeout period long enough for the function to run.
I've also zipped the package with MySQL and pymysql installed and then uploaded it to Lambda, but the issue still persists.
Here's the code that I've written for connecting to my DB:
import json
import boto3
import mysql.connector
import MySQLdb

def lambda_handler(event, context):
    mydb = MySQLdb.connect(
        host="Public Ip of MySQL Instance",
        user="Username",
        password="Password",
        db="DbName"
    )
    cur = mydb.cursor()
    cur.execute("SELECT * FROM budget")
    for row in cur.fetchall():
        print(row[0])
    mydb.close()
Here's the Error that I receive:
{
"errorMessage": "(2003, \"Can't connect to MySQL server on '36.71.43.131' (timed out)\")",
"errorType": "OperationalError",
"stackTrace": [
" File \"/var/lang/lib/python3.8/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n",
" File \"/var/lang/lib/python3.8/imp.py\", line 171, in load_source\n module = _load(spec)\n",
" File \"<frozen importlib._bootstrap>\", line 702, in _load\n",
" File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n",
" File \"/var/task/lambda_function.py\", line 10, in <module>\n connection = pymysql.connect(host='36.71.43.131',\n",
" File \"/var/task/pymysql/connections.py\", line 353, in __init__\n self.connect()\n",
" File \"/var/task/pymysql/connections.py\", line 664, in connect\n raise exc\n"
]
}
Please help me to resolve this. I've tried all different ways to connect to my SQL instance but nothing works.
According to the error message, the AWS Lambda function tried to connect to the public IP address of the MySQL instance directly.
You have to configure your MySQL instance to have a public IPv4 address, and to accept connections from specific IP addresses or a range of addresses by adding authorized addresses to your instance.
To configure access to your MySQL instance:
1. From the client machine, use "What's my IP" to see the IP address of the client machine.
2. Copy that IP address.
3. Go to the Cloud SQL Instances page in the Google Cloud Console.
4. Click the instance to open its Overview page, and record its IP address.
5. Select the Connections tab.
6. Under Authorized networks, click Add network and enter the IP address of the machine where the client is installed.
Note: The IP addresses must be IPv4. That is, the IP addresses of the instance, and of the client machine that you authorize, both must be IPv4.
7. Click Done. Then click Save at the bottom of the page to save your changes.
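Before (or after) changing the authorized networks, it can help to confirm from the Lambda side whether the TCP connection itself is blocked: a timeout, as in the error above, usually means a network/firewall problem, while a refusal means the host is reachable but nothing is listening. A minimal standard-library sketch (the helper name `can_reach` is made up for illustration):

```python
import socket

def can_reach(host, port, timeout=3.0):
    # Quick TCP reachability probe: returns True if a connection to
    # host:port succeeds within `timeout` seconds; False if the
    # connection is refused, times out, or the host cannot be resolved.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Inside the Lambda handler you could log can_reach('36.71.43.131', 3306)
# before attempting the MySQL connection.
```

If this returns False with a timeout from inside Lambda but True from your laptop, the problem is the instance's authorized-networks / firewall configuration rather than the MySQL client code.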

How to deploy tornado on openshift

When I start the Tornado application through ssh I get this error:
Traceback (most recent call last):
File "/var/lib/openshift/54f9750d4382eca672000091/app-root/runtime/repo//app/ws_server.py", line 111, in <module>
app.listen(8000)
File "/var/lib/openshift/54f9750d4382eca672000091/python/virtenv/venv/lib64/python3.3/site-packages/tornado/web.py", line 1691, in listen
server.listen(port, address)
File "/var/lib/openshift/54f9750d4382eca672000091/python/virtenv/venv/lib64/python3.3/site-packages/tornado/tcpserver.py", line 125, in listen
sockets = bind_sockets(port, address=address)
File "/var/lib/openshift/54f9750d4382eca672000091/python/virtenv/venv/lib64/python3.3/site-packages/tornado/netutil.py", line 145, in bind_sockets
sock.bind(sockaddr)
PermissionError: [Errno 13] Permission denied
I used this project as an example, but it doesn't work :(
How can I resolve this problem?
If I provide ip like this:
ip = os.getenv('OPENSHIFT_PYTHON_IP')
port = int(os.getenv('OPENSHIFT_PYTHON_PORT'))
app.listen(port, ip)
I get another error:
[Errno 98] Address already in use
In addition I would like to deploy two independent applications with Flask and Tornado that have shared codebase.
This is because OpenShift only lets applications listen on port 8080 and ports 15000+. Port 8080 is already used by the WSGI container, so you get "Address already in use" when you bind 8080. If you stop the WSGI server, Tornado will work.
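A sketch of resolving the bind address under these constraints, reusing the OPENSHIFT_PYTHON_IP variable from the question but picking a port in the permitted 15000+ range instead of the env-supplied 8080 (the helper name and the fallback values are illustrative assumptions):

```python
import os

def resolve_bind_address(port=15000):
    # The gear's IP still comes from the OpenShift env var, but since
    # the WSGI container already occupies port 8080, bind Tornado to a
    # port in the permitted 15000+ range (15000 is an arbitrary choice).
    ip = os.getenv('OPENSHIFT_PYTHON_IP', '127.0.0.1')
    return ip, port

# Usage with the Tornado app from the question:
#   ip, port = resolve_bind_address()
#   app.listen(port, ip)
```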

Google Cloud SDK 0.9.39 still fails to do setup-managed-vms

I just updated my gcloud components on my OS X Mavericks laptop, so now I have:
$ gcloud version
Google Cloud SDK 0.9.39
$ boot2docker version
Boot2Docker-cli version: v1.3.2
Git commit: e41a9ae
I was hoping managed vms setup would work, but alas:
$ gcloud preview app setup-managed-vms
Select the runtime to download the base image for:
[1] Go
[2] Java
[3] Python27
[4] All
Please enter your numeric choice (4): 2
Pulling base images for runtimes [java] from Google Cloud Storage
Pulling image: google/appengine-java
Traceback (most recent call last):
File "/Users/hussein/google-cloud-sdk/./lib/googlecloudsdk/gcloud/gcloud.py", line 175, in <module>
main()
File "/Users/hussein/google-cloud-sdk/./lib/googlecloudsdk/gcloud/gcloud.py", line 171, in main
_cli.Execute()
File "/Users/hussein/google-cloud-sdk/./lib/googlecloudsdk/calliope/cli.py", line 385, in Execute
post_run_hooks=self.__post_run_hooks, kwargs=kwargs)
File "/Users/hussein/google-cloud-sdk/./lib/googlecloudsdk/calliope/frontend.py", line 274, in _Execute
pre_run_hooks=pre_run_hooks, post_run_hooks=post_run_hooks)
File "/Users/hussein/google-cloud-sdk/./lib/googlecloudsdk/calliope/backend.py", line 928, in Run
result = command_instance.Run(args)
File "/Users/hussein/google-cloud-sdk/lib/googlecloudsdk/appengine/app_commands/setup_managed_vms.py", line 39, in Run
args.image_version)
File "/Users/hussein/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/images/pull.py", line 54, in PullBaseDockerImages
util.PullSpecifiedImages(docker_client, image_names, version, bucket)
File "/Users/hussein/google-cloud-sdk/./lib/googlecloudsdk/appengine/lib/images/util.py", line 232, in PullSpecifiedImages
'Error pulling {image}: {e}'.format(image=image_name, e=e))
googlecloudsdk.appengine.lib.images.util.DockerPullError: Error pulling google/appengine-java: 500 Server Error: Internal Server Error ("Invalid registry endpoint "http://localhost:49153/v1/". HTTPS attempt: Get https://localhost:49153/v1/_ping: read tcp 127.0.0.1:49153: connection reset by peer. HTTP attempt: Get http://localhost:49153/v1/_ping: read tcp 127.0.0.1:49153: connection reset by peer")
I'm very new to Docker and managed VMs, and I'm wondering if the issue is due to my boot2docker port-forwarding setup. My env is set up correctly, I think:
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.59.103:2376
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/Users/h/.boot2docker/certs/boot2docker-vm
With the docker host IP being:
$ boot2docker ip
docker@localhost's password:
The VM's Host only interface IP address is: 192.168.59.103
Finally, the containers on my system so far are:
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
73c6d317a631 google/docker-registry:latest "./run.sh" 2 minutes ago Exited (-1) About a minute ago goofy_archimedes
40b709d6fa00 gcloud-credentials-image:latest "/true" 2 minutes ago Exited (0) 2 minutes ago gcloud-credentials-1417828737.2
a3073bc56ff2 google/docker-registry:latest "./run.sh" 47 hours ago Exited (-1) 47 hours ago distracted_bell
1b6fe130af45 11cd171d89b3 "/true" 47 hours ago Exited (0) 47 hours ago gcloud-credentials-1417707423.48
28c181e66b11 google/docker-registry:latest "./run.sh" 2 days ago Exited (0) 4 minutes ago 0.0.0.0:5000->5000/tcp elegant_darwin
So why is the gcloud Python script trying to access the registry on localhost? Someone, please show me the light!

Youtube page(WebView) automation on android Chrome browser

I was able to successfully open a YouTube page (WebView) in the Android Chrome browser with Appium 1.0.0.3.
I'm stuck here, guys. Please help me achieve my objectives:
- I can't identify elements inside the WebView using "uiautomatorviewer.bat"
- I can't switch to the WebView with the following code:
driver.switch_to.context('WEBVIEW')
I also tried:
driver.switch_to.context(webview)
My objective:
Scenario 1:
Open Youtube page on android Chrome browser.
Select any video for playback.
Tap on pause and play.
Tap on Full screen button.
Scenario 2:
Open URL: "http://m.youtube.com/watch?v=7-7knsP2n5w" on android Chrome browser.
Tap on play button.
Sample code:
from appium import webdriver
desired_caps = {}
desired_caps['automationName'] = 'Appium'
desired_caps['platformName'] = 'Android'
desired_caps['platformVersion'] = '4.4'
desired_caps['deviceName'] = 'Nexus 7'
desired_caps['browserName'] = 'Chrome'
driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
driver.get('http://m.youtube.com')
driver.switch_to.context('WEBVIEW') # Unable to switch to webview. Getting following error.
Python interpreter error:
Traceback (most recent call last):
File "C:\Python27\lib\bdb.py", line 400, in run
exec cmd in globals, locals
File "C:\Data\VVO\Automation\ROBOT_FRAMEWORK\AndroidTest\Appium\tmp_browser.py", line 2, in <module>
from appium import webdriver
File "build\bdist.win32\egg\appium\webdriver\switch_to.py", line 31, in context
self._driver.execute(MobileCommand.SWITCH_TO_CONTEXT, {'name': context_name})
File "C:\Python27\lib\site-packages\selenium-2.42.0-py2.7.egg\selenium\webdriver\remote\webdriver.py", line 172, in execute
self.error_handler.check_response(response)
File "build\bdist.win32\egg\appium\webdriver\errorhandler.py", line 29, in check_response
raise wde
WebDriverException: Message: 'unknown command: session/d6c14715e9f6ca685030f9f56a2b698e/context'
Appium server logs:
POST /wd/hub/session/74354d82912ad3db704d67eeb04a295b/url 200 18052ms - 85b
debug: Appium request initiated at /wd/hub/session/74354d82912ad3db704d67eeb04a295b/context
debug: Request received with params: {"sessionId":"74354d82912ad3db704d67eeb04a295b","name":"WEBVIEW"}
debug: Proxying command to 127.0.0.1:9515
info: Making http request with opts: {"url":"http://127.0.0.1:9515/wd/hub/session/74354d82912ad3db704d67eeb04a295b/
context","method":"POST","json":{"sessionId":"74354d82912ad3db704d67eeb04a295b","name":"WEBVIEW"}}
debug: Proxied response received with status 404: "unknown command: session/74354d82912ad3db704d67eeb04a295b/context"
POST /wd/hub/session/74354d82912ad3db704d67eeb04a295b/context 404 8ms - 65b
When you initialize your webdriver, the url should be set to http://0.0.0.0:9515/wd/hub/
You're having issues switching contexts because Appium isn't talking to the app successfully.
Try print driver.page_source and when that works (and prints out the page xml), the rest of everything (switching context, etc) should work, too.
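WebView contexts also register asynchronously, so switching immediately after driver.get() can fail even once Appium and ChromeDriver are talking. Rather than switching to the literal name 'WEBVIEW', it can help to poll driver.contexts until a WebView entry appears. A generic standard-library polling helper (the Appium usage in the comment is illustrative; real context names look like 'WEBVIEW_com.android.chrome'):

```python
import time

def wait_for(predicate, timeout=10.0, interval=0.5):
    # Call `predicate` repeatedly until it returns a truthy value or
    # the timeout expires; return that value, or None on timeout.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    return None

# Illustrative Appium usage:
#   ctx = wait_for(lambda: next(
#       (c for c in driver.contexts if c.startswith('WEBVIEW')), None))
#   if ctx:
#       driver.switch_to.context(ctx)
```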