Using a Webpage to Control a Robot Arm from Linux

First off, I am no longer a student and am currently working on a favour for a friend. I am making a website with a live video feed of a robotic arm and a set of buttons that allow users basic interaction with the arm.
I have set up the website and the live video feed. I do have a 4-second delay using Flash Media Encoder and Flash Media Server 4.5. Does anyone have suggestions for reducing the delay?
I have written the Python code required for the Maplin robotic arm, but now I am stuck and not sure how to link my Python code to a webpage interface. Could anyone who has done this before provide code that I could edit and learn from?
Python Code
import usb.core
import usb.util
import sys
import time
# This program is intended to control a robotic arm via USB from Linux
# The code is written in Python by Neil Polwart (c) 2011
# It is a work in progress and will be improved!
# locate the device
dev = usb.core.find(idVendor=0x1267, idProduct=0x0000)
# assigns the device to the handle "dev"
# can check the device is visible to Linux with command line command lsusb
# which should report a device with the above vendor and id codes.
# was it found?
if dev is None:
    raise ValueError('Device not found') # if device not found report an error
# set the active configuration
dev.set_configuration()
# with no arguments, the first configuration will be the active one
# note: commands are sent to the device as control transfers, not data streams,
# so there is no need to define an endpoint
# defines the command packet to send
datapack=0x80,0,0
# change this packet to make different moves.
# first byte defines most of the movements, second byte shoulder rotation, third byte light
# command structure in more detail:
# http://notbrainsurgery.livejournal.com/38622.html?view=93150#t93150
print "requested move",datapack # reports the requested movement to the user
# send the command
bytesout=dev.ctrl_transfer(0x40, 6, 0x100, 0, datapack, 1000)
# outputs the command to the USB device, using the ctrl_transfer method
# 0x40, 6, 0x100, 0 defines the details of the write - bRequestType, bRequest, wValue, wIndex
# datapack is our command (3 bytes)
# the final value is a timeout (in ms) which is optional
# bytesout = the number of bytes written (i.e. 3 if successful)
print "Written :",bytesout,"bytes" # confirm to user that data was sent OK
# wait for a defined period
time.sleep(1) # waits for 1 second whilst motors move.
# now STOP the motors
datapack=0,0,0
bytesout=dev.ctrl_transfer(0x40, 6, 0x100, 0, datapack, 1000)
if bytesout == 3: print "Motors stopped"
So I need to find a way to edit the datapack line via a web interface. Any help is appreciated! I am using a Windows 7 setup but do have access to VMware.

I'd set up an Apache server with mod_python and create a handler that imports your script and runs the necessary code. You can set up an AJAX script in JavaScript (with or without jQuery). Every time you want to run the Python script, a request needs to be made to the server. You can pass any information back and forth as needed over HTTP.
Here's a good tutorial for Python and the CGI Module.
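To make that concrete, here is a rough, untested CGI-style sketch of the server side. The script name (arm.py), the move parameter and the entries in the MOVES table are illustrative assumptions; the USB calls are lifted from the question's code. A button on the page would then fire an AJAX request such as GET /cgi-bin/arm.py?move=light_on.
#!/usr/bin/env python
# arm.py - assumed to live in Apache's cgi-bin directory (name is an assumption)
import cgi
import time
import usb.core

# Map web-friendly command names onto datapack tuples.
# The non-zero byte values here are examples only - consult the command
# structure linked in the question for the real movement codes.
MOVES = {
    'light_on': (0, 0, 1),
    'shoulder': (0, 1, 0),
    'stop':     (0, 0, 0),
}

form = cgi.FieldStorage()
move = form.getfirst('move', 'stop')
datapack = MOVES.get(move, (0, 0, 0))

dev = usb.core.find(idVendor=0x1267, idProduct=0x0000)
if dev is None:
    raise ValueError('Device not found')
dev.set_configuration()

dev.ctrl_transfer(0x40, 6, 0x100, 0, datapack, 1000)   # start the move
time.sleep(1)                                          # let the motors run
dev.ctrl_transfer(0x40, 6, 0x100, 0, (0, 0, 0), 1000)  # stop the motors

print("Content-Type: text/plain")
print("")
print("moved: " + move)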

Related

Save a model weights when a program receives TIME LIMIT while learning on a SLURM cluster

I use deep learning models written in pytorch_lightning (PyTorch) and train them on Slurm clusters. I submit jobs like this:
sbatch --gpus=1 -t 100 python train.py
When the requested GPU time ends, Slurm kills my program and shows a message like this:
Epoch 0: : 339it [01:10, 4.84it/s, loss=-34] slurmstepd: error: *** JOB 375083 ON cn-007 CANCELLED AT 2021-10-04T22:20:54 DUE TO TIME LIMIT ***
How can I configure the Trainer to save the model when the available time ends?
I know about automatic saving after each epoch, but I have only one long epoch that lasts more than 10 hours, so that approach does not work for me.
You can use Slurm's signalling mechanism to pass a signal to your application when it's within a certain number of seconds of the time limit (see man sbatch). In your submission script, use --signal=USR1@30 to send USR1 30 seconds before the time limit is reached. Your submit script would contain these lines:
#SBATCH -t 100
#SBATCH --signal=USR1@30
srun python train.py
Then, in your code, you can handle that signal like this:
import signal

def handler(signum, frame):
    print('Signal handler got signal ', signum)
    # e.g. exit(0), or call your pytorch save routines

# enable the handler
signal.signal(signal.SIGUSR1, handler)

# your code here
You need to call your Python application via srun in order for Slurm to be able to propagate the signal to the Python process. (You can probably also pass --signal on the command line to sbatch; I just tend to prefer writing self-contained submit scripts. :))
Edit: This link has a nice summary of the issues involved with signal propagation and Slurm.
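In the context of the question, a minimal (untested) sketch of how the handler could trigger a checkpoint from inside train.py is shown below. It assumes a Lightning Trainer object is in scope and uses Trainer.save_checkpoint(); the checkpoint filename and the model variable are placeholders for your own setup.
import signal
import pytorch_lightning as pl

trainer = pl.Trainer(gpus=1)  # your usual Trainer configuration

def handler(signum, frame):
    # Invoked roughly 30 seconds before Slurm's time limit (see --signal above).
    print('Signal handler got signal', signum, '- saving checkpoint')
    trainer.save_checkpoint('before_time_limit.ckpt')  # filename is a placeholder

signal.signal(signal.SIGUSR1, handler)

trainer.fit(model)  # 'model' is your LightningModule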

How to get list of xbee devices in network using digi xbee Python

I have XBee code running with a scheduler that is scheduled to run every minute. The scheduler queries the network and gets the XBee devices which are online. Below is the code:
def search_devices_in_nw():
    log_debug.error("Discovery process starting....")
    net = device.get_network()
    net.start_discovery_process(deep=True, n_deep_scans=1)
    while net.is_discovery_running():
        time.sleep(0.5)
    active_nodes = net.get_devices()
    print(active_nodes)

schedule.every(1).minutes.do(search_devices_in_nw)

while device.is_open():
    schedule.run_pending()
I have two devices in my XBee network, and running this code gives 2 MAC addresses, which is correct. But if I take one of the XBee devices offline, it still reports 2 MAC addresses online, which is incorrect.
If I stop my code and restart it, then it shows 1 MAC address online. I am not sure why the code is not behaving correctly. Can anyone please help me with this? Thanks.
Documentation page: https://xbplib.readthedocs.io/en/latest/user_doc/discovering_the_xbee_network.html#discovernetwork
Per the documentation, "All discovered XBee nodes are stored in the XBeeNetwork instance."
But you can also clear the list of nodes with the clear() method on the XBeeNetwork object before initiating a new discovery:
[...]
# Instantiate a local XBee object.
xbee = XBeeDevice(...)
[...]
# Get the XBee network object from the local XBee.
xnet = xbee.get_network()
# Discover XBee devices in the network and add them to the list of nodes.
[...]
# Clear the list of nodes.
xnet.clear()
[...]
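Applied to the question's scheduler, that could look roughly like this (untested; device, log_debug, schedule and time are assumed to be set up as in the original code):
def search_devices_in_nw():
    log_debug.error("Discovery process starting....")
    net = device.get_network()
    net.clear()  # drop previously discovered nodes so offline devices disappear
    net.start_discovery_process(deep=True, n_deep_scans=1)
    while net.is_discovery_running():
        time.sleep(0.5)
    active_nodes = net.get_devices()
    print(active_nodes)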

How to control Turtlebot3 in Gazebo using a web interface?

If I have a simulated TurtleBot3 robot in Gazebo, how could I link it and control its movement using a self-made HTML/Bootstrap web interface (website)? I have tried many tutorials but none of them have worked (maybe because they are all from a few years ago). I would appreciate any recent links or tutorials!
You can do so by installing Gazebo, GzWeb and the turtlebot3 package.
What is GzWeb?
GzWeb is usually installed on an Ubuntu server. GzWeb is a client for Gazebo which runs in a web browser. Once the server is set up and running, clients can interact with the simulation simply by accessing the server's URL in a web browser.
For gazebo and gzweb installation follow: http://gazebosim.org/tutorials?tut=gzweb_install&cat=gzweb
After creating a package using catkin, create a Python file turtlebot3_move_gz.py and add the following code to the script:
#!/usr/bin/env python3
import rospy
from geometry_msgs.msg import Twist

def talker():
    rospy.init_node('vel_publisher')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(2)
    move = Twist()        # the velocity command we will keep publishing
    move.linear.x = 0.5   # linear velocity in the x direction
    move.angular.z = 0.0  # angular velocity around the z axis
    while not rospy.is_shutdown():
        pub.publish(move)
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
Save the file
Next steps
In terminal:
Launch the TurtleBot3 world in the Gazebo simulator:
roslaunch turtlebot3_gazebo turtlebot3_world.launch
Start the GzWeb server in a new terminal. On the server machine, start gazebo or gzserver first; it's recommended to run in verbose mode so you see debug messages:
gzserver --verbose
Fire up another terminal to start npm:
npm start
Run your Python TurtleBot3 file from the catkin_ws directory:
rosrun name_of_the_package turtlebot3_move_gz.py
Open a browser that has WebGL and websocket support (i.e. most modern browsers) and point it to the IP address and port where the HTTP server is started, for example:
http://localhost:8080
To stop gzserver or the GzWeb servers, just press Ctrl+C in their terminals.
This is not something I have done before, but with a quick search I found some useful information.
You need to use rosbridge_suite, specifically rosbridge_server. The latter provides a low-latency bidirectional communication layer between a web browser and servers. This allows a website to talk to ROS using the rosbridge protocol.
Therefore, you need to have this suite installed and then what you can do is to use it to publish a Twist message from the website (based on the website UI controls) to Turtlebot's command topic.
Don't think of Gazebo in this equation. Gazebo is the simulator and uses ROS topics and services under the hood to simulate the robot. What you really need to focus on is how to make your website talk with ROS and publish a Twist message to the appropriate ROS topic.
I also found a JavaScript library from ROS called roslibjs that implements the rosbridge protocol specification. You can, therefore, use JavaScript to communicate with ROS and publish robot velocities to the TurtleBot.
An example excerpt from this tutorial (not tested):
<script type="text/javascript">
  // Assumes roslib.min.js has already been loaded on the page.
  // Connect to the rosbridge websocket server (port 9090 by default).
  var ros = new ROSLIB.Ros({
    url : 'ws://localhost:9090'
  });

  var cmdVel = new ROSLIB.Topic({
    ros : ros,
    name : '/cmd_vel',
    messageType : 'geometry_msgs/Twist'
  });

  var twist = new ROSLIB.Message({
    linear : {
      x : 0.1,
      y : 0.2,
      z : 0.3
    },
    angular : {
      x : -0.1,
      y : -0.2,
      z : -0.3
    }
  });

  cmdVel.publish(twist);
</script>
As you can see, the JavaScript code creates an instance of the Twist message with the linear and angular robot velocities and then publishes it to ROS's /cmd_vel topic. What you need to do is integrate this into your website, make the velocities in this code dynamic based on the website's UI controls, and start the rosbridge server.
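For reference, the rosbridge websocket server (which roslibjs connects to, port 9090 by default) is normally started with the launch file shipped in the rosbridge_server package:
roslaunch rosbridge_server rosbridge_websocket.launch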

Cannot send commands to remote machine using ssh2-python package

Problem
Hello, my problem is that I want to use the ssh2-python package to remotely read a bunch of files, but I can't seem to send commands to the remote host machine.
Originally I started with the paramiko package, and I did get that to work, but I am dealing with a lot of large files (which is why I can't bring them to the local machine) and it is a bit too slow. I am currently running Python 3.6.3 and ssh2-python 0.18.0.post1 and have tried changing versions of ssh2-python, but it didn't help.
Code
import socket
from ssh2.session import Session
host_ip=socket.gethostbyname('hostname')
sock=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host_ip,22))
session=Session()
session.handshake(sock)
print(session.userauth_list('username'))
session.userauth_password('username','password')
channel=session.open_session()
channel.execute('echo Hello')
Code Prints the Following
0
['publickey', 'gssapi-keyex', 'gssapi-with-mic', 'password']
0
0
Expectation/Thoughts
I expected the code to print Hello, but instead it just printed 0. It also printed 0 after the handshake and after the call to the authentication method, and I have no idea why. It seems like I am in contact with the remote machine, as it did print out which authentication methods it would accept, but it doesn't appear that I am actually logged in and able to do anything. I would really like to use this package because, from what I have read online, it is significantly faster than paramiko (alternatives would also be welcome), but I can't seem to figure out what is going on here.
Please help and thanks in advance!
You may in fact be connected and executing commands, but channel.execute('echo Hello') returns 0 (its status code), not the command's output.
If you want to read your response from the server:
channel.execute('echo Hello')
size, data = channel.read()
while size:
    size, dt = channel.read()
    data += dt
print(data.decode())
The API documentation for ssh2-python is rather sparse, but the examples should get you through some of the basics: https://github.com/ParallelSSH/ssh2-python/tree/master/examples
A complete version of the above is in example_echo.py
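If you also want the command's exit status (as opposed to the 0 that execute() returns), the pattern used in the ssh2-python examples is to drain the output, wait for EOF, close the channel and then query it. A short sketch:
channel.execute('echo Hello')
size, data = channel.read()
while size > 0:
    print(data.decode())
    size, data = channel.read()
channel.wait_eof()    # wait for the remote end to finish sending output
channel.close()       # close our side of the channel
channel.wait_closed() # wait for the remote end to acknowledge the close
print("Exit status:", channel.get_exit_status())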

Need to be able to Insert/Delete New Groups in openfire via HTTP or MySQL

I know how to insert a new group via MySQL, and it works, to a degree. The problem is that the database changes are not loaded into memory if you insert the group manually. Sending a HUP signal to the process does work, but it is kludgy and a hack. I desire elegance :)
What I am looking to do, if possible, is to make changes (additions/deletions/modifications) to a group via MySQL and then send an HTTP request to the Openfire server so that it reads the new changes. Alternatively, add/delete/modify groups in a way similar to how the User Service works.
If anyone can help I would appreciate it.
It seems to me that if sending a HUP signal works for you, then that's actually quite a simple, elegant and efficient way to get Openfire to read your new group, particularly if you do it with the following command on the Openfire server (and assuming it's running a Linux/Unix OS):
pkill -f -HUP openfire
If you still want to send an HTTP request to prompt Openfire to re-read the groups, the following Python script should do the job. It is targeted at Openfire 3.8.2, and depends on Python's mechanize library, which in Ubuntu is installed with the python-mechanize package. The script logs into the Openfire server, pulls up the Cache Summary page, selects the Group and Group Metadata Cache options, enables the submit button and then submits the form to clear those two caches.
#!/usr/bin/python
import mechanize
import cookielib
# Customize to suit your setup
of_host = 'http://openfire.server:9090'
of_user = 'admin_username'
of_pass = 'admin_password'
# Initialize browser and cookie jar
br = mechanize.Browser()
br.set_cookiejar(cookielib.LWPCookieJar())
# Log into Openfire server
br.open(of_host + '/login.jsp')
br.select_form('loginForm')
br.form['username'] = of_user
br.form['password'] = of_pass
br.submit()
# Select which cache items to clear in the Cache Summary page
# On my server, 13 is Group and 14 is Group Metadata Cache
br.open(of_host + '/system-cache.jsp')
br.select_form('cacheForm')
br.form['cacheID'] = ['13','14']
# Activate the submit button and submit the form
c = br.form.find_control('clear')
c.readonly = False
c.disabled = False
r = br.submit()
# Uncomment the following line if you want to view results
#print r.read()