Capybara drag_to method works on a Windows machine but does not work on a Linux one - google-chrome

I test a Ruby project with the Cucumber-Capybara framework. I created a test that reorders list items via the drag-and-drop method drag_to: https://rubydoc.info/github/teamcapybara/capybara/master/Capybara/Node/Element:drag_to
Given('Element {string} drag to {string}') do |elem1, elem2|
  div_groups = all(:css, "div.#{$divgroup}")
  source = div_groups[elem1.to_i - 1]
  target = div_groups[elem2.to_i - 1]
  source.drag_to(target)
end
This code works perfectly on the Windows machine (Win 7 + Chrome) but doesn't work on the Linux machine (a Google virtual machine).
Can anybody help?
Update:
video recording from the Windows machine (1.0 s delay between drag operations): https://disk.yandex.ru/d/F7Odxc9tuGvDPA
video recording from the Linux machine: https://disk.yandex.ru/d/xMEkHyKvRtbvTg (you can see that the drag operation doesn't happen)
Update 2:
on Linux the chromedriver is 64-bit, but on the Windows PC it is 32-bit
Update 3:
I put forward a hypothesis and decided to check it. I changed the code as follows:
Given('Element {string} drag to {string}') do |elem1, elem2|
  div_groups = all(:css, "div.#{$divgroup}")
  source = div_groups[elem1.to_i - 1]
  target = div_groups[elem2.to_i - 1]
  # source.drag_to(target, delay: 1.0)  # the Capybara call that fails on Linux

  # Drive the underlying Selenium driver directly instead of Capybara's drag_to
  selenium_webdriver = page.driver.browser
  selenium_webdriver.action.click_and_hold(source.native).perform
  sleep 0.5
  selenium_webdriver.action.move_to(target.native, 0, 10).release.perform
  sleep 0.5
end
And it works!
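In case it helps anyone, the working approach can be folded into a reusable helper. This is a minimal sketch (the helper name and the default pause are illustrative, not part of the original step definition):

def drag_with_pauses(source, target, pause: 0.5)
  driver = page.driver.browser                       # underlying Selenium driver
  driver.action.click_and_hold(source.native).perform
  sleep pause                                        # let the page register the hold
  driver.action.move_to(target.native).perform
  sleep pause                                        # let dragover handlers fire
  driver.action.release.perform
end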

Related

portaudio is lagging while reading/recording

I have compiled and installed portaudio19 but it lags badly while recording.
It can be reproduced using:
pamon | pacat -p
or with Python code using PyAudio:
import pyaudio as pa

CHUNK = 1024 * 2
FORMAT = pa.paInt16
CHANNELS = 1
RATE = 8000

p = pa.PyAudio()
stream = p.open(
    format=FORMAT,
    channels=CHANNELS,
    rate=RATE,
    input=True,
    output=True,
    frames_per_buffer=CHUNK,
)

while True:
    data = stream.read(CHUNK)   # capture a chunk from the input device
    stream.write(data)          # play it straight back out
I have tried a number of settings and tweaks related to PortAudio latency, and also tried different things that use PortAudio (PyAudio etc.), but with no luck.
When using ALSA or PulseAudio directly there is no latency, so everything else should be OK.
I'm using a generic 5.10 kernel on Ubuntu 18.04.6.
My goal is to use PyAudio; since the other Python audio modules also use PortAudio, I see no alternative.
PS: pamon/pacat have nothing to do with PortAudio, which confuses me, because I have tried Audacity with Pulse and it works just fine.
ALSA version 1.1.3
PortAudio stable from Git, v190700 (2021-04-06)
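Since ALSA/Pulse work fine when used directly, one thing worth checking is which PortAudio host API and device PyAudio actually opens by default. A minimal sketch of forcing the "pulse" device (my suggestion, not from the original post; device names vary per system):

import pyaudio as pa

p = pa.PyAudio()

# List every device PortAudio sees and remember the "pulse" one, if any
pulse_index = None
for i in range(p.get_device_count()):
    info = p.get_device_info_by_index(i)
    print(i, info["name"], info["maxInputChannels"], info["maxOutputChannels"])
    if "pulse" in info["name"].lower():
        pulse_index = i

# Open the loopback stream explicitly on the pulse device
stream = p.open(
    format=pa.paInt16,
    channels=1,
    rate=8000,
    input=True,
    output=True,
    input_device_index=pulse_index,
    output_device_index=pulse_index,
    frames_per_buffer=1024 * 2,
)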

How to control Turtlebot3 in Gazebo using a web interface?

If I have a simulated Turtlebot3 robot in Gazebo, how could I link it and control its movement using a self-made HTML/Bootstrap web interface (website)? I have tried many tutorials, but none of them have worked (maybe because they are all from a few years ago). I would appreciate any recent links or tutorials!
You can do so by installing Gazebo, GzWeb, and the turtlebot3 package.
What is GzWeb?
GzWeb is usually installed on an Ubuntu server. It is a client for Gazebo that runs in a web browser: once the server is set up and running, clients can interact with the simulation simply by accessing the server's URL in a browser.
For Gazebo and GzWeb installation follow: http://gazebosim.org/tutorials?tut=gzweb_install&cat=gzweb
After creating a package with catkin, create a Python file turtlebot3_move_gz.py and add the following code to it:
#!/usr/bin/env python3
import rospy
from geometry_msgs.msg import Twist

def talker():
    rospy.init_node('vel_publisher')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(2)
    move = Twist()           # the velocity command to publish
    move.linear.x = 0.5      # linear velocity in the x direction
    move.angular.z = 0.0     # angular velocity around the z axis
    while not rospy.is_shutdown():
        pub.publish(move)
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
Save the file
Next steps
In a terminal:
Launch the turtlebot3 Gazebo simulation:
roslaunch turtlebot3_gazebo turtlebot3_world.launch
Start the GzWeb server in a new terminal. On the server machine, start gazebo or gzserver first; it's recommended to run in verbose mode so that you see debug messages:
gzserver --verbose
Fire up another terminal to start npm:
npm start
Run your Python turtlebot3 file from the catkin_ws directory:
rosrun name_of_the_package turtlebot3_move_gz.py
Open a browser that has WebGL and websocket support (i.e., most modern browsers) and point it to the IP address and port where the HTTP server is running, for example:
http://localhost:8080
To stop gzserver or the GzWeb servers, just press Ctrl+C in their terminals.
This is not something I have done before, but with a quick search I found some useful information.
You need to use rosbridge_suite, specifically rosbridge_server. The latter provides a low-latency bidirectional communication layer between a web browser and servers, which allows a website to talk to ROS using the rosbridge protocol.
Therefore, you need to have this suite installed; then you can use it to publish a Twist message from the website (based on the website's UI controls) to the TurtleBot's command topic.
Don't think of Gazebo in this equation. Gazebo is the simulator; it uses ROS topics and services under the hood to simulate the robot. What you really need to focus on is how to make your website talk to ROS and publish a Twist message to the appropriate ROS topic.
I also found a JavaScript library for ROS called roslibjs that implements the rosbridge protocol specification. You can therefore use JavaScript to communicate with ROS and publish robot velocities to the TurtleBot.
An example excerpt from this tutorial (not tested):
<script type="text/javascript">
  // Connection setup (not in the original excerpt): rosbridge_server
  // listens on ws://localhost:9090 by default
  var ros = new ROSLIB.Ros({
    url : 'ws://localhost:9090'
  });

  var cmdVel = new ROSLIB.Topic({
    ros : ros,
    name : '/cmd_vel',
    messageType : 'geometry_msgs/Twist'
  });

  var twist = new ROSLIB.Message({
    linear : {
      x : 0.1,
      y : 0.2,
      z : 0.3
    },
    angular : {
      x : -0.1,
      y : -0.2,
      z : -0.3
    }
  });

  cmdVel.publish(twist);
</script>
As you can see, the JavaScript code creates an instance of the Twist message with the robot's linear and angular velocities and then publishes it to ROS's /cmd_vel topic. What you need to do is integrate this into your website, make the velocities dynamic based on the website's UI controls, and start the rosbridge server.
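For completeness: with a standard rosbridge_suite installation on ROS 1, the rosbridge server is typically started with
roslaunch rosbridge_server rosbridge_websocket.launch
which by default listens on ws://localhost:9090 - the URL assumed by the connection setup in the snippet above.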

WP8 app localization does not work

I'm working on a Windows Phone 8 application. I have a lot of multilanguage resources, but when I tried to test my app with the "ru-RU" locale, it loaded in English only.
I have tried to set
Thread.CurrentThread.CurrentUICulture
    = Thread.CurrentThread.CurrentCulture
    = new CultureInfo("ru-RU");
manually, but when I check AppResources.ResourceLanguage it returns "en".
When I set CultureInfo("ru"), everything works fine.
You have to set all supported locales in the .csproj. Example: <SupportedCultures>de-DE;es-ES;</SupportedCultures>.
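For reference, a minimal sketch of where that element lives in the .csproj (the culture list is illustrative; include every culture your app ships resources for, e.g. ru and/or ru-RU for the case above):

<PropertyGroup>
  <SupportedCultures>ru;ru-RU;de-DE;es-ES</SupportedCultures>
</PropertyGroup>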
Localizing a Windows Phone app Step by Step

Web Audio node connected to two gain nodes, connected to destination, duplicates speed / pitch

As the title says: if I have an audio node that emits sound and I connect it to two separate GainNodes, which in turn are connected to the AudioContext destination, the sound plays at double speed / double pitch (as if half the samples were sent to one gain node and half to the other, with the playback time halved as well).
I have created a handy jsfiddle here; just drag your sound files onto the black rectangle canvas and listen.
// audioContext: Web Audio context
// decoded: decoded audioBuffer
// gainNode1, gainNode2: gain nodes
var bSrc = audioContext.createBufferSource();
bSrc.connect(gainNode1);
bSrc.connect(gainNode2);
gainNode1.connect(audioContext.destination);
gainNode2.connect(audioContext.destination);
bSrc.buffer = decoded;
bSrc.loop = false;
// You'll hear two double-speed buffers playing in unison
bSrc.start(0);
Is that by design? What I would like is to exactly "duplicate" the sound (it will be sent down two different routes; the fiddle is just a proof of concept for a bigger project).
Edit:
I tested this on Chrome 24.0.1312.56 / Ubuntu 12.10 and the behaviour is present.
The behaviour is also present on Chrome 24.0.1312.68 / Ubuntu 12.10.
On Chrome 24.0.1312.57 / Mac OS X the Audio API works well and this behaviour is not present.
Could it be a Linux-only issue?
Sounds like a Linux implementation issue. It works for me in Chrome on OS X.
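If anyone wants to verify this without listening by ear, here is a small diagnostic sketch (my addition, not from the original thread; it uses OfflineAudioContext, which postdates the Chrome builds above). It renders the same two-gain graph offline and reports where the audio stops; with the double-speed behaviour the sound would end about halfway through the buffer. `decoded` is the decoded AudioBuffer from the question:

var offline = new OfflineAudioContext(
  decoded.numberOfChannels,
  Math.ceil(decoded.duration * decoded.sampleRate),
  decoded.sampleRate
);
var src = offline.createBufferSource();
var g1 = offline.createGain();
var g2 = offline.createGain();
src.buffer = decoded;
src.connect(g1);
src.connect(g2);
g1.connect(offline.destination);
g2.connect(offline.destination);
src.start(0);
offline.startRendering().then(function (rendered) {
  var data = rendered.getChannelData(0);
  var last = data.length - 1;
  while (last > 0 && Math.abs(data[last]) < 1e-4) last--;
  // On a correct implementation the audible part fills the whole buffer
  console.log('audible until', last / rendered.sampleRate,
              'of', rendered.duration, 'seconds');
});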

Can an AIR application find out what OS it's running on?

I was wondering if an AIR application can find out what OS it's running on, e.g. Windows XP, Vista, Mac OS, etc. Also, is there a way for it to find out the current OS user name? Thanks.
As TML stated, flash.system.Capabilities.os will get you the operating system. I don't know of any direct way to get the user name, but AIR's File class has a userDirectory property that gives you a reference to the logged-in user's home directory. The nativePath of that object ought to end with the logged-in user's name.
// The user directory path normally ends with the user name, e.g.
//   XP   : C:\Documents and Settings\userName
//   Mac  : /Users/userName
//   *nix : /home/username or /home/groupname/username
import flash.filesystem.File;
import flash.system.Capabilities;

var os:String = Capabilities.os;
var usr:String = File.userDirectory.nativePath;
var sep:String = File.separator;
if (usr.charAt(usr.length - 1) == sep)
    usr = usr.substring(0, usr.length - 1); // remove trailing separator
usr = usr.substring(usr.lastIndexOf(sep) + 1);
trace(usr);
Test with various OSes and check for edge cases before using this in production code (e.g., cases where the user name is not the last part of the user directory; I am not aware of any, but just in case).
Check into flash.system.Capabilities - I believe it has what you're looking for.
Actually, it turns out this is a duplicate question: Get Current Operating System In Adobe Air