I'm getting an error when connecting to my remote node and launching Chrome via Selenium.
My procedure:
start the hub
start the node with this .bat:
java -jar selenium-server-standalone-2.41.0.jar -role node -hub http://"hubaddress":4444/register/grid -Dwebdriver.chrome.driver="C:\Users\me\Downloads\chromedriver.exe"
then I run this code:
DesiredCapabilities dc = DesiredCapabilities.chrome();
WebDriver x = new RemoteWebDriver(new URL("http://'localhost':444/wd/hub"), dc);
which yields result: ...Path to driver executable must be set by the webdriver.chrome.driver system property.
I have also tried:
starting chromedriver on the node
changing the localhost URL parameter in RemoteWebDriver to the address of the node at port 9515 (the port used by chromedriver)
Thank you so much for your time, guys. I'm going insane troubleshooting this; Firefox, by comparison, works fine.
You are getting that for two possible reasons.
Reason 1
Your C:\Users\me\Downloads\chromedriver.exe does not exist. Make sure that really is the path to the executable.
Reason 2
The double quotes around the path might be causing it. Take out the quotes so it reads: -Dwebdriver.chrome.driver=c:\users\me\downloads\chromedriver.exe
Other than that, everything you have there is just fine. Make sure to address both of these reasons, and you should be golden.
You don't need to add "/wd/hub" to the URL when using RemoteWebDriver.
Also, I don't think the single quotes in your URL are needed either:
http://'localhost':444/wd/hub
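For illustration, here is a minimal sketch of the client side with the quoting cleaned up (the class name, hub host, and port are placeholders for your own setup, and whether you keep /wd/hub depends on how your grid is configured):

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSmokeTest {
    public static void main(String[] args) throws Exception {
        // Plain host name with no quotes around it; 4444 is the usual hub port.
        DesiredCapabilities dc = DesiredCapabilities.chrome();
        WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), dc);
        driver.get("http://example.com");  // quick sanity check that a Chrome session was started
        driver.quit();
    }
}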
Oh god. The slashes were the wrong way.
It should have been:
java -jar selenium-server-standalone-2.41.0.jar -role node -hub http://"hubaddress":4444/register/grid -Dwebdriver.chrome.driver="C:/Users/me/Downloads/chromedriver.exe"
I have tried to implement this sample: https://github.com/Autodesk-Forge/forge-bim360-data.connector.dashboard
I have updated this part:
npm install
set FORGE_CLIENT_ID=<<YOUR CLIENT ID FROM DEVELOPER PORTAL>>
set FORGE_CLIENT_SECRET=<<YOUR CLIENT SECRET>>
set FORGE_CALLBACK_URL=<<your callback url of Forge, e.g. http://localhost:3000/oauth/callback>>
set DC_CALLBACK_URL=<<your ngrok address here, e.g. http://abcd1234.ngrok.io/job/callback>>
I am getting the error: 400 - Unknown or invalid client_id.
Firstly, I rarely use Windows these days. I simply copied the guidance on setting environment variables from other samples; most of the time I work in debug mode (setting the environment variables in launch.json).
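For reference, here is a minimal sketch of that debug-mode approach; the program path and all values are placeholders, not taken from this sample's repository:

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch dashboard",
      "program": "${workspaceFolder}/start.js",
      "env": {
        "FORGE_CLIENT_ID": "your-client-id",
        "FORGE_CLIENT_SECRET": "your-client-secret",
        "FORGE_CALLBACK_URL": "http://localhost:3000/oauth/callback",
        "DC_CALLBACK_URL": "http://abcd1234.ngrok.io/job/callback"
      }
    }
  ]
}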
Checking the README again, I found the wording is:
Windows (use Node.js command line from Start menu)
i.e. it asks you to enter those commands in the Node.js command line, not in the VS Code terminal! That is why it always reports that the client id is not defined: the variables were never set in the environment at all.
The correct way is to open the Node.js command line and run the commands there.
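In other words, something like the following, all typed into one Node.js command prompt session (the values are the README's placeholders, and the start command may differ for your setup):

npm install
set FORGE_CLIENT_ID=<<YOUR CLIENT ID FROM DEVELOPER PORTAL>>
set FORGE_CLIENT_SECRET=<<YOUR CLIENT SECRET>>
set FORGE_CALLBACK_URL=http://localhost:3000/oauth/callback
set DC_CALLBACK_URL=http://abcd1234.ngrok.io/job/callback
rem "set" only affects the current window, so start the app from this same session
npm start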
I'm working on a Perl 6 project but am having difficulty connecting to MySQL. Even when using the DBIish (or perl6.org tutorial) example code, the connection fails. Any suggestions or advice are appreciated! The user credentials have been confirmed to be accurate.
I'm running this on Windows 10 with MySQL Server 8.0 and standard Perl 6 with Rakudo Star. I have tried modifying the connection string in numerous ways, like :$password, :password<>, :password(), etc., but can't get a connection established. I should also note that I have the ODBC, C, C++, and .NET connectors installed.
#!/usr/bin/perl6
use v6.c;
use lib 'lib';
use DBIish;
use Register::User;
# Windows support
%*ENV<DBIISH_MYSQL_LIB> = "C:/Program Files/MySQL/MySQL Server 8.0/liblibmysql.dll"
if $*DISTRO.is-win;
my $dbh = DBIish.connect('mysql', :host<localhost>, :port(3306), :database<dbNameHere>, :user<usernameHere>, :password<pwdIsHere>) or die "couldn't connect to database";
my $sth = $dbh.prepare(q:to/STATEMENT/);
SELECT *
FROM users
STATEMENT
$sth.execute();
my @rows = $sth.allrows();
for @rows { .print }
say @rows.elems;
$sth.finish;
$dbh.dispose;
This should connect to the DB, run a query, and then print out each resulting row. What actually happens is that the application hits the 'die' message every time.
This is more of a workaround, but being unable to use a DB is crippling. Even when trying NativeLibs I couldn't get a connection via DBIish, so instead I have opted for DB::MySQL, which is proving to be quite helpful. With a few lines of code this module has your DB needs covered:
use DB::MySQL;
my $mysql = DB::MySQL.new(:database<databaseName>, :user<userName>, :password<passwordHere>);
my @users = $mysql.query('select * from users').arrays;
for @users { say "user #$_[0]: $_[1] $_[2]"; }
#Results would be:
#user #1: FirstName LastName
#user #2: FirstName LastName
#etc...
This will print out a line for each user formatted as shown above. It's not as familiar as DBIish, but this module gets the job done as needed. There's plenty more you can do with it too, so I highly recommend reading the docs.
According to DBIish GitHub issue 127,
the environment variable DBIISH_MYSQL_LIB was removed. I don't know if anyone brought it back.
However, if you add the library's directory to your PATH and the file is named mysql.dll, it will work. Not a good result for the scientific method.
So more testing is needed, and perhaps something like:
C:\Program Files\MySQL\MySQL Server 8.0\lib>mklink mysql.dll .\libmysql.dll
Obviously you can create your own lib directory, add it to your PATH, and then put this symlink in that directory; a sketch follows.
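A rough sketch of that, assuming the default MySQL 8.0 install path (run it in an elevated command prompt, since mklink usually requires one; the C:\raku-libs name is just an example):

mkdir C:\raku-libs
mklink C:\raku-libs\mysql.dll "C:\Program Files\MySQL\MySQL Server 8.0\lib\libmysql.dll"
rem add the new directory to PATH, here just for the current session
set PATH=C:\raku-libs;%PATH%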
Hope this helps. I've spent hours on this.
EDIT: Still spending time - accounting later.
Something very transitory is going on. I reset the machine (perhaps I should always do this from now on) and still got the missing mysql.dll errors. I tried going into the MySQL lib directory and executing raku from there: it worked. I changed directories: it didn't work.
I launched an administrator cmd and tried the raku command from my home directory: it worked. OK, not good, but perhaps consistent. I launched a non-admin cmd and tried it from the MySQL lib directory: it worked. And just for giggles, I tried it outside of that directory: it worked.
Now I can't get it not to work. I will explore NativeLibs::Searcher as Valle Lukas suggested!
Maybe the example in the DBIish repository is not valid anymore.
The DBIISH_MYSQL_LIB environment variable seems to have been replaced by NativeLibs::Searcher in commit 9bc4191.
Looking at NativeLibs::Searcher may help to find the root cause of the problem.
In the new version 1.3 of Orion, if I make a GET HTTP request to the root of the server, the broker immediately crashes.
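For example, assuming the default port 1026, this is enough to trigger the crash:
curl -v http://localhost:1026/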
I set the logs to DEBUG level and the trace levels to 0-255, and the execution seems to be correct. There is no trace that helps us understand what is happening before the crash.
We tried different Orion installations (Docker image from Docker Hub, on a CentOS VM).
Does anyone know what is happening?
Thanks in advance.
I just did a quick test and I can confirm that this is a bug. A recent change in the detection of the API version uses the URI path without first checking whether the URI path is empty. The result is a segmentation fault. The fix is more than easy; it will be made in the next few days and included in the next release (1.4.0).
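Just to illustrate the kind of guard that was missing (a simplified sketch, not the actual Orion source), the version-detection code has to handle an empty URI path before inspecting it:

#include <string>

enum ApiVersion { API_V1, API_V2 };

// Simplified illustration: a request to "/" can arrive with an empty path
// component, so check for that before looking at the path's contents.
ApiVersion apiVersionGet(const std::string& path)
{
  if (path.empty())
    return API_V1;                        // fall back to a default instead of crashing
  if (path.compare(0, 3, "/v2") == 0)
    return API_V2;
  return API_V1;
}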
I installed the XDebug package at MAMP/bin/php5.2/lib/php/extensions/no-debug-non-zts-20060613
I put the following into my php.ini file:
zend_extension="/Applications/MAMP/bin/php5.2/lib/php/extensions/no-debug-non-zts-20060613/xdebug.so"
xdebug.remote_enable = On
xdebug.remote_autostart = 1
xdebug.remote_host = localhost
xdebug.remote_port = 9999
I disabled the zend optimizer.
I set the proper port # in MacGDBp.
I do get a proper stack trace from the command line.
What I'd like to do, though, is load a page in Firefox and debug with MacGDBp.
Shouldn't MacGDBp be reading and parsing whatever's coming through the specified port?
Can anyone tell me what I'm missing?
Thanks!
Well, you are a bit unspecific about your concrete setup, but there seem to be a few odd things in your settings.
I am not using a Mac and I don't know MacGDBp... but the name MacGDBp suggests that it uses the old GDB protocol, whereas Xdebug 2 uses the new DBGp protocol by default.
You should make that explicit by setting xdebug.remote_handler to your preferred protocol. In my case, as I use the new protocol, I feed it 'dbgp'.
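As a sketch, your php.ini block would then look something like this (paths and port are the ones you posted; only the remote_handler line is new):

zend_extension="/Applications/MAMP/bin/php5.2/lib/php/extensions/no-debug-non-zts-20060613/xdebug.so"
xdebug.remote_enable = On
xdebug.remote_autostart = 1
xdebug.remote_host = localhost
xdebug.remote_port = 9999
xdebug.remote_handler = dbgp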
Here you find some information: http://www.xdebug.org/docs/remote
Also it could be that your firewall is blocking the port.
Maybe that'll do it, otherwise tell us more about the symptoms.
Best
Raffael
I was having the same problem.
What I did was:
install the xdebug-toggler plugin for Safari
load the page I was testing
enable the plugin
reload the page
And suddenly it worked.
I have a webapp that segfaults when the database is restarted and it tries to use the old connections. Running it under gdb --args apache -X leads to the following output:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1212868928 (LWP 16098)]
0xb7471c20 in mysql_send_query () from /usr/lib/libmysqlclient.so.15
I've checked that the drivers and database are all up to date (DBD::mysql 4.0008, MySQL 5.0.32-Debian_7etch6-log).
Annoyingly I can't reproduce this with a trivial script:
use DBI;
use Test::More tests => 2;
my $dbh = DBI->connect( "dbi:mysql:test", 'root' );
sub test_db {
my ($number) = $dbh->selectrow_array("select 1 ");
return $number;
}
is test_db, 1, "connected to db";
warn "restart db now";
getc;
is test_db, 1, "connected to db";
Which gives the following:
ok 1 - connected to db
restart db now at dbd-mysql-test.pl line 23.
DBD::mysql::db selectrow_array failed: MySQL server has gone away at dbd-mysql-test.pl line 17.
not ok 2 - connected to db
# Failed test 'connected to db'
# at dbd-mysql-test.pl line 26.
# got: undef
# expected: '1'
This behaves correctly, telling me why the request failed.
What stumps me is that it is segfaulting, which it shouldn't do. As it only appears to happen when the whole app is running (which uses DBIx::Class) it is hard to reduce it to a test case.
Where should I start to look to debug this? Has anyone else seen this?
UPDATE: further prodding showed that its being under mod_perl was a red herring. Having reduced it to a simple test script, I've now posted it to the DBI mailing list. Thanks for your answers.
What this probably means is that there's a difference between your mod_perl environment and the one you were testing via your script. Some things to check:
Was your mod_perl compiled with the same version of Perl?
Are the @INC paths the same for both? (See the sketch below.)
Are you using threads in your mod_perl setup? I don't believe DBD::mysql is completely thread-safe.
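A quick, hedged way to compare the two @INC lists (the mod_perl side assumes you can add a line to a startup script or handler):

# From the command line:
perl -e 'print "$_\n" for @INC'

# From inside the running app (startup.pl or any handler), dump the same
# list to the Apache error log:
print STDERR "mod_perl \@INC: $_\n" for @INC;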
I've seen this problem, but I'm not sure it had the same cause as yours. Are you by chance using a certain module for sending mails (forgot the name, sorry) from your application? When we had the problem in a project, after days of debugging we found that this mail module was doing strange things with open file descriptors, then forked off another process which called the console tool sendmail, which again did strange things with file descriptors. I guess one of the file descriptors it messed around with was the connection to the database, but I'm still not sure about that. The problem disappeared when we switched to another module for sending mails. Maybe it's worth a look for you too.
If you're getting a segfault, do you have a core file created? If not, check ulimit -c. If that returns 0, your system won't create core files and you'll have to change that. If you do have a core file, you can use gdb or similar tools to debug it. It's not particularly fun, but it's possible. The start of the command will look something like:
gdb /usr/bin/httpd core
There are plenty of tutorials for debugging core files scattered about the Web.
Update: Just found a reference for ensuring you get core dumps from mod_perl. That should help.
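A hedged sketch of the whole sequence (the httpd path may differ on your system):

ulimit -c unlimited       # allow core files in this shell before starting apache
gdb /usr/bin/httpd core   # open the core file alongside the binary that produced it
(gdb) bt                  # then ask for a backtrace to see where mysql_send_query blew up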
This is a known problem in old DBD::mysql. Upgrade it (4.008 is not up to date).
There's a simple test script attached to https://rt.cpan.org/Public/Bug/Display.html?id=37027
that will trigger this bug.
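If it helps, upgrading is typically a one-liner, assuming you install from CPAN rather than from your distribution's packages:

cpan DBD::mysql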