Flutter geolocator package not retrieving location - google-maps

I've opened an issue on the geolocator repository: https://github.com/BaseflowIT/flutter-geolocator/issues/199
It describes the geolocator package not retrieving the location. They recently released a new version, 3.0.0, and I've had nothing but problems since.
I am using the correct dependencies:
dependencies:
  geolocator: '^3.0.0'
with targetSdkVersion 28 and compileSdkVersion 28 in android/app/build.gradle.
Flutter doctor gives me this:
[✓] Flutter (Channel stable, v1.0.0, on Mac OS X 10.14.3 18D109, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
[✓] iOS toolchain - develop for iOS devices (Xcode 10.1)
[✓] Android Studio (version 3.2)
[✓] IntelliJ IDEA Community Edition (version 2018.2.5)
[✓] Connected device (1 available)
• No issues found!
Once I call await Geolocator().getCurrentPosition(desiredAccuracy: LocationAccuracy.high); the call just never returns, and I get this output in the terminal:
I/DynamiteModule( 4233): Considering local module com.google.android.gms.maps_dynamite:0 and remote module com.google.android.gms.maps_dynamite:221
I/DynamiteModule( 4233): Selected remote version of com.google.android.gms.maps_dynamite, version >= 221
V/DynamiteModule( 4233): Dynamite loader version >= 2, using loadModule2NoCrashUtils
W/System ( 4233): ClassLoader referenced unknown path:
W/System ( 4233): ClassLoader referenced unknown path: /data/user_de/0/com.google.android.gms/app_chimera/m/00000030/n/armeabi-v7a
W/System ( 4233): ClassLoader referenced unknown path: /data/user_de/0/com.google.android.gms/app_chimera/m/00000030/n/armeabi
I/Google Maps Android API( 4233): Google Play services client version: 12451000
I/Google Maps Android API( 4233): Google Play services package version: 15090018
W/DynamiteModule( 4233): Local module descriptor class for com.google.android.gms.googlecertificates not found.
I/DynamiteModule( 4233): Considering local module com.google.android.gms.googlecertificates:0 and remote module com.google.android.gms.googlecertificates:4
I/DynamiteModule( 4233): Selected remote version of com.google.android.gms.googlecertificates, version >= 4
W/System ( 4233): ClassLoader referenced unknown path: /data/user_de/0/com.google.android.gms/app_chimera/m/0000002f/n/armeabi-v7a
W/System ( 4233): ClassLoader referenced unknown path: /data/user_de/0/com.google.android.gms/app_chimera/m/0000002f/n/armeabi
I have spent a considerable amount of time on this. I am new to Flutter and suspect I may be missing something small to make it work.

I was facing the same issue, but my app started working after adding this:
Geolocator geolocator = Geolocator()..forceAndroidLocationManager = true;
Hope it works for you.
Future<Position> getCurrentLocation() async {
  try {
    // Use the configured instance so forceAndroidLocationManager takes effect.
    Geolocator geolocator = Geolocator()..forceAndroidLocationManager = true;
    Position position = await geolocator.getCurrentPosition(
      desiredAccuracy: LocationAccuracy.best,
    );
    return position;
  } catch (err) {
    print(err.message);
    return null;
  }
}
All the best.

On iOS you'll need to add the following entries to your Info.plist
file (located under ios/Runner) in order to access the device's
location. Simply open your Info.plist file and add the following:
<key>NSLocationWhenInUseUsageDescription</key>
<string>This app needs access to location when open.</string>
<key>NSLocationAlwaysUsageDescription</key>
<string>This app needs access to location when in the background.</string>
<key>NSLocationAlwaysAndWhenInUseUsageDescription</key>
<string>This app needs access to location when open and in the background.</string>

I was facing the same issue; it works now. I just added forceAndroidLocationManager: true to Geolocator.getCurrentPosition:
Position position = await Geolocator.getCurrentPosition(
  forceAndroidLocationManager: true,
  desiredAccuracy: LocationAccuracy.lowest,
);

Hey @wagnerdelima, I had the same challenge and solved it as follows:
Change targetSdkVersion 28 and compileSdkVersion 28 to targetSdkVersion 27 and compileSdkVersion 27, and change geolocator: '^3.0.0' to geolocator: ^2.1.1, as below:
dependencies:
  flutter:
    sdk: flutter
  geolocator: ^2.1.1
  permission_handler: "2.1.2"
  google_api_availability: "1.0.4"
This was a result of the caret ^: it was pulling in the latest google_api_availability, which has been migrated to AndroidX.
All the best !!

I was struggling with this problem for about 2 days before I figured out the following (on a Mac):
The iOS Simulator did not work for me at all.
Connect a real iOS device, then open the iOS module in Xcode as follows:
Right-click on the project folder
Choose Flutter -> Open iOS module in Xcode
In Xcode, select TARGETS -> Runner
In Signing & Capabilities:
Set the Team
Change the Bundle Identifier to a unique one (e.g. co.test.app456)
Run your app in Xcode (it will work)
Close Xcode and go back to your project in Flutter
Select the real iOS device and run your app

I had the same issue, i.e. it was not returning anything but kept fetching the location, so I just changed this
forceAndroidLocationManager: true
to this
forceAndroidLocationManager: false
Geolocator.getCurrentPosition(
  desiredAccuracy: LocationAccuracy.bestForNavigation,
  forceAndroidLocationManager: false,
).then((position) {
  print(position);
});
Now it's working fine. The problem is with the new update, because the old package version worked fine.

First things first, check the device you are using and your Flutter version.
I had the same issue with only one device, Google's Nexus 6P, on Flutter version 1.9.1+hotfix.6. The problem is with the geolocator package, because all other devices work fine.
I recommend new_geolocation, as it has resolved the issue for all devices so far.
Secondly, on how I used it: the readme file didn't help, so I went through the example project on GitHub and used the same methods from that example for the same results.
I hope it helps.

This solution will work; I was facing the issue on Android:
Geolocator geoLocator = Geolocator()..forceAndroidLocationManager = true;
Position position = await geoLocator.getCurrentPosition(desiredAccuracy: LocationAccuracy.low);
print(position);

I had the same problem.
I was able to solve it by:
replacing the current API key with a new one
clearing out all the minor bugs in the file, e.g. writing the remaining @override functions, adding 'const' and 'final' keywords, etc.
Try these; they might help.

If you are using the iOS Simulator, you need to set the simulated Location to something other than None in order for the getCurrentPosition method to return a position.

I had this problem until I tried setting the accuracy to low. Then it worked. I believe you may need extra permissions for high-accuracy location. It's funny that only low accuracy works for me, none of the others. Also, forceAndroidLocationManager has to be set to false for it to work on my iOS simulator.
This code worked for me:
Geolocator.getCurrentPosition(
  desiredAccuracy: LocationAccuracy.low,
  forceAndroidLocationManager: false,
).then((currloc) {});

Related

CakePHP 3.9 Application class not found

After updating from Cake 3.8 to latest 3.9 my site no longer loads:
( ! ) Fatal error: Uncaught Error: Class 'Application' not found in webroot\index.php on line 31
( ! ) Error: Class 'Application' not found in webroot\index.php on line 31.
My webroot/index.php:
require dirname(__DIR__) . '/vendor/autoload.php';
use App\Application;
use Cake\Http\Server;
// Bind your application to the server.
$server = new Server(new Application(dirname(__DIR__) . '/config'));
// Run the request/response through the application
// and emit the response.
$server->emit($server->run());
Line 31 is:
new Server(new Application(dirname(__DIR__) . '/config'));
I have tried to debug this and the error comes from the 'new Application'. As far as I can see the way that the Application class is referenced is as it is done elsewhere, in cake.php for example.
I have checked the book for version 3.9 release notes. It seems there are a few other posts on SO reporting similar issues with earlier 3.x versions but none with a proper answer.
Any suggestions? I am totally at a loss.
I created a new project via composer update and installed all packages as required. I then copied the vendor folder of the new project into the old, broken project to fix it. The problem was in the vendor folder, it seems; what the issue was remains a mystery, and I am still not clear how it was introduced.
The composer.json file needs this section
"autoload": {
"psr-4": {
"App\\": "src/"
}
to tell Composer to load the App namespace (along with others) into vendor/composer/autoload_psr4.php.
Then run composer dump-autoload or composer update.
I had the same problem upgrading from 2.10.24 to 3.9.

Azure IotHub, UWP, DeviceClient.OpenAsync - timeout

I am trying to connect my UWP app to the Azure IoT Hub with the code below. I have tried a console app with the same code and it works, but not in the UWP app.
I have given all the permissions to the UWP app (internet, and all other after that).
Tried a Get with HttpClient and the internet connection works
I have also tried to use the newest target for the UWP app (Win 10, version 1803)
To connect to the IoT Hub from my UWP app I am using a NuGet package: Microsoft.Azure.Devices.Client v1.17.1.
In the UWP app the code stops at await device.OpenAsync() and times out after some time.
Am I missing something?
The code:
string deviceConnectionString = "<CONNECTION STRING>";
var device = DeviceClient.CreateFromConnectionString(deviceConnectionString);
await device.OpenAsync();
Message bla = new Message(Encoding.ASCII.GetBytes("blablabla"));
await device.SendEventAsync(bla);
UPDATE - FIX
The transport type needs to be defined; I guess the default transport type does not work with UWP. When creating the client, use this:
var device = DeviceClient.CreateFromConnectionString(deviceConnectionString, TransportType.Amqp_WebSocket_Only);
I have just tested on my RPi3B with your setup:
compiled for Target version: Win 10, version 1803
Microsoft.Azure.Devices.Client v1.17.1
Microsoft.NETCore.UniversalWindowsPlatform v6.1.7
and the runtime (Windows IoT Core 10.0.17723.1000) failed with the following error:
Could not load file or assembly 'System.Net.Security, Version=4.0.1.2, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The system cannot find the file specified.
So the workaround is:
use Microsoft.Azure.Devices.Client version 1.6.0
After this change my test program Blinky on the RPi3B is working.
The latest version of the client SDK does not work on UWP using AMQP. There is a GitHub issue (#421) tracking this. As Roman Kiss mentioned, Microsoft.Azure.Devices.Client v1.6.0 works fine; you can try using that version.

WebDriverException: unknown error: DevToolsActivePort file doesn't exist while trying to initiate Chrome Browser

I am trying to launch Chrome with a URL; the browser launches and then does nothing.
I see the error below after 1 minute:
Unable to open browser with url: 'https://www.google.com' (Root cause: org.openqa.selenium.WebDriverException: unknown error: DevToolsActivePort file doesn't exist
(Driver info: chromedriver=2.39.562718 (9a2698cba08cf5a471a29d30c8b3e12becabb0e9),platform=Windows NT 10.0.15063 x86_64) (WARNING: The server did not provide any stacktrace information)
My configuration:
Chrome: 66
ChromeDriver: 2.39.56
P.S. Everything works fine in Firefox.
Rule of thumb
A common cause for Chrome to crash during startup is running Chrome as root user (administrator) on Linux. While it is possible to work around this issue by passing --no-sandbox flag when creating your WebDriver session, such a configuration is unsupported and highly discouraged. You need to configure your environment to run Chrome as a regular user instead.
This error message...
org.openqa.selenium.WebDriverException: unknown error: DevToolsActivePort file doesn't exist
...implies that the ChromeDriver was unable to initiate/spawn a new WebBrowser i.e. Chrome Browser session.
Your code trials and the versioning information of all the binaries would have given us some hint about what's going wrong.
However, as per Add --disable-dev-shm-usage to default launch flags, it seems adding the argument --disable-dev-shm-usage will temporarily solve the issue.
If you desire to initiate/spawn a new Chrome Browser session you can use the following solution:
System.setProperty("webdriver.chrome.driver", "C:\\path\\to\\chromedriver.exe");
ChromeOptions options = new ChromeOptions();
options.addArguments("start-maximized"); // open Browser in maximized mode
options.addArguments("disable-infobars"); // disabling infobars
options.addArguments("--disable-extensions"); // disabling extensions
options.addArguments("--disable-gpu"); // applicable to windows os only
options.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
options.addArguments("--no-sandbox"); // Bypass OS security model
WebDriver driver = new ChromeDriver(options);
driver.get("https://google.com");
disable-dev-shm-usage
As per base_switches.cc disable-dev-shm-usage seems to be valid only on Linux OS:
#if defined(OS_LINUX) && !defined(OS_CHROMEOS)
// The /dev/shm partition is too small in certain VM environments, causing
// Chrome to fail or crash (see http://crbug.com/715363). Use this flag to
// work-around this issue (a temporary directory will always be used to create
// anonymous shared memory files).
const char kDisableDevShmUsage[] = "disable-dev-shm-usage";
#endif
In the discussion Add an option to use /tmp instead of /dev/shm David mentions:
I think it would depend on how are /dev/shm and /tmp mounted.
If they are both mounted as tmpfs I'm assuming there won't be any difference.
if for some reason /tmp is not mapped as tmpfs (and I think is mapped as tmpfs by default by systemd), chrome shared memory management always maps files into memory when creating an anonymous shared files, so even in that case shouldn't be much difference. I guess you could force telemetry tests with the flag enabled and see how it goes.
As for why it is not used by default, it was pushed back on by the shared memory team; I guess it makes sense that it should be using /dev/shm for shared memory by default.
Ultimately all this should be moving to use memfd_create, but I don't think that's going to happen any time soon, since it will require refactoring Chrome memory management significantly.
Reference
You can find a couple of detailed discussions in:
unknown error: DevToolsActivePort file doesn't exist error while executing Selenium UI test cases on ubuntu
Tests fail immediately with unknown error: DevToolsActivePort file doesn't exist when running Selenium grid through systemd
Outro
Here is the link to the Sandbox story.
I started seeing this problem on Monday 2018-06-04. Our tests run each weekday. It appears that the only thing that changed was the google-chrome version (which had been updated to current). The JVM and Selenium were recent versions on the Linux box (Java 1.8.0_151, Selenium 3.12.0, google-chrome 67.0.3396.62, and xvfb-run).
Specifically, adding the arguments "--no-sandbox" and "--disable-dev-shm-usage" stopped the error.
I'll look into these issues to find more info about their effect, and other questions such as what triggered google-chrome to update.
ChromeOptions options = new ChromeOptions();
...
options.addArguments("--no-sandbox");
options.addArguments("--disable-dev-shm-usage");
We were having the same issues on our Jenkins slaves (Linux machines) and tried all the options above.
The only thing that helped was setting the argument
chrome_options.add_argument('--headless')
But when we investigated further, we noticed that the Xvfb screen didn't start properly and that was causing this error. After we fixed the Xvfb screen, the issue was resolved.
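If you prefer to keep a real (non-headless) display on the CI box, the virtual framebuffer can also be started from the test itself so it is guaranteed to be up before Chrome launches. A minimal Python sketch, assuming Xvfb and the pyvirtualdisplay package are installed (the screen size is just an example):
# Start a virtual X display before launching Chrome.
from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(1920, 1080))  # backed by Xvfb
display.start()

driver = webdriver.Chrome()  # Chrome now finds a usable DISPLAY
try:
    driver.get("https://www.google.com")
    print(driver.title)
finally:
    driver.quit()
    display.stop()  # tear the virtual display down again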
I had the same problem in Python. The above helped. Here is what I used in Python:
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('/path/to/your_chrome_driver_dir/chromedriver',chrome_options=chrome_options)
I was facing the same issue recently and after some trial and error it worked for me as well.
MUST BE ON TOP:
options.addArguments("--no-sandbox"); //has to be the very first option
BaseSeleniumTests.java
public abstract class BaseSeleniumTests {

    private static final String CHROMEDRIVER_EXE = "chromedriver.exe";
    private static final String IEDRIVER_EXE = "IEDriverServer.exe";
    private static final String FFDRIVER_EXE = "geckodriver.exe";

    protected WebDriver driver;

    @Before
    public void setUp() {
        loadChromeDriver();
    }

    @After
    public void tearDown() {
        if (driver != null) {
            driver.close();
            driver.quit();
        }
    }

    private void loadChromeDriver() {
        ClassLoader classLoader = getClass().getClassLoader();
        String filePath = classLoader.getResource(CHROMEDRIVER_EXE).getFile();
        DesiredCapabilities capabilities = DesiredCapabilities.chrome();
        ChromeDriverService service = new ChromeDriverService.Builder()
                .usingDriverExecutable(new File(filePath))
                .build();
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--no-sandbox"); // Bypass OS security model, MUST BE THE VERY FIRST OPTION
        options.addArguments("--headless");
        options.setExperimentalOption("useAutomationExtension", false);
        options.addArguments("start-maximized"); // open Browser in maximized mode
        options.addArguments("disable-infobars"); // disabling infobars
        options.addArguments("--disable-extensions"); // disabling extensions
        options.addArguments("--disable-gpu"); // applicable to windows os only
        options.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
        options.merge(capabilities);
        this.driver = new ChromeDriver(service, options);
    }
}
GoogleSearchPageTraditionalSeleniumTests.java
@RunWith(SpringRunner.class)
@SpringBootTest
public class GoogleSearchPageTraditionalSeleniumTests extends BaseSeleniumTests {

    @Test
    public void getSearchPage() {
        this.driver.get("https://www.google.com");
        WebElement element = this.driver.findElement(By.name("q"));
        assertNotNull(element);
    }
}
pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
In my case, in the following environment:
Windows 10
Python 3.7.5
Google Chrome version 80 and the corresponding ChromeDriver in the path C:\Windows
selenium 3.141.0
I needed to add the arguments --no-sandbox and --remote-debugging-port=9222 to the ChromeOptions object and run the code as an administrator user by launching PowerShell/cmd as administrator.
Here is the related piece of code:
options = webdriver.ChromeOptions()
options.add_argument('headless')
options.add_argument('--disable-infobars')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--no-sandbox')
options.add_argument('--remote-debugging-port=9222')
driver = webdriver.Chrome(options=options)
I ran into this problem on Ubuntu 20 with Python Selenium after first downloading chromedriver separately and then installing the browser with sudo apt install chromium-browser. Even though they were the same version, this kept happening.
My fix was to use the chromedriver that came with the repo package, located at
/snap/bin/chromium.chromedriver
driver = webdriver.Chrome(chrome_options=options, executable_path='/snap/bin/chromium.chromedriver')
Update:
I was able to get past the issue and can now access Chrome with the desired URL.
Results of trying the provided solutions:
I tried all the settings provided above but was unable to resolve the issue.
Explanation regarding the issue:
As per my observation, DevToolsActivePort file doesn't exist is caused when Chrome is unable to find its reference in the scoped_dirXXXXX folder.
Steps taken to solve the issue:
I killed all the Chrome processes and chromedriver processes.
Added the below code to invoke Chrome:
System.setProperty("webdriver.chrome.driver","pathto\\chromedriver.exe");
ChromeOptions options = new ChromeOptions();
options.setExperimentalOption("useAutomationExtension", false);
WebDriver driver = new ChromeDriver(options);
driver.get(url);
Using the above steps I was able to resolve the issue.
Thanks for your answers.
In my case it was a problem with the CI agent account on an Ubuntu server; I solved it using a custom --user-data-dir:
chrome_options.add_argument('--user-data-dir=~/.config/google-chrome')
The account used by the CI agent didn't have the necessary permissions. What was interesting is that everything worked under the root account.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--profile-directory=Default')
chrome_options.add_argument('--user-data-dir=~/.config/google-chrome')
driver = webdriver.Chrome(options=chrome_options)
url = 'https://www.google.com'
driver.get(url)
get_url = driver.current_url
print(get_url)
There are lots of possible reasons for the RESPONSE InitSession ERROR unknown error: DevToolsActivePort file doesn't exist error message (as we can see from the number of answers to this question). So let's dive deeper to explain what exactly this error message means.
According to the chromedriver source code the message is created in the ParseDevToolsActivePortFile method. This method is called from a loop after launching the Chrome process.
In the loop the driver checks whether the Chrome process is still running and whether the DevToolsActivePort file has already been created by Chrome. There is a hardcoded 60 s timeout for this loop.
I see two possible reasons for this message:
Chrome is really slow during startup, for example due to a lack of system resources, mainly CPU or memory. In this case it can happen that Chrome sometimes manages to start within the time limit and sometimes not.
There is another issue which prevents Chrome from starting: a missing or broken dependency, wrong configuration, etc. In such a case this error message is not really helpful and you should look for another log message which explains the true reason for the failure (one way to capture it is sketched below).
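One way to surface that underlying message is to run chromedriver with verbose logging enabled and read its log when the session fails to start. A minimal Python sketch, assuming Selenium 3 with chromedriver on the PATH (the log path is just an example):
from selenium import webdriver
from selenium.common.exceptions import WebDriverException

LOG_PATH = "/tmp/chromedriver.log"  # example location, adjust as needed

try:
    # service_args are passed straight to the chromedriver binary
    driver = webdriver.Chrome(service_args=["--verbose", "--log-path=%s" % LOG_PATH])
    driver.quit()
except WebDriverException:
    # The DevToolsActivePort message itself is generic; the verbose log
    # usually contains the real reason Chrome failed to start.
    with open(LOG_PATH) as log:
        print(log.read())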
It happens when chromedriver fails to figure out which debugging port Chrome is using.
One possible cause is an open defect with HKEY_CURRENT_USER\Software\Policies\Google\Chrome\UserDataDir.
But in my last case it was some other, unidentified cause.
Fortunately, setting the port number manually worked:
final String[] args = { "--remote-debugging-port=9222" };
options.addArguments(args);
WebDriver driver = new ChromeDriver(options);
As stated in this other answer:
This error message... implies that the ChromeDriver was unable to initiate/spawn a new WebBrowser i.e. Chrome Browser session.
Among the possible causes, I would like to mention the fact that, in case you are running headless Chromium via Xvfb, you might need to export the DISPLAY variable: in my case, I had in place (as recommended) the --disable-dev-shm-usage and --no-sandbox options and everything was running fine, but in a new installation running the latest (at the time of writing) Ubuntu 18.04 this error started to occur, and the only possible fix was to execute export DISPLAY=":20" (having previously started Xvfb with Xvfb :20 &).
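The same can also be done from the test process itself before the driver is created. A minimal Python sketch, assuming an Xvfb server has already been started on display :20 (e.g. with Xvfb :20 &):
import os
from selenium import webdriver

# Point Chrome at the already-running Xvfb display.
os.environ["DISPLAY"] = ":20"

driver = webdriver.Chrome()
driver.get("https://www.google.com")
driver.quit()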
You can get this error simply for passing bad arguments to Chrome. For example, if I pass "headless" as an arg to the C# ChromeDriver, it fires up great. If I make a mistake and use the wrong syntax, "--headless", I get the DevToolsActivePort file doesn't exist error.
I was stuck on this for a very long time and finally fixed it by adding this additional option:
options.addArguments("--crash-dumps-dir=/tmp")
I know it's an old question and it already has a lot of answers. However, I ran into this issue, bumped into this thread and none of the proposed solutions helped. After spending a few days(!) on it I finally found a solution:
My problem was that I was using the selenium/standalone-chrome image on a MacBook with an M1 chip. After switching to seleniarm/standalone-chromium, everything finally started to work.
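For reference, with either of those standalone images the tests talk to the container through the remote WebDriver endpoint instead of a local chromedriver. A minimal Python sketch, assuming the container publishes port 4444 on localhost:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")

# Assumes a selenium/standalone-chrome (or seleniarm/standalone-chromium)
# container is running with port 4444 published on localhost.
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    options=options,
)
driver.get("https://www.google.com")
print(driver.title)
driver.quit()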
I had the same issue, but in my case Chrome had previously been installed in the user temp folder and was later reinstalled to Program Files, so none of the solutions provided here helped me. But if you provide the path to chrome.exe, everything works:
chromeOptions.setBinary("C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe");
I hope this helps someone =)
In my case it happened when I tried to use my default user profile:
...
options.addArguments("user-data-dir=D:\\MyHomeDirectory\\Google\\Chrome\\User Data");
...
This caused Chrome to reuse processes already running in the background, in such a way that the process started by chromedriver.exe was simply ended.
Resolution: kill all chrome.exe processes running in the background (see the sketch below for doing this programmatically).
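If you want to do that cleanup programmatically before starting the driver, something like the following works. A minimal Python sketch, assuming the psutil package is installed (the process names shown are the Windows ones):
import psutil

# Kill any leftover Chrome/chromedriver processes so chromedriver
# can start a fresh browser instead of attaching to a dying one.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] in ("chrome.exe", "chromedriver.exe"):
        try:
            proc.kill()
        except psutil.NoSuchProcess:
            pass  # process already exited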
Update the capabilities in conf.js as follows:
exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['todo-spec.js'],
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: ['--disable-gpu', '--no-sandbox', '--disable-extensions', '--disable-dev-shm-usage']
    }
  },
};
Old question, but a similar issue nearly drove me to insanity, so I'm sharing my solution. None of the other suggestions fixed my issue.
When I updated my Docker image's Chrome installation from an old version to Chrome 86, I got this error. My setup was not identical, but we were instantiating Chrome through a Selenium webdriver.
The solution was to pass the options as a goog:chromeOptions hash instead of a chromeOptions hash. I truly don't know if this was a Selenium, Chrome, Chromedriver, or some other update, but maybe some poor soul will find solace in this answer in the future.
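To illustrate the renamed key: with the W3C wire protocol, Chrome-specific options are expected under the vendor-prefixed goog:chromeOptions capability rather than the legacy chromeOptions. A minimal Python sketch of sending it explicitly to a remote Selenium server (the hub URL and arguments are placeholders):
from selenium import webdriver

capabilities = {
    "browserName": "chrome",
    # W3C-compliant vendor-prefixed key, not the legacy "chromeOptions"
    "goog:chromeOptions": {
        "args": ["--headless", "--no-sandbox", "--disable-dev-shm-usage"],
    },
}

driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",  # placeholder hub URL
    desired_capabilities=capabilities,
)
driver.get("https://www.google.com")
driver.quit()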
For Ubuntu 20 it helped me to use my system's chromium driver instead of the downloaded one:
# which chromium
/snap/bin/chromium
driver = webdriver.Chrome('/snap/bin/chromium.chromedriver',
options=chrome_options)
And for the downloaded webdriver, it looks like it needs the remote debugging port --remote-debugging-port=9222 to be set, as in one of the answers (by Soheil Pourbafrani):
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--remote-debugging-port=9222")
driver = webdriver.Chrome('<path_to>/chromedriver', options=chrome_options)
Date 9/16/2021
Everything works fine running Chrome with Selenium locally with Python inside the Docker-hosted Ubuntu container. When attempting to run from Jenkins, the error above is returned: WebDriverException: unknown error: DevToolsActivePort
Environment:
- Ubuntu 21.04 inside a Docker container with RDP access
- chromedriver for Chrome version 93
Solution:
Inside the Python file that starts the browser I had to set the DISPLAY environment variable using the following lines:
import os
os.environ['DISPLAY'] = ':10.0'
#DISPLAY_VAR = os.environ.get('DISPLAY')
#print("DISPLAY_VAR:", DISPLAY_VAR)
In my case, I was trying to create a runnable jar on Windows with the Chrome browser and wanted to run it in headless mode on a Unix box running CentOS. I was pointing my binary to a driver that I had downloaded and packaged with my suite. For me, this issue continued to occur irrespective of adding the below:
ChromeOptions options = new ChromeOptions();
options.addArguments("--headless");
options.addArguments("--no-sandbox");
System.setProperty("webdriver.chrome.args", "--disable-logging");
System.setProperty("webdriver.chrome.silentOutput", "true");
options.setBinary("/pointing/downloaded/driver/path/in/automationsuite");
options.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
options.addArguments("disable-infobars"); // disabling infobars
options.addArguments("--disable-extensions"); // disabling extensions
options.addArguments("--disable-gpu"); // applicable to windows os only
options.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
options.addArguments("window-size=1024,768"); // Bypass OS security model
options.addArguments("--log-level=3"); // set log level
options.addArguments("--silent");//
options.setCapability("chrome.verbose", false); //disable logging
driver = new ChromeDriver(options);
The solution that I tried and that worked for me is: download Chrome and its tools on the host VM/Unix box, install them, and point the binary to that installation in the automation suite, and bingo! It works :)
Download command:
wget https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
Install command:
sudo yum install -y ./google-chrome-stable_current_*.rpm
Update suite with below binary path of google-chrome:
options.setBinary("/opt/google/chrome/google-chrome");
And.. it works!
I also faced this issue while integrating with a Jenkins server. I was using the root user for the Jenkins job; the issue was fixed when I changed to another user. I am not sure why this error occurs for the root user.
Google Chrome Version 71.0
ChromeDriver Version 2.45
CentOS 7 Version 1.153
I run Selenium tests with Jenkins on an Ubuntu 18 LTS Linux machine. I had this error until I added the 'headless' argument like this (along with some other arguments):
ChromeOptions options = new ChromeOptions();
options.addArguments("headless"); // headless -> no browser window. needed for jenkins
options.addArguments("disable-infobars"); // disabling infobars
options.addArguments("--disable-extensions"); // disabling extensions
options.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
options.addArguments("--no-sandbox"); // Bypass OS security model
ChromeDriver driver = new ChromeDriver(options);
driver.get("www.google.com");
Had the same issue. I am running the Selenium script on a Google Cloud VM.
options.addArguments("--headless");
The above line resolved my issue. I removed the other optional arguments; I think the other lines of code mentioned in other answers did not have any effect on resolving the issue on the cloud VM.
In my case, when I changed the google-chrome and chromedriver versions, the error was fixed :)
#google-chrome version
[root@localhost ~]# /usr/bin/google-chrome --version
Google Chrome 83.0.4103.106
#chromedriver version
[root@localhost ~]# /usr/local/bin/chromedriver -v
ChromeDriver 83.0.4103.14 (be04594a2b8411758b860104bc0a1033417178be-refs/branch-heads/4103@{#119})
P.S. The Selenium version was 3.9.1.
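A quick way to confirm that the browser and driver major versions actually match before swapping binaries around. A minimal Python sketch, assuming google-chrome and chromedriver are both on the PATH:
import re
import subprocess

def major_version(cmd):
    # Run "<binary> --version" and pull out the major version number.
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return int(re.search(r"(\d+)\.", out).group(1))

chrome = major_version(["google-chrome", "--version"])   # e.g. 83
driver = major_version(["chromedriver", "--version"])    # e.g. 83

if chrome != driver:
    print("Mismatch: Chrome %d vs ChromeDriver %d" % (chrome, driver))
else:
    print("Chrome and ChromeDriver major versions match (%d)" % chrome)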
No solution worked for me, but here is a workaround:
maxcounter = 5
for counter in range(maxcounter):
    try:
        driver = webdriver.Chrome(chrome_options=options,
                                  service_log_path=logfile,
                                  service_args=["--verbose", "--log-path=%s" % logfile])
        break
    except WebDriverException as e:
        print("RETRYING INITIALIZATION OF WEBDRIVER! Error: %s" % str(e))
        time.sleep(10)
        if counter == maxcounter - 1:
            raise WebDriverException("Maximum number of selenium-firefox-webdriver-retries exceeded.")
It seems there are many possible causes for this error. In our case, the error happened because we had the following two lines in code:
System.setProperty("webdriver.chrome.driver", chromeDriverPath);
chromeOptions.setBinary(chromeDriverPath);
It was solved by removing the second line.
I ran into the same issue. I am using Ubuntu, Python and the Opera browser. In my case the problem originated because I had an outdated version of operadriver.
Solution:
1. Make sure you install the latest Opera browser version (do not use Opera beta or Opera developer); for that, go to the official Opera site and download the latest opera_stable version from there.
2. Install the latest operadriver (if you already have an operadriver installed, you have to remove it first with sudo rm ...):
wget https://github.com/operasoftware/operachromiumdriver/releases/download/v.80.0.3987.100/operadriver_linux64.zip
unzip operadriver_linux64.zip
sudo mv operadriver /usr/bin/operadriver
sudo chown root:root /usr/bin/operadriver
sudo chmod +x /usr/bin/operadriver
In my case the latest was 80.0.3987, as you can see.
Additionally, I also installed chromedriver (but since I did it before testing, I do not know if this is needed); to install chromedriver, follow the same steps as above.
Enjoy and thank me!
Sample selenium code
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Opera()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.quit()
I came across the same problem. In my case there are two different regular users, userA and userB, on a Linux system.
userA first ran the Selenium program, which started the Chrome browser with ChromeDriver successfully; when it came to userB, the DevToolsActivePort file doesn't exist error occurred.
I tried the --remote-debugging-port=9222 option, but it led to a new exception:
selenium.common.exceptions.WebDriverException: Message: chrome not reachable
Then I ran google-chrome directly and saw the following error:
mkdir /tmp/Crashpad/new: Permission denied (13)
Then I searched for the problem and got this:
https://johncylee.github.io/2022/05/14/chrome-headless-%E6%A8%A1%E5%BC%8F%E4%B8%8B-devtoolsactiveport-file-doesn-t-exist-%E5%95%8F%E9%A1%8C/
chrome_options.add_argument(f"--crash-dumps-dir={os.path.expanduser('~/tmp/Crashpad')}")
Thanks to @johncylee.

Houdini arm to x86 translation "Unsupported feature" error when using shared STL in Android NDK app

I created an Android Studio project from this sample NDK project provided by Google and changed a couple of things so I could try to leverage Houdini ARM-to-x86 translation:
In app/build.gradle I set abiFilters to armeabi-v7a.
In Application.mk I changed APP_ABI from all to armeabi-v7a so that x86 native libraries won't be created.
Also in Application.mk, I changed APP_STL from stlport_static to gnustl_shared.
You can see the modified code in this repo.
Then I ran the app in the BlueStacks emulator, which supports Houdini. I got the following error:
11-21 00:42:19.742 9947-9947/? D/houdini: [9947] Loading library(version: 4.0.8.45720 RELEASE)... successfully.
11-21 00:42:19.742 9947-9947/? D/houdini: [9947] Unsupported feature (ID:0x10600cae).
11-21 00:42:19.742 9947-9947/? D/houdini: [9947] Open Native Library /data/app-lib/com.sample.teapot-2/libTeapotNativeActivity.so failed.
...
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.sample.teapot/com.sample.teapot.TeapotNativeActivity}: java.lang.IllegalArgumentException: Unable to load native library: /data/app-lib/com.sample.teapot-2/libTeapotNativeActivity.so
If I make APP_STL any of the supported shared values I get this error, while the static values work fine. I'd like to get shared working, to solve this issue in React Native. Does a shared STL not work with Houdini? Are there any workarounds?

3rd party Framework Library not loaded: 'Image not found'

I am upgrading a framework to the latest version. The earlier integration (>2 years old) had the framework copied directly into the project; now I'm using CocoaPods (0.39.0) to integrate the framework into an Objective-C project with Xcode (7.2.1).
On run, it generates the following error:
dyld: Library not loaded: @rpath/name.framework/name
Referenced from:
/Users/xyz_xyz/Library/Developer/CoreSimulator/Devices/xxxxxxx/data/Containers/Bundle/Application/xxxxxxx/appname.app/appname
Reason: image not found
"Pod" xcode-project has correct reference and framework is present in corresponding folder
Found that nameFramework isn't linked (added) in any of the build phases. I am new to using cocoapods and not sure what changes would be necessary in Xcode build settings to make transition from directly-embedded framework to cocoapods based integration.
how to get past "dyld: Library not loaded" error?
What phase should I use to reference name.Framework during build as it's not getting generated?
How to copy bundle resources from Pod to project? Dragging-n-drop Pods/name/Resources/name.bundle prompts "copy item if needed" dialog. <- I don't think I need to do this when using cocoapods.
[update] Integration using cocoapods works fine when a sample or new project is used. It's something in the current project settings that's causing the issue.
Podfile:
platform :ios, '8.0'
# use_framework for swift based pod integration. requires cocoapod 0.39.0
#use_frameworks!
pod 'GTMOAuth2'
pod 'Typhoon'
pod 'Alamofire'
# Issue with name
pod 'name', podspec:'https://customers.pspdfkit.com/cocoapods/.../latest.podspec'
target :ABC do
pod '...', '~>1'
end
target :XYZ do
pod '...', :path => 'submodules/...'
end
[Update]
- Upgraded to CocoaPods 1.0.1 and modified the Podfile to uncomment use_frameworks!, and made the other changes required for the 0.39.0 to 1.0.1 migration. Here is the updated Podfile.
platform :ios, '8.0'
# use_framework is required for dynamic links (swift) based pod integration.
use_frameworks!
target 'XYZ' do
pod 'GTMOAuth2'
pod 'Alamofire'
pod 'name', podspec:'https://customers.name.com/cocoapods/.../latest.podspec'
target :XYZ-A do
pod 'XYZ-iOS-SDK', :path => 'submodules/xyz-ios-sdk'
end
end
Fixed errors such as the following by adding the $(inherited) flag (where applicable):
[!] The XYZ-v2 [Release] target overrides the OTHER_LDFLAGS build setting defined in ...
Progress after the above changes: Pods/Target Supported Files/XYZ-v2/ has Pods-XYZ-v2-frameworks.sh and resources.sh; earlier, frameworks.sh was missing. The following is partial content of the frameworks.sh, and it does contain copy instructions.
if [[ "$CONFIGURATION" == "Debug" ]]; then
install_framework "$BUILT_PRODUCTS_DIR/GTMOAuth2/GTMOAuth2.framework"
install_framework "$BUILT_PRODUCTS_DIR/GTMSessionFetcher/GTMSessionFetcher.framework"
install_framework "$BUILT_PRODUCTS_DIR/GoogleAPIClient/GoogleAPIClient.framework"
install_framework "$BUILT_PRODUCTS_DIR/Mantle/Mantle.framework"
install_framework "${PODS_ROOT}/PSPDFKit/PSPDFKit.framework"
install_framework "$BUILT_PRODUCTS_DIR/SSKeychain/SSKeychain.framework"
fi
// and for "Release" & "Distribution" as well..
Now I am trying to resolve compile errors upon build, which are related to static vs dynamic library includes.
/path../Pods/SSKeychain/Sources/SSKeychain.h:65:1: Duplicate interface definition for class 'SSKeychain'
[Updated] Posted a new question: CocoaPods 1.0.1 Redefinition of 'XYZ', Redefinition of enumerator 'ABC', Duplicate interface definition for 'MNO'
Related:
OS X Framework Library not loaded: 'Image not found'
Seems relevant: https://github.com/CocoaPods/CocoaPods/issues/4772
Try using use_frameworks! (it's currently commented out). PSPDFKit is a dynamic framework, so you need to enable this option.
Also try it with the newest Xcode and CocoaPods >= 1.0.0. Older versions might not work correctly.
You can find more information about PSPDFKit integration via CocoaPods here: https://pspdfkit.com/guides/ios/current/getting-started/using-cocoapods
If all of this doesn't help you can reach the PSPDFKit developers directly at https://pspdfkit.com/support/request
Same issue with dyld: Library not loaded: @rpath/TwilioAccessManager.framework/TwilioAccessManager
Reason: image not found
I had the same problem; this fixed it for me: I changed the framework's status from Required to Optional.