During PDF generation with Puppeteer, new files appear in the fontconfig cache directory (/var/cache/fontconfig) of an Ubuntu 16.04 Docker image for each PDF, even when the same PDF is generated again. Forcing fc-cache -r does not clean up the cached files in this situation.
Is there a way to reuse the cached fonts instead of generating new duplicates during PDF generation with Puppeteer and the installed Chrome?
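For context, a minimal sketch of the kind of generation loop where this shows up, assuming Puppeteer is installed in the container; the URLs, output paths, and the '--no-sandbox' flag are placeholders/assumptions, and whether sharing one browser instance across PDFs changes the fontconfig cache behaviour is an open question rather than a confirmed fix:

import puppeteer from 'puppeteer';

// Render several PDFs from one shared browser instance (TypeScript sketch).
async function renderPdfs(urls: string[]): Promise<void> {
  const browser = await puppeteer.launch({ args: ['--no-sandbox'] }); // commonly needed inside Docker
  try {
    for (const [i, url] of urls.entries()) {
      const page = await browser.newPage();
      await page.goto(url, { waitUntil: 'networkidle0' });
      await page.pdf({ path: `out-${i}.pdf`, format: 'A4' });
      await page.close();
    }
  } finally {
    await browser.close();
  }
}

renderPdfs(['file:///app/page.html']).catch(console.error);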
Related
I use PhantomJS and an HTML-to-PDF Node package. On my local environment (macOS) everything is fine. When I run the same app on Ubuntu Server 16.04, PhantomJS generates the PDF with images instead of text (one image per page).
While looking for information I found that this can be caused by external resources, and indeed I use an additional custom font in my styles. But after switching to a local reference (file:/// plus the base render option) the problem still occurs. I'm not sure why it's a platform-dependent issue.
Solution: install the fonts locally on the server (in /usr/share/fonts and /usr/local/share/fonts/). After that, PhantomJS can use them without any issues.
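As a quick sanity check (my addition, not part of the original answer), you can verify from Node that fontconfig actually sees the newly installed fonts; 'MyCustomFont' is a hypothetical family name:

import { execSync } from 'child_process';

// List every font family fontconfig knows about and look for the custom one.
const families = execSync('fc-list : family', { encoding: 'utf8' });
if (families.includes('MyCustomFont')) { // hypothetical family name
  console.log('Custom font is registered with fontconfig');
} else {
  console.log('Custom font not found; re-run fc-cache -f and check the font directories');
}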
What is the default location of the ChromeDriver binary and the Chrome binary on Windows 7 when triggering Appium using java-client.jar? If I am using RemoteWebDriver and try to start the Chrome browser, from where does Selenium start chromedriver?
The code:
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("userName", ReadProperties.Properties("MobileUsername"));
capabilities.setCapability("password", ReadProperties.Properties("MobilePassword"));
capabilities.setCapability("udid", ReadProperties.Properties("MobileUID"));
capabilities.setCapability("browserName", ReadProperties.Properties("MobileBrowser"));
capabilities.setCapability("platformName", ReadProperties.Properties("MobilePlatform"));
log.Info(capabilities.getVersion());
// The remote server behind MobileURL (Appium / Selenium Grid) is what locates
// and starts chromedriver, not this client code.
mobile_driver = new RemoteWebDriver(new URL(ReadProperties.Properties("MobileURL") + "/wd/hub"), capabilities);
ChromeDriver is not installed on your system by default. You have to download it yourself from the ChromeDriver - WebDriver for Chrome page, and you can place it anywhere on your system.
You must ensure that Chrome itself is installed in the default location for your operating system, as the ChromeDriver server expects to find the Chrome binary there.
Note: on Linux systems, ChromeDriver expects /usr/bin/google-chrome to be a symlink to the actual Chrome binary. You can also override the Chrome binary location by following the documentation on using a Chrome executable in a non-standard location.
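As a sketch of that override (the question uses the Java client, where the equivalent call is ChromeOptions#setBinary; here it is shown with the Node selenium-webdriver bindings, and the path is a placeholder):

import { Builder, WebDriver } from 'selenium-webdriver';
import * as chrome from 'selenium-webdriver/chrome';

// Point Selenium at a Chrome binary installed outside the default location.
async function buildDriver(): Promise<WebDriver> {
  const options = new chrome.Options()
    .setChromeBinaryPath('C:\\tools\\chrome\\chrome.exe'); // placeholder path

  return new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();
}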
The location of chromedriver.exe depends on your default download folder: when you download something from the internet and it goes into your Downloads folder, that is your default download folder.
So if you downloaded chromedriver.exe the same way, it will also be in the Downloads folder.
If you are using a third-party service to run your tests, you should not need to care about chromedriver.
But when you run tests locally you have to download it yourself: https://chromedriver.storage.googleapis.com/index.html
Then use a capability to set the absolute path to this file (see the sketch below).
Make sure you use a chromedriver version compatible with your browser version.
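If you are driving Chrome directly through Selenium rather than via an Appium capability, the driver path can also be wired up when building the driver. A sketch with the Node selenium-webdriver bindings (the path is a placeholder; in the Java client the usual equivalent is System.setProperty("webdriver.chrome.driver", "<path>") before creating the driver):

import { Builder, WebDriver } from 'selenium-webdriver';
import { ServiceBuilder } from 'selenium-webdriver/chrome';

// Start the session against a chromedriver binary at a known absolute path.
async function buildDriver(): Promise<WebDriver> {
  const service = new ServiceBuilder('/opt/drivers/chromedriver'); // placeholder path
  return new Builder()
    .forBrowser('chrome')
    .setChromeService(service)
    .build();
}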
I know it's an old question but none of the answers above helped me. I found a different solution that worked for me and it might help someone else in the future.
I have only tried the solution below on Windows 10 / Server 2016.
Step 1: Get to the Google Chrome install directory by right-clicking the Chrome icon and clicking Properties. You will see the install directory listed under the 'Target' and 'Start in' options. The directory path should end with .../Chrome/Application/. Copy the whole path.
Step 2: Open File Explorer and go to the path you copied. You should see the chrome.exe file along with other files and folders. Copy the whole Application/ folder, including chrome.exe and everything else in it.
Step 3: Go to the file path below and paste the Application folder you copied.
C:\Users\<YOUR USER>\AppData\Local\Google\Chrome\
After pasting the Application folder, you should have chrome.exe along with the other files and folders in the following path:
C:\Users\<YOUR USER>\AppData\Local\Google\Chrome\Application\
Now ChromeDriver should be able to locate the Chrome binary.
I'm trying to figure out why a stylesheet from a website I'm developing always loads via Chrome's disk cache instead of Chrome's memory cache. Expires dates are set for the file. All other resources are loaded from the memory cache.
Load the files manually into a RAM disk:
Windows: ImDisk Toolkit (tip: check "dynamic allocation" to save RAM, and prefer exFAT over NTFS if you want to save a few megabytes on creation).
I use the http-server Node package (npm install http-server -g) to host the file locally, since Chrome does not allow loading it from file:/// for security reasons.
Load it as usual: <link rel="stylesheet" href="http://127.0.0.1:8080/tailwind.min.css">
Be aware that with its default settings, http-server also serves the files to the local network, not just on http://127.0.0.1:8080 (a localhost-only sketch follows at the end of this answer).
This might be redundant after the first load, but it's worth trying.
http-server also seems to cache regular disk accesses, according to its documentation.
On my Chrome 96.0.4664.110 (Official Build) (64-bit), the Network tab in DevTools shows "(memory cache)" for the CSS files I've tried after refreshing the page, though possibly only after the first reload; I can't remember exactly.
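Regarding the local-network exposure mentioned above: http-server's -a flag should let you bind to 127.0.0.1 only, and here is a minimal localhost-only static server with long-lived cache headers sketched with Node's built-in http module (my assumption of a setup, not part of the original answer; the ./public folder, port, and max-age are placeholders):

import * as http from 'http';
import { promises as fs } from 'fs';
import * as path from 'path';

const root = path.resolve('./public'); // placeholder folder containing tailwind.min.css

const server = http.createServer(async (req, res) => {
  try {
    const file = path.join(root, req.url === '/' ? 'index.html' : req.url ?? '');
    const body = await fs.readFile(file);
    res.writeHead(200, {
      // Long max-age so Chrome can keep serving repeat loads from its own cache.
      'Cache-Control': 'public, max-age=31536000',
      'Content-Type': file.endsWith('.css') ? 'text/css' : 'text/html',
    });
    res.end(body);
  } catch (err) {
    res.writeHead(404);
    res.end('not found');
  }
});

// Binding to 127.0.0.1 keeps the files off the local network.
server.listen(8080, '127.0.0.1');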
I have the LiveReload Windows application installed and the LiveReload extension for Chrome, and I'm using Sublime Text with the Live Refresh and LiveReload plugins. For some reason, any HTML document I edit in Sublime Text does not auto-update in Chrome. The extension is stuck on "LiveReload is connecting" when I enable it. What exactly am I missing? Do I need to set up a local web server or something like that?
I think your issue may be that you did not add the folder containing your HTML document to the LiveReload Windows application, so it cannot monitor the file for changes.
If you open up the LiveReload app, it should have a list of site folders. Try adding your test project's folder to the list.
If you are loading from a local file, you must serve it through a web server (e.g. python -m http.server in the folder where your .html lives) for this to work; otherwise the option to match the local file never appears, and only then do you get the save-and-reload functionality.
I mention it because I hadn't been able to find the problem just by reading the post.
Is there a way to download HTML webpages through Google Chrome or Chromium using only commands in the terminal?
wget doesn't seem to work well for me.
You can try httrack.
It allows you to download a website from the Internet to a local directory, recursively building all directories and getting HTML, images, and other files from the server to your computer.