Access a file from the Chrome node Docker container

I am using Selenium Grid with Docker to run my automation test suites.
I have configured a Chrome node Docker container to run my test cases in the browser.
In my test cases I download a file from the web. It gets saved to "/home/seluser/Downloads/" inside the container. I need the content of that file, as the content is dynamic. How can I access the file downloaded inside the Docker container from the machine where my automation is running?

One way to do this would be to mount the /home/seluser/Downloads path somewhere on your host machine. Change your docker run command to add something like -v /path/on/host:/home/seluser/Downloads and all of the saved files will be accessible on your host machine.
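For example, a minimal sketch of the run command for the node (the hub link, image name, and host path are assumptions; adjust them to your grid setup):
docker run -d --link selenium-hub:hub -v /home/myuser/selenium-downloads:/home/seluser/Downloads selenium/node-chrome
Anything the browser saves to /home/seluser/Downloads inside the container will then be visible in /home/myuser/selenium-downloads on the host, where your test code can read it.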

Alternatively, you can use the docker cp command:
docker cp <name of the container>:/home/seluser/Downloads copied_files
where copied_files is a directory where files will be copied
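A concrete example (the container name chrome-node is an assumption; look up the real name with docker ps):
docker ps --format '{{.Names}}'
docker cp chrome-node:/home/seluser/Downloads ./copied_files
ls ./copied_files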

Related

Cannot map agent.conf using Cygnus docker installation

I have a problem installing Cygnus using Docker as the source; I simply cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf containing my specific setup into the container, the container starts and runs, but my file is not picked up. Moreover, any change I make to the file inside the container does not persist; it returns to its previous default state.
I have no issues with grouping_rules.conf using the same approach.
I have tried both docker and docker-compose, with the same results.
The path I am trying to map to is /opt/apache-flume/conf/agent.conf:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who has managed to run it with their own config tell me whether I have misunderstood the location of agent.conf? This is strange; I have used many Docker images and never had an issue where I was unable to map a file from my machine into a Docker container.
Thanks in advance.
** EDIT **
Link to agent.conf
Did you copy the agent.conf file to your host directory before starting the container?
As you can see here, when you define a bind mount with the "-v" option, Docker mounts the host path into the container at the mount point, hiding whatever was there before. Therefore, you must provide the agent.conf file on your host before starting the container.
The reason is that when using a "bind mounted" directory from the
host, you're telling docker that you want to take a file or directory
from your host and use it in your container. Docker should not modify
those files/directories, unless you explicitly do so. For example, you
don't want -v /home/user/:/var/lib/mysql to result in your
home-directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of the running Docker container using docker cp:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. The official documentation explains how to configure Cygnus to use different sinks such as MongoDB, MySQL, etc.
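Putting it together, a minimal sketch of the whole workflow (the temporary container name is an assumption, and it assumes the image ships a default agent.conf at the path from the question):
docker create --name cygnus-tmp fiware/cygnus-ngsi
docker cp cygnus-tmp:/opt/apache-flume/conf/agent.conf /home/igor/Documents/cygnus/agent.conf
docker rm cygnus-tmp
# edit agent.conf on the host, then start the real container with the bind mount
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi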
I hope I have been helpful.
Best regards!

Couldn't load the external resource at: file:/var/lib/neo4j/import with Neo4j docker image

I am trying to load nodes from a CSV file into Neo4j; however, every time I try, I get the following error:
Neo.ClientError.Statement.ExternalResourceFailed: Couldn't load the external resource at: file:/var/lib/neo4j/import/events.csv
My events.csv file is in the /var/lib/neo4j/import directory with 777 permissions. The query I am trying to run looks like this:
USING PERIODIC COMMIT 500 LOAD CSV WITH HEADERS FROM "file:///events.csv" AS line
CREATE (e:Event { event_id: toInteger(line.event_id),
created: line.created,
description: line.description })
I set up Neo4j using the latest version of the Docker image. What might be wrong with the file permissions or the file location?
A Docker container cannot access files on the host machine unless you mount them into the container.
The solution is to bind-mount the directory into your container when calling docker run:
docker run -v /var/lib/neo4j/import:/var/lib/neo4j/import ... <IMAGE> <CMD>
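A fuller sketch of what that might look like for the official image (the ports and container name are the usual defaults, not requirements; only the -v part matters for this error):
docker run -d --name neo4j -p 7474:7474 -p 7687:7687 -v /var/lib/neo4j/import:/var/lib/neo4j/import neo4j
With the mount in place, file:///events.csv in LOAD CSV resolves to /var/lib/neo4j/import/events.csv inside the container, which now contains the file from your host.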

Gunicorn: No module named '/path/to/my/django/project'

I am using Gunicorn with nginx on an Ubuntu 16.04 system to deploy a Django project and want to create a systemd service for Gunicorn. In /lib/systemd/system/gunicorn-mywebsite.service, I wrote the following:
ExecStart=/home/myusername/sites/pythonEnv/bin/gunicorn --bind unix:/tmp/mywebsite.socket /path/to/my/django/project.wsgi:application
But when I run service gunicorn-mywebsite start, I get the error No module named '/path/to/my/django/project'.
If I run the same command from my Django project directory with a relative path to my wsgi:application, it works.
How can I fix this problem?
You can't give Gunicorn a path to a file; it needs a module path, plus the name of the application entry point. So just project.wsgi:application. If the directory containing project is not on your Python path, use Gunicorn's --pythonpath option to tell it where it is.
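A sketch of the corrected ExecStart line, keeping the question's placeholder paths and assuming /path/to/my/django is the directory that contains the project package:
ExecStart=/home/myusername/sites/pythonEnv/bin/gunicorn --pythonpath /path/to/my/django --bind unix:/tmp/mywebsite.socket project.wsgi:application
After editing the unit file, reload systemd with systemctl daemon-reload and restart the service.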

Docker on Mac: Unable to run MySQL

I am using Docker for the very first time. On running the command make kickoff I get this error:
myapp_php_apache_engine_dev is up-to-date
Starting myapp_mysql_dev
ERROR: for mysql Cannot start service mysql: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:53: mounting \\\\\\\"/Applications/MAMP/htdocs/clients/codingmachine/myapp/mysql/custom-my.cnf\\\\\\\" to rootfs \\\\\\\"/mnt/sda1/var/lib/docker/aufs/mnt/2ab6b2578ad9f8da2d453aefa5cd9b288fee30dd2d73efc3048627cf0861d55a\\\\\\\" at \\\\\\\"/mnt/sda1/var/lib/docker/aufs/mnt/2ab6b2578ad9f8da2d453aefa5cd9b288fee30dd2d73efc3048627cf0861d55a/etc/mysql/mysql.cnf\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\"\n"
ERROR: Encountered errors while bringing up the project.
make: *** [up] Error 1
When running docker toolbox, the docker daemon runs in a VirtualBox VM. The daemon (and containers, which run inside that VM) therefore don't have access to files on your host (Mac OS X).
When you bind-mount a directory from your host into the container (so that the container can access those files), files are always mounted from the host that the daemon runs on; in your case, the VirtualBox VM.
Docker Toolbox uses a "trick" to allow you to mount files from your host: files inside the /Users/ directory are shared with the VirtualBox VM using the VirtualBox "guest additions". This means that when you run:
docker run -v /Users/myname/somedir:/var/www/html -d nginx
The docker daemon mounts the /Users/myname/somedir directory from the VM into the container. Due to the guest additions "trick", this path actually is shared with your OS X machine, so the container "sees" the files from your OS X machine.
However, any directory outside of the /Users/ directory is not shared between the OS X machine and the VM. If you try to bind-mount a path that does not exist inside the VM, docker creates an empty directory (it assumes you want to mount a directory, because it has no way to tell if it should be a directory or a file), and mounts that directory inside the container.
In your example, you try to bind-mount:
/Applications/MAMP/htdocs/clients/codingmachine/myapp/mysql/custom-my.cnf
inside the container at:
/etc/mysql/mysql.cnf
The /Applications directory is not shared with the VM, so docker creates an empty directory named custom-my.cnf inside the VM, then tries to mount that directory at /etc/mysql/mysql.cnf inside the container. This fails, because you cannot mount a directory on top of a file, and Linux produces an error "not a directory".
To resolve your issue, either:
Move the files you're trying to mount somewhere inside your /Users/ directory (a minimal sketch follows below). Note that the VirtualBox guest additions always mount files/directories as if they're owned by user 1000 and group 1000; mysql may therefore not have write access to those files.
Or, if you have a modern Mac running OS X 10.10 or later, check out Docker for Mac, which allows you to share any directory with Docker and does not have the "permissions" issue: https://docs.docker.com/docker-for-mac/troubleshoot/#/volume-mounting-requires-file-sharing-for-any-project-directories-outside-of-users
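For the first option, a minimal sketch (the username and target path are assumptions):
mkdir -p /Users/myname/myapp/mysql
cp /Applications/MAMP/htdocs/clients/codingmachine/myapp/mysql/custom-my.cnf /Users/myname/myapp/mysql/
Then point the bind mount (in your docker run command or compose file) at the new location:
-v /Users/myname/myapp/mysql/custom-my.cnf:/etc/mysql/mysql.cnf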

How to download a file/folder from remote (openshift) to local system

How can I download, back up, or save a copy of a file from a remote OpenShift folder to a folder on my local system using the rhc client tool? Is there any way other than the rhc client tool to back it up to my local system?
Also, is there a way to copy an entire folder from the remote (OpenShift) to local?
First, tar and gzip your folder on the server within an SSH session; the syntax is:
rhc ssh <app_name>
tar czf <name_of_new_file>.tar.gz <name_of_directory>
Second, after you have disconnected from the OpenShift server (with CTRL-D), download the file to your local system:
rhc scp <app_name> download <local_destination> <absolute_path_to_remote_file>
Then on your local machine you can extract the file and perform your actions.
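A concrete example (the app name myapp and the paths app-root/data and app-root/repo are assumptions; adjust them to your gear):
rhc ssh myapp
tar czf app-root/data/backup.tar.gz app-root/repo
# disconnect with CTRL-D, then on the local machine:
rhc scp myapp download . app-root/data/backup.tar.gz
tar xzf backup.tar.gz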
Use WinSCP (if on Windows) to SSH into your OpenShift app. Navigate to your folder, then drag and drop folders or files to your local machine.
FileZilla: using FileZilla and SFTP with OpenShift
Nowadays you can also do it like this, copying a pod directory to a local directory:
oc rsync <pod-name>:/opt/app-root/src /c/source
https://docs.okd.io/3.11/dev_guide/copy_files_to_container.html
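If you only need a single file rather than a whole directory, oc cp (available in newer oc releases, mirroring kubectl cp, and requiring tar inside the pod) is an alternative; the file name here is hypothetical:
oc cp <pod-name>:/opt/app-root/src/somefile.csv ./somefile.csv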