How to verify the existence of a file in a Linux container from a Linux virtual machine

I am on my virtual machine and I need a way to connect to the container and check whether a specific file exists. How can I do that?

If you have enabled SSH in your container, then you should be able to log in to it from anywhere (even from the VM).
ssh username@lxc-hostname
Once logged in you can search for the file. There are various tools, but I like to use the locate command line tool.
locate <filename>
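Note that locate searches a prebuilt index, so a file created after the last index update will not show up until you refresh it. A minimal sketch (the filename is just a placeholder):
sudo updatedb                           # refresh locate's database
locate myfile.txt
find / -name 'myfile.txt' 2>/dev/null   # alternative: search the filesystem directly, slower but always current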
Hope this was useful.

Without SSH you can still view the files and directories of an LXC container. For this you need to find the PID (process identifier) of the container.
$ lxc-info -pHn C1
The above command returns the PID of the container named C1.
Now go to /proc/<pid>/root/. From there you can view all the files of the LXC container C1. The beauty of LXC :)
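Putting it together, a minimal sketch (assuming a container named C1 and root privileges on the host):
PID=$(lxc-info -pHn C1)    # -H prints the bare PID with no label
ls /proc/$PID/root/etc/    # browse the container's root filesystem
test -e /proc/$PID/root/etc/hostname && echo "file exists"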

Related

location of .erlang.cookie in ejabberd docker is missing

I am running ejabberd using this Docker image: https://github.com/processone/docker-ejabberd/tree/master/ecs.
What is the path of .erlang.cookie inside the container? I am trying to set up a cluster across different hosts.
I can't find it in the /home/ejabberd location. I tried setting the ERLANG_COOKIE environment variable when running Docker, but I still can't find the file in /home/ejabberd.
You already found where the erlang cookie file is generated and available.
Alternatively, you can use the ERLANG_COOKIE environment variable to set the cookie value, and don't care about the file. See https://github.com/processone/docker-ejabberd/tree/master/ecs#clustering-example
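For reference, a minimal sketch of the environment-variable approach (the image name ejabberd/ecs and the cookie value are assumptions based on the linked repo):
docker run --name ejabberd -d -e ERLANG_COOKIE=dummycookie ejabberd/ecs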
It is in the $HOME directory when you log in to the container as the root user; in my case /home/ejabberd. The file is hidden, so use ls -a to list it.
To log in to the container as the root user, use --user root with docker exec.
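For example, a quick sketch (the container name ejabberd is a placeholder):
docker exec -it --user root ejabberd sh   # open a root shell in the container
ls -a /home/ejabberd                      # -a reveals hidden files such as .erlang.cookie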

Cannot map agent.conf using Cygnus docker installation

I have a problem installing Cygnus using Docker: simply put, I cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf that contains my specific setup into the container, it starts and runs but fails to pick the file up. Not only that, any change I make to the file inside the container won't persist; it returns to its previous default state.
Meanwhile, I have no issues with grouping_rules.conf using the same approach.
I used docker and docker-compose, both with the same results.
The path to which I try to copy is /opt/apache-flume/conf/agent.conf:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me if I misunderstood the location of agent.conf, or something else? This is weird; I have used many Docker images and never had an issue where I was not able to copy a file from my machine into a container.
Thanks in advance.
EDIT:
Link of agent.conf
Did you copy the agent.conf file to your host directory before starting the container?
As you can see here, when you define a volume with the -v option, Docker mounts the host path into the container at the mount point, so the container sees exactly what is on the host. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling Docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home directory being replaced with a MySQL database.
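Putting that together, a minimal sketch reusing the paths from the question (they are only examples):
ls -l /home/igor/Documents/cygnus/agent.conf   # the file must already exist on the host
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi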
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of a running container using the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official docs how to configure Cygnus to use different sinks such as MongoDB, MySQL, etc.
I hope I have been helpful.
Best regards!

How to set where to download the VM in minishift?

It downloads OpenShift into the C:\Users\[user]\.minishift\machines folder. How do I change this location to, say, D:\My VMs\? The config set help is not very clear about which setting controls what.
Minishift version: v1.15.1
Platform: Windows
Driver: Hyper-V
Any help would be greatly appreciated.
It looks like the machines directory can't be set directly through config. It is set relative to a base directory in instance_dirs.go.
That base directory, by default, is the .minishift directory in the home directory of the user, e.g. C:\Users\[user]\.minishift on Windows, but this can be overridden by setting the environment variable MINISHIFT_HOME.
The base directory could also be a profile directory, if you are not using the default profile (the default being minishift).
$ minishift profile list
- minishift Stopped
$ minishift profile set myprofile
Profile 'myprofile' set as active profile.
The machines directory for myprofile would then be created under $MINISHIFT_HOME/profiles/myprofile/machines, e.g. on Windows C:\Users\[user]\.minishift\profiles\myprofile\machines.
So you can set MINISHIFT_HOME and move the whole contents of the .minishift directory, including machines, somewhere else, but it doesn't look like you can move just machines alone.
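A quick sketch of that approach (Windows cmd; the target path is only an example):
xcopy /E /H /I "C:\Users\[user]\.minishift" "D:\My VMs\.minishift"
setx MINISHIFT_HOME "D:\My VMs\.minishift"
REM setx only affects new shells, so open a fresh prompt before running minishift start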
Perhaps you could solve this at the OS level by creating a symlink between C:\Users\[user]\.minishift\machines and D:\My VMs\.
In case it helps others, and so they don't need to test the different ways of using symlinks, and to expand on @codemonkey's great answer: this is what I did to use a symlink, as my C drive had no available space. I'm also using Hyper-V as the driver.
Note: I do have minishift.exe installed in the apps folder on my D drive
Note 2: I did have to run the command prompt in admin mode
From the C:\Users\[user]\.minishift folder, I moved the machines folder to D:\Apps\minishift-1.32.0-windows-amd64\.
I first tried a soft link, which didn't work. I then tried a hard link, but I was getting errors, so I used a "directory junction" link with the /J switch, as such: C:\WINDOWS\system32>mklink /J C:\Users\[user]\.minishift\machines D:\Apps\minishift-1.32.0-windows-amd64\machines
You should get the following result: Junction created for C:\Users\[user]\.minishift\machines <<===>> D:\Apps\minishift-1.32.0-windows-amd64\machines
Then, if necessary, run minishift delete --clear-cache. WARNING: this will delete any previous images and hosts you might have!
Then start minishift as normal with minishift start
Grab a cup of coffee or go smoke a cigarette or vape, as it will take a while for the OpenShift server to start.
Hope this answer might help others who face a similar issue.

Simply uploading a file to google compute

I want to upload a file to the disk attached to my google compute vm from my local machine.
abhigenie92_gmail_com@instance-1:~$ pwd
/home/abhigenie92_gmail_com
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: ./
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: /home/abhigenie92_gmail_com
ERROR: (gcloud.compute.copy-files) All sources must be
edit2: I get the following error now:
RE: edit2
Since gcloud's copy-files is a custom implementation of scp, you need to specify the complete path on your VM where you want to copy the files to. The general form is:
LOCAL-FILE-PATH> gcloud compute copy-files [FILENAMES] [VM-NAME]:[FULL-REMOTE-PATH]
In your specific example:
C:\Users\sony\Desktop> gcloud compute copy-files copy.nlogo instance-1:/home/abhigenie92_gmail_com/
This command will then place the file(s) into your user's home directory. Just make sure the remote path exists and that your user has write rights to the destination.
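As a side note, quoting the Windows path is an alternative to escaping each space by hand; a sketch run from the local machine, reusing the names from the question:
gcloud compute copy-files "C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo" instance-1:/home/abhigenie92_gmail_com/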
From the looks of what you posted, you're trying to copy things from your local machine to a cloud instance from inside the instance. I'm afraid you can't do that.
I take it you have already installed the gcloud compute tool? If not, install that on your local machine (follow the link) and open up the windows command line, type gcloud auth login to authenticate, then you should be able to do what you want to with the following command:
gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28\ Runge\ kutta\ 2nd\ order\ try.nlogo <VM Name>:~/
Note that I have escaped the spaces in your filename - it's a good idea to get out of the habit of spaces in filenames - and made a couple of assumptions:
Your VM is running linux
You are okay with copying up to your home directory on the VM
If any of these assumptions is incorrect, you may have problems. To copy somewhere else, change the path in the <VM Name>:~/ part.
Edit: I mangled a file extension in the original, fixed now!

mysql folder inaccessible on Ubuntu

I am trying to reset my MySQL root password following the official reference here.
In step #2, I have to do the following
Locate the .pid file that contains the server's process ID. The exact location and name of this file depend on your distribution, host name, and configuration. Common locations are /var/lib/mysql/, /var/run/mysqld/, and /usr/local/mysql/data/. Generally, the file name has an extension of .pid and begins with either mysqld or your system's host name.
So I went to /var/lib/ and found the mysql folder. When I double-clicked it, I got the following pop-up window:
The folder contents could not be displayed.
You do not have the permissions necessary to view the contents of "mysql".
I am pretty sure that I am indeed the system admin. Why is this happening, and how do I fix it?
Start by working in the terminal/console as the root user.
I'm not a system expert, but this should get you somewhere:
Get into the Ubuntu terminal/console
Switch to the root user (sudo bash)
Then follow this guide:
https://help.ubuntu.com/community/MysqlPasswordReset
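For the permission error itself, a minimal sketch (paths taken from the MySQL docs quoted above):
sudo ls -la /var/lib/mysql                 # list the protected directory with root privileges
sudo find /var/lib/mysql /var/run/mysqld -name '*.pid' 2>/dev/null   # locate the server's .pid file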