Error: ipfs configuration file already exists

I have already installed IPFS via go-ipfs, and I don't know why I encounter an error when I run "ipfs init" in my terminal.
Could anyone help me figure out where the problem is?

The default IPFS repository path is ~/.ipfs.
Perhaps you can try to change your default path using:
export IPFS_PATH=/path/to/ipfsrepo
And then run:
ipfs init
Also, have you by any chance installed go-ipfs from snap?
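If you are not sure, one quick way to check is to list the installed snaps (a hedged example, assuming a snap-enabled system):
snap list | grep -i ipfs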

In my case, I faced this same problem because I had also installed IPFS Desktop, which initialized some configuration. If you want a fresh start, delete the initialized IPFS node directory, which in my case was:
/Users/scbas/.ipfs
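For example, a hedged one-liner that removes it (note this permanently deletes the node's data and peer identity):
rm -rf /Users/scbas/.ipfs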

If you have not made any directory for IPFS, then running
ipfs init
will automatically generate a node repository (the .ipfs directory) under your base /Users/<username> directory.
If you want to change the directory, create one, then change the path with this command in the terminal, pointing it at the directory you created:
export IPFS_PATH=/Users/<your system username>/<the name of directory you created>
For example, I created an ipfs-repo folder for my IPFS node, so I have to run:
export IPFS_PATH=/Users/jonah/ipfs-repo
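Keep in mind that export only lasts for the current shell session. To make it persistent, you could append it to your shell profile, for example (assuming zsh, the macOS default):
echo 'export IPFS_PATH=/Users/jonah/ipfs-repo' >> ~/.zshrc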

Related

How can I uninstall IPFS completely and restart everything from scratch and get a new peer id?

I tried to delete the go-ipfs folder, but I still get Error: ipfs configuration file already exists! when I do ipfs init.
The data store as well as the config is stored in a subdirectory .ipfs of your home directory, so on a UNIX-based system that is $HOME/.ipfs. You would have to delete this directory and then run ipfs init to get an empty store and a new peer ID.
Note that you can also configure the location of the store directory using the IPFS_PATH environment variable, which can be useful to get the IPFS store on a different mount point.
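Putting that together, a minimal sketch that wipes the store and re-initializes it on a different mount point (the /mnt/data path is illustrative; this destroys your local data and peer identity):
rm -rf "$HOME/.ipfs"
export IPFS_PATH=/mnt/data/ipfs
ipfs init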

Cannot map agent.conf using Cygnus docker installation

I have a problem installing Cygnus using Docker as the source; put simply, I cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf that has my specific setup into the container, the container starts and runs but fails to pick up the file. Not only that, any change I make to the file inside the container won't stay; it returns to the previous default state.
Meanwhile, I have no issues with grouping_rules.conf using the same approach.
I used both docker and docker-compose, with the same results.
The path to which I try to copy is /opt/apache-flume/conf/agent.conf:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it using their own config tell me if I have misunderstood the location of agent.conf, or something else? This is strange; I have used many Docker images and never had an issue where I was not able to copy a file from my machine into a container.
Thanks in advance.
** EDIT **
Link to agent.conf
Did you copy the agent.conf file to your host directory before starting the container?
As you can see here, when you define a volume with the "-v" option, Docker mounts the content of the host path over the container path at the mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling Docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out once the Docker container is running, using docker cp:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official docs how to configure Cygnus to use different sinks like MongoDB, MySQL, etc.
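Putting the pieces together, a minimal sketch of that workflow (the container name cygnus-tmp is illustrative, and it assumes the image ships a default config at the path used in the question):
docker run -d --name cygnus-tmp fiware/cygnus-ngsi
docker cp cygnus-tmp:/opt/apache-flume/conf/agent.conf ./agent.conf
docker rm -f cygnus-tmp
# edit ./agent.conf for your sinks, then start with the bind mount
docker run -v "$PWD/agent.conf:/opt/apache-flume/conf/agent.conf" fiware/cygnus-ngsi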
I hope I have been helpful.
Best regards!

Couldn't load the external resource at: file:/var/lib/neo4j/import with Neo4j docker image

I am trying to load nodes from a CSV in Neo4j; however, every time I try to do this I get this error:
Neo.ClientError.Statement.ExternalResourceFailed: Couldn't load the external resource at: file:/var/lib/neo4j/import/events.csv
My events.csv file is in the /var/lib/neo4j/import directory with 777 permissions. The query I try to run looks like this:
USING PERIODIC COMMIT 500 LOAD CSV WITH HEADERS FROM "file:///events.csv" AS line
CREATE (e:Event { event_id: toInteger(line.event_id),
created: line.created,
description: line.description })
I set up Neo4j using the latest version of the Docker image. What might be wrong with the file permissions or file location?
A Docker container cannot access files on the host machine unless you mount those files into the container.
The solution is to bind-mount the directory into your container when calling the docker run command:
docker run -v /var/lib/neo4j/import:/var/lib/neo4j/import ... <IMAGE> <CMD>
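For instance, a concrete sketch for the official image (the ports are the usual HTTP and Bolt defaults; adjust the host path as needed):
docker run -d \
  -p 7474:7474 -p 7687:7687 \
  -v /var/lib/neo4j/import:/var/lib/neo4j/import \
  neo4j:latest
With the mount in place, file:///events.csv in LOAD CSV resolves against the mounted import directory.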

Gunicorn: No module named '/path/to/my/django/project'

I am using gunicorn with nginx on an Ubuntu 16.04 system to deploy a Django project and want to create a systemd service for gunicorn. In /lib/systemd/system/gunicorn-mywebsite.service, I wrote the following:
ExecStart=/home/myusername/sites/pythonEnv/bin/gunicorn --bind unix:/tmp/mywebsite.socket /path/to/my/django/project.wsgi:application
But when I run service gunicorn-mywebsite start, I get the error No module named '/path/to/my/django/project'.
If I run the same command from my Django project directory, using the relative module path to my wsgi:application, it works.
How can I fix this problem?
You can't give gunicorn a path to a file; it needs a module path, with the application entry point name. So just project.wsgi:application. If the directory containing project is not on your Python path, then use the --pythonpath option to tell gunicorn where it is.
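For example, a hedged sketch of the corrected ExecStart line, reusing the placeholder paths from the question (it assumes the project module lives directly under /path/to/my/django):
ExecStart=/home/myusername/sites/pythonEnv/bin/gunicorn --bind unix:/tmp/mywebsite.socket --pythonpath /path/to/my/django project.wsgi:application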

"No system SSH available" error on OpenShift rhc snapshot save on Windows 8

Steps To Replicate
On Windows 8.
In shell (with SSH connection active):
rhc snapshot save [appname]
Error
No system SSH available. Please use the --ssh option to specify the path to your SSH executable, or install SSH.
Suggested Solution
From this post:
Usage: rhc snapshot-save <application> [--filepath FILE] [--ssh path_to_ssh_executable]
Pass '--help' to see the full list of options
Question
The path to keys on PC is:
C:\Users\[name]\.ssh
How do I define this in the rhc snapshot command?
Solution
rhc snapshot save [appname] --filepath FILE --ssh "C:\Users\[name]\.ssh"
This will show the message:
Pulling down a snapshot of application '[appname]' to FILE ...
... then after a while
Pulling down a snapshot of application '[appname]' to FILE ... DONE
Update
That saved the backup in a file called "FILE" without an extension, so I'm guessing that in the future I should define the filename as something like "my_app_backup.tar.gz", i.e.:
rhc snapshot save [appname] --filepath "my_app_backup.tar.gz" --ssh "C:\Users\[name]\.ssh"
It will save in the repo directory, so make sure you move it out of this directory before you git add, commit, push, etc.; otherwise you will upload your backup too.
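For example, a small hedged sketch of either approach (the ~/backups directory is illustrative):
mv my_app_backup.tar.gz ~/backups/
# or keep it in place but tell git to ignore it
echo "my_app_backup.tar.gz" >> .gitignore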