IPFS private network setup not working - ipfs

I am trying to create a private IPFS network with two nodes. Each node is an EC2 instance running on AWS. I generated a swarm key and configured my nodes as follows:
"API": "/ip4/0.0.0.0/tcp/5001",
"Announce": [],
"Gateway": "/ip4/0.0.0.0/tcp/8080",
"NoAnnounce": [],
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
"Bootstrap": [
"/ip4/172.31.25.195/tcp/4001/ipfs/QmaRpyg48RqdUwXVD84RSAd8hFvA2MNPvu3Z7yTxRFJKub"
]
Then I run "ipfs daemon" to start each node and "ipfs swarm peers" to list the connected peers. However, "ipfs swarm peers" doesn't list anything, and I don't know what the problem is.

It's likely you need to add inbound TCP rules for ports 4001, 5001 and 8080 in your Security Groups in the AWS Console.
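For example, from the AWS CLI the swarm port could be opened to the other node with something like the following (the security group ID is a placeholder; the peer IP is taken from the Bootstrap entry above):
# allow the other node to reach the swarm port (4001) on this instance
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 4001 \
    --cidr 172.31.25.195/32
Repeat the rule on the other instance's security group for the first node's IP, and add 5001/8080 only if you actually need remote API or gateway access.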

Related

How to properly run a container with containerd's ctr using --uidmap/gidmap and --net-host option

I'm running a container with ctr, and in addition to using user namespaces to map the user within the container (root) to another user on the host, I want to make the host networking available to the container. For this, I'm using the --net-host option. It is based on a very simple test container:
$ cat Dockerfile
FROM alpine
ENTRYPOINT ["/bin/sh"]
I try it with
sudo ctr run -rm --uidmap "0:1000:999" --gidmap "0:1000:999" --net-host docker.io/library/test:latest test
which gives me the following error
ctr: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/run/containerd/io.containerd.runtime.v2.task/default/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\"": unknown
Everything works fine if I either
remove the --net-host flag or
remove the --uidmap/--gidmap arguments
I tried adding the user with host uid 1000 to the netdev group, but I still get the same error.
Do I maybe need to use networking namespaces?
EDIT:
In the meantime I found out that it's an issue within runc. If I use user namespaces by adding the following to the config.json
"linux": {
"uidMappings": [
{
"containerID": 0,
"hostID": 1000,
"size": 999
}
],
"gidMappings": [
{
"containerID": 0,
"hostID": 1000,
"size": 999
}
],
and additionally do not use a network namespace, which means leaving out the entry
{
"type": "network"
},
within the "namespaces" section, I got the following error from runc:
$ sudo runc run test
WARN[0000] exit status 1
ERRO[0000] container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/vagrant/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\""
container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/vagrant/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\""
I finally found the answer in this issue in runc. It's basically a kernel restriction: a user that does not own the network namespace does not have the CAP_SYS_ADMIN capability over it, and without that capability it can't mount sysfs. Since the host user that the container's root user is mapped to did not create the host network namespace, it does not have CAP_SYS_ADMIN there.
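A quick way to observe this restriction outside of runc is with util-linux's unshare (a sketch; assumes a reasonably recent kernel and that /mnt exists):
# new user + mount namespaces that do NOT own the current network namespace:
# mounting sysfs is expected to fail with a permission error
unshare --user --map-root-user --mount sh -c 'mount -t sysfs sysfs /mnt'
# also unsharing the network namespace makes the same mount succeed
unshare --user --map-root-user --mount --net sh -c 'mount -t sysfs sysfs /mnt && echo mounted'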
From the discussion in the runc issue, I do see the following options for now:
remove mounting of sysfs.
Within the config.json that runc uses, remove the following section within "mounts":
{
  "destination": "/sys",
  "type": "sysfs",
  "source": "sysfs",
  "options": [
    "nosuid",
    "noexec",
    "nodev",
    "ro"
  ]
},
In my case, I also couldn't mount /etc/resolv.conf. After removing these two mounts, the container ran fine and had host network access. This does not work with ctr, though.
set up a bridge from the host network namespace to the network namespace of the container (see here and slirp4netns); a rough sketch of the slirp4netns route follows this list.
use docker or podman if possible; they seem to use slirp4netns for this purpose. There is an old moby issue that might also be interesting.
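For the slirp4netns option, the rough shape of it with ctr could look like this (a sketch only; the container and image names come from the question, and whether ctr's user-namespace setup plays well with slirp4netns is an assumption I have not verified):
# in one terminal: start the container with user namespaces but WITHOUT --net-host,
# so it gets its own network namespace
sudo ctr run -t --uidmap "0:1000:999" --gidmap "0:1000:999" docker.io/library/test:latest test
# in another terminal: look up the container's init PID and attach user-mode
# networking to its namespaces via a tap device
PID=$(sudo ctr task ls | awk '$1 == "test" {print $2}')
sudo slirp4netns --configure --mtu=65520 --disable-host-loopback "$PID" tap0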

Invalid Dockerrun.aws.json version, abort deployment

Why would I be getting "invalid" for this? Version 1 works fine, but for some reason I can't get this one to load.
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "islandsound_vowpal_wabbit_test",
      "image": "islandsound/vowpal_wabbit_test",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 26542,
          "containerPort": 26542
        }
      ]
    }
  ]
}
AWSEBDockerrunVersion 2 is not supported by single-container platforms; create an environment with a multi-container platform and deploy to it.
To create a multi container platform from the CLI, you can run: eb create --elb-type application -p "64bit Amazon Linux 2018.03 v2.15.2 running Multi-container Docker 18.06.1-ce (Generic)"
Answer is here:
multicontainer vs single container Dockerrun version
... the issue is because the environment created is using a "Single
Container" platform ...
If you want to run a multi-container Docker instance, you must select it when creating your environment.
Through the AWS portal:
Platform: Docker
Platform branch: Multi-container Docker running on 64bit Amazon Linux
Platform version: (recommended)
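Spelling out the hint above as a full EB CLI workflow, it would be something along these lines (the platform string is copied from the hint; the application and environment names are hypothetical):
eb init my-app -p "64bit Amazon Linux 2018.03 v2.15.2 running Multi-container Docker 18.06.1-ce (Generic)"
eb create my-multi-docker-env --elb-type application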

How to run several IPFS nodes on a single machine?

For testing, I want to be able to run several IPFS nodes on a single machine.
This is the scenario:
I am building small services on top of the IPFS core library, following the Making your own IPFS service guide. When I try to run the client and server on the same machine (note that each of them creates its own IPFS node), I get the following:
panic: cannot acquire lock: Lock FcntlFlock of /Users/long/.ipfs/repo.lock failed: resource temporarily unavailable
Usually, when you start with IPFS, you use ipfs init, which creates a new node. The default data and config for that node are stored at ~/.ipfs. Here is how you can create a new node and configure it so it can run beside your default node.
1. Create a new node
For a new node you have to use ipfs init again. Use for instance the following:
IPFS_PATH=~/.ipfs2 ipfs init
This will create a new node at ~/.ipfs2 (not using the default path).
2. Change Address Configs
As both of your nodes would now bind to the same ports, you need to change the port configuration so both nodes can run side by side. For this, open ~/.ipfs2/config and find Addresses:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
}
and change it to, for example, the following:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5002",
"Gateway": "/ip4/127.0.0.1/tcp/8081",
"Swarm": [
"/ip4/0.0.0.0/tcp/4002",
"/ip6/::/tcp/4002"
]
}
With this, you should be able to run both nodes, .ipfs and .ipfs2, on a single machine.
Notes:
Whenever you use .ipfs2, you need to set the env variable IPFS_PATH=~/.ipfs2
In your example you need to change either your client or server node from ~/.ipfs to ~/.ipfs2
You can also start the daemon on the second node using IPFS_PATH=~/.ipfs2 ipfs daemon &
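If you prefer the CLI to editing the file by hand, the same changes can be made with ipfs config (a sketch using the ports from the example above):
IPFS_PATH=~/.ipfs2 ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
IPFS_PATH=~/.ipfs2 ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
IPFS_PATH=~/.ipfs2 ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002", "/ip6/::/tcp/4002"]'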
Hello, I used .ipfs2. After running the two daemons at the same time, I can indeed open localhost:5001/webui, but the second one, localhost:5002/webui, gives an error, as shown in the attachment.
Here are some ways I've used to create multiple nodes/peer IDs.
I use Windows 10.
1st node: go-ipfs (latest version)
2nd node: Siderus Orion ipfs (connects to an Orion node, not local) -- https://orion.siderus.io/
Use VirtualBox to run a minimal Ubuntu installation. (You can set up as many as you want.)
Repeat the process and you have 4 nodes, or as many as you want.
https://discuss.ipfs.io/t/ipfs-manager-download-install-manage-debug-your-ipfs-node/3534 is another GUI that installs and lets you manage all IPFS commands without the CMD. The author just released it a few days ago and it looks well worth a look.
Disclaimer: I am not a coder or computer professional. Just a huge fan of IPFS! I hope we can raise awareness and change the world.

packer ssh_private_key_file is invalid

I am trying to use the OpenStack builder in Packer to clone an instance. So far I have developed the following template:
{
  "variables": {
  },
  "description": "This will create the baked vm images for any environment from dev to prod.",
  "builders": [
    {
      "type": "openstack",
      "identity_endpoint": "http://192.168.10.10:5000/v3",
      "tenant_name": "admin",
      "domain_name": "Default",
      "username": "admin",
      "password": "****************",
      "region": "RegionOne",
      "image_name": "cirros",
      "flavor": "m1.tiny",
      "insecure": "true",
      "source_image": "0f9b69ee-4e9f-4807-a7c4-6a58355c37b1",
      "communicator": "ssh",
      "ssh_keypair_name": "******************",
      "ssh_private_key_file": "~/.ssh/id_rsa",
      "ssh_username": "root"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sleep 60"
      ]
    }
  ]
}
But upon running the script using packer build script.json I get the following error:
User:packer User$ packer build script.json
openstack output will be in this color.
1 error(s) occurred:
* ssh_private_key_file is invalid: stat ~/.ssh/id_rsa: no such file or directory
My id_rsa is a file starting and ending with:
-----BEGIN RSA PRIVATE KEY-----
key
-----END RSA PRIVATE KEY-----
which I thought meant it was a PEM-related file, so I found this weird and made a pastebin of my PACKER_LOG: http://pastebin.com/sgUPRkGs
Initial analysis tells me that the only error is a missing packerconfig file. Upon googling this, the top results say that if Packer doesn't find one it falls back to defaults. Is this why it is not working?
Any help would be of great assistance. Apparently there are similar problems on the GitHub issues page (https://github.com/mitchellh/packer/issues), but I don't understand some of the solutions posted or whether they apply to me.
I've tried to be as informative as I can. Happy to provide any information where I can!!
Thank you.
* ssh_private_key_file is invalid: stat ~/.ssh/id_rsa: no such file or directory
The "~" character isn't special to the operating system. It's only special to shells and certain other programs which choose to interpret it as referring to your home directory.
It appears that Packer's OpenStack builder doesn't treat "~" as special, and it's looking for a key file with the literal pathname "~/.ssh/id_rsa". It's failing because it can't find a key file with that literal pathname.
Update the ssh_private_key_file entry to list the actual pathname to the key file:
"ssh_private_key_file": "/home/someuser/.ssh/id_rsa",
Of course, you should also make sure that the key file actually exists at the location that you specify.
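If you are not sure what the absolute path is, the shell can print it for you (a trivial sketch in a POSIX shell, using the key path from the question):
# print the expanded absolute path to paste into script.json
echo "$HOME/.ssh/id_rsa"
# also confirms the file actually exists
ls -l "$HOME/.ssh/id_rsa"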
I have to leave a post here as this just bit me. I was using a variable with ~/.ssh/id_rsa and then changed it to use the full path; when I did, there was a trailing space at the end of the variable value being passed in from the command line via a Makefile, which was causing this error. Hope this saves someone some time.
Kenster's answer got you past your initial question, but it sounds from your comment like you were still stuck.
Per my reply to your comment, Packer doesn't seem to support supplying a passphrase, but you CAN tell it to ask the running SSH agent for a decrypted key if the correct passphrase was supplied when the key was loaded. This should allow you to use Packer to build with a passphrase-protected SSH key, as long as you've loaded it into the SSH agent before attempting the build.
https://www.packer.io/docs/templates/communicator.html#ssh_agent_auth
The SSH communicator connects to the host via SSH. If you have an SSH
agent configured on the host running Packer, and SSH agent
authentication is enabled in the communicator config, Packer will
automatically forward the SSH agent to the remote host.
The SSH communicator has the following options:
ssh_agent_auth (boolean) - If true, the local SSH agent will be used
to authenticate connections to the remote host. Defaults to false.
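A minimal sketch of that workflow (the key path and template name are from the question; assumes ssh-agent is available and "ssh_agent_auth": true has been set in the builder):
# start an agent and load the passphrase-protected key (prompts for the passphrase once)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# Packer then authenticates via the agent instead of reading the key file directly
packer build script.json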

Can't download file from S3 in AWS Lambda without any HTTP error

This is my code in AWS lambda:
import boto3

def worker_handler(event, context):
    s3 = boto3.resource('s3')
    s3.meta.client.download_file('s3-bucket-with-script', 'scripts/HelloWorld.sh', '/tmp/hw.sh')
    print "Connecting to "
I just want to download a file stored in S3, but when I run the code, the program just runs until the timeout and prints nothing.
This is the Logs file
START RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Version: $LATEST
END RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010
REPORT RequestId: 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Duration: 300000.12 ms Billed Duration: 300000 ms Memory Size: 128 MB Max Memory Used: 28 MB
2016-07-18T23:42:10.273Z 8b9b86dd-4d40-11e6-b6c4-afcc5006f010 Task timed out after 300.00 seconds
I have this role on the Lambda function; it shows that I have permission to get files from S3:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
Is there any other setup I missed, or any way I can get this program to work?
As you mentioned a timeout, I would check the network configuration. If you are going through a VPC, this may be caused by the lack of a route to the internet. It can be solved using a NAT Gateway or an S3 VPC endpoint. The video below explains the required configuration.
Introducing VPC Support for AWS Lambda
Per the docs your code should be a little different:
import boto3
# Get the service client
s3 = boto3.client('s3')
# Download object at bucket-name with key-name to tmp.txt
s3.download_file("bucket-name", "key-name", "tmp.txt")
Also, note that Lambda has an ephemeral file system, so downloading the file accomplishes little on its own: you download it, the Lambda shuts down, and the file ceases to exist. You need to send it somewhere after you download it to Lambda if you want to keep it.
Also, you may need to tweak your timeout settings to be higher.
As indicated in another answer, you may need a NAT Gateway or an S3 VPC endpoint. I needed it because my Lambda was in a VPC so it could access RDS. I started going through the trouble of setting up a NAT Gateway until I realized that a NAT Gateway is currently $0.045 per hour, or about $1 ($1.08) per day, which is way more than I wanted to spend.
Then I considered an S3 VPC endpoint. This sounded like setting up another VPC, but it is not a VPC, it is a VPC endpoint. If you go into the VPC section of the console there is an "Endpoints" section (on the left) along with subnets, routes, NAT gateways, etc. For all the complexity (in my opinion) of setting up the NAT gateway, the endpoint was surprisingly simple.
The only tricky part was selecting the service. You'll notice the service names are tied to the region you are in. For example, mine is "com.amazonaws.us-east-2.s3".
But then you may notice you have two options, a "gateway" and an "interface". On Reddit someone claimed that they charge for interfaces but not gateways, so I went with a gateway and things seem to work.
https://www.reddit.com/r/aws/comments/a6yppu/eli5_what_is_the_difference_between_interface/
If you don't trust that Reddit user, I later found that AWS currently says this:
"Note: To avoid the NAT Gateway Data Processing charge in this example, you could setup a Gateway Type VPC endpoint and route the traffic to/from S3 through the VPC endpoint instead of going through the NAT Gateway. There is no data processing or hourly charges for using Gateway Type VPC endpoints. For details on how to use VPC endpoints, please visit VPC Endpoints Documentation."
https://aws.amazon.com/vpc/pricing/
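For reference, a Gateway-type endpoint can also be created from the CLI (a sketch; the VPC and route table IDs are placeholders, and the service name must match your region):
# create a Gateway VPC endpoint for S3 and attach it to the subnet's route table
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-2.s3 \
    --route-table-ids rtb-0123456789abcdef0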
Note, I also updated the pathing type per an answer in this other question, but I'm not sure that really mattered.
https://stackoverflow.com/a/44478894/764365
Did you check whether your timeout was set correctly? I had the same issue: it was timing out because the default value is 3 seconds and the file took longer than that to download. The timeout is set in the function's configuration.
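For example, from the CLI it can be raised with something like this (the function name and timeout value are placeholders):
# raise the Lambda timeout to 60 seconds
aws lambda update-function-configuration \
    --function-name my-download-function \
    --timeout 60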