This question refers to the following project, which is about integrating the Fabric blockchain with IPFS.
The architecture basically comprises a swarm of Docker containers that should communicate with each other (three containers: two peer nodes and one server node). Every container is an IPFS node and has a separate configuration.
I am trying to run a dockerized environment of an IPFS cluster of nodes and view the WebUI that comes with it. I set up the app by running all the steps described, and supposedly I would then be able to see the WebUI at this address:
http://127.0.0.1:5001
Everything seems to be set up and configured as it should be (I checked docker logs <container> for every container). Nevertheless, all I get is an empty page.
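For reference, a rough sketch of how I understand the API port needs to be exposed for the WebUI to be reachable from the host (the image and container names below are placeholders, not my exact setup):

```
# Sketch only: image, container name and port mapping are placeholders.
# The WebUI is served by the IPFS HTTP API at /webui, so the bare root of
# port 5001 typically shows nothing useful on its own.
docker run -d --name ipfs-peer0 -p 5001:5001 ipfs/go-ipfs

# The daemon inside the container must listen on all interfaces, not only
# 127.0.0.1, otherwise the published port has nothing behind it.
docker exec ipfs-peer0 ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
docker restart ipfs-peer0

# Then try http://127.0.0.1:5001/webui from the host.
```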
When I try to view my local IPFS repository via
https://webui.ipfs.io/#/welcome
I get a message that this is probably caused by a CORS error (which makes sense), and it suggests changing the IPFS configuration in order to bypass the CORS restriction. See this:
Screenshot
I tried to implement the solution by changing the headers in the configuration, but it doesn't seem to have any effect.
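For reference, the change I tried is roughly the following (a sketch of the usual IPFS CORS instructions, applied inside one of the containers; the container name is a placeholder):

```
# Sketch: standard CORS headers for the IPFS HTTP API, set inside a container.
# "ipfs-peer0" is a placeholder for the actual container name.
docker exec ipfs-peer0 ipfs config --json \
  API.HTTPHeaders.Access-Control-Allow-Origin '["http://127.0.0.1:5001", "https://webui.ipfs.io"]'
docker exec ipfs-peer0 ipfs config --json \
  API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'

# The daemon only reads its config on startup, so restart it (here, the
# whole container) for the new headers to take effect.
docker restart ipfs-peer0
```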
The confusion stems from the fact that after setting up the containers we have three different containers with three configurations, and in addition the IPFS daemon is running in each one of them. Outside the containers the IPFS daemon is not running.
I don't know whether the IPFS daemon should also be running outside the containers.
I'm not sure which configuration (if not all of them) I should modify.
Should I use a reverse proxy to solve this?
Useful Info
The development is done in an Ubuntu Linux VM that meets all the necessary requirements.
Related
I have downloaded CodeReady Containers on Windows for installing my OpenShift cluster. I need to deploy 3scale on it using the operator from OperatorHub, but the OperatorHub page is empty.
Digging deeper, I found that a few pods on the cluster are not running and show the state "ImagePullBackOff".
I deleted the pods in order to get them restarted, but the error won't go away. I checked the event logs, and all the screenshots are attached.
Pods Terminal logs
This is an error that I keep getting when I start my cluster. Sometimes it comes up, sometimes the cluster starts normally, but maybe this has something to do with it.
Quay.io Error
This is my first time making a deployment on an OpenShift cluster and setting up my cluster environment. So far I have not been able to resolve the issue, even after deleting and restarting the cluster.
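A rough sketch of how the failing pods can be inspected further (the namespace and pod name below are placeholders):

```
# Sketch: find pods stuck in ImagePullBackOff and read the pull error.
oc get pods --all-namespaces | grep ImagePullBackOff

# Placeholders: the OperatorHub catalog pods normally live in
# openshift-marketplace; the Events section at the bottom of the output
# shows why the image pull fails (e.g. a timeout or auth error against quay.io).
oc describe pod certified-operators-xxxxx -n openshift-marketplace
```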
I am getting a connection timeout when running the command during bootstrap.
Any configuration suggestions on the networking side, in case I am missing something?
It says the Kubernetes API call timed out.
This is obviously very hard to debug without having access to your environment. Some tips to debug the OKD installation:
Before starting the installation, make sure your environment meets all the prerequisites. Often, the problem lies with a faulty DNS / DHCP / networking setup. Potentially deploy a separate VM into the network to check if everything works as expected.
The bootstrap node and the master nodes are deployed with the SSH key you specify, so in vCenter, get the IPs of the machines that are already deployed and use SSH to connect to them. Once on a machine, use sudo crictl ps and sudo crictl logs <container-id> to review the logs for the running containers, focusing on the components:
kube-apiserver
etcd
machine-controller
In your case, the API is not coming up, so reviewing the logs of the above components will likely show the root cause.
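As a rough sketch (the key path, node IP and container ID are placeholders; on OKD/OCP machines the SSH user is normally core):

```
# Sketch: connect to the bootstrap or a master node with the SSH key from the
# install config. The key path and IP are placeholders.
ssh -i ~/.ssh/okd_install_key core@192.168.1.10

# On the node: list the running containers and read the logs of the
# control-plane components mentioned above.
sudo crictl ps | grep -E 'kube-apiserver|etcd|machine'
sudo crictl logs <container-id>
```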
I have a basic stack of containers on their own user-defined network with a subnet of 172.21.0.0/16. My MySQL container's address is 172.21.0.2 and the PHP/Apache container's address is 172.21.0.3.
Until this point I had to permit MySQL to allow incoming connections from the PHP container at 172.21.0.3, which made perfect sense. Now it seems as though the connections are coming from 172.21.0.1, the gateway, and this doesn't make much sense to me. My (basic to intermediate) understanding suggests that the gateway should only be used when traffic is destined for an address outside its local network - but obviously in this case MySQL and PHP/Apache are on the same network.
Two of our environments have started acting like this, and while it's a simple fix to permit connections from the gateway address, I'm hesitant to proceed without an understanding as to what has happened and why. This also seems to add extra delay to database queries within the application.
Logging in to an affected environment via phpMyAdmin displays "User: root@172.21.0.1" in the "Database Server" information pane. An unaffected environment displays "root@phpmyadmin_1.test_default" (user@[container].[network]).
Both environments are using the exact same images, and the same version of Docker - 18.06.1-ce. Other than a version upgrade of Docker, nothing else has changed with regards to the docker-compose.yml I was using.
Why has my environment started acting like this? Should I prefer the connection coming in from the actual source, and not via the gateway? How can I return to that way of operation?
Thank you for any guidance or knowledge.
For anyone else that experiences a similar rut, I'm of the mind that this was caused by an upgrade of Docker from 18.03.1-ce to 18.06.1-ce via Docker's own repository. Performing a server reboot after this operation has (for now) restored sense to the networking of the stack.
The connection to my MySQL container is now correctly coming from the PHP/Apache container and not from the gateway address of the bridge network. The lag this introduced is gone, and I'm able to remove the privilege associated with the gateway address.
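For anyone who wants to verify the behaviour, a quick sanity check (the network and container names below are placeholders for the Compose-generated names in my stack):

```
# Sketch: confirm the container addresses and the gateway on the user-defined
# network ("test_default" / "mysql_1" are placeholders for the Compose names).
docker network inspect test_default \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}'

# Check which host MySQL sees for the live connections; after the reboot this
# shows the PHP container (172.21.0.3) again rather than the gateway (172.21.0.1).
docker exec -it mysql_1 mysql -uroot -p \
  -e "SELECT host, user FROM information_schema.processlist;"
```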
I'm planning to migrate my application stack to Docker. Let me describe the services I'm currently using:
HAProxy, which is used for SSL termination on all services' connections (HTTP and raw TCP connections), and forwards traffic to the services below.
Nginx, which serves static files, like updates and some information pages.
Node.js, which runs the main applications.
MySQL (MariaDB), the database used and shared by all the applications.
My question is about the database.
What's the proper way of running MariaDB in this case?
Install and run it inside my container, along with the other services?
Run the official image in a separate container, and link my container to it with the --link option of Docker's run command?
Does the first option have any disadvantage?
The TeamSpeak Docker container uses the second option, and that's what made me question the correct way of running the database in my case, but I particularly feel more inclined to package all the services inside my own image.
Docker Philosophy: Split your application into microservices and use a container for each microservice.
In your case, I recommend a MariaDB container. Using the official (library) image gives you easier update management, but feel free to use your custom image.
Then an HAProxy container, an Nginx container, and a Node.js container.
This way you divide your application into microservices, and you can upgrade, manage, and troubleshoot them more easily, each in an isolated environment.
If you are thinking about delivering your application to end users via Docker, a simple docker-compose file will do the trick for easily launching the required containers.
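A minimal sketch of that split using plain docker commands (image tags, names, paths and the password are placeholders); a docker-compose file would express the same thing declaratively:

```
# Sketch: one container per service, all on a shared user-defined network.
# Names, tags, paths and credentials below are placeholders.
docker network create app-net

# Official MariaDB image; the other containers reach it by the name "db".
docker run -d --name db --network app-net \
  -e MYSQL_ROOT_PASSWORD=changeme mariadb:10.6

# Your own Node.js application image.
docker run -d --name app --network app-net my-node-app:latest

# Nginx for the static files, HAProxy in front for SSL termination.
docker run -d --name web --network app-net \
  -v "$PWD/static":/usr/share/nginx/html:ro nginx:alpine
docker run -d --name proxy --network app-net -p 443:443 \
  -v "$PWD/haproxy.cfg":/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:2.8
```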
WordPress is running inside a Docker container on hostA and MySQL is running inside a Docker container on hostB. Is it possible to link these two containers so that they can communicate with each other? Is it even possible to do something like this?
Any help on this is much appreciated, as I am pretty new to Docker.
I cannot answer your question, but there is a part of the documentation about this:
https://docs.docker.com/engine/userguide/networking/default_network/container-communication/
You will find a section called: Communication between containers
Yes, this is possible with a Docker overlay network.
The setup is not as easy as setting up a link or private network on the same host.
You will have to configure a key-value store to get this working.
Here is the relevant docker documentation.
An overlay network:
https://docs.docker.com/engine/userguide/networking/dockernetworks/#an-overlay-network
Here are the steps for setup
https://docs.docker.com/engine/userguide/networking/get-started-overlay/
In my opinion, it's not bad to isolate the app and database containers and connect them outside the Docker network. If you end up adding a key/value store like Consul, you can always leverage the service discovery that comes along with it to dynamically discover the services.
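For what it's worth, on newer Docker (1.12+) swarm mode ships its own built-in store, so an attachable overlay can be created without running Consul or etcd yourself. A rough sketch, with hostnames, IPs, names and credentials as placeholders:

```
# Sketch: cross-host overlay network via swarm mode (no external key-value store).

# On hostB (where MySQL runs), make it the swarm manager:
docker swarm init --advertise-addr 10.0.0.20
docker network create --driver overlay --attachable app-net

# On hostA, join the swarm with the token printed by "swarm init":
docker swarm join --token <worker-token> 10.0.0.20:2377

# Containers on either host can now attach to the same network and reach
# each other by container name.
docker run -d --name mysql --network app-net \
  -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=wordpress mysql:5.7   # on hostB
docker run -d --name wordpress --network app-net -p 80:80 \
  -e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=changeme wordpress                             # on hostA
```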
I would go for https://github.com/weaveworks/weave.
Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery.
It might be overkill for your use case, but it would be very helpful if you want to move the containers around in the future.
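A rough sketch of what that looks like with Weave Net (the peer IP and credentials are placeholders):

```
# Sketch: Weave Net across two hosts; peer IP and credentials are placeholders.

# On hostB (runs MySQL):
weave launch
eval $(weave env)          # route docker commands through the Weave proxy
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=changeme mysql:5.7

# On hostA (runs WordPress), peer with hostB:
weave launch 10.0.0.20
eval $(weave env)
docker run -d --name wordpress -p 80:80 \
  -e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=changeme wordpress
# WeaveDNS resolves "mysql" to the container running on hostB.
```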