I've been doing some research but with no luck, so here goes: Is the fabric-tools container image appropriate for production blockchain networks?
P.S. I know I don't have a code error or an especially technical question, but I don't know where else to ask this. Sorry!
In addressing the production requirements, we have not set up the fabric-tools image in production.
The reason is that, in a TLS-enabled network, the tools image can only really serve its purpose if the TLS certificate of each node is mounted into the container, which I think is questionable in a production scenario.
You can check the Dockerfile in the official Fabric GitHub repository. In this line you can see the tools that are built into the fabric-tools image.
As mentioned in this post, Difference between fabric-tools and fabric-ca-tools images, it includes these tools:
configtxgen configtxlator cryptogen discover idemixgen osnadmin peer
The fabric-tools container is a helper container that you can use to make things easier, for example creating a channel, joining it, and so on.
You can do the operations without it.
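For illustration, here is a hedged sketch of what those channel operations can look like from a tools container. The container name cli, orderer address, channel name and certificate path are placeholders (borrowed from the style of the Fabric samples), not anything fixed by the image:
# all names, ports and paths below are assumptions for illustration only
docker exec -it cli peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/mychannel.tx --tls --cafile /path/to/orderer-tls-ca.pem
docker exec -it cli peer channel join -b mychannel.block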
In production you would, for example, use a CA instead of creating the crypto-config artifacts with the cryptogen tool.
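A rough sketch of what that looks like with a Fabric CA (URL, credentials and CA name are placeholders; TLS options are omitted here):
# enroll the CA admin, then register a peer identity - values are assumptions
fabric-ca-client enroll -u https://admin:adminpw@ca.example.com:7054 --caname ca-org1
fabric-ca-client register --caname ca-org1 --id.name peer0 --id.secret peer0pw --id.type peer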
Also, the fabric-tools container is usually run with the whole channel-artifacts folder and the crypto-config folder mounted into it. That would not be very practical in production, so you might have to adapt things depending on the intended use.
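As a hedged example, in a dev setup such a tools container is often started roughly like this, with both folders mounted in (image tag, container name and target paths are assumptions):
# keep the container running so you can docker exec into it later
docker run -d --name cli \
  -v "$PWD/channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts" \
  -v "$PWD/crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto-config" \
  hyperledger/fabric-tools:2.5 sleep infinity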
Both Docker and containerd provide Go clients that expose operations such as listing, exporting, or tagging images. How can this be done in CRI-O?
e.g.:
github.com/containerd/containerd
and
github.com/docker/docker/client
It seemed logical to me that such an option would exist for such a simple need. I searched around, and it seems to be a wanted but unfulfilled feature, judging by these issues: 1 2 3. There is some sense to this, since crictl was intended as a debugging tool for CRI-O, not a container management tool.
From personal use: if you are open to switching away from Docker, Podman could be an option for such operations. It is a daemonless alternative to Docker and CRI-O, and it employs other open-source tools to achieve its goals:
buildah - handles building and manipulating container images
skopeo - registry-specific tasks related to handling container images (probably the first candidate for your use case, even by itself)
If you want to stick to the familiar CLI commands, Podman is your guy; if you want to go as minimalist as possible, using skopeo directly could be an option.
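To make that concrete, here are a few hedged examples of the image operations from the question done without a daemon (image and registry names are placeholders):
podman images                                           # list local images
podman tag myapp:latest registry.example.com/myapp:1.0  # tag an image
podman save -o myapp.tar myapp:latest                   # export an image to a tarball
skopeo inspect docker://registry.example.com/myapp:1.0  # inspect a remote image
skopeo copy containers-storage:localhost/myapp:latest docker-archive:myapp.tar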
hope this helps you in your decision-making process ;)
I have a container specified in my pipeline as:
container 'insilicodb/docker-impute2'
It allows me to just run the pipeline without downloading the necessary programs. How can I see a list of what it contains?
That image is not on Docker Hub, so you will first need to know which registry it is being pulled from. Insilicodb is, however, a known publisher on the Hub. An example of theirs that lists its Dockerfile is https://hub.docker.com/r/insilicodb/ubuntu/dockerfile.
There is no built-in way to view the Dockerfile of an image you have pulled; it is up to the publisher to provide it. Images don't have to be built from a Dockerfile and may not have one at all. If there is one, it will tell you the steps taken to create that image.
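What you can do is inspect the image itself. A hedged sketch, assuming the image can be pulled and contains a shell:
docker pull insilicodb/docker-impute2
docker history --no-trunc insilicodb/docker-impute2   # layer-by-layer build commands
docker inspect insilicodb/docker-impute2              # metadata: env, entrypoint, volumes
docker run --rm -it insilicodb/docker-impute2 sh      # poke around the filesystem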
By the way, "without downloading necessary programs" is the point of containers: they are meant to ship with everything they need to run, without you having to install anything.
I'm trying to get into Docker and am using a current project as a learning exercise. It's a fairly basic application that uses CentOS 7, Node and MySQL.
My first thought was to pull a CentOS 7 image plus images for the other components mentioned above.
When I tested out a Node image I realized I might not need a CentOS image, but I do need MySQL... I'm not sure what the recommended way to combine images is here, or even whether it is the right route for this project.
Should I start with an OS image and install all the dependencies/services I need, like on any other server, or do I run the images together with Docker Compose or something like that?
I tried looking at building WordPress images to see what they were doing but most tutorials just reference a prebuilt image.
I'm sure I could hack something together but I wanted to go the preferred route.
My hope was that I could specify all of these things in a Dockerfile so I could share it easily.
Any direction on this is appreciated. Thanks!
Quoting from the official best practices:
Run only one process per container
In almost all cases, you should only run a single process in a single container. Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers. If that service depends on another service, make use of container linking.
If you want to run more than one process in a container, you will need some kind of supervisor or init system, and you will lose some of the features Docker provides (logging, automated restarting).
All in all, it is more hassle than running one container per process. The latter is also slightly more secure, as the processes cannot attack each other as easily.
So in your concrete case you would run one MySQL container and one Node container (possibly based on node:onbuild) and link the MySQL container to the Node container so that Node can talk to MySQL.
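A minimal sketch of that wiring with plain docker commands (image tags, container names, ports and credentials are placeholders; the same setup is commonly written as a docker-compose.yml instead):
# start MySQL first, then link the app container to it
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=app mysql:5.7
docker run -d --name app --link mysql:mysql -e DB_HOST=mysql -p 3000:3000 my-node-app:latest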
12-Factor Apps suggest that you configure your application using environment variables. So far, so good. I can easily imagine that this is a good way to do it if all you need to set is, e.g., a connection string.
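Something like this, where the variable name, value and binary are placeholders and the application reads the variable at startup:
export DATABASE_URL="postgres://app:secret@db.example.com:5432/app"
./my-app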
But what if you have a more complex configuration with lots and lots of values? I surely do not want to end up with 50+ environment variables, do I?
How could I solve this and still be compliant with the idea of 12-Factor Apps?
From a quick read of the configure link you provided, I agree with the author's claim that there is a widespread problem, but I am not convinced that their proposed solution is going to always be best. Like you, I don't relish the idea of having to define dozens of environment variables to configure an application. So here are some alternative ideas.
First, read Chapter 2 of the Config4* Getting Started Guide (disclaimer: I am the main author of that software). In particular, notice that its support for what I call adaptive configuration can go a long way towards addressing the concern that you ask about. Is Config4* the ultimate solution? Possibly not, but I think it is a good step in the right direction.
Second, the chances are that whatever application you are developing/maintaining has already settled on a particular configuration technology, such as XML files or Java property files, and it won't be feasible to migrate to using Config4*. This raises the question: is there anything you can do to avoid having a proliferation of, say, XML-based configuration files when you have multiple environments (such as dev, UAT, staging and production) in which the application will be deployed? I have outlined an approach for dealing with this issue in another StackOverflow article.
I have a Subversion server running with Apache mod_dav_svn, and it works nicely, but the browsing ability via HTML is a bit spartan. Is there a way to customize it at all?
There are two things I'd like to do that would make a huge difference:
Separate the directories from the files so all the directories are at the top. Right now everything is in alphabetical order. (The picture above happens to show all the directories preceding the files in alphabetical order, but trust me, that's not the normal case.)
List the basic file statistics (file size, mod time, last updated version, etc)
Is it possible to do either of these with mod_dav_svn?
In a vanilla Subversion install, the web interface is very spartan by design. (Remember the HTTP interface is designed for SVN clients, not human beings.)
You can customize the display somewhat via the SVNIndexXSLT directive. (Here is a good place to start).
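As a hedged example of that directive (the file location, repository path and stylesheet URI are assumptions about a typical Debian-style mod_dav_svn setup; the XSL file itself must also be served by Apache at that URI):
# append an SVNIndexXSLT directive to the dav_svn configuration
cat >> /etc/apache2/mods-available/dav_svn.conf <<'EOF'
<Location /svn>
    DAV svn
    SVNParentPath /var/svn
    SVNIndexXSLT "/svnindex.xsl"
</Location>
EOF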
If you want something richer (with logs and diff features), you will need to install a special front end. WebSVN and ViewVC are very popular. There is also Trac, but this is a higher-level tool.
A list of other repo browsing tools.
Just FYI, we use WebSVN for our repo instance. It took some effort to get it up and running, but once it is set up you can pretty much leave it alone.
WebSVN looks like it might help you. I tried Trac and it is very slick, but I found it complicated, and it seems like overkill for what you're looking for, imo.
Not out of the box, that is, without modifying the source code. You might be interested in tools like ViewSVN, or the more sophisticated Trac or Redmine.