Both Docker and containerd provide Go clients that expose operations such as listing, exporting, or tagging images. How can this be done in CRI-O?
eg:
github.com/containerd/containerd
and
github.com/docker/docker/client
It seemed logical to me that such an option would exist for such a simple need. After searching around, it appears this is a requested feature that has not been implemented, judging by these issues: 1 2 3. That makes some sense, since crictl was intended as a debugging tool for CRI-O rather than a container management tool.
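For reference, crictl only exposes a small image surface. A quick sketch of what is (and isn't) there, assuming a running CRI-O with the default socket:

```shell
# List images known to CRI-O (read-only view, similar to `docker images`)
crictl images

# Pulling and removing images are supported:
crictl pull docker.io/library/alpine:latest
crictl rmi docker.io/library/alpine:latest

# There is no `crictl save` / `crictl tag` equivalent -- exporting or
# re-tagging an image falls outside crictl's debugging scope.
```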
From personal use: if you are open to switching from Docker, Podman could be an option for such operations. It is a daemon-less alternative to Docker and CRI-O, and it employs other open-source tools to achieve its goals:
buildah - handles building and manipulating container images
skopeo - registry-specific tasks related to container image handling (probably the first candidate for your use case, even by itself)
If you want to stick to the popular CLI commands, Podman is your guy; if you want to go as minimalist as possible, using Skopeo directly could be an option.
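As a sketch of the suggestions above (image names and the registry host are placeholders):

```shell
# skopeo alone handles most registry/image tasks without a daemon:
skopeo inspect docker://docker.io/library/alpine:latest   # image metadata
skopeo copy docker://docker.io/library/alpine:latest \
    docker-archive:/tmp/alpine.tar                        # export an image
skopeo copy docker://registry.example.com/app:v1 \
    docker://registry.example.com/app:stable              # "tag" by copying

# podman mirrors the familiar docker CLI:
podman images
podman save -o /tmp/alpine.tar docker.io/library/alpine:latest
podman tag docker.io/library/alpine:latest myrepo/alpine:custom
```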
hope this helps you in your decision-making process ;)
I am currently starting to learn and work with JADE for a project at my university. Today I struggled with a piece of terminology in the JADE documentation: the word "container". Sorry if this is a dumb question, but I am a total beginner in this sector.
From the documentation: "1.1 Containers and Platforms
Each running instance of the JADE runtime environment is called a Container as it can contain several agents. The set of active containers is called a Platform. A single special Main container must always be active in a platform and all other containers register with it as soon as they start. It follows that the first container to start in a platform must be a main container while all other containers must be “normal” (i.e. non-main) containers and must “be told” where to find (host and port) their main container (i.e. the main container to register with)."
My questions: Do these JADE containers have anything in common with the containers I know from Docker, Podman, LXC, and so on? Is anything happening to encapsulate an application to avoid dependency problems or increase security? Anything with process trees, namespaces, or something like that? Or is it just a structure for grouping multiple agents that has nothing in common with Docker and co.? I'm totally lost at this point... Thanks for your help!
Best regards,
Markus
First off, I've been struggling at the same point recently.
From what I've understood (please correct me if I'm wrong), the containers are indeed different from what you know from Docker etc.
JADE containers rather serve as an organizational unit for agents.
You can, for instance, start a container and run several agents in it, which can identify and communicate with each other. Another possibility is, as you already mentioned in your question, to start a main container and attach other non-main containers to it within the same JADE platform.
As a further step, you can also start two main containers on different machines, each running a separate platform, and connect them as remote platforms (e.g., by using the MTP HTTP extension).
So, basically agents are organized in containers which are again organized in platforms.
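A minimal sketch of how that looks when launching JADE from the command line (host, jar names, and the agent class are placeholders):

```shell
# Start a main container (one per platform), with the management GUI:
java -cp jade.jar jade.Boot -gui

# Start a normal (non-main) container on another machine, register it
# with the main container, and launch one agent inside it:
java -cp jade.jar:myagents.jar jade.Boot -container \
    -host main-host.example.com -port 1099 \
    myAgent:com.example.MyAgent
```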
I'm trying to install Openwhisk onto Openshift. I've followed the official guide and it worked.
Now, the point is that my environment would be a multitenant ecosystem, so suppose there are two different users (Ux and Uy) who want to run their containers in my OpenWhisk environment.
I'd like to have the following projects in my openshift:
Core project, which hosts OpenWhisk's Ingress, Controller, Kafka, and CouchDB components (maybe also the Invokers?)
UxPRJ project, that hosts only the containers running actions created by Ux (maybe also the Invokers?)
UyPRJ project, that hosts only the containers running actions created by Uy (maybe also the Invokers?)
The following images better explain what I've in mind:
or also:
Is this configuration possible?
Looking around, I wasn't able to find anything like that...
Thank you.
The OpenWhisk load balancer, which assigns functions to invokers, does not segregate users in the way you want, but what you want is possible if you modify the load balancer. The way it works now is that there is a list of available invokers which forms the allowed set for a function assignment. At that point you could take a per-user partitioning into account and form the allowed set of invokers differently. There are other ways to realize the partitioning you want as well, but all of them require modifying the OpenWhisk control plane.
I've been doing some research but with no luck, so here goes: Is the fabric-tools container image appropriate for production blockchain networks?
P.S. I know I don't have any code errors or a more technical question but I don't know where else to ask this. Sorry!
In our production deployments, we have not set up the Fabric Tools image.
The reason: on a TLS-enabled network, for the tools image to actually serve its purpose, the TLS certificate of each node would need to be mounted into the container, which I think is questionable in a production scenario.
You can check the Dockerfile in the official Fabric GitHub repository. In this line you can see the tools that are built into the fabric-tools image.
As mentioned in the post Difference between fabric-tools and fabric-ca-tools images, it includes these tools:
configtxgen, configtxlator, cryptogen, discover, idemixgen, osnadmin, peer
The fabric-tools container is a helper container that you can use to make things easier, for example creating a channel and joining it.
You can do the operations without it.
In production you would, for example, use a CA instead of creating the crypto-config artifacts with the cryptogen tool.
Also, the fabric-tools container is usually used with the whole channel-artifacts folder and the crypto-config folder mounted in. This would not be very practical in production; you might have to make modifications depending on the intended use.
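For illustration, these are the kinds of helper commands typically run inside a fabric-tools container (the channel name, profile, orderer address, and paths are placeholders from the common samples workflow):

```shell
# Generate a channel creation transaction from configtx.yaml:
configtxgen -profile TwoOrgsChannel -channelID mychannel \
    -outputCreateChannelTx ./channel-artifacts/mychannel.tx

# Create the channel and join a peer to it (the peer CLI needs the admin
# MSP and, on a TLS network, the orderer's TLS CA cert mounted in):
peer channel create -o orderer.example.com:7050 -c mychannel \
    -f ./channel-artifacts/mychannel.tx
peer channel join -b mychannel.block
```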
I'm trying to get into Docker and using a current project as a learning exercise. It's a fairly basic application that uses Centos 7, Node and MySQL.
My first thought was to pull a CentOS 7 image along with images for the other components mentioned above.
When I tested out a Node image, I realized I might not need a CentOS image, but I do need MySQL... I'm not sure what the recommended way to combine images is, or even if this is the right route for this project.
Should I start with an OS image and install all the dependencies/services I would need like on any other server or do I run the images together with Docker Compose or something like that?
I tried looking at building WordPress images to see what they were doing but most tutorials just reference a prebuilt image.
I'm sure I could hack something together but I wanted to go the preferred route.
My hope was that I could specify all of these things in a Dockerfile so I could share it easily.
Any direction on this is appreciated. Thanks!
Quoting from the official best practices:
Run only one process per container
In almost all cases, you should only run a single process in a single container. Decoupling applications into multiple containers makes it much easier to scale horizontally and reuse containers. If that service depends on another service, make use of container linking.
If you want to run more than one process in a container you will need some kind of supervisor or init system and you will lose some of the features docker provides (logging, automated restarting).
All in all, it is more hassle than running one container per process. The latter is also slightly more secure, as processes cannot attack other processes as easily.
So in your concrete case, you would run one MySQL container and one Node container (possibly based on node:onbuild) and link MySQL to Node so that Node can talk to MySQL.
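A minimal docker-compose.yml sketch of that two-container setup (service names, credentials, and the build context are assumptions; on a modern Compose setup, services on the same network reach each other by service name, which replaces the older `--link` mechanism):

```yaml
services:
  node:
    build: .            # your Node app's Dockerfile
    ports:
      - "3000:3000"
    environment:
      DB_HOST: mysql    # reachable by service name
    depends_on:
      - mysql
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: appdb
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```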
What is the best way to create a custom OpenShift cartridge?
Looking at documentation and examples, I am seeing a lot of old-school compile-from-source installation of the component that the cartridge needs to run.
Some examples:
https://www.openshift.com/blogs/lightweight-http-serving-using-nginx-on-openshift
https://github.com/boekkooi/openshift-diy-nginx-php/blob/master/.openshift/action_hooks/build_nginx
https://github.com/razorinc/redis-openshift-example/blob/master/.openshift/action_hooks/build
and a ton of others compile from source.
I need to create some custom cartridges on my project, but doing it this way feels wrong.
Is there any reason I can't use yum and puppet/augeas to do the building, instead of curl, make, and sed?
Or is this the best practice? If so, why are we doing this 2000-style?
I'll do my best to explain this the best way I can. Feel free to let me know If I need to explain anything in more detail.
I'm assuming you're creating a custom binary cartridge (i.e., a language cartridge such as Ruby, Python, etc.). Since none of the nodes have that binary installed on the system, the custom cartridge you're creating will need to provide the binary and its libraries.
When you install a package with yum, it's going to place files in several different directories (/etc, /usr, /var, etc.). Since you're creating a cartridge that will be copied over to several nodes, you'll need to package all these items so they can be copied to a node and executed without having to be installed system-wide.
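To make that concrete, a relocatable (v2-format) cartridge roughly follows this layout, keeping everything it needs under its own directory instead of system paths (the names follow the cartridge format docs, but treat the details as a sketch):

```
mycart/
├── metadata/
│   └── manifest.yml     # cartridge name, version, published endpoints
├── bin/
│   ├── setup            # one-time install steps (unpack the bundled binary)
│   └── control          # start/stop/restart/status hooks
├── usr/                 # the bundled binary and its libraries
└── env/                 # environment variables exported to the gear
```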
As for docs, I would suggest taking a look at these:
https://www.openshift.com/developers/download-cartridges
https://www.openshift.com/blogs/new-openshift-cartridge-format-part-1
https://www.openshift.com/blogs/new-openshift-cartridge-format-part-2