How can we deploy an updated/new CorDapp on a running node without stopping the node? - build.gradle

I want to make running the node and deploying the CorDapp independent of each other.
Whenever we modify the code (in states, contracts, or any other Java part), how can we build the updated project into a JAR and update it directly on the node?

Flows and explicit contract upgrades can currently only be updated during node restarts, as mentioned in the documentation.
You can use CorDapp distribution services to automate the distribution part, but every operator still needs to restart nodes manually.

Presently it is not possible; the node has to be restarted in order to carry out the update.
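For reference, the restart-based update cycle typically looks like the sketch below. The jar name, cordapps path, and service name are assumptions about a typical Gradle-built CorDapp deployment, not details from the question:

# Build the updated CorDapp jar (assumes a standard Gradle project)
./gradlew jar

# Copy the new jar into the node's cordapps directory (path is a placeholder)
cp build/libs/my-cordapp-1.1.jar /opt/corda/cordapps/

# Restart the node so it picks up the new jar
# (assumes the node runs as a systemd service named "corda")
sudo systemctl restart corda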


Does GitHub Actions support on-demand self-hosted runners?

We need to use a GitHub Actions self-hosted runner because we need access to on-premises resources.
I understand that we can run the self-hosted runner on a VM or in a Docker container.
Can we run the self-hosted runner on demand? Like the GitHub-hosted runner, which always uses a clean, isolated VM that is destroyed at the end of job execution; or like the job agents on Azure DevOps/GitHub, which create a clean job agent container to run the pipeline and delete it at the end.
Can we do something similar with a GitHub Actions self-hosted runner? E.g. allow us to register a VM with a Docker engine or a Kubernetes cluster as a self-hosted runner; when I run the GitHub workflow, it should start a container, run the workflow inside that container, and delete the container afterwards.
Is it possible to do that now? I am not sure if there is a roadmap document somewhere for GitHub Actions.
If you use AWS, you can try ec2-github-runner. It does exactly what you're looking for.
I believe the same approach can also be implemented for the other cloud providers.
GitHub itself doesn't provide such capabilities at the moment.
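As a sketch of the pattern, a workflow using ec2-github-runner typically has three jobs: start the runner, run the work on it, then stop it. The AMI, subnet, security group, region, and secret names below are placeholders, and the parameters follow the action's documented usage at the time of writing:

name: build-on-demand
on: workflow_dispatch
jobs:
  start-runner:
    runs-on: ubuntu-latest
    outputs:
      label: ${{ steps.start-ec2-runner.outputs.label }}
      ec2-instance-id: ${{ steps.start-ec2-runner.outputs.ec2-instance-id }}
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - id: start-ec2-runner
        uses: machulav/ec2-github-runner@v2
        with:
          mode: start
          github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          ec2-image-id: ami-0123456789abcdef0        # placeholder AMI
          ec2-instance-type: t3.medium               # placeholder size
          subnet-id: subnet-0123456789abcdef0        # placeholder
          security-group-id: sg-0123456789abcdef0    # placeholder
  do-the-job:
    needs: start-runner
    runs-on: ${{ needs.start-runner.outputs.label }} # the fresh EC2 runner
    steps:
      - run: echo "work that needs VPC/on-premises access goes here"
  stop-runner:
    needs: [start-runner, do-the-job]
    runs-on: ubuntu-latest
    if: ${{ always() }}                              # clean up even on failure
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - uses: machulav/ec2-github-runner@v2
        with:
          mode: stop
          github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          label: ${{ needs.start-runner.outputs.label }}
          ec2-instance-id: ${{ needs.start-runner.outputs.ec2-instance-id }}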
I think the question is a little misleading, but if I understand properly, what you are trying to achieve is stateless GitHub Actions workflow execution on your self-hosted runner. Unfortunately, this is currently impossible. Only GitHub-hosted runners work the way you describe. According to the documentation:
GitHub-hosted runner is always a clean isolated virtual machine, and it is destroyed at the end of the job execution.
You can read more about the differences between each runner type here.

Using berks for local development only?

I don't want to use berks in production because I don't like the idea of nodes going out to the web to pull cookbooks (I only want them to pull them from the Chef server in the normal way). But I like using Berks for local development because it resolves the dependencies for kitchen for me.
I was thinking about just adding the Berksfile and Berksfile.lock to .gitignore, but I figured I'd ask whether it is possible to accomplish this with Berks without removing it from production.
"nodes" will never go to the internet looking for cookbooks, they'll always be sourced from the chef server, so.... The question back is: how do you propose to deliver cookbooks to the chef server used to manage your production nodes?
What most people appear to do is commit the Berkshelf lock file and just run "berks apply" against the target Chef server. That will most likely fit your needs.
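A minimal sketch of that flow, assuming a standard Berkshelf setup; the environment name "production" is just an example:

# Resolve and fetch the dependencies declared in the Berksfile
berks install

# Push the resolved cookbooks to the Chef server
berks upload

# Pin the locked cookbook versions to a Chef environment
berks apply production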
Personally, I like better separation between development and my production/non-production systems. I create a release tarball containing all the cookbooks I've tested in development, using the "vendor" command in Berkshelf, and store this binary in an artifact repository like Nexus. I suspect many would consider this overkill, but it enables me to deliver an offline (no internet connection required) and traceable delivery of my configuration.
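The vendoring step could look like this sketch; the directory name and version number are assumptions:

# Resolve dependencies and copy every cookbook into a self-contained directory
berks vendor vendored-cookbooks

# Package the vendored cookbooks as a versioned, offline-installable artifact
tar -czf cookbooks-1.0.0.tar.gz vendored-cookbooks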

Can a TeamCity build agent be configured to only run builds with a particular parameter dependency?

I have a TeamCity build agent installed on a machine which in theory is dedicated to running dynamic security scans and I don't want it doing anything else (i.e. running the duplicates finder).
Short of either creating custom agent configuration properties and then customising each build's agent requirements (which perhaps, strictly speaking, I should be doing anyway), or configuring the agent to only run selected configurations, is there any way to avoid this? Both approaches require additional configuration on every single build.
In a perfect world, I'd like to be able to tell the agent to only ever run builds which match a particular agent dependency. Is this possible or am I coming at it from the wrong direction?
I'm afraid TeamCity doesn't provide a way to specify that an agent can run only configurations with a specific property (and not run other configurations).
So there are only two ways to restrict agents: either with agent requirements, or by configuring the agent to run only selected configurations.
You could probably make a batch change to your build configuration properties, because all build configuration settings/properties are stored in XML files on disk.
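If you go the requirements route, one common pattern is to tag the agent in its conf/buildAgent.properties file and add a matching agent requirement to each security build. This is a sketch; the property name is an invention for illustration:

# conf/buildAgent.properties on the security-scan machine
system.agent.purpose=security-scan

Each security build configuration then gets an agent requirement such as "system.agent.purpose equals security-scan". Note this only pins those builds to the agent; keeping all other builds off it still needs per-build configuration (or the pool-based approach described below).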
In current versions of TeamCity (e.g. 8.1) you can create a pool just for your security machine, and only assign the one machine to that pool, remembering to remove it from other pools.
Then you can assign the security project to that pool. That should solve your problem.

How to setup Hudson to do remote deployment of WAR to Tomcat?

I have a bit of experience in running a simple build upon every SVN commit (it is a piece of cake).
Regarding deployment of the WAR to a remote production server via Hudson, there seem to be some alternatives:
use the 'deploy' target in the app's build.xml
use the deploy plugin of Hudson, which I fail to get working :(
What is the simplest way to do a remote deployment to Tomcat?
Are there any examples available?
And what about release management? How do you tag your releases in your SCM?
I use Maven for builds.
Since I am working with WAS I use the WAS Builder Plugin for deployment. However, I could also just fire up a batch/shell script for deployment. My current approach is to use slaves that run on the target machine where I want to deploy and assign my deployment jobs to them.
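For the script route against Tomcat specifically, a deployment step can be as small as a curl call to the Tomcat manager application. This is a sketch, assuming Tomcat 7+'s manager text API is enabled and a user with the manager-script role exists; on older Tomcat versions the URL is /manager/deploy instead. Host, credentials, and paths are placeholders:

# Upload and (re)deploy the WAR via the Tomcat manager
curl -u deployer:secret \
  --upload-file target/myapp.war \
  "http://tomcat-host:8080/manager/text/deploy?path=/myapp&update=true"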
For tagging you can use whatever you prefer. There are 3 basic options:
Let Maven do it
run a command-line command
use a Hudson plugin
We use Subversion, so the Subversion Tagging Plugin would be a natural match, though we don't use Hudson for tagging right now. There are several plugins out there for different SCMs. I usually prefer a plugin over a command line; one reason is that company policy forbids storing passwords unencrypted, and a plugin is usually fairly easy to configure.
Our strategy has been to combine the promoted builds plugin and the deploy plugin in hudson. Then, when something is "promoted," send an email and do the deployment. The deployment plugin is cargo based and works with a variety of web and app servers.
Maybe post a little info about why the deploy plugin isn't working for you?
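Since the asker builds with Maven, an alternative to the Hudson deploy plugin is driving Cargo from the build itself. A pom.xml sketch along these lines should work for remote Tomcat deployment; the container id should match your Tomcat version, and the host and credentials are placeholders:

<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <configuration>
    <container>
      <containerId>tomcat6x</containerId>  <!-- match your Tomcat version -->
      <type>remote</type>
    </container>
    <configuration>
      <type>runtime</type>
      <properties>
        <cargo.hostname>tomcat-host</cargo.hostname>                <!-- placeholder -->
        <cargo.servlet.port>8080</cargo.servlet.port>
        <cargo.remote.username>deployer</cargo.remote.username>     <!-- placeholder -->
        <cargo.remote.password>secret</cargo.remote.password>       <!-- placeholder -->
      </properties>
    </configuration>
  </configuration>
</plugin>

With that in place, running mvn cargo:redeploy pushes the project's WAR to the remote container, so the Hudson job only needs to invoke that goal after the build.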

Bring Hudson slave nodes online at certain times

I am setting up a number of slaves to my Hudson master, grouped by labels. I would like to be able to have a set of nodes that run during the day and an additional set of nodes that are turned on during the evening.
Is this possible, either directly by hudson or via plugin or script? If so what is your recommended solution?
There is an experimental feature to schedule when each slave should be available. It is in core, but you have to set a system property to enable it. So if you start Hudson with
java -Dhudson.scheduledRetention=true -jar hudson.war
You will get an extra configuration option on each node, allowing you to specify a schedule of when that node should be used.
Let the OS (or any other scheduler) control the start and stop of a node. Hudson only uses what's available. I'm not sure how Hudson acts if a node dies while running a job.
Update: the feature that Michael Donohue mentioned is no longer experimental and is available for all nodes (I use the SSH node). Works great (at least the "take only if needed" feature).
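If you let the OS schedule availability as suggested above, a crontab on the slave machine could look like this sketch, assuming a JNLP-launched slave; the node name, paths, and times are placeholders:

# Bring the slave online at 18:00 every day (standard JNLP slave launch)
0 18 * * * cd /opt/hudson-slave && nohup java -jar slave.jar -jnlpUrl http://hudson/computer/night-node/slave-agent.jnlp > slave.log 2>&1 &

# Take it down at 06:00
0 6 * * * pkill -f slave.jar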
Expanding on what @Peter Schuetze said...
Unless the nodes are VMs that you want Hudson to manage (see the VMware plugin), the start and stop operations are out of Hudson's control. Depending on how you have your slaves set up, Hudson may just automatically connect when it sees the node is online, or you may need to make sure the slave runs something at startup.
You can use the Hudson API (generally HTTP POSTs to URLs on the Hudson master) to tell Hudson that nodes are going offline ahead of time. This will help avoid builds that get killed when the node goes down. Check out the HTML source on the node's page (http://hudson/computer/node_name) to see what the web interface does for the "mark offline" and "disconnect" operations.
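For example, a shutdown script could take the node offline gracefully before the machine powers down. This is a sketch: the endpoint mirrors what the "mark offline" button posts to, and the host, node name, and credentials are placeholders:

# Mark the node as temporarily offline so no new builds are scheduled on it
curl -u admin:apitoken -X POST \
  "http://hudson/computer/night-node/toggleOffline?offlineMessage=scheduled+nightly+shutdown"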