I am new to WordPress development. I just got confused between the after_setup_theme and init hooks.
after_setup_theme runs before init and is generally used to initialize theme settings/options before a user is authenticated. According to the Codex:
This is the first action hook available to themes, triggered immediately after the active theme's functions.php file is loaded.
On the other hand, init runs after a user is authenticated:
Typically used by plugins to initialize. The current user is already authenticated by this time.
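As a minimal sketch of how the two hooks are typically used in a theme's functions.php (the callback, menu, and post type names here are only illustrative):

<?php
// after_setup_theme: fires right after the active theme's functions.php is loaded,
// before the current user is known. Good for theme supports, menus, text domains.
add_action( 'after_setup_theme', 'mytheme_setup' );
function mytheme_setup() {
    add_theme_support( 'post-thumbnails' );
    register_nav_menu( 'primary', 'Primary Menu' );
}

// init: fires later, after the current user has been authenticated.
// register_post_type() is documented to be hooked here.
add_action( 'init', 'mytheme_init' );
function mytheme_init() {
    register_post_type( 'book', array( 'public' => true, 'label' => 'Books' ) );
}

In short: theme setup (text domain, theme supports, menus) belongs in after_setup_theme, while registrations such as custom post types go in init.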
I have installed Arelle through git and am using the master branch.
Further on, I am looking to use the SEC's EdgarRenderer and made a git clone (also from its master branch).
I copied the EdgarRenderer folder structure into Arelle/plugins/EdgarRenderer and selected the plugin from Arelle.
After reloading Arelle (as recommended by the GUI), I do not see the "View" window menu, so I cannot start viewing an iXBRL document in a browser. Still, the plugin's status shows as enabled.
In the terminal session from which I launched Arelle, I do see an error:
Exception loading plug-in Edgar Renderer: No module named 'matplotlib'
The issue was solved by installing the 'matplotlib' module.
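For reference, installing it is typically a one-liner, assuming Arelle runs under the system Python 3 (adjust accordingly if you use a virtualenv):

python3 -m pip install matplotlib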
I'm using a Grafana app image provided for an OpenShift PaaS deployment.
I'd like to add a plugin to that Grafana. This is done by adding certain files to the file system or by invoking a grafana-cli command.
I managed to do it manually on a single pod by accessing it through the oc CLI. What I don't know is how to make it persistent: I would like it to be applied whenever an OpenShift pod is instantiated. I have found no other way than providing a custom image for that.
Is there a supported way of adding files to an existing predefined image?
Or of invoking a command on a pod after deployment? I tried the post-deployment hook, but it appears that the filesystem is not there yet (or I don't know how to use this hook).
A post-deployment lifecycle hook runs in its own container, with its own file system, not in the container of the application. You want to look at a postStart hook instead; a rough sketch follows the field reference below.
$ oc explain dc.spec.template.spec.containers.lifecycle
RESOURCE: lifecycle <Object>
DESCRIPTION:
Actions that the management system should take in response to container
lifecycle events. Cannot be updated.
Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.
FIELDS:
postStart <Object>
PostStart is called immediately after a container is created. If the
handler fails, the container is terminated and restarted according to its
restart policy. Other management of the container blocks until the hook
completes. More info:
http://kubernetes.io/docs/user-guide/container-environment#hook-details
preStop <Object>
PreStop is called immediately before a container is terminated. The
container is terminated after the handler completes. The reason for
termination is passed to the handler. Regardless of the outcome of the
handler, the container is eventually terminated. Other management of the
container blocks until the hook completes. More info:
http://kubernetes.io/docs/user-guide/container-environment#hook-details
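A rough sketch of such a hook on the Grafana container in the DeploymentConfig (the container name, image, and plugin here are illustrative; it assumes grafana-cli is available inside the image):

spec:
  template:
    spec:
      containers:
      - name: grafana
        image: your-grafana-image   # illustrative
        lifecycle:
          postStart:
            exec:
              command:
              - /bin/sh
              - -c
              - grafana-cli plugins install grafana-piechart-panel

Keep in mind that postStart runs in parallel with the container's entrypoint, so depending on timing Grafana may only see the plugin after its process restarts; mounting a persistent volume for the plugins directory is another way around that.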
I have a gruntfile.js in my project and the file contains a watch task that watches changes in JS and CSS files. How can I avoid manually starting this task via Grunt Console in PhpStorm?
(I often forget to run the task and am then surprised that my changes are not reflected in the website's behaviour. :))
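For reference, a stripped-down version of such a watch task (the paths and options are only illustrative):

module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.initConfig({
    watch: {
      assets: {
        files: ['js/**/*.js', 'css/**/*.css'], // patterns to watch
        options: { livereload: true }          // or list build tasks to run on change
      }
    }
  });
};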
Not possible at the moment.
https://youtrack.jetbrains.com/issue/WEB-11818 -- watch this and related tickets (star/vote/comment) to get notified on progress.
UPDATE: This has been implemented as of PhpStorm 2016.1.
You can now select any existing Run Configuration to be executed on project opening; Grunt, Gulp, and NPM Script run configurations are supported.
Official help page
We have a custom plugin for Hudson which uploads the output of a build onto a remote machine. We have just started looking into using a Hudson slave to improve throughput of builds, but the projects which use the custom plugin are failing to deploy with FileNotFoundExceptions.
From what we can see, the plugin is being run on the master even when the build is happening on the slave. The file that is not being found does exist on the slave but not on the master.
Questions:
Can plugins be run on slaves? If so, how?
Is there a way to identify a plugin as being 'serializable'?
If Hudson slaves can't run plugins, how does the SVN checkout happen?
Some of the developers here think that the solution to this problem is to make the Hudson master's workspace a network drive and let the slave use that same workspace - is this as bad an idea as it seems to me?
Firstly, go Jenkins! ;)
Secondly, you are correct — the code is being executed on the master. This is the default behaviour of a Hudson/Jenkins plugin.
When you want to run code on a remote node, you need to get a reference to that node's VirtualChannel, e.g. via the Launcher that's probably passed into your plugin's main method.
The code to be run on the remote node should be encapsulated in a Callable — this is the part that needs to be serialisable, as Jenkins will automagically serialise it, pass it to the node via its channel, execute it and return the result.
This also hides the distinction between master and slave — even if the build is actually running on the master, the "callable" code will transparently run on the correct machine.
For example:
@Override
public boolean perform(AbstractBuild<?, ?> build, Launcher launcher,
        BuildListener listener) throws InterruptedException, IOException {
    // This method is being run on the master...

    // Define what should be run on the slave for this build
    Callable<String, IOException> task = new Callable<String, IOException>() {
        public String call() throws IOException {
            // This code will run on the build slave
            return InetAddress.getLocalHost().getHostName();
        }
    };

    // Get a "channel" to the build machine and run the task there
    String hostname = launcher.getChannel().call(task);

    // Much success...
    return true;
}
See also FileCallable, and check out the source code of other Jenkins plugins with similar functionality.
I would recommend making your plugin work properly rather than using the network-share solution. :)
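For file-oriented work like the upload in the question, the same pattern via FilePath.act with a FileCallable looks roughly like this (a sketch against the older Hudson/Jenkins API, assumed to live inside the same perform method as above; the file name is illustrative):

// Runs on whichever node actually holds the workspace (master or slave),
// so the file exists locally and no FileNotFoundException occurs.
FilePath workspace = build.getWorkspace();
String info = workspace.child("build-output.zip").act(
        new FilePath.FileCallable<String>() {
            public String invoke(File f, VirtualChannel channel) throws IOException {
                // 'f' is a local java.io.File on the node that owns the workspace;
                // read it or push it to the remote machine from here.
                return f.getAbsolutePath() + " (" + f.length() + " bytes)";
            }
        });
listener.getLogger().println("Uploading " + info);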
Question:
In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there are a series of .tmp files. What are these files, and what feature of Hudson do they support?
Background
Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, running the task scanner, etc., the job appears to hang for a long period of time. While monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that the files in this path stopped changing.
Some key configuration points of the job:
It is tied to a specific slave node
It builds in a 'custom workspace'
It runs the Task Scanner plugin on a portion of the workspace to find "todo" items
It triggers a downstream job that builds in the same custom workspace on the same slave node
In this particular instance, the .tmp files were being created by the Task Scanner plugin. When tasks are found, the files in which they are found are copied back to the master node. This allows the master node to serve those files in the browser interface for Tasks.
Per this answer, it is likely that this same thing occurs with other plug-ins, too.
Plug-ins known to exhibit this behavior (feel free to add to this list)
Task Scanner
Warnings
FindBugs
There's an explanation on the Hudson users mailing list:
...it looks like the warnings plugin copies any files that have compiler warnings from the workspace (possibly on a slave) into a "workspace-files" directory within HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/
The files then, I surmise, get processed, resulting in a "compiler-warnings.xml" file within HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/
I am using the "warnings" plugin, and I suspect it's related to that.