I want to set up Hudson so that other users can easily create new jobs with some parameters already in place (e.g. the artifact path). Bonus points if they cannot modify those parameters. Extra bonus points if I can hide extra configuration that they don't need to use.
Right now, I've created a Template job and have users copy from it whenever they want to create a new job. This works fine, but if I want to change some configuration in the Template, I have to manually edit all the already-created jobs.
Is there a better way to do this?
The Template Project Plugin should meet some of your needs; I don't think you'll be able to hide any of the configuration, though.
I have an existing Laravel application with various functions, controllers, authentication, middleware, CRUD, admin functionality, and more. I want to deploy this application to some hosting and let others view all of the pages, but I do not want them to be able to edit or create values in the database. For example, while I have CRUD for all of the resources, I want visitors to be able to read all of the resources but not edit, create, or delete anything. I also don't want them to have to register or log in, but I know how to handle that.
I have tried LOCK TABLE [tablename] READ in MySQL, but that does not seem to have done anything. Currently, my only other idea is to go through and add checks for whether the user is authenticated before any database writes, but this would be a little laborious.
Is there any feasible or simpler way of doing this? Thanks for any help.
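For what it's worth, one way to enforce this at the database layer rather than in application code is a MySQL account that only has SELECT privileges; writes then fail no matter what the code does. A minimal sketch, with placeholder database, user, and password names:

    # Create a read-only MySQL account (run as a MySQL admin user).
    # "myapp", "readonly", and the password are placeholders.
    mysql -u root -p <<'SQL'
    CREATE USER 'readonly'@'%' IDENTIFIED BY 'choose-a-password';
    GRANT SELECT ON myapp.* TO 'readonly'@'%';
    FLUSH PRIVILEGES;
    SQL

Pointing DB_USERNAME and DB_PASSWORD in the Laravel .env at that account would then block every write at the database, regardless of what the application code does.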
My site is fairly static. It is based on WordPress now, and I am thinking of using the autoscaling feature.
The problem is that I am not good at writing startup scripts in languages like Python, Java, etc.
I am more comfortable with bash scripts.
Is there a way to create a snapshot of a production Compute Engine instance and use it as a template for the instance group, avoiding the complexity of a startup script?
I have two instances: one standalone, and one inside an instance group used for autoscaling. Whenever there is an update to my site, I make the change on the standalone instance, snapshot its disk, and use that snapshot as the template in the instance group so that everything gets updated.
My question is: is that workable, or do I really have to work on a startup script?
Any friendly advice will be highly appreciated.
Some bash skills should be enough to write a startup script, so you can avoid the additional instance and image creation entirely. See the documentation for a simple example: just put in all the bash commands you would use to prepare the instance yourself. This should be relatively easy, and it allows the process to be modified easily later on.
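For illustration, a minimal sketch of such a startup script, assuming a Debian-based image and site content prepared in a Cloud Storage bucket (all names are placeholders):

    #! /bin/bash
    # Install a basic web stack for the WordPress site.
    apt-get update
    apt-get install -y apache2 php php-mysql
    # Pull the prepared site content from a Cloud Storage bucket
    # (placeholder bucket name).
    gsutil -m rsync -r gs://my-site-content /var/www/html
    systemctl restart apache2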
If you really want to avoid writing the script, what you’ve described should be possible: take an instance that has everything installed the way you like it, delete it while keeping the disk, and create an image out of that disk, which you can then use in an instance template.
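Roughly, with gcloud, that flow looks like this (instance, image, template, and zone names are placeholders):

    # Delete the instance but keep its boot disk; the boot disk keeps the
    # instance's name by default.
    gcloud compute instances delete my-instance --zone us-central1-a --keep-disks boot
    # Create an image from that disk, then an instance template that uses it.
    gcloud compute images create my-site-image \
        --source-disk my-instance --source-disk-zone us-central1-a
    gcloud compute instance-templates create my-site-template --image my-site-image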
One minor improvement: you can take an instance out of the existing instance group by abandoning it, and use that as your source.
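That would look something like this (group, instance, and zone names are placeholders):

    # Remove one instance from the managed instance group without deleting it.
    gcloud compute instance-groups managed abandon-instances my-group \
        --instances my-instance --zone us-central1-a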
I am using the Yii2 advanced project template, but I need a second backend that will manage the tenant users.
I just copied the backend folder and changed the configuration for the cookies and the User model. I don't know if this will be enough.
I just want some advice on how to do this from anyone who has already done it.
Thanks!
I did something similar some time ago. Besides what you have done, I also changed the user component so it logs in a different kind of user.
You should also add the corresponding folders under tests so you can test that part too.
Anyway, that should be enough, yes.
Recently I submitted a file to Perforce as “add” (a new file).
Then I submitted several more changes to it.
Now I realize that the original “add” should have been an “integrate” because the file is really a copy and modification of another, existing file.
Is there a way to add the integration link after the fact?
If not, what is the easiest way of doing this? If we obliterate all the affected changelists, and then re-submit them but with the correct integration history, will that work?
Just talked to Perforce Support on the phone. The answer is no, you cannot “change history”. However, the recommended course of action is to:
1. Take a copy of each change made to the new file(s).
2. Obliterate all the added files that should have been integrates.
3. Re-submit each change that was made (sketched below).
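A sketch of that workflow with p4 commands; the depot paths and filenames are placeholders, and you would want to preview the obliterate (by running it without -y) before doing it for real:

    # Save a copy of each revision of the wrongly-added file first, e.g.:
    p4 print -o rev1.c //depot/proj/new.c#1
    p4 print -o rev2.c //depot/proj/new.c#2
    # Obliterate the wrongly-added file (run without -y first to preview).
    p4 obliterate -y //depot/proj/new.c
    # Re-create it as an integrate from the real source file, and submit.
    p4 integrate //depot/proj/original.c //depot/proj/new.c
    p4 submit -d "Branch new.c from original.c"
    # Then replay each saved change on top of it (from the client workspace).
    p4 edit //depot/proj/new.c
    cp rev1.c new.c
    p4 submit -d "Re-apply the first change"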
It may be possible to generate Perforce journal (database) records that put the missing data in place. These are plain-text entries that are replayed into the live database by a system admin. The database schema is documented at www.perforce.com/perforce/doc.current/schema
You'd need to be very careful and work with Perforce Support while doing this, and try it on a test system first. It's normally not worth the effort.
I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job; obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions, but I'm open to different approaches to what I'm trying to accomplish.
I just want the Hudson server to be able to receive requests for builds from outside sources, and start them as long as there are free executors. I want the build configuration to be the same for all the builds except for the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features your use case requires in the many releases since the project became Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where the code is checked out from, replace the path with ${REPO_PATH}.
If you are checking out the code from a script, or otherwise need access to the repo path, the variable will automatically be added to the build environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
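For example, a shell build step along these lines (assuming the repo is a Git repo; build.sh is a placeholder for your real build command):

    # The REPO_PATH parameter is exposed as an environment variable.
    git clone "$REPO_PATH" source
    cd source
    ./build.sh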
At this point, pressing Build Now will prompt you to enter a repo path before the build starts. You can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
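For example, triggered from a script (the server, job, user, and token are placeholders):

    # Trigger the parameterized build remotely; recent Jenkins versions
    # require authentication (a user API token here) for POST requests,
    # and may also require a CSRF crumb.
    curl -X POST --user alice:API_TOKEN \
        "http://server/job/myjob/buildWithParameters?REPO_PATH=ssh://host/repo.git"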
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the "Execute concurrent builds if necessary" option in your job config.
The artifacts will be available, the same as for any other Jenkins build, though you probably want to manage how many recent artifacts are kept. This can be done by checking "Discard Old Builds" and then, under Advanced…, entering a value for "Max # of builds to keep with artifacts".