How can you have one Hudson job with multiple levels of users run with different options? - hudson

I am working with Hudson here and I am trying to create a single job that users with different access can run. Based on their access level, they would see different options.
For instance:
A Developer running this job would see the build stage, be able to watch the build process, and deploy the result to a development server.
The Release Engineer would see the same options as the developer, but would also be able to deploy the code to a different set of servers.
And so forth.
Is this even possible, something like role-based jobs? I know I can limit access and control who can do what, but this is a little different.

Related

Azure Devops Release for application that remains the same except for appsettings.json

I am creating an Azure DevOps build pipeline and release. This release has a staging environment that uses a deployment group with 3 servers; in production it can have 50+ servers. The application will be the same across all the servers except for the appsettings file; appsettings will contain the DB connections and location/server-specific variables. I have looked into ways to manipulate this file on release per server, but all I have come across are ways to do variable substitution in the release per environment, where you only need to switch values in a dev-to-staging-to-prod release. Is there a good way to manipulate this file per server in a deployment group rather than having 50+ stages/tags, or a better way to set up my pipeline and release?
Is there a good way to manipulate this file per server in a deployment group rather than 50+ stages/tags
I'm afraid that, as far as I know, this is not supported yet. But if you host your app on an Azure website, Azure has a new feature that can achieve this goal.
If you host the app on your own servers, however, the better deployment approach in this scenario is "build once, deploy many". In other words, build the project in the build pipeline, and configure the corresponding appsettings.json file at each specific stage.
To improve the maintainability of the release and simplify its configuration structure, you can make use of task groups and variable groups. (Please keep using variable substitution in the release.)
Encapsulate a sequence of reusable tasks into a task group; this template will then be used in every deployment group job. Note that you can make the reusable parameters part of the template. Just abstract the app settings information and store it as variables in the corresponding variable group.
From then on, whenever you add a new server, you only need to save the corresponding app settings parameters into the variable group you created. In the release pipeline, you only need to add the task group and link the previously created variable group to the specified stage. Execute the release pipeline, and everything will go as expected.
For later maintenance, you only need to modify the basic configuration of the deploy task once, and the change is applied to all stages. When you need to change a server's app settings, you can do so by opening the corresponding variable group.
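For illustration only, here is a rough Python sketch of what the per-server substitution amounts to; in the actual release the JSON variable substitution task does this for you, and the keys, values, and file path below are made up:

    import json

    # Values that would come from the variable group for this particular
    # server/stage (hypothetical keys and values).
    server_settings = {
        "ConnectionStrings:Default": "Server=db01;Database=app;Integrated Security=True",
        "Deployment:Region": "us-east",
    }

    def apply_settings(path, settings):
        """Overwrite matching keys in appsettings.json with per-server values."""
        with open(path) as f:
            config = json.load(f)
        for dotted_key, value in settings.items():
            node = config
            parts = dotted_key.split(":")
            for part in parts[:-1]:
                node = node.setdefault(part, {})
            node[parts[-1]] = value
        with open(path, "w") as f:
            json.dump(config, f, indent=2)

    apply_settings("appsettings.json", server_settings)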

simultaneous instances of the same hudson/jenkins job

I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job. Obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the hudson server to be able to receive requests for builds from outside sources, and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features required to support your use case in the many releases since it became Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
At this point, when you press Build Now, you will be prompted to enter a repo path before the build starts. As described on the Jenkins wiki page for parameterized builds, you can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
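For example, a minimal Python sketch of triggering the job this way (the server URL and job name are placeholders, and a real Jenkins install will typically also require an API token and possibly a CSRF crumb):

    from urllib.parse import urlencode
    from urllib.request import Request, urlopen

    # Queue a build of 'myjob' with the REPO_PATH parameter set.
    params = urlencode({"REPO_PATH": "foo"})
    url = "http://server/job/myjob/buildWithParameters?" + params

    # Recent Jenkins versions expect a POST for this endpoint.
    with urlopen(Request(url, method="POST")) as resp:
        print(resp.status)  # 201 means the build was queued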
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the "Execute concurrent builds if necessary" option in your job config.
The artifacts will be available, the same as for any other Jenkins build, though you will probably want to manage how many recent artifacts are kept. This can be done by checking "Discard Old Builds" and then, under Advanced…, entering a value for "Max # of builds to keep with artifacts".

Show results from multiple Hudson build servers

In our company we use Hudson for our CI servers. We have a separate server running for each current project (there are usually between 3 and 10 ongoing projects).
We would like to set up a monitor in a central location that shows the status of all the build servers at once.
I guess this has been done before, so is there anything premade for collecting and displaying this information?
I believe this is what you are looking for:
http://wiki.hudson-ci.org/display/HUDSON/Build+Publisher+Plugin
You can publish stuff (artifacts, results, etc) from multiple Hudson servers onto a single server.

Data sync solution?

Due to some security issues I'm in an environment where third-party apps can't access my DB. For this reason I need some service/tool/script (dunno what yet... I'm open to the best option, still reading to see what I'm going to do...) that enables me to generate, on a regular basis (daily, weekly, monthly), a CSV file with all new/modified records for a certain application.
I should be able to automate this process and also export a new file at any time.
So it should keep track, for each application, of which records it still needs.
Each application will need the data in a different format (CSV/XLS/SQL), and some fields will be needed by some applications but not by others... It should be fairly flexible...
What is the best option for me? Creating some custom tables for each application and extracting the modified data based on those?
I think your best bet here, assuming you have access to the server to set this up, is to make a small command-line program that can do the relatively simple task you need. Languages like Perl are good for this sort of thing, I believe.
Once you have that 'tool' made, you can schedule it through the server's OS to run at a set interval: either a scheduled task on a Windows server or a cron job on a Linux server.
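Just as a rough sketch of what such a tool might look like in Python (the table, columns, watermark file, and the sqlite3 driver are all assumptions; swap in your real database driver and schema):

    import csv
    import sqlite3
    from datetime import datetime

    WATERMARK_FILE = "last_export.txt"  # remembers what has already been exported

    def read_watermark():
        try:
            with open(WATERMARK_FILE) as f:
                return f.read().strip()
        except FileNotFoundError:
            return "1970-01-01 00:00:00"

    def export_changed_records(db_path, out_path):
        since = read_watermark()
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT id, name, modified_at FROM records WHERE modified_at > ?",
            (since,),
        ).fetchall()
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "name", "modified_at"])
            writer.writerows(rows)
        with open(WATERMARK_FILE, "w") as f:
            f.write(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
        conn.close()

    export_changed_records("app.db", "changes.csv")

You would then point the scheduled task or cron entry at this script.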
You can also (without having to set up the scheduled task, if you don't want to or can't) enable this small command-line application to be called via CGI, which is a way of letting applications on the server be executed on demand by a web user. If you do enable this, though, I suggest you add some sort of locking system so that it can only be run every so often and so that it can't be run five times at once.
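The locking part can be as simple as an exclusive lock on a file, for example (POSIX-only Python sketch; the lock file path is arbitrary):

    import fcntl
    import sys

    # Take an exclusive, non-blocking lock so a second invocation exits
    # immediately instead of running alongside the first one.
    lock_file = open("/tmp/export.lock", "w")
    try:
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("Another export is already running; try again later.")

    # ... run the export here ...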
EDIT
You might also want to just look into database replication or adding read-only users. This saves a whole lot of messing around. Try to find a solution that does not split or duplicate data. You can set up users so that they can only access certain parts of the database in certain ways, such as only being able to SELECT data.

Linux web front-end best practices

I want to build a web-based front-end to manage/administer my Linux box. E.g. I want to be able to add users, manage the file system and all those sorts of things. Think of it as a cPanel clone, but more for system admin rather than web admin.
I was thinking about creating a service that runs on my box and performs all the system-level tasks. This way I can have a clear separation between my web-based front-end and the actual logic. The server pages can then make calls to my specialized service or queue tasks that way. However, I'm not sure if this would be the best way to go about this.
I guess another important question would be, how I would deal with security when building something like this?
PS: This is just a pet project and learning experience, so I'm not interested in existing solutions that do something similar.
Have the specialized service daemon running as a distinct user -- let's call it 'managerd'. Set up your /etc/sudoers file so that 'managerd' can execute the various commands you want it to be able to run, as root, without a password.
Have the web server drop "trigger" files containing the commands to run in a directory that is mode '770' with a group that only the web server user and 'managerd' are members of. Make sure that 'managerd' verifies that the files have the correct ownership before executing the command.
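Roughly, the 'managerd' side could look like the Python sketch below; the spool directory, web server user, and the whitelisted command are all made-up examples:

    import os
    import pwd
    import subprocess
    import time

    SPOOL_DIR = "/var/spool/managerd"  # mode 770, group shared with the web server
    WEB_USER = "www-data"
    ALLOWED = {
        # action name -> the exact command line it is allowed to run via sudo
        "adduser": ["sudo", "/usr/sbin/adduser", "--disabled-password"],
    }

    def handle(path):
        st = os.stat(path)
        if pwd.getpwuid(st.st_uid).pw_name != WEB_USER:
            return  # ignore trigger files not written by the web server
        with open(path) as f:
            action, _, arg = f.read().strip().partition(" ")
        if action in ALLOWED and arg.isalnum():
            subprocess.run(ALLOWED[action] + [arg], check=False)
        os.remove(path)

    while True:
        for name in os.listdir(SPOOL_DIR):
            handle(os.path.join(SPOOL_DIR, name))
        time.sleep(2)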
Make sure that the web interface side is locked down: run it over HTTPS only, require authentication, and, if at all possible, put in IP-specific ACLs so that it can only be accessed from locations known in advance.
Your approach seems like a very sensible solution to the 'root' issue.
Couple of suggestions:
Binding the 'specialised service' to localhost as well would help to guarantee that requests can't be made externally.
Have the requests call functions that perform specific actions, rather than giving the service full, unrestricted access. So call a function like "addToGroup(user, group)" instead of a generic "performAction(command)".
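A tiny Python sketch of that idea (the name validation pattern and the usermod invocation are my own assumptions, not part of the original suggestion):

    import re
    import subprocess

    def add_to_group(user, group):
        """Add a user to a group -- the only thing this call can ever do."""
        for name in (user, group):
            if not re.fullmatch(r"[a-z_][a-z0-9_-]*", name):
                raise ValueError("invalid name: " + repr(name))
        subprocess.run(["sudo", "usermod", "-aG", group, user], check=True)

    # Deliberately no perform_action(command): the service never accepts a raw
    # command string from the web front-end.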