In our company we use Hudson for our CI servers. We have a separate server running for each current project (there are usually between 3 and 10 ongoing projects).
We would like to setup a monitor in a central location that shows the status for all the build servers at once.
I guess this has been done before, so is there anything premade for collecting and displaying this information?
I believe this is what you are looking for:
http://wiki.hudson-ci.org/display/HUDSON/Build+Publisher+Plugin
It lets you publish builds (artifacts, results, etc.) from multiple Hudson servers onto a single server.
I want to share my local XAMPP server with my friend so that we can build a database together as a team. How can I do this?
Something like how we use Google Docs for documents or GitHub for pushing and writing code.
I know paid website hosting has integrated database management, but I am not purchasing any.
I searched on Google but had no luck.
Unless you're doing performance testing, the free tier offered by most cloud providers should be sufficient; however, you really need to run this on Linux with VPN access.
But if you're doing collaborative development work, you REALLY need to be using version control. That includes your DDL. Extending that to your test data should be trivial.
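As a minimal sketch of what that could look like, assuming MySQL and a db/ directory in the repo (the database and table names are just placeholders):

#!/bin/bash
# export-schema.sh: dump the DDL and a small seed data set into the repo
# so they can be committed alongside the application code.
set -e
DB=myapp_dev                        # placeholder; credentials come from ~/.my.cnf

# Schema only (tables, indexes, views), no rows
mysqldump --no-data --skip-comments "$DB" > db/schema.sql

# Seed/test data for a few selected tables
mysqldump --no-create-info "$DB" users products > db/seed-data.sql

Teammates can then recreate the same database locally with something like mysql myapp_dev < db/schema.sql followed by the seed file.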
Good morning.
I have to build a system (PHP/MySQL) that runs on 20 domains for 20 different cities (for example). The system that runs on these 20 domains is identical, and so is the database.
My issue is: I intend to create a single database to serve these 20 domains, controlling the cities by something like a city_id.
I would like to know whether this is best practice, or whether the right way is to create one database for each city/domain.
The domains are hosted on the same server, and the core system is outside the public_html directory:
/mysystem_classes
/public_html/city1.com
/public_html/city2.com
/public_html/city3.com
/public_html/city20.com
To serve images, css and js I will work with something like a CDN.
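As a purely illustrative sketch of the single-database idea (the table and column names below are assumptions, not part of your system), every city-specific table would carry a city_id and each domain would only ever query its own id:

#!/bin/bash
# Illustration only: one shared database, rows scoped by city_id.
mysql my_shared_db <<'SQL'
CREATE TABLE IF NOT EXISTS cities (
    id     INT PRIMARY KEY AUTO_INCREMENT,
    domain VARCHAR(255) NOT NULL UNIQUE,   -- e.g. city1.com
    name   VARCHAR(255) NOT NULL
);

CREATE TABLE IF NOT EXISTS articles (
    id      INT PRIMARY KEY AUTO_INCREMENT,
    city_id INT NOT NULL,
    title   VARCHAR(255) NOT NULL,
    body    TEXT,
    FOREIGN KEY (city_id) REFERENCES cities(id)
);

-- Each front end resolves its city_id from the request's domain
-- and filters every query with it:
SELECT a.id, a.title
FROM articles a
JOIN cities c ON c.id = a.city_id
WHERE c.domain = 'city1.com';
SQL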
Normally you would set up just one virtual host, install your application there, and point all of the domains at that one installation. Which domain serves which website is then decided not at the server level but at the application level.
TYPO3, for example, works like that when building multiple sites with one instance of TYPO3 and one MySQL database (using TypoScript or the backend configuration to define which domain belongs to which site ID).
WordPress has a multisite feature, which can easily be set up to use several subdomains. It uses one database and a single software instance but can deliver multiple blogs on different domains (e.g. city1.example.org, city2.example.org). You will need to set up a wildcard domain (i.e. *.example.org) to let all possible subdomains point to the single vHost. This is similar to how the basic WordPress.com blogs work. See: http://codex.wordpress.org/Create_A_Network
I believe you are looking for MySQL replication.
I work at a large university and have been instructed to look into using a source control system (git, svn, etc.) to manage the websites. We use Joomla, which relies heavily on MySQL.
Currently, we have a barely functional system that uses a development server which pushes to a live server whenever we change a website. It's a pain and it doesn't always work. Plus, we can and often do overwrite changes that another dev has made.
We want to be able to manage content via the Joomla front end on the dev branch, then push those changes to the test branch, then to the master (live) branch.
Without getting off into the weeds: my question is, essentially, what is a good strategy for managing websites built on a CMS like Joomla that relies on a database?
Since you also want to sync the database (content is stored in the db, while images and media are on the filesystem), you need the commit/push script to also dump the db to a file, and the pull script to load the db. This can be done with pre- and post-hooks; see http://githooks.com/ or google it.
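A minimal sketch of those hooks, assuming the MySQL command-line tools and placeholder names (joomla_site, db/joomla.sql):

#!/bin/bash
# .git/hooks/pre-commit on the server where content is edited:
# dump the Joomla database so the data travels with the commit.
set -e
DB=joomla_site                     # placeholder; credentials come from ~/.my.cnf
mkdir -p db
mysqldump "$DB" > db/joomla.sql
git add db/joomla.sql

#!/bin/bash
# .git/hooks/post-merge on a server that pulls:
# reload the dump into the local database after a pull.
set -e
DB=joomla_site
mysql "$DB" < db/joomla.sql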
However there will be different parts of Joomla that you will want to sync separately.
Let's consider four servers:
edit server: where content is managed
dev server: where extensions are tested and configured
test server
production server
Let's consider three layers of information:
The user and session data: this should not be synchronized at all, so people are not logged out, and if any users register on the production server their logins are preserved.
The content, user groups and assets (privileges): these are the articles, news and images, which have to go from edit to test to production and to dev (unless you have content-specific privileges at the user level, i.e. each user has separate privileges on each content item).
The templates, extensions, modules and menu configurations: these go from dev to test to production and edit.
Each of these groups of data will require its own branch and its own custom pre-commit hooks to include the relevant database tables in the commit/push. The list of tables for each group depends on the extensions you are using.
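For instance, the pre-commit hook could pick which tables to dump based on the current branch. The branch and table names below are only assumptions (default jos_ prefix); the real list depends on your Joomla version and extensions:

#!/bin/bash
# Branch-aware pre-commit hook: dump only the tables that belong to this branch's layer.
set -e
DB=joomla_site
BRANCH=$(git rev-parse --abbrev-ref HEAD)

case "$BRANCH" in
    content) TABLES="jos_content jos_categories jos_assets" ;;   # articles, categories, privileges
    config)  TABLES="jos_extensions jos_modules jos_menu" ;;     # template/extension/menu setup
    *)       exit 0 ;;                                           # other branches: no db dump
esac

mkdir -p db
mysqldump "$DB" $TABLES > "db/${BRANCH}.sql"
git add "db/${BRANCH}.sql"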
I have written an article (it's in Italian and for SVN), but you can grab some of the bash scripts we use: http://www.fasterjoomla.com/joomla-tips/svn-per-joomla or translated by Google: http://translate.google.com/translate?sl=it&tl=en&js=n&prev=_t&hl=it&ie=UTF-8&u=http%3A%2F%2Fwww.fasterjoomla.com%2Fjoomla-tips%2Fsvn-per-joomla
I am working with Hudson here and I am trying to create a single job that users with different access levels can run. Based on their access level, they would see different options.
For instance:
A Developer running this job would see the build stage, be able to follow the build process, and deploy the result to a development server.
The Release Engineer would see the same options as the Developer, but would additionally be able to deploy the code to a different set of servers.
And so forth.
Is this even possible, something like role-based jobs? I know I can limit access and who can do what, but this is a little different.
I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job. Obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the Hudson server to be able to receive requests for builds from outside sources, and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features required for your use case in the many releases since the project became Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
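For example, a shell build step could consume the parameter directly (cloning with git and running make here are just assumptions about how you fetch and build the code):

#!/bin/bash
# Example shell build step: Jenkins puts REPO_PATH into the environment,
# so the script can check out and build whatever repo was requested.
set -e
echo "Building from repo: ${REPO_PATH}"
git clone "${REPO_PATH}" source
cd source
make        # placeholder for your real build command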
At this point, when pressing Build Now, you will be prompted to enter a repo path before the build will start. As documented for parameterized builds, you can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
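From a script or another machine, that could look like the following (server URL, job name and credentials are placeholders, and depending on your security settings you may also need a build token or CSRF crumb):

#!/bin/bash
# Trigger a parameterized build remotely, authenticating with a username and API token.
curl -X POST \
     --user "alice:API_TOKEN" \
     "http://server/job/myjob/buildWithParameters?REPO_PATH=git://example.org/some/repo.git"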
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the "Execute concurrent builds if necessary" option in your job config.
The artifacts will be available, just as for any other Jenkins build. You probably want to manage how many recent artifacts are kept, though; this can be done by checking "Discard Old Builds" and then, under Advanced…, entering a value for "Max # of builds to keep with artifacts".