I have never used fortrabbit before and I have a question about it.
I know I can create apps and define the document root, but let's imagine the following:
I want to go with the Yii2 Framework (Advanced template).
The Advanced template has "two apps" in it (two folders): the backend and the frontend.
On a real server we would have to create two aliases, e.g.:
admin.myapp.com -> root/backend/www
www.myapp.com -> root/frontend/www
Is it possible to configure fortrabbit to work with this within the same application and share the same resources (MySQL, cache, etc.)?
Your setup is possible at fortrabbit.
Just put both folders in your Git repo and push to fortrabbit. After that you can route the subdomains (www., admin.) to the subfolders (frontend/www, backend/www).
If your project requires a composer install during the deploy process, it will not work out of the box, since we only check for the composer.json/lock in the root of your project.
However, you can define your own post-deploy scripts. In these scripts you could call a composer install in the subfolders.
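For illustration, a minimal sketch of such a post-deploy script, assuming the hook is a plain shell script executed from the project root and that the folder layout matches the paths above (the composer flags are just examples):

#!/bin/sh
# Hypothetical post-deploy hook: install Composer dependencies in each app folder.
set -e
(cd frontend && composer install --no-dev --prefer-dist)
(cd backend && composer install --no-dev --prefer-dist)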
Cheers
Oliver (fortrabbit staff)
I have a Node server and multiple controllers that perform DB operations, as well as helpers (for e-mail, for example), within that directory.
I'd like to use source from that directory within my functions. Assuming the following directory structure:
src/
  server/
    app/controllers/email_helper.js
  fns/
    send-confirm/
What's the best way to use email_helper within the send-confirm function?
I've tried:
Symbolically linking the 'server' directory
Adding a local repo to send-confirm/package.json
Neither of the above works.
In principle, your Cloud Functions can use any other Node.js module, the same way any standard Node.js server would. However, since Cloud Functions needs to build your module in the cloud, it needs to be able to locate those dependency modules from the cloud. This is where the issue lies.
Cloud Functions can load modules from any one of these places:
Any public npm repository.
Any web-visible URL.
Anywhere in the functions/ directory that firebase init generates for you, and which gets uploaded on firebase deploy.
In your case, from the perspective of functions/package.json, the ../server/ directory doesn't fall under any of those categories, and so Cloud Functions can't use your module. Unfortunately, firebase deploy doesn't follow symlinks, which is why that solution doesn't work.
I see two possible immediate fixes:
Move your server/ directory to be under functions/. I realize this isn't the prettiest directory layout, but it's the easiest fix while hacking. In functions/package.json you can then have a local dependency on ./server (see the sketch after these two fixes).
Expose your code behind a URL somewhere. For example, you could package up a .tar and put that on Google Drive, or on Firebase Cloud Storage. Alternatively, you can use a public git repository.
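For the first fix, a minimal sketch of what the local dependency in functions/package.json could look like, assuming npm's file: protocol is available and that server/ has its own package.json (the names here are placeholders):

{
  "name": "functions",
  "dependencies": {
    "server": "file:./server"
  }
}

Your function code could then load it like any other module, e.g. require('server/app/controllers/email_helper').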
In the future, I'd love it if firebase deploy followed symlinks. I've filed a feature request for that in Firebase's internal bug tracker.
I'm starting a new web app with OpenShift (JBoss, MySQL). It's the first time I've used OpenShift, and after reading through some of the docs and experimenting a bit with it, I have a question about best practices for the architecture of my app.
There will be some files generated by or uploaded to the application (resources). I'd like those files to live outside the deployment folder so they are not erased/overwritten when the app deploys again. I have browsed through the directories and I was wondering:
Is it OK to use the /var/lib/openshift/[openshift-id]/app-root/data folder for these files?
Yes, you should use your ~/app-root/data folder for any files that you do not want erased when you do a git push. There is also an environment variable that points to that folder, called OPENSHIFT_DATA_DIR. Please note that if you are using a scaled application, that folder is not shared among your gears.
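Since the app in question runs on JBoss, here is a minimal Java sketch of resolving that folder via the environment variable instead of hard-coding the path (the "uploads" subfolder and file name are just examples):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DataDirExample {
    public static void main(String[] args) throws IOException {
        // OPENSHIFT_DATA_DIR points at ~/app-root/data and survives redeploys.
        Path uploads = Paths.get(System.getenv("OPENSHIFT_DATA_DIR"), "uploads");
        Files.createDirectories(uploads);
        Files.write(uploads.resolve("example.txt"), "generated content".getBytes());
    }
}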
We are developing a service for our QA staff.
The main goal is that a tester, from our web interface, can select a GitHub branch and a dump for a particular machine, click a "Deploy" button, and have the Rails app for testing deployed to DigitalOcean.
The feature I am working on now is collecting deployment logs and displaying them through our web interface.
On each DO droplet there is a "logs" folder which contains different log files that are populated during deployment:
migrations_result_#{machine_id}.log, bundle_result_#{machine_id}.log, etc.
where #{machine_id} is the ID of the deployed machine in our service (it is not the droplet ID).
With the help of the remote_syslog gem we monitor the "logs" folder on each droplet and send the entries over UDP to our main service server, and with the help of rsyslog we store them in a particular folder, say /var/log/deplogs/.
So in /var/log/deplogs/ we have:
migrations_result_1.log, bundle_result_1.log,
migrations_result_2.log, bundle_result_2.log,
...
migrations_result_n.log, bundle_result_n.log
How should I monitor this folder and save the contents of each log file to a MySQL database?
I need to achieve something like the following (Ruby code):
Machine.find(machine_id).logs.create!(text: File.read("migrations_result_#{machine_id}.log"))
Rsyslog does not seem to be able to achieve this, or am I missing something?
Any advice?
Thanks in advance, and sorry for my English; I hope you get the idea.
First of all, congratulations! You are facing a beautiful problem. My suggestion is to divide and conquer.
Here are my considerations:
Put the relevant folder(s) under version control (for example, Git).
Check via Git commands which files have changed every X amount of time.
Also obtain the differences between the prior version of each file and the new one, so you can update your database by parsing the new info.
Just in case, here are ways to call system commands from Ruby.
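A minimal Ruby sketch of that idea, assuming /var/log/deplogs/ has been initialized as a Git repository, that the Machine/Log models from the question exist, and that filenames follow the pattern described above:

# Poll the log folder, find files Git reports as changed, and import their contents.
LOG_DIR = "/var/log/deplogs".freeze

def import_changed_logs
  Dir.chdir(LOG_DIR) do
    changed = `git status --porcelain`.lines.map { |line| line[3..-1].strip }
    changed.each do |file|
      next unless file =~ /\A\w+_result_(\d+)\.log\z/
      Machine.find(Regexp.last_match(1).to_i).logs.create!(text: File.read(file))
    end
    system("git add -A && git commit -q -m 'log snapshot'") unless changed.empty?
  end
end

# Run it periodically, e.g. from a cron job or a simple loop:
# loop { import_changed_logs; sleep 60 }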
Hope that helps,
I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (I.e. rather than merely an add-in that copies the artifacts elsewhere.)
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need: FTP, CIFS, Confluence, Artifactory, etc. In particular, ArtifactsDeployer will allow you to make a copy of the artifacts in the Jenkins home.
Thank you, Sam, for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to make a symlink to the job archive of a build for multibranch projects. Up to now, we manually searched for the correct folder basename in the filesystem and added it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
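In a Pipeline job, roughly the same effect is available through the customWorkspace option of the node agent; a minimal declarative sketch (the label and path are placeholders):

pipeline {
    agent {
        node {
            label 'linux'
            customWorkspace '/data/jenkins-workspaces/my-job'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}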
A Groovy script under "Prepare an environment for the run" will always run on the master, and this Groovy script can create a symlink from the default artifacts directory to wherever you really want archiving to go (archive_to), which SHOULD include the job name and build number:
import java.nio.file.Files
import java.nio.file.Paths
// createSymbolicLink throws an IOException on failure; it does not return a value you can test
try {
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path), Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly), when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (shame).
A workaround may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle, and a cron job can later delete the new job archive dir.
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project into the current workspace, so you can work from there.
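In a Pipeline, the same plugin provides a copyArtifacts step; a minimal sketch (the project name, filter and target folder are placeholders):

// Copy the archived artifacts of the last successful build of "upstream-job" into ./incoming
copyArtifacts projectName: 'upstream-job', filter: 'build/**', target: 'incoming', selector: lastSuccessful()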
I am new to Jenkins/Hudson and am trying to migrate a C make-based project from Buildbot. For legacy reasons, the build system is hard-coded to build outside of the versioned source tree (Git), one directory above, in a separate directory. E.g.:
workspace/
  .git
  foo
  bar
build/
  artifacts
Besides the fact that it ends up creating a directory outside the workspace, Jenkins won't recognize items in the build/ directory above to archive as artifacts.
How can I make this kind of build system work with Hudson? Building in-source-tree is not a short-term option. The only option I found was "use custom workspace," but all this does is hard-code the workspace directory to some other directory.
To answer my own question: there is indeed an option in the Jenkins Git plugin to check out to a local subdirectory instead of the root of the workspace. With the Git plugin, click the Advanced button and fill in the field "Local subdirectory for repo (optional)".
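In a Pipeline job, the same option corresponds to the Git plugin's RelativeTargetDirectory checkout extension; a minimal sketch (the URL, branch and subdirectory are placeholders):

checkout([$class: 'GitSCM',
          branches: [[name: '*/master']],
          extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'source']],
          userRemoteConfigs: [[url: 'https://example.com/project.git']]])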
I can't find the option that djs mentioned, but you can specify a different work directory:
Configure job
Extended Project settings
Use custom work space
This can be set to anywhere you want, including the workspace of a different job.