I'm trying to set up a MEAN stack per the instructions at http://learn.mean.io/#mean-hosting-mean-openshift. I'm new to OpenShift and MEAN (and pretty new to Git), so I'm confused about steps 5 and 6. When it says "Clone that repo to your local computer where your mean.io app codebase is" in step 5, does that mean that I should have installed (init mean) MEAN first? Because if I do that and then try to clone my OpenShift repo into the same directory, I understandably get an error that the destination path already exists and is not an empty directory (because the MEAN install is already there). And then step 6 says to 'merge my completed local app into this new repo.' Is my 'completed local app' the MEAN install? My understanding is that I would have had to create a Git branch (from an existing repo) to merge anything. Thanks!
5. On your new app’s console page on Openshift, make a note of the git repo where the code lives. Clone that repo to your local computer where your mean.io app codebase is.
6. Merge your completed local app into this new repo. You will have some conflicts, so merge carefully, line by line.
By 'mean.io app codebase', the article assumes that you already have an application written using mean.io. It is asking you to git clone the new application you created on OpenShift to your local machine, and then merge your mean.io application code into this cloned repository.
The cloned repository contains all of the code necessary to run mean.io on OpenShift. You simply need to import your code into this repository and push it back to OpenShift. If you have not written any code yet, just start writing your code in the cloned repository.
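A minimal sketch of that workflow, assuming your mean.io app lives in ~/my-mean-app and using a placeholder OpenShift Git URL (take the real one from your app's console page):

# Clone the OpenShift repo into a separate, empty directory
git clone ssh://<app-id>@myapp-mydomain.rhcloud.com/~/git/myapp.git openshift-app
cd openshift-app
# Bring in your existing mean.io codebase as a second remote and merge it
git remote add local ~/my-mean-app
git fetch local
git merge local/master
# Resolve any conflicts line by line, then deploy
git push origin master

On newer Git versions the merge may need --allow-unrelated-histories, since the two repositories share no common ancestor.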
I have a basic question about my first attempt to deploy a Node.js app on Google Cloud using a Compute Engine virtual machine.
I have created a Google Cloud Repository of my GitHub code. I have tried to clone this repo onto my virtual machine. When I do it, I show the repo name, but that directory is empty.
Am I supposed to have the actual files inside this directory, or is it some link to the Google Cloud Repo and it only appears empty? If the files are supposed to physically be in this directory, that could explain why I can't get my startup script to run.
And if they are supposed to be in there, I'm not sure why they don't clone, but it could be because the path to my default GitHub code is messed up.
Thanks in advance. I have been stuck on this for too long.
EDIT:
I used gcloud source repos clone github_sleepywakes_thunderroost --project=imposing-timer-334919 to clone my repo.
When I use it again, I get this message:
WARNING: Repository "github_sleepywakes_thunderroost" in project "imposing-timer-334919" is a mirror. Pushing to this clone will have no effect. Instead, clone the mirrored repository directly with
$ git clone https://github.com/SleepyWakes/ThunderRoost
Cloning into '/home/overlord/github_sleepywakes_thunderroost'...
remote: Total 1581 (delta 264), reused 1581 (delta 264)
Receiving objects: 100% (1581/1581), 3.43 MiB | 9.59 MiB/s, done.
Resolving deltas: 100% (264/264), done.
warning: remote HEAD refers to nonexistent ref, unable to checkout.
So it appears my files are copied, but when I change directory into github_sleepywakes_thunderroost and ls, the directory is empty.
My GitHub repository branch was named "main" rather than "master." Google Cloud Shell apparently looks for master, so it could not check out the correct branch. In GitHub, I deleted the main branch and pushed my code to a new branch called master. I then was able to clone the GitHub repository.
There are likely other ways to fix this.
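For reference, a less destructive fix (a sketch, not the only way) is to clone the mirrored GitHub repository directly, as the warning suggests, and check out the branch by name instead of deleting it:

# Clone straight from GitHub rather than the Cloud Source mirror
git clone https://github.com/SleepyWakes/ThunderRoost
cd ThunderRoost
# If the working tree is empty because remote HEAD points at a missing ref,
# check out the real default branch explicitly
git checkout main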
Right now I'm committing the application changes in two places:
GitHub repository
OpenShift Git repository
I made a couple of changes to the GitHub repository as well as to the OpenShift Git repository, but the changes are not being reflected in the running application.
Example git repository: https://github.com/ramkumar/test
Example openshift repository: ssh://12131212000005@testing.rhcloud.com/~/git/test.git
Questions:
1. In which Tomcat folder do I need to check whether the files are updated? I checked app-root and app-deployments (repo-target-test-pages-display.html) and they have the latest files, but these files are not being used by the application: when I removed them, the application still worked without any issues.
2. Do we need to always commit in both places? Is there a way for OpenShift to automatically pick up the Git repository and use it? In Heroku the changes are incorporated automatically, since a GitHub repository can be associated directly.
I have access to the folders via a PuTTY login, and I also have access to the OpenShift console, where I can make changes.
Please guide me on how to resolve the issue.
Thanks.
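One way to avoid committing in two places (a sketch, using the example URLs from the question) is to keep a single clone and give origin an extra push URL, so one git push updates both GitHub and the OpenShift repo:

# Register both repositories as push targets on origin
git remote set-url --add --push origin https://github.com/ramkumar/test
git remote set-url --add --push origin ssh://12131212000005@testing.rhcloud.com/~/git/test.git
# A single push now goes to both remotes (and triggers the OpenShift build)
git push origin master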
We have a dedicated issue tracking (Redmine) machine, which has a Mercurial repository (call it "Redmine repository"). Redmine is set up to use that repository, and as far as I understand, Redmine never makes any changes to that repository. All developers (eventually) push their changes to that repository.
We also have a dedicated production machine, which can execute the code, but is not used to make any changes to the code.
We have two choices:
Set up another Mercurial repository on the production machine (call it "production repository"). When a new production release is approved, pull the changes from the Redmine repository to the production repository, and then update the local working directory to the appropriate revision from the production repository.
Reuse the existing Redmine repository on the production machine, designating it a local repository for the Mercurial installation there (the Redmine repository is on a shared drive that can easily be mounted on the production machine). Whenever a new production release is approved, update the local working directory to the appropriate revision straight from the Redmine repository.
With option #2, we get rid of an extra "pull" step (from Redmine repository to production repository), which slightly simplifies the process. But I'm not sure if it's ok that a single repository is used by two Mercurial installations as if it's local.
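For concreteness, option #1 on the production machine would look something like this (paths and revision are placeholders):

# One-time setup: create the production repository from the Redmine repository
hg clone /mnt/redmine-share/repo /srv/production-repo
# For each approved release: pull new changesets, then update the working directory
cd /srv/production-repo
hg pull
hg update -r <approved-revision>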
Any comments on this choice (or any other aspect of this setup) are appreciated!
It sounds like a bad idea. Mercurial does a really good job of keeping reads and writes to its repository atomic, but it has a harder time doing that when the repository is on a shared drive, even if only one local repository uses it, because network shares (especially on Windows) don't always deliver the atomicity they claim to.
Ideally both the working directory and the repository are on local disk whenever possible, and you use push/pull to move changesets to and from a network share. If that's not possible, then having a single local application use the repo on the remote file system is the best idea.
If you positively want to try having two clones using the same underlying repository check out the ShareExtension, which ships with Mercurial but is for advanced users only.
Instead of trying to piggy-back, why not just put a hook like this in your Redmine repository:
[hooks]
changegroup = hg push //production/clone
That will automatically push changesets that arrive in redmine to production.
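Note that pushing only moves changesets; the production working directory still needs an update before the new code is live. A common companion, assuming you control the production repository's hgrc, is an update hook on that side:

[hooks]
changegroup = hg update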
I have a website that I want to deploy to a client's DEV and UAT environments. The site is part of a Mercurial repo; it is in the Website folder at the same level as the .hg folder. I know I can push the entire repository but would rather push only the Website folder so the client does not have the other files and folders.
The repo looks like this:
Project root
.hg
Database (SQL Source Control uses this)
Documentation (All specs, pdfs, art work etc.)
Lib (pre-Nuget 3rd party dlls)
packages (Nuget stuff)
Website (this is the only area I want to deploy)
.hgignore
Project.sln
Edit:
The client's servers are not connected directly to the internet; my access to them is over a VPN and then RDP. Currently, to deploy any changes I need to zip the site up, put it on a shared FTP server, then wait up to 3 days for the files to be copied to the servers. Rules have been configured so I can use Mercurial over this connection.
Edit 2
I have managed to create a subrepo from the Website folder by forgetting the Website folder and all its contents, committing the change, then putting the files back, creating a repo, and echoing out the .hgsub file. Locally this works for me: I can clone from the Website repo without getting any of the additional folders. However, I have not been able to use this version of the repo, even when I repeat the process on our repo server. When I try to clone the hosted version down to my local working copy I get 404 errors, yet I can clone the hosted version on the hosting server.
I would appreciate some step-by-step instructions (a guide for dummies, if you like) on how to achieve my goal, which is to be able to push only the Website folder to the client's servers. The master copy of the repo is on our repo server; I have a local clone and need to be able to push out versions from my copy.
Edit 3
Turns out that the problem I was having converting a folder to a subrepo, as described in http://mercurial.aragost.com/kick-start/en/subrepositories/#converting-folder-into-a-subrepository, was that the convert command is broken in versions after 2.1.0, and is still broken in 2.3.1. After I figured that out and rolled back to that version of TortoiseHg, I was able to convert the folder to a subrepo; in the root of the repo I have .hgsub, which says Website = Website. I was able to work with that locally: commit to the whole repo or the subrepo, and clone either the full repo or the subrepo (which is what I want). However, I can't get this to work from our master repo server.
I zipped the whole thing up and FTP'd it to our remote master repo server, then set it up so I could clone from it. Directly on the server this works fine (hg clone --verbose -- C:\Repositories\EM .); however, when I try to clone from the server to my local development machine with (hg clone --verbose -- https://myserver.com/hg/EM/ .) it fails with "HTTP Error: 404 (Not Found)".
requesting all changes
adding changesets
adding manifests
adding file changes
added 628 changesets with 6002 changes to 4326 files
updating to branch default
resolving manifests
calling hook preupdate.eol: <function preupdate at 0x00000000035204A8>
getting .hgignore
getting .hgsub
getting .hgsubstate
HTTP Error: 404 (Not Found)
[command returned code 255 Fri Apr 20 10:51:23 2012]
I don't know what the problem is, the files are there so why the 404?
In my opinion Mercurial shouldn't be used for this purpose. This is particularly true if that website is a web application because you shouldn't have the DLLs in Mercurial.
You should look at the web deployment tool built into Visual Studio. Have a look at this page to see if it suits your purpose.
If you can't install the required services on the destination server then it can be configured to use FTP instead.
You cannot push part of a repo tree.
If the DEV and UAT environments are unversioned targets, you can use any other means of distributing the Mercurial content.
You can separate Website into a subrepo, and you will then be able to push that repo on its own.
As others have pointed out, you can't use push for this. Just 'rsync' from your server to theirs. You could even automate that in a hook, where you push to a local repository and it auto-deploys to their site. Something like:
[hooks]
changegroup.deploy = $HG update ; rsync -a Website account@theirserver:/path/to/docroot
I have a working solution to this. I created a batch file that creates an outgoing repo and starts the built-in server so I can pull from it on the client machines. First it clears out the previous folder, then clones from my local working copy (there's a parameter to determine which tag it should clone from). Next it creates a map file and converts the Website folder to a new Website2 folder in order to preserve the history, then gets rid of the original folder and renames the new one. Finally it spins up the built-in server.
cd c:\inetpub\wwwroot
REM Clear out the previous outgoing repo
rd /S /Q _ProjectName
REM Clone the working copy at the tag passed as %1
hg clone -- C:\inetpub\wwwroot\ProjectName#%1 C:\inetpub\wwwroot\_ProjectName
cd c:\inetpub\wwwroot\_ProjectName
REM Build a filemap so convert keeps only Website, moved to the repo root
echo include Website > map.txt
echo rename Website . >> map.txt
hg --config extensions.hgext.convert= convert --filemap map.txt . Website2
cd Website2
hg update
cd ..
REM Drop the original folder and put the converted repo in its place
hg remove Website/*
hg commit -m "Removed Website"
rename Website2 Website
REM Serve the result so the client machines can pull
hg serve
So it isn't pretty, but now I just need to call the batch file, pass the tag I want to build the outgoing website from (uat, dev, etc.), and give it a minute to create my Website folder, with history, that I can pull from or push from. I don't strictly need to call hg serve, because I know the names of the client servers and can push the changesets out by creating aliased remote repositories, but I included that step so the client machines can pull. I haven't fully explored this option, so I'm not sure whether it has any particular advantage. It's fine for the case when it's just me working on the project, but if any other developer needs to work on this then the URI for their local project server will obviously be different (http://SIMON-PC:8000/ won't be the case for everyone), in which case pushing to the client might be best.
But by using this approach my local working repo doesn't need to change, so I don't get any of the issues communicating with our central repo (the 404 errors mentioned in Edit 3). The convert process keeps the entire history of the repo, so the next time I need to send changes I'm not starting at revision 1; in other words it isn't destructive of the Website folder. Although I delete the entire outgoing repo (_ProjectName) each time, I retain the history and am still in a position to pull or push ONLY the Website directory, because it is created each time as a 'standalone' repo.
I have a local machine ("laptop") and a shared Mercurial repository on another machine ("server").
The shared repository is set up as a multi-repository as described in the Mercurial documentation using Apache, the hgwebdir.cgi script and Mercurial 1.4.
The setup works in the sense that I can browse the projects (repositories) in the web browser, I can clone and pull from the server, and I can push from the laptop when the project/repository already exists on the server.
But I cannot create a new project on the laptop (hg init, do stuff, hg commit) and push it to the shared multi-repository (hg push http://server/hg/my-new-project-name) - I get "abort: HTTP Error 404: Not Found", presumably because the directory/project repository does not exist yet.
How can I push a new project/directory structure to a Mercurial running elsewhere? I couldn't find anything in the documentation, how do you guys do it?
You cannot create new remote repositories over http with the built-in functionality. Your options are to either:
create it with an ssh clone: hg clone local-repo ssh://you@remote//path/to/repo
log in to the remote machine and do an hg init where you want the repo; after that you can push to the new empty repo (sketched after this list)
Use a cheesy http-creation CGI like the one I wrote here: http://ry4an.org/unblog/UnBlog/2009-09-17
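For the second option, the end-to-end steps look something like this (hostname and paths are placeholders):

# Create an empty repository on the server over ssh
ssh you@remote "hg init /path/to/repo"
# Then push your local project to it
hg push ssh://you@remote//path/to/repo

Note the double slash in the ssh URL, which makes the path absolute rather than relative to your home directory.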
Update
I tried using Dropbox as described below, but couldn't make it sufficiently reliable, so I'm not recommending that option.
Original answer below, kept for context.
/update
I found one more option: Skipping both http and ssh altogether and using Dropbox for shared repos.
For the one-person-multiple-computers scenario, it looks like the simplest option of the lot, and you get backups as a nice side effect.
Here is a discussion on Hacker News