I am trying to push a second version of my app (Node.js + MongoDB) to my OpenShift account. It worked the first time, but now it fails with this error:
Erics-MacBook-Air:rippleRating ericg$ git push openShift master
Counting objects: 129, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (129/129), done.
Writing objects: 100% (129/129), 28.09 KiB | 0 bytes/s, done.
Total 129 (delta 94), reused 0 (delta 0)
remote: Stopping NodeJS cartridge
remote: Mon Apr 13 2015 07:53:08 GMT-0400 (EDT): Stopping application 'ripplerating' ...
remote: Mon Apr 13 2015 07:53:09 GMT-0400 (EDT): Stopped Node application 'ripplerating'
remote: Stopping MongoDB cartridge
remote: No such file or directory - /var/lib/openshift/xxxxxxxxxxxxxxxxf8000090/app-deployments/2015-04-13_07-53-10.382/metadata.json
To ssh://xxxxxxxxxxxxxxxxf8000090@ripplerating-<domain>.rhcloud.com/~/git/ripplerating.git/
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'ssh://xxxxxxxxxxxxxxxxf8000090@ripplerating-<domain>.rhcloud.com/~/git/ripplerating.git/'
If I rhc ssh into my app, I don't see the directory 2015-04-13_07-53-19.382; I only have app-files, current and by-id (app-files has the metadata.json).
BTW, what would be a good place to put some files (secret.json) that I don't want to keep in the git repo but that can still be used by the Node.js app?
Thanks!
I recently came across this problem myself and wanted to share how I came to a solution.
To start, my /app-deployments directory contained the following:
by-id current redis-cli
Using ls -l reveals that current is actually a soft link to the currently running build.
current -> 2015-07-10_22-45-22.964
However, using the command file current also revealed that:
current: broken symbolic link to `2015-07-12_22-45-22.964'
That seemed strange, but it was consistent with the fact that there was no folder in the /app-deployments directory named with the most recent build's timestamp (2015-07-10_22-45-22.964). I removed current and attempted to push. Same result as the OP, however: the folder for the new build, with metadata.json inside, was missing.
After poking around in by-id and redis-cli, I found that redis-cli contained its own metadata.json file with a lot of null values in it (both 'git_sha1' and 'id' were null). I played with the git_sha1 field to match both my previous and new commits, but nothing changed. The other folder, by-id, had a soft link in it as well, which pointed to the redis-cli folder.
At this point I had backed up everything I wanted and I attempted to force a refresh to defaults of the /app-deployments directory by deleting everything in it and pushing. Surprisingly, it worked! Now my /app-deployments directory looks like this:
2015-07-10_22-45-22.964 by-id current
which is what I normally expect to see in there. Hopefully this will be helpful to someone!
As a side note, I later decided to enable OpenShift's support for multiple rollback versions, which you can read about here. It allows you to specify how many rollbacks you want to keep, which could be very valuable in another situation like this.
I finally got to the bottom of this one. I had created a folder under app-deployments, and that upsets the auto-deployment logic in OpenShift. The current folder under app-deployments had been deleted, and I had to recreate it and put a copy of metadata.json in it. Once I had done that, I was able to deploy again using git push. I am guessing that if you have some secret data that cannot be kept in the git repo, it has to live under app-root/data, although this won't work for a scalable app... in which case I am not sure where that sensitive data should go.
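As a minimal sketch of that app-root/data approach (assuming the OPENSHIFT_DATA_DIR environment variable that OpenShift v2 points at app-root/data; the gear UUID is a placeholder):
# Copy the secret file into the gear's persistent data directory (survives deployments, not tracked by git)
scp secret.json <gear-uuid>@ripplerating-<domain>.rhcloud.com:app-root/data/
# The Node.js app can then read it from the directory that OPENSHIFT_DATA_DIR points to,
# e.g. require(path.join(process.env.OPENSHIFT_DATA_DIR, 'secret.json'))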
My answer is basically the one provided by @Will.R, but shorter:
The problem comes from the fact that:
* app-deployments/current is a broken symbolic link to the most recent build.
If you want to fix this problem:
* Delete everything inside /app-deployments
* Push again
* Problem fixed. :)
(A minimal command sketch of these steps follows below.)
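Something along these lines (app and remote names taken from the question; back up anything you still need first):
rhc ssh ripplerating            # SSH into the gear
cd ~/app-deployments            # the directory from the error message
rm -rf ./*                      # clear out current, by-id and any stray folders
exit
git push openShift master       # the next push recreates the deployment metadata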
I have a 3-node bare-metal K3s cluster where an install fails on one node, but not on another.
My guess is that somehow the Kubernetes image repository on the node where the deployment failed is in a bad state. I don't know how to prove that, or fix it.
I did a helm install yesterday which failed with the following error:
Apr 14 14:28:41 clstr2n1 k3s[18777]: E0414 14:28:41.878018 18777 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"docker.ssgh.com/device-api:1.2.0-SNAPSHOT\": failed to copy: httpReadSeeker: failed open: could not fetch content descriptor sha256:cd5b8d67fe0f3675553921aeb4310503a746c0bb8db237be6ad5160575a133f9 (application/vnd.docker.image.rootfs.diff.tar.gzip) from remote: not found" image="docker.ssgh.com/device-api:1.2.0-SNAPSHOT"
I verified that I could pull the image from the repository using docker pull docker.ssgh.com/device-api:1.2.0-SNAPSHOT on my development VM and it worked as expected.
I then set the nodeName attribute for the pod specification to force it to one of the other nodes and the deployment worked as expected.
In addition, I also used cURL to fetch the content descriptor directly from the registry, which worked as expected.
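Roughly, that check against the registry's v2 blob endpoint looks like this (repository name and digest taken from the error above; the credentials are placeholders):
curl -u <user>:<password> -v \
  https://docker.ssgh.com/v2/device-api/blobs/sha256:cd5b8d67fe0f3675553921aeb4310503a746c0bb8db237be6ad5160575a133f9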
Edit for further detail.
My original install included 6 different charts. Initially only 2 of the 6 installed correctly; the remaining 4 reported image pull errors. I deleted the failing 4 and tried again, and this time 2 of the 4 failed. I deleted the failing 2 and tried again. These 2 continued to fail unless I specified a different node, in which case they worked. I deleted them again and waited for an hour to see if Kubernetes would clean up the mess. When I tried again, 1 of them worked, but the other continued to fail. I left it overnight, and it's still failing this morning, unless I force it onto a different node.
It is worth noting that the nodes in question are able to download other images from the same private repo without issue.
There can be multiple reasons for your pod not pulling the image on a particular node:
Docker on the non-working node does not trust the image repo
Docker cannot verify the CA issuer for the repo
The firewall is not open to the image repo on the non-working node
Troubleshoot using the following steps to find the cause of the issue (a command sketch follows below):
Check the connectivity to the image repo from the non-working node
Check the Docker config on the non-working node to confirm it allows the image repo
Do a docker pull directly on the non-working node
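A rough sketch of those checks on the failing node (note that K3s uses containerd rather than Docker, so its embedded crictl is the closest equivalent to docker pull; registry and image names are taken from the question):
# 1. Connectivity / TLS check against the registry's v2 endpoint
curl -v https://docker.ssgh.com/v2/

# 2. Inspect the registry configuration K3s is actually using (mirrors, auth, TLS)
cat /etc/rancher/k3s/registries.yaml

# 3. Try pulling the image directly on the node with the embedded crictl
sudo k3s crictl pull docker.ssgh.com/device-api:1.2.0-SNAPSHOT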
So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json
I would recommend that all of the packages in your composer.json be locked down to the exact version number of the item on Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say the packages only change when you update the list.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and to ensure that you don't accidentally deploy untested code if one of the modules happens to get updated between you testing and your deployment.
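For example, a quick sketch of pinning from the Composer CLI (monolog/monolog stands in for any Packagist package; acme/custom-extension and the commit hash are placeholders):
# Pin a Packagist package to an exact version rather than a range
composer require monolog/monolog:1.25.1

# Pin a dev/VCS package to a specific commit hash
composer require "acme/custom-extension:dev-master#<commit-hash>"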
Actual deployments
Possible Method 1
My opinion is slightly controversial in that, when it comes to Composer, for many of my projects that don't go through a CI system I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage; it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers is enough to handle any remote system failures, that it clogs up the VCS tree, and so on - I won't go into all of that now, as there are arguments for and against (a lot of it opinion based), but since you mentioned it in your question I thought I would let you know that it has served me well on a lot of projects in the past and is a viable option.
Possible Method 2
By pointing your document root at a symlink on your server, you can build into a fresh directory and only switch the symlink over to the new directory once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
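A minimal sketch of that symlink switch, assuming the document root points at /var/www/current and releases live under /var/www/releases (the repo URL is a placeholder):
# Build the new release in its own directory first
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
git clone git@example.com:me/mysite.git "$RELEASE"
cd "$RELEASE" && composer install --no-dev --optimize-autoloader

# Only once the build has succeeded, repoint the document root symlink at the new release
ln -sfn "$RELEASE" /var/www/current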
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means that you will basically be creating a "repository folder" of your vendor files, which is an alternative to adding the entire vendor folder into your VCS - but it also protects you against GitHub / Packagist outages, files being removed and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip files rather than being retrieved from a server - this folder can be stored remotely - think of it as a poor man's private Packagist (another option, by the way).
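A sketch of that artifact setup (the /srv/composer-artifacts path is just an example; the equivalent repositories entry can also be added to composer.json by hand):
# Point Composer at a folder of zipped packages instead of a remote server
composer config repositories.artifacts artifact /srv/composer-artifacts/

# Packages are then installed straight from the zip files in that folder
composer install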
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (if they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail). It can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact ready to deploy will probably just become one of its deployment scenarios (the build steps are sketched below).
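The build job's shell steps could look something like this (phpunit, the example paths and the target host are placeholders; dependencies are pulled first so the test runner is available, and BUILD_NUMBER is the variable Jenkins injects into each build):
# Pull dependencies at their locked versions (including dev tools needed for the tests)
composer update

# Run the test suite; a failure aborts the build
vendor/bin/phpunit

# Package the tested build as an artifact
tar -czf /tmp/build-${BUILD_NUMBER}.tar.gz --exclude='.git' .

# Optionally ship the artifact to the server for the deployment script to unpack
scp /tmp/build-${BUILD_NUMBER}.tar.gz deploy@example.com:/srv/releases/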
Hope this helps :)
Today I tried to push changes to our shared repository, hosted on an Apache (2.2.x) server running WebDAV (over HTTPS).
The repository in the DAV directory is a clone of my working directory. The NoUpdate option is enabled. Both repositories are initialized.
To move on, I mapped the DAV directory/repository as a network drive and set the repository to push to "y:/".
When I try to push from the Workbench, the exception "aborted, ret 255" is thrown.
% hg --repository C:\wamp\www\ommon push y:
pushing to y:
searching for changes
abort: Y:\.hg/store/journal: The system cannot find the file specified
[command returned code 255 Thu Jun 20 12:08:28 2013]
Pushing from commandline throws:
pushing to y:\
searching for changes
abort: y:\.hg/store/journal: The system cannot find the file specified
Exception AttributeError: "'transaction' object has no attribute 'file'" in
<bound method transaction.__del__ of <mercurial.transaction.transaction object>>
I tried to alter the path to the directory, since the mixed path separators looked strange to me, but it did not help.
Further information: I'm not using hgweb or any CGI-script-based setup.
EDIT: Multiple Google results on this issue left me with the impression that pushing changes to a repository served via WebDAV is not really possible, and that I have to use hgweb to resolve it.
But why? My understanding is that WebDAV is capable of writing. Since I mapped the directory as a network drive, Mercurial should be able to push changes onto the webserver just as it does to a local directory.
Can someone confirm this?
Windows WebDAV support can be shaky. It's very possible that, because of the fairly advanced file-system operations Mercurial likely performs, the OS does something incorrectly, or something that Apache's mod_dav cannot cope with.
It's also possible that something simpler is wrong, like Apache blocking access to paths starting with a dot (such as .hg).
You may be able to find something in your apache log, but I would recommend not doing this and use a true mercurial server instead.
Mercurial's HTTP repositories NEVER speak WebDAV.
You have to use a Mercurial-capable web frontend (hgweb, hg serve, etc.) to communicate with the repo, or mount the WebDAV drive as a local drive and access the repository on it as a repository on the local file system.
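If you go the web-frontend route, a minimal sketch using the built-in hg serve (fine for a quick internal setup; for production you would normally put hgweb behind Apache instead; paths and hostname are placeholders):
# On the server: allow pushes to the shared repository
cat >> /path/to/repo/.hg/hgrc <<'EOF'
[web]
allow_push = *
push_ssl = false
EOF

# Serve the repository over plain HTTP
hg serve -R /path/to/repo -p 8000

# On the client: push over HTTP instead of the mapped WebDAV drive
hg push http://server:8000/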
Somewhere I did something silly.
I was deploying my Rails app via cloning the Mercurial repo down onto my Ubuntu server. It worked the first time, and then...well, I made a small change on my dev machine, pushed the changes to the repo, and then deleted the copy on the Ubuntu server and re-cloned from the repo.
The clone operation (the second, and third, and 'n' times) works without error, but I don't have write access to the files that were cloned.
When I try to start up my Mongrel, it can't create the /tmp folder and, with no write access, fails to start the Rails app.
Fixed through work around stated in comment above.
This is probably a simple problem, and I'm feeling exceptionally dumb because I can't find any kind of documentation.
I've just installed TeamCity 5 and I want to get files from my Mercurial hosting, and there are two fields I just can't figure out.
HG Command path. What should I put here? The path to a file containing what? Can I get an example of that file somewhere?
The host is using Mercurial over SSH; where do I define my private key?
Pull changes from? Should I put the address I'm cloning from, i.e. ssh://username@myhost.something/project?
I figured this out for my TeamCity 5 server last week.
HG Command path: HG
Pull changes from: https://bitbucket.org/.../.../
Don't put the username@ in the URL. This is specified in the Username/Password fields instead. If you include the username in the URL it'll fail, as there is a bug in the configuration tool. You'll also see a screenshot of the configuration attached to the thread:
http://www.jetbrains.net/devnet/message/5254640#5254640
I'd suggest getting things working with HTTPS and then moving to SSH if possible. This breaks things down into two easier to solve configuration problems. I used the following tutorial to get SSH going on my Windows client machine.
http://www.codza.com/mercurial-with-ssh-setup-on-windows
I've not set this up on my TeamCity server yet. However I did get TeamCity to pick up my Mercurial.ini settings by putting the ini file in \Documents and Settings\TeamCity, which is the account the service runs under.
I've not used TeamCity, but I think HG command path is probably the full path to your local Mercurial executable. For me (on Linux) that's:
$ type hg
hg is /usr/bin/hg
On Windows, it's wherever the 'hg' executable in your system path was placed by whichever (of the many) Windows installers for Mercurial you used.
Pull changes from sounds like the URL to the repo, so:
ssh://username@myhost.something/project
or
ssh://username@myhost.something//project # note the two slashes before the path
if you're using absolute paths on the server side.
Your private key location/specification depends on what you're using for SSH and whether or not you're running ssh-agent, but here's a link that shows how to point to the key explicitly from within mercurial.ini, which seems sound:
http://dev.openttdcoop.org/projects/home/wiki/Configuring_TortoiseHg_(Windows)#Pointing-to-you-Private-key
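For reference, the relevant mercurial.ini entry ends up looking roughly like this (the TortoisePlink and key paths are examples; adjust them to wherever yours actually live):
[ui]
ssh = "C:\Program Files\TortoiseHg\TortoisePlink.exe" -ssh -i "C:\keys\teamcity.ppk"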