Hi, I just created a new project in Google App Engine.
I wanted to use the release pipeline with the build-and-test option (the Python nosetest option). I have connected the repository with Bitbucket. I also tried moving source control from Bitbucket to the default Google source control option, but there does not seem to be any difference.
When I try to create the pipeline I get the following error:
Failed to create the pipeline
I figured this out.
You need to initialize the Compute Engine VM first. I had created a project but had not initialized Compute Engine.
There is an issue about it not working here: https://code.google.com/p/googleappengine/issues/detail?id=10952
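If it helps, one way to make sure Compute Engine is initialized for a project (assuming a reasonably recent Cloud SDK; the project ID below is a placeholder) is to enable its API from the command line:
gcloud services enable compute.googleapis.com --project my-project-id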
I want to host a website via Firebase Hosting. When I run firebase init in cmd I get this:
First, let's associate this project directory with a Firebase project.
You can create multiple project aliases by running firebase use --add, but for now we'll just set up a default project.
i .firebaserc already has a default project, using my_project.
How can I change my project? Since I've deleted the default project that Firebase mentioned, when I run firebase deploy I get this error:
Error: Failed to get Firebase project my_project. Please make sure the project exists and your account has permission to access it.
Can anyone show me the way out? Thanks.
The solution to this is very simple. All you have to do is go into your .firebaserc and change the default to the name of your project. You can probably also just delete .firebaserc and rerun firebase init.
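For reference, a minimal .firebaserc looks something like this (my_project is just a placeholder for your real project ID):
{
  "projects": {
    "default": "my_project"
  }
}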
I'm trying to configure continuous deployment with OpenShift. I have a sample React.js application which I want to deploy automatically whenever a new commit appears in the repository.
I created "application" and "build" and in Openshift, the build is deploying the web app to a new pod well (I used standard Node.js builder template). But it doesn't do it automatically when the code is updated in repo. How can I make Openshift build to observe the repository?
Short answer is: webhooks.
OpenShift exposes so-called webhooks. The source code repository (for example Bitbucket) can notify the OpenShift build (via a web request) when an event like a push to the repository happens.
In the build settings there is a triggers section, where you can configure a new trigger with a specific secret. The triggers are then visible, read-only, in the build's configuration section; they are basically HTTPS addresses.
After creating the trigger, you configure the repository to notify OpenShift using that webhook.
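For illustration, assuming a build configuration named myapp (a placeholder), the configured webhook URLs can be read back with the CLI, and the generic webhook URL looks roughly like this (the exact path depends on your OpenShift version):
oc describe bc/myapp    (shows the configured webhook URLs)
https://<openshift-master>/apis/build.openshift.io/v1/namespaces/<project>/buildconfigs/myapp/webhooks/<secret>/generic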
I have installed and configured a custom Laravel application from a private repository hosted on Bitbucket, on Minishift running on my laptop. All the files were imported properly without any issues and the image is running.
However, now I want to make configuration changes in my repository for my application to work. How do I do that?
Will I have to import the image from the VM on my laptop, work on it, and then push the changes back?
Or will I be able to access the files or folders from within my editor or IDE?
I am new to OpenShift Origin and am using it for the first time.
If you have your source code on Bitbucket, you would checkout the repository to your local laptop, make the changes, commit them, and push them back to the repository on Bitbucket. You would then tell OpenShift to rebuild the application by clicking on the Start Build button on the build configuration details in the web console, or by using oc start-build on the command line, supplying it the name of the build configuration to do the build for. The rebuilding of the image from the code when done will automatically trigger a new deployment. If you set up a webhook in Bitbucket, you can have it tell OpenShift when new changes have been pushed and that will trigger a build without you needing to do it manually.
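For example, assuming the build configuration is called myapp (a placeholder), the manual command-line rebuild would be something like:
oc start-build myapp --follow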
If you are quite new, I would suggest you work through the interactive tutorials at:
https://learn.openshift.com
Also read the free eBook on OpenShift:
https://www.openshift.com/promotions/for-developers.html
I've been trying to follow the guide Setting Up Stackdriver Debugger for Java applications on Google Compute Engine, but I am running into issues with Stackdriver Debug.
I'm building my .war file on a separate build server, then deploying it to my GCE server. I added the agent to the start command via /etc/defaults, and my app appears in the https://console.cloud.google.com/debug control panel. The version I set in the run command matches the revision that shows up in the source-context(s).json files.
However, when I open the app, I see the message:
No source version information was provided by the deployed application
I connected the app's git repo as a mirrored cloud repository, and I can browse the source files in the sidebar of the Stackdriver Debug page. But if I browse to a file and add a breakpoint, I get the error "File was not found in the executable."
I have run the gcloud preview app gen-repo-info-file command, which created two basic JSON files storing my git repo and revision. Is it supposed to do anything else?
I have tried running Jetty in both normal and extracted modes. If I have Jetty first extract the .war file, I can see the source-context.json files in the WEB-INF/classes directory.
What am I missing?
https://github.com/GoogleCloudPlatform/cloud-debug-java#extra-classpath mentions that you can extend the agent path with your WEB-INF/classes directory:
-agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/tomcat/webapps/myapp/WEB-INF/classes
For multiple class paths:
-agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/tomcat/webapps/myapp/WEB-INF/classes:/another/path/with/classes
There are a couple of things going on here.
First, it sounds like you are doing the correct thing with gen-repo-info-file. The debugger agent should pick up the json files from the WEB-INF/classes directory.
The debugger uses fuzzy matching to find source files, so as long as the name of the .java file matches a file in your executable, you should not get that error.
The most likely scenario given the information in your question is that you are attaching the debugger to a launcher process, rather than your actual application. Without further details, I can't absolutely confirm that, though.
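As a generic sanity check (nothing specific to the Stackdriver agent), you can look at the process list to confirm which JVM actually carries the -agentpath flag, e.g.:
ps -ef | grep cdbg_java_agent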
If you send us more details at cdbg-feedback#google.com, we can look more closely at your case to see if we can understand exactly what's happening, and potentially improve our documentation, since it sounds like you followed the docs pretty closely.
I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (i.e. rather than merely an add-on that copies the artifacts elsewhere)
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME} and the "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME}, or something similar.
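To illustrate, with those settings a job named website inside a folder named acme (hypothetical names; ${ITEM_FULL_NAME} expands to the folder path plus job name) would end up with:
e:\jenkins-workspaces\acme\website      (workspace)
e:\jenkins-builds\acme\website          (build records, including archived artifacts)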
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need them: FTP, CIFS, Confluence, Artifactory... especially the ArtifactsDeployer, which will allow you to make a copy of the artifacts in the Jenkins home.
Thank you, Sam, for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to make a symlink to the job archive of a build for multibranch projects. Up to now, we used to manually search for the correct folder basename in the filesystem and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
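For reference, a minimal sketch of such a shared-library step could look like this (the step name jobOutputFolder and the library layout are assumptions, and accessing rawBuild still requires script approval unless the library is trusted):
// vars/jobOutputFolder.groovy
// Returns the path of the current build's artifact archive directory.
def call() {
    return currentBuild.rawBuild.artifactsDir.path
}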
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
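If you use Pipeline jobs rather than freestyle jobs, the closest equivalent is the customWorkspace option on the agent, roughly like this (label and path are placeholders):
pipeline {
    agent {
        node {
            label 'linux'
            customWorkspace '/data/jenkins-workspaces/myjob'  // placeholder path
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'make'  // whatever your build actually runs
            }
        }
    }
}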
A Groovy script under "Prepare an environment for the run" will always run on the master, and this Groovy script can create a symlink from the build's artifacts directory to the place you really want the artifacts archived (archive_to below), which SHOULD include the job name and build number:
import java.nio.file.*
// createSymbolicLink throws IOException on failure instead of returning a status
try {
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
            Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly), when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (a shame).
A workaround for that may be to point a symlink back from the new archive dir; then, when Jenkins purges its own archive dir, the new symlink will dangle and a cron job can later delete the new job archive dir.
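As a sketch of that cleanup (the paths and the back-link name are assumptions), a cron job could look for archive directories whose back-link has gone dangling and remove them:
# delete job archive dirs whose back-link into Jenkins' own archive dir is now broken
find /data/job-archives -maxdepth 2 -name jenkins-backlink -xtype l -printf '%h\n' | xargs -r rm -rf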
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one, so you can work from there.
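With recent versions of the plugin this is also available as a Pipeline step, roughly like this (project name, filter and selector are placeholders):
copyArtifacts projectName: 'upstream-job', filter: '**/*.war', selector: lastSuccessful()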