Does anyone know how to export or obtain the CPU information (cores) for each OpenShift project?

Does anyone know how to export or obtain the CPU information (cores) for each OpenShift project? It is listed in the projects list in the console, but there is no way to export that information from the console, and I have not found a CLI command for it either.
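For reference, one possible way to get this from the command line (a sketch, assuming cluster metrics are enabled; the project name is a placeholder): oc adm top pods reports measured CPU per pod in a project, and oc describe quota shows requested/limit cores where a ResourceQuota is defined.
oc adm top pods -n my-project
oc describe quota -n my-project
To dump the figures for every project in one go, something like the following loop could be redirected to a file:
for p in $(oc get projects -o name | cut -d/ -f2); do echo "== $p =="; oc adm top pods -n "$p"; done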

Related

Trigger external pipeline / job after Jira in OpenShift has started

I'm running Jira in OpenShift using the basic image from Atlassian: https://hub.docker.com/r/atlassian/jira-software
So far most things work fine.
I installed a plugin using the web ui which worked as well.
But now I'm running into an issue when a pod is restarted. The pod uses the image, and naturally (as specified) my plugin is no longer installed. I can install the plugin via web service calls and register it as an OSGi module for Jira, but I don't want to do this manually. Building a pipeline or job for this is quite easy (I'm thinking Jenkins or Ansible Tower), but so far I haven't found a way to trigger this pipeline after the pod is started (or better, after Jira is started).
Anyone got an idea how to handle this?
Thanks and best regards. Sebastian
Why not create a custom image based on the Atlassian image with everything you need installed?
As far as I know, there isn't a way to trigger a pipeline when a Pod is started; only Webhook, Image Change, and Config Change triggers are available. You'll need to write a Jenkinsfile to script all of the installation and setup you want, but then that can be triggered in one of the three ways mentioned.
I'm thinking an Image Change trigger would work best for you, so when the latest version of Atlassian's image comes out, you can run your pipeline to set everything up on the latest version.
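A rough sketch of wiring that up, assuming a pipeline BuildConfig named jira-setup-pipeline and an image stream tracking the Atlassian image (both names are placeholders, not something taken from your setup):
oc import-image jira-software --from=docker.io/atlassian/jira-software --confirm --scheduled
oc set triggers bc/jira-setup-pipeline --from-image=jira-software:latest
The --scheduled flag makes OpenShift re-import the tag periodically, so a new upstream image updates the image stream, which in turn fires the Image Change trigger on the pipeline.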
Also, just curious, but do you have some persistent storage attached to the Jira pod? If not, you'll lose everything in Jira if the Pod dies; that means tickets, boards, comments, everything.
Update:
Looking at this page, it looks like most of the stuff you're trying to persist is stored in jira-home, so maybe mounting that as a persistent volume will be a good solution for you.
You're correct that the tickets are stored in the database, but I'm guessing the database connection settings are getting wiped when the Pod is cycled.
The jira-home directory stores your application and database connection settings, as well as a subdirectory for your plugins.
dbconfig.xml
This file (located at the root of your JIRA home directory) defines all details for JIRA's database connection. This file is typically created by running the JIRA setup wizard on new installations of JIRA or by configuring a database connection using the JIRA configuration tool.
You can also create your own dbconfig.xml file. This is useful if you need to specify additional parameters for your specific database configuration, which are not generated by the setup wizard or JIRA configuration tool. For more information, refer to the 'manual' connection instructions of the appropriate database configuration guide in Connecting JIRA to a database.
jira-config.properties
This file (also located at the root of your JIRA home directory) stores custom values for most of JIRA's advanced configuration settings. Properties defined in this file override the default values defined in the jpm.xml file (located in your JIRA application installation directory). See Advanced JIRA configuration for more information.
In new JIRA installations, this file may not initially exist and if so, will need to be created manually. See Making changes to the jira-config.properties file for more information. This file is typically present in JIRA installations upgraded from version 4.3 or earlier, whose advanced configuration options had been customized (from their default values).
plugins/
This is the directory where plugins built on Atlassian's Plugin Framework 2 (i.e. 'Plugins 2' plugins) are stored. If you are installing a new 'Plugins 2' plugin, you will need to deploy it into this directory under the installed-plugins sub-directory.
'Plugins 1' plugins should be stored in the JIRA application installation directory.
This directory is created on JIRA startup, if it does not exist already.
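If you go the persistent volume route, a minimal sketch with oc set volume (assuming a DeploymentConfig named jira and the image's default home path /var/atlassian/application-data/jira; adjust both to your setup):
oc set volume dc/jira --add --name=jira-home --type=persistentVolumeClaim --claim-name=jira-home --claim-size=5Gi --mount-path=/var/atlassian/application-data/jira
This creates a claim and mounts it at the Jira home directory, so dbconfig.xml, jira-config.properties and the plugins/installed-plugins sub-directory survive pod restarts.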

Download Directory from Google Cloud Compute Engine

I am trying to download a full recursive directory from Google Cloud Platform using the trial edition of the platform. I assumed that the "Download File" option under the SSH dropdown settings would work, but it does not, showing only a "Failed" message in the window.
While trying to look up the answer, I found people mentioning downloading files from storage buckets and such. That is not what this is, and to my knowledge I don't have access to those on a trial edition of GCP. I have a compute engine running, can SSH into it, and am looking to download a full recursive directory from it.
Thank you for any advice that you can offer me!
If you already have SSH access, you can use the scp command to copy files (assuming it is available on the system to which you want to copy the files).
scp -r username@server:/path/to/your/directory /local/destination
Another option is to use SFTP if scp is not available. Various clients are available for various operating systems.
Either of these options will transfer the files over SSH without any additional configuration required on the server (the compute instance in your case).
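Current gcloud releases also ship a thin wrapper around scp that takes care of the SSH keys for you; a sketch, with the instance name, zone and paths as placeholders:
gcloud compute scp --recurse my-instance:/path/to/your/directory /local/destination --zone=us-central1-a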

IBM UrbanCode: getting versions

Is there any REST API that can be used to get the versions of an application in the DEV environment in UrbanCode?
I tried using the supported REST API to get the list of applications and snapshots (if any), but now I need to get the versions (if any) of that particular application inside the DEV environment.
How do I go about it?
Any suggestions would be appreciated.
Thanks.
In IBM UrbanCode Deploy, applications don't have versions. Components have versions.
If you've got the snapshot that has been deployed to the environment, you can get the component versions in that snapshot with this command:
https://www.ibm.com/support/knowledgecenter/SS4GSP_6.2.4/com.ibm.udeploy.api.doc/topics/rest_cli_snapshot_getsnapshotversions_get.html
If you don't have a snapshot, you can use the internal, unsupported API to get the current desired inventory. The command would be something like this:
GET https://ucdserver.example.com:8443/rest/deploy/environment/9e022848-ca4f-447e-9311-3d77103c612c/latestDesiredInventory/true?rowsPerPage=50&pageNumber=1&orderField=name&sortType=desc
That command returns JSON with the versions that have most recently been deployed to the environment and a lot of other info about the environment inventory.
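A sketch of calling that endpoint with curl, assuming the server accepts basic authentication (the host, environment ID and credentials below are placeholders taken from the example above):
curl -k -u admin:password "https://ucdserver.example.com:8443/rest/deploy/environment/9e022848-ca4f-447e-9311-3d77103c612c/latestDesiredInventory/true?rowsPerPage=50&pageNumber=1&orderField=name&sortType=desc"
The -k flag skips certificate verification, which is often needed when the server still uses its default self-signed certificate.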

Click-to-deploy Hadoop on GCE not working

I'm trying "Click-to-deploy Hadoop on Google Compute Engine" here
Unfortunately this doesn't seem to work: either the process stops almost immediately, or it appears frozen.
The message displayed is:
Deployment may take 3 to 10 minutes to complete, depending on the size of your cluster
Creating deployment
In any case, I can't get a cluster. I tried several zones and Hadoop versions; nothing.
Any thoughts?
The problem is occurring because your Cloud project does not have a project id associated with it, but only a project number, which is true for some long-standing Cloud projects.
https://developers.google.com/console/help/new/#projectnumber
You can fix this by going into Developers Console, selecting your project from the project list, selecting Billing & settings from the left-hand navigation, and adding the project id there.
The following URL should take you there directly:
https://console.developers.google.com/project/_/settings
Thanks,
-Matt
A few items to help diagnose the problem:
Go to the Compute Engine instance list and check if there are any instances created for the deployment.
Check if there are any errors raised to the Javascript Console for your browser.
BTW, what browser and version are you using?
Thanks.
No instances were deployed (however, I can deploy, and have deployed, Compute Engine VM instances).
I have a 404 in the console:
POST https://console.developers.google.com/m/deploy?pid=1090158225078&cmd=custom…ion=europe-west1&app=hadoop&xsrf=R5Ezthkrr1L8xU1STye3sXUiHiA:1414055456964 404 (Not Found)
on Chrome, Windows 7
I tried on Firefox too: no 404 in the console, but the same effect: no deployment at all.
The "customdeploy" command should not be returning a 404, so let's check if there's something going on with your Cloud project.
Click to Deploy uses the preview version of Deployment Manager on the backend. Let's check the objects (if any) that Deployment Manager has created for the Hadoop deployment.
To do this, you will need to:
Install the Google Cloud SDK (if you have not already)
Add the preview component
Query for Deployment Manager templates
Query for Deployment Manager deployments
Install the Google Cloud SDK:
Instructions are here: https://cloud.google.com/sdk/
Add the preview component:
gcloud components update preview
Query for Deployment Manager templates
gcloud preview --project=<projectid> deployment-manager templates list
Query for Deployment Manager deployments
gcloud preview --project=<projectid> deployment-manager deployments --region europe-west1 list
One last question. Is this a relatively "new" or "old" Google Cloud project? Sometimes old projects need a feature to be enabled that is automatically enabled on new projects.
Thanks.

Mercurial on shared host without SSH

I am trying to get mercurial to work on a shared host without SSH access.
I have looked through https://www.mercurial-scm.org/wiki/PublishingRepositories and seen different ways to publish my repository to the web, but I have not been able to get any method to work because of certain dependencies.
For example, I looked at hgweb; it says
"First of all, you need to have a Python installation that can access the mercurial package. Verify this by running python and typing the following:
import mercurial"
I haven't worked with Python before and as such am not sure where to run the "import mercurial" command; I am guessing via SSH, which I do not have access to.
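(For clarity, that check is meant to be run in an interactive Python session on the server itself, something like the following, which is exactly the shell access I don't have:)
python
>>> import mercurial
If no ImportError is raised, the mercurial package is reachable from that Python installation.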
I also came across SCM-Manager; the quick guide just talks about downloading the source and extracting the content to your server. SCM-Manager depends on Java, and I am also not sure how to get that to work over the web.
Which of hgweb, SCM-Manager and RhodeCode can be set up without SSH access?
Short Answer: NONE
But you can at least try: if you have a Java application server running (Tomcat, GlassFish, etc.), you can run SCM-Manager as a web application (*.war).