Executing Hudson jobs remotely

I am trying to automate Hudson by hitting the appropriate URLs remotely. I am using Python's urllib2 for this.
First of all, I am trying to build an existing job and get the build status.
A sample url for the build would look like this:
http://tomcaturl:8080/hudson/job/.NET%20Build/build
However, this returns HTML data.
The Hudson docs say that I can get data in Python/JSON/XML format, so I try to hit
http://tomcaturl:8080/hudson/job/.NET%20Build/build/api/json
But I get no data at all, although the build happens successfully.
Is there a way to find out which build was started by my remote build request, so that I can maintain a one-to-one mapping?
Please note that I am doing this through a remote Python program and I DO NOT have access to the Hudson GUI.

First of all, if you have any security/login enabled, you have to be logged in to the remote Hudson server for the /job/JobName/build request to work. If you allow starting the build without being logged in, this is not a problem.
The /job/JobName/build request will return HTML data. If you are not logged in, you will get a response redirecting to the login page and the build will not be started. If the request is successful you will not get a redirect to the login page, and you can assume the build was queued. You can also check the build queue using the API URL of the project (see below). Note that there may be a delay before the build is started, which you can control by calling /job/JobName/build?delay=0sec
The API is not available under the /job/JobName/build URL, but you can see API information here:
http://tomcaturl:8080/hudson/job/.NET%20Build/api
Most pages in Hudson that show information (about a project, a specific build, and so on) have an API page if you append /api/xml or /api/json to the end of the URL.
The reason /job/JobName/build doesn't have an API page is simply that it isn't a URL to an information page.
Example api requests:
XML call for information about the project:
http://tomcaturl:8080/hudson/job/.NET%20Build/api/xml
JSON call for information about the last successful build of the project:
http://tomcaturl:8080/hudson/job/.NET%20Build/lastSuccessfulBuild/api/json
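Putting the pieces together, a rough sketch of the round trip from Python might look like the following (a sketch only, assuming no security is enabled and using urllib2 as in the question; reading nextBuildNumber before queueing is a simple way to guess which build is "ours", but it is only reliable if nothing else queues the same job at the same moment):

import json
import time
import urllib2

base = "http://tomcaturl:8080/hudson/job/.NET%20Build"

# Guess which build number the queued build will get.
info = json.load(urllib2.urlopen(base + "/api/json"))
expected = info["nextBuildNumber"]

# Queue the build; delay=0sec asks Hudson to start it right away.
urllib2.urlopen(base + "/build?delay=0sec").read()

# Poll the build's own api/json until it exists and has finished.
while True:
    try:
        build = json.load(urllib2.urlopen("%s/%d/api/json" % (base, expected)))
    except urllib2.HTTPError:
        time.sleep(5)       # build has not started yet
        continue
    if build["building"]:
        time.sleep(5)       # still running
        continue
    print build["result"]   # e.g. SUCCESS or FAILURE
    break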

Related

Banno OAuth Cannot GET /v0/oidc/auth

I have a completely working Banno simple-plugin-example running on an AWS Linux server with NodeJS. I copied it over to be transitioned to work under a Microsoft IIS server with NodeJS and URL Rewrites, and all that entails was basically worked out, but it fails when it gets to the actual OAuth process, apparently, as I get a "Cannot GET /v0/oidc/auth" response. I have tried a number of ideas, but am looking for some more ideas to try.
I'm unable to reproduce the behavior that you saw, which makes me wonder if you ran into a temporary blip.
Assuming that we're talking about the latest version (commit c8775db2e3d9ecb4ce9ca708475d81d5936adf0e) of the Simple Plugin Example, then there are a few things to check and/or try.
It'll be good to check that the environment value in your config.js is correct. The config-EXAMPLE.js in the repo uses https://digital.garden-fi.com, which matches up with the Garden demo financial institution. However, if you're not running this with a Client ID and Client Secret from Garden (i.e. this is with a different financial institution), then you'll have to change that environment value as appropriate for your financial institution.
It'll be good to double-check that the Client ID and Client Secret match up with your External Application which is configured for your plugin.
If you're running the Simple Plugin Example locally, then you can try navigating to the http://localhost:8080/dynamic URL which is expected by the plugin when the code is run locally. This is a good way to figure out if the sample code itself is running as expected.
Assuming the above is fine, it'll be good to double-check the "URL Rewrites" which you mentioned...it's unclear what those URL rewrites are doing, but it's possible that you have some code which is interfering with what the sample code is expecting.

I need to run and fill Salesforce Web-to-lead form on herokuapp

I generated a Web-to-Lead form in Salesforce, received an HTML file, installed the Heroku CLI, and registered on Heroku. What should I do next to run this form from the HTML file and create a new record in the Lead object in Salesforce? I need a detailed answer, as I am a beginner.
You have a static HTML page you want to upload to Heroku. Heroku doesn't do "static" stuff, but you can "trick" it into treating your page as a PHP project.
Have you seen posts like
https://medium.com/@winnieliang/how-to-run-a-simple-html-css-javascript-application-on-heroku-4e664c541b0b
https://medium.com/@agavitalis/how-to-deploy-a-simple-static-html-website-on-heroku-492697238e48
https://blog.teamtreehouse.com/deploy-static-site-heroku
https://javascript.plainenglish.io/deploy-your-static-sites-for-free-on-heroku-in-seconds-7644959356a7
Or really you could follow any detailed Heroku "getting started" guide: https://devcenter.heroku.com/start
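If you would rather skip the PHP trick entirely, a tiny Python app that just serves the generated file also works; a sketch, assuming the Salesforce-generated form is saved as index.html next to app.py, with Flask and gunicorn listed in requirements.txt and a Procfile containing "web: gunicorn app:app":

from flask import Flask, send_file

app = Flask(__name__)

@app.route("/")
def form():
    # Serve the Web-to-Lead HTML that Salesforce generated; the form itself
    # posts directly to Salesforce, so nothing else is needed server-side.
    return send_file("index.html")

if __name__ == "__main__":
    app.run()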

How to run a Python script when a hyperlink is clicked from a webpage?

I have some code on my Mac, in the latest version of Python IDLE 3, that collects certain data from a CSV file that gets sent to me, and prints the output in the terminal. I want to create a webpage that has a button or link that a user clicks, and it runs the code and prints out the output of my program.
Eventually I want to be able to create a website with multiple links that can do the same operation.
Will I need to create an SQL database? If so how?...
From the sound of it, you want to use a webpage as a user interface for your Python script. Unfortunately, without utilising a server-side language this is not possible.
Multiple options exist for reacting to a button press on the server side, with PHP being the most well known, but solutions using only Python do exist, such as Flask.
If you're just after a local GUI for your script, simpler options exist within Python, such as Tk.
Actually, you can expose this function using a web server, and then the webpage will call the server with the right URL.
Since you are using Python, I recommend taking a look at Flask (http://flask.pocoo.org/), a great microframework to get you started.
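As a minimal sketch of that idea (the /run route, the port, and collect_csv_data are just placeholders standing in for your existing script, not anything Flask requires):

from flask import Flask

app = Flask(__name__)

def collect_csv_data():
    # Placeholder for your existing CSV-processing code; have it return
    # its output as a string instead of printing to the terminal.
    return "results go here"

@app.route("/run")
def run_script():
    # A link pointing at http://localhost:5000/run will trigger the script
    # and show its output in the browser.
    return collect_csv_data()

if __name__ == "__main__":
    app.run(port=5000)

For what is described here, no SQL database is needed; the script can keep reading its CSV file as it already does.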

How can I get Facebook's open graph scraper to successfully parse my single-page website hosted via GitHub Pages?

One of my friends set up a simple one-page website and asked me to help integrate Open Graph metadata so that sharing on Facebook provides a better user experience.
Unfortunately, Facebook doesn't recognize some values, and Facebook's URL Debugger doesn't really help, because it shows data from the registrar by default and fails with the error message "Error parsing input URL, no data was cached, or no data was scraped." when I click on the "Fetch new scrape information" button. Also, when I click on "See exactly what our scraper sees for your URL", I get the following error: "Document returned no data."
The URL is: http://know-your-limits.com/. The registrar is Gandi and the site is hosted on GitHub. The DNS configuration is as follows:
dig know-your-limits.com +nostats +nocomments +nocmd
; <<>> DiG 9.8.3-P1 <<>> know-your-limits.com +nostats +nocomments +nocmd
;; global options: +cmd
;know-your-limits.com. IN A
know-your-limits.com. 10771 IN A 192.30.252.153
Is there something I could do to fix this through something I have control over (i.e. registrar configuration, GitHub repository updates, HTML updates), as opposed to stuff I don't have control over (i.e. the GitHub web server)?
Do you think it is a bug with GitHub hosting?

Scripted Dashboards for Graphite

I am trying to generate dashboards for some metrics using Graphite. Ideally, I would like to display metrics such as CPU usage, memory, and log statistics stored in the Graphite Whisper DB. Is there any tool (and documentation), such as Kibana 3, which supports scripted dashboards? Thanks.
Try Grafana (http://grafana.org); it is based on Kibana.
Generated graphs can be configured and saved in the following ways:
1. Dashboard
The dashboard can be accessed at http://graphite-url/dashboard. Once you display the graph(s), you can configure the size, lineMode, etc. Once done, save the dashboard by going to dashboard -> save as.
2. Composer
The Django user auth details set up while installing the Graphite webapp can be used to log into the webapp. Once logged in, every graph will have a save icon on the top-left of the composer window. Saved graphs will be stored under user-graphs in the metric tree.
3. Render endpoint
If by 'scripting' you meant generating the content rather than the rendering, use the render URL endpoint. Generate the required URL by script and do something like http://graphite-host/composer/?target=a.b.c&target=d.*.e.f (a sketch of this is shown after this list).
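As a rough sketch of generating such a URL from a script (host, targets, and time range are placeholders; /render with format=json returns the raw data, while dropping format, or using format=png, gives an image instead):

import json
import urllib
import urllib2

host = "http://graphite-host"
params = [
    ("target", "a.b.c"),
    ("target", "d.*.e.f"),
    ("from", "-1h"),
    ("format", "json"),
]
url = host + "/render?" + urllib.urlencode(params)

# Each series comes back as {"target": ..., "datapoints": [[value, timestamp], ...]}.
for series in json.load(urllib2.urlopen(url)):
    print series["target"], len(series["datapoints"]), "datapoints"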
Not sure if this is what you mean, but, for example, using the graph-explorer Graphite dashboard you can go to "/dashboard/server-basics/insert_hostname_here" and it will serve graphs (CPU, memory, disk space) for the given hostname. You can change the dashboards or add more to do the same thing for other metrics.
Edit: this does assume that the plugins it uses were able to parse your metrics.