If there's an API or feed for Hudson's build history, I can't find it. Basically, I would love to be able to show the build history on another web page. Has this been done before?
Hudson provides RSS feeds of build histories. Links are at the bottom of the build page and look something like this: http://hudson.company.com/rssAll
Then you can use the Google AJAX Feed API: http://www.google.com/uds/solutions/wizards/dynamicfeed.html which will generate all the code for you!
In addition to the RSS feeds mentioned by Scobal, there's also the Hudson Remote access API, which seems easy enough to use.
It all depends on how much information you need and how much effort you're willing to put in yourself. From the looks of it, implementing something simple won't take you more than 15 minutes.
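For example, here's a rough sketch of pulling build history out of the JSON flavor of the remote API and listing it. The host, job name, and exact fields are assumptions; check the /api page on your own Hudson instance (and note that a browser would need CORS or a small server-side proxy):

    // Sketch: fetch recent builds from Hudson's remote JSON API and list them.
    // "my-project" and the available fields are assumptions for illustration.
    const apiUrl =
      "http://hudson.company.com/job/my-project/api/json?tree=builds[number,result,timestamp]";

    interface Build {
      number: number;
      result: string | null; // null while a build is still running
      timestamp: number;     // milliseconds since the epoch
    }

    async function showBuildHistory(): Promise<void> {
      const response = await fetch(apiUrl);
      const data = (await response.json()) as { builds: Build[] };
      for (const build of data.builds) {
        const when = new Date(build.timestamp).toISOString();
        console.log(`#${build.number}: ${build.result ?? "IN PROGRESS"} at ${when}`);
      }
    }

    showBuildHistory().catch(console.error);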
I want to show the amount of visitors for a Bolt page in the frontend. I could not find a proper solution for this in the docs or in the extensions.
Is this possible with Twig? Or should I use a third-party solution?
From the point of view of Twig, no. You would need a reliable way to log each visit, filter out bots and crawlers, and still not trash your database.
I wrote something for WP once (well, twice actually) that attempted to do the visit tracking, but it ended up putting a lot of pressure on the backend to track things that people like Google had far better resources to handle, in terms of a) knowing what should be considered a 'real' visitor, and b) storing and processing the associated data.
What you could do, however, is implement Google Analytics to log the visitors and then use its API to display the visitor count on the front-end.
It might be quite a hassle to figure out how the API works for particular pages, but it will solve your problem.
Good luck!
Also see https://developers.google.com/analytics/
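For what it's worth, a query against the Core Reporting API (v3) for a single page's view count could look roughly like this; the profile ID, dates, page path, and token are placeholders, and getting an OAuth access token is a separate step:

    // Sketch: ask Google Analytics (Core Reporting API v3) for the pageview
    // count of one page. All IDs, dates, and the token are placeholders.
    const params = new URLSearchParams({
      ids: "ga:12345678",                  // your Analytics profile (view) ID
      "start-date": "2014-01-01",
      "end-date": "today",
      metrics: "ga:pageviews",
      filters: "ga:pagePath==/my-bolt-page",
      access_token: "YOUR_OAUTH_TOKEN",    // obtained via OAuth, not shown here
    });

    async function fetchPageviews(): Promise<string> {
      const res = await fetch(`https://www.googleapis.com/analytics/v3/data/ga?${params}`);
      const body = await res.json();
      // totalsForAllResults aggregates each queried metric over the result set.
      return body.totalsForAllResults["ga:pageviews"];
    }

    fetchPageviews().then((views) => console.log(`${views} visitors`));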
I'm a relative beginner with Google Apps Script and JavaScript, but I've been playing around with both for days now and I've created a few simple programs. I'd really like to get started on my dream project, even if it takes me forever. I'd like some advice on what I should use for the UI and what database I should use to hold the information (and whether this app is even possible).
The App
I'd like to create an online novel management app that uses Google Drive as its source for files. The UI would have a tree that shows all the Google Drive files in the novel. When a scene is clicked, the scene opens up for editing.
Questions
1. Is this app a possibility?
2. If so, what do you think I should use for the UI? The Google-provided UI builder? The HTML service? For example, can I have a frame on the right in which the Google Doc that needs to be edited can open up?
3. Lastly, what database should I use? The database would have to store chapter names and positions, as well as scene names, positions, and the Google Doc ID that each scene corresponds to. I've got a handle on ScriptDB and Spreadsheets... and if neither of these is the best option, would some other database work better? And why?
This app will, hopefully, be able to give an overview of a novel in tree form, allow you to open a particular scene and edit it, create new scenes, and change the order in which the scenes are displayed. And when the person finishes their novel, the app will compile all the scenes into one novel (also in Google Drive).
Any insight or suggestions would be greatly appreciated!
Having a look at the questions you recently posted, I think I have a pretty good idea of what you are trying to do, and it looks like an exciting project... I can only encourage you to start it as soon as you can, even if you're not comfortable with all the tools you will need to use; the best learning method is probably to work on something important to you.
Now, your 3 questions: 1 - This is perfectly doable in the GAS environment and shouldn't be too hard to get through.
2 - The GUI builder is an easy way to start with UIs, but it lacks a number of features and tools that you will need (a tree, for example) and is not so easy to expand if you ever need to. Depending on your knowledge of HTML, the choice is mainly between UiApp and HtmlService... I would choose UiApp because I'm not good at HTML at all (but that's not relevant here ;-), but both are capable of building what you want, are easily expandable, and are not too hard to debug. The advantage could go to HtmlService if you are after 'nice looking features', because it opens the door to third-party tools... but again, this is a matter of personal choice. (See the first sketch after point 3.)
3 - A recent post from Mogsdad showed that spreadsheets are faster than ScriptDb for data storage and manipulation. I find them easier too, since I can get a global view of the data in the spreadsheet when debugging. Of course, the spreadsheet must be treated as a container, with the data manipulated at array level, to get maximum performance (see the second sketch below). I use this in a lot of database applications with full satisfaction.
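To make point 2 concrete, a minimal HtmlService skeleton could look like this; the 'index' file and the helper function are hypothetical, and whether a Doc will actually display inside a frame is subject to Google's framing restrictions:

    // Apps Script sketch: serve the app's UI from an HTML file in the project.
    // 'index' is a hypothetical file name; its markup could hold the scene tree
    // on the left and an iframe on the right for the scene being edited.
    function doGet() {
      return HtmlService.createHtmlOutputFromFile('index');
    }

    // Hypothetical helper: turn a stored Doc ID into the URL to load in the frame.
    function getSceneUrl(docId) {
      return 'https://docs.google.com/document/d/' + docId + '/edit';
    }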
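And for point 3, "spreadsheet as a container, manipulated at array level" looks roughly like this; the 'scenes' sheet and its columns are invented for illustration:

    // Apps Script sketch: treat a sheet as a table and read it in one go.
    // The 'scenes' sheet and its columns (name, position, docId) are made up.
    function getScenes() {
      var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('scenes');
      var rows = sheet.getDataRange().getValues(); // a single read: a 2D array
      var scenes = [];
      for (var i = 1; i < rows.length; i++) {      // row 0 is the header
        scenes.push({ name: rows[i][0], position: rows[i][1], docId: rows[i][2] });
      }
      return scenes;
    }

    function addScene(name, position, docId) {
      var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('scenes');
      sheet.appendRow([name, position, docId]);    // one write per new scene
    }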
Sorry for these "general considerations" that don't comply with SO standards ;-)
Yes, it seems that none of the things you are requesting is too outlandish. I recommend sticking to Google services because they are all easily integrated. To start off, you may want to use the UI builder/UI services. There may come a point in this project where you want some functionality that UiApp doesn't provide; at that point, you might want to switch over to HtmlService.
My answer is the same for the databases question. You might want to use a spreadsheet for your database so that you will be able to easily edit it by hand if you need to. You may not have the performance that another database would give you, but it will be fairly easy to test and mess around with your spreadsheet "database."
You could start out with getting the basics down. There's a serious amount of data out there, so I would suggest you research on an "as-needed" basis. Design some work-/dataflow patterns for your app; for that you could try the Fluid UI extension for Chrome. Have a look at this from Mozilla on designing apps.
When you've gone through this you might want to have a look at Phonegap and the basics of web development and how you could combine the two.
There are also several ways of using and storing data. You could try WebSQL, though it's no longer being developed. You could look at IndexedDB (see the sketch below). You could try cookies.
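If IndexedDB appeals to you, the minimal shape of it is something like this (the database and store names are arbitrary):

    // Minimal IndexedDB sketch: open a database, create a store, save a record.
    const request = indexedDB.open("myAppDb", 1);

    request.onupgradeneeded = () => {
      // Runs only when the database is created or its version number changes.
      request.result.createObjectStore("notes", { keyPath: "id" });
    };

    request.onsuccess = () => {
      const db = request.result;
      const tx = db.transaction("notes", "readwrite");
      tx.objectStore("notes").put({ id: 1, text: "hello" });
      tx.oncomplete = () => db.close();
    };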
Seriously, have a look around. You might also like the books from Wrox; they're very informative and come with good working demos, though the books are huge ;)
Does anyone know of any good tutorials that would show me how to create a sitemap similar to the image below? I can't figure out how to add the different sections underneath, like the Your Account, FAQs, etc.
Any help would be extremely helpful. Thank you.
[Image: a Google search result for Instagram, showing sitelinks]
What you want is what Google calls Sitelinks.
The process is automated, and it's not currently possible to create them yourself, but you can manage them with Google's Webmaster Tools. The algorithm Google uses to generate them is not public.
You can try this: http://www.xml-sitemaps.com/, or just google 'xml sitemap generator'
In my 'previous' life, when I had to take care of all the gory details of our company site, I just followed Google's recommended SEO suggestions. It was painstaking and slow, but over time, when we started turning up at the top of search results, that's exactly how Google presented us. It pulled relevant information on its own and created that nice display. Looking at my old codebase, I don't even see a sitemap file there. But I do remember using one of those online generators and then hand-tuning the output a bit.
I am creating a desktop app that will create some reports. I want to export these reports as RSS or Atom feeds. I can easily create feeds with the Rome library for Java, but I have no idea how to distribute them. I thought about embedding httpd into my app, but that's a bad idea, because the computer can be behind NAT or turned off.
I need some kind of "proxy" server, where I can push my feeds and clients will be able to pull content from that server.
I can probably write a server-side app for this, but first I'd like to find out whether a dedicated solution is available for problems like this.
I was also thinking about using some blogging platform and using its API. What do you think about this approach?
One more thing I have to consider when choosing a platform is the ability to handle a lot of updates. Sometimes the desktop app will be shut down, but while it is running it generates quite a lot of updates.
Check out Google's FeedBurner.
EDIT
Here's a better link for their help/FAQ. You'll still need to use some service to generate your feed, but it won't have to handle a heavy load. FeedBurner will poll your feed every 30 minutes, and their servers will act as a proxy for it. As for how to publish the feed for FeedBurner to read, I would recommend writing a service to handle this, all the more since you're getting the data for the feeds from a number of desktop applications; it will probably be easier to write a custom service to interface with them, store your data in a DB, and publish the feed than it would be to modify a blogging service for this purpose.
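As a sketch of that custom-service idea, the feed endpoint itself can stay tiny; something like the following (in a real service the items would come out of the database your desktop apps push their reports into, and the URLs here are placeholders):

    // Sketch: a tiny RSS endpoint for FeedBurner to poll. Items are hard-coded
    // here; a real service would read them from a database.
    import http from "node:http";

    const items = [{ title: "Report 1", link: "http://example.com/reports/1" }];

    const rss = () =>
      `<?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0"><channel>
    <title>Reports</title><link>http://example.com</link>
    <description>Desktop app reports</description>
    ${items.map((i) => `<item><title>${i.title}</title><link>${i.link}</link></item>`).join("\n")}
    </channel></rss>`;

    http.createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "application/rss+xml" });
      res.end(rss());
    }).listen(8080);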
I don't know why I didn't think of this when I first answered your question, but Yahoo has a service called Yahoo Pipes which you could use to generate feeds from various kinds of inputs. I'm not sure how well it would scale, but it might work for you.
I need to write a script that goes to a web site, logs in, navigates to a page, and downloads (and then parses) the HTML of that page.
What I want is a standalone script, not a script that controls Firefox. I don't need any JavaScript support, just simple HTML navigation.
If nothing exists that makes this easy... well, then something that drives a web browser (Firefox or Safari; I'm on a Mac).
thanks
I've no knowledge of pre-built general purpose scrapers, but you may be able to find one via Google.
Writing a web scraper is definitely doable. In my very limited experience (I've written only a couple), I did not need to deal with login/security issues, but while Googling around I saw some examples that dealt with them; I'm afraid I don't remember the URLs for those pages. I did need to know some specifics about the pages I was scraping; having that made it easier to write the scraper, but of course the scrapers were limited to use on those pages. However, if you're just grabbing the entire page, you may only need the URL(s) of the page(s) in question.
Without knowing what language(s) would be acceptable to you, it is difficult to help much more. FWIW, I've done scrapers in PHP and Python. As Ben G. said, PHP has cURL to help with this; maybe there are more, but I don't know PHP very well. Python has several modules you might choose from, including lxml, BeautifulSoup, and HTMLParser.
Edit: If you're on Unix/Linux (or, I presume, Cygwin), you may be able to achieve what you want with wget.
If you wanted to use PHP, you could use the cURL functions to build your own simple web page scraper.
For an idea of how to get started, see: http://us2.php.net/manual/en/curl.examples-basic.php
This is PROBABLY a dumb question, since I have no knowledge of Macs, but what language are we talking about here? And is this a website that you have control over, or something like a spider bot that Google might use when checking page content? I know that in C# you can load in objects from other sites using an HttpWebRequest and a stream reader... In JavaScript (this would only really work if you know what is SUPPOSED to be there), you could open the web page as the source of an iframe and use JavaScript to traverse the contents of all the elements on the page... or better yet, use jQuery.
I need to write a script that goes to a web site, logs in, navigates to a page, and downloads (and then parses) the HTML of that page.
To me this just sounds like a POST or GET request to the URL of the login page could do the job. With the proper username and password parameters (depending on the form input names used on the page) set in the request, the result will be the HTML of the page, which you can then parse as you please.
This can be done with virtually any language. What language do you want to use?
I recently did exactly what you're asking for in a C# project. If login is required, your first request is likely to be a POST that includes credentials. The response will usually include cookies which persist the identity across subsequent requests. Use Fiddler to look at what form data (field names and values) is being posted to the server when you log on normally with your browser. Once you have this, you can construct an HttpWebRequest with the form data and store the cookies from the response in a CookieContainer.
The next step is to make the request for the content you actually want. This will be another HttpWebRequest with the CookieContainer attached. The response can be read by a StreamReader, which you can then read and convert to a string.
Each time I've done this, it has been a pretty laborious process to identify all the relevant form data and recreate the requests manually. Use Fiddler extensively and compare the requests your browser makes when using the site normally with the requests coming from your script. You may also need to manipulate the request headers; again, use Fiddler to construct these by hand, get them submitting correctly and the response as you expect, then code it. Good luck!
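For what it's worth, here is the same POST-credentials-then-carry-the-cookie flow sketched with fetch in Node; the URLs and form field names are hypothetical, and real sites often require extra hidden fields, which is exactly what Fiddler will reveal:

    // Sketch: log in with a POST, carry the session cookie to the next request,
    // and return the protected page's HTML. URLs and field names are made up.
    async function fetchProtectedPage(): Promise<string> {
      const login = await fetch("http://example.com/login", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({ username: "me", password: "secret" }),
        redirect: "manual", // keep the response that actually sets the cookie
      });

      // Grab whatever session cookie the server set on login.
      const cookie = login.headers.get("set-cookie") ?? "";

      const page = await fetch("http://example.com/protected/page", {
        headers: { cookie },
      });
      return page.text(); // the HTML, ready for parsing
    }

    fetchProtectedPage().then((html) => console.log(html.slice(0, 200)));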