I'm looking to do some dead-simple logging from a web app (client-side) to some remote service/endpoint. Sure, I could roll my own, but for the purpose of this task, let's assume I want an existing service like Logentries/Splunk/Logstash so that my viewers can still log debugging info if my backend goes down.
Most logging services offer an API where I can include some <script/> on my page and then call something like LE.log('string', data); [Logentries example]. However, that pulls in a JS dependency and uses cross-domain XHR, probably for well-founded reasons (like URI length limitations).
My question is if anyone can point me to a service that will let me send simple query params to a "pixel" endpoint (similar to how Google Analytics does it). Something like:
<script>
new Image().src = 'http://something.io/pixel/log/<API_TOKEN>?some_data=1234';
</script>
-- or, in pure HTML --
<img src="http://something.io/pixel/log/<API_TOKEN>?some_data=1234" style="display:none" />
I'd assume some of the big names in the logging-as-a-service space would have something like this but I've not found anything (or it's too specific to turn up any search results).
This would not be for analytics so much as error logging, debugging, etc. Fire-and-forget sort of stuff.
Any advice appreciated.
It's possible to do this with Logentries; they offer a pixel tracker.
They require the data to be base64-encoded, but that's quite simple to do in JavaScript.
From their documentation:
var encoded = encodeURIComponent(btoa("Log message"));
This data can then be used in a pixel tracker like this:
<img src="https://js.logentries.com/v1/logs/{API-TOKEN}?e={ENCODED_DATA}/">
Good Evening!
I've been looking into the possibility of using GAS (Google Apps Script) to host a small bit of JavaScript that lets me use the new Google FinanceApp API. The intention is to use the stock information for a project that involves stock data. I know that there are a few ways to get stock information from Google, but the data that the FinanceApp service returns is more in line with the other sources we are using. (One constraint on this project is that we have multiple sources.)
I've written the JavaScript and I can make an httpc:request to the script URL Google gave me. In the browser the JS returns the JSON object just as I want it; however, when the call is made from Erlang I get back a list of ASCII character codes. From checking the values, it appears to be a document (the start of an HTML page) rather than the JSON.
Below is the javascript and the url to see the json:
https://script.google.com/macros/s/AKfycbzEvuuQl4jkrbPCz7hf9Zv4nvIOzqAkBxL1ixslLBxmSEhksQM/exec
function doGet() {
var stock = FinanceApp.getStockInfo('LON:TSCO');
return ContentService.createTextOutput(JSON.stringify(stock))
.setMimeType(ContentService.MimeType.JSON);
}
As for the Erlang, it's a simple request, but I've not been doing Erlang long, so perhaps I've messed something up here (the URL being the one mentioned above). I've got crypto / ssl / inets started when I'm testing this on the command line.
{ok, {Version, Headers, Body}} = httpc:request(get, {URL, []}, [], []).
I think it's also worth mentioning that when I curl it from Cygwin, I get a massive load of HTML as well. I've included it below, but if you see it you'll thank me for not posting it in here! http://pastebin.com/UtJHXjRm
I've been updating the script as I go with the new versions but I'm at a bit of a loss as to why it's not returning correctly.
If anyone can give me any pointers I'd be very grateful! I get the feeling that it's not intended to be used this way, perhaps only within other Google products and such.
Cheers!
You would need to review how you are deploying the Web App, specifically the "Who has access to the app" setting; for access without authentication it should be set to "Anyone, even anonymous".
See Deploying Your Script as a Web App from the documentation.
In my test, by running:
curl -L https://script.google.com/macros/s/************/exec
I get the following result:
{
"priceopen":358,
"change":2.199981689453125,
"high52":388.04998779296875,
"tradetime":"2013-10-11T15:35:18.000Z",
"currency":"GBX",
"timezone":"Europe/London",
"low52":307,
"quote":357.8999938964844,
"name":"Tesco PLC",
"exchange":"LON",
"marketcap":28929273763,
"symbol":"TSCO",
"volumedelay":0,
"shares":8083060703,
"pe":23.4719295501709,
"eps":0.15248000621795654,
"price":357.8999938964844,
"has_stock_data":true,
"volumeavg":14196534,
"volume":8885809,
"changepct":0.6184935569763184,
"high":359.5,
"datadelay":0,
"low":355.8999938964844,
"closeyest":355.70001220703125
}
Possibly your GET is not following the redirect that happens when you use ContentService. Look at the HTML returned; there is a redirect in there.
I'm doing a little project for my class and I'm just a beginner, so please forgive me if I mix up some of my terminology.
Basically, I'm creating an interactive journey planner for my city's public transit system. Unfortunately, they haven't made all the data I need publicly available. So instead of putting all my time into gathering the data for personal use, I've opted to do some screen scraping - letting their servers calculate the journey info from a START and STOP variable and then displaying the selected info on my page.
So is it possible to fill out a form's fields remotely, and then scrape the data on the page that subsequently loads? And if so, what would be the quickest, most convenient way? This happens to be a case where the data can't be manipulated via the URL, so it has to access the data by filling out the form first.
The website in question:
http://jp.translink.com.au/travel-information/journey-planner
Here is what you can do:
1.) Send a POST request to the journey planner with form data like the following (be aware that CORS might get in the way from the browser; in that case you could use cURL via PHP or similar):
Start:Wickham Tce, Spring Hill
End:Upper Edward St, Spring Hill
SearchDate:10/05/2013 12:00:00 AM
TimeSearchMode:LeaveAfter
SearchHour:7
SearchMinute:40
TimeMeridiem:AM
TransportModes:Bus
TransportModes:Train
TransportModes:Ferry
MaximumWalkingDistance:1500
WalkingSpeed:Normal
ServiceTypes:Regular
ServiceTypes:Express
ServiceTypes:NightLink
FareTypes:Standard
FareTypes:Prepaid
FareTypes:Free
2.) You will get back a new response location. This seems to be a REST link; the important part for you is the id at the end. You will have to request that page, parse the HTML, and look for the div with the HTML id option-summaries, where you will find more information within the divs travel-option-1 to travel-option-n. You have to look at it carefully in order to find out which information is stored where and how you will be able to use it.
In order to find such things you should learn how to use Firebug or Chrome's development tools.
This is one way to solve your problem. Probably not the best, but still better than "screen-scraping" anything. It will take a fair amount of skill and effort, though. Furthermore, if the data provider changes things even slightly, your solution will stop working. They might also prevent your access via CORS or other means (blocking your IP, etc.).
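As a rough illustration of steps 1 and 2, here is a sketch in Node.js. Everything specific in it is an assumption taken from the steps above (the endpoint URL, the field names, the option-summaries / travel-option-n ids), so it will likely need adjusting against the live site:
// Sketch only: endpoint, field names and selectors come from the steps above.
var request = require('request');   // handles the POST and follows redirects
var cheerio = require('cheerio');   // server-side HTML parsing

var form = {
  Start: 'Wickham Tce, Spring Hill',
  End: 'Upper Edward St, Spring Hill',
  SearchDate: '10/05/2013 12:00:00 AM',
  TimeSearchMode: 'LeaveAfter',
  SearchHour: '7',
  SearchMinute: '40',
  TimeMeridiem: 'AM',
  MaximumWalkingDistance: '1500',
  WalkingSpeed: 'Normal'
  // Repeated fields (TransportModes, ServiceTypes, FareTypes) are omitted here;
  // they would need to be serialized as repeated keys in the request body.
};

request.post({
  url: 'http://jp.translink.com.au/travel-information/journey-planner',
  form: form,
  followAllRedirects: true   // step 2: follow the response location
}, function (err, res, body) {
  if (err) { return console.error(err); }
  var $ = cheerio.load(body);
  // Look inside #option-summaries for the travel-option-n divs
  $('#option-summaries div[id^="travel-option-"]').each(function () {
    console.log($(this).text().trim());
  });
});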
I'm creating an app that needs to track the location of the user (with their knowledge, like a running app) so that I can show them their route later. Should I use HTML5 geolocation with some timeout interval to save the coordinates every N seconds? If so, how often should I save the data, and how should I save it (locally using local storage, or post it to the server)?
Also, what is the easiest way to display the map of where the user has been later?
Has anyone done anything like this before?
The timeout interval for forge.geolocation is up to you and the balance of responsiveness you want in your application. Also, network traffic is expensive. So maybe you can buffer... say, the last 10 geopositions... and then HTTP POST (or whatever... see Parse below) in bulk? And since the geo data sounds like temporary device data, why would there be a need to persist it using forge.prefs? Unless maybe you need the app to work "offline"?
For permanent storage I would look at Parse (generous free plan) and their Parse.GeoPoint class via their Javascript or REST Api as one possible solution. They have some nifty methods like (kilometersTo, milesTo, radiansTo) - https://parse.com/docs/js/symbols/Parse.GeoPoint.html
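A minimal sketch of that buffering idea using the standard browser geolocation API (the /api/route endpoint and the buffer size of 10 are assumptions; in a Trigger.io app you would use forge.geolocation rather than navigator.geolocation):
var buffer = [];

navigator.geolocation.watchPosition(function (pos) {
  // Collect points locally first to keep network traffic down
  buffer.push({
    lat: pos.coords.latitude,
    lng: pos.coords.longitude,
    ts: pos.timestamp
  });

  // Flush every 10 positions in one bulk POST (hypothetical endpoint)
  if (buffer.length >= 10) {
    $.ajax({
      type: 'POST',
      url: '/api/route',
      contentType: 'application/json',
      data: JSON.stringify(buffer)
    });
    buffer = [];
  }
}, function (err) {
  console.log('geolocation error: ' + err.message);
}, { enableHighAccuracy: true });
On the Parse side you could map each point to a Parse.GeoPoint before saving, which is what gives you the distance helpers mentioned above.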
I'm looking for a way to create a simple HTML table that can be updated in real-time upon a database change event; specifically a new record added.
In other words, think of it like an executive dashboard. If a sale is made and a new line is added in a database (MySQL in my case) then the web page should "refresh" the table with the new line.
I have seen some information on doing this using the new EVENT GATEWAY feature, but all of the examples use ColdFusion as the "pusher" and not the "consumer". I would like to have ColdFusion both update / push an event to the gateway and also consume the response.
If this can be done using a combination of AJAX and CF please let me know!
I'm really just looking to understand where to get started with real-time updating.
Thank you in advance!!
EDIT / Explanation of selected answer:
I ended up going with #bpeterson76's answer because at the moment it was easiest to implement on a small scale. I really like his Datatables suggestion, and that's what I am using to update in close to real time.
As my site gets larger though (hopefully), I'm not sure if this will be a scalable solution as every user will be hitting a "listener" page and then subsequently querying my DB. My query is relatively simple, but I'm still worried about performance in the future.
In my opinion though, as HTML5 starts to become the web standard, the Web Sockets method suggested by #iKnowKungFoo is most likely the best approach. Comet with long polling is also a great idea, but it's a little cumbersome to implement / also seems to have some scaling issues.
So, let's hope web users start to adopt more modern browsers that support HTML5, because Web Sockets is a relatively easy and scalable way to get close to real time.
If you feel that I made the wrong decision please leave a comment.
Finally, here is some source code for it all:
Javascript:
Note: this is a very simple implementation. It only checks whether the number of records in the current DataTable has changed and, if so, updates the table and throws an alert. The production code is much longer and more involved; this just shows a simple way of getting a close-to-real-time update.
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.1/jquery.js"></script>
<script type="text/javascript" charset="utf-8">
var originalNumberOfRecsInDatatable = 0;
var oTable;
var setChecker = setInterval(checkIfNewRecordHasBeenAdded,5000); //5 second intervals
function checkIfNewRecordHasBeenAdded() {
//json object to post to CFM page
var postData = {
numberOfRecords: originalNumberOfRecsInDatatable
};
var ajaxResponse = $.ajax({
type: "post",
url: "./tabs/checkIfNewItemIsAvailable.cfm",
contentType: "application/json",
data: JSON.stringify( postData )
});
// When the response comes back, if update is available
//then re-draw the datatable and throw an alert to the user
ajaxResponse.then(
function( apiResponse ){
var obj = jQuery.parseJSON(apiResponse);
if (obj.isUpdateAvail == "Yes")
{
oTable = $('#MY_DATATABLE_ID').dataTable();
oTable.fnDraw(false);
originalNumberOfRecsInDatatable = obj.recordcount;
alert('A new line has been added!');
}
}
);
}
</script>
Coldfusion:
<cfset requestBody = toString( getHttpRequestData().content ) />
<!--- Double-check to make sure it's a JSON value. --->
<cfif isJSON( requestBody )>
<cfset deserializedResult = deserializeJSON( requestBody )>
<cfset numberOfRecords = deserializedResult.numberOfRecords>
<cfquery name="qCount" datasource="#Application.DBdsn#" username="#Application.DBusername#" password="#Application.DBpw#">
SELECT COUNT(ID) as total
FROM myTable
</cfquery>
<cfif qCount.total neq variables.numberOfRecords>
{"isUpdateAvail": "Yes", "recordcount": <cfoutput>#qCount.total#</cfoutput>}
<cfelse>
{"isUpdateAvail": "No"}
</cfif>
</cfif>
This isn't too difficult. The simple way would be to add via .append:
$( '#table > tbody:last').append('<tr id="id"><td>stuff</td></tr>');
Adding elements real-time isn't entirely possible. You'd have to run an Ajax query that updates in a loop to "catch" the change. So, not totally real-time, but very, very close to it. Your user really wouldn't notice the difference, though your server's load might.
But if you're going to get more involved, I'd suggest looking at DataTables. It gives you quite a few new features, including sorting, paging, filtering, limiting, searching, and ajax loading. From there, you could either add an element via ajax and refresh the table view, or simply append on via its API. I've been using DataTables in my app for some time now and they've been consistently cited as the number 1 feature that makes the immense amount of data usable.
--Edit --
Because it isn't obvious: to update the DataTable, you assign your DataTables call to a variable:
var oTable = $('#selector').dataTable();
Then run this to do the update:
oTable.fnDraw(false);
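If you would rather append the new row through the DataTables API instead of redrawing, fnAddData does that (the column values here are placeholders):
// Append one row; pass one value per column in the table
oTable.fnAddData([ 'New sale', '$100.00', '2011-06-01' ]);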
UPDATE -- 5 years later, Feb 2016:
This is much more possible today than it was in 2011. New Javascript frameworks such as Backbone.js can connect directly to the database and trigger changes on UI elements including tables on change, update, or delete of data....it's one of these framework's primary benefits. Additionally, UI's can be fed real-time updates via socket connections to a web service, which can also then be caught and acted upon. While the technique described here still works, there are far more "live" ways of doing things today.
You can use SSE (Server-Sent Events), a feature in HTML5.
Server-Sent Events (SSE) is a standard describing how servers can initiate data transmission towards clients once an initial client connection has been established. They are commonly used to send message updates or continuous data streams to a browser client, and are designed to enhance native, cross-browser streaming through a JavaScript API called EventSource, through which a client requests a particular URL in order to receive an event stream.
Here's a simple example:
http://www.w3schools.com/html/html5_serversentevents.asp
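A minimal client-side sketch of the same idea (the /sales-stream URL and the shape of the data are assumptions; the server just needs to emit a text/event-stream response):
var source = new EventSource('/sales-stream');   // hypothetical endpoint

source.onmessage = function (event) {
  // Assume the server sends each new record as a JSON message
  var rec = JSON.parse(event.data);
  $('#MY_DATATABLE_ID tbody').append(
    '<tr><td>' + rec.id + '</td><td>' + rec.total + '</td></tr>'
  );
};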
In MS SQL, you can attach a trigger to a table insert/delete/update event that can fire a stored proc to invoke a web service. If the web service is CF-based, you can, in turn, invoke a messaging service using event gateways. Anything listening to the gateway can be notified to refresh its contents. That said, you'd have to see if MySQL supports triggers and accessing web services via stored procedures. You'd also have to have some sort of component in your web app that's listening to the messaging gateway. It's easy to do in Adobe Flex applications, but I'm not sure if there are comparable components accessible in JavaScript.
While this answer does not come close to directly addressing your question, perhaps it will give you some ideas as to how to solve the problem using db triggers and CF messaging gateways.
M. McConnell
With "current" technologies, I think long polling with Ajax is your only choice. However, if you can use HTML5, you should take a look at WebSockets which gives you the functionality you want.
http://net.tutsplus.com/tutorials/javascript-ajax/start-using-html5-websockets-today/
WebSockets is a technique for two-way communication over one (TCP) socket, a type of PUSH technology. At the moment, it’s still being standardized by the W3C; however, the latest versions of Chrome and Safari have support for WebSockets.
http://html5demos.com/web-socket
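A bare-bones client sketch (ws://example.com/sales is a placeholder; the server has to speak the WebSocket protocol):
var socket = new WebSocket('ws://example.com/sales');   // placeholder URL

socket.onopen = function () {
  console.log('connected');
};

socket.onmessage = function (event) {
  // Assume the server pushes each new record as JSON
  var rec = JSON.parse(event.data);
  alert('A new line has been added: ' + rec.id);
};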
Check out AJAX long polling.
A good place to start: Comet
No, you can't have database code execute server-side code directly. But you could write a service that polls the db periodically to see if a new record has been added and then notifies the code you have that needs pseudo-real-time updates.
The browser can receive real-time updates via a BOSH connection to a Jabber/XMPP server. All the bits and pieces can be found in this book, http://professionalxmpp.com/, which I highly recommend. If you can somehow send an XMPP message upon record addition in your DB, then it is relatively easy to build the dashboard you want. You need strophe.js, a Jabber/XMPP server (e.g. ejabberd), and an HTTP server for proxying http-bind requests. All the details can be found in the book. A must-read which I strongly believe will solve your problem.
The way I would achieve the notification is after the database update has been successfully committed I would publish an event that would tell any listening systems or even web pages that the change has occurred. I've detailed one way of doing this using an e-commerce solution in a recent blog post. The blog post shows how to trigger the event in ASP.NET but the same thing can easily be done in any other language since ultimately the trigger is performed via a REST API call.
The solution in this blog post uses Pusher, but there's no reason why you couldn't install your own real-time server or use a message queue to communicate between your app and the real-time server, which would then push the notification to the web page or client application.
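For example, the page-side subscription with Pusher might look roughly like this (the channel and event names are made up, the key comes from your Pusher account, and the pusher-js client script must be included on the page):
var pusher = new Pusher('YOUR_APP_KEY');      // from your Pusher account
var channel = pusher.subscribe('sales');      // hypothetical channel name

channel.bind('new-record', function (data) {  // hypothetical event name
  // data is whatever your server published after the db commit;
  // update the table / UI here
  console.log(data);
});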
Is there any way to "subscribe" from GWT to JSON objects stream and listen to incoming events on keep-alive connection, without trying to fetch them all at once? I believe that the buzzword-du-jour for this technology is "Comet".
Let's assume that I have an HTTP service which opens a keep-alive connection and puts JSON objects with incoming stock quotes on it in real time:
{"symbol": "AAPL", "bid": "88.84", "ask":"88.86"}
{"symbol": "AAPL", "bid": "88.85", "ask":"88.87"}
{"symbol": "IBM", "bid": "87.48", "ask":"87.49"}
{"symbol": "GOOG", "bid": "305.64", "ask":"305.67"}
...
I need to listen to this events and update GWT components (tables, labels) in realtime. Any ideas how to do it?
There is a GWT Comet Module for StreamHub:
http://code.google.com/p/gwt-comet-streamhub/
StreamHub is a Comet server with a free community edition. There is an example of it in action here.
You'll need to download the StreamHub Comet server and create a new SubscriptionListener, use the StockDemo example as a starting point, then create a new JsonPayload to stream the data:
Payload payload = new JsonPayload("AAPL");
payload.addField("bid", "88.84");
payload.addField("ask", "88.86");
server.publish("AAPL", payload);
...
Download the JAR from the Google Code site, add it to your GWT project's classpath and add the includes to your GWT module:
<inherits name="com.google.gwt.json.JSON" />
<inherits name="com.streamhub.StreamHubGWTAdapter" />
Connect and subscribe from your GWT code:
StreamHubGWTAdapter streamhub = new StreamHubGWTAdapter();
streamhub.connect("http://localhost:7979/");
StreamHubGWTUpdateListener listener = new StockListener();
streamhub.subscribe("AAPL", listener);
streamhub.subscribe("IBM", listener);
streamhub.subscribe("GOOG", listener);
...
Then process the updates how you like in the update listener (also in the GWT code):
public class StockListener implements StreamHubGWTUpdateListener {
public void onUpdate(String topic, JSONObject update) {
String bid = ((JSONString)update.get("bid")).stringValue();
String ask = ((JSONString)update.get("ask")).stringValue();
String symbol = topic;
...
}
}
Don't forget to include streamhub-min.js in your GWT project's main HTML page.
I have used this technique in a couple of projects, though it does have its problems. I should note that I have only done this specifically through GWT-RPC, but the principle is the same for whatever mechanism you are using to handle data. Depending on what exactly you are doing, there might not be much need to over-complicate things.
First off, on the client side, I do not believe that GWT can properly support any sort of streaming data. The connection has to close before the client can actually process the data. What this means from a server-push standpoint is that your client will connect to the server and block until data is available at which point it will return. Whatever code executes on the completed connection should immediately re-open a new connection with the server to wait for more data.
From the server side of things, you simply drop into a wait cycle (the java.util.concurrent package is particularly handy for this, with blocking and timeouts) until new data is available. At that point the server can return a package of data down to the client, which updates accordingly (see the sketch after this list). There are a bunch of considerations depending on what your data flow is like, but here are a few to think about:
Is a client getting every single update important? If so, then the server needs to cache any potential events between the time the client gets some data and then reconnects.
Are there going to be gobs of updates? If this is the case, it might be wiser to package up a number of updates and push down chunks at a time every several seconds rather than having the client get one update at a time.
The server will likely need a way to detect if a client has gone away to avoid piling up huge amounts of cached packages for that client.
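To make that cycle concrete, here is what the client side of the loop looks like in plain JavaScript (a GWT-RPC callback would follow the same request, process, immediately re-request pattern; /poll is a hypothetical endpoint that the server holds open until data is available, and handleUpdates stands in for your UI code):
function poll() {
  $.ajax({
    url: '/poll',          // hypothetical endpoint; the server blocks until data arrives
    timeout: 30000,        // give up and retry if the server holds the request too long
    success: function (updates) {
      handleUpdates(updates);   // stand-in for your UI update code
    },
    complete: function () {
      // Re-open the connection immediately, success or failure
      // (in production you would back off after repeated errors)
      poll();
    }
  });
}

poll();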
I found there were two problems with the server-push approach. With lots of clients, it means lots of open connections on the web server. Depending on the web server in question, this could mean lots of threads being created and held open. The second has to do with the typical browser's limit of 2 requests per domain. If you are able to serve your images, CSS and other static content from second-level domains, this problem can be mitigated.
There is indeed a Comet-like library for GWT - http://code.google.com/p/gwteventservice/
I've not personally used it, so I can't really vouch for whether it's good or not, but the documentation seems quite good. Worth a try.
There are a few others I've seen, like gwt-rocket's Comet library.
Some preliminary ideas for Comet implementation for GWT can be found here... though I wonder whether there is something more mature.
Also, some insight on GWT/Comet integration is available there, using even more cutting-and-bleeding edge technology: "Jetty Continuations". Worth taking a look.
Here you can find a description (with some source samples) of how to do this for IBM WebSphere Application Server. Shouldn't be too different with Jetty or any other Comet-enabled J2EE server. Briefly, the idea is: encode your Java object to JSON string via GWT RPC, then using cometd send it to the client, where it is received by Dojo, which triggers your JSNI code, which calls your widget methods, where you deserialize the object again using GWT RPC. Voila! :)
My experience with this setup is positive, there were no problems with it except for the security questions. It is not really clear how to implement security for comet in this case... Seems that Comet update servlets should have different URLs and then J2EE security can be applied.
The JBoss Errai project has a message bus with bi-directional messaging that provides a good alternative to Cometd.
We are using the Atmosphere Framework (http://async-io.org/) for server push/Comet in a GWT application.
On the client side the framework has GWT integration that is pretty straightforward. On the server side it uses a plain Servlet.
We are currently using it in production with 1000+ concurrent users in a clustered environment. We had some problems along the way that had to be solved by modifying the Atmosphere source. Also, the documentation is really thin.
The framework is free to use.