Is it possible to capture an outgoing HTTP call from an ActionScript (Flex) module? - actionscript-3

I'm trying to develop a test framework for some ActionScript code we're developing (Flex 3.5). What's happening is this:
As part of a Web Analytics function we are calling a track method in a class, providing the relevant information as part of the call. This method is provided in a library (SWC), and we have no access to the code.
Ultimately the track method sends an outgoing HTTP request to the tracking server. We can see this quite happily in HttpFox.
I was hoping to be able to capture this outgoing request and interrogate it in my test class, allowing us to a) run tests in a more standalone fashion, and b) programmatically determine that the correct information is being tracked.

No problem: just run this developer tool, which displays all requests leaving your machine.
http://www.charlesproxy.com/

Unless you're going to use a sniffing tool, which would probably be hard to use for a programmatic evaluation, I would recommend using a proxy to channel your request. You could let the track method send the request to a PHP script on the proxy server, have it evaluate the request content, and then forward it to the actual tracking server. I suppose with a tracking system you won't need to worry about the response, so it shouldn't be too hard to implement.
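To make the idea concrete, here is a minimal sketch of such an "evaluating proxy". The answer above suggests a PHP script; this sketch uses Python for brevity, and TRACKING_HOST plus the in-memory captured list are placeholders for your own setup rather than anything from the tracking library.

```python
# Sketch only: record each tracking request for later assertions, then
# optionally forward it to the real tracking server (placeholder host).
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

TRACKING_HOST = "http://tracking.example.com"  # hypothetical upstream server
captured = []  # a test can inspect this list afterwards

class EvaluatingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record the request so a test can assert on the path/query string.
        captured.append({"path": self.path, "headers": dict(self.headers)})
        # Optionally forward to the real tracking server.
        try:
            urllib.request.urlopen(TRACKING_HOST + self.path, timeout=5)
        except OSError:
            pass  # ignore upstream failures while testing
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), EvaluatingProxy).serve_forever()
```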

You could run a web server on localhost (or on any host, really) and just make sure the DNS entry the code is trying to access points to the server you are running.
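If you go that route, the stand-in server only needs to record what it receives. A minimal sketch, assuming the tracking hostname has been pointed at the machine running it (hosts file or local DNS); note that binding to port 80 usually needs elevated privileges, and the Flash player may additionally request /crossdomain.xml when calling across domains.

```python
# Sketch only: a capture-only stand-in for the tracking server. The /_last
# endpoint is a made-up convenience so a test can read back the most
# recently tracked request.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

last_request = {}

class CaptureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global last_request
        if self.path == "/_last":
            body = json.dumps(last_request).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
            return
        # Any other path is treated as a tracking call and recorded.
        last_request = {"path": self.path, "headers": dict(self.headers)}
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), CaptureHandler).serve_forever()
```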

Related

How to test WebHooks without an on-premise external system?

I'm trying to teach myself about integrating systems via WebHooks.
In a free/hosted GIS system, I can create a WebHook that would, in theory, POST a JSON object to an external system.
The problem is, I don't have an external system available right now for receiving the POST.
I think I need some sort of publicly available sample server that would:
Receive the POST requests
Do something with the requests (i.e. create some sort of record)
...so that I could determine if the WebHook worked correctly or not.
How can I test my WebHooks without having an on-premise external system?
I've poked around websites like Postman Echo and Amazon Lambda. But to my untrained eye, it seems like they're not quite designed for what I need.
You could use any of these options, depending on your requirements:
You could use the webhook modules in services like Integromat or Zapier to receive webhook data and then apply transformations.
You could deploy a script on Heroku and point the webhook calls at the URL generated there.
You could also use services like requestbin, webhook.site, etc. if you just want to receive webhook data; a minimal self-hosted receiver is sketched below.
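If you would rather run the receiver yourself, the sketch below shows a tiny local webhook endpoint that accepts a JSON POST and logs it. The port and the "do something" step are arbitrary placeholders, and a tunnelling tool such as ngrok can expose it publicly if the GIS system cannot reach your machine directly.

```python
# Sketch only: accept a webhook POST, parse the JSON body, and log it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        try:
            record = json.loads(payload)
        except ValueError:
            record = {"raw": payload.decode("utf-8", "replace")}
        print("Received webhook:", record)  # "do something": here we just log it
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), WebhookReceiver).serve_forever()
```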

How to get an OAuth access token from Google Cloud Messaging on a remote server

I have a general comprehension question about OAuth access token retrieval for a Google Chrome Extension.
I have a popup HTML window in the browser that uses jQuery to request data from the server (a LAMP stack on AWS). The data is served by PHP scripts that access a MySQL database. All very basic stuff.
I now want to implement a push messaging system using Google Cloud Messaging to alert users of new content that they can check. However, I don't really understand where I should request the access token or how to listen for the response. I figure it should be in the PHP scripts, but all the Google documentation I've read states that the user has to be present in order to allow access to push messaging. That tells me I should put it in the JavaScript, but I feel that is a bad idea because every user could potentially request an access token, when I think I only need one every 3000 seconds or so. If my app were implemented entirely in PHP I'm sure this would be possible, and now I'm worried that splitting it up like this leaves push messaging out of the question. Am I missing a crucial detail, or am I just out of luck?
If the data access you need isn't user-specific, then you're right: there's no good reason to get a separate token for each user. Check out https://developers.google.com/accounts/cookbook/roles/Apps, which discusses some options.
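If a single server-held token does turn out to be enough, the usual pattern is to refresh and cache it on the server so the browser never asks for one. Below is a rough sketch of that pattern against Google's standard OAuth 2.0 token endpoint, shown in Python for brevity rather than the asker's PHP; the credentials are placeholders and the GCM-specific calls are omitted.

```python
# Sketch only: refresh and cache an OAuth 2.0 access token server-side.
# Assumes a client ID/secret and a refresh token obtained once while the
# user was present.
import json
import time
import urllib.parse
import urllib.request

TOKEN_URL = "https://oauth2.googleapis.com/token"
_cache = {"token": None, "expires_at": 0.0}

def get_access_token(client_id, client_secret, refresh_token):
    # Reuse the cached token until shortly before it expires (~1 hour).
    if _cache["token"] and time.time() < _cache["expires_at"] - 60:
        return _cache["token"]
    data = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }).encode("utf-8")
    with urllib.request.urlopen(TOKEN_URL, data=data) as resp:
        body = json.load(resp)
    _cache["token"] = body["access_token"]
    _cache["expires_at"] = time.time() + body.get("expires_in", 3600)
    return _cache["token"]
```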

Using a completely decoupled frontend with user authentication

I'm playing with the idea of having a completely decoupled HTML5 frontend that still has user authentication for a web app. Is this possible, or will I run into some heavy browser security issues?
The idea is to have all static content delivered through a CDN at something like example.com, and have it fetch dynamic data (and handle user authentication) through a separate subdomain, like api.example.com.
This would speed up the loading time of the site, and I could keep the frontend stuff in a completely separate repo so that the developers don't have to worry about setting up the backend to develop and test new features.
Is this already possible in some JS framework, perhaps backbone.js, angular.js, ember.js, or knockout.js?
It definitely is, but I think it is more about approach than technology. I have implemented what you describe for a project (it's online, but I don't want to do a shameless plug here; if you're interested I can post the link). My stack is Java in the backend, exposing a REST API for both authentication and business logic. The client is a backbone.js application. I explicitly decided NOT to use sessions at all. It is completely stateless, which of course means that the user must be re-authenticated at every request.
When the user logs in through a slightly modified OAuth endpoint, the client gets a token that must be passed with every request. Cookies work well here, as they are handled automatically by the browser; if the token is not passed as a cookie, the backend expects it as a parameter. The frontend communicates using the REST endpoints. It's a single-page application, fully client-side, which means the backend serves a page that is basically empty and includes a few JS files that are the application itself. No other page load occurs. Logout is done by simply deleting the cookie or not sending the authToken; the server cannot and doesn't have to "forget" the user. Tokens are nice as they can be invalidated, either explicitly or by changing the password. I chose this approach as it made it easy to develop a desktop app and a browser plugin for my webapp without touching a single line of backend code.
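To make the stateless scheme concrete, here is a small sketch of the per-request token check in Python. The answer's actual stack is Java; the store, cookie name, and hashing below are placeholder choices, not the author's implementation.

```python
# Sketch only: stateless token check, re-run on every request.
import hashlib

TOKENS = {}  # hypothetical persistent store: token hash -> user id

def hash_token(token):
    return hashlib.sha256(token.encode("utf-8")).hexdigest()

def issue_token(user_id, raw_token):
    # Called once at login; only the hash is stored server-side.
    TOKENS[hash_token(raw_token)] = user_id

def authenticate(cookies, params):
    # Accept the token either as a cookie or as a request parameter,
    # as described above; return the user id or None.
    raw = cookies.get("authToken") or params.get("authToken")
    return TOKENS.get(hash_token(raw)) if raw else None

def revoke_all_for(user_id):
    # Invalidation (e.g. on password change) is just deleting stored hashes.
    for key in [k for k, v in TOKENS.items() if v == user_id]:
        del TOKENS[key]
```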

Is it possible to make a WebSocket act as REST instead of SOAP?

Is there any way to make a WebSocket behave as a REST service and host it in IIS? IIS 8 only supports WebSockets with NetHttpBinding, accessed from a client that has a proxy implemented for the service. But I want WebSockets with REST, so that I can access the service from my Android app and my HTML5 client. Is that possible?
I have a REST service in my project that serves data as required:
1. RegisterTag(TagName);
2. value GetValue();
Now I need a callback from the service: first I call RegisterTag(MyTagName), and then I should get notifications from the server side. This is currently implemented with Server-Sent Events, but now I need to convert this REST service to WebSockets.
So, is it possible to add a REST-style interface over WebSockets? I am planning to use NetHttpBinding in my new implementation.
Have a look at this:
Is ReST over websockets possible?
http://www.kimchy.org/rest_and_web_sockets/
REST does not require any specific protocol, so it is possible to use WebSockets if you like.
"One thing that confuses people, is that REST and HTTP seem to be hand-in-hand. After all, the world-wide-web itself runs on HTTP, and it makes sense, a RESTful API does the same. However, there is nothing in the REST constraints that makes the usage of HTTP as a transfer protocol mandatory. It's perfectly possible to use other transfer protocols like SNMP, SMTP and others to use, and your API could still very well be a RESTful API"
http://restcookbook.com/Miscellaneous/rest-and-http/
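In practice this usually means defining a small REST-like frame format for the messages you send over the socket. Here is a hedged sketch in Python: the dispatcher is transport-agnostic, so it could sit behind IIS or any WebSocket server, and the /tags paths mapping to RegisterTag/GetValue are hypothetical, not part of any existing API.

```python
# Sketch only: REST-style frames (method, path, body) carried as JSON text
# over a WebSocket or any other message transport.
import json

TAGS = {}  # hypothetical tag store

def handle_frame(frame_text):
    frame = json.loads(frame_text)  # {"method": ..., "path": ..., "body": ...}
    method, path, body = frame["method"], frame["path"], frame.get("body")

    if method == "POST" and path == "/tags":            # RegisterTag(TagName)
        TAGS[body["tagName"]] = None
        return json.dumps({"status": 201})
    if method == "GET" and path.startswith("/tags/"):   # value GetValue()
        tag = path.rsplit("/", 1)[-1]
        return json.dumps({"status": 200, "value": TAGS.get(tag)})
    return json.dumps({"status": 404})

# Example frames a client might send over the socket:
# {"method": "POST", "path": "/tags", "body": {"tagName": "MyTagName"}}
# {"method": "GET",  "path": "/tags/MyTagName"}
```

Server-initiated notifications then become just another frame type pushed down the same socket, which is the part plain REST over HTTP cannot give you.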

How feasible/difficult is it to run an application that runs on a router?

In my example, I want to build an application that sends users who join a network some kind of interface and manage this at a central station (possibly the router, or a central server). The new user's input to this interface will be sent back to the central station and controlled.
How plausible is this? Is sending something to a newly discovered IP realistic?
As long as you control the DNS server, you can send them to any web server you like.
Completely plausible, but you'll need a router with open-source firmware, and you'll need to program in the language of that source code and have the toolchain to build the binary for the firmware.
The only thing I can think of is NoCatAuth and friends. The user has to use their web browser, but most are accustomed to that.
Are you trying to FORCE the users to use your application (e.g. by selling these routers via an ISP), or are you expecting users to co-operate (e.g. inside an organisation's WAN)?
If the latter, it may be sufficient to set the DHCP server inside the router to serve the address of an HTTP proxy. That will get picked up by most OS/browsers. The proxy can then be used to control web-traffic - which pages they can see, and which ones are redirected to your own web-app.
If the user is considered an adversary, it would be trivial for them to override the proxy settings. In a LAN/WAN situation, you need to make sure nothing is connecting them to the outside world, except through the proxy.
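As a rough illustration of the proxy-based approach, here is a sketch of a proxy that answers anything outside an allow-list with a redirect to your own web app. The host names, port, and pass-through behaviour are placeholders; a real proxy would parse the absolute URL in the request line and actually forward allowed requests upstream.

```python
# Sketch only: clients are handed this proxy's address via DHCP; pages not
# on the allow-list are redirected to a central web app (captive-portal style).
from http.server import BaseHTTPRequestHandler, HTTPServer

PORTAL_URL = "http://portal.example.lan/"   # your central web app (placeholder)
ALLOWED_HOSTS = {"portal.example.lan"}

class RedirectingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        if host in ALLOWED_HOSTS:
            # A real implementation would forward the request upstream here.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"(forwarding to the real site would go here)")
        else:
            self.send_response(302)
            self.send_header("Location", PORTAL_URL)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 3128), RedirectingProxy).serve_forever()
```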