Wowza error response on duplicate stream name - actionscript-3

I am trying to edit the sample chat application provided by Wowza to fit my needs. I want the application to automatically assign stream names when the app loads, but I do not want to allow duplicate stream names.
To do this, here is my plan:
1) Configure Wowza (via Application.xml) not to accept a stream name that already exists.
2) Have my app supply auto-numbered stream names in the format "stream_x", where x is a number.
3) Check whether the automatically supplied stream name already exists on the server. If it does, increment the stream number by one and try to publish again. Repeat until the name no longer duplicates an existing stream.
For step 3, I need to be able to grab the server's response when the stream name already exists.
Looking at the code in the sample chat application provided by Wowza, this is the part involved in publishing the stream:
nsPublish = new NetStream(nc); // nc is the connected NetConnection
nsPublish.addEventListener(NetStatusEvent.NET_STATUS, nsPublishOnStatus); // status events arrive here
I would like to know what error code I will receive here if the stream name already exists on the server.
I'm planning to add a loop below this line to increment my stream name until it no longer duplicates what's already on the server.
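Something like this, maybe (sketched in TypeScript for brevity; the real code would be ActionScript 3, and the retry trigger is exactly the status code I'm asking about):

// Hypothetical helper: turn "stream_3" into "stream_4"
function nextStreamName(name: string): string {
  const match = name.match(/^(.*_)(\d+)$/);
  return match ? match[1] + (Number(match[2]) + 1) : name + "_1";
}

Inside nsPublishOnStatus, on whatever status code signals a duplicate name, I would call nextStreamName and publish again.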
I checked their error codes here but did not find an error related to duplicate stream names:
http://www.wowza.com/forums/content.php?277-How-to-troubleshoot-error-messages#server
Thanks

How about you create a stream with a random name, without checking whether it already exists?
For example, Timestamp + RandomNumber(X) would be good enough. For the sake of shortening it, you could convert it to a Base58 string as well. Unless you get really unlucky (or have really high traffic), you have very little chance of a name collision.
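A minimal sketch of that idea (in TypeScript rather than ActionScript 3, but the logic ports over directly; the alphabet below is the common Bitcoin-style Base58 one):

const BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

// Encode a non-negative integer as a Base58 string
function toBase58(n: number): string {
  let out = "";
  do {
    out = BASE58[n % 58] + out;
    n = Math.floor(n / 58);
  } while (n > 0);
  return out;
}

// Timestamp plus a random suffix: a collision requires two publishers to
// draw the same random number within the same millisecond
function randomStreamName(): string {
  const stamp = toBase58(Date.now());
  const rand = toBase58(Math.floor(Math.random() * 58 ** 4)); // up to 4 extra chars
  return "stream_" + stamp + rand;
}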

Related

API POST Endpoints and Image Processing

So, for my Android app, not only do I get certain data that I would like to POST to an API endpoint in JSON format, but one of the data pieces is an image. Everything besides the image goes into a PostgreSQL database. I want to put the images somewhere (not important where), then put the link to that image in the database.
Here's the thing: while that image is connected to the other pieces of data I send to the API endpoint that get put into the database, I would be sending the image somewhere else, and the link would be put in at a different time. So here's the mental gymnastic I am trying to get over:
How would I send these two separate pieces of data (an image, and all other data in a single JSON object) and have the image associated with the JSON object that gets put into the database, without the image and data getting all mixed up when multiple users do the same thing?
To simplify, say I have the following information as a single JSON object going to an endpoint called api.example.com/frontdoor. The object looks something like this:
{
"visitor_id": "5d548e53-c351-4016-9078-b0a572df0bca",
"name": "John Doe",
"appointment": false,
"purpose": "blahblahblah..."
}
That JSON object is consumed by the server and put into the respective tables in the database.
At the same time, an image is taken, given a UUID as a file name, and sent to api.example.com/face; the server then processes it and somehow adds a link to the image in the proper database row.
The question is: how do I accomplish that? How would I go about relating these two pieces of data that get sent to two different places?
In the end, I plan on having a separate endpoint such as api.example.com/visitors provide a JSON object with a list of all visits that looks something like:
{
"visits": [
{
"visitor_id": "5d548e53-c351-4016-9078-b0a572df0bca",
"name": "John Doe",
"appointment": false,
"purpose": "blahblahblah..."
"image": "imgbin.example.com/faces/c3118272-9e9d-4c54-8824-8cf4cfaa679f.png"
},
...
]
}
Mainly, I am trying to get my head around the design of all of this so I can start writing code. Any help would be appreciated.
As I understand it, your question is about executing an action on the server side where two different sub-services are involved: one service to update the text data in a SQL DB, and another to store an image and then put the image's reference back into the main data. Two approaches come to my mind.
1) Generate a unique ID on the client side and attach it to both the JSON upload and the image upload. Then, when the image is uploaded, the image-upload service can take this ID, find the corresponding record in SQL, and update the image path. However, client-generated unique IDs are not a recommended approach, because there is a chance of collision: more than one client may generate the same ID, which will break the logic. To work around this, before uploading, the client can call an ID-generation service that generates the ID uniquely on the server side and sends it back; the client then performs both uploads using that ID. The downside of this approach is that the client makes an extra call to the server to get the unique ID.

The advantage is that the UI can get separate updates for the data and the image: when the data-upload service succeeds, it can report that the data was saved, and when the image is uploaded at some later point, it can report that the image upload completed. Thus, the responses of the two uploads can be handled independently. However, if the data and image uploads have to happen together and be atomic (the whole upload fails if either the data or the image upload fails), this approach can't be used, because the server must group both actions in a transaction.
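A sketch of approach 1 from the client's point of view (TypeScript; the /id endpoint and field names are hypothetical):

// imageBlob is the photo captured by the app
async function uploadVisit(visitorData: Record<string, unknown>, imageBlob: Blob): Promise<void> {
  // 1. Ask the server for a unique upload ID
  const idRes = await fetch("https://api.example.com/id", { method: "POST" });
  const { uploadId } = await idRes.json();

  // 2. Send the JSON data, tagged with the ID
  await fetch("https://api.example.com/frontdoor", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ upload_id: uploadId, ...visitorData }),
  });

  // 3. Send the image separately, tagged with the same ID; the image service
  // finds the row by upload_id later and fills in the image link
  const form = new FormData();
  form.append("upload_id", uploadId);
  form.append("image", imageBlob, uploadId + ".png");
  await fetch("https://api.example.com/face", { method: "POST", body: form });
}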
2) Another approach is to have a common endpoint for both the image and the data. Both are uploaded together in a single call to the server; the server first generates a unique ID and then makes two parallel calls, one to the data-upload service and one to the image-upload service, passing that unique ID to both. If the uploads have to be atomic, the server must group these sub-service calls in a transaction.

As for the response, it can be synchronous or asynchronous. If the UI needs to wait for the uploads to succeed, the response is synchronous, and the server waits for both sub-services to complete before returning. If the UI doesn't need to wait, the server can respond immediately after dispatching the sub-service calls with a message that the upload request has been accepted; in that case the sub-service calls are processed asynchronously.
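A sketch of approach 2's combined endpoint (TypeScript with Express and multer, purely illustrative; the two sub-service functions are hypothetical stand-ins):

import express from "express";
import multer from "multer";
import { randomUUID } from "crypto";

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep the image in memory

// Hypothetical stand-ins for the two sub-services
async function saveVisitorData(id: string, data: unknown): Promise<void> { /* SQL insert */ }
async function storeImage(id: string, image: Buffer): Promise<void> { /* object storage + link */ }

// One multipart request carries both the JSON (as text field "data") and the image
app.post("/frontdoor", upload.single("image"), async (req, res) => {
  const uploadId = randomUUID(); // generated on the server, so no collisions
  const data = JSON.parse(req.body.data);
  const image = req.file!.buffer;

  // Fan out to both sub-services in parallel, keyed by the same ID;
  // wrap this in a transaction if the pair must be atomic
  await Promise.all([saveVisitorData(uploadId, data), storeImage(uploadId, image)]);

  res.status(202).json({ upload_id: uploadId }); // or wait and reply synchronously
});

app.listen(3000);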
In my opinion, approach 2 is better, because the server has more control over grouping the related actions together. As for the response, it depends on the use case: if the user cares whether his post was properly recorded on the server (as with making a payment), a synchronous implementation is better; if the user initiates the action and leaves (as with generating a report or sending an email), it can be asynchronous. An asynchronous implementation is also better in terms of server utilization, because the server is free to accept other requests rather than waiting for the sub-services' actions to complete.
These are two general approaches; I am sure there are several variations, or maybe entirely different approaches, to this problem.
Ah, too long of an answer; hope it helps. Let me know if you have further questions.

Caching API response from simple ruby script

I'm working on a simple Ruby script with a CLI that will let me browse certain statistics inside the terminal.
I'm using API from the following website: https://worldcup.sfg.io/matches
require 'httparty'
url = "https://worldcup.sfg.io/matches"
response = HTTParty.get(url)
I have two goals in mind. The first is to somehow save the JSON response (I'm not using a database) so I can avoid unnecessary requests. The second is to check whether new data is available, and if it is, to overwrite the previously saved response.
What's the best way to go about this?
... with cli ...
So caching in memory is likely not available to you. In this case you can save the response to a file on disk.
The second is to check whether new data is available, and if it is, to overwrite the previously saved response.
The thing is: how can you check whether new data is available without making a request for the data? It's not possible (given the information you provided). So you can simply keep fetching the data every 5 minutes or so and updating your local file.
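A minimal sketch of that fetch-and-cache loop (shown in TypeScript for illustration; the same pattern maps one-to-one onto HTTParty, File.write, and File.read in Ruby, and the cache file name is my own choice):

import { readFileSync, writeFileSync, existsSync } from "fs";

const MATCHES_URL = "https://worldcup.sfg.io/matches";
const CACHE_FILE = "matches_cache.json"; // hypothetical local cache file

// Download the full response and overwrite whatever was cached before
async function refresh(): Promise<void> {
  const res = await fetch(MATCHES_URL);
  writeFileSync(CACHE_FILE, await res.text());
}

// Read the matches from the cache, fetching first if nothing is cached yet
async function matches(): Promise<unknown[]> {
  if (!existsSync(CACHE_FILE)) await refresh();
  return JSON.parse(readFileSync(CACHE_FILE, "utf8"));
}

setInterval(refresh, 5 * 60 * 1000); // re-fetch every 5 minutes, as suggested above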

Elasticsearch store user input as JSON document

I have the following architectural question. My application's back end is written in Java and the client side in AngularJS. I need to store the user's input on the page so that I can share and bookmark my application's URLs and restore the state from the URL.
I'm going to implement the following approach: every time the user interacts with my application by selecting data and conditions on the page, I'll collect all of his input into a complex JSON document and store this document in Elasticsearch. I'll send the key of this document in ES back to the client application (AngularJS), and based on this key I'll update the page URL. For example, the original URL looks like:
http://example.com/some-page
Based on a key from the server, I'll update this URL to the following:
http://example.com/some-page/analysis/234532453455
where 234532453455 is the key of a document in ES.
Every time a user tries to access the URL http://example.com/some-page/analysis/234532453455, the AngularJS application will try to get the saved state by key (234532453455) via the Java back-end REST endpoint.
Will it work?
Also, I'm in doubt right now about how to prevent duplication of documents in ES. I have no experience with ES yet, so I don't know what approach ES offers out of the box for this purpose.
For example, is it a good idea to calculate a hash code of each JSON document and store this hash code as the document's key, so that before storing a new document I can check for an old document by hash code? Performance is also very important to me, so please take that into account.
To me it sounds like you are trying to implement a cache.
Yes, you can do this, but if you will use ES only for this, then I think you should look at Redis or Memcached instead.
I can't say that ES is a bad solution for this, but ES has some quirks you must keep in mind, for instance its near-real-time search: after you index data, it is not immediately available for search; it takes a few seconds, depending on configuration. (You can also call _refresh, but I am not sure about the performance impact if you index data very often.)
Hash: I don't see a reason to use a hash; I'd rather construct the right ID. If you have types of reports per user, the ID could be "reporttype_{userid}". If you use a hash as the ID, each new object will get a new ID, and instead of overwriting you will end up with many copies of old data for that user. If you go with the pattern reporttype_{userid}, then each time the user regenerates the report with new data, you will just overwrite it.
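For instance (TypeScript with the official @elastic/elasticsearch client; the index name is hypothetical, the ID pattern is the one suggested above):

import { Client } from "@elastic/elasticsearch";

const es = new Client({ node: "http://localhost:9200" });

// Deterministic ID: saving the same user's report again overwrites the old
// document instead of piling up near-duplicates
async function saveReport(reportType: string, userId: string, doc: object): Promise<void> {
  await es.index({
    index: "reports",              // hypothetical index name
    id: `${reportType}_${userId}`, // the "reporttype_{userid}" pattern
    document: doc,                 // 8.x client; older clients take `body` instead
  });
}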
As an option, you could also add userid and expireat fields to that document for future cleanup; for instance, you could have a job that cleans up expired reports. This is only needed if you go with ES, since Redis and Memcached let you set an expiration when you save the data.

MongoDB - Update collection hourly without interrupting normal query

I'm implementing a web service that needs to query a JSON file (size: ~100 MB; format: [{},{},...,{}]) about 70-80 times per second, and the JSON file is updated every hour. "Query a JSON file" means checking whether there is a JSON object in the file that has an attribute with a certain value.
Currently I think I will implement the service in Node.js and import (mongoimport) the JSON file into a collection in MongoDB. When a request comes in, the service will query the MongoDB collection instead of reading and searching the file directly. The Node.js server will also run a timer service, which every hour checks whether the JSON file has been updated and, if it has, "repopulates" the collection with the data from the new file.
The JSON file is retrieved by sending a request to an external API. The API has two methods: methodA lets me download the entire JSON file; methodB is just an HTTP HEAD call, which simply tells whether the file has been updated. I cannot get incrementally updated data from the API.
My problem is with the hourly update. With the service running, requests come in constantly. When the timer detects an update to the JSON file, it will download it, and when the download finishes it will try to re-import the file into the collection, which I think will take at least a few minutes. Is there a way to do this without interrupting queries to the collection?
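The hourly check-and-reload I have in mind looks roughly like this (TypeScript; all names are hypothetical, methodA/methodB stand for the external API's two calls, and the staging-collection rename at the end is one common way to swap data in without blocking readers):

import { MongoClient } from "mongodb";

const DATA_URL = "https://api.example.com/data.json"; // hypothetical; fetched via "methodA"
const client = new MongoClient("mongodb://localhost:27017");

let lastSeen: string | null = null;

async function checkAndReload(): Promise<void> {
  // "methodB": a HEAD call that only says whether the file changed
  const head = await fetch(DATA_URL, { method: "HEAD" });
  const stamp = head.headers.get("last-modified");
  if (stamp === lastSeen) return; // nothing new this hour
  lastSeen = stamp;

  // "methodA": download the whole ~100 MB file
  const docs: any[] = await (await fetch(DATA_URL)).json();

  // Load into a staging collection, then rename it over the live one;
  // the rename is near-instant, so readers never see a half-filled collection
  const db = client.db("stats"); // hypothetical db/collection names
  const staging = db.collection("items_staging");
  await staging.deleteMany({});
  await staging.insertMany(docs);
  await staging.rename("items", { dropTarget: true });
}

await client.connect();
setInterval(checkAndReload, 60 * 60 * 1000); // hourly timer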
Above is my first idea for approaching this. Is there anything wrong with the process? Looking things up in the file directly just seems too expensive, especially with requests coming in about 100 times per second.

Retrieve URL JSON data in MS Access

There is a web service that allows me to go to a URL, with my API key, and request a page of data. The data is returned as JSON. The JSON is well formed; I ran it through JSONLint and confirmed it's OK.
What I would like to do is retrieve the JSON data from within MS Access (2003 or 2007), if possible, and build a table from that data (the first time through), then append/update the table on subsequent calls to that URL. I would settle for a "pre-step" where I retrieve this information via another means. Since I have an API key in the URL, I do not want to do this server-side. I would like to keep it all within Access and run it on my PC at home (it's for personal use anyway).
If I have to use another step before the database load, then JavaScript? But I don't know it very well. I don't even really know what JSON is, other than what I have read on Wikipedia. The URL looks similar to:
http://www.SomeWebService.com/MyAPIKey?p=1&s=50
where: p = page number
s = records per page
AccessDB is a JavaScript lib for MS Access; a quick page search says it plays nicely with JSON, and you can do input/output with it. Boo-ya.
http://www.accessdb.org/
EDIT:
dead url; wayback machine ftw:
http://web.archive.org/web/20131007143335/http://www.accessdb.org/
also on SourceForge:
http://sourceforge.net/projects/accessdb/