Is Amfphp a fine way to do real time multiplayer games? - actionscript-3

I figured I could build a multiplayer game with a polling loop using Amfphp. Inside this loop, Amfphp would run requests against the database, and so the players could see the other players through the database data.
Do you think this would put too much strain on the server? Is there some way to make it fast?

Maintainer of Amfphp here. Amfphp is mainly here to help with communication. It does not aim to solve the underlying problem that PHP has with one client talking to another. Your approach with a loop can work for prototyping but won't scale.
If you must use PHP, look at using a socket server. If you can, use another server side language that is better suited to pushing data around from client to client.
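For illustration, here is a minimal sketch of that push model in Node.js with socket.io; the port, the event names, and the players map are assumptions for the example, not anything from the question:

// Minimal push-model sketch: the server relays player positions directly
// from client to client, with no database polling loop involved.
const io = require('socket.io')(3000);
const players = {}; // socket id -> last known position (illustrative)

io.on('connection', (socket) => {
  socket.on('move', (pos) => {
    players[socket.id] = pos;
    // Push the update to everyone else immediately.
    socket.broadcast.emit('playerMoved', { id: socket.id, pos: pos });
  });
  socket.on('disconnect', () => { delete players[socket.id]; });
});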

It depends, and testing will be required anyway.
Basically, AMF will send a smaller data packet in most cases, but AMF first needs to serialize the data, and when it is received it needs to be deserialized; both steps add to the overall time. Depending on data complexity, the whole AMF process might win you a few milliseconds or might lose you some. Only testing can tell. If you stick to generic data types you should be fine; complex data types might cost too much.

Related

Android reliability when the connection is off

I'm developing an app where I store my data in an online DB using HTTP POST and GET.
I need to add some reliability to my software: if the user presses the button and there is no connection, the data should be stored somewhere (a file? sqlite?) and then, when the connection is back, the HTTP request should be sent.
Any advice or pieces of code to show me how to do this?
Thanks.
Sounds good and pretty straightforward to me. Just go for it.
You use a local sqlite db as a "cache". To keep it simple, do not implement any logic about that in your app's normal code. Just use the local db. Then, separately, you code a synchronizer. That one checks for the online connection and synchronizes the local sqlite database with a remote database, maybe mysql (see the sketch at the end of this answer).
This should be perfectly fine for all applications that do not require immediate exchange of the data with other processes all the time.
There is one catch, though: the low performance of sqlite on bigger data sets. That is an issue with all single file database solutions. So this approach probably is only valid for small data sets in total, or if you can reduce the usage of the local database to only a part of the total data, maybe only the time critical stuff.
Another workaround might be to use joins over two separate databases, the local and the remote one. But such things really boost the complexity of code, so think thrice if that really is required.
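As a rough sketch of the cache-plus-synchronizer split, shown in Node.js with the sqlite3 module purely to illustrate the pattern (an Android version would use SQLiteOpenHelper and a background service; sendToRemote here is a hypothetical HTTP POST helper):

const sqlite3 = require('sqlite3');
const db = new sqlite3.Database('cache.db');
db.serialize(); // run the queued statements in order
db.run('CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, payload TEXT)');

// Normal app code only ever touches the local cache.
function saveLocally(payload, done) {
  db.run('INSERT INTO pending (payload) VALUES (?)', [JSON.stringify(payload)], done);
}

// The synchronizer lives apart from the app logic: when the connection
// is back, push each pending row upstream and delete it on success.
function synchronize() {
  db.all('SELECT id, payload FROM pending', (err, rows) => {
    if (err) return;
    rows.forEach((row) => {
      sendToRemote(row.payload, (ok) => {       // hypothetical HTTP POST helper
        if (ok) db.run('DELETE FROM pending WHERE id = ?', [row.id]);
      });
    });
  });
}
setInterval(synchronize, 30 * 1000); // keep retrying until we are online again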

What would be a preferable approach for rendering time series data

We have a simple JSON feed which provides stock/price information at a certain point in time.
e.g.
{t0, {MSFT, 20}, {AAPL, 30}}
{t1, {MSFT, 10}, {AAPL, 40}}
{t2, {MSFT, 5}, {AAPL, 50}}
What would be a preferred mechanism to store/retrieve this data and to plot a graph based on it (say, for MSFT)? Should I use Redis or MySQL?
I would also want to show the latest entries to all users in the portal as and when new data is received. The data could be retrieved every minute. Should I use node.js for this?
Ours is a Rails application, and I would like to know what libraries/database I should use to model this capability.
Depends on the traffic and the data. If the data is relational, meaning it is formally described and organized according to the relational model, then MySQL is better. If most of the queries are get and set with key->value, meaning you are going to fetch the data using one key and you need to support many clients and many sets/gets per minute, then definitely go with Redis. There are many other NoSQL DBs that might fit; have a look at this post for a great review of some of the most popular ones.
So many ways to do this... If getting an update once a minute is enough, have the client do AJAX calls every minute to get the updated data; you can then build your server side using PHP, .NET, a Java servlet, or node.js, again depending on the expected user concurrency. PHP is very easy to develop in, while node.js can support many short I/O requests. Another option you might want to consider is server push (Node's socket.io, for example) instead of the client AJAX call; this way the client is notified immediately on an update (see the sketch below).
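A minimal sketch of the server-push variant with socket.io (the port, the event name, and latestPrices are made up for the example):

const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  socket.emit('prices', latestPrices()); // hypothetical snapshot helper
});

// When the feed delivers a new minute of data, push it out instead of
// waiting for each client to poll.
function onNewData(tick) {
  io.emit('prices', tick);
}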
Personally, I like both node.js and Redis and have used them in a couple of production applications, supporting many concurrent users on a single server. I like node because it's easy to develop in and supports many users, and Redis for its amazing speed and concurrent requests. Having said that, I also use MySQL for saving relational data and PHP servers for fast development of APIs. Each has its own benefits.
Hope you'll find this info helpful.
Kuf.
As Kuf mentioned, there are so many ways to go about this, and it really does depend on your needs: low latency, data storage, or ease of implementation.
Redis will most likely be the best solution if you are going for low latency and an easy-to-implement solution. You can use Pub/Sub to push updates to clients (e.g. Node's socket.io) in real time and run a second Redis instance to store the JSON data as a sorted set, using the timestamp as the score. I've used the same approach with much success for storing time-based statistical data. The downside to this solution is that it is resource (i.e. memory) expensive if you want to store a lot of data.
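A sketch of that sorted-set layout with the classic callback-style node redis client (the key name and tick shape are assumptions for the example):

const redis = require('redis');
const client = redis.createClient();

// ZADD key score member: the timestamp is the score, the JSON tick the member.
function storeTick(tick) {
  client.zadd('prices:MSFT', tick.t, JSON.stringify(tick));
}

// Pull every tick between two timestamps, e.g. for plotting a chart.
function fetchRange(from, to, cb) {
  client.zrangebyscore('prices:MSFT', from, to, (err, members) => {
    cb(err, members && members.map((m) => JSON.parse(m)));
  });
}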
If you are looking to store a lot of data in JSON format and want to use a pull to fetch data every minute, then using ElasticSearch to store/retrieve data is another possibility. You can use ElasticSearch’s range query to search using a timestamp field, for example:
"range": {
"#timestamp": {
"gte": date_from,
"lte": now
}
}
This adds the flexibility of using an extremely scalable and redundant system, storing larger amounts of data, and a RESTful real-time API. 
Best of luck!
Since you're basically storing JSON data...
Postgres has a native JSON datatype.
MongoDB might also be a good fit, since JSON maps naturally to BSON.
But if it's just serving data, even something as simple as memcached would suffice.
If you have a lot of data to keep updated in real-time like stock ticker prices, the solution should involve the server publishing to the client, not the client continually hitting the server for updates. Publish/subscribe (pub/sub) type model with websockets might be a good choice at the moment, depending on your client requirements.
For plotting the data using data from websockets there is already a question about that here.
Ruby-toolbox has a category called HTTP Pub Sub, which might be a good place to start. Whether MySQL or Redis is better depends on what you will be doing with it aside from just streaming stock prices. Redis may be the better choice for performance. Note also that websocket-rails assumes Redis, if you were to use that, just as an example.
I would not recommend a simple JSON API (non-pubsub) in this case, because it will not scale as well (see this answer), but if you don't think you'll have many clients, go for it.
Cube could be a good example for reference. It uses MongoDB for data storage.
For plotting time series data, you may try out cubism.js.
Both projects are from Square.

Should a multiplayer game always request data from a database on each client request?

Well, I don't know if the title for this question is appropriate or not, but I didn't know how to put this in a few words.
I'm currently developing a multiplayer 2D game using NodeJS and Socket.io on the server side and HTML5 on the client side. This game doesn't need to save the players' progress unless they finish it, and I must make sure that all information about the players, such as scores and helps, is always valid. So I decided to centralize this information on the server: the clients never send a score to the server; instead, based on the information sent, the server calculates the scores and sends them to all clients. Besides that, I can have different sessions of this game running with 2 players minimum and 4 players maximum.
At first I decided to implement this with the server always maintaining the game sessions' data in memory, but now I'm questioning whether I should have used a database to store this data instead. I'm using a database to store the sessions' data only when they are finished, because I have no use for unfinished sessions' data. But should I instead maintain the sessions' and players' data in the database while they are playing? My problem here is that the clients communicate very frequently with the server; with this approach I would have to first request their data from the database, make the necessary changes, store it back into the database, and repeat this process on each client request. Maybe the answer to this is obvious, but should I use this approach instead?
This is the first time I'm developing a game, so I have no idea how things usually work. I just know that I want the server to be fast. I chose to keep everything in memory mainly because all the examples and tutorials I found about multiplayer game development never mentioned a database...
Thank you!
I am also developing a multiplayer game using node.js and socket.io. You should not access the database on each client request because:
1) I/O operations are expensive.
Reading from or writing to a database is "expensive" (slow). It is much faster to read and write from memory.
2) I/O operations should be asynchronous in Node.js.
read_or_alter_database(input, function() { update_the_client(); });
This makes the database operation non-blocking: the rest of your application will still run until the operation is done, and then the callback function is executed. If the player's client relies on the database access to update the game state, this becomes a blocking operation (since the game cannot proceed until the database operation is done), which negates the main purpose of Node.js.
3) There will be a high volume of client requests in multiplayer games.
This plus point 1 results in a big efficiency loss.
Sounds like you already have your answer: Since the state is only important once a session is complete, only store the data upon completion of that session. Storing the intermediate values serves no purpose, and only slows things down.
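A sketch of that: keep sessions in memory and write to the database only when a session completes. Shown with socket.io, since that is the asker's stack; applyMove and saveFinishedSession are hypothetical helpers, and the session shape is invented for the example.

const io = require('socket.io')(3000);
const sessions = new Map(); // sessionId -> { players, scores, ... }, in memory only

io.on('connection', (socket) => {
  socket.on('action', ({ sessionId, playerId, move }) => {
    const session = sessions.get(sessionId);
    if (!session) return;
    applyMove(session, playerId, move);             // the server computes scores itself
    io.to(sessionId).emit('state', session.scores); // broadcast the authoritative state
  });

  socket.on('finish', ({ sessionId }) => {
    const session = sessions.get(sessionId);
    if (!session) return;
    // The only database write: persist the session once it is over.
    saveFinishedSession(session, () => sessions.delete(sessionId));
  });
});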
Node.js is a relatively new development platform, so it is uncertain whether it can support such a system. As far as what is NORMALLY done: as much information as possible is stored client-side, and servers typically hold data in memory; at least, I know this is the case for the popular games League of Legends, World of Warcraft, and Starcraft II. Then again, those applications are not HTML/Node.js, are very demanding on the system, and are made by huge development companies. It may be that the DB approach is fine for a smaller game; I've just never heard of it.

mobile application and database interaction design

I'm involved in a project that's going to be developing a mobile application which will interact with a remote database. I'm wondering what the best way to design the interaction between these devices and the database will be, to make the best use of battery life, bandwidth, etc.
Should we have a server side application/set of scripts which do all the database interaction and then send the data back to our application as XML in a response, or should we have the devices querying the database directly? Or is there another better way to approach this?
My feeling at the moment is that the former will be a bit more work, but will reduce the workload on the devices, saving power etc., which would make it the better way to go.
Thanks!
What do you mean by "have the devices querying the database directly"? It is normally considered a bad idea (for security reasons) to expose your database server directly to the Internet. The standard approach is to provide Web Services that connect to the database and return data as JSON or some other format.
Performance is a big issue with these devices, and I have found that they are very slow to process XML, so you may be better off trying JSON. Also, I suggest staying clear of DataSets and DataTables; collections of plain business objects run much faster.
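As a sketch of that web-service layer (Express here is just one arbitrary choice for the example, and queryDatabase is a hypothetical DB helper):

const express = require('express');
const app = express();

app.get('/items', (req, res) => {
  queryDatabase('SELECT id, name FROM items', (err, rows) => { // hypothetical helper
    if (err) return res.status(500).json({ error: 'db failure' });
    res.json(rows); // plain JSON keeps parsing cheap on the device
  });
});

app.listen(8080);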
You might want to have a look at this:
What are the most valuable .Net Compact Framework Tips, Tricks, and Gotcha-Avoiders?

Implementing cross-thread/process queues in Perl

What is the most efficient way of implementing queues to be read by another thread/process?
I'm thinking of using a basic MySQL table polled in a sleep loop. This seems the most scalable (it doesn't even have to be on the same server) but might result in too many queries to the DB.
You have several options, and it really depends on what you are trying to get the system to do.
Fork child processes and interface with them using their stdin/stdout pipes.
Create a named pipe on the file system, like /tmp/mysql.sock. This is basically using sockets to communicate cross-process (see the sketch after this list).
Set up a message broker. I'd recommend giving ActiveMQ a try and using the Stomp client for Perl. This is probably your most scalable solution.
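To illustrate option 2, here is the shape of a local-socket queue, sketched in Node.js rather than Perl purely for brevity (the socket path and processJob are made up for the example):

const net = require('net');
const SOCK = '/tmp/queue.sock'; // illustrative path

// Consumer process: accept connections and handle newline-delimited jobs.
const server = net.createServer((conn) => {
  conn.on('data', (buf) => {
    buf.toString().split('\n').filter(Boolean).forEach(processJob); // hypothetical worker
  });
});
server.listen(SOCK);

// Producer process: connect to the same path and write jobs.
// const conn = net.connect(SOCK);
// conn.write(JSON.stringify(job) + '\n');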
This is one of those things that is simple to write yourself to your exact specifications. I wrote a toy one here:
http://github.com/jrockway/app-queue
I am not sure it compiles anymore, as AnyEvent::Subprocess has changed significantly since I wrote it. But you can steal the ideas.
Basically, I think an RPC-style infrastructure is the best. You have a server that handles keeping the data. Then clients connect and add data or remove data via RPC calls. This gives you ultimate flexibility with the semantics. You can be "transactional" so that if a client takes data and then never says "hey, I am done with it", you can assume the client died and give the job to another client. You can also ensure that each job is only run once.
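The "transactional" semantics described above boil down to a lease-and-ack scheme. A toy in-memory sketch (all names and the 30-second lease are invented for the example):

const queue = [];
const leased = new Map(); // jobId -> { job, deadline }

function take() {
  const job = queue.shift();
  if (job) leased.set(job.id, { job, deadline: Date.now() + 30000 });
  return job;
}

function ack(jobId) {
  leased.delete(jobId); // the client finished, so the job is never re-run
}

// Reaper: any job whose client never said "hey, I am done with it"
// goes back on the queue for another client.
setInterval(() => {
  for (const [id, entry] of leased) {
    if (Date.now() > entry.deadline) { leased.delete(id); queue.push(entry.job); }
  }
}, 5000);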
Anyway, making a queue work with a relational database table involves a bit of effort. You should use something like KiokuDB for the persistence. (You can physically store the data in MySQL if you desire, but this provides a nicer Perl API for it.)
In PostgreSQL you could use the NOTIFY/LISTEN combination; you would only need to wait on the PG connection socket after running LISTEN(s).
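A sketch of that idea, shown with the Node.js pg client for brevity (the Perl equivalent would go through DBD::Pg; the channel name is made up):

const { Client } = require('pg');
const client = new Client(); // connection settings taken from the environment

client.connect().then(() => client.query('LISTEN queue_channel'));

// Fires whenever any session runs: NOTIFY queue_channel, 'payload'
client.on('notification', (msg) => {
  console.log('new queue item:', msg.payload); // woken up, no polling loop needed
});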