How does poker tracking software evaluate live hands? (language-agnostic)

I'm just curious how they make this kind of software.
Do they:
Capture the screen and use an OCR library to track hands (a minimal sketch of this approach follows below)
Determine the memory locations of the hands and read them directly from there
Something else?
Edit: I really mean live tracking, not using the hand histories.
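For what it's worth, the screen-capture approach might look something like the minimal Python sketch below, assuming Pillow and pytesseract are installed and the Tesseract binary is on the PATH; the bounding box and polling interval are hypothetical and would need calibrating to the poker client's table layout.

```python
# Minimal sketch of the screen-capture approach: grab a fixed region of the
# table and OCR it. Assumes Pillow and pytesseract are installed and the
# Tesseract binary is on the PATH. The bounding box is hypothetical and
# would need calibrating to the poker client's layout.
import time

import pytesseract
from PIL import ImageGrab

POT_REGION = (400, 300, 560, 330)  # (left, top, right, bottom) -- hypothetical

def read_pot_size():
    """OCR the region of the screen where the pot size is drawn."""
    screenshot = ImageGrab.grab(bbox=POT_REGION)
    return pytesseract.image_to_string(screenshot).strip()

if __name__ == "__main__":
    while True:  # poll the table once a second
        print("pot:", read_pot_size())
        time.sleep(1)
```

A real tracker would need a region per seat, per card and per pot, plus some image cleanup before the OCR call, but the loop is the same idea.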

Related

Creating a server to arbitrate a simple game

I've created a simple game where 2 players make a simultaneous choice in each round, and the winner of the round is determined by a set of rules specific to the game. Sort of like how Rock Paper Scissors works.
I'd like to be able to offer this game online where 2 players can find and play against each other. There would be some central server to arbitrate the game, and then each player would interact with the game using a game client of his choice that we would provide (e.g. web-based, mobile-based, Flash, etc.).
Obviously, a player could also play against a computer opponent that we could provide. I'd also like to have the capability to allow programmers to submit computer programs that they've written to act as players and play against other programs in some sort of tournament.
I realize that the specifics of my game would certainly need to be written from scratch, but it seems that all of the work that the servers would have to do to communicate with the clients and maintain the state of the game has probably been done many times before. This is probably the bulk of the work.
Does anyone have any ideas for how this could be done quickly and easily? Are there servers available with some sort of standard interface to drop new games into? Is there some sort of open source game server? How would you go about doing this?
Seeing as the clients only communicate with the game server occasionally (as opposed to continuously), a web framework should be able to serve as your "basic game server". While web frameworks may be made for providing "web pages", they can certainly be (ab)used to serve as request handlers.
This certainly doesn't force you to make the game a browser game; standalone game clients can be made easily, and they can communicate with your game server using basic HTTP. I also heard this thing called Ajax is pretty nifty for such things.
Not only will you find a lot of ready-made HTTP-based servers, but as an added bonus, there is a lot more documentation on how to work with Web 2.0®©™ than with "game servers". You just need to know that you want a web framework that lets you easily manage sessions and receive/respond to requests, and a client library that does likewise.
As an added aside, "maintaining the state of the game", as you put it, falls 100% within the domain of the actual game logic. But many web frameworks come with good database support, which will surely be useful for this kind of thing.
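To make that concrete, here is a minimal sketch of a web framework acting as the game server. Flask is my assumption (any framework that can handle requests would do), and the Rock-Paper-Scissors rules and in-memory round state are just for illustration.

```python
# Minimal sketch of a web framework (ab)used as a game server. Flask is an
# assumption -- any framework that can manage requests would do. The rules
# dict is Rock Paper Scissors for illustration, and round state lives in a
# module-level dict; a real server would keep it in a database.
from flask import Flask, jsonify, request

app = Flask(__name__)

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
moves = {}  # player name -> choice for the current round

@app.route("/move", methods=["POST"])
def move():
    data = request.get_json()
    moves[data["player"]] = data["choice"]
    if len(moves) < 2:
        return jsonify(status="waiting for opponent")
    (p1, c1), (p2, c2) = moves.items()
    moves.clear()  # reset for the next round
    if c1 == c2:
        result = "draw"
    elif BEATS[c1] == c2:
        result = f"{p1} wins"
    else:
        result = f"{p2} wins"
    return jsonify(status="done", result=result)

if __name__ == "__main__":
    app.run()
```

Each client simply POSTs JSON like {"player": "alice", "choice": "rock"} to /move; in this toy version the second player to move gets the result back, and a real server would add a polling (or push) endpoint so the first player sees it too.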

What's PPC and how can I use it to make an Xbox rebooter?

So I've always been interested in a type of modified Xbox 360 called a JTAG. Apparently, due to some system checks implemented by Microsoft, they can no longer connect to Xbox Live, and I want to give building a new rebooter a try.
First off, if anyone has any ideas on how to go about doing this or getting started, that would be awesome.
Secondly (this is my main question), what's PPC and how do I learn to "reverse engineer" it?
I'm following this tutorial: http://www.thetechgame.com/Forums/viewtopic/t=1086504.html and I want to know how to go about doing it. I'm assuming I've done everything right so far; I've opened up the right files in IDA and I want to start in on it, but I don't really know how.
PPC (PowerPC) is a CPU architecture and instruction set; you can find the reference for it online. You mentioned JTAG: JTAG isn't an Xbox-specific technology, it's a common hardware debugging standard.
You sound like you've never done low-level development or reverse engineering, so I'm warning you that you may be in way over your head. Having said that, the following few points should get you started:
The Xbox runs a CPU in the PowerPC family of processors. Processors work by reading code from memory and executing it; PowerPC just refers to the format that code takes, which instructions are supported, and their general behavior.
Since you're talking about JTAG, you might need to physically open your Xbox, find some JTAG tools and solder leads to it. You will probably break your Xbox.
Googling for "software reverse engineering" should get you started, but don't expect to learn this overnight as it's a very deep and technical topic.
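To give a flavour of what reading PPC actually involves: tools like IDA just disassemble raw instruction bytes into mnemonics you can read. The sketch below does the same thing with the Capstone library's Python bindings (my assumption; the two hand-picked instructions roughly mean "return 1").

```python
# Minimal sketch: disassemble two raw 32-bit PowerPC instructions with the
# Capstone library (an assumption: pip install capstone). The hand-picked
# bytes encode "li r3, 1" followed by "blr" -- roughly "return 1" in PPC.
from capstone import Cs, CS_ARCH_PPC, CS_MODE_32, CS_MODE_BIG_ENDIAN

code = bytes.fromhex("38600001" "4e800020")  # li r3, 1 ; blr

md = Cs(CS_ARCH_PPC, CS_MODE_32 | CS_MODE_BIG_ENDIAN)
for insn in md.disasm(code, 0x82000000):  # the load address is arbitrary
    print(f"0x{insn.address:08x}: {insn.mnemonic} {insn.op_str}")
```

Reverse engineering is mostly reading output like this and reconstructing what the original program logic must have been.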

Strategies for programmatically controlling a commercial DVD player

If you were tasked with operating a commercial DVD player from a computer program, how would you do it?
My company sells a product that does exactly that. We have a couple of different approaches, and they both have major issues:
Get an IR Transmitter, Pretend To Be a Remote Control
Pros: Works with pretty much every commercial DVD player in existence.
Cons: The IR transmitter is another moving part that can (and too frequently does) go wrong. Only allows one-way communication; you can talk to the DVD player, but it can't talk back; you can only tell if it's on or off by seeing if it's putting out a video signal.
Get a DVD Player With an RS-232 Serial Port
Pros: Everything that's "con" with the IR transmitter approach simply goes away. The direct connection is more reliable and allows the code to understand just what the machine is doing.
Cons: Niche market; very few machines actually have an RS-232 port. So, when a manufacturer discontinues a model you've been using, you're left scrambling to find a replacement.
And I suppose for the sake of completeness, I should mention....
Just Use the DVD Drive In the PC
Cons: Boss doesn't like it.
What other approaches are available? I've seen DVD players with USB ports, but the last time I researched the subject, it seemed that was just for playing media stored on an iPhone or the like and not actually a potential control mechanism.
I'm really hoping somebody will say something like "Silly boy, don't you know about the ridiculously common FOO port that allows a home theater system to control the DVD directly? Just get a USB -> FOO converter and you're all set!" But I'm grateful for any options I haven't already considered.
The DVD drive is the way to go.
But if he does not like that, I'd go ahead and get a PIC microcontroller, one with USB built in (I forget which part number this is). I'd write code to control it, having the I/O lines go out on lead wires that attach to the inside of the front-panel buttons. You'd need fewer than a dozen.
If a model is obsoleted, that only changes where the lead wires attach. A hole can be punched out of the back of the commercial DVD player, and one of those little rubber gaskets can seal the USB cable to it. It looks like a regular player with a USB A cable coming out of the back.
The cable itself would be rather cool; I'd buy a few if someone sold them. My "USB Betamax VCR" would be hilarious.
Bonus points if you integrate it with Front Row, complete with another icon/menu entry.
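For illustration, the PC side of that microcontroller idea could be this simple, assuming the PIC enumerates as a USB-serial device and speaks a made-up one-byte command protocol (port name, baud rate and command bytes below are all hypothetical):

```python
# Minimal sketch of the PC side of the microcontroller approach, assuming the
# PIC enumerates as a USB-serial device (pyserial installed). The port name,
# baud rate and one-byte command protocol are all hypothetical -- they would
# be whatever you implement in the firmware.
import serial

COMMANDS = {"play": b"P", "stop": b"S", "eject": b"E"}  # hypothetical protocol

def press(button, port="/dev/ttyUSB0"):
    """Ask the microcontroller to pulse the front-panel button's lead wires."""
    with serial.Serial(port, 9600, timeout=1) as conn:
        conn.write(COMMANDS[button])

press("play")
```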
Many Blu-ray players can be controlled via the HDMI port. The protocol is probably proprietary and different for each vendor...
I am a big fan of optical media, but it is a dying form of media.
Have you considered building a small Windows or Linux computer with an SD card slot (a Raspberry Pi, say) and placing a DVD disc image on the SD card? From there you could write software to play back the DVD from the image and interact with it. You could even use something like Adobe Director as a framework for playing and interacting with the DVD content.
Or you can bypass the DVD image idea altogether and build an interactive framework in Flash, HTML/CSS or Adobe Director which allows you to draw menus on the fly and play back audio/video when a link/button is selected. This would have the added benefit of being more flexible than a multiplexed DVD. You could program the menus to build from an XML file for easy language localization, typo corrections, etc. And you could support playback of video with multiple audio streams, subtitles, etc.
It depends... are you being tasked with controlling (almost) any dvd player, or do you get to decide the model? If you're trying to control whatever AV setup a customer may have, then you're basically required to go the IR transmitter route. And there will still be things you won't be able to handle (like a PS3) without additional hardware.
Most AV devices don't give out any information about their current state short of power draw and video/audio output, and the ones that do usually use proprietary RF (Sony is going big on this in the near future) or localized standards (like SCART in Europe). A handful will send/recognize command signals over coax, but that went out of style in the 90s.

In which domains is message-oriented middleware like AMQP useful?

What problems does MOM (message-oriented middleware) solve? Scalability? Integration?
In which domains are they typically used, and in which domains are they typically not used?
For example, say, is Google using such a solution for its main search engine or to power GMail?
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Does it make any sense when you're writing a Web app where you control the server side and have a homogeneous environment (say, tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, Web browsers?
Does it make sense for desktop apps that need to communicate with a server?
Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless different systems that need to communicate in one way or another?
I'm a bit confused as to what they're useful for, and I think that with examples of where they are and aren't appropriate I could better understand their use.
This is a great question.
The main uses of messaging are: scaling, offloading work, integration, monitoring, event handling, routing, networking, push, mobility, buffering, queueing, task sharing, alerts, management, logging, batch, data delivery, pubsub, multicast, audit, scheduling, ... and more. Basically: anything where you need data but don't want to make a database request. (Caching is another, longer story).
Another way of looking at this is to notice that many applications used to be built by assuming that users (people) would perform actions that would be fulfilled by executing a transaction on a database (including reads, writes). But today, many actions are not user-initiated. Instead they are application-initiated. For example "tell me when the book that I want to buy is in stock". The best way to solve this class of problems is with messaging of some sort. Whether you call it middleware or web push or real time salad dressing does not matter. It's all messaging.
When you enable applications to initiate or react to events, then it is much easier to scale because your architecture can be based on loosely coupled components. It is also much easier to integrate those components if your messaging is based on a stable, scalable, serviceable tool, preferably using open standard APIs and protocols.
I hope this helps. We try to maintain a list of useful links about messaging here.
Please get in touch with questions and comments on any of this; we are dead easy to find.
To address your specific questions:
In which domains are they typically used, and in which domains are they typically not used?
Like databases, messaging systems crop up everywhere.
For example, say, is Google using such a solution for its main search engine or to power GMail?
Google uses a lot of home grown technology, but a lot of their open source contributions and known use cases suggest that messaging is (or should be) central to some of the main services.
What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there?
Very much so.
An example use case is scaling web page requests. When the user makes a web request, the web server puts it onto a queue for background processing. This means that the web server can keep working while the request is processed. It also means that the web server does not need to know how the request is handled, making system maintenance, upgrade and rollback much simpler because the main parts are 'decoupled'.
So, anyway, the web request gets processed by a back end service, or possibly by many services, eg 'look up book titles', 'draw shopping cart', 'get advertisement', 'check user account'... Finally all the results get put onto another queue, ready for collection and user response by the web server. Typically the system will include a timeout of around 100ms so that any late requests just get thrown away. The user sees anything that got processed in the time interval. This is one reason why some large ecommerce sites have pages that appear to load in stages.
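As a concrete illustration of the enqueue-and-keep-serving step, here is roughly what the web server's side might look like with RabbitMQ's Python client, pika; the broker location, queue name and payload shape are assumptions for the example.

```python
# Minimal sketch of the 'enqueue the request and keep serving' pattern with
# RabbitMQ via the pika client. Assumptions: a broker on localhost, and a
# made-up queue name and payload shape.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="page_requests", durable=True)

def enqueue_request(user_id, page):
    """Hand the request to the background workers and return immediately."""
    channel.basic_publish(
        exchange="",
        routing_key="page_requests",
        body=json.dumps({"user": user_id, "page": page}),
        properties=pika.BasicProperties(delivery_mode=2),  # survive restarts
    )

enqueue_request(42, "shopping_cart")
connection.close()
```

The web server never blocks on the back-end work; the 'decoupling' described above is exactly this hand-off.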
There are many more use cases...
Does it make any sense when you're writing a Web app where you control the server side and have a homogeneous environment (say, tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, Web browsers?
Definitely. If you have an unknown, or unbounded, number of users, server side instances, and application latencies, then it makes sense to use messaging, even if just as a scalable substrate for non-blocking RPC.
Does it make sense for desktop apps that need to communicate with a server?
In lots of cases. One very common case is when the server pushes events to the desktop app, e.g. game events, tweets, price feeds in finance, system alerts...
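A desktop app on the receiving end of such pushes is just a consumer; sketched again with pika under the same assumptions (local broker, made-up queue name):

```python
# Minimal sketch of a desktop app consuming server-pushed events from a
# RabbitMQ queue via pika (broker location and queue name are assumptions).
import pika

def on_event(channel, method, properties, body):
    print("server pushed:", body.decode())
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="desktop_events")
channel.basic_consume(queue="desktop_events", on_message_callback=on_event)
channel.start_consuming()  # blocks; a real app would run this in a thread
```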
Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless different systems that need to communicate in one way or another?
Definitely not only for those 'legacy integration' cases, but those are important too. At RabbitMQ, the biggest customers we have in terms of pure scale or message volume are cloud providers and big web application providers.
I will give just one answer, from prior experience: take a look at the middleware that is employed by big companies. Middleware has one purpose: to glue disconnected systems (written in disparate languages) together so that they can interact with one another and streamline the business process. Entera, which I have experience with, creates a middle layer in which a Unix box running processes written in C interacts with the mainframe system (DB2, COBOL) via a front end written in PowerBuilder (I am not naming the company!).
From the description I have given, Entera is middleware that provides a number of things: smooth integration of the flow of data regardless of endian format, and the ability for different languages to talk to the middleware broker (a broker is a CORBA- or DCE-like process, conforming to The Open Group's standards, that listens on a particular port), which is specified by an IDL that makes a process appear to be local. If you understand the terminology used in Remoting under Microsoft's .NET Framework, you are not far off the mark! The middleware generates stubs that are linked at compile time, and it manages the creation of the process, hosting it off a port and multi-threading it at run time. Modern front ends (such as .NET, Java, PowerBuilder, even the unspeakable VB6... ok, VB.NET for the purists out there) can interact with it directly by opening a connection to the specified port on a particular IP address and using the generated stubs.
Obviously, from what was described, you can see how legacy systems can have new life breathed into them, and thus how the process can scale. The major downside of this is the cost factor, which can run into thousands of dollars. Big companies that use mainframes as their back-end processing systems for billing/invoicing, and that generate huge revenue, can obviously afford such an expensive product; to them it would seem like throwing pennies into a pool of water, because middleware that prolongs the business process and breathes new life into it can extend the business a good number of years into the future without the worry of a 'legacy' tag attached to it.
Incidentally, I carried this out as part of my thesis for my BSc in Information Systems, which covered this commercial front end. There was an open source version of the middleware available on SourceForge called FreeDCE, but development efforts have declined or stopped.
Edit:
#cocotwo: That is exactly what middleware does; as you said, it is a plumbing tool. Message-oriented middleware is not really heard of much, AFAIK, because I would imagine the processes (functions) would need to be called as if they were locally visible within the application domain of the front end to make them easy to interact with.
Using messages may have advantages over RPC calls in that messages are queued in a safe holding area in the event of a network disconnection; there may be some data caching going on within that aspect to allow the front end to continue regardless. It would be useful in instances like 'updating the status of a particular billing/invoice number': a one-way write of data to the back end via the middleware.
Ok, big companies would have advanced systems infrastructure, with technicians around the clock to ensure smooth delivery of data flow, so that would have to be factored in. The company that I worked with had an IBM Global Support contract to fulfill in order to ensure maximum uptime of 99% with six nines after the decimal point, with hot-swapping, balanced clusters and mirroring systems in place...
Whereas with RPC, if a disconnection occurs, the front end would have to be restarted or would have to handle the disconnection event. It really depends on whether the message-queueing middleware handles each message in real time and passes results back to the front end immediately...
This is where each (message-queueing and RPC-related middleware) has its strengths and weaknesses, and there are also cost mitigation factors such as support, maximum uptime, development effort and training. That's a biggie here, as middleware is really proprietary (despite following The Open Group's layouts/standards) and complex to set up and glue together via scripts.
Good answers and discussion here. Our consulting team has two preferred "messaging" solutions: RabbitMQ, and NXTera, a high-speed RPC middleware that is the contemporary version of the Entera mentioned above. My partners and I have developed several solutions using RabbitMQ; it is the best tool available in that space right now. Additionally, I happen to work for the company that makes NXTera/Entera.
From experience I can clearly say that both of these products meet the need for reliability and low maintenance as discussed above. There are situations where a messaging service like RabbitMQ is the right choice: where publish/subscribe, certified delivery, queueing or store-and-forward are required.
In other cases, RPCs (remote procedure calls) are the best and fastest solutions for transactional and distributed processing for enterprise or cloud-based applications. When it is right to use an RPC, but SOAP/.NET (yes, these are RPC implementations) are too slow, expensive or complex, a lightweight high-speed RPC middleware like NXTera/Entera is the right choice for us.
There is some use case overlap between RPC middleware and message oriented middleware, and where there are you can use either successfully. But both are strong and dependable choices.
The large companies I work with use both RPC and MoM side by side. As far as Internet companies go, Google (Protocol Buffers) and Facebook (Thrift) show that RPCs have a role to play in modern web and cloud-based development.

Why is Google's "face recognition" feature available only in Picasa Web and not Picasa for the PC?

A friend asked me this today.
Picasa Web has a cool (and frightening :-) feature where it will recognize all the faces in your photo album.
But the PC (desktop) version doesn't have this.
Several reasons I can think of:
They just haven't gotten around to writing the PC version of the code.
They are licensing that feature and it costs a lot more (or isn't available) on the PC.
It takes a lot of processing power (this seems odd b/c MY PC cycles are free to Google, but they have to pay for cycles consumed on their servers).
Any other thoughts?
I'm certain it'll make it out in coming releases, but Google is a funny company when it comes to its own competing/complementing services. One thing is for sure: only somebody on the Picasa team could give an accurate answer.
But we could hypothesise several things...
They don't want their code reverse-engineered.
(As you say), they aren't licensed to redistribute it
It's blocked in the dev version by other new features that aren't complete yet
They don't want to release it because they want people to use PicasaWeb as a social photo network.
I don't think processing power is an issue. If they're running it in bulk on their own servers for free, a modern desktop could probably run it without issue.
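As a rough data point, plain face detection (finding faces, as opposed to recognizing whose they are) is cheap enough to run on any desktop; here is a sketch using OpenCV's bundled Haar cascade, assuming opencv-python is installed and photo.jpg is a stand-in file. Recognition and matching across a whole album is the genuinely expensive part.

```python
# Minimal sketch: face *detection* on the desktop with OpenCV's bundled Haar
# cascade (assumes opencv-python is installed; photo.jpg is a stand-in file).
# Detection is cheap -- recognizing/matching faces across an album is the
# genuinely expensive part.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector wants grayscale
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"found {len(faces)} face(s):", [tuple(f) for f in faces])
```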
From my limited contact with face recognition software, it's probably the redistribution issue. When I dealt with it, face recognition was its own little world with extremely high per-CPU licensing costs and tremendous paranoia about code getting loose.
I'm not so sure it's not a processing issue. It took Google's massive servers 30 minutes to run through all my photos. I can only imagine that same task would have taken days on my local machine.
Actually, it's in, just with limited functionality: when you do a search, there's an icon to find only photos with faces. The experimental passport feature also works that way.
So the answer is:
Not the same base (APIs) available or used, and not the same language, so it's not directly portable.
Not the same software and there are no stated goals to make both apps feature equivalent.
Programmers are limited and their time is too. They make choices as to what to implement now.
No idea if this is the case for Picasa, but there's another case where licensing could be the issue: if the server-side code uses code under a restrictive license (the GPL, for example) that restricts how you can distribute modules using it. Running that module on a web server, where the user only gets the output, is legal under such licenses. If that code were distributed, there would be many legal requirements attached, which would likely be very undesirable for commercial software companies, including Google. This is one very good reason to have some capabilities only accessible through web services.
This was also the case with Riya (who was arguably the first to market with reliable facial recognition for consumer photo collections).
The biggest reasons are likely:
Processing time (they can't control how fast your CPU is, and therefore they can't control the experience). Facial recognition is very likely to be process-intensive (this was Riya's stated reason for not doing it client-side).
The recognition process requires a LARGE volume of data for processing that is only accessible on the server? (In other words, the process needs to spin through millions of faces, not just the faces that you have on your hard drive?)