Multiple ultrasonic sensor interference

I am trying to use multiple ultrasonic sensors to estimate a profile by facing them all toward the same side and spacing them 1 or 2 cm apart. The problems are: a) I have to trigger them at the same time, and b) I cannot change the frequency of the sound. Since the profile is quite rough, sound from one transducer may also be picked up by another transducer due to scattering. How can I differentiate which echo belongs to which transducer?

Related

How to optimally store the viewing time of an episode in the database?

What I have done now is that as soon as the user starts an episode:
1. First, it checks whether a viewing record for that user exists in the table.
2. If it does, the time column is updated via Ajax every second.
3. If it does not, a record is created and the viewing time is then recorded in the table.
Note: this system is currently working well, but my question is: if, for example, a thousand people are watching episodes at the same time, won't the database face a serious problem? Each viewer sends four queries per second, so with a thousand viewers that reaches four thousand queries per second. What is the best solution? Won't this system fail?
A consideration is how accurate you want the viewing time to be.
You could for instance, write the viewing time to the session (only) and then persist it to the database every minute, or when the viewer navigates to a new page.
But every second seems an unrealistic expectation, since it means per-second requests to your server from every viewer.
So, consider what is reasonable. If it is just for your own stats then maybe 10-second increments are enough. If it is so that the user can leave and return to the exact second they left, then again, that's possible, but probably unreasonable. I don't expect this to happen with YouTube or Netflix - there is normally some amount of overlap, which actually helps me remember where I got to.
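One way to sketch this batching idea is below (a Python sketch; the `save_to_db` callback is a hypothetical stand-in for your actual UPDATE query, and the 10-second flush interval is just an example):

```python
import time

class ViewTimeBuffer:
    """Accumulate watch-time ticks in memory and persist them only
    every `flush_interval` seconds, instead of one query per second."""

    def __init__(self, save_to_db, flush_interval=10):
        self.save_to_db = save_to_db      # hypothetical persistence callback
        self.flush_interval = flush_interval
        self.seconds_watched = 0
        self.last_flush = time.monotonic()

    def tick(self, seconds=1):
        """Called once per second of viewing; only occasionally hits the DB."""
        self.seconds_watched += seconds
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        """Persist the accumulated total (also call this on page navigation)."""
        self.save_to_db(self.seconds_watched)
        self.last_flush = time.monotonic()
```

With a 60-second interval this turns 60 queries per viewer per minute into one, at the cost of losing at most a minute of progress if the session dies.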

How to handle a database which is constantly updated from a game

I am working on a game which will store data in a database for multiplayer purposes. A simplified version of the data to be stored is something along the following lines:
Player A goes to a country X on a map and by doing so that country will lose influence over time.
This would be simple enough, as the local game could update in the background and then send an update to the database whenever the player stops having an influence there.
The complication comes when:
Player B goes to country X after player A. To check the current influence of country X, one of two things needs to happen: either A) the influence field in the database updates itself over time, or B) the game constantly updates the database every few seconds as player A's game updates.
A) sounds like the better option, but I don't actually know if that's possible (is it?) and how much of a strain would that be on the database with a lot of countries?
Sorry if this sounds vague but it seems like a fairly generally applicable scenario for more than just games. If you need any clarification just ask.
Simple: you can do this from the database with a SQL job.
Create a stored procedure that calculates influence based on some fields and updates it into a column.
Call that stored procedure from a job scheduled to run every 5 minutes, or whatever interval you want.
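To illustrate what such a scheduled job would compute, here is a hedged Python sketch; the `influence` and `last_updated` fields and the linear decay-per-minute rate are assumptions for illustration, not something from the question:

```python
from datetime import datetime, timedelta

def decay_influence(countries, now, decay_per_minute=1.0):
    """One pass of the scheduled job: reduce each country's influence
    by the time elapsed since its last update. `countries` stands in
    for rows fetched from the influence table."""
    for c in countries:
        minutes = (now - c["last_updated"]).total_seconds() / 60
        c["influence"] = max(0.0, c["influence"] - decay_per_minute * minutes)
        c["last_updated"] = now          # so the next pass only decays new time
    return countries
```

Because the decay is computed from `last_updated`, it does not matter if a job run is late or skipped; the next run catches up, which is what makes the every-5-minutes schedule safe.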

Make the computer never lose at Tic-Tac-Toe

I am working on a simple Tic-Tac-Toe game in C. I have most of the code finished, but I want the AI to never lose.
I have read about the minimax algorithm, but I don't understand it. How do I use this algorithm to enable the computer to either Win or Draw but never lose?
The way to approach this sort of problem is one of exploring possible futures. Usually (for a chess or draughts AI) you'd consider futures a certain number of moves ahead, but because tic-tac-toe games are so short, you can explore to the end of the game.
Conceptual implementation
So you create a branching structure:
The AI imagines itself making every legal move
The AI then imagines the user making each legal move they can make after each of its legal moves
Then the AI imagines each of its next legal moves
etc.
Then, going from the most branched end (the furthest forward in time) the player whose turn it is (AI or user) chooses which future is best for it (win, lose or draw) at each branching point. Then it hands over to the player higher up the tree (closer to the present); each time choosing the best future for the player whose imaginary turn it is until finally you're at the first branching point where the AI can see futures which play out towards it losing, drawing and winning. It chooses a future where it wins (or if unavailable draws).
Actual implementation
Note that conceptually this is what is happening, but it's not necessary to create the whole tree and then judge it like this. You can just as easily work through the tree, getting to the furthest points in time and choosing then.
Here, this approach works nicely with a recursive function. Each level of the function polls all its branches, passing the possible future down to them, and returns -1, 0 or +1, choosing the best score for the current player at each point. The top level chooses the move without actually knowing how each future pans out, just how well each pans out.
Pseudo code
I assume in this pseudo code that +1 means the AI wins, 0 a draw, and -1 the user wins
determineNextMove(currentStateOfBoard)
currentBestMove= null
currentBestScore= - veryLargeNumber
for each legalMove
score=getFutureScoreOfMove(stateOfBoardAfterLegalMove , user’sTurn) //after the AI's imagined move it is the user's turn
if score>currentBestScore
currentBestMove=legalMove
currentBestScore=score
end
end
make currentBestMove
end
getFutureScoreOfMove(stateOfBoard, playersTurn)
if no LegalMoves
return 1 if AI wins, 0 if draw, -1 if user wins
end
if playersTurn=AI’sTurn
currentBestScore= - veryLargeNumber //this is the worst case for AI
else
currentBestScore= + veryLargeNumber //this is the worst case for Player
end
for each legalMove
score=getFutureScoreOfMove(stateOfBoardAfterLegalMove , INVERT playersTurn)
if playersTurn ==AI’sTurn AND score>currentBestScore //AI wants positive score
currentBestScore=score
end
if playersTurn ==Users’sTurn AND score<currentBestScore //user wants negative score
currentBestScore=score
end
end
return currentBestScore
end
This pseudo code doesn't care what the starting board is (you call this function every AI move with the current board) and doesn't return what path the future will take (we can't know if the user will play optimally so this information is useless), but it will always choose the move that goes towards the optimum future for the AI.
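The pseudo code above translates almost line for line into a runnable sketch. Python is used here for brevity (the question is about C, but the structure carries over directly), and the board encoding ('X' for the AI, 'O' for the user, ' ' for empty) is an assumption:

```python
# All eight winning lines on a 3x3 board indexed 0..8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def future_score(board, player):
    """getFutureScoreOfMove: best reachable outcome (+1/0/-1) assuming
    the AI plays 'X' and it is `player`'s turn to move."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0                       # board full: draw
    scores = []
    for i in moves:
        board[i] = player              # imagine the move...
        scores.append(future_score(board, 'O' if player == 'X' else 'X'))
        board[i] = ' '                 # ...then undo it
    # AI maximises its score, the user minimises it.
    return max(scores) if player == 'X' else min(scores)

def best_move(board):
    """determineNextMove: the AI ('X') picks the move whose future is best."""
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    def score(i):
        board[i] = 'X'
        s = future_score(board, 'O')   # after the AI's move, the user moves
        board[i] = ' '
        return s
    return max(moves, key=score)
```

Called with the current board each AI turn, this never loses; the undo-after-recursion trick avoids copying the board at every node.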
Considerations for larger problems
In this case, where you explore to the end of the game, it is obvious which is the best possible future (win, lose or draw), but if you're only going (for example) five moves into the future, you'd have to find some way of judging that; in chess or draughts, piece score is the easiest way to do this, with piece position as a useful enhancement.
I did something similar about 5 years ago and did some research. In tic-tac-toe it doesn't take long; you just need to prepare patterns for the first two or three moves.
You need to consider both ways the game can start:
Computer starts first.
Player starts first.
There are 9 different starting positions, but actually only 3 of them are distinct (the others are rotations).
After that you will see what should be done after certain specific moves. I don't think you need any algorithms in this case, because a tic-tac-toe ending is determined by the first moves; you just need a few if-else or switch statements and a random generator.
Tic-tac-toe belongs to a group of games that cannot be lost if you know how to play, so for such games you do not need trees or modified search algorithms. To write such an algorithm you need just a few functions:
CanIWin() to check whether the computer has 2 in a row and can win.
ShouldIBlock() to check whether the player has 2 in a row that needs to be blocked.
Those two functions must be called in that order; if either returns true, you either take the win or stop the player from winning.
After that you need to do the other calculations for the move.
One special situation is when the computer starts the game. You need to choose the cell that belongs to the biggest number of different directions (there are 8 of them: 3 horizontal, 3 vertical and 2 diagonal). With such an algorithm the computer will always choose the center, because it has 4 directions; you should add a small possibility of choosing the second-best option to make the game a bit more attractive.
So when you reach a situation where some of the board is already taken and the computer has to move, you need to rate every free cell. (If the first or second function returned true, you must have acted before reaching this point!) To rate a cell, count how many open directions are left through it; you also need to block at least one opponent direction.
After that you will have a few possible cells for your mark. You then need to check the necessary move sequences, because one of the options may lead you into losing. After that you will have a set of moves and can choose one at random, or take the one with the biggest score.
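A minimal sketch of those two checks (in Python; the board encoding is an assumption): since CanIWin() and ShouldIBlock() differ only in whose mark they look for, one shared helper can serve both.

```python
# All eight winning lines on a 3x3 board indexed 0..8.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winning_cell(board, mark):
    """Return a cell index that completes a line for `mark`, or None.
    winning_cell(board, computer) plays the role of CanIWin();
    winning_cell(board, player) plays the role of ShouldIBlock()."""
    for line in LINES:
        values = [board[i] for i in line]
        if values.count(mark) == 2 and values.count(' ') == 1:
            return line[values.index(' ')]
    return None
```

Call it first with the computer's mark, then with the player's, and play the first cell it returns; only if both return None do you fall through to the direction-counting heuristic.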
I have to say something similar to what was said at the beginning of the post. Bigger games do not have a perfect strategy; chess, say, is largely based on patterns, but also on forward-thinking strategy (a patricia trie can be used for that). So to sum up: you do not need difficult algorithms, just a few functions to count how much you gain and your opponent loses with each move.
Make a subsidiary program to predict the positions from which the user could win. Then you can tell your AI to make the moves that the user would need in order to win.

Is it possible to do A/B testing by page rather than by individual?

Let's say I have a simple ecommerce site that sells 100 different t-shirt designs. I want to do some A/B testing to optimise my sales. Let's say I want to test two different "buy" buttons. Normally, I would use A/B testing to randomly assign each visitor to see button A or button B (and try to ensure that the user experience is consistent by storing that assignment in session, cookies etc).
Would it be possible to take a different approach and instead, randomly assign each of my 100 designs to use button A or B, and measure the conversion rate as (number of sales of design n) / (pageviews of design n)
This approach would seem to have some advantages; I would not have to worry about keeping the user experience consistent - a given page (e.g. www.example.com/viewdesign?id=6) would always return the same html. If I were to test different prices, it would be far less distressing to the user to see different prices for different designs than different prices for the same design on different computers. I also wonder whether it might be better for SEO - my suspicion is that Google would "prefer" that it always sees the same html when crawling a page.
Obviously this approach would only be suitable for a limited number of sites; I was just wondering if anyone has tried it?
Your intuition is correct. In theory, randomizing by page will work fine. Both treatment groups will have balanced characteristics in expectation.
However, the sample size is quite small so you need to be careful. Simple randomization may create imbalance by chance. The standard solution is to block on pre-treatment characteristics of the shirts. The most important characteristic is your pre-treatment outcome, which I assume is the conversion rate.
There are many ways to create "balanced" randomized designs. For instance, you could create pairs using optimal matching, and randomize within pairs. A rougher match could be found by ranking pages by their conversion rate in the previous week/month and then creating pairs of neighbors. Or you could combine blocked randomization with Aaron's suggestion: randomize within pairs and then flip the treatment each week.
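A rough sketch of that rank-pair-and-randomize idea (Python; the conversion-rate input is assumed to come from your previous week/month of data):

```python
import random

def paired_assignment(pages, conv_rate, seed=None):
    """Rank pages by pre-treatment conversion rate, pair neighbours,
    and flip a coin within each pair, so both arms get a balanced mix
    of strong and weak sellers. `conv_rate` maps page id -> rate."""
    rng = random.Random(seed)
    ranked = sorted(pages, key=conv_rate, reverse=True)
    assignment = {}
    for i in range(0, len(ranked) - 1, 2):
        a, b = ranked[i], ranked[i + 1]    # neighbours in the ranking
        if rng.random() < 0.5:
            a, b = b, a                    # coin flip within the pair
        assignment[a], assignment[b] = "A", "B"
    if len(ranked) % 2:                    # odd page out gets a plain flip
        assignment[ranked[-1]] = rng.choice(["A", "B"])
    return assignment
```

Because each pair contributes one page to each arm, a handful of unusually popular designs cannot all land in the same treatment group by chance.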
A second concern, somewhat unrelated, is interaction between treatments. This may be more problematic. It's possible that if a user sees one button on one page and then a different button on a different page, that new button will have a particularly large effect. That is, can you really view treatments as independent? Does the button on one page affect the likelihood of conversion on another? Unfortunately, it probably does, particularly because if you buy a t-shirt on one page, you're probably very unlikely to buy a t-shirt on the other page. I'd worry about this more than the randomization. The standard approach -- randomizing by unique user -- better mimics your final design.
You could always run an experiment to see if you get the same results using these two methods, and then proceed with the simpler one if you do.
You can't.
Let's say 50 t-shirts have button A and the remaining 50 have button B. After your test, you realize the t-shirts with button A have a better conversion rate.
Now - was the conversion better because of button A, or was it better because the t-shirt designs were really cool and people liked them?
You can't answer that question objectively, so you can't do A/B testing in this manner.
The trouble with your approach is that you're testing two things at the same time.
Say, design x is using button a. Design y is using button b. Design y gets more sales, and more conversions.
Is that because button b gives a better conversion rate than button a, or is that because design y gives a better conversion rate than design x?
If your volume of designs is very high, your volume of users is very low, and your conversions are distributed evenly amongst your designs, I could see your approach being better than the normal fashion - because the risk that the "good" designs clump together and skew your result would be smaller than the risk that the "good" users do. However, in that case you won't have a particularly large sample size of conversions to draw conclusions from - you need a sufficiently high volume of users for AB testing to be worthwhile in the first place.
Instead of changing the sale button for some pages, run all pages with button A for a week and then change to button B for another week. That should give you enough data to see whether the number of sales change significantly between the two buttons.
A week should be short enough that seasonal/weather effect shouldn't apply.

How should I start designing an AI algorithm for an artillery warfare game?

Here's the background... in my free time I'm designing an artillery warfare game called Staker (inspired by the old BASIC games Tank Wars and Scorched Earth) and I'm programming it in MATLAB. Your first thought might be "Why MATLAB? There are plenty of other languages/software packages that are better for game design." And you would be right. However, I'm a dork and I'm interested in learning the nuts and bolts of how you would design a game from the ground up, so I don't necessarily want to use anything with prefab modules. Also, I've used MATLAB for years and I like the challenge of doing things with it that others haven't really tried to do.
Now to the problem at hand: I want to incorporate AI so that the player can go up against the computer. I've only just started thinking about how to design the algorithm to choose an azimuth angle, elevation angle, and projectile velocity to hit a target, and then adjust them each turn. I feel like maybe I've been overthinking the problem and trying to make the AI too complex at the outset, so I thought I'd pause and ask the community here for ideas about how they would design an algorithm.
Some specific questions:
Are there specific references for AI design that you would suggest I check out?
Would you design the AI players to vary in difficulty in a continuous manner (a difficulty of 0 (easy) to 1 (hard), all still using the same general algorithm) or would you design specific algorithms for a discrete number of AI players (like an easy enemy that fires in random directions or a hard enemy that is able to account for the effects of wind)?
What sorts of mathematical algorithms (pseudocode description) would you start with?
Some additional info: the model I use to simulate projectile motion incorporates fluid drag and the effect of wind. The "fluid" can be air or water. In air, the air density (and thus effect of drag) varies with height above the ground based on some simple atmospheric models. In water, the drag is so great that the projectile usually requires additional thrust. In other words, the projectile can be affected by forces other than just gravity.
In a real artillery situation all these factors would be handled either with formulas or simply brute-force simulation: Fire an electronic shell, apply all relevant forces and see where it lands. Adjust and try again until the electronic shell hits the target. Now you have your numbers to send to the gun.
Given the complexity of the situation I doubt there is any answer better than the brute-force one. While you could precalculate a table of expected drag effects vs velocity I can't see it being worthwhile.
Of course a game where the AI dropped the first shell on your head every time wouldn't be interesting. Once you know the correct values you'll have to make the AI a lousy shot. Apply a random factor to the shot and then walk it towards the target--move it say 30+random(140)% towards the true target each time it shoots.
Edit:
I do agree with BCS's notion of improving it as time goes on. I said that but then changed my mind on how to write a bunch of it and then ended up forgetting to put it back in. The tougher it's supposed to be the smaller the random component should be.
Loren's brute force solution is appealing because it would allow easy "intelligence adjustments" by adding more iterations. The adjustment factors for the iteration could also be part of the intelligence, as some values will make it converge faster.
Also, for the basic system (no drag, wind, etc.) there is a closed-form solution that can be derived from a basic physics text. I would make the first guess be that and then do one or more iterations per turn. You might want to try to come up with an empirical correction to improve the first shot (something that makes the average of the first-shot distribution closer to correct).
Thanks Loren and BCS, I think you've hit upon an idea I was considering (which prompted question #2 above). The pseudocode for an AIs turn would look something like this:
nSims;       % Number of projectile simulations done per turn by the AI (i.e. difficulty)
prevParams;  % The previous shot parameters
prevResults; % Some measure of the accuracy of the last shot

newParams = get_new_guess(prevParams, prevResults);
for i = 1:nSims
    newResults = simulate_projectile_flight(newParams);
    newParams = get_new_guess(newParams, newResults);
end
fire_projectile(newParams);
In this case, the variable nSims is essentially a measure of "intelligence" for the AI. A "dumb" AI would have nSims=0, and would simply make a new guess each turn (based on results of the previous turn). A "smart" AI would refine its guess nSims times per turn by simulating the projectile flight.
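A toy Python version of that loop, with a no-drag simulator and a deliberately simple get_new_guess standing in for the real ones (both are assumptions for illustration only):

```python
import math

def miss_distance(params, target):
    """Toy stand-in for simulate_projectile_flight: no-drag landing
    distance minus the target (negative = short, positive = long)."""
    angle, speed = params
    landing = speed**2 * math.sin(2 * angle) / 9.81
    return landing - target

def refine(params, miss, target):
    """Toy get_new_guess: scale the speed toward the required range,
    keeping the elevation angle fixed."""
    angle, speed = params
    landing = target + miss
    if landing <= 0:
        return angle, speed * 1.5          # fell at our feet: just add power
    return angle, speed * math.sqrt(target / landing)

def ai_turn(params, target, n_sims):
    """n_sims = 0 is a 'dumb' AI (one blind adjustment per turn);
    larger values refine the guess by re-simulating before firing."""
    params = refine(params, miss_distance(params, target), target)
    for _ in range(n_sims):
        params = refine(params, miss_distance(params, target), target)
    return params
```

With the real drag-and-wind simulator the refinement would take several iterations to converge, which is exactly what makes nSims behave as a difficulty knob.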
Two more questions spring from this:
1) What goes into the function get_new_guess? How should I adjust the three shot parameters to minimize the distance to the target? For example, if a shot falls short of the target, you can try to get it closer by adjusting the elevation angle only, adjusting the projectile velocity only, or adjusting both of them together.
2) Should get_new_guess be the same for all AIs, with the nSims value being the only determiner of "intelligence"? Or should get_new_guess be dependent on another "intelligence" parameter (like guessAccuracy)?
A difference between artillery games and real artillery situations is that all sides have 100% information, and that there are typically more than 2 opponents.
As a result, your evaluation function should consider which opponent it would be more urgent to try and eliminate. For example, if I have an easy kill at 90%, but a 50% chance on someone who's trying to kill me and just missed two shots near me, it's more important to deal with that chance.
I think you would need some way of evaluating the risk everyone poses to you in terms of ammunition, location, activity, past history, etc.
I'm now addressing the response you posted:
While you have the general idea I don't believe your approach will be workable--it's going to converge way too fast even for a low value of nSims. I doubt you want more than one iteration of get_new_guess between shells and it very well might need some randomizing beyond that.
Even if you can use multiple iterations they wouldn't be good at making a continuously increasing difficulty as they will be big steps. It seems to me that difficulty must be handled by randomness.
First, get_initial_guess:
To start out I would have a table that divides the world up into zones--the higher the difficulty the more zones. The borders between these zones would have precalculated power for 45, 60 & 75 degrees. Do a test plot, if a shell smacks terrain try again at a higher angle--if 75 hits terrain use it anyway.
The initial shell should be fired at a random power between the values given for the low and high bounds.
Now, for get_new_guess:
Did the shell hit terrain? Increase the angle. I think there will be a constant ratio of how much power needs to be increased to maintain the same distance--you'll need to run tests on this.
Assuming it didn't smack a mountain, note if it's short or long. This gives you a bound. The new guess is somewhere between the two bounds (if you're missing a bound, use the value from the table in get_initial_guess in its place.)
Note what percentage of the way between the low and high bound impact points the target is and choose a power that far between the low and high bound power.
This is probably far too accurate and will likely require some randomizing. I've changed my mind about adding a simple random %. Rather, multiple random numbers should be used to get a bell curve.
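A sketch of that interpolate-then-perturb step (Python; the spread constant and the four-uniform bell-curve approximation are assumed tuning choices, not part of the answer above):

```python
import random

def next_power(low_power, low_impact, high_power, high_impact,
               target, difficulty, rng=random):
    """Choose a power as far between the bracketing powers as the
    target is between the bracketing impact points, then add
    bell-curve noise: a sum of several uniforms approximates a
    normal, and easier AIs (lower difficulty) get a wider spread."""
    fraction = (target - low_impact) / (high_impact - low_impact)
    power = low_power + fraction * (high_power - low_power)
    spread = (1.0 - difficulty) * 0.2 * power   # assumed tuning constant
    noise = sum(rng.uniform(-1, 1) for _ in range(4)) / 4
    return power + spread * noise
```

Summing uniforms rather than adding one flat random percentage makes small errors common and large ones rare, so the AI looks like a shaky gunner rather than a random one.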
Another thought: Are we dealing with a system where only one shell is active at once? Long ago I implemented an artillery game where you had 5 barrels, each with a fixed reload time that was above the maximum possible flight time.
With that I found myself using a strategy of firing shells spread across the range between my current low bound and high bound. It's possible that being a mere human I wasn't using an optimal strategy, though--this was realtime, getting a round off as soon as the barrel was ready was more important than ensuring it was aimed as well as possible as it would converge quite fast, anyway. I would generally put a shell on target on the second salvo and the third would generally all be hits. (A kill required killing ALL pixels in the target.)
In an AI situation I would model both this and a strategy of holding back some of the barrels to fire more accurate rounds later. I would still fire a spread across the target range, the only question is whether I would use all barrels or not.
I have personally created such a system - for the web-game Zwok, using brute force. I fired lots of shots in random directions and recorded the best result. I wouldn't recommend doing it any other way as the difference between timesteps etc will give you unexpected results.