Creating a turn queue for an RPG - actionscript-3

I'm trying to create a turn-based RPG where the player characters and the enemy characters each possess a speed stat. Using this stat, I would like to create an on-screen display of the next, say, 6 people in the queue to take their turn.
My issue is that I can't figure out how to turn the speed stat of each character into a usable number to determine turn order.
For example:
char1.speed = 10;
char2.speed = 20;
char3.speed = 80;
In a situation like this, I would like to be able to create a turn queue such that char3 gets two or three turns for every single turn the other characters take, since he is significantly faster than the others. So the on-screen display would show portraits of char3, char3, char2, char3, char1, char3, for example. (I can make the queue display and make it re-sort itself; my struggle is producing a changeable turn order based on a character's speed stat.)
Another issue that I'm struggling with is that I want to be able to modify a character's speed with spells, potions, etc. that may end up changing the turn order mid-battle. I anticipate having an updateTurns() function which will re-sort my queue when this happens... is the best way to go about this to give each character two speed stats, baseSpeed and adjSpeed, for example? That way baseSpeed remains the same no matter what happens through spells and items, while adjSpeed represents the character's speed at that particular moment in battle.
Thanks for the help, and hopefully I've made sense. This is my first time posting here, so if I need any more clarification or whatnot, just let me know.

This should be relatively straightforward. First you need your divisor, i.e., what amount of speed constitutes a single turn. I assume 10? To work out how many turns each character gets, set up a constant with the single-turn speed in your character base class:
public static const TURN:uint = 10;
Then you can do something like this to get each player's turns:
char2.speed / character.TURN; // how many turns each player gets
Then you can have a main loop over an array of your characters, and a sub loop for each character that removes a turn and adds the char to the queue on each pass. Once turns reaches 0, the main loop moves on to the next character. Once you have a queue, you could shuffle it afterwards to change the order up a bit. Break it into two tasks.
Once you have turns for each character, you could deduct some of them, so also store a speedPenalty in each char which is normally 0 but, if hit by a spell, is changed to x. Then your main formula is actually:
(char2.speed / character.TURN) - speedPenalty
If you do this, you'll have to make sure each char can never go below 1 turn. Or, as you say, have a base speed, and a current speed, and then deduct from current speed and use that to calculate turns, and reset it to base speed once the spell wears off.
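A minimal sketch of the whole approach, assuming the character base class is called Character and carries the speedPenalty field described above (names and structure are illustrative, not the only way to do it):

// Assumed minimal character base class, for illustration only.
class Character
{
    public static const TURN:uint = 10;

    public var name:String;
    public var baseSpeed:int;
    public var speedPenalty:int = 0;   // set by spells/items, normally 0

    public function Character(name:String, baseSpeed:int)
    {
        this.name = name;
        this.baseSpeed = baseSpeed;
    }

    // Turns this character gets in one round, clamped so it never drops below 1.
    public function get turns():int
    {
        return Math.max(1, int(baseSpeed / TURN) - speedPenalty);
    }
}

// Main loop over the characters, sub loop adds one queue entry per turn.
// Call this again from updateTurns() whenever a speedPenalty changes.
function buildTurnQueue(chars:Array):Array
{
    var queue:Array = [];
    for each (var c:Character in chars)
    {
        for (var i:int = 0; i < c.turns; i++)
        {
            queue.push(c);
        }
    }
    // Optionally shuffle queue here so the same character doesn't always act back-to-back.
    return queue;
}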

Related

AS3 - Massive Numbers/Integers, Beyond MAX_VALUE

Can anyone help me write a class, e.g. BigNumber.as (or BigInt.as) which will:
Allow for really really big numbers/integers.
Include a method to express a number in format "1.54 Million", "1.98 Vigintillion" and so on...
Allow the maximum number to stop only at the last number word (e.g. Million, Vigintillion, etc) in the defined list. (e.g. list built from here: https://en.wikipedia.org/wiki/Names_of_large_numbers under Standard dictionary numbers [Short scale])
I had an idea to have a class which contains 2 Number values ("value" and "timesMaxedOut"). When "value" >= Number.MAX_VALUE, it would then increment "timesMaxedOut" by 1 and reset "value" back to the difference that the value went over by.
The problem? It seems if you hit or surpass "MAX_VALUE" then the Number will reset to 0. I'm also sure it would then be difficult to properly multiply or divide numbers with this approach, as it would need to take into account "timesMaxedOut" just for the calculations to work correctly.
My goal is to write a game which would allow players to reach really big numbers, and play indefinitely essentially, but AS3 lacks very large number support it seems.
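For what it's worth, a rough sketch of the two-Number idea described above (BigNumber, value and timesMaxedOut are the names proposed in the question; the rollover is applied before value can actually exceed MAX_VALUE, and the floating-point precision loss near that limit is exactly the kind of problem the question describes):

// Sketch of the "value" + "timesMaxedOut" idea; addition only.
// Multiplication/division would still need timesMaxedOut-aware bookkeeping.
class BigNumber
{
    public var value:Number = 0;
    public var timesMaxedOut:Number = 0;

    public function add(amount:Number):void
    {
        if (value >= Number.MAX_VALUE - amount)
        {
            // Roll over: keep the overshoot, count one "maxed out" unit.
            value = amount - (Number.MAX_VALUE - value);
            timesMaxedOut++;
        }
        else
        {
            value += amount;
        }
    }
}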

Picking JSON objects out of an array based on their value

Perhaps I'm thinking about this wrong, but here is the problem:
I have an NSMutableArray full of JSON objects. Each object looks like this; here are 2 of them for example:
{
    player = "Lorenz";
    speed = "12.12";
},
{
    player = "Firmino";
    speed = "15.35";
}
Okay, so this is fine; this is dynamic info I get from a webserver feed. Now, for what I want, let's pretend there are 22 such entries and the speeds vary.
I want to have a timer going that starts at 1.0 seconds and goes to 60.0 seconds, and a few times a second I want it to grab all the players whose speed has just been passed. So for instance if the timer goes off at 12.0, and then goes off again at 12.5, I want it to grab all the player names whose speed is between 12.0 and 12.5, you see?
The obvious easy way would be to iterate over the whole array every time the timer goes off, but I would like the timer to go off a LOT, like 10 times a second or more, so that would be a fairly wasteful algorithm, I think. Any better ideas? I could attempt to alter the way the data comes from the webserver, but I don't feel that should be necessary.
NOTE: edited to reflect a corrected understanding that the number from 1 to 60 is incremented continuously across that range rather than being a random number in that interval.
Before you enter the timer loop, you should do some common preprocessing:
Convert the speeds from strings to numeric values upfront for fast comparison without having to parse each time. This is O(1) for each item and O(n) to process all the items.
Put the data in an ordered container such as a sorted list or sorted binary tree. This will allow you to easily find elements in the target range. This is O(n log n) to sort all the items.
On the first iteration:
Use binary search to find the start index. This is O(log n).
Use binary search to find the end index, using the start index to bound the search.
On subsequent iterations:
If each iteration increases by a predictable amount and the step between elements in the list is likewise a predictable amount, then just maintain a pointer and increment as per Pete's comment. This would make each iteration cost O(1) (just stepping ahead by a fixed amount).
If the steps between iterations and/or the entries in the list are not predictable, then do a binary search as in the initial case. If the values are monotonically increasing (as I now understand the problem to be stating), even if they are unpredictable, you can still maintain an index as in the other case; instead of resuming iteration directly from it, use the remembered index as a lower bound on the binary search so that you narrow the region being searched. This makes each iteration cost O(log m), where m is the number of elements remaining to be considered.
Overall, this produces an algorithm that is no worse than O((N + I) log N), where I is the number of iterations, compared to the naive algorithm's O(I * N), and it shifts most of the computation outside of the timer loop rather than inside it.
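A sketch of that preprocessing plus the bounded binary search, written in AS3-style code to match the rest of this page (rawPlayers, lowerBound and playersInRange are illustrative names; the same structure applies to an NSMutableArray):

// One-time preprocessing: parse the speed strings and sort ascending.
var players:Array = rawPlayers.map(function(p:Object, i:int, a:Array):Object
{
    return { player: p.player, speed: Number(p.speed) };
});
players.sortOn("speed", Array.NUMERIC);

// Remembered lower bound so each tick only searches what hasn't been passed yet.
var lastIndex:int = 0;

// Binary search for the first element with speed >= target, starting at "from".
function lowerBound(arr:Array, target:Number, from:int):int
{
    var lo:int = from;
    var hi:int = arr.length;
    while (lo < hi)
    {
        var mid:int = (lo + hi) >> 1;
        if (arr[mid].speed < target) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

// Called on each timer tick with the previous and current "speed" position.
function playersInRange(lo:Number, hi:Number):Array
{
    var start:int = lowerBound(players, lo, lastIndex);
    var end:int = lowerBound(players, hi, start);
    lastIndex = end;                  // next tick resumes from here
    return players.slice(start, end); // players with speed in [lo, hi)
}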
A modern computer can do billions of operations per second. Even if your timer goes off 1000 times per second and you need to process 1000 entries, you will still be fine with a naive approach.
But to answer the question, the best approach would be to sort the data first based on speed, and then have an index of the last player whose speed was already passed. At the beginning the pointer, obviously, points at the first player. Then every time your timer goes off, you will need to process some continuous chunk of players starting at that index. Something along the lines of (in pseudocode):
global index = 0;
sort(players); // sort on speed

onTimer = function(currentSpeed) {
    while (index < players.length && players[index].speed < currentSpeed) {
        processPlayer(players[index]);
        ++index;
    }
}

AS3: repeat same code in a vector which has 2500 objects

This is my problem: I'm making a pathfinding program (jump point search algorithm), and I need to reset every node (object) in the 40 by 40 vector, so 2500 nodes. So I need to do the following:
//* some type of loop, e.g.: *//
for each (var node:Node in nodes)   // nodes:Vector.<Node> assumed
{
    node.is_been_on = false;
}
But my pathfinding may happen 5 times every second with a few objects, so that's a lot of looping.
What is the CPU-friendly way to do this, or another solution that means I don't need to do it?
One of my friends says that I should make a 40 by 40 Boolean array holding the is_been_on values, so I would refer to that and not the node; would that be better?
Thanks for reading, and I hope you can help.
The simplest idea is to reset only the nodes that you've changed: store them in a separate array and iterate over only that. JPS should modify only a small part of the given nodes.
Your friend's idea is not better, since you will still iterate over all nodes and modify each value. The values in the node are also Boolean (or at least I hope so), so you gain nothing except having a second array (vector) of values.
Either way, I don't find it that bad to modify bool values, but if you really need to optimize (which I find great), go with "reset what's changed"; I can't imagine a better approach.
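A minimal sketch of the "reset what's changed" idea (Node and is_been_on are from the question; the touched vector is an assumption):

// The search records every node it marks, then clears just those afterwards.
var touched:Vector.<Node> = new Vector.<Node>();

function markVisited(node:Node):void
{
    if (!node.is_been_on)
    {
        node.is_been_on = true;
        touched.push(node);
    }
}

function resetTouched():void
{
    for each (var node:Node in touched)
    {
        node.is_been_on = false;
    }
    touched.length = 0;   // reuse the vector for the next search
}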
But why do you recalculate the path 5 times every second? You have a graph of size 40x40; with the help of A* or another algorithm, you will be able to find the correct path. Once you have calculated a path, you don't need to recalculate it again unless you have dynamic obstacles in the game.
If you don't know how to implement a pathfinding algorithm in an AS3 project, there are several ready-made solutions.

How to detect local maxima and curve windows correctly in semi complex scenarios?

I have a series of data and need to detect peak values in the series within a certain number of readings (window size) and excluding a certain level of background "noise." I also need to capture the starting and stopping points of the appreciable curves (i.e., when a curve starts ticking up and when it stops ticking down).
The data are high precision floats.
Here's a quick sketch that visually captures the most common scenarios I'm up against.
One method I attempted was to pass a window of size X along the curve, going backwards, to detect the peaks. It started off working well, but it missed a lot of conditions I hadn't initially anticipated. Another method I started to work out was a growing window that would discover the longer-duration curves. Yet another approach was more calculus-based, watching for velocity/gradient aspects. None seemed to hit the sweet spot, probably due to my lack of experience in statistical analysis.
Perhaps I need to use some kind of statistical analysis package to cover my bases instead of writing my own algorithm? Or is there an efficient way to tackle this directly in SQL with some kind of local-max technique? I'm simply not sure how to approach this efficiently. With each method I try, it seems that I keep missing various thresholds, detecting too many peak values, or not capturing entire events (reporting a peak datapoint too early in the reading process).
Ultimately this is implemented in Ruby and so if you could advise as to the most efficient and correct way to approach this problem with Ruby that would be appreciated, however I'm open to a language agnostic algorithmic approach as well. Or is there a certain library that would address the various issues I'm up against in this scenario of detecting the maximum peaks?
My idea is simple: after you get your window of interest, you need to find all the peaks in that window. You can just compare each value with its neighbours; after this you will know where the peaks occur, and you can decide which is the best peak.
I wrote a simple example in MATLAB to show my idea!
My example works on a waveform from an audio file :-)
waveFile = 'Chick_eco.wav';
[y, fs, nbits] = wavread(waveFile);
subplot(2,2,1); plot(y); legend('Original signal');
startIndex = 15000;
WindowSize = 100;
endIndex = startIndex + WindowSize - 1;
frame = y(startIndex:endIndex);
nframe = length(frame)
% find the peaks
peaks = zeros(nframe,1);
k = 3;
while (k <= nframe - 1)
    y1 = frame(k - 1);
    y2 = frame(k);
    y3 = frame(k + 1);
    if (y2 > 0)
        if (y2 > y1 && y2 >= y3)
            peaks(k) = frame(k);
        end
    end
    k = k + 1;
end
peaks2 = peaks;
peaks2(peaks2 <= 0) = nan;
subplot(2,2,2); plot(frame); legend('Get Window Length = 100');
subplot(2,2,3); plot(peaks); legend('Where are the PEAKS');
subplot(2,2,4); plot(frame); legend('Peaks in the Window');
hold on; plot(peaks2, '*');
for j = 1 : nframe
    if (peaks(j) > 0)
        fprintf('Local=%i\n', j);
        fprintf('Value=%i\n', peaks(j));
    end
end
% Where the Local Maxima occur
[maxivalue, maxi] = max(peaks)
You can see all the peaks and where they occur:
Local=37
Value=3.266296e-001
Local=51
Value=4.333496e-002
Local=65
Value=5.049438e-001
Local=80
Value=4.286804e-001
Local=84
Value=3.110046e-001
I'll propose a couple of different ideas. One is to use discrete wavelets, the other is to use the geographer's concept of prominence.
Wavelets: Apply some sort of wavelet decomposition to your data. There are multiple choices, with Daubechies wavelets being the most widely used. You want the low frequency peaks. Zero out the high frequency wavelet elements, reconstruct your data, and look for local extrema.
Prominence: Those noisy peaks and valleys are of key interest to geographers. They want to know exactly which of a mountain's multiple little peaks is tallest, and the exact location of the lowest point in the valley. Find the local minima and maxima in your data set. You should have a sequence of min/max/min/max/.../min. (You might want to add arbitrary end points that are lower than your global minimum.) Consider a min/max/min sequence. Classify each of these triples per the difference between the max and the larger of the two minima. Make a reduced sequence that replaces the smallest of these triples with the smaller of the two minima. Iterate until you get down to a single min/max/min triple. In your example, you want the next layer down, the min/max/min/max/min sequence.
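A rough sketch of that prominence-style reduction, assuming you have already extracted the alternating min/max/.../min sequence of extremum values into a Vector (keepPeaks says how many maxima to keep; all names are illustrative):

// Repeatedly drop the least prominent max (and the larger of its two
// neighbouring minima) until only keepPeaks maxima remain.
function reduceByProminence(seq:Vector.<Number>, keepPeaks:int):Vector.<Number>
{
    while ((seq.length - 1) / 2 > keepPeaks)
    {
        var worst:int = -1;
        var worstProm:Number = Number.MAX_VALUE;
        // Maxima sit at the odd indices of a min/max/.../min sequence.
        for (var i:int = 1; i < seq.length - 1; i += 2)
        {
            var prom:Number = seq[i] - Math.max(seq[i - 1], seq[i + 1]);
            if (prom < worstProm)
            {
                worstProm = prom;
                worst = i;
            }
        }
        // Replace the min/max/min triple with the smaller of its two minima.
        var keepMin:Number = Math.min(seq[worst - 1], seq[worst + 1]);
        seq.splice(worst - 1, 3, keepMin);
    }
    return seq;
}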
Note: I'm going to describe the algorithmic steps as if each pass were distinct. Obviously, in a specific implementation, you can combine steps where it makes sense for your application. For the purposes of my explanation, it makes the text a little more clear.
I'm going to make some assumptions about your problem:
The windows of interest (the signals that you are looking for) cover a fraction of the entire data space (i.e., it's not one long signal).
The windows have significant scope (i.e., they aren't one pixel wide on your picture).
The windows have a minimum peak of interest (i.e., even if the signal exceeds the background noise, the peak must have an additional signal excess of the background).
The windows will never overlap (i.e., each can be examined as a distinct sub-problem out of context of the rest of the signal).
Given those, you can first look through your data stream for a set of windows of interest. You can do this by making a first pass through the data: moving from left to right, look for noise threshold crossing points. If the signal was below the noise floor and exceeds it on the next sample, that's a candidate starting point for a window (vice versa for the candidate end point).
Now make a pass through your candidate windows: compare the scope and contents of each window with the values defined above. To use your picture as an example, the small peaks on the left of the image barely exceed the noise floor and do so for too short a time. However, the window in the center of the screen clearly has a wide time extent and a significant max value. Keep the windows that meet your minimum criteria, discard those that are trivial.
Now to examine your remaining windows in detail (remember, they can be treated individually). The peak is easy to find: pass through the window and keep the local max. With respect to the leading and trailing edges of the signal, you can see in the picture that you have a window that's slightly larger than the actual point at which the signal exceeds the noise floor. In this case, you can use a finite difference approximation to calculate the first derivative of the signal. You know that the leading edge will be somewhat to the left of the window on the chart: look for a point at which the first derivative exceeds a positive noise floor of its own (the slope turns upwards sharply). Do the same for the trailing edge (which will always be to the right of the window).
Result: a set of time windows, the leading and trailing edges of the signals, and the peak that occurred in each window.
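A sketch of those first two passes, in AS3-style code for consistency with the page (data, noiseFloor, minWidth and minPeak are assumptions; the derivative-based edge refinement is left out for brevity):

// First pass: threshold crossings mark candidate windows.
// Second pass: keep only windows that are wide/tall enough, and record their peak.
function findWindows(data:Vector.<Number>, noiseFloor:Number,
                     minWidth:int, minPeak:Number):Array
{
    var windows:Array = [];
    var start:int = -1;
    for (var i:int = 0; i < data.length; i++)
    {
        if (start < 0 && data[i] > noiseFloor)
        {
            start = i;                  // rising crossing: candidate start
        }
        else if (start >= 0 && data[i] <= noiseFloor)
        {
            addIfSignificant(windows, data, start, i, minWidth, minPeak);
            start = -1;                 // falling crossing: candidate end
        }
    }
    if (start >= 0) addIfSignificant(windows, data, start, data.length, minWidth, minPeak);
    return windows;
}

function addIfSignificant(windows:Array, data:Vector.<Number>,
                          start:int, end:int, minWidth:int, minPeak:Number):void
{
    if (end - start < minWidth) return;   // too narrow: a noise blip
    var peakIndex:int = start;
    for (var i:int = start + 1; i < end; i++)
    {
        if (data[i] > data[peakIndex]) peakIndex = i;
    }
    if (data[peakIndex] < minPeak) return; // never rises enough above the floor
    windows.push({ start: start, end: end, peakIndex: peakIndex, peak: data[peakIndex] });
}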
It looks like the definition of a window is the range of x over which y is above the threshold. So use that to determine the size of the window. Within that, locate the largest value, thus finding the peak.
If that fails, then what additional criteria do you have for defining a region of interest? You may need to nail down your implicit assumptions to more than 'that looks like a peak to me'.

Unlimited Map Dimensions for a game in AS3

Recently I've been planning out how I would run a game with an environment/map that is capable of unlimited dimensions (unlimited being a loose term, as there are obviously limitations on how much data can be stored in memory, etc). I've achieved this using a "grid" that contains level data stored as a String that can be converted to a 2D Array representing objects and their properties.
Here's an example of two objects stored as a String:
"game.doodads.Tree#200#10#terrain$game.mobiles.Player#400#400#mobiles"
The "grid" is a 3D Array, of which the contents would represent the x/y coordinate of the grid cell. The grid cells would be, say, 600x600.
An example of this "grid" Array would be as follows:
var grid:Array = [[["leveldata: 0,0"], ["leveldata: 0,1"]],
                  [["leveldata: 1,0"], ["leveldata: 1,1"]]];
The environment will handle loading a grid square and its 8 surrounding squares based on a given point, i.e. the position of the Player. It would have a function along the lines of:
function loadCells(xp:int, yp:int):void
This would also handle the unloading of the previously loaded cells that are no longer close enough to be required. In the unload process, the data at grid[x][y] would be overwritten with the new data, which is created by looping through the objects in that cell and appending each new set of data to the grid cell data.
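A rough sketch of what loadCells() could look like under this scheme (CELL_SIZE, loadCell and unloadCell are assumed helpers that deserialize/serialize a cell's String data):

const CELL_SIZE:int = 600;
var loaded:Object = {};   // "x,y" -> true for currently loaded cells

function loadCells(xp:int, yp:int):void
{
    var cx:int = Math.floor(xp / CELL_SIZE);
    var cy:int = Math.floor(yp / CELL_SIZE);

    // The 3x3 block of cells we want around the player.
    var wanted:Object = {};
    for (var dx:int = -1; dx <= 1; dx++)
        for (var dy:int = -1; dy <= 1; dy++)
            wanted[(cx + dx) + "," + (cy + dy)] = true;

    // Unload (serialise back into grid) any cell that is no longer close enough.
    var toUnload:Array = [];
    for (var key:String in loaded)
        if (!wanted[key]) toUnload.push(key);
    for each (key in toUnload)
    {
        unloadCell(key);   // assumed helper: writes the cell's objects back to grid[x][y]
        delete loaded[key];
    }

    // Load the newly required cells from their grid data.
    for (key in wanted)
    {
        if (!loaded[key])
        {
            loadCell(key); // assumed helper: parses grid[x][y] into live objects
            loaded[key] = true;
        }
    }
}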
Everything works fine in terms of when you move in a direction, cells are unloaded/saved and new cells are loaded. The problem is this:
Say this is a large city infested by zombies. If you walk three grid squares in any direction and return, everything is as you left it. I'm struggling to find a way to at least simulate all objects still moving around and doing their thing. It looks silly when you for example throw a grenade, walk away, return and the grenade still hasn't detonated.
I've considered storing a timestamp on each object when I unload the level, and when it's initialized, there's a loop that runs its "step" function the appropriate number of times. The problem here is obviously that when you come back 5 minutes later, 20 zombies are going to try to step 248932489 times and the game will crash.
Ideas?
I don't know AS3, but let me try to give you some tips.
It seems you want to make a seamless world, since you load/unload cells as the player moves. That's a good idea. What you should do here is decide what data a cell should load/unload (or, even further, what data a cell should hold or handle).
For example, the grenade should not be unloaded by the cell as it needs to be updated even after the player leaves the cell. It's NOT a good idea for a cell to manage all game objects just because they are located in the cell. Instead, the player object could handle the grenade as an owner. Or, there could be one EntityManager which handles all game entities like grenades or zombies.
With this idea, you can update your 20 zombies even when they are in an unloaded zone (their location does not matter anymore) instead of calling update() 248932489 times at once. However, do you really need to keep updating the zombies? Perhaps unloading all of them and spawning the same number of new zombies as were last active in the cell would work. It depends on your game design but, usually, you don't have to update entities which are not visible or are far enough away from the player. Hope it helps. Good luck! :D
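A minimal sketch of that EntityManager idea (Entity, update() and isFinished() are assumptions for illustration):

// Entities that must keep simulating (a thrown grenade, pursuing zombies)
// register here and get updated every frame, whether or not their cell is loaded.
class EntityManager
{
    private var entities:Vector.<Entity> = new Vector.<Entity>();

    public function register(e:Entity):void
    {
        entities.push(e);
    }

    public function update(dt:Number):void
    {
        for (var i:int = entities.length - 1; i >= 0; i--)
        {
            entities[i].update(dt);
            if (entities[i].isFinished())
            {
                entities.splice(i, 1);   // e.g. the grenade has detonated
            }
        }
    }
}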
Interesting problem. Of course, a game cannot simulate an unlimited environment precisely. If you left some zone for a few minutes, you don't need every step of the zombies (or any actors there) to be simulated precisely. Every game has its own simplifications. Maybe approximate simulation will help you. For example, if a survivor was in a heavily infested zone for a long time, your simulator could decide that he turned into a zombie without computing every step of the process. Or, if a horde was rampant in some part of the city, that part could be damaged randomly.
I see two methods as to how you could handle this issue.
First: have the engine keep any active cells loaded, where "active" means there are object-initiated events involving cell-owned objects OR player-owned objects (if you have made such a distinction, that is). Each time such a cell is excluded from normal unloading behaviour, it is assigned a number, and if memory happens to run out, the cells with the lowest numbers are unloaded first. Clearly, this might be the hardest method code-wise, but it might also be the only one truly doing what you desire.
Second: use very tiny cells and let the engine keep a narrow path loaded. The cells might then be 100x100 instead of 600x600, and 36 of them do the work one bigger cell would do (risk: more clutter code-wise). Then each cell your player actually traverses (not all those that have been loaded to produce a natural visibility range), including every cell that has player-owned objects in it, is kept in memory for a limited amount of time (i.e. 5 minutes).
I believe you can work out how to check for these conditions upon unloading yourself, and I hope this has been of help to you.