I'm currently planning a WebGL game and am starting to make the models for it. I need to know: if my model is at 1x scale and my camera zooms/pans out so that the model appears at 0.1x scale, what kind of simplification does the WebGL engine apply to the models in view?
I.e., if I use a triangle as an example, here it is at 1x scale.
And here is the triangle at 10% of its original size while keeping all its complexity (sorry it's so faint).
While the triangle looks the same, the complexity isn't entirely necessary and could perhaps be simplified into 4 triangles for performance.
I understand that WebGL is a state machine, so perhaps nothing happens and the complexity of the model remains the same regardless of scale or state, but how do I resolve this for the best possible performance?
At 1x scale there might be only one or very few models in view, but when zoomed out to 0.1x scale there could be many hundreds. If the complexity of each model is too high, performance takes a huge hit and the game becomes unresponsive/useless.
All advice is hugely appreciated.
WebGL doesn't simplify for you. You have to do it yourself.
Generally you compute the object's distance from the camera and, depending on that distance, display a different hand-made model. Far away you display a low-detail model; close up you display a high-detail model. There are lots of ways to do this, and which way you choose is up to you. For example:
Use different models: high poly close up, low poly far away.
This is the easiest and most common method. The problem with this method is that you often see popping when the engine switches from the low-poly model to the high-poly model. The three.js sample linked in another answer uses this technique. It creates a LOD object whose job it is to decide which of N models to switch between. It's up to you to supply the models.
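As a rough, engine-agnostic sketch of this technique (the mesh names, distance thresholds, and positions below are all hypothetical):
import math

def pick_lod(camera_pos, object_pos, lods):
    """lods: list of (max_distance, mesh) pairs sorted by max_distance."""
    dist = math.dist(camera_pos, object_pos)
    for max_distance, mesh in lods:
        if dist <= max_distance:
            return mesh
    return lods[-1][1]  # beyond the last threshold, keep the lowest-detail mesh

lods = [(10.0, "model_high"), (50.0, "model_mid"), (200.0, "model_low")]
mesh_to_draw = pick_lod((0.0, 0.0, 30.0), (0.0, 0.0, 0.0), lods)  # -> "model_mid"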
Use the low-poly model far away and fade in the high-poly one over it. Once the high-poly model completely obscures the low-poly one, stop drawing the low-poly model.
Grand Theft Auto uses this technique.
Create a low-poly model from the high-poly one and morph between them using any number of techniques.
For example:
0----1----2----3         0--------------3
|    |    |    |         |              |
|    |    |    |         |              |
4----5----6----7         |              |
|    |    |    | <-----> |              |
|    |    |    |         |              |
8----9----10---11        |              |
|    |    |    |         |              |
|    |    |    |         |              |
12---13---14---15        12-------------15
Jak and Daxter and Crash Team Racing (old games) use the structure above.
Far away, only points 0, 3, 12, and 15 are used. Close up, all 16 points are used.
Points 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 13, and 14 can be placed anywhere.
Between the far and near distances all the points are morphed so the 16 point
mesh becomes the 4 point mesh. If you play Jak and Daxter #1 or Ratchet and Clank #1
you can see this morphing going on as you play. By the second version of those
games the artists got good at hiding the morphing.
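For the morphing itself, a simple per-vertex linear interpolation is enough. Here is a rough sketch of that idea (the distances, positions, and helper names are hypothetical):
def morph_factor(distance, near, far):
    # 0.0 at/inside `near` (full detail), 1.0 at/beyond `far` (fully collapsed)
    return min(max((distance - near) / (far - near), 0.0), 1.0)

def morph_vertex(high_pos, collapsed_pos, t):
    # each high-detail vertex also stores the corner/edge position it collapses to
    return tuple(h + (c - h) * t for h, c in zip(high_pos, collapsed_pos))

t = morph_factor(distance=35.0, near=10.0, far=50.0)   # 0.625
v = morph_vertex((1.0, 2.0, 0.0), (0.0, 0.0, 0.0), t)  # partway collapsed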
Draw the high-poly model up close; in the distance, render the high-poly model into a
texture and draw a billboard. Update the billboard slowly (every N frames instead of every frame).
This is a technique used for animated objects. It was used in Crash Team Racing
for the other racers when they are far away.
I'm sure there are many others. There are algorithms for tessellating in real time to auto-generate low-poly from high-poly, or for describing your models in some other form (b-splines, meta-balls, subdivision surfaces) and then generating some number of polygons. Whether they are fast enough and produce good enough results is up to you. Most AAA games, as far as I know, don't use them.
Search for 'tessellation'.
With it you can add or subtract triangles from your mesh.
Tessellation is closely related to LOD objects (level-of-detail).
The scale factor is just a coefficient that all vertices of the mesh are multiplied by; scaling simply stretches your mesh along its axes. It does not change the number of vertices or triangles.
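As a tiny illustration of that point (made-up vertex data, numpy used only for brevity):
import numpy as np

vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])  # one triangle at 1x scale
scaled = vertices * 0.1                 # the same triangle at 0.1x scale;
                                        # the vertex/triangle count is unchanged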
Take a look at this Three.js example:
http://threejs.org/examples/webgl_lod.html (WASD/mouse to move around)
I am building a website for a recommendation system using differential evolution.
The website will ask for the user's budget and some criteria and will return the optimal package.
The data fields look like this, and I have 8 dimensions (tables):
Id | Name | Price
1 | A | $100
2 | B | $300
So far I have come up with this equation:
f = 1 / (abs(budget - x1 - x2 - x3 - x4 - x5 - x6 - x7 - x8) + 1)
abs = absolute value
x1 = 1st dimension's price
x2 = 2nd dimension's price
and so on
The +1 in the denominator is there to avoid division by zero, so f = 1 would be the best cost/score.
I've tried this formula, and if it can't reach f = 1 the fitness gives poor results.
Does someone have a better solution, or any literature close to this type of problem?
Thanks in advance.
So, DE is a great evolutionary algorithm: it is fairly simple, on the one hand, while also being reasonably powerful.
However, this problem is a classic integer linear programming (ILP) problem -- the objective (keeping the total price as close to the budget as possible) is linear in the selection variables once the absolute value is linearized, and the only constraints are the one-choice-per-dimension ones. Integer linear programming is going to be much faster than any evolutionary algorithm here. I'd look at pyomo or glpk for open-source ILP modeling and solving.
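As an illustration, here is a minimal sketch of that reformulation in pyomo; the budget, prices, and names are made up, only two of the eight dimensions are shown, and the last line assumes the glpk solver is installed:
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, NonNegativeReals, SolverFactory, minimize)

budget = 1000
prices = {                      # hypothetical: dimension -> {item: price}
    1: {"A": 100, "B": 300},
    2: {"C": 250, "D": 400},
    # ... up to dimension 8
}
idx = [(d, i) for d in prices for i in prices[d]]

m = ConcreteModel()
m.pick = Var(idx, domain=Binary)       # 1 if item i of dimension d is chosen
m.gap = Var(domain=NonNegativeReals)   # |budget - total price|

# exactly one item per dimension
m.one_per_dim = Constraint(list(prices),
    rule=lambda m, d: sum(m.pick[d, i] for i in prices[d]) == 1)

total = sum(prices[d][i] * m.pick[d, i] for d, i in idx)
# linearize the absolute value: gap >= +/-(budget - total)
m.gap_lo = Constraint(expr=m.gap >= budget - total)
m.gap_hi = Constraint(expr=m.gap >= total - budget)

m.obj = Objective(expr=m.gap, sense=minimize)
SolverFactory("glpk").solve(m)         # any MILP solver works here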
I'm trying to develop a model to recognize new gestures with the Myo armband. (It's an armband that has 8 electrical sensors and can recognize 5 hand gestures.) I'd like to record the sensors' raw data for a new gesture and feed it to a model so it can recognize it.
I'm new to machine/deep learning and I'm using CNTK. I'm wondering what would be the best way to do it.
I'm struggling to understand how to create the trainer. The input data looks something like that. I'm thinking about using 20 sets of these 8 values (each between -127 and 127), so one label is the output for 20 sets of values.
I don't really know how to do that. I've seen tutorials where images are linked with their labels, but it's not the same idea. And even after the training is done, how can I keep the model from recognizing this one gesture no matter what I do, since it's the only one it's been trained on?
An easy way to get started would be to create 161 columns (8 columns for each of the 20 time steps, plus the designated label). You would arrange the columns like this:
emg1_t01, emg2_t01, emg3_t01, ..., emg8_t20, gesture_id
This gives you the right 2D format to use various algorithms in sklearn, as well as a feed-forward neural network in CNTK. You would use the first 160 columns to predict the 161st one.
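As a small sketch of that flattening step (the recordings and labels below are random stand-ins for your real data):
import numpy as np

# 100 hypothetical recordings, each 20 time steps x 8 sensors, values -127..127
recordings = [np.random.randint(-127, 128, size=(20, 8)) for _ in range(100)]
labels = np.random.randint(0, 5, size=100)          # one gesture id per recording

# row-major flattening yields emg1_t01, ..., emg8_t01, emg1_t02, ..., emg8_t20
X = np.stack([r.reshape(-1) for r in recordings])   # shape (100, 160)
y = labels                                          # the 161st "column"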
Once you have that working, you can model your data to better represent the natural time-series order it contains. You would move away from the 2D shape and instead create a 3D array to represent your data:
The first axis is the number of samples
The second axis is the number of time steps (20)
The third axis is the number of sensors (8)
With this shape you're all set to use a 1D convolutional model (CNN) in CNTK that traverses the time axis to learn local patterns from one step to the next.
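Continuing the sketch above, the same data in the 3D layout would simply be (names hypothetical):
X3d = X.reshape(-1, 20, 8)   # axis 0: samples, axis 1: time steps, axis 2: sensors
print(X3d.shape)             # (100, 20, 8); a 1D convolution then slides over axis 1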
You might also want to look into RNNs, which are often used to work with time-series data. However, RNNs are sometimes hard to train, and a recent paper suggests that CNNs should be the natural starting point for working with sequence data.
Warning: I know nothing about GIS. That will become very apparent in a moment, of course. My vocabulary isn't going to be spot on, either. Apologies.
I need to recreate parts of a "Strategy Map" that looks like this as a "real geo-spatial" map:
Why? Because if I can manage to plot the boxes ("Maximize Shareholder Value", "Exceed Customer Expectations", etc.) on a map in correct relation to each other, I can do some very fun stuff in a data visualization tool I'm working with.
I can build the strategy map above in Visio, and then use a script to export the shapes I care about as X,Y points OR polygons. One of the boxes above might look like this once exported:
ShapeNo ShapeName PointNo X Y
1 Exceed Cust 2 37 155
1 Exceed Cust 4 116 155
1 Exceed Cust 6 116 234
1 Exceed Cust 8 37 234
1 Exceed Cust 10 37 155
...or it might look like this:
POLYGON ((37 155, 116 155, 116 234, 37 234, 37 155))
Regardless, I have a bunch of points, and I need to turn these into lat/lon coordinates, using lat/lon (0,0) as my point of reference. In the map above, 0,0 might be beneath the "Exceed Customer Expectations" box - more or less dead center.
Then, I suspect I can find a tool that will convert this jumble of stuff into an ESRI shapefile and I can import directly into my dataviz tool.
Are there any known (free) tools, scripts, libraries, etc. that might do some of this for me?
Your problem shouldn't be solved with a GIS but I can appreciate that you have found some cool dataviz features that require a shapefile.
The problem is that you want to take some x,y points and convert them to lat/lon. Latitude and longitude refer specifically to points on the earth's surface, and the points in your problem have no relation to the earth's surface.
Another way to think of this is that you are trying to take random points and say one represents the capital of Russia and the other represents a large city in Germany etc.
Another problem is that you want a (0,0) reference point, but latitude and longitude are referenced to a datum, which is tied to a specific geographic location.
It's hard to suggest an alternative method to solve your problem without more information on your familiarity with graphic design tools, but lat/lon with GIS are not the direction to be looking.
Many people do convert x,y points to lat/lon, but this is not a direct conversion: the Cartesian coordinates must come from a known projection and datum for the conversion to be accurate.
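To make that concrete, here is what a legitimate conversion looks like when the x/y values do come from a known projected coordinate system (Web Mercator, EPSG:3857, is used purely as an illustration; the Visio page coordinates above do not qualify):
from pyproj import Transformer

to_latlon = Transformer.from_crs("EPSG:3857", "EPSG:4326", always_xy=True)
lon, lat = to_latlon.transform(37.0, 155.0)   # meaningful only because the
print(lon, lat)                               # source projection is known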
Check out this link for an in-depth explanation of why arbitrary x,y coordinates cannot be converted to lat/lon.
On the other hand, +1 for an out-of-the-box original idea for strategy map design!
What would be the most appropriate (naturally suited) way to represent the various (musical) chord-progression rules in a data structure, such that each chord has a weighted set of options that it could progress to?
This data structure would be used in a procedural music generation program so that you could write code like the following (language-agnostic pseudo-code):
Chord[8] songArray;                    // room for the first chord plus 7 more
Chord first = new Chord(I);            // set the first chord's value
songArray[0] = first;
for (i = 0; i < 7; i++) {
    Chord temp = songArray[i].next();  // select the following chord
    songArray[i+1] = temp;
}
Note: In classical-type music, each chord in a given key can naturally progress to another chord following these rules:
-----------------------
| Chord | Leads to    |
|=====================|
| I     | any         |
| ii    | V, vii      |
| iii   | IV, vi      |
| IV    | ii, V, vii  |
| V     | vi          |
| vi    | ii, IV, V   |
| vii   | I           |
-----------------------
The data structure would store the various progressions as weighted options. As an example, consider the IV chord in any given major key: IV can naturally progress to ii, V, or vii, but could also break the rules in progressing to any other chord. Breaking the rules would happen infrequently.
I have considered some sort of linked list/tree data structure, but it would hardly resemble any type of tree or list I've ever used. Additionally, I can't work out how to implement the weighting.
Another thought was to use JSON or something similar, but it seems to get redundant very quickly:
{
    "I": {
        "100%": [
            "I",
            "ii",
            "iii",
            "IV",
            "V",
            "vi",
            "vii"
        ]
    },
    "ii": {
        "80%": [
            "V",
            "vii"
        ],
        "20%": [
            "i",
            "ii",
            "iii",
            "IV",
            "vi"
        ]
    },
    // ...
}
Note: I am comfortable implementing this in a handful of languages, and at this point am NOT concerned with a specific language implementation, but a language-agnostic data-structure architecture.
A Markov Chain might be a good fit for this problem.
A Markov chain is a stochastic process where the progression to the next state is determined by the current state. So for a given chord from your table, you would apply weights to the "Leads to" values and then randomly determine which state to progress to.
I'd expect you to have fewer than 100 chords, so if you use 32 bits to represent the probability series (likely extreme overkill) you'd end up with a 100x100x4 (40,000) byte array for a flat Markov matrix representation. Depending on the sparsity of the matrix (e.g. if you have 50 chords but each one typically maps to only 2 or 3 chords), for speed and, less importantly, space reasons you may want an array of arrays where each final array element is (chord ID, probability).
In either case, one of the key points here is that you should use a probability series, not a probability sequence. That is, instead of saying "this chord has a 10% chance, this one has a 10% chance, and this one has an 80% chance", say "the first chord has a 10% chance, the first two chords have a 20% chance, and the first three chords have a 100% chance".
Here's why: when you go to select a random but weighted value, you can generate a number in a fixed range (for unsigned integers, 0 to 0xFFFFFFFF) and then perform a binary search through the chords rather than a linear search. (Search for the entry with the least probability-series value that is still greater than or equal to the number you generated.)
On the other hand, if you've only got a few following chords for each chord, a linear search will likely be faster than a binary search due to its tighter loop, and then all the probability series saves you is the calculation of a simple running sum of the probability values.
If you don't require the most staggeringly amazing performance for this portion of your code (and I suspect you don't -- for a computer there just aren't that many chords in a piece of music), I'd honestly just stick to a flat representation of the Markov matrix -- easy to understand, easy to implement, reasonable execution speed.
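As a short sketch of the probability-series lookup described above (the chords and weights are made-up illustration data):
import bisect
import random

transitions = {
    "IV": [("ii", 0.3), ("V", 0.3), ("vii", 0.2), ("other", 0.2)],
    # ... one entry per chord
}

def next_chord(current):
    options = transitions[current]
    series, total = [], 0.0
    for _, weight in options:          # running sum: 0.3, 0.6, 0.8, 1.0
        total += weight
        series.append(total)
    r = random.random() * total
    # binary search: first entry whose series value is >= r
    return options[bisect.bisect_left(series, r)][0]

print(next_chord("IV"))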
Just as a fun aside, this sort of thing lends itself well to thinking about predictive coding -- a common methodology in data compression. You might consider an n-gram based algorithm (e.g. PPM) to achieve higher-order structure in your music generation without too much example material required. It's been working in data compression for years.
It sounds like you want some form of directed, weighted graph where the nodes are the chords and the edges are the progression options with edge weights being the progression's likelihood.
Of these two designs for a crossroad database
#Street
street_id | street_nm
#Crossing
crossing_id | x | y | street_id_1 | street_id_2
VS
#Street
street_id | street_nm
#Crossing
crossing_id | x | y
#street crossing relationship
street_id | crossing_id
Assuming every crossing has exactly two roads, is there a reason why one would use the first solution over the second?
EDIT: For the second setup, how could I create a view where results look like this
crossing_id | x | y | street_nm_1 | street_nm_2
Also, I'm not sure how a junction with three roads would affect the view.
I'd prefer the second.
First of all "assuming every crossing has only exactly two roads" is quite risky. In general, when designing I prefer not to rely on assumptions that clash with reality because sooner or later your design will have to accomodate for "extra cases".
But the second design is better for another reason... assuming you want to design a query that returns all roads that cross road "X" (which I suppose would be a pretty common requirement) your first design forces you to test for road "X" id both in street_id_1 and street_id_2 - in general, the queries are more convoluted because whenever you are looking for a given road, you don't know if it will be listed in id_1 or id_2.
The relationship "x crosses y" should be symmetrical (unless you want to distinguish between "main roads" and "tributaries", which does not seem to be the case here) so the second design is closer to the intent.
Regarding your question about the view... what about:
Select a.crossing_id, a.x, a.y, b.street_nm, c.street_nm
from crossing a, crossing_rel e, crossing_rel f, street b, street c
where e.crossing_id = a.crossing_id and
      f.crossing_id = a.crossing_id and
      b.street_id = e.street_id and
      c.street_id = f.street_id and
      b.street_id <> c.street_id
Note that this will not enforce any particular order for which street appears as street_nm_1 and which as street_nm_2 (each crossing shows up twice, once per ordering)... maybe you will prefer something like:
Select a.crossing_id, a.x, a.y, b.street_nm, c.street_nm
from crossing a, crossing_rel e, crossing_rel f, street b, street c
where e.crossing_id = a.crossing_id and
      f.crossing_id = a.crossing_id and
      b.street_id = e.street_id and
      c.street_id = f.street_id and
      b.street_nm < c.street_nm
The second solution is a little more flexible for additions to either crossings or streets while keeping the relationship between them in its proper context. It's a subtle distinction, but worth mentioning.