Procedural snare drum - language-agnostic

So I've got something like:
void createSinewave( short * array, int duration, int startOffset,
float freq, float amp ) ;
void createSquarewave( short * array, int duration, int startOffset,
float freq, float amp ) ;
Other functions "slide" a wave form from some low frequency to some high frequency, and accept two frequency parameters.
Using just these functions I've been able to create a variety of sounds: a kick drum, an old-school laser-fire sound, and a bunch of things that sound like footsteps. But I've not been able to synthesize a snare-drum-type sound.
Any suggestions on how to generate one? What frequencies should I mix, and in what amounts? Are there other waveform types to use besides sine, square, and triangle waves?
Kind of inspired by 64k executable contests.

Procedural snare drum synthesis

Drums are often synthesized by short bursts of noise, for example white, pink or brown noise.
Of these, white noise is the easiest to generate: just fill your array with random samples, independently chosen with uniform probability. Brown noise is also pretty easy: it is essentially a running sum of white-noise samples (normalized so it doesn't drift out of range).
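For example, a minimal sketch in the same style as the functions above (the noise burst and its exponential decay envelope are my additions; createNoiseBurst is a hypothetical name, and amp is assumed to be in [0,1]):

#include <stdlib.h>
#include <math.h>

void createNoiseBurst( short * array, int duration, int startOffset,
                       float amp, float decay )
{
    int i ;
    for( i = 0 ; i < duration ; i++ )
    {
        /* uniform white noise in [-1,+1] */
        float sample = ( rand() / (float)RAND_MAX ) * 2.0f - 1.0f ;
        /* exponential decay envelope so the burst dies out */
        float env = expf( -decay * (float)i ) ;
        array[ startOffset + i ] = (short)( sample * env * amp * 32767.0f ) ;
    }
}

Layering such a burst over a short, quickly decaying low-frequency sine "thump" from createSinewave tends to get much closer to a snare than noise alone.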

Related

Custom mesh jittering in Mujoco environment in OpenAI gym

I've tried modifying the FetchPickAndPlace-v1 OpenAI environment to replace the cube with a pair of scissors. Everything works perfectly except for the fact that my custom mesh seems to jitter a few millimeters in and out of the table every few time steps. I've included a picture mid-jitter below:
As you can see, the scissors are caught mid-way through the surface of the table. How can I prevent this? All I've done is switch out the code for the cube in pick_and_place.xml with the asset related to the scissor mesh. Here's the code of interest:
<body name="object0" pos="0.0 0.0 0.0">
<joint name="object0:joint" type="free" damping="0.01"></joint>
<geom size="0.025 0.025 0.025" mesh="tool0:scissors" condim="3" name="object0" material="tool_mat" class="tool0:matte" mass="2"></geom>
<site name="object0" pos="0 0 0" size="0.02 0.02 0.02" rgba="1 0 0 1" type="sphere"></site>
</body>
I've tried playing around with the coordinates of the position and geometry but to no avail. Any tips? Replacing mesh="tool0:scissors" with type="box" gets rid of the problem entirely but I'm back to square one.
As suggested by Emo Todorov in the MuJoCo forums:
Replace the ground box with a plane and use MuJoCo 2.0. The latest version of the collision detector generates multiple contacts between a mesh and a plane, which results in more stable simulation. But this only works for plane-mesh, not for box-mesh.
The better solution is to break the mesh into several meshes, and include them as multiple geoms in the same body. Then MuJoCo will construct the convex hull of each sub-mesh, resulting in multiple contact points (even without the special plane mechanism mentioned above) and furthermore it will be a better approximation to the actual object geometry.
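A hedged sketch of what that could look like in pick_and_place.xml, assuming the scissors have been split into two convex sub-meshes (the sub-mesh asset names and per-geom masses here are hypothetical):

<body name="object0" pos="0.0 0.0 0.0">
  <joint name="object0:joint" type="free" damping="0.01"></joint>
  <!-- one geom per convex sub-mesh; MuJoCo builds the convex hull of each -->
  <geom mesh="tool0:scissors_part0" condim="3" name="object0:part0" material="tool_mat" class="tool0:matte" mass="1"></geom>
  <geom mesh="tool0:scissors_part1" condim="3" name="object0:part1" material="tool_mat" class="tool0:matte" mass="1"></geom>
  <site name="object0" pos="0 0 0" size="0.02 0.02 0.02" rgba="1 0 0 1" type="sphere"></site>
</body>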

How to create and use very large palette textures for use in OpenGL?

Details: I have a GLSL fragment shader with a uniform texture, "u_MapTexture", with several thousand colors on it (a max of about 10k-15k unique RGB values). I also have a uniform palette texture ("u_paletteTexture"), 16384 × 1, whose entries I want to index using the colors of u_MapTexture. My problem is that no matter what I try mathematically, I can't seem to properly map the colors from the first texture to indices in the palette texture using the RGB values of the passed color. Any thoughts or ideas as to how I could do this?
Wasn't sure whether to post this here, on Gamedev SE, or on the Math SE.
Edit: I guess I might not have included enough information about the problem, so here are some more details.
My current idea for the map is to keep an indexed palette of province colors, and to perform a palette-swap operation in my fragment shader (like the one outlined in this SO question: Simulating palette swaps with OpenGL Shaders (in LibGDX)). My shader is pretty much exactly copied from the linked article.
My problem: finding a way to uniquely index the province map (the original texture) -> province colors (the indexed palette texture).
At first, I decided that the palette texture would be configured as a (255+255)×(255+255) texture. This would give a maximum number of countries large enough that it would never be reached in practice.
I thought I could get the appropriate index for a country's color like so: each country's entry would be located at the palette texture's (x, y) = (r+g, g+b).
I ran some example colors through this simple equation and came across a troubling scenario:
RGB (0, 0, 0) -> (0, 0);
RGB (1, 0, 1) -> (1, 1); ?
RGB (1, 3, 2) -> (4, 5);
RGB (0, 1, 0) -> (1, 1); ?
RGB (2, 5, 10) -> (7, 15);
RGB (255, 255, 255) -> (510, 510);
The question marks flag "recurring" colors in the algorithm, i.e. distinct colors that would incorrectly map to the same country index.
Then I thought to add the components together and shrink the texture to a one-dimensional array.
For example, a color (r, g, b) would map to palette index (r+g+b).
With this, the same sample colors give:
RGB(0, 0, 0) -> (0);
RGB(1, 0, 1) -> (2); ?
RGB(0, 1, 1) -> (2); ?
RGB(1, 3, 2) -> (6); ?
RGB(3, 2, 1) -> (6); ?
RGB(0, 1, 0) -> (1);
RGB(2, 5, 10) -> (17);
RGB(255, 255, 255) -> (1020);
The recurrence problem is exacerbated. I did some quick calculations in my head (and thought about it more deeply in general) and realized that no matter how I add or multiply the RGB components, the same collisions occur: those formulas simply aren't injective. This leads to the actual problem: how can I uniquely and procedurally index country colors in the palette texture and access them via my shader? This seems like the most performant method, but its implementation is eluding me.
Also, for the record, I know that the UV coords and color values are floats, but I'm using the standard 0-255 format to reason the problem out.
TL;DR I need to extract a unique index from every RGB value, and that doesn't appear to be possible based on my test sets.
Basically the MCVE would be creating a 2D sprite and applying the fragment shader from the accepted answer of the linked SO question to the sprite. The sprite would be composed of about 10 unique RGB values; however, whatever system is used would have to support at least several thousand unique colors. I don't have a stable internet connection or I would upload my test textures.
Not sure if I get it right, but anyway, let's assume integer channels <0,255>, so:
id = r + 256*g + 65536*b
that will give you id = <0,16777215>. Now just remap to your xs*ys texture:
x = id%xs
y = id/xs
where xs, ys is the resolution of the texture. Once you realize you can use powers of 2 for all of this, you can use bit operations instead. For example, let xs=4096, ys=4096 ...
id = r | (g<<8) | (b<<16)
x = id&4095
y = id>>12
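The same mapping as a tiny C++ helper, a sketch assuming 8-bit channels and a 4096x4096 palette texture (the function name is mine):

#include <cstdint>

// pack 8-bit RGB into a 24-bit id, then split that id into the (x, y)
// texel coordinates of a 4096 x 4096 palette texture
void rgbToTexel( uint8_t r, uint8_t g, uint8_t b, int &x, int &y )
{
    uint32_t id = (uint32_t)r | ((uint32_t)g << 8) | ((uint32_t)b << 16); // id = <0,16777215>
    x = id & 4095;   // id % 4096
    y = id >> 12;    // id / 4096
}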
[Edit1]
So if I use this image you linked as input (txr_map):
And generate a 4096x4096 texture all filled with 0x00404040 gray color except:
((DWORD*)(scr.txrs.txr.txr))[0x4A3020]=0x00FF0000;
((DWORD*)(scr.txrs.txr.txr))[0x49247E]=0x0000FF00;
((DWORD*)(scr.txrs.txr.txr))[0xCB3EAD]=0x000000FF;
((DWORD*)(scr.txrs.txr.txr))[0xC78A4F]=0x0000FFFF;
((DWORD*)(scr.txrs.txr.txr))[0x593D4E]=0x00FF00FF;
((DWORD*)(scr.txrs.txr.txr))[0x4B3C7E]=0x00FFFF00;
where scr.txrs.txr.txr is a linearly allocated texture array, so the address is also your id... This selects a few regions I picked with a color picker and sets them to specific colors (red, green, blue, ...).
Do not forget to set GL_NEAREST for min and mag filter (GL_LINEAR would blend neighboring map colors and corrupt the decoded ids). Then applying these shaders should do the trick:
//---------------------------------------------------------------------------
// Vertex
//---------------------------------------------------------------------------
#version 120
varying vec2 pos; // screen position <-1,+1>
varying vec2 txr; // texture position <0,1>
void main()
{
pos=gl_Vertex.xy;
txr=gl_MultiTexCoord0.st;
gl_Position=gl_Vertex;
}
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
// Fragment
//---------------------------------------------------------------------------
#version 130
in vec2 pos; // screen position <-1,+1>
in vec2 txr; // texture position <0,1>
out vec4 col;
uniform sampler2D txr_map;
uniform sampler2D txr_pal;
//---------------------------------------------------------------------------
void main()
{
vec3 c;
int id,x,y;
c=texture2D(txr_map,txr).rgb;
x=int(float(c.b*255.0f)); id =x;     // decode 24-bit id from the map color
x=int(float(c.g*255.0f)); id|=x<<8;  // (r ends up in the high byte here)
x=int(float(c.r*255.0f)); id|=x<<16;
x= id &4095;                         // x = id % 4096
y=(id>>12)&4095;                     // y = id / 4096
c.s=(float(x)+0.5f)/4096.0f;         // +0.5 samples at texel centers
c.t=(float(y)+0.5f)/4096.0f;
col=texture2D(txr_pal,c.st);
}
//---------------------------------------------------------------------------
Sadly usampler2D does not work in my engine in the old API (that is why I use floats; most likely some internal texture format problem). My CPU side GL code looks like this:
//---------------------------------------------------------------------------
OpenGLscreen scr; // my GL engine
GLSLprogram shd; // shaders
GLint txr_map=-1; // map
GLint txr_pal=-1; // palette
//---------------------------------------------------------------------------
void TForm1::draw()
{
scr.cls(); // glClear
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
shd.bind(); // use shader program
int unit=0;
scr.txrs.bind(txr_map,unit); shd.set1i("txr_map",unit); unit++; // bind textures and set uniforms
scr.txrs.bind(txr_pal,unit); shd.set1i("txr_pal",unit); unit++;
float a=5632.0/8192.0; // handle texture power of 2 size correction
glActiveTexture(GL_TEXTURE0);
glBegin(GL_QUADS);
glTexCoord2f(0.0,1.0); glVertex2f(-1.0,-1.0);
glTexCoord2f(0.0,0.0); glVertex2f(-1.0,+1.0);
glTexCoord2f( a ,0.0); glVertex2f(+1.0,+1.0);
glTexCoord2f( a ,1.0); glVertex2f(+1.0,-1.0);
glEnd();
for (unit--;unit>=0;unit--) scr.txrs.unbind(unit); // unbind textures
shd.unbind(); // unbind shaders
// just prints the GLSL logs for debug
scr.text_init_pix(1.0);
glColor4f(1.0,1.0,1.0,0.75);
scr.text(0.0,0.0,shd.log);
scr.text_exit_pixel();
scr.exe(); // glFlush
scr.rfs(); // swap buffers
}
//---------------------------------------------------------------------------
The result looks like this:
When I mix both result and input texture (for visual check) with:
col=(0.9*texture2D(txr_pal,c.st))+(0.1*texture2D(txr_map,txr));
The result looks like this:
So it clearly works as expected...
Not sure I understand exactly what you want to do.
First of all, the only way to uniquely map all 8-bit RGB colors to indices is to have 256^3 indices. You can shuffle the bits around to have a non-identity mapping (like here), but you still need that many destination indices.
If only a subset of all colors is used and you want fewer than 256^3 destination indices (as you seem to describe), some mechanism needs to be put in place to avoid collisions. Unless you have some special properties in the source colors that can be exploited mathematically, this mechanism will require some form of storage (like another texture or an SSBO).
Now what I don't understand is what you want to map to indices. Do you want to map all possible colors to a unique index? Does everything related to the mapping have to be done exclusively inside the shader? You mention countries and provinces, but I don't quite get how they relate exactly to the mapping you want.
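For illustration, a minimal CPU-side sketch of such a storage-based mechanism, assuming palette slots are assigned while scanning the source pixels (all names here are mine, not from the question):

#include <cstdint>
#include <unordered_map>
#include <vector>

// give each distinct packed RGB color the next free palette slot;
// the resulting table can then be baked into a lookup texture
std::unordered_map<uint32_t, uint16_t> buildPaletteIndex( const std::vector<uint32_t> &pixels )
{
    std::unordered_map<uint32_t, uint16_t> index;
    for ( uint32_t c : pixels )
        index.emplace( c, (uint16_t)index.size() ); // no-op if c is already indexed
    return index;
}

The per-pixel lookup table (rather than an arithmetic formula) is what guarantees uniqueness when only a subset of all colors is in use.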

Cocos2d-x turn based RPG game: run characters' attack animation in sequence

I am building a turn-based RPG game with Cocos2d-x 3.3final. I have four sprites, say sprite1, sprite2, sprite3 and sprite4. Each has an associated attack animation (of type Animate*), say sprite1attack, sprite2attack, sprite3attack and sprite4attack.
During the battle, the attack order of these characters changes every turn according to the user's operation. In my code, I would like to have a menu callback function that runs the four characters' attack animations in sequence when the user clicks the associated button:
void onStart(){
}
If I coded it like:
void onStart(){
sprite1->runAction(sprite1attack);
sprite2->runAction(sprite2attack);
sprite3->runAction(sprite3attack);
sprite4->runAction(sprite4attack);
}
The four animations would run all together.
Are there any good design patterns that can run the sprites' animations in any sequence the user wants?
It's OK to add variables, like vector<int> attackOrder.
When you want to put several actions targeting different sprites into one sequence, you will need to use TargetedAction.
For example, in your case you need something like this:
auto sequencedAction = Sequence::create(
    TargetedAction::create(sprite1, sprite1attackAnimation),
    TargetedAction::create(sprite2, sprite2attackAnimation), nullptr);
sprite1->runAction(sequencedAction);
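If the order changes every turn, you can build the sequence from a user-defined order at runtime. A minimal sketch, assuming the sprites and their Animate* actions are kept in parallel containers (the names runAttacksInOrder, sprites, attacks and attackOrder are mine, not Cocos2d-x API):

#include "cocos2d.h"
#include <vector>
using namespace cocos2d;

void runAttacksInOrder( const std::vector<Sprite*>& sprites,
                        const std::vector<Animate*>& attacks,
                        const std::vector<int>& attackOrder )
{
    // wrap each animation in a TargetedAction so one sequence can
    // drive a different sprite at every step
    Vector<FiniteTimeAction*> steps;
    for (int i : attackOrder)
        steps.pushBack(TargetedAction::create(sprites[i], attacks[i]));
    // the node that runs the action is irrelevant; each TargetedAction
    // retargets to its own sprite
    sprites[attackOrder.front()]->runAction(Sequence::create(steps));
}

Called as, e.g., runAttacksInOrder({sprite1, sprite2, sprite3, sprite4}, {sprite1attack, sprite2attack, sprite3attack, sprite4attack}, {2, 0, 3, 1}), this plays the four attacks one after another in whatever order the turn logic produced.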

AS3 additive tone synthesis: playing multiple generated sounds

Inspired by André Michelle, I'm building a tone matrix in AS3.
I managed to create the matrix and generate the different sounds. They don't sound that good, but I'm getting there.
One big problem I have is that when more than one dot is set to play, it sounds just horrible. I googled a lot and found the additive synthesis method, but I don't have a clue how to apply it in AS3.
Anybody out there know how to play multiple sounds together? Any hint?
my demo is at www.inklink.co.at/tonematrix
Oh come on, the sound was horrible...
Checked wiki? It is not that hard to understand... Even if you don't know that much mathematics... Which you should - PROGRAMMING music is not easy.
So:
Let's first define something:
var harmonics:Array = new Array();
harmonics is the array in which we will store the individual harmonics. Each child will be another array, containing ["amplitude"] (technically the volume), ["frequency"] and ["wavelength"] (period). We also need a function that gives us the phase of the wave given the amplitude, wavelength and offset (from the beginning of the wave). For a square wave, something like:
function getSquarePhase(amp:Number, wl:Number, off:Number):Number {
while (off > wl){off -= wl;}
return (off > wl / 2 ? -amp : amp); // amp in the first half, -amp in the second
}
You might add other types, or even custom vector waves if you want.
Now for the harder part.
var samplingFrequency:Number; // set this to your sampling frequency
function getAddSyn(harmonics:Array, time:Number):Number {
if (harmonics.length == 1){ // We do not need to perform AS here
return getSquarePhase(harmonics[0]["amplitude"], harmonics[0]["wavelength"], time);
} else {
var hs:Number = 0;
hs += 0.5 * (harmonics[0]["amplitude"] * Math.cos(getSquarePhase(harmonics[0]["amplitude"], harmonics[0]["wavelength"], time)));
// ^ You can try to remove the line above if it does not sound right.
for (var i:int = 1; i < harmonics.length; i++){
hs += harmonics[i]["amplitude"] * Math.cos(getSquarePhase(harmonics[i]["amplitude"], harmonics[i]["wavelength"], time)) * Math.cos((Math.PI * 2 * harmonics[i]["frequency"] / samplingFrequency) * time);
hs -= Math.sin(getSquarePhase(harmonics[i]["amplitude"], harmonics[i]["wavelength"], time)) * Math.sin((Math.PI * 2 * harmonics[i]["frequency"] / samplingFrequency) * time);
}
return hs;
}
}
This is all just converted (weakly :D) from Wikipedia; I may have made a mistake somewhere in there... But I think you should get the idea... And if not, try to convert the AS from Wikipedia yourself; as I said, it is not so hard.
I also somehow ignored the Nyquist frequency...
I have tried your demo and thought it sounded pretty good actually. What do you mean it doesn't sound that good? What's missing? My main area of interest is music and I haven't found anything wrong, only it's a little frustrating, because after creating a sequence, I feel the need to add new sounds! Had I been able to record what I was playing with, I would have sent it to you.
Going into additive synthesis doesn't look like a light undertaking though. How far do you want to push it, would you want to create some form of synthesizer?

How does this work in computing the depth map?

From this site: http://www.catalinzima.com/?page_id=14
I've always been confused about how the depth map is calculated.
The vertex shader function calculates position as follows:
VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
VertexShaderOutput output;
float4 worldPosition = mul(input.Position, World);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);
output.TexCoord = input.TexCoord; //pass the texture coordinates further
output.Normal =mul(input.Normal,World); //get normal into world space
output.Depth.x = output.Position.z;
output.Depth.y = output.Position.w;
return output;
}
What are output.Position.z and output.Position.w? I'm not sure as to the maths behind this.
And in the pixel shader there is this line: output.Depth = input.Depth.x / input.Depth.y;
So output.Depth is output.Position.z / output.Position.w? Why do we do this?
Finally in the point light shader (http://www.catalinzima.com/?page_id=55) to convert this output to be a position the code is:
//read depth
float depthVal = tex2D(depthSampler,texCoord).r;
//compute screen-space position
float4 position;
position.xy = input.ScreenPosition.xy;
position.z = depthVal;
position.w = 1.0f;
//transform to world space
position = mul(position, InvertViewProjection);
position /= position.w;
Again, I don't understand this. I sort of see why we use InvertViewProjection, as we multiplied by the view projection previously, but z being set to the depth value and w to 1, after which the whole position is divided by w, confuses me quite a bit.
To understand this completely, you'll need to understand how the algebra that underpins 3D transforms works. SO does not really help (or I don't know how to use it) to do matrix math, so it'll have to be without fancy formulas. Here is some high-level explanation though:
If you look closely, you'll notice that all transformations that happen to a vertex position (from model to world to view to clip coordinates) use 4D vectors. That's right. 4D. Why, when we live in a 3D world? Because in that 4D representation, all the transformations we usually want to apply to vertices are expressible as a matrix multiplication. This is not the case if we stay in a 3D representation. And matrix multiplications are what a GPU is good at.
What does a vertex in 3D correspond to in 4D? This is where it gets interesting. The (x, y, z) point corresponds to the line (a·x, a·y, a·z, a). We can grab any point on this line to do the math we need, and we usually pick the easiest one, a=1 (that way, we don't have to do any multiplication, just set w=1).
So that answers pretty much all the math you're looking at. To take a 3D point into 4D, we set w=1; to get back a component of a 4D vector that we want to compare against our standard 3D sizes, we divide that component by w.
This coordinate system, if you want to dive deeper, is called homogeneous coordinates.
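To make the point-light snippet above concrete, here is the reconstruction written out (my notation, not from the article):

clip  = (x_screen, y_screen, z/w, 1)
p     = InvertViewProjection * clip
world = p / p_w

The final division by p_w is exactly the "pick a=1" step: the inverse transform generally lands on some point (a·x, a·y, a·z, a) of the homogeneous line, and dividing by w brings it back to the a=1 representative, whose first three components are the recovered 3D world position.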