I'm trying to build audio-reactive LEDs as a little project. I'm using an API to control the LEDs from my computer. What I'm trying to accomplish is to divide the audio stream into frequency ranges. For example, dividing a channel that spans 22100 Hz into 100 Hz ranges should give 221 groups. Then I want to assign each group to one or more LEDs and drive effects with the normalized mean value of each group.

How can I accomplish this? I've been reading about FFT and have tried some sample code, but so far I've failed; I can't seem to get my head around it. Below is the closest project I've found online. I've tweaked it and I'm able to control my LEDs, but I just can't, for example, isolate the first 100 Hz.
https://gist.github.com/limitedmage/2245892
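To make the goal concrete, here is a rough sketch of the binning I'm describing (Python/NumPy purely for illustration - this isn't my actual setup, and the sample rate and chunk size are assumptions):

```python
# Minimal sketch (assumed sample rate and chunk size): split a mono audio
# chunk into 100 Hz groups with an FFT and take the normalized mean
# magnitude of each group.
import numpy as np

SAMPLE_RATE = 44100   # Hz, assumed capture rate (Nyquist = 22050 Hz)
BIN_WIDTH = 100       # Hz per LED group

def frequency_groups(chunk: np.ndarray) -> np.ndarray:
    """chunk: 1-D array of mono samples (floats in [-1, 1])."""
    # rfft covers 0 Hz up to SAMPLE_RATE / 2
    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / SAMPLE_RATE)

    # Average the magnitudes inside each 100 Hz slice
    n_groups = int(freqs[-1] // BIN_WIDTH) + 1
    means = np.zeros(n_groups)
    for i in range(n_groups):
        mask = (freqs >= i * BIN_WIDTH) & (freqs < (i + 1) * BIN_WIDTH)
        if mask.any():
            means[i] = spectrum[mask].mean()

    # Normalize to [0, 1] so each value maps straight to an LED brightness
    peak = means.max()
    return means / peak if peak > 0 else means

# One FFT frame of 4096 samples -> 221 groups at 44.1 kHz
groups = frequency_groups(np.random.uniform(-1, 1, 4096))
print(len(groups))    # 221; "the first 100 Hz" would be groups[0]
```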
I'd like to find out the offset of certain information like level, equipped items etc. from the ER0000.sl2 save file.
It has been done for Dark Souls here.
I've also found some offset locations in the file myself.
What would be a good approach to reverse-engineer an exhaustive list like the first one?
Change my save state in-game, diff the save files, and see what changes? Or is there an easier way?
I would use previous Souls-like games as a base (like the link you provided). You want to collect a lot of save files and compare what is the same and what is different, given that you know when the data is being saved, like you suggested.
For Elden Ring, all character slots are always the same size, but not all of the data within them is always at the same location within the slot's save data. For example, the location of the death count can differ by thousands of bytes between different character saves.
I found out roughly where the death count is in the save (the statement above explains the "roughly") by dying and comparing where the value changed using a hex editor, knowing death counts are stored as 32-bit little-endian integers.
I haven't looked, but if you can find the structure of the save data within IDA Pro or similar, you could have an easier time too (if they have an object for the save data).
A good trick is to use a memory reader while playing the game: read the value you want to find and then search for it within the save file. For example, I made a new save and died until I could find the death count in Cheat Engine, then switched to my other save that had hundreds of deaths; I found its death count instantly, which made it easy to locate in the save file.
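As a rough illustration of the diffing idea (the file names are placeholders, and it assumes the counter didn't shift position between the two dumps - see the caveat above):

```python
# Sketch: compare two dumps of the same save taken before and after one
# death, and list every offset whose 32-bit little-endian value rose by
# exactly 1 - the death counter should be among the few hits.
import struct

def find_incremented_u32(before_path, after_path):
    before = open(before_path, "rb").read()
    after = open(after_path, "rb").read()
    hits = []
    for off in range(min(len(before), len(after)) - 3):
        a = struct.unpack_from("<I", before, off)[0]
        b = struct.unpack_from("<I", after, off)[0]
        if b == a + 1:
            hits.append(off)
    return hits

print([hex(o) for o in find_incremented_u32("before.sl2", "after.sl2")])
```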
I have a question that is very important to me. I would like to use the Autodesk Reality Capture API in my app. I read the API documentation but did not find what I need. I know the position of the camera and I would like to send this information to the Reality Capture API. For example, a circle was divided into 24 parts, so I know that each photo was taken every 15 degrees. Is there any parameter that gives me the possibility to provide the position of the camera?
There is no way of passing this kind of information to the Reality Capture API (at least no official way), and even if that is debatable, there is not much use for such input.
Roughly speaking, the engine will "stitch" the given images together based on common pixels/regions/patches. For complex objects, a photo every 15 degrees might not be enough to capture the complex geometry, and you will have to add more photos aimed at that specific region.
The main benefit is that you can process your images, get the result, see the missing or low-detail spots, take a bunch of photos of those specific spots, add them to the project, process it again, and repeat until you get a satisfying result. From this perspective, the "rule" of photos taken every 15 degrees breaks down very fast.
If you are getting wrong results, 80% of the time (the Pareto principle again) the cause is a missing scenetype parameter, which defaults to aerial, when people usually expect the object type.
Check The Hitchhiker's Guide to ... Reality Capture API for more details.
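For illustration, creating the photoscene with an explicit scenetype might look roughly like this (the endpoint and field names follow the Forge Photo-to-3D documentation as I recall it - verify them there before relying on this):

```python
# Hedged sketch: request a photoscene with scenetype forced to "object"
# instead of the "aerial" default.
import requests

FORGE_TOKEN = "..."  # OAuth token obtained beforehand

resp = requests.post(
    "https://developer.api.autodesk.com/photo-to-3d/v1/photoscene",
    headers={"Authorization": "Bearer " + FORGE_TOKEN},
    data={
        "scenename": "my-turntable-scan",  # illustrative name
        "format": "obj",
        "scenetype": "object",             # the parameter people forget
    },
)
print(resp.json())
```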
I'm trying to make a program that can convert ORG files into WAV files directly. The ORG format is similar to MIDI, in the sense that it is a list of "instructions" about when and how to play specific instruments, and a program plays these instruments for it to create the song.
However, as I said, I want to generate a WAV directly, instead of just playing the ORG. So, in a sense, I want to "play" the sounds into a WAV. I do know the WAV format and have created some files from raw PCM samples, but this isn't as simple.
The sounds generated by the ORG come from a bunch of files containing WAV samples I have. They're mono, 8-bit samples meant to be played at 22050 Hz. They're all under a second long, and the largest are no more than 11 KB. To play them one after another, I would simply put the samples into the WAV one after the other. It isn't that simple, though: the ORG can have up to 16 different instruments playing at once, and each note of each instrument also has a pan (i.e. a balance, allowing stereo sound). What's more, each ORG has its own tempo (i.e. milliseconds between each point at which a sound can be played), and some sounds may be longer than this tempo, which means that two sounds on the same instrument can overlap. For instance, a note plays on an instrument; 90 milliseconds later the same note plays on the same instrument, but the first note hasn't finished, so the first note plays into the second.
I just wanted to explain all of that to be sure the situation is clear. In any case, I'd basically like to know how I would go about converting or "playing" an ORG (or, if you like, a MIDI, since they're essentially the same) into a WAV. As I mentioned, each note has a pan/balance, so the WAV would also need to be stereo.
If it matters at all, I'll be doing this in ActionScript 3.0 in FlashDevelop. I don't need any code (as that would be asking someone to do the work for me), but I just want to know how I would go about doing this correctly. An algorithm or two may be handy as well.
First let me say AS3 is not the best language for this kind of thing. SuperCollider would be a better and easier choice.
But if you want to do it in AS3, here's a general approach. I haven't tested any of it; this is pure theory.
First, put all your sounds into an array, and then find a way of matching the notes from your MIDI file to a position in the array.
I don't know the MIDI format in depth, but I know the smallest time unit is a tick, and the length of a tick depends on the BPM and the file's ticks-per-quarter-note resolution (PPQ): one tick lasts 60 / (BPM × PPQ) seconds. See Midi Ticks to Actual PlayBack Seconds !!! ( Midi Music) for the derivation.
Let's say your tick is 2 ms long. So now you have a base value. You can fill a Vector (like an Array, but faster) with what happens at every tick. If nothing happens at a particular tick, insert a null value.
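As a rough illustration of that bookkeeping (Python rather than AS3 for brevity; the values and the note tuples are made up):

```python
# One tick lasts 60 / (BPM * PPQ) seconds, where PPQ is the file's
# ticks-per-quarter-note resolution.
BPM = 120
PPQ = 250                           # assumed resolution
tick_seconds = 60.0 / (BPM * PPQ)   # = 0.002 s, i.e. the 2 ms above

# One slot per tick: a list of notes starting on that tick, or None.
n_ticks = 10_000
events = [None] * n_ticks
events[0] = [("piano", 60, -0.5)]   # (instrument, note, pan) - made up
events[45] = [("drum", 36, 0.0)]    # tick 45 = 90 ms in
```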
Now the big problem is reading that Vector. It's a problem because the Timer class does not work at values as small as 2 ms. But what you can do is check the elapsed time in ms since the app started using getTimer(). You can have a loop that checks the elapsed time, and whenever 2 ms more have passed, you read the next index in the Vector. If there are notes at that index, you play the sounds; if not, you wait for the next tick.
The problem with this is that if a loop runs for more than 15 seconds (I'm not sure of the exact value), Flash will think the program is not responding and kill it. So you have to take care of that too, ending the loop and opening a new one before Flash kills your program.
OK, so now you have sounds playing. You can record the sounds that Flash is making (WAVs, MP3s, mic) with a library called Standing Wave 3.
https://github.com/maxl0rd/standingwave3
This is very theoretical... and I'm quite sure that, depending on the number of sounds you want to play, you could freeze your program... but I hope it helps get you going.
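One more thought: since you already know the WAV format, you could also skip real-time playback entirely and mix offline - add every note's samples into a stereo buffer at its start offset, apply the pan, then write the buffer out. A rough sketch of that idea (Python rather than AS3, with a made-up note list):

```python
# Offline mixing sketch: overlapping notes simply sum into the buffer,
# which handles the "note plays into the next note" case for free.
import numpy as np
import wave

RATE = 22050  # playback rate of the 8-bit mono instrument samples

def render(notes, length_seconds, out_path):
    """notes: list of (start_seconds, mono_float_samples, pan in [-1, 1])."""
    out = np.zeros((int(length_seconds * RATE), 2))
    for start, smp, pan in notes:
        i = int(start * RATE)
        j = min(i + len(smp), len(out))
        # Simple linear pan: -1 = hard left, 0 = center, +1 = hard right
        out[i:j, 0] += smp[: j - i] * (1 - pan) / 2
        out[i:j, 1] += smp[: j - i] * (1 + pan) / 2
    pcm = (np.clip(out, -1, 1) * 32767).astype("<i2")
    with wave.open(out_path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(pcm.tobytes())

tone = np.sin(np.linspace(0, 2 * np.pi * 440, RATE))  # stand-in instrument
render([(0.0, tone, -0.5), (0.09, tone, 0.5)], 1.5, "song.wav")
```

The second note starting 0.09 s in is exactly your two-notes-90-ms-apart case: the samples just add together.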
I am going to be working on a self-chosen project for my college networking class, and I just had a couple of questions to help get me started in the right direction.
My project will involve creating a new "physical" link over which data, in the form of text, will be transmitted from one computer to another. This link will involve one computer with a webcam that reads a series of flashing colors (black/white) as binary and converts it to text. Each series of flashes will simulate a packet of data. I will be using OS X and the integrated webcam in a MacBook; the flashing computer will run either Windows or OS X.
So my questions are: which programming languages or APIs would be best for reading live webcam data and analyzing the color of a certain area, as well as for programming and timing the flashes? Also, would I need to worry about matching the flash rate of the "writing" computer to the frame capture rate of the "reading" computer?
Thank you for any help you might be able to provide.
Regarding the frame capture rate, the Shannon sampling theorem says that "perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal being sampled". In other words, if your flashing light switches 10 times per second, you need a camera of more than 20 fps to capture it properly. So basically: check your camera specs, divide by 2, lower the result a little, and you have your maximum flashing rate.
Whatever can grab the frames will work. If the lighting conditions the camera works in are stable and the position of the light in the image is static, then it is going to be very easy: just check the average pixel value of a certain area.
If you need additional image processing, you should also look into OpenCV (it has bindings for most major programming languages).
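As a sketch of how simple the reading side can be (the region coordinates and threshold are made up):

```python
# Average the brightness of a fixed region each frame and threshold it
# into a bit.
import cv2

cap = cv2.VideoCapture(0)          # built-in webcam
x, y, w, h = 300, 200, 40, 40      # region where the light appears
THRESHOLD = 128                    # mid-gray split between "on" and "off"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    level = gray[y:y + h, x:x + w].mean()
    print(1 if level > THRESHOLD else 0, end="", flush=True)
    cv2.imshow("receiver", frame)
    if cv2.waitKey(1) == 27:       # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```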
To answer your question about language choice, I would recommend java. The Java Media Framework is great and easy to use. I have used it for capturing video from webcams in the past. Be warned, however, that everyone you ask will recommend a different language - everyone has their preferences!
What are you using as the flashing device? What kind of distance are you trying to achieve? Something worth thinking about is how are you going to get the receiver to recognise where within the captured image to look for the flashes. Some kind of fiducial marker might be necessary. Longer ranges will make this problem harder to resolve.
If you're thinking about shorter ranges, have you considered using a two-dimensional transmitter? (given that you're using a two-dimensional receiver, it makes sense) and maybe have a transmitter that shows a sequence of QR codes (or similar encodings) on a monitor?
You will have to consider some kind of error-correcting code, such as a Hamming code. While the encoding increases the data footprint, it might give you better overall bandwidth, given that you can crank the speed up much higher without having to worry about the odd corrupt bit.
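For illustration, a minimal Hamming(7,4) sketch - every 4 data bits gain 3 parity bits, and the receiver can correct any single flipped bit in each 7-bit block:

```python
def hamming74_encode(d):           # d: 4 data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # standard bit order

def hamming74_decode(c):           # c: 7 received bits
    p1, p2, d1, p3, d2, d3, d4 = c
    # The syndrome re-checks each parity equation; its value is the
    # 1-based position of the flipped bit (0 means no error).
    s = ((p1 ^ d1 ^ d2 ^ d4)
         | (p2 ^ d1 ^ d3 ^ d4) << 1
         | (p3 ^ d2 ^ d3 ^ d4) << 2)
    if s:
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                       # simulate one corrupt bit in transit
assert hamming74_decode(code) == [1, 0, 1, 1]
```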
Some 'evaluation' type material might include discussing the obvious security risks of such a channel - anyone with line of sight to the transmitter can eavesdrop! You could suggest in your write-up using some kind of encryption; a block cipher in CBC mode would do, but it would require a key exchange prior to transmission, so you could also think about public-key encryption.
I'm working on a "retro" motorbike game in flash, similar to the Road rash series on the mega drive and after having a long play with the sound sampling capabilities of flash I can't manage to find the "right" way to generate the noise.
I've been trying to basically change the frequency of a sine wave in line with the revs, so as the revs increase so does the frequency - it sorta works, but it sounds nothing like a real engine (I've been a biker for a while and I ride to work on my bike every day, so I "know" what it should sound like :-p).
I'm not so much after a realistic sound, just something that sounds "okay", or good enough that most people playing the game wouldn't notice and would be happy that the sound actually relates to the revs and speed, as opposed to just a flat MP3.
I can't seem to search on Google, as I can't find the right words; "engine" just dilutes all the results with game engines and whatnot.
The majority of articles I find also suggest using sampling - but there are 2 major issues with this:
Even though I have a bike and could record the sounds, sampling the RPM range - say 16 samples at 1000 RPM intervals (my GSX-R revs all the way to 16k :-p) - isn't enough: I'd also have to sample each RPM at various loads, e.g. 0 mph, 10 mph, 20 mph, 30 mph, 40 mph, as the engine noise varies greatly depending on load. That totals a whopping 80 samples - although I'm not sure if the load could somehow be simulated on top of the RPM samples?
All those samples add up to bytes that have to be downloaded before you can play.
One approach I've found uses a mix of sampled engine sounds and synthesized tones. Get samples of an engine at a couple of different RPMs and use those as the base. Mix two samples based on the current RPM: e.g. if it's 1650 RPM, play a sample taken at 1500 RPM at 70% volume and a 2000 RPM sample at 30% volume. Modify the overall volume based on the throttle. Add a sine-wave tone based on the RPM, like you've done.
The technique is described in the paper Design of a Driving Simulation Sound Engine (PDF), which is about synthesizing engine (and other driving-related) sounds for a driving simulator. I found it by searching for sound synthesis "engine sound" (with "engine sound" in quotes). Motor sound effect synthesis has some discussion of synthesizing motor sounds in general, with instructions for the Pure Data environment.
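A rough sketch of that crossfade weighting (the sample table is made up; the strings stand in for loaded sounds):

```python
# Pick the two RPM samples bracketing the current RPM and weight them by
# linear interpolation: 1650 RPM -> 70% of the 1500 sample, 30% of 2000.
samples = {1000: "s1000", 1500: "s1500", 2000: "s2000"}  # RPM -> sound

def blend(rpm):
    rpms = sorted(samples)
    lo = max(r for r in rpms if r <= rpm)
    hi = min(r for r in rpms if r >= rpm)
    if lo == hi:
        return [(samples[lo], 1.0)]
    t = (rpm - lo) / (hi - lo)
    return [(samples[lo], 1 - t), (samples[hi], t)]

print(blend(1650))   # [('s1500', 0.7), ('s2000', 0.3)]
```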
Sonoflash have a library of code-based sounds (some for free), and may have something appropriate for you, or at least a starting point. Their 'machine propeller', for example.
I've tried Sonoflash as well, it's really good!
http://www.sonoflash.com/sounds/#EngineLight
This might be the sound you are looking for.