I'm trying to replicate a signal from a remote using another wireless transmitter (a CC1101, to be exact). I got somewhere, but I am stuck with a fundamental question.
At the top is the signal I'm trying to replicate, and at the bottom we have the signal my transmitter is currently sending.
Apart from the amplitude, and the fact that some of the bits seem off, I have one main question:
In the original signal I see a single sine wave (2π-ish) per bit, but if I zoom in on my signal, a single bit consists of a lot of full waves (like 16π worth).
I am confused about what these waves even mean. I would say it has to do with frequency, since my signal just has more waves in the same amount of time. But both signals are recorded at 434 MHz, so shouldn't the frequency be the same? Is there more than one frequency involved? Am I missing something here? And how do I make my transmitter transmit a single wave per bit?
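To put numbers on my confusion, here's a quick back-of-the-envelope check in Python (the 1 ms bit duration is just an assumed value for illustration; I haven't measured the real one):

```python
# How many 434 MHz carrier cycles would fit into one bit?
# The bit duration below is an assumption, not a measured value.
carrier_freq_hz = 434e6   # both captures were recorded at 434 MHz
bit_duration_s = 1e-3     # assumed 1 ms per bit

cycles_per_bit = carrier_freq_hz * bit_duration_s
print(f"{cycles_per_bit:,.0f} carrier cycles per bit")  # 434,000
```

So neither trace can be showing the raw carrier itself (that would be hundreds of thousands of waves per bit); both views must already be downconverted or decimated somehow, which is part of what confuses me about comparing them.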
PS. Sorry if this is a simple question, I'm just teaching myself all this radio stuff. Also any other tips and tricks for my remote replication would be appreciated ;)
I am trying to train a bot for a game like Curve Fever. It is like Snake, except the snake moves with a really precise turn radius (not 90°), makes random holes (which it can pass through), and, as in a snake game, it dies if it goes out of the map or hits itself. The difference in scoring is that the snake has to survive as long as possible and there is no food. The tail of the snake grows by 1 at every step. It looks like this:
So I use a deep Q-learning algorithm with a CNN, inspired by Flappy Bird deep Q-learning, which is itself inspired by DeepMind's paper Playing Atari with Deep Reinforcement Learning.
My input images are thresholded like the one above, so everything is black or white.
At every step I grant a reward of +0.1 for staying alive and -1 for dying on the border of the map or on its own tail.
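For reference, my reward logic looks roughly like this (a simplified sketch; the names are just illustrative):

```python
# Per-step reward as described above: +0.1 for surviving, -1 for dying.
ALIVE_REWARD = 0.1
DEATH_REWARD = -1.0

def step_reward(hit_border: bool, hit_self: bool) -> float:
    """Reward for one step of the game."""
    if hit_border or hit_self:
        return DEATH_REWARD
    return ALIVE_REWARD
```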
I trained my agent for hours, and after 4,000,000 iterations I end up with an agent that almost never goes out of the map but crashes into itself very quickly.
So it seems it has learned how to avoid crashing into the border of the map but not into itself. What could explain this?
Some examples:
My suppositions are:
I used a replay memory size of 25,000 instead of 50,000 because of an OOM error. Is that enough? (A minimal sketch of what I mean by replay memory follows this list.)
I did not train it long enough, but how can I know?
The border of the map never changes, so it is easy to learn to avoid, but the agent's own tail changes at every new game. Should I give a worse (more negative) reward for crashing into itself so that the agent takes it more into account?
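Regarding the first supposition, here is a minimal sketch of the replay memory I mean (names illustrative; only the maxlen matters):

```python
import random
from collections import deque

# Bounded FIFO of transitions; shrunk from 50,000 to 25,000 to avoid OOM.
replay_memory = deque(maxlen=25_000)

def store(state, action, reward, next_state, done):
    """Append one transition; the oldest one is evicted when full."""
    replay_memory.append((state, action, reward, next_state, done))

def sample_batch(batch_size=32):
    """Uniform random mini-batch for a DQN update."""
    return random.sample(replay_memory, batch_size)
```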
Here are my learning curves:
I am asking for your help because it takes a lot of time to train my agent and I can't be sure of what I should do.
Thanks in advance for any help.
I'm reading about pipelined MIPS processor design in the book "Digital Design and Computer Architecture (Second Edition)" by David Money Harris and Sarah L. Harris.
In section 7.5.3 "Hazards" it says (page 415):
The register file can be read and written in the same cycle. The write takes place during the first half of the cycle and the read takes place during the second half of the cycle, so a register can be written and read back in the same cycle without introducing a hazard.
My question is: why can't the register file just get read and written simultaneously?
Actually, my question is quite similar to this Stack Exchange one, but the answers there do not make it fully clear to me. Also, I'm not allowed to comment there due to lack of reputation, so I'm starting this question.
I think that for an SRAM register file with 2 read ports and 1 write port, as shown on Wikipedia, it seems perfectly legal to read and write the same address simultaneously. Although the write operation will make the bits stored in the cross-coupled inverters unstable for a while, as long as the clock cycle of the pipelined processor is long enough, the bits will stabilize. Therefore the read operation, which is fully combinational, can get the correct data. So why not read and write simultaneously?
My second question is: if we must use a register file as suggested by the book, which writes in the first half of the cycle and reads in the second half, how do we implement this register file as a circuit?
My naive solution is to redefine the write_enable and read_enable signals of the register file: let write_enable = write_enable & clock and read_enable = read_enable & ~clock. But the book seems to suggest writing on the falling edge; see the HDL Example 7.6 register file code comment (page 435):
for pipelined processor, write third port on falling edge of clk
I would assume a clock cycle starts with 1 in the first half, then drops to 0 in the second half. Therefore I feel writing on the falling edge actually results in writing in the second half of the clock cycle, not the first half. What's more, it does nothing to ensure reading in the second half of the cycle. How can it work?
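To make my naive solution concrete, here is a purely behavioral Python sketch of my mental model (clock high in the first half of the cycle, low in the second; this is just how I picture it, not real HDL):

```python
# My naive gating idea, simulated one half-cycle at a time:
#   effective write enable = write_enable AND clock        (first half)
#   effective read  enable = read_enable  AND (NOT clock)  (second half)
regfile = [0] * 32

def half_cycle(clock, write_enable, waddr, wdata, read_enable, raddr):
    """Simulate one half of the clock cycle with the gated enables."""
    if write_enable and clock:      # write only while the clock is high
        regfile[waddr] = wdata
    if read_enable and not clock:   # read only while the clock is low
        return regfile[raddr]
    return None

# One full cycle: write r5 in the first half, read it back in the second.
half_cycle(clock=1, write_enable=1, waddr=5, wdata=42, read_enable=0, raddr=5)
value = half_cycle(clock=0, write_enable=0, waddr=5, wdata=0, read_enable=1, raddr=5)
print(value)  # 42 -- written and read back within the same cycle
```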
Thanks in advance.
So I want to learn reinforcement learning by working through some examples. I wrote a 2048 game, but I don't know if I'm training it right. As I understand it, I have to create a neural network. I have created 16 inputs, one for each number on the board, then hidden layers of 12x8 and 4 outputs for the moves (up, right, down, left), with a linear activation for the last layer and ReLU for the rest. Then I run one full game and save all the moves and rewards (0 when nothing happened, -2 for moves that do nothing, -1 when a move lost the game, and the earned score when a move does something). When the game ends, I run the backpropagation algorithm starting from the last move. Am I doing it right? And I know there are libraries like TensorFlow, but I want to understand it all myself.
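To be concrete, this is roughly the end-of-game update I'm doing, in a simplified Python sketch (the forward/backward calls stand in for my own network code, and the discount factor is an assumption since I didn't state one):

```python
# Walk the finished episode backwards, accumulate a discounted return,
# and push Q(state, taken_action) toward that return.
GAMMA = 0.99  # assumed discount factor

def train_on_episode(episode, q_network):
    """episode: list of (state, action, reward) tuples from one full game."""
    target_return = 0.0
    for state, action, reward in reversed(episode):
        target_return = reward + GAMMA * target_return
        q_values = q_network.forward(state)   # 4 outputs: up, right, down, left
        q_values[action] = target_return      # target only for the move taken
        q_network.backward(state, q_values)   # one backprop step toward target
```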
I would consult this GitHub repo, as it accomplishes exactly what you are trying to do.
You can actually use the above solution live here.
If you want to actually learn the fundamentals of how that all works, that's beyond the scope of what a single post on StackOverflow can provide.
I'm currently measuring the signal from a 3-axis vibration sensor. I want to compute an FFT of my signal to analyze its frequency content. Does anyone have an idea how to do this in LabVIEW?
There is an FFT VI under Signal Processing >> Transforms on the Functions Palette that should do what you're asking. Probably not a bad place to start.
Check out Signal Processing >> Waveform Measurements as well. This has slightly higher level functions which will work out all of the magnitudes and frequencies for you as well. These do require the full development system though.
Somehow I want to use the DC component from my Arduino ADC and compute an FFT of it. I tried the configuration shown here: http://i.stack.imgur.com/Yr5FP.png But the FFT output I get doesn't seem to be affected much when the ADC reading is subjected to large voltage changes over time.
Does anyone have a better way to use an FFT to analyze the voltage signal from my ADC?
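For comparison, outside LabVIEW the same computation in numpy shows where a sustained voltage change ends up: almost entirely in bin 0 (the DC term), which would explain why the rest of the spectrum barely reacts. The sample rate and signal below are made up for illustration:

```python
import numpy as np

# Simulated ADC trace: a DC offset plus slow drift, plus a 50 Hz component.
fs = 1000                      # assumed sample rate, Hz
n = 1000                       # one second of samples -> 1 Hz bin spacing
t = np.arange(n) / fs
signal = 2.0 + 0.5 * t + 0.3 * np.sin(2 * np.pi * 50 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(n, d=1 / fs)

print(abs(spectrum[0]) / n)               # bin 0: the mean (DC) level
peak = np.argmax(abs(spectrum[1:])) + 1   # largest non-DC bin
print(freqs[peak])                        # ~50 Hz
```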
So I have been making a simple HTML5 tuner using the Web Audio API. I have it all set up to respond to the correct frequencies; the problem seems to be with getting the actual frequencies. Using the input, I create an array of the spectrum, where I look for the highest value and use that frequency as the one to feed into the tuner. The problem is that when creating an analyser in Web Audio, it cannot be made more precise than an FFT size of 2048. With that, if I play a 440 Hz note, the closest value in the array is something like 430 Hz, and the next value seems to be higher than 440. Therefore the tuner will think I am playing these notes, when in fact the loudest frequency should be 440 Hz and not 430 Hz. Since this frequency does not exist in the analyser array, I am trying to figure out a way around this, or whether I am missing something very obvious.
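For what it's worth, I did the math on the bin spacing, and it matches the numbers I'm seeing (assuming the default 44.1 kHz sample rate):

```python
# FFT bin spacing for an analyser with fftSize = 2048 at 44.1 kHz.
sample_rate = 44100.0
fft_size = 2048

bin_width = sample_rate / fft_size   # ~21.5 Hz per bin
print(20 * bin_width)                # bin 20: ~430.7 Hz (the "430 Hz" I see)
print(21 * bin_width)                # bin 21: ~452.2 Hz (next value, above 440)
```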
I am very new at this so any help would be very appreciated.
Thanks
There are a number of approaches to implementing pitch detection. This paper provides a review of them. Their conclusion is that using FFTs may not be the best way to go - however, it's unclear quite what their FFT-based algorithm actually did.
If you're simply tuning guitar strings to fixed frequencies, much simpler approaches exist. Building a fully chromatic tuner that does not know a-priori the frequency to expect is hard.
The FFT approach you're using is entirely possible (I've built a robust musical instrument tuner using this approach that is being used white-label by a number of 3rd parties). However you need a significant amount of post-processing of the FFT data.
To start, you solve the resolution problem using the short-time Fourier transform (STFT) - or, more precisely, a succession of them. The process is described nicely in this article.
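One common refinement at this stage (not necessarily the one the article uses) is parabolic interpolation of the log-magnitude spectrum around the peak bin, which gives sub-bin frequency resolution cheaply:

```python
def refine_peak(log_mag, k):
    """Fit a parabola through bins k-1, k, k+1 of a log-magnitude
    spectrum and return the fractional offset of the true peak."""
    a, b, c = log_mag[k - 1], log_mag[k], log_mag[k + 1]
    return 0.5 * (a - c) / (a - 2 * b + c)

# Usage: convert the refined fractional bin index back to Hz.
# freq_hz = (k + refine_peak(log_mag, k)) * sample_rate / fft_size
```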
If you intend to build a tuner for guitar and bass guitar (and let's face it, everyone who asks this question here does), you'll need at least a 4096-point DFT with overlapping windows in order not to violate the Nyquist rate on the bottom E1 string at ~41 Hz.
You have a bunch of other algorithmic and usability hurdles to overcome. Not least, perceived pitch and the spectral peak aren't always the same. Simply taking the spectral peak from the STFT doesn't work reliably (this is also why the basic autocorrelation approach is broken).