AD9833 Frequency Control Using STM32F030F4P6 - function

I'm using an STM32F030F4P6 with STM32Cube to drive an AD9833 signal generator.
I can generate signals, but I can't change the frequency. The Analog Devices app note contains an example.
Based on that example, I wrote the following code:
void AD9833_SetFRQ(float FRQ) {
    uint32_t freq  = 0;
    uint32_t freq0 = 0;
    uint32_t freq1 = 0;

    freq  = (int)(((FRQ * pow(2, 28)) / FMCLK) + 1); // Tuning word
    freq0 = (freq & 0x3fff) | (1 << 14);             // FREQ LSB
    freq1 = (freq >> 14)    | (1 << 14);             // FREQ MSB

    AD9833_Reset();

    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_1, GPIO_PIN_RESET);
    HAL_SPI_Transmit(&hspi1, (uint8_t*)&freq0, 1, 10);
    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_1, GPIO_PIN_SET);

    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_1, GPIO_PIN_RESET);
    HAL_SPI_Transmit(&hspi1, (uint8_t*)&freq1, 1, 10);
    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_1, GPIO_PIN_SET);

    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_1, GPIO_PIN_RESET);
    HAL_SPI_Transmit(&hspi1, (uint8_t*)&A1, 1, 10);
    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_1, GPIO_PIN_SET);

    AD9833_Set();
}
My data output matches the Analog Devices example exactly, as you can see in the logic analyzer image.
Still no luck changing the frequency. :(
What is the problem?

Did you make sure the control bits (e.g. DB13, and DB15/DB14) are written correctly?
Also, do you toggle reset at the end?
Check the "Command sequence explained" section of
https://www.analog.com/media/en/technical-documentation/application-notes/AN-1070.pdf


Can you use #defines like method parameters in HLSL?

In HLSL, is there a way to make defines act like swappable methods? My use case is creating a method that does fractal Brownian noise with a sampling function (x, y). Ideally I would have a parameter that is a method, and just call that parameter, but I can't seem to do that in HLSL in Unity. It wouldn't make sense to copy and paste the entire fractal Brownian method and change just the one sampler line, especially if I'm using multiple layers of different noise functions for a final output. But I can't find out how to do it.
Here is what I've tried:
#define NOISE_SAMPLE Random(x, y)
float FBM()
{
    ...
    float somevalue = NOISE_SAMPLE;
    ...
}
And in a compute buffer, I have something like this:
void CSMain(uint3 id : SV_DispatchThreadID)
{
    ...
    #undef NOISE_SAMPLE
    #define NOISE_SAMPLE Perlin(x, y)
    float result = FBM();
    ...
}
However this doesn't seem to work. If I use NOISE_SAMPLE in the CSMain function, it uses the Perlin version. However, calling FBM() still uses the random version. This doesn't seem to make sense as I've read elsewhere that all functions are inline, so I thought the FBM function would 'inline' itself below the redefinition with the Perlin version. Why is this the case and what are some options for my use case?
This doesn't work, as a #define is a preprocessor instruction, and the preprocessor does its work before any other part of the HLSL compiler. So, even though your function is eventually inlined, this inlining only happens long after the preprocessor has run. In fact, the preprocessor is basically doing a purely string-based find-and-replace (just slightly smarter) before the actual compiler even sees your code. It isn't even aware of the concept of a function.
Off the top of my head, I can think of two options for your use case:
You could pass an integer as a parameter to your FBM() method, which identifies your noise function, and then have a switch (or an if-else-chain) inside your FBM() method, which selects the proper noise function based on this integer. Since the integer is passed as a compile-time constant, I'd expect that the compiler optimizes that branching away (and even if it doesn't, the cost of such a branch is fairly low, since all threads are always taking the same path through the code):
float FBM(uint noise)
{
    ...
    float somevalue = 0.0f;
    if(noise == 0)
        somevalue = Random(x, y);
    else
        somevalue = Perlin(x, y);
    ...
}

void CSMain(uint3 id : SV_DispatchThreadID)
{
    ...
    float result = FBM(1);
    ...
}
You could write your whole FBM() method as a preprocessor macro instead of a function (you can end a line in a #define with \ to have the macro span multiple lines). This is a bit more cumbersome, but your #undef and #define would work, as the inlining is then actually done by the preprocessor as well.
#define NOISE_SAMPLE Random(x, y)

#define FBM { \
    ... \
    float somevalue = NOISE_SAMPLE; \
    ... \
    result = ...; \
}

void CSMain(uint3 id : SV_DispatchThreadID)
{
    float result = 0.0f;
    ...
    #undef NOISE_SAMPLE
    #define NOISE_SAMPLE Perlin(x, y)
    FBM;
    ...
}
(Note that, with this approach, the compiler errors/warnings will never reference a line inside the FBM macro, but only ever the line(s) where the FBM macro is being called, so debugging these errors/warnings is slightly harder)

How to calculate start address of the stack of a pie binary on a system with full ASLR enabled?

I have a PIE binary (ELF) with NX and PIE enabled, on a system with full ASLR enabled.
I'm having trouble finding the stack's base address. I want to ret2mprotect and then ret2shellcode on the stack,
but my exploit doesn't work because the offset we use to calculate the stack's base address changes on every run of the binary.
Could you please help me solve the problem?
Here is my vulnerable code:
#include <stdio.h>

int main(int argc, char* argv[]) {
    char buffer[256];
    printf(argv[1]);
    printf("\n");
    gets(buffer);
    return 0;
}
my compile command:
gcc -Wall -g -O0 -Wl,-rpath,./ -fcf-protection=none -fno-stack-protector -fpic vuln.c -o vuln
I got the required leaks from this format string (the value below is passed to the program):
%1$p:%41$p
The 1st leaked address is in the range of the stack.
The 41st leaked address is in the range of the libc library.
I got these sample values to determine the offsets (ex means example):
exStackStart = 0x7ffee8986000
exStackEnd = 0x7ffee89a8000
exLibcStart = 0x7fea59ec5000
exLibcEnd = 0x7fea5a03d000
exGetsBuffer = 0x7ffee89a5d50
exStackLeaked = 0x7ffee89a5f48
exLibcLeaked = 0x7fea59ec7083
offsetStackLeaked2StackEnd = exStackEnd - exStackLeaked
offsetStackLeaked2GetsBuffer = exStackLeaked - exGetsBuffer
offsetLibcLeaked2LibcStart = exLibcLeaked - exLibcStart
stackSize = exStackEnd - exStackStart
io = start()
output = str(io.recvuntil(b'\n')).split(':')
runStackLeaked = int(output[0][2:], 16)
runLibcLeaked = int(output[1][:-3], 16)
runStackEnd = runStackLeaked + offsetStackLeaked2StackEnd
runStackStartRaw = runStackEnd - stackSize
runStackStart = runStackStartRaw & (~4095)  # <== here is the problem! sometimes this is correct and sometimes it is 0x1000 or 0x2000 away from the correct address
runGetsBuffer = runStackLeaked - offsetStackLeaked2GetsBuffer
runLibcStart = runLibcLeaked - offsetLibcLeaked2LibcStart
io.send(b'A' * 300 + b'\n')
io.interactive()
Update:
The vuln binary gives me an address which is in the range of the stack (I've checked using the vmmap command in gdb) via the format-string vulnerability.
Full ASLR is enabled on my OS, so the stack address varies on each run of the binary. I would like to find an offset that I can add to/subtract from the leaked address to get the start of the stack (which vmmap shows me).
I have checked the distance between the leaked address and the start of the stack and noted it down, so that I can automatically calculate the start address of the stack just by adding/subtracting the offset to/from the leaked address, without manually checking in gdb.
But unfortunately the offset differs on every run of the binary, so the offset we obtained is not valid.
Offsets from the leaked address to the stack's start address (from gdb's vmmap command), in decimal:
run1: 127480
run2: 128584
run3: 130632
run4: 126008
run5: 127656
As you can see, the distance between the leaked address and the start of the stack differs on each run.
What causes this, and how can I calculate the stack's start address despite it?
The following two pictures are from two runs of the program; the result is not what we expected.
result of run1
result of run2
I expected the offset from the start of the stack to any location in the stack to be fixed, but it varies for my binary each time. As far as I know, ASLR only affects the base address of memory regions (stack, heap, ...), and offsets between variables inside one memory region should be fixed.
The offset from the buffer to the start of the stack frame is indeed fixed. The offset from the buffer to the start of the stack mapping is not: the kernel places the argument strings, environment variables, auxiliary vector, and a random amount of padding at the top of the stack, and their total size varies from run to run, so the distance between main's frame and the mapping boundaries changes on every execution. Compute your target addresses relative to the leaked stack address (whose distance to the buffer is constant) rather than relative to the mapping's base.

How to best make use of the same constants in both host and device code?

Suppose I have some global constant data I use in host-side code:
const float my_array[20] = { 45.146, 54.633, 74.669, 12.734, 74.240, 100.524 };
(Note: I've kept them C-ish, no constexpr here.)
I now want to also use these in device-side code. I can't simply start using them: They are not directly accessible from the device, and trying to use them gives:
error: identifier "my_array" is undefined in device code
What is, or what are, the idiomatic way(s) to make such constants usable on both the host and the device?
This approach was suggested by Mark Harris in an answer in 2012:
#define MY_ARRAY_VALUES 45.146, 54.633, 74.669, 12.734, 74.240, 100.524
__constant__ float device_side_my_array[20] = { MY_ARRAY_VALUES };
const float host_side_my_array[20] = { MY_ARRAY_VALUES };
#undef MY_ARRAY_VALUES

__device__ __host__ float my_array(size_t i) {
#ifdef __CUDA_ARCH__
    return device_side_my_array[i];
#else
    return host_side_my_array[i];
#endif
}
But this has some drawbacks:
Not actually using the same constants, just constants with the same value.
Duplication of data.
Takes up constant memory, which is a rather limited resource.
Seems a bit verbose (although maybe other options are even more so).
I wonder if this is what most people use in practice.
Note:
In C++ one might use the same name, but in different sub-namespaces within the detail:: namespace.
Doesn't use cudaMemcpyToSymbol().

Read SPI Eeprom using a pointer does not work but works when not using a pointer

I am new to programming and I'm trying to read a page (64 bytes) from an SPI EEPROM. I got it working when reading into an array[67] (3 transmitted bytes to start the read via SPI + 64 bytes of data).
I am using IAR Embedded Workbench, working on an STM32L475.
When I try to use pointers it does not work, probably a beginner's mistake, but I'd appreciate some help solving it.
I am using a union like this (I know I'm wasting memory, but for the test it is like this):
//Production Data union
// Production data union
union Production_Data_union
{
    struct
    {
        uint8_t Dummy_Array[3];
        char Xxxx_Sn[16];
        char Yyyy_Sn[16];
        char Prod_Date[8];
        char Firmware_Ver[8];
    };
    uint8_t Eeprom_Page0_Buffer[67];
};

union Production_Data_union Prod_Data;
uint8_t *Eeprom_Page0_Ptr;
uint8_t Read_Cmd[3] = {0x03, 0x00, 0x00};
uint8_t Buff[67];
uint8_t Eeprom_Page_Size = 64;

void Eeprom_Page_Read(uint8_t *Data, uint8_t Page_No);
My Main looks like this:
Eeprom_Page0_Ptr = (uint8_t*)&Prod_Data.Eeprom_Page0_Buffer;
Eeprom_Page_Read(Eeprom_Page0_Ptr, 0);
The Eeprom_Page_Read function:
void Eeprom_Page_Read(uint8_t *Data, uint8_t Page_No)
{
    uint16_t Address;

    Address = Page_No * Eeprom_Page_Size;
    Read_Cmd[2] = Address & 0xFF;
    Read_Cmd[1] = (Address >> 8) & 0xFF;

    // Send READ command to the EEPROM
    HAL_GPIO_WritePin(GPIOD, GPIO_PIN_2, GPIO_PIN_RESET);
    if(HAL_SPI_TransmitReceive(&hspi3, (uint8_t*)Read_Cmd, (uint8_t*)&Data, (Eeprom_Page_Size + 3), 5000) != HAL_OK)
    {
        Error_Handler();
    }

    printf("Prod_Data:\n - Xxxx SN %s\n - Yyyy SN %s\n - Prod date %s - Firmware %s\n - Cmd - %d - %d - %d\n",
           Prod_Data.Xxxx_Sn,
           Prod_Data.Yyyy_Sn,
           Prod_Data.Prod_Date,
           Prod_Data.Firmware_Ver,
           Read_Cmd[0],
           Read_Cmd[1],
           Read_Cmd[2]);

    // Wait for the SPI transfer to complete
    while (HAL_SPI_GetState(&hspi3) != HAL_SPI_STATE_READY)
    {
    }
    HAL_GPIO_WritePin(GPIOD, GPIO_PIN_2, GPIO_PIN_SET);

    Read_E2prom = 0;
}
I know the content of the EEPROM is OK, and I can read it if I replace "&Data" with "Buff" (array[67]) in the HAL_SPI_TransmitReceive(...) call.
The pointer value is the start address of the structure (0x20000090),
so the addressing and so on should be OK, but the struct is empty when using a pointer.
I am mostly interested in why this does not work, and in a fix, not so much in comments like "why don't you do it like this instead"; I want to learn what I have done wrong, because this approach, I believe, should work.
Please remember that I AM NEW at programming, so please explain "for dummies".
The function HAL_SPI_TransmitReceive wants a pointer so it knows where to store the data it receives. It essentially wants the address where it should place the bytes. In your case, according to the line
    void Eeprom_Page_Read(uint8_t *Data, uint8_t Page_No) {...}
Data is already a pointer, because it's declared with a *. This means that Data points to some uint8_t value/array somewhere, and that somewhere is where you want the SPI to write.
When you added the &, you gave the SPI a pointer to the pointer. So the SPI writes the received data over the pointer itself, instead of at the location the pointer points to.
If that doesn't make sense, ask me again. It's a hard thing to explain.
Update:
This is, as I understand it, not a pointer but an array?
The compiler only sees it as a pointer. In your case the pointer happens to point to an array, but it could point (almost) anywhere in memory. I implore you to think in terms of a pointer to the first element of an array (*ptr == array[0]), rather than in terms of an array.
Is it somehow implicit, so the compiler knows what I want to do and just accepts it and compiles correctly?
I'm not sure whether the compiler should compile it successfully or not, but you should not rely on that. It happens often that one passes pointers to pointers around (**ptr), so the compiler just assumes you know what you are doing. You must take great care with how you work with your pointers.
OK, after trying to solve this for a day or so, I finally found the mistake: it should not be &Data but just Data, so it must look like this:
    if(HAL_SPI_TransmitReceive(&hspi3, (uint8_t*)Read_Cmd, (uint8_t*)Data, (Eeprom_Page_Size + 3), 5000) != HAL_OK)
    {
        /* Transfer error in transmission process */
        Error_Handler();
    }
I am still not sure why that is, though.

Understanding heisenbug example: different precision in registers vs main memory

I read the Wikipedia page about heisenbugs, but I don't understand this example. Can anyone explain it in detail?
One common example
of a heisenbug is a bug that appears when the program is compiled with an
optimizing compiler, but not when the same program is compiled without
optimization (as is often done for the purpose of examining it with a debugger).
While debugging, values that an optimized program would normally keep in
registers are often pushed to main memory. This may affect, for instance, the
result of floating-point comparisons, since the value in memory may have smaller
range and accuracy than the value in the register.
Here's a concrete example recently posted:
Infinite loop heisenbug: it exits if I add a printout
It's a really nice specimen because we can all reproduce it: http://ideone.com/rjY5kQ
These bugs are so dependent on very precise features of the platform that people find them very difficult to reproduce.
In this case, when the print-out is omitted, the program performs a high-precision comparison inside the CPU registers (at higher precision than a stored double).
But to print the value, the compiler moves the result out to main memory, which implicitly truncates the precision; when the truncated value is then used for the comparison, the comparison succeeds.
#include <iostream>
#include <cmath>

double up = 19.0 + (61.0/125.0);
double down = -32.0 - (2.0/3.0);
double rectangle = (up - down) * 8.0;

double f(double x) {
    return (pow(x, 4.0)/500.0) - (pow(x, 2.0)/200.0) - 0.012;
}

double g(double x) {
    return -(pow(x, 3.0)/30.0) + (x/20.0) + (1.0/6.0);
}

double area_upper(double x, double step) {
    return (((up - f(x)) + (up - f(x + step))) * step) / 2.0;
}

double area_lower(double x, double step) {
    return (((g(x) - down) + (g(x + step) - down)) * step) / 2.0;
}

double area(double x, double step) {
    return area_upper(x, step) + area_lower(x, step);
}

int main() {
    double current = 0, last = 0, step = 1.0;
    do {
        last = current;
        step /= 10.0;
        current = 0;
        for(double x = 2.0; x < 10.0; x += step) current += area(x, step);
        current = rectangle - current;
        current = round(current * 1000.0) / 1000.0;
        //std::cout << current << std::endl; //<-- COMMENT BACK IN TO "FIX" BUG
    } while(current != last);
    std::cout << current << std::endl;
    return 0;
}
Edit: Verified that the bug and the fix still reproduce: 03-Feb-22, 20-Feb-17.
The name comes from the Uncertainty Principle, which states that there is a fundamental limit to the precision with which certain pairs of physical properties of a particle can be known simultaneously. If you observe a particle too closely (i.e., you know its position precisely), then you can't measure its momentum precisely (and if you know its exact speed, you can't tell its exact position).
Following this, a heisenbug is a bug which disappears when you watch it closely.
In your example, when you need the program to perform well, you compile it with optimization, and there is a bug. But as soon as you enter debugging mode, you compile without optimization, which removes the bug.
So if you observe the bug too closely, you become unable to pin down its properties (or to find it at all), which resembles Heisenberg's Uncertainty Principle, hence the name heisenbug.
The idea is that code is compiled into two states: normal/debug mode, and optimized/production mode.
Just as it is important to know what happens to matter at the quantum level, we should also know what happens to our code at the compiler level!