I need expert advice on printing the time in hours, minutes and seconds

int main() {
    int hh, mm, ss;
    char time[2];
    printf("enter the time");
    scanf("%d %d %d %s", &hh, &mm, &ss, &time);
    printf("%d", ss);
    printf("hh:mm:ss time:%d:%d:%d %s", hh, mm, ss, time);
    //for(i=0;i<3;i++)
    //printf("%c",time[i]);
    return 0;
}
What is wrong with this program? It always prints 0, no matter what non-zero integers I enter.

First of all, give more information when posting a question, rather than just "why does this not work"...
Now about the code.
For starters, gcc wouldn't even let me compile this, because you don't use the & sign when reading into an array of characters. And never allocate only a tiny amount of memory for a character array, especially if you don't know the size of the input.
Correct way:
char time[100];
scanf("%s", time);
When I removed the & sign it compiled and worked as it should. However, I have no idea what you wanted the char time[2]; for, so I removed it. Here is a perfectly working (and more elegant) version; do realize, though, that when the program scans an integer you must not feed it a string!
#include <stdio.h>
int main() {
    int hh, mm, ss;
    printf("Enter the time: ");
    scanf("%d %d %d", &hh, &mm, &ss);
    printf("hh:mm:ss time: %d:%d:%d\n", hh, mm, ss);
    return 0;
}
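If the char time[2] was meant to hold an AM/PM suffix (just a guess on my part), here is a minimal sketch of how you could read it safely alongside the numbers; note the suffix buffer has room for the terminating '\0' and is passed to scanf without &:

#include <stdio.h>

int main(void) {
    int hh, mm, ss;
    char suffix[3];                      /* "AM"/"PM" plus the terminating '\0' */

    printf("Enter the time (e.g. 11 30 45 PM): ");
    if (scanf("%d %d %d %2s", &hh, &mm, &ss, suffix) != 4) {
        fprintf(stderr, "bad input\n");
        return 1;
    }
    printf("hh:mm:ss time: %02d:%02d:%02d %s\n", hh, mm, ss, suffix);
    return 0;
}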


Converting uint32_t to binary in C

The main problem I'm having is reading out values in binary in C. Python and C# have some really quick/easy functions to do this; I found a topic on how to do it in C++ and a topic on how to convert an int to binary in C, but not on how to convert a uint32_t to binary in C.
What I am trying to do is read, bit by bit, the 32 bits at the DR_REG_RNG_BASE address of an ESP32 (this is the address where the random values of the ESP's hardware random number generator are stored).
So for the moment I was doing this:
#define DR_REG_RNG_BASE 0x3ff75144

void printBitByBit( ){
    // READ_PERI_REG is the ESP32 function to read DR_REG_RNG_BASE
    uint32_t rndval = READ_PERI_REG(DR_REG_RNG_BASE);
    int i;
    for (i = 1; i <= 32; i++){
        int mask = 1 << i;
        int masked_n = rndval & mask;
        int thebit = masked_n >> i;
        Serial.printf("%i", thebit);
    }
    Serial.println("\n");
}
At first I thought it was working well, but in fact it gives me binary representations that are completely wrong. Any ideas?
Your shown code has a number of errors/issues.
First, bit positions for a uint32_t (32-bit unsigned integer) are zero-based – so, they run from 0 thru 31, not from 1 thru 32, as your code assumes. Thus, in your code, you are (effectively) ignoring the lowest bit (bit #0); further, when you do the 1 << i on the last loop (when i == 32), your mask will (most likely) have a value of zero (although that shift is, technically, undefined behaviour for a signed integer, as your code uses), so you'll also drop the highest bit.
Second, your code prints (from left-to-right) the lowest bit first, but you want (presumably) to print the highest bit first, as is normal. So, you should run the loop with the i index starting at 31 and decrement it to zero.
Also, your code mixes and mingles unsigned and signed integer types. This sort of thing is best avoided – so it's better to use uint32_t for the intermediate values used in the loop.
Lastly (as mentioned by Eric in the comments), there is a far simpler way to extract "bit n" from an unsigned integer: just use value >> n & 1.
I don't have access to an Arduino platform but, to demonstrate the points made in the above discussion, here is a standard, console-mode C++ program that compares the output of your code to versions with the aforementioned corrections applied:
#include <cstdio>
#include <cstdint>
#include <cinttypes>

int main()
{
    uint32_t test = 0x84FF0048uL;
    int i;

    // Your code ...
    for (i = 1; i <= 32; i++) {
        int mask = 1 << i;
        int masked_n = test & mask;
        int thebit = masked_n >> i;
        printf("%i", thebit);
    }
    printf("\n");

    // Corrected limits/order/types ...
    for (i = 31; i >= 0; --i) {
        uint32_t mask = (uint32_t)(1) << i;
        uint32_t masked_n = test & mask;
        uint32_t thebit = masked_n >> i;
        printf("%" PRIu32, thebit);
    }
    printf("\n");

    // Better ...
    for (i = 31; i >= 0; --i) {
        printf("%" PRIu32, test >> i & 1);
    }
    printf("\n");
    return 0;
}
The three lines of output (first one wrong, as you know; last two correct) are:
001001000000000111111110010000-10
10000100111111110000000001001000
10000100111111110000000001001000
Notes:
(1) On the use of the funny-looking "%" PRIu32 format specifier for printing the uint32_t values, see: printf format specifiers for uint32_t and size_t.
(2) The cast on the (uint32_t)(1) constant will ensure that the bit-shift is safe, even when int and unsigned are 16-bit types; without that, you would get undefined behaviour in such a case.
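If you just want this as a small, reusable helper in plain C (a sketch using the (value >> i) & 1 trick described above):

#include <stdio.h>
#include <stdint.h>

/* Print the 32 bits of v, most significant bit first. */
static void print_bits32(uint32_t v)
{
    for (int i = 31; i >= 0; --i) {
        putchar(((v >> i) & 1u) ? '1' : '0');
    }
    putchar('\n');
}

int main(void)
{
    print_bits32(0x84FF0048u);   /* prints 10000100111111110000000001001000 */
    return 0;
}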
When you print a binary string representation of a number, you print the Most Significant Bit (MSB) first, whether the number is a uint32_t or a uint16_t. So you need a mask that tests whether the MSB is 1 or 0, i.e. a mask of 0x80000000, and you shift it down by one on each iteration.
#define DR_REG_RNG_BASE 0x3ff75144

void printBitByBit( ){
    // READ_PERI_REG is the ESP32 function to read DR_REG_RNG_BASE
    uint32_t rndval = READ_PERI_REG(DR_REG_RNG_BASE);
    Serial.println(rndval, HEX);     // print the value in hex for verification
    uint32_t mask = 0x80000000;
    for (int i = 0; i < 32; i++) {   // 32 iterations, one per bit
        Serial.print((rndval & mask) ? "1" : "0");
        mask = mask >> 1;            // move the mask down to the next bit
    }
    Serial.println();
}
For Arduino, there are actually a couple of built-in functions that can print out the binary string representation of a number. Serial.print(x, BIN) allows you to specify the number base on the 2nd function argument.
Another function that can achieve the same result is itoa(x, str, base) which is not part of standard ANSI C or C++, but available in Arduino to allow you to convert the number x to a str with number base specified.
char str[33];
itoa(rndval, str, 2);
Serial.println(str);
However, neither function pads with leading zeros; see the results here (a zero-padding workaround is sketched after the output):
36E68B6D // rndval in HEX
00110110111001101000101101101101 // print by our function
110110111001101000101101101101 // print by Serial.print(rndval, BIN)
110110111001101000101101101101 // print by itoa(rndval, str, 2)
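If you do want the fixed 32-character, zero-padded form together with itoa, one option (just a sketch; it assumes rndval is the uint32_t read earlier and uses the Arduino itoa mentioned above) is to format into a temporary buffer and copy it right-aligned into a zero-filled string:

#include <string.h>

char tmp[33];
char padded[33];
itoa(rndval, tmp, 2);                    // unpadded binary string
size_t len = strlen(tmp);
memset(padded, '0', 32);                 // pre-fill with leading zeros
padded[32] = '\0';
memcpy(padded + (32 - len), tmp, len);   // right-align the digits
Serial.println(padded);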
BTW, Arduino is C++, so don't use the c tag for your post. I changed it for you.

Read SPI Eeprom using a pointer does not work but works when not using a pointer

I am new to programming and I am trying to read a page (64 bytes) from an SPI EEPROM. I got it working when reading into an array[67] (3 transmitted bytes to start the read process via SPI + 64 bytes of data).
I am using IAR Workbench, working on an STM32L475.
When I try to use pointers it does not work. It is probably a no-brainer of a beginner's mistake, but I would appreciate some help to solve this.
I am using a union like this (I know I am wasting mem but for the test it is like this):
//Production Data union
union Production_Data_union
{
    struct
    {
        uint8_t Dummy_Array[3];
        char Xxxx_Sn[16];
        char Yyyy_Sn[16];
        char Prod_Date[8];
        char Firmware_Ver[8];
    };
    uint8_t Eeprom_Page0_Buffer[67];
};

union Production_Data_union Prod_Data;
uint8_t *Eeprom_Page0_Ptr;
uint8_t Read_Cmd[3] = {0x03, 0x00, 0x00};
uint8_t Buff[67];
uint8_t Eeprom_Page_Size = 64;

void Eeprom_Page_Read(uint8_t *Data, uint8_t Page_No);
My Main looks like this:
Eeprom_Page0_Ptr = (uint8_t*)&Prod_Data.Eeprom_Page0_Buffer;
Eeprom_Page_Read(Eeprom_Page0_Ptr, 0);
The Eeprom_Page_Read function:
void Eeprom_Page_Read(uint8_t *Data, uint8_t Page_No)
{
    uint16_t Address;
    Address = Page_No * Eeprom_Page_Size;
    Read_Cmd[2] = Address & 0xFF;
    Read_Cmd[1] = (Address >> 8) & 0xFF;

    //Send READ command to Eeprom
    HAL_GPIO_WritePin(GPIOD, GPIO_PIN_2, GPIO_PIN_RESET);
    if(HAL_SPI_TransmitReceive(&hspi3, (uint8_t*)Read_Cmd, (uint8_t *)&Data, (Eeprom_Page_Size +3), 5000) != HAL_OK)
    {
        Error_Handler();
    }
    printf("Prod_Data:\n - Xxxx SN %s\n - Yyyy SN %s\n - Prod date %s - Firmware %s\n - Cmd - %d - %d - %d\n",
           Prod_Data.Xxxx_Sn,
           Prod_Data.Yyyy_Sn,
           Prod_Data.Prod_Date,
           Prod_Data.Firmware_Ver,
           Read_Cmd[0],
           Read_Cmd[1],
           Read_Cmd[2]);

    //Wait for SPI transfer to complete
    while (HAL_SPI_GetState(&hspi3) != HAL_SPI_STATE_READY)
    {
    }
    HAL_GPIO_WritePin(GPIOD, GPIO_PIN_2, GPIO_PIN_SET);
    Read_E2prom = 0;
}
I know the content of the EEPROM is OK, and I can read it if I replace "&Data" with "Buff" (an array[67]) in the HAL_SPI_TransmitReceive(...) call.
The pointer value is the start address of the structure (0x20000090).
So the addressing and so on should be OK, but the struct stays empty when I use a pointer.
I am mostly interested in why this does not work and how to fix it, not so much in comments like "why don't you do it like this instead"; I want to learn what I have done wrong, because this approach, I believe, should work.
Please remember that I AM NEW at programming, so please explain "for Dummies".
The function HAL_SPI_TransmitReceive wants a pointer so it knows where to store the data it receives; essentially, it wants the address at which to place the bytes. In your case, according to the line
void Eeprom_Page_Read(uint8_t *Data, uint8_t Page_No){...}
Data is already a pointer, because it is declared with a *. This means that Data is a pointer pointing to some uint8_t value/array somewhere, and that somewhere is where you want the SPI driver to write.
When you added the &, you gave the SPI function a pointer to the pointer of where you want to write. So the SPI driver writes the received data over the pointer variable itself, instead of at the location the pointer points to.
If that doesn't make sense then ask me again. It's a hard thing to explain.
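To illustrate the difference outside of the HAL (a generic sketch, not the actual HAL API; fake_receive is a made-up stand-in for the SPI receive):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Toy stand-in for an SPI receive: writes bytes to wherever 'dst' points. */
static void fake_receive(uint8_t *dst, size_t len)
{
    memset(dst, 0xAB, len);        /* pretend these bytes arrived on the bus */
}

static void read_page(uint8_t *Data)
{
    fake_receive(Data, 4);         /* correct: pass the pointer itself */
    /* fake_receive((uint8_t *)&Data, 4);   wrong: &Data is the address of the
       local pointer variable, so the "received" bytes would overwrite the
       pointer on the stack instead of filling the caller's buffer. */
}

int main(void)
{
    uint8_t buffer[4] = {0};
    read_page(buffer);
    printf("%02X %02X %02X %02X\n", buffer[0], buffer[1], buffer[2], buffer[3]);
    return 0;
}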
Update:
This is, as I understand it, not a pointer but an array?
The compiler only sees it as a pointer. In your case the pointer happens to point to an array, but it could have pointed (almost) anywhere in memory. I implore you to think in terms of a pointer pointing to the first element of an array (*ptr == array[0]) rather than in terms of an array.
Is it somehow implicit, so the compiler knows what I want to do and just accepts it and compiles correctly?
I'm not sure whether the compiler should compile this successfully or not, but you should not rely on that. It happens often that one passes pointers to pointers around (**ptr), so the compiler is just going to assume you know what you are doing. You must therefore take great care with how you work with your pointers.
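For contrast, a pointer-to-pointer is the right tool when the callee has to change where the caller's pointer points, for example an out-parameter that allocates a buffer (a generic sketch, nothing to do with the HAL):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>

/* The callee changes where the caller's pointer points,
   so it needs the address of that pointer (uint8_t **). */
static int make_buffer(uint8_t **out, size_t len)
{
    *out = malloc(len);
    return (*out != NULL) ? 0 : -1;
}

int main(void)
{
    uint8_t *buf = NULL;
    if (make_buffer(&buf, 64) == 0) {   /* &buf is correct here: buf itself must change */
        buf[0] = 0x42;
        printf("first byte: 0x%02X\n", buf[0]);
        free(buf);
    }
    return 0;
}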
OK, after trying to solve this for a day or so, I finally found the mistake: it should not be &Data but just Data, so it must look like this:
if(HAL_SPI_TransmitReceive(&hspi3, (uint8_t*)Read_Cmd, (uint8_t *)Data, (Eeprom_Page_Size +3), 5000) != HAL_OK)
{
    /* Transfer error in transmission process */
    Error_Handler();
}
I am still not sure why that is, though.

Allocating using malloc inside a function passing the address of the pointer and using vector syntax?

What's wrong with the code below? I need to pass the address of the pointer *A to the function, read some numbers with scanf inside it, return to main, and print the numbers that were read in that function.
#include <stdio.h>
#include <stdlib.h>

void create_number_vector(int **number)
{
    (*number) = (int*)malloc(5*sizeof(int));
    int i;
    for(i=0; i<5; i++){
        scanf("%d",number[i]);
    }
}

int main(void){
    int i, *A;
    create_number_vector(&A);
    for(i=0; i<5; i++){
        printf("%d",A[i]);
    }
    return 0;
}
Except for one line (one concept, really), everything is pretty much OK.
The problematic line is:
scanf("%d",number[i]);
and it should be replaced with:
scanf("%d", *number+i);
Because the allocated memory is reached through the pointer, we have to use it that way: *number is the allocated block, and *number+i is the address of its i-th element, which is what scanf needs.
Of course you can keep using the "array"-style syntax instead:
scanf("%d", &(*number)[i]);
P.S.
Don't forget to free the allocated memory when you are done with it. Although a small program like this exits right after echoing the numbers, it's still good practice to always free your resources when you no longer need them.
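Putting it all together, a minimal corrected sketch (with the scanf fix, a malloc check, and the free added; the error handling is my own addition):

#include <stdio.h>
#include <stdlib.h>

void create_number_vector(int **number)
{
    *number = malloc(5 * sizeof **number);   /* space for 5 ints */
    if (*number == NULL) {
        fprintf(stderr, "malloc failed\n");
        exit(EXIT_FAILURE);
    }
    for (int i = 0; i < 5; i++) {
        scanf("%d", &(*number)[i]);          /* address of the i-th element */
    }
}

int main(void)
{
    int *A;
    create_number_vector(&A);
    for (int i = 0; i < 5; i++) {
        printf("%d ", A[i]);
    }
    printf("\n");
    free(A);                                 /* release the allocation */
    return 0;
}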

how to use the array in cudaPitchedPtr type data

I have encountered a problem when trying to use the array inside the data type cudaPitchedPtr.
I transfer the data from the main function to the __global__ function and print the value. I set the value to 12 in cudaMemset3D, but the result printed is 0.0000. Attached is my code. I would really appreciate it if someone could help me.
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include "cuPrintf.cu"
#include "stdio.h"
__global__ void printtest(double devptr[])
{
printf("%f\n",devptr[1]);
}
int main()
{
int width=191, height=192, depth=192;
cudaExtent extent= make_cudaExtent( width*sizeof(double),height,depth);
cudaPitchedPtr Ex;
cudaMalloc3D(&Ex,extent);
cudaMemset3D(Ex,12 ,extent);
printtest<<<1,1>>>( (double*) Ex.ptr);
}
The problem is that cudaMemset3D is used to set every byte in a range to a value. Note in the description:
value- Value to set for each byte of specified memory
So you are setting every byte in your allocated region to 12 (decimal). Then you're taking 8 of those bytes in a row and attempting to interpret them as a double-precision floating point value. Eight 0x0C bytes reinterpreted as a double give a number roughly on the order of 10^-250, which %f prints as 0.000000, so you're going to get results that aren't what you expect.
If you want to see something sensible, then after your cudaMalloc3D, instead of the cudaMemset3D, insert this code:
double myval = 1.3579;  // or whatever value you want to see
double *hostdata;

hostdata = (double *)malloc(width*sizeof(double)*height*depth);
if (hostdata == 0) { printf("malloc fail"); return 1; }
hostdata[1] = myval;

cudaMemcpy3DParms p = {0};
p.srcPtr = make_cudaPitchedPtr(hostdata, width*sizeof(double), width, height);
p.dstPtr = Ex;
p.extent = extent;
p.srcPos = make_cudaPos(0,0,0);
p.dstPos = make_cudaPos(0,0,0);
p.kind = cudaMemcpyHostToDevice;
cudaMemcpy3D(&p);
I'd also recommend using CUDA error checking after every API call and kernel launch in your code (a minimal helper macro is sketched below).
You may also be interested in this question/answer.
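On that error-checking point, here is a minimal helper macro you could wrap around the runtime calls (just a sketch; the calls shown in the usage comment are the ones from the code above):

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Wrap every CUDA runtime call with this to catch failures immediately.
#define CUDA_CHECK(call)                                           \
    do {                                                           \
        cudaError_t err_ = (call);                                 \
        if (err_ != cudaSuccess) {                                 \
            fprintf(stderr, "CUDA error: %s at %s:%d\n",           \
                    cudaGetErrorString(err_), __FILE__, __LINE__); \
            exit(EXIT_FAILURE);                                    \
        }                                                          \
    } while (0)

// Usage:
//   CUDA_CHECK(cudaMalloc3D(&Ex, extent));
//   CUDA_CHECK(cudaMemcpy3D(&p));
//   printtest<<<1,1>>>((double *)Ex.ptr);
//   CUDA_CHECK(cudaGetLastError());        // catches launch errors
//   CUDA_CHECK(cudaDeviceSynchronize());   // catches asynchronous execution errors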

CUDA global (as in C) dynamic arrays allocated to device memory

So, I'm trying to write some code that utilizes Nvidia's CUDA architecture. I noticed that copying to and from the device was really hurting my overall performance, so now I am trying to move a large amount of data onto the device.
As this data is used in numerous functions, I would like it to be global. Yes, I can pass pointers around, but I would really like to know how to work with globals in this instance.
So, I have device functions that want to access a device allocated array.
Ideally, I could do something like:
__device__ float* global_data;

main()
{
    cudaMalloc(global_data);
    kernel1<<<blah>>>(blah); //access global data
    kernel2<<<blah>>>(blah); //access global data again
}
However, I haven't figured out how to create a dynamic array. I figured out a workaround by declaring the array as follows:
__device__ float global_data[REALLY_LARGE_NUMBER];
And while that doesn't require a cudaMalloc call, I would prefer the dynamic allocation approach.
Something like this should probably work.
#include <algorithm>
#include <cstdio>
#include <cstdlib>

#define NDEBUG
#define CUT_CHECK_ERROR(errorMessage) do { \
        cudaThreadSynchronize(); \
        cudaError_t err = cudaGetLastError(); \
        if( cudaSuccess != err) { \
            fprintf(stderr, "Cuda error: %s in file '%s' in line %i : %s.\n", \
                    errorMessage, __FILE__, __LINE__, cudaGetErrorString( err) ); \
            exit(EXIT_FAILURE); \
        } } while (0)

__device__ float *devPtr;

__global__
void kernel1(float *some_neat_data)
{
    devPtr = some_neat_data;
}

__global__
void kernel2(void)
{
    devPtr[threadIdx.x] *= .3f;
}

int main(int argc, char *argv[])
{
    float* otherDevPtr;
    cudaMalloc((void**)&otherDevPtr, 256 * sizeof(*otherDevPtr));
    cudaMemset(otherDevPtr, 0, 256 * sizeof(*otherDevPtr));

    kernel1<<<1,128>>>(otherDevPtr);
    CUT_CHECK_ERROR("kernel1");

    kernel2<<<1,128>>>();
    CUT_CHECK_ERROR("kernel2");

    return 0;
}
Give it a whirl.
Spend some time focusing on the copious documentation offered by NVIDIA.
From the Programming Guide:
float* devPtr;
cudaMalloc((void**)&devPtr, 256 * sizeof(*devPtr));
cudaMemset(devPtr, 0, 256 * sizeof(*devPtr));
That's a simple example of how to allocate memory. Now, in your kernels, you should accept a pointer to a float like so:
__global__
void kernel1(float *some_neat_data)
{
    some_neat_data[threadIdx.x]++;
}

__global__
void kernel2(float *potentially_that_same_neat_data)
{
    potentially_that_same_neat_data[threadIdx.x] *= 0.3f;
}
So now you can invoke them like so:
float* devPtr;
cudaMalloc((void**)&devPtr, 256 * sizeof(*devPtr));
cudaMemset(devPtr, 0, 256 * sizeof(*devPtr));
kernel1<<<1,128>>>(devPtr);
kernel2<<<1,128>>>(devPtr);
"As this data is used in numerous functions, I would like it to be global."
There are few good reasons to use globals. This definitely is not one. I'll leave it as an exercise to expand this example to include moving "devPtr" to a global scope.
EDIT:
Ok, the fundamental problem is this: your kernels can only access device memory and the only global-scope pointers that they can use are GPU ones. When calling a kernel from your CPU, behind the scenes what happens is that the pointers and primitives get copied into GPU registers and/or shared memory before the kernel gets executed.
So the closest I can suggest is this: use cudaMemcpyToSymbol() to achieve your goals. But, in the background, consider that a different approach might be the Right Thing.
#include <algorithm>

__constant__ float devPtr[1024];

__global__
void kernel1(float *some_neat_data)
{
    some_neat_data[threadIdx.x] = devPtr[0] * devPtr[1];
}

__global__
void kernel2(float *potentially_that_same_neat_data)
{
    potentially_that_same_neat_data[threadIdx.x] *= devPtr[2];
}

int main(int argc, char *argv[])
{
    float some_data[256];
    for (int i = 0; i < sizeof(some_data) / sizeof(some_data[0]); i++)
    {
        some_data[i] = i * 2;
    }
    cudaMemcpyToSymbol(devPtr, some_data, std::min(sizeof(some_data), sizeof(devPtr)));

    float* otherDevPtr;
    cudaMalloc((void**)&otherDevPtr, 256 * sizeof(*otherDevPtr));
    cudaMemset(otherDevPtr, 0, 256 * sizeof(*otherDevPtr));

    kernel1<<<1,128>>>(otherDevPtr);
    kernel2<<<1,128>>>(otherDevPtr);

    return 0;
}
Don't forget '--host-compilation=c++' for this example.
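As a sketch of that "move devPtr to global scope" exercise (my own illustration; the names g_data, d_buf and the scale kernel are made up, and it uses the CUDA runtime API): cudaMalloc the buffer on the host, then copy the resulting pointer value into a __device__ pointer symbol with cudaMemcpyToSymbol, so kernels can use the global without taking it as a parameter.

#include <cuda_runtime.h>

__device__ float *g_data;          // global device-side pointer, filled in from the host

__global__ void scale(void)
{
    g_data[threadIdx.x] *= 0.3f;   // kernels use the global, no parameter needed
}

int main(void)
{
    float *d_buf;
    cudaMalloc((void **)&d_buf, 256 * sizeof(*d_buf));
    cudaMemset(d_buf, 0, 256 * sizeof(*d_buf));

    // Copy the pointer *value* (a device address) into the __device__ symbol g_data.
    cudaMemcpyToSymbol(g_data, &d_buf, sizeof(d_buf));

    scale<<<1, 128>>>();
    cudaDeviceSynchronize();

    cudaFree(d_buf);
    return 0;
}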
I went ahead and tried the solution of allocating a temporary pointer and passing it to a simple global function similar to kernel1.
The good news is that it does work :)
However, I think it confuses the compiler as I now get "Advisory: Cannot tell what pointer points to, assuming global memory space" whenever I try to access the global data. Luckily, the assumption happens to be correct, but the warnings are annoying.
Anyway, for the record, I have looked at many of the examples and did run through the NVIDIA exercises where the point is to get the output to say "Correct!". However, I haven't looked at all of them. If anyone knows of an SDK example that does dynamic global device memory allocation, I would still like to know about it.
Erm, it was exactly that "moving devPtr to global scope" part that was my problem.
I have an implementation that does exactly that, with the two kernels having a pointer to data passed in. I explicitly don't want to pass those pointers in.
I have read the documentation fairly closely, and hit up the NVIDIA forums (and Google-searched for an hour or so), but I haven't found an implementation of a global dynamic device array that actually runs (I have tried several that compile and then fail in new and interesting ways).
Check out the samples included with the SDK. Many of those sample projects are a decent way to learn by example.
"As this data is used in numerous functions, I would like it to be global."
"There are few good reasons to use globals. This definitely is not one. I'll leave it as an exercise to expand this example to include moving 'devPtr' to a global scope."
What if the kernel operates on a large const structure consisting of arrays? Using the so-called constant memory is not an option, because it's very limited in size... so then you have to put it in global memory?