Solidity Remix simple questions - ethereum

I'm new to Solidity; this is the code I am testing, and Remix spits out:
browser/Untitled.sol:1:1: : Source file does not specify required compiler version!Consider adding "pragma solidity ^0.4.12
contract C {
^
Spanning multiple lines.
Hopefully someone can give some guidance.
contract C {
    function bytes32ToString(bytes32 x) constant returns (string) {
        bytes memory bytesString = new bytes(32);
        uint charCount = 0;
        for (uint j = 0; j < 32; j++) {
            byte char = byte(bytes32(uint(x) * 2 ** (8 * j)));
            if (char != 0) {
                bytesString[charCount] = char;
                charCount++;
            }
        }
        bytes memory bytesStringTrimmed = new bytes(charCount);
        for (j = 0; j < charCount; j++) {
            bytesStringTrimmed[j] = bytesString[j];
        }
        return string(bytesStringTrimmed);
    }

    function bytes32ArrayToString(bytes32[] data) returns (string) {
        bytes memory bytesString = new bytes(data.length * 32);
        uint urlLength;
        for (uint i = 0; i < data.length; i++) {
            for (uint j = 0; j < 32; j++) {
                byte char = byte(bytes32(uint(data[i]) * 2 ** (8 * j)));
                if (char != 0) {
                    bytesString[urlLength] = char;
                    urlLength += 1;
                }
            }
        }
        bytes memory bytesStringTrimmed = new bytes(urlLength);
        for (i = 0; i < urlLength; i++) {
            bytesStringTrimmed[i] = bytesString[i];
        }
        return string(bytesStringTrimmed);
    }
}

Include a version pragma at the very top of the source file to get rid of the warning.
pragma solidity ^0.4.0;
contract MyContract {
}
From the Solidity documentation:
Version Pragma
Source files can (and should) be annotated with a so-called version
pragma to reject being compiled with future compiler versions that
might introduce incompatible changes. We try to keep such changes to
an absolute minimum and especially introduce changes in a way that
changes in semantics will also require changes in the syntax, but this
is of course not always possible. Because of that, it is always a good
idea to read through the changelog at least for releases that contain
breaking changes; those releases will always have versions of the form
0.x.0 or x.0.0.
The version pragma is used as follows:
pragma solidity ^0.4.0;
Such a source file will not compile with a
compiler earlier than version 0.4.0 and it will also not work on a
compiler starting from version 0.5.0 (this second condition is added
by using ^). The idea behind this is that there will be no breaking
changes until version 0.5.0, so we can always be sure that our code
will compile the way we intended it to. We do not fix the exact
version of the compiler, so that bugfix releases are still possible.

As everyone mentioned above, you need to specify the compiler version in the first line of the Solidity code:
pragma solidity ^0.4.0;

This code actually compiles; the warning is just that: a warning.
The Solidity docs suggest specifying a compiler version to reject compilation by compiler versions that may introduce breaking changes.
Try adding pragma solidity ^0.4.11; (or some other version) to the top of your file, and you'll see the warning disappear.
Your full file would now be:
pragma solidity ^0.4.11;
contract C {
    function bytes32ToString(bytes32 x) constant returns (string) {
        bytes memory bytesString = new bytes(32);
        uint charCount = 0;
        for (uint j = 0; j < 32; j++) {
            byte char = byte(bytes32(uint(x) * 2 ** (8 * j)));
            if (char != 0) {
                bytesString[charCount] = char;
                charCount++;
            }
        }
        bytes memory bytesStringTrimmed = new bytes(charCount);
        for (j = 0; j < charCount; j++) {
            bytesStringTrimmed[j] = bytesString[j];
        }
        return string(bytesStringTrimmed);
    }

    function bytes32ArrayToString(bytes32[] data) returns (string) {
        bytes memory bytesString = new bytes(data.length * 32);
        uint urlLength;
        for (uint i = 0; i < data.length; i++) {
            for (uint j = 0; j < 32; j++) {
                byte char = byte(bytes32(uint(data[i]) * 2 ** (8 * j)));
                if (char != 0) {
                    bytesString[urlLength] = char;
                    urlLength += 1;
                }
            }
        }
        bytes memory bytesStringTrimmed = new bytes(urlLength);
        for (i = 0; i < urlLength; i++) {
            bytesStringTrimmed[i] = bytesString[i];
        }
        return string(bytesStringTrimmed);
    }
}

I've seen the responses given here and want to be clear about the compiler version:
In this case, you should use pragma solidity 0.4.11; if that is the compiler version you have been testing with and intend to deploy from. If you add ^, you don't lock the version, and the risk of bugs is considerably higher, especially if anyone other than the author will deploy the contract. If you lock the compiler version, you ensure that the code will not be compiled with any version other than the one you intended.
Note that Solidity now also has a range pattern, pragma solidity >=0.4.24 <0.6.0;, but you can still lock the version: pragma solidity 0.5.2;.


How to convert a bytes32 array?

I tried to do this, but it returns "0x0000000000000000000000000000000000000000":
// Convert bytes to address
function fromBytes(bytes32[] _additionalArgs) public view returns (address[]) {
    address[] memory path = new address[](_additionalArgs.length);
    for (uint i = 0; i > _additionalArgs.length; i++) {
        path[i] = address(_additionalArgs[i]);
    }
    return path;
}
I need it to return an array of addresses!
Your loop never executes.
for(uint i = 0; i > _additionalArgs.length; i++){
i begins at 0, and the loop condition is i > _additionalArgs.length, which can't ever be true. You almost certainly meant to use < instead:
for(uint i = 0; i < _additionalArgs.length; i++){
With that change, I believe your code should work.

how to return a structure (defined in .h file) from a function in another cpp file?

I ran into this issue and I cannot figure it out. Any suggestion is appreciated.
I have a structure defined in a header file as follows:
Results.h
#ifndef RESULTS_H
#define RESULTS_H

struct Results
{
    double dOptSizeMWh;
    double dOrigSOCFinal;
    double dManiSOCFinal;
};

#endif
and a declaration of the "Deterministic" function in Deterministic.h:
#ifndef DETERMINISTIC_H
#define DETERMINISTIC_H
Results Deterministic(int,int,int,double,double); //Deterministic(int nNoMonth, int nNOWind, int nWindLength, double dPreviousSizeMWh, double dPreviousSOC)
#endif;
This function is implemented in Deterministic.cpp:
#include "Results.h"
Results Deterministic(int nNoMonth, int nNOWind, int nWindLength, double dPreviousSizeMWh, double dPreviousSOC)
{
// returns number of rows and columns of the array created
struct Results sRes;
sRes.dOptSizeMWh = -1.0; // for the optimal size of battery in MWh
sRes.dOrigSOCFinal = -1.0; // for the SOC at the end of the window
sRes.dManiSOCFinal = -1.0; // this is set to 0.0 if final SOC is slightly below 0
//...........................////
// OTHER Calculation .......////
//...........................////
return sRes;
}
Finally, I have a main file, main.cpp, in which I call the Deterministic function and use the Results structure:
#include <Results.h>
#include <Deterministic.h>

using namespace std;

int main ()
{
    int nNoMonth = 1;    // the month that we want to use in the input
    int nWindLength = 1; // length of window, hour
    int nNODays = 1;     // number of days that we want to repeat optimization
    struct Results dValues;
    double **mRes = new double*[nNODays * 24 / nWindLength];
    for (int i = 0; i < nNODays * 24 / nWindLength; ++i) mRes[i] = new double[3];
    for (int i = 0; i < nNODays * 24 / nWindLength; i++)
    {
        if (i == 0)
        {
            dValues = Deterministic(nNoMonth, i, nWindLength, 0.0, 0.0);
        }
        else
        {
            temp0 = *(*(mRes+i-1)); double temp1 = *(*(mRes+i-1)+1); double temp2 = *(*(mRes+i-1)+2);
            if (temp2 == -1.0) {dValues = Deterministic(nNoMonth, i, nWindLength, temp0, temp1);}
            else {dValues = Deterministic(nNoMonth, i, nWindLength, *(*(mRes+i-1)), *(*(mRes+i-1)));}
        }
        *(*(mRes+i)) = dValues.dOptSizeMWh;
        *(*(mRes+i)+1) = dValues.dOrigSOCFinal;
        *(*(mRes+i)+2) = dValues.dManiSOCFinal;
    }
These are only a small portion of the code in Deterministic.cpp and main.cpp, but they define the problem. The first iteration (i.e., i = 0) goes through without any problem, but it fails in the second iteration and beyond with this error: "R6010 - abort() has been called".
The error comes up in main.cpp where I call the Deterministic function inside the if statement.
I have no problem compiling and running the posted code (other than the missing double in front of the declaration of temp0). Without knowing what Deterministic() actually does, it's a bit hard to guess what the problem is (divide by zero? playing a Justin Bieber mp3?). It shouldn't have anything to do with returning a structure from a function defined in another file (translation units are a fundamental feature of the language). To find the root cause, single-step through the (complete) Deterministic() with your debugger.
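For what it's worth, here is a minimal sketch of the same structure: a struct defined in a shared header, filled in and returned by value from a function in its own .cpp file, and used from main() in another. The arithmetic inside Deterministic() is just placeholder values I made up; everything else mirrors the posted layout. It builds and runs cleanly, which supports the point that the struct return across translation units is not the culprit.
// Results.h -- same shape as the posted header
#ifndef RESULTS_H
#define RESULTS_H
struct Results
{
    double dOptSizeMWh;
    double dOrigSOCFinal;
    double dManiSOCFinal;
};
#endif

// Deterministic.h -- declaration, pulling in what it needs
#ifndef DETERMINISTIC_H
#define DETERMINISTIC_H
#include "Results.h"
Results Deterministic(int, int, int, double, double);
#endif

// Deterministic.cpp -- defines the function in its own translation unit
#include "Deterministic.h"
Results Deterministic(int nNoMonth, int nNOWind, int nWindLength,
                      double dPreviousSizeMWh, double dPreviousSOC)
{
    Results sRes;
    sRes.dOptSizeMWh   = dPreviousSizeMWh + nNoMonth;  // placeholder values only
    sRes.dOrigSOCFinal = dPreviousSOC + nNOWind;
    sRes.dManiSOCFinal = nWindLength;
    return sRes;                                       // returning by value is fine
}

// main.cpp -- calls the function defined in the other translation unit
#include <iostream>
#include "Deterministic.h"
int main()
{
    Results r = Deterministic(1, 0, 1, 0.0, 0.0);
    std::cout << r.dOptSizeMWh << " " << r.dOrigSOCFinal << " " << r.dManiSOCFinal << std::endl;
    return 0;
}

// Build: g++ Deterministic.cpp main.cpp -o demo
// This compiles, runs and prints the three members, so the crash must come from
// what the real Deterministic() does internally, not from the struct return itself.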

RootBeer silently fails for large arrays?

I have a simple application that (for now) simulates error correction in a large array.
This bit generates the data and adds 16 bytes of Reed-Solomon parity to each block of 255 bytes.
ReedSolomonEncoder encoder = new ReedSolomonEncoder(QR_CODE_FIELD_256);

int[][] data = new int[params.getNumBlocks()][255];
int[][] original = new int[params.getNumBlocks()][];
int value = 0;
for (int i = 0; i < params.getNumBlocks(); i++) {
    int[] block = data[i];
    for (int j = 0; j < 239; j++) {
        value = (value + 1) % 256;
        block[j] = value;
    }
    encoder.encode(block, 16);
    original[i] = Arrays.copyOf(block, block.length);
    // Corrupt a byte
    block[50] += 1;
}
This is my kernel:
public class RsKernel implements Kernel {

    private final int[] block;

    public RsKernel(int[] block) {
        this.block = block;
    }

    @Override
    public void gpuMethod() {
        block[50] -= 1;
    }
}
It merely reverts the corrupted byte in each block by hand (it doesn't do actual Reed-Solomon error correction).
I run the kernels with the following code:
ArrayList<Kernel> kernels = new ArrayList<>(params.getNumBlocks());
for (int[] block : data) {
    kernels.add(new RsKernel(block));
}
new Rootbeer().run(kernels);
And I verify decoding with JUnit's assertArrayEquals:
Assert.assertArrayEquals(original, data);
The curious bit is that if I run this code with up to 8192 (what a suspiciously convenient number) blocks (kernels), the data is reported to have been decoded correctly; for 8193 blocks and above, it is not decoded correctly:
Exception in thread "main" arrays first differed at element [8192][50]; expected:<51> but was:<52>
at org.junit.Assert.internalArrayEquals(Assert.java:437)
at org.junit.Assert.internalArrayEquals(Assert.java:428)
at org.junit.Assert.assertArrayEquals(Assert.java:167)
at org.junit.Assert.assertArrayEquals(Assert.java:184)
at com.amphinicy.blink.rootbeer.RootBeerDemo.main(Jasmin)
What could cause this behaviour?
Here is the output of java -jar rootbeer-1.1.14.jar -printdeviceinfo:
device count: 1
device: GeForce GT 525M
compute_capability: 2.1
total_global_memory: 1073414144 bytes
num_multiprocessors: 2
max_threads_per_multiprocessor: 1536
clock_rate: 1200000 Hz
Looking at the code, I'm thinking it may be because of the following:
// Corrupt a byte
block[50] += 1;
This could be adding one to 255, giving 256, which would not be a valid byte. Corrupting the byte might work better with something like this:
block[50] ^= 0x40;
which would flip the bit in position 7 instead of adding, and still corrupt the byte.

performance difference between .cu and .cpp files

For a study we have to analyze the performance difference between the CPU and GPU. My problem is that I have a .cu file with only C++ code and a .cpp file with exactly the same code, yet the .cu file runs 3 times faster than the .cpp file. The .cu file is compiled by the NVCC compiler, but NVCC only compiles CUDA code; since there is no CUDA code, it should all be handed to the host C++ compiler. And that's my problem: I don't understand where the performance difference comes from.
#include <iostream>
#include <conio.h>
#include <ctime>
#include <cuda.h>
#include <cuda_runtime.h> // Stops underlining of __global__
#include <device_launch_parameters.h> // Stops underlining of threadIdx etc.

using namespace std;

void FindClosestCPU(float3* points, int* indices, int count) {
    // Base case, if there's 1 point don't do anything
    if(count <= 1) return;
    // Loop through every point
    for(int curPoint = 0; curPoint < count; curPoint++) {
        // This variable is nearest so far, set it to float.max
        float distToClosest = 3.40282e38f;
        // See how far it is from every other point
        for(int i = 0; i < count; i++) {
            // Don't check distance to itself
            if(i == curPoint) continue;
            float dist = sqrt((points[curPoint].x - points[i].x) *
                              (points[curPoint].x - points[i].x) +
                              (points[curPoint].y - points[i].y) *
                              (points[curPoint].y - points[i].y) +
                              (points[curPoint].z - points[i].z) *
                              (points[curPoint].z - points[i].z));
            if(dist < distToClosest) {
                distToClosest = dist;
                indices[curPoint] = i;
            }
        }
    }
}

int main()
{
    // Number of points
    const int count = 10000;
    // Arrays of points
    int *indexOfClosest = new int[count];
    float3 *points = new float3[count];
    // Create a list of random points
    for(int i = 0; i < count; i++)
    {
        points[i].x = (float)((rand()%10000) - 5000);
        points[i].y = (float)((rand()%10000) - 5000);
        points[i].z = (float)((rand()%10000) - 5000);
    }
    // This variable is used to keep track of the fastest time so far
    long fastest = 1000000;
    // Run the algorithm 2 times
    for(int q = 0; q < 2; q++)
    {
        long startTime = clock();
        // Run the algorithm
        FindClosestCPU(points, indexOfClosest, count);
        long finishTime = clock();
        cout<<"Run "<<q<<" took "<<(finishTime - startTime)<<" millis"<<endl;
        // If that run was faster update the fastest time so far
        if((finishTime - startTime) < fastest)
            fastest = (finishTime - startTime);
    }
    // Print out the fastest time
    cout<<"Fastest time: "<<fastest<<endl;
    // Print the final results to screen
    cout<<"Final results:"<<endl;
    for(int i = 0; i < 10; i++)
        cout<<i<<"."<<indexOfClosest[i]<<endl;
    // Deallocate ram
    delete[] indexOfClosest;
    delete[] points;
    _getch();
    return 0;
}
The only difference between the two files is that one is a .cu file and is compiled by NVCC, and the other is a .cpp file and is compiled normally by the C++ compiler.
Well, as such you are not using any CUDA functions that need to run on the GPU, but you are using float3, which is part of the CUDA API and not pure C++. So when you change the extension to .cu, the code involving float3 is compiled by NVCC, and since NVCC may behave differently from the default C++ compiler, a timing difference can arise during execution.
You might want to check this by passing a 'pure' C++ file with the .cu extension to NVCC and comparing the timings; hopefully it will hand the whole code to the default C++ compiler, and there will be no time difference when executing.
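If it helps, here is a minimal, self-contained sketch of that 'pure C++' test. It drops all the CUDA headers and replaces float3 with a plain struct (renamed Point3 here, purely to avoid clashing with the float3 that NVCC's implicit runtime headers define); the file names and the -O2 flag are just assumptions for illustration.
// Hypothetical test file: save the identical source as both closest.cpp and
// closest.cu, then compile one with the host compiler and one with NVCC.
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <cmath>

// Plain C++ stand-in for CUDA's vector type.
struct Point3 { float x, y, z; };

void FindClosestCPU(const Point3* points, int* indices, int count) {
    for (int cur = 0; cur < count; cur++) {
        float distToClosest = 3.40282e38f;
        for (int i = 0; i < count; i++) {
            if (i == cur) continue;
            float dist = std::sqrt((points[cur].x - points[i].x) * (points[cur].x - points[i].x) +
                                   (points[cur].y - points[i].y) * (points[cur].y - points[i].y) +
                                   (points[cur].z - points[i].z) * (points[cur].z - points[i].z));
            if (dist < distToClosest) {
                distToClosest = dist;
                indices[cur] = i;
            }
        }
    }
}

int main() {
    const int count = 10000;
    Point3* points = new Point3[count];
    int* indexOfClosest = new int[count];
    for (int i = 0; i < count; i++) {
        points[i].x = (float)((rand() % 10000) - 5000);
        points[i].y = (float)((rand() % 10000) - 5000);
        points[i].z = (float)((rand() % 10000) - 5000);
    }
    long start = clock();
    FindClosestCPU(points, indexOfClosest, count);
    long finish = clock();
    std::cout << "Took " << (finish - start) << " clock ticks" << std::endl;
    delete[] points;
    delete[] indexOfClosest;
    return 0;
}
// Build both ways and compare, e.g.:
//   g++  -O2 closest.cpp -o closest_cpp
//   nvcc -O2 closest.cu  -o closest_cu
// If the timings now match, the earlier gap came from different host-compiler
// settings rather than from anything CUDA-specific in the source.
If the two binaries still differ, comparing the exact host-compiler command NVCC issues (visible with nvcc --verbose) against the one used for the .cpp build would be the next thing to check.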

C++ ROT13 Function Crashes

I'm not too good with C++. My code compiled, but the function crashes my program. Below is a short summary of the code; it's not complete, but the function and the call are there.
void rot13(char *ret, const char *in);

int main()
{
    char* str;
    MessageBox(NULL, _T("Test 1; Does get here!"), _T("Test 1"), MB_OK);
    rot13(str, "uryyb jbeyq!"); // hello world!
    /* Do stuff with char* str; */
    MessageBox(NULL, _T("Test 2; Doesn't get here!"), _T("Test 2"), MB_OK);
    return 0;
}

void rot13(char *ret, const char *in){
    for( int i=0; i = sizeof(in); i++ ){
        if(in[i] >= 'a' && in[i] <= 'm'){
            // Crashes Here;
            ret[i] += 13;
        }
        else if(in[i] > 'n' && in[i] <= 'z'){
            // Possibly crashing Here too?
            ret[i] -= 13;
        }
        else if(in[i] > 'A' && in[i] <= 'M'){
            // Possibly crashing Here too?
            ret[i] += 13;
        }
        else if(in[i] > 'N' && in[i] <= 'Z'){
            // Possibly crashing Here too?
            ret[i] -= 13;
        }
    }
}
The program gets to "Test 1; Does get here!", however it doesn't get to "Test 2; Doesn't get here!".
Thank you in advance.
-Nick Daniels.
str is uninitialised and it is being dereferenced in rot13, causing the crash. Allocate memory for str before passing to rot13() (either on the stack or dynamically):
char str[1024] = ""; /* Large enough to hold string and initialised. */
The for loop inside rot13() is also incorrect (infinite loop):
for( int i=0; i = sizeof(in); i++ ){
change to:
for(size_t i = 0, len = strlen(in); i < len; i++ ){
You've got several problems:
You never allocate memory for your output - you never initialise the variable str. This is what's causing your crash.
Your loop condition always evaluates to true (= assigns and returns the assigned value, == tests for equality).
Your loop condition uses sizeof(in) with the intention of getting the size of the input string, but that will actually give you the size of the pointer. Use strlen instead.
Your algorithm increases or decreases the values already in the output string by 13: the values you place in the output are +/- 13 from whatever ret happened to contain, when they should be computed from the input string.
Your algorithm doesn't handle 'A', 'n' or 'N'.
Your algorithm doesn't handle any non-alphabetic characters, yet the test string you use contains two.
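Putting those points together, a corrected rot13() might look like the sketch below: it computes each output character from the input rather than from the uninitialised output buffer, includes the boundary letters 'A', 'n' and 'N', passes non-alphabetic characters through unchanged, and null-terminates the result. The 64-byte buffer in main() is an arbitrary size, chosen only to be comfortably larger than the test string.
#include <cstring>
#include <iostream>

void rot13(char *ret, const char *in)
{
    std::size_t len = std::strlen(in);      // strlen, not sizeof(a pointer)
    for (std::size_t i = 0; i < len; i++) {
        char c = in[i];
        if ((c >= 'a' && c <= 'm') || (c >= 'A' && c <= 'M'))
            ret[i] = char(c + 13);          // first half of the alphabet maps forward
        else if ((c >= 'n' && c <= 'z') || (c >= 'N' && c <= 'Z'))
            ret[i] = char(c - 13);          // second half maps back
        else
            ret[i] = c;                     // spaces, punctuation, digits pass through
    }
    ret[len] = '\0';                        // terminate the output string
}

int main()
{
    char str[64] = "";                      // initialised buffer, large enough for the input
    rot13(str, "uryyb jbeyq!");
    std::cout << str << std::endl;          // prints: hello world!
    return 0;
}
The caller is still responsible for passing an output buffer at least strlen(in) + 1 bytes long.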