An error with dec to bin - binary

I have been debugging this function, but I don't know why it returns 99 when I pass 4 to it.
This is a function to convert from decimal to binary.
I have tried printing exp, res, and the other variables with cout at each step and multiplying them by hand, but it doesn't make sense to me.
int DecToBinary(long num) {
    if(num == 0) {
        return 0;
    }
    else if(num == 0) {
        return 1;
    }
    int exp = 0;
    int res = 0;
    for (; num != 0; exp++){
        res = res + num % 2 * pow(10, exp);
        num = num / 2;
    }
    return res;
}
Thank you guys.

if(num == 0) {
    return 0;
}
else if(num == 0) {
    return 1;
}
You know the second branch will never be executed, right?
Furthermore:
pow(10,exp);
this yields a floating-point number. Be prepared for rounding errors; that is exactly where your 99 comes from: for input 4 the only set bit lands at exp == 2, and if pow(10,2) comes back as something like 99.999999 on your platform, converting it to int truncates it to 99. Even better: don't use pow() at all (you don't need floating-point numbers for working with integers). Simply do the division step by step, accumulating the result in a variable.
int dec2bin(int n)
{
    int r = 0, tp = 1;      // tp is the current power of ten
    while (n) {
        r += (n % 2) * tp;  // append the next binary digit
        n >>= 1;            // same as n /= 2
        tp *= 10;
    }
    return r;
}
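To see the fix in action, here is a minimal driver (my addition, assuming dec2bin above is in scope):

#include <iostream>

int main() {
    std::cout << dec2bin(4) << '\n';  // prints 100 (the pow() version yielded 99)
    std::cout << dec2bin(13) << '\n'; // prints 1101
    return 0;
}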

C# How to count how many anagrams are in a given string

I have to calculate how many anagrams there are of a given word.
I have tried using factorials, permutations, and the possibilities for each letter in the word.
This is what I have done:
static int DoAnagrams(string a, int x)
{
    int anagrams = 1;
    int result = 0;
    x = a.Length;
    for (int i = 0; i < x; i++)
    {
        anagrams *= (x - 1);
        result += anagrams;
        anagrams = 1;
    }
    return result;
}
Example: for aabv I have to get 12; for aaab I have to get 4
As already stated in a comment, there is a formula for calculating the number of distinct anagrams:
#anagrams = n! / (c_1! * c_2! * ... * c_k!)
where n is the length of the word, k is the number of distinct characters, and c_i is the count of how often a specific character occurs.
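For the question's own examples this works out to:

#anagrams(aabv) = 4! / (2! * 1! * 1!) = 24 / 2 = 12
#anagrams(aaab) = 4! / (3! * 1!) = 24 / 6 = 4

which matches the expected outputs of 12 and 4.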
So first of all, you will need to calculate the factorial:
int fac(int n) {
    int f = 1;
    for (int i = 2; i <= n; i++) f *= i;
    return f;
}
and you will also need to count the characters in the word
Dictionary<char, int> countChars(string word) {
    var r = new Dictionary<char, int>();
    foreach (char c in word) {
        if (!r.ContainsKey(c)) r[c] = 0;
        r[c]++;
    }
    return r;
}
Then the anagram count can be calculated as follows:
int anagrams(string word) {
    int ac = fac(word.Length);
    var cc = countChars(word);
    foreach (int ct in cc.Values)
        ac /= fac(ct);
    return ac;
}
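With these helpers, anagrams("aabv") returns 12 and anagrams("aaab") returns 4, as required. One caveat worth noting (my addition): fac overflows int for n >= 13, so words longer than 12 characters would need long or System.Numerics.BigInteger.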
Answer with Code
This is written in C#, so it may not apply to the language you desire, but you didn't specify a language.
This works by generating every possible permutation of the string, adding every copy found in the list to a second list, then removing those copies from the original list. After that, the count of the original list is the number of unique anagrams the string contains.
private static List<string> anagrams = new List<string>();

static void Main(string[] args)
{
    string str = "AAAB";
    char[] charArry = str.ToCharArray();
    Permute(charArry, 0, str.Count() - 1);
    List<string> copyList = new List<string>();
    for (int i = 0; i < anagrams.Count - 1; i++)
    {
        List<string> anagramSublist = anagrams.GetRange(i + 1, anagrams.Count - 1 - i);
        var perm = anagrams.ElementAt(i);
        if (anagramSublist.Contains(perm))
        {
            copyList.Add(perm);
        }
    }
    foreach (var copy in copyList)
    {
        anagrams.Remove(copy);
    }
    Console.WriteLine(anagrams.Count);
    Console.ReadKey();
}

static void Permute(char[] arry, int i, int n)
{
    int j;
    if (i == n)
    {
        var temp = string.Empty;
        foreach (var character in arry)
        {
            temp += character;
        }
        anagrams.Add(temp);
    }
    else
    {
        for (j = i; j <= n; j++)
        {
            Swap(ref arry[i], ref arry[j]);
            Permute(arry, i + 1, n);
            Swap(ref arry[i], ref arry[j]); // backtrack
        }
    }
}

static void Swap(ref char a, ref char b)
{
    char tmp;
    tmp = a;
    a = b;
    b = tmp;
}
Final Notes
I know this isn't the cleanest or best solution. It is simply the one that carries over best across the three object-oriented languages I know while not being too complex: simple to understand and simple to port, so it's the answer I've decided to give.
EDIT
Here's a new answer, based on the comments on this answer.
static void Main(string[] args)
{
    var str = "abaa";
    var divisor = 1;
    List<char> dupedCharacters = new List<char>();
    foreach (var character in str)
    {
        // Each character that occurs more than once contributes the
        // factorial of its own count to the divisor.
        var count = str.Count(f => f == character);
        if (count > 1 && !dupedCharacters.Contains(character))
        {
            divisor *= factorial(count);
            dupedCharacters.Add(character);
        }
    }
    Console.WriteLine("The number of possible anagrams is: " + (factorial(str.Count()) / divisor));
    Console.ReadLine();

    int factorial(int num)
    {
        if (num <= 1)
            return 1;
        return num * factorial(num - 1);
    }
}
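A note on the divisor (my addition, to explain the per-character accumulation above): for a word like aabb the correct divisor is 2! * 2! = 4, giving 4! / 4 = 6 distinct anagrams. Dividing by the factorial of the total number of duplicated letters (factorial(4) = 24) would instead give 1, which is why each repeated character contributes its own factorial to the divisor.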

Can't understand the calculation in the return statement of a binary conversion program with recursion in C

Program for binary conversion with recursion.
It is working fine, but I can't understand the meaning of one statement. Can anyone explain the following?
return (num % 2) + 10 * binary_conversion(num / 2);
With an input of 13 I am a little confused: I get something like num = 13, so 13 % 2 = 1, then 1 + 10 * 6 = 66, which is clearly a nonsensical calculation.
#include <stdio.h>

int binary_conversion(int);

int main()
{
    int num, bin;
    printf("Enter a decimal number: ");
    scanf("%d", &num);
    bin = binary_conversion(num);
    printf("The binary equivalent of %d is %d\n", num, bin);
}

int binary_conversion(int num)
{
    if (num == 0)
    {
        return 0;
    }
    else
    {
        return (num % 2) + 10 * binary_conversion(num / 2);
    }
}
Your confusion stems from not understanding how recursion operates. It's time to instrument the function with print statements. This will allow you to follow the control and data flow of the routine.
int binary_conversion(int num)
{
    printf("ENTER num = %d\n", num);
    if (num == 0)
    {
        printf("BASE CASE returns 0\n");
        return 0;
    }
    else
    {
        printf("RECURSION: new bit = %d, recur on %d\n", num % 2, num / 2);
        return (num % 2) + 10 * binary_conversion(num / 2);
    }
}
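The crucial point is that binary_conversion(6) is not 6; the inner call must finish before the multiplication by 10 happens. Writing out the full expansion for num = 13:

binary_conversion(13)
= 1 + 10 * binary_conversion(6)
= 1 + 10 * (0 + 10 * binary_conversion(3))
= 1 + 10 * (0 + 10 * (1 + 10 * binary_conversion(1)))
= 1 + 10 * (0 + 10 * (1 + 10 * (1 + 10 * binary_conversion(0))))
= 1 + 10 * (0 + 10 * (1 + 10 * (1 + 10 * 0)))
= 1 + 10 * (0 + 10 * (1 + 10 * 1))
= 1 + 10 * (0 + 10 * 11)
= 1 + 10 * 110
= 1101

Each level contributes one binary digit, shifted into the next decimal place, which is how the decimal result 1101 ends up looking like the binary representation of 13.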

Reverse engineering history pattern length in branch predictor

I'm trying to find the length of the history pattern in the branch predictor of my computer's processor. I generate a variable-length array of random bits and branch on the value of each bit. I then plot the run time of different executions of the function and look for a knee in the graph, but I don't see any such point. What am I doing wrong? Any ideas?
Here is my code:
#include <cstdlib>
#include <ctime>
#include <fstream>
#include <string>
#include <vector>

using namespace std;

vector<int> randomArr(int n)
{
    vector<int> arr(n);
    for (int i = 0; i < n; i++) {
        arr[i] = rand() % 2;
    }
    return arr;
}

int branchy(vector<int>& arr)
{
    int a = 0;
    int b = 0;
    for (int i = 0; i < arr.size(); i++) {
        if (arr[i] == 0)
            a++;
        else
            b++;
    }
    return a ^ b;
}

int main()
{
    long int iterations = 100000;
    int start_s;
    int stop_s;
    ofstream runtimesFile;
    runtimesFile.open("runtimesFile.txt");
    for (int j = 0; j < iterations; j++) {
        vector<int> arr = randomArr(j);
        start_s = clock();
        branchy(arr);
        stop_s = clock();
        runtimesFile << to_string(stop_s - start_s) << "\n";
    }
    runtimesFile.close();
    return 0;
}

Divide function

I need to write the divide function in the Jack language.
My code is:
function int divide(int x, int y) {
    var int result;
    var boolean neg;
    let neg = false;
    if (((x > 0) & (y < 0)) | ((x < 0) & (y > 0))) {
        let neg = true;
    }
    // work with absolute values in every case (both-negative included)
    let x = Math.abs(x);
    let y = Math.abs(y);
    if (y > x) {
        return 0;
    }
    let result = Math.divide(x, y + y);
    if ((x - (2 * result * y)) < y) {
        if (neg) {
            return -(result + result);
        } else {
            return (result + result);
        }
    } else {
        if (neg) {
            return -(result + result + 1);
        } else {
            return (result + result + 1);
        }
    }
}
This algorithm is sub-optimal, since each multiplication operation itself requires O(n) addition and subtraction operations.
Can I compute the product 2*result*y without any multiplication?
Thanks
Here's an implementation of (unsigned) restoring division (x/y). I don't actually know Jack, though, so I'm not 100% sure about this:
var int r;
var int i;
// note: declarations must precede statements in Jack, and Jack has no
// hex literals, so 0x8000 below stands for a variable initialised with
// let mask = ~32767; (the sign bit)
let r = 0;
let i = 0;
while (i < 16)
{
    let r = r + r;
    if ((x & 0x8000) = 0x8000) {       // shift the top bit of x into r
        let r = r + 1;
    }
    if ((y ^ 0x8000) > (r ^ 0x8000)) { // this is an unsigned comparison
        let x = x + x;                 // quotient bit 0
    }
    else {
        let r = r - y;
        let x = x + x + 1;             // quotient bit 1
    }
    let i = i + 1;
}
return x;
You should be able to turn that into signed division.
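To sanity-check the algorithm before porting it, here is a direct C++ transcription (my sketch, not part of the original answer; uint16_t's native unsigned comparison stands in for the 0x8000 XOR trick):

#include <cstdint>
#include <cstdio>

// Restoring division: the quotient is shifted into x bit by bit,
// while r holds the running remainder.
uint16_t udiv16(uint16_t x, uint16_t y) {
    uint16_t r = 0;
    for (int i = 0; i < 16; i++) {
        r = (uint16_t)((r << 1) | (x >> 15)); // shift the top bit of x into r
        if (y > r) {
            x = (uint16_t)(x << 1);           // quotient bit is 0
        } else {
            r = (uint16_t)(r - y);
            x = (uint16_t)((x << 1) | 1);     // quotient bit is 1
        }
    }
    return x; // quotient (r ends up holding the remainder)
}

int main(void) {
    printf("%u\n", (unsigned)udiv16(100, 7)); // prints 14
    return 0;
}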

How to simplify this loop?

Considering an array a[i], i=0,1,...,g, where g could be any given number, and a[0]=1.
for a[1]=a[0]+1 to 1 do
  for a[2]=a[1]+1 to 3 do
    for a[3]=a[2]+1 to 5 do
      ...
      for a[g]=a[g-1]+1 to 2g-1 do
        #print a[1],a[2],...a[g]#
The problem is that every time we change the value of g, we need to modify the code, namely all those loops above. This is not good code.
Recursion is one way to solve this (although I would love to see an iterative solution).
!!! Warning, untested code below !!!
template<typename A, unsigned int Size>
void recurse(A (&arr)[Size], int level, int g)
{
    if (level > g)
    {
        // I am at the bottom level, do stuff here
        return;
    }
    for (arr[level] = arr[level - 1] + 1; arr[level] <= 2 * level - 1; arr[level]++)
    {
        recurse(arr, level + 1, g);
    }
}
Then call with recurse(arr,1,g);
Imagine you are representing numbers with an array of digits. For example, 682 would be [6,8,2].
If you wanted to count from 0 to 999 you could write:
int n[3];
for (n[0] = 0; n[0] <= 9; ++n[0])
    for (n[1] = 0; n[1] <= 9; ++n[1])
        for (n[2] = 0; n[2] <= 9; ++n[2])
            // Do something with the three-digit number n here
But when you want to count to 9999 you need an extra for loop.
Instead, you use the procedure for adding 1 to a number: increment the final digit, if it overflows move to the preceding digit and so on. Your loop is complete when the first digit overflows. This handles numbers with any number of digits.
You need an analogous procedure to "add 1" to your loop variables.
Increment the final "digit", that is a[g]. If it overflows (i.e. exceeds 2g-1) then move on to the next most-significant "digit" (a[g-1]) and repeat. A slight complication compared to doing this with numbers is that having gone back through the array as values overflow, you then need to go forward to reset the overflowed digits to their new base values (which depend on the values to the left).
The following C# code implements both methods and prints the arrays to the console.
static void Print(int[] a, int n, ref int count)
{
    ++count;
    Console.Write("{0} ", count);
    for (int i = 0; i <= n; ++i)
    {
        Console.Write("{0} ", a[i]);
    }
    Console.WriteLine();
}

private static void InitialiseRight(int[] a, int startIndex, int g)
{
    for (int i = startIndex; i <= g; ++i)
        a[i] = a[i - 1] + 1;
}

static void Main(string[] args)
{
    const int g = 5;

    // Old method
    int count = 0;
    int[] a = new int[g + 1];
    a[0] = 1;
    for (a[1] = a[0] + 1; a[1] <= 2; ++a[1])
        for (a[2] = a[1] + 1; a[2] <= 3; ++a[2])
            for (a[3] = a[2] + 1; a[3] <= 5; ++a[3])
                for (a[4] = a[3] + 1; a[4] <= 7; ++a[4])
                    for (a[5] = a[4] + 1; a[5] <= 9; ++a[5])
                        Print(a, g, ref count);
    Console.WriteLine();
    count = 0;

    // New method
    // Initialise array
    a[0] = 1;
    InitialiseRight(a, 1, g);
    int index = g;
    // Loop until all "digits" have overflowed
    while (index != 0)
    {
        // Do processing here
        Print(a, g, ref count);

        // "Add one" to array
        index = g;
        bool carry = true;
        while ((index > 0) && carry)
        {
            carry = false;
            ++a[index];
            if (a[index] > 2 * index - 1)
            {
                --index;
                carry = true;
            }
        }
        // Re-initialise digits that overflowed.
        if (index != g)
            InitialiseRight(a, index + 1, g);
    }
}
I'd say you don't want nested loops in the first place. Instead, you just want to call a suitable function, taking the current nesting level, the maximum nesting level (i.e. g), the start of the loop, and whatever it needs as context for the computation as arguments:
template<typename T>
void process(int level, int g, int start, T& context) {
    if (level != g) {
        for (int a(start + 1), end(2 * level - 1); a < end; ++a) {
            process(level + 1, g, a, context);
        }
    }
    else {
        // computation goes here
    }
}