MySQL Query Table of Masks

I have a table that is filled with a variety of "masks" such as this:
Type Mask1 Mask2 Mask3
0 fff fff ff
1 aff fff ff
2 aff fff 92
3 001 fff 00
Basically, I want to query the database and see if a particular value matches, say a00-111-12. Anywhere a mask has an f (this is all in hex) I want it to count as a match. So the value a00-111-12 should match rows 0 and 1 but not rows 2 and 3: in row 0 every digit is f, so a value AND'd with the mask gives back that same value. BUT plain AND-ing does not work, because when testing against row 2, the Mask3 value 92 AND'd with 12 gives 12, yet I don't want that row to be a match.
I find this a difficult question to ask; it may not be possible with a few MySQL queries, but I want to avoid importing the entire table into PHP and then finding the correct rows from there.
An idea of a query would be:
SELECT * FROM TABLE WHERE Mask1 = a00 AND Mask2 = 111 AND ...
However, some operation would need to be done on either Mask1, 2, 3 or on the value being sent to the query.
The end goal is to get the Type from the matching rows. If you need more information please ask.

Create a submasks table to make the job easier, and add one row:
z1 : z2 : z3
0xf : 0xf0 : 0xf00
Then use the following query
Select
    t.*
from Table t
inner join submasks s
on (
    ((t.Mask1 & s.z1) = s.z1 || (t.Mask1 & s.z1) = (0xa00 & s.z1)) &&
    ((t.Mask1 & s.z2) = s.z2 || (t.Mask1 & s.z2) = (0xa00 & s.z2)) &&
    ((t.Mask1 & s.z3) = s.z3 || (t.Mask1 & s.z3) = (0xa00 & s.z3)) &&
    ((t.Mask2 & s.z1) = s.z1 || (t.Mask2 & s.z1) = (0x111 & s.z1)) &&
    ((t.Mask2 & s.z2) = s.z2 || (t.Mask2 & s.z2) = (0x111 & s.z2)) &&
    ((t.Mask2 & s.z3) = s.z3 || (t.Mask2 & s.z3) = (0x111 & s.z3)) &&
    ((t.Mask3 & s.z1) = s.z1 || (t.Mask3 & s.z1) = (0x12 & s.z1)) &&
    ((t.Mask3 & s.z2) = s.z2 || (t.Mask3 & s.z2) = (0x12 & s.z2))
)
The way this works is by comparing individual hex digits, performing a bitwise AND with z1, z2 and z3 to get each of the 3 digits respectively.
so
<any value> & z1 sets all hex digits except the last to 0, ie 0x123 becomes 0x003
<any value> & z2 sets all hex digits except the second from last to 0, ie 0x123 becomes 0x020
<any value> & z3 sets all hex digits except the third from last to 0, ie 0x123 becomes 0x100
Using this filter the test for each digit can be built as
((mask & filter) = filter) // is the digit f
|| // OR
((mask & filter) = (test & filter)) // is the digit the same.
Repeat the test for each of z1, z2 and z3 (i.e. 0x00f, 0x0f0, and 0xf00), combine the results with an AND condition, and you can check that all 3 hex digits of the mask are either f or exactly the test value.
This is then repeated for Mask2 and Mask3 (but only with z1 and z2 for Mask3, as it is 2 digits).
By using an inner join with the submasks table, the result will only include the rows from Table where the mask conditions are true.
UPDATE - you may want to use SELECT DISTINCT instead of a plain SELECT; if two submask rows were to match a single row in Table, two results would be returned.
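As a side note, the per-digit logic is easy to sanity-check outside SQL. Here is a minimal Python sketch of the same test (the table data and the value a00-111-12 come from the question; the helper names are mine):

# Per-digit wildcard match: a mask digit of 0xf matches anything,
# otherwise the digits must be equal.
def mask_matches(mask, value, digits):
    for pos in range(digits):
        shift = 4 * pos
        m = (mask >> shift) & 0xF
        v = (value >> shift) & 0xF
        if m != 0xF and m != v:
            return False
    return True

rows = [  # (Type, Mask1, Mask2, Mask3) from the question
    (0, 0xFFF, 0xFFF, 0xFF),
    (1, 0xAFF, 0xFFF, 0xFF),
    (2, 0xAFF, 0xFFF, 0x92),
    (3, 0x001, 0xFFF, 0x00),
]

val1, val2, val3 = 0xA00, 0x111, 0x12

matches = [t for (t, m1, m2, m3) in rows
           if mask_matches(m1, val1, 3)
           and mask_matches(m2, val2, 3)
           and mask_matches(m3, val3, 2)]
print(matches)  # expected: [0, 1]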

I don't know if I explained my question well enough, but I ended up coming to the conclusion that this works best:
val1 = 0xa00
val2 = 0x111
val3 = 0x12
SELECT * FROM TABLE WHERE
((Mask1 | val1)=val1 OR (Mask1 | val1)=0xfff) AND
((Mask2 | val2)=val2 OR (Mask2 | val2)=0xfff) AND
((Mask3 | val3)=val3 OR (Mask3 | val3)=0xff);
The only problem is that val1=a00 will not match with Mask1=aff although I would like it to. Still working on it...

How to add alphanumeric values in a spreadsheet if they are comma separated?

Suppose, we have cells as below:
Cell Value Legend
==========================
A1 1,A // A = 1
A2 2,AA // AA = 2
A3 3,L // L = -1
A4 4,N // N = 0
I want the total to be calculated separately in other cells as:
A5 = SUM(1, 2, 3, 4) = 1 + 2 + 3 + 4 = 10
A6 = SUM(1*A, 2*AA, 3*L, 4*N) = 1 + 4 - 3 + 0 = 2
Considering it may require separate functions in Apps Script, I tried to use SPLIT and SUM them, but it's not accepting the values. I asked a related question: How to pass multiple comma separated values in a cell to a custom function?
However, being a novice with spreadsheets, I am not sure if my approach is correct.
How to add alphanumeric values separately as stated above?
You can create a small lookup table (the legend) and then, for the first sum, try something like
=ArrayFormula(sum(iferror(REGEXEXTRACT(A1:A4, "[0-9-.]+")+0)))
and for the second sum
=sum(ArrayFormula(iferror(regexextract(A1:A4, "[0-9-.]+")*vlookup(regexextract(A1:A4, "[^,]+$"),D1:E4, 2, 0 ))))
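To make the logic behind those formulas concrete, here is a rough Python sketch of the same split-and-lookup computation (the cell values and legend come from the question; everything else is illustrative):

# Each cell holds "number,code"; the legend maps the code to a multiplier.
cells = ["1,A", "2,AA", "3,L", "4,N"]
legend = {"A": 1, "AA": 2, "L": -1, "N": 0}

numbers = []
weighted = []
for cell in cells:
    num_str, code = cell.split(",", 1)
    num = float(num_str)
    numbers.append(num)
    weighted.append(num * legend[code])

print(sum(numbers))   # 10.0, like the first ArrayFormula
print(sum(weighted))  # 2.0, like the second one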

xtable() changes hms output to number when printing in HTML [duplicate]

I have the following data:
transaction <- c(1,2,3);
date <- c("2010-01-31","2010-02-28","2010-03-31");
type <- c("debit", "debit", "credit");
amount <- c(-500, -1000.97, 12500.81);
oldbalance <- c(5000, 4500, 17000.81)
evolution <- data.frame(transaction, date, type, amount, oldbalance, row.names=transaction, stringsAsFactors=FALSE);
evolution <- transform(evolution, newbalance = oldbalance + amount);
evolution
Running
> library(xtable)
> xtable(evolution)
works fine. But if I add the line
evolution$date <- as.Date(evolution$date, "%Y-%m-%d");
to give
transaction <- c(1,2,3);
date <- c("2010-01-31","2010-02-28","2010-03-31");
type <- c("debit", "debit", "credit");
amount <- c(-500, -1000.97, 12500.81);
oldbalance <- c(5000, 4500, 17000.81)
evolution <- data.frame(transaction, date, type, amount, oldbalance, row.names=transaction, stringsAsFactors=FALSE);
evolution$date <- as.Date(evolution$date, "%Y-%m-%d");
evolution <- transform(evolution, newbalance = oldbalance + amount);
evolution
then running xtable gives
xtable(evolution)
Error in Math.Date(x + ifelse(x == 0, 1, 0)) :
abs not defined for Date objects
But converting to Date can be useful in such a case, to do some filtering on dates:
evolution$date <- as.Date(evolution$date, "%Y-%m-%d")
startdate <-as.Date("2010-02-01");
enddate <-as.Date("2010-03-30");
newdate <-evolution[which (evolution$date >= startdate & evolution$date <= enddate),]
newdate
> newdate
transaction date type amount oldbalance newbalance
2 2 2010-02-28 debit -1000.97 4500 3499.03
> xtable(newdate)
Error in Math.Date(x + ifelse(x == 0, 1, 0)) :
abs not defined for Date objects
This is arguably a bug in xtable - you may want to report it to the maintainer.
A temporary work-around is to call as.character() on the classes that xtable misinterprets (apart from "Date" I can think of "POSIXt" but there may be others), e.g.:
xtable <- function(x, ...) {
  # coerce any Date/POSIXt columns to character before calling the real xtable()
  for (i in which(sapply(x, function(y) !all(is.na(match(c("POSIXt", "Date"), class(y))))))) {
    x[[i]] <- as.character(x[[i]])
  }
  xtable::xtable(x, ...)
}
It does appear that xtable does not always play nicely with columns of class Date. (It does have zoo and ts methods, but those may not help if you have a single column of dates/times in a data frame, as coercion to zoo appears to alter the column names in the resulting table.) A few notes:
The error is actually being thrown by print.xtable (not xtable.data.frame), which is called by default in order to display the results of xtable in the console. So you'd find that if you stored the results of xtable in a variable, you'd get no error, but then when you tried to print it, the same error would pop up.
Since you've wisely stored your dates in YYYY-MM-DD format, converting them to Date objects actually isn't necessary to use ordered selections, since they will sort properly as characters. So you could actually get away with simply keeping them as characters.
In cases with more complex date/time objects you could do the subsetting first and then convert those columns to characters. Or create a wrapper for xtable.data.frame and add the lines at the beginning,
dates <- sapply(x,FUN = function(x){class(x) == "Date"})
x[,dates] <- as.character(x[,dates])
checking for class Date, or whatever class you're dealing with.
IMHO, xtable.data.frame should probably be checking for Dates, and possibly for other POSIX classes as well and converting them to strings as well. This may be a simple change, and may be worth contacting the package author about.
Lastly, the semicolons as line terminators are not necessary. :) Habit from another language?
As the maintainer of xtable I would like to state what I see as the true position regarding dates in xtable.
This is not really a bug, but the absence of a feature you might think is desirable.
The problem is that xtable can only deal with three different classes of columns: logical, character, and numeric. If you try to submit a table where the class of a column is Date, then it cannot deal with it. The relevant code is the set of xtable methods, the most important of which are xtable.data.frame and xtable.matrix.
The first part of the code for those methods deals with checking the class of the columns being submitted so they can be treated appropriately.
It would be possible to add code to allow columns of class Date as well, but I am not willing to do that.
Firstly, there is an easy workaround (at least for straight R code, I can't say for Shiny applications), which is to change any Date column to be a character column.
Second, allowing columns of class Date would require the addition of an argument to xtable and the xtable methods (of which there are currently 31), as well as to xtableFtable and xtableList. That is fraught with problems because of the large number of reverse dependencies for xtable. (I haven't counted, but if you look at xtable on CRAN you will see a stack of depends, imports and suggests.) I would break some packages, maybe a lot of packages, if I made that sort of change. Backward compatibility is a serious problem with xtable.
Why is an extra argument necessary? Because the end result of using xtable, or more to the point print.xtable, is a string of characters. How the columns of the data frame, matrix or other structure submitted to xtable are treated is determined firstly by how they are classified (logical, character, or numeric), then by the arguments align, digits and display, which can all be vectors to allow for different treatment of different columns. So if dates were to be allowed, you would need an extra argument to specify how they would be formatted, because at some point they need to be converted to character to produce the final table output.
Same answer as above, but replacing sapply with vapply, which is slightly safer. This creates a new function xtable2 so you can compare the output. I don't quite understand @David Scott's reluctance to put this idea in xtable.
library(xtable)
xtable2 <- function(x, ...) {
  # get the names of variables that are dates by inheritance
  datevars <- colnames(x)[vapply(x, function(y) {
    inherits(y, c("Date", "POSIXt", "POSIXct"))
  }, logical(1))]
  for (i in datevars) {
    x[, i] <- as.character(x[, i])
  }
  xtable::xtable(x, ...)
}
example
> str(dat)
'data.frame': 200 obs. of 9 variables:
$ x5 : num 0.686 0.227 -1.762 0.963 -0.863 ...
$ x4 : num 1 3 3 4 4 4 4 5 6 1 ...
$ x3 : Ord.factor w/ 3 levels "med"<"lo"<"hi": 3 2 2 2 3 3 2 1 3 3 ...
$ x2 : chr "d" "c" "b" "d" ...
$ x1 : Factor w/ 5 levels "bobby","cindy",..: 3 2 4 2 3 5 2 2 5 5 ...
$ x7 : Ord.factor w/ 5 levels "a"<"b"<"c"<"d"<..: 4 2 2 2 4 5 4 5 5 4 ...
$ x6 : int 5 4 2 3 4 1 4 3 4 2 ...
$ date1: Date, format: "2020-03-04" "1999-01-01" ...
$ date2: POSIXct, format: "2020-03-04" "2005-04-04" ...
> xtable2(dat)
% latex table generated in R 4.0.3 by xtable 1.8-4 package
% Wed Dec 9 08:59:07 2020
\begin{table}[ht]
\centering
\begin{tabular}{rrrllllrll}
\hline
& x5 & x4 & x3 & x2 & x1 & x7 & x6 & date1 & date2 \\
\hline
1 & 0.69 & 1.00 & hi & d & greg & d & 5 & 2020-03-04 & 2020-03-04 \\
2 & 0.23 & 3.00 & lo & c & cindy & b & 4 & 1999-01-01 & 2005-04-04 \\
3 & -1.76 & 3.00 & lo & b & marcia & b & 2 & 2020-03-04 & 2020-03-04 \\
4 & 0.96 & 4.00 & lo & d & cindy & b & 3 & 2020-03-04 & 2020-03-04 \\
5 & -0.86 & 4.00 & hi & d & greg & d & 4 & 2005-04-04 & 2005-04-04 \\
6 & -0.30 & 4.00 & hi & b & peter & f & 1 & 2005-04-04 & 2020-03-04 \\
7 & -1.39 & 4.00 & lo & c & cindy & d & 4 & 1999-01-01 & 2005-04-04 \\
8 & -1.71 & 5.00 & med & f & cindy & f & 3 & 2005-04-04 & 2020-03-04 \\
[snip]
\hline
\end{tabular}
\end{table}

Boolean function simplifier?

x = (a & b & d) | ~(a | ~b | c) | (~c & ~d & a) | (c & d)
~ = not
& = and
| = or
How do I simplify a function like this, and where should I start?
I've tried some simplifying programs but I don't understand them.
You should write out a truth table for the variables involved and the eventual output.
Then, for each of the rows in the truth table that turn out to be true, you write a logic equation based upon the variables' states to reproduce that logic "one", usually an AND function of the appropriate inputs and inverse inputs.
Say only 3 of the rows have a true or logic one output.
That would mean you'd have three logic equations.
You would complete the job by connecting those three equations together with OR operators.
By looking at the truth table, you might be able to notice that the output of the logically true lines does not depend on all of the variables. This is one way of simplifying the expression.
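As an illustration of that truth-table procedure (a sketch of mine, not part of the original answer), the expression from the question can be evaluated for all 16 input combinations and the true rows collected as AND terms:

from itertools import product

# x = (a & b & d) | ~(a | ~b | c) | (~c & ~d & a) | (c & d), as in the question
def x(a, b, c, d):
    return (a and b and d) or (not (a or (not b) or c)) \
        or ((not c) and (not d) and a) or (c and d)

# Collect one AND term per row of the truth table where x is true.
minterms = []
for a, b, c, d in product([0, 1], repeat=4):
    if x(a, b, c, d):
        term = " & ".join(v if bit else "~" + v
                          for v, bit in zip("abcd", (a, b, c, d)))
        minterms.append("(" + term + ")")

print(" | ".join(minterms))  # sum-of-products form, ready for simplification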
Solving an equation similar to the one you put above
(a & b & d) | (~a | b | ~c) | (~c & ~d & a) | (c & d)
I get the following result
x = 1 except for one case, i.e., (a b c d) = (1 0 1 0), in which case it is zero.
Thus x = ~(a & ~b & c & ~d), or x = ~a | b | ~c | d.
How to do this?
To make it easier to do this, you can rewrite your equation as
x = A | B | C | D, where
A = (a & b & d)
B = (~a | b | ~c)
C = ~c & ~d & a
D = c & d
Variable B = 1 for all but two sets of inputs (abcd), namely (1010) and (1011).
Variable A = 1 for only two input sets, which B already covers.
similarly with variable C.
Variable D = 1 for one of the two sets of inputs B didn't make = 1, namely (1011).
Thus x = 0 only when the inputs are exactly a=1, b=0, c=1, d=0, but we want to write it as an equation that is True (=1) when those inputs are given, so we write
x = ~(a & ~b & c & ~d) or x = ~a | b | ~c | d
So that is one way of simplifying. I'll add a second technique in a separate answer.
sorry it took so long to spell it out, but perhaps others will find it useful.
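A quick brute-force check of that result (my own sketch, using the modified expression from this answer):

from itertools import product

for a, b, c, d in product([0, 1], repeat=4):
    lhs = (a and b and d) or ((not a) or b or (not c)) \
        or ((not c) and (not d) and a) or (c and d)
    rhs = (not a) or b or (not c) or d
    assert bool(lhs) == bool(rhs)

print("equivalent: x is 0 only at (a, b, c, d) = (1, 0, 1, 0)")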
The original equation of the OP is fairly simplified as is. The truth table has nearly equal T and F entries, and thus doesn't lend itself well to a demonstration of the technique. One could rewrite it as
x = (a & b & d) | (~a & b & ~c) | (a & ~c & ~d) | (c & d)
which is fairly compact but could be written slightly differently combining the 1st and last terms and the middle two terms:
x = ((a & b | c) & d) | ((~a & b | a & ~d) & ~c)
see 2nd proposed answer below for a further explanation

Reducing a boolean expression

I have an expression, say,
a = 1 && (b = 1 || b != 0 ) && (c >= 35 || d != 5) && (c >= 38 || d = 6)
I expect it to be reduced to,
a = 1 && b != 0 && (c >= 38 || d = 6)
Does anyone have any suggestions? Pointers to any algorithm?
Nota Bene: Karnaugh maps or Quine-McCluskey are not an option here, I believe, as these methods don't handle grey cases. I mean, an expression can only be reduced as far as things like A, A', or nothing (say, black, white, or absence-of-colour). But here I have grey shades, as you folks can see.
Solution: I have written the program for this in Clojure. I used a map of maps containing a function as the value. That came in pretty handy: just a few rules for a few combinations and you are good. Thanks for your helpful answers.
I think you should be able to achieve what you want by using Constraint Handling Rules. You would need to write rules that simplify the OR- and AND-expressions.
The main difficulty would be the constraint entailment check that tells you which parts you can drop. E.g., (c >= 35 || d != 5) && (c >= 38 || d = 6) simplifies to (c >= 38 || d = 6) because the former is entailed by the latter, i.e., the latter is more specific. For the OR-expressions, you would need to choose the more general part, though.
Google found a paper on an extension of CHR with entailment check for user-defined constraints. I don't know enough CHR to be able to tell you whether you would need such an extension.
I believe these kinds of things are done regularly in constraint logic programming. Unfortunately I'm not experienced enough in it to give more accurate details, but that should be a good starting point.
The general principle is simple: an unbound variable can have any value; as you test it against inequalities, its set of possible values is restricted by one or more intervals. When/if those intervals converge to a single point, that variable is bound to that value. If, OTOH, any of those inequalities is deemed unsolvable for every value in the intervals, a [programming] logic failure occurs.
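As a rough illustration of that interval-narrowing idea (a sketch of mine, not clp(fd) itself; the class and names are hypothetical), consider how the two constraints on c from the example collapse into one:

import math

# Track an integer variable as an interval [lo, hi].
class Interval:
    def __init__(self, lo=-math.inf, hi=math.inf):
        self.lo, self.hi = lo, hi

    def require_ge(self, bound):
        # Apply a constraint "var >= bound" by narrowing the lower end.
        self.lo = max(self.lo, bound)
        if self.lo > self.hi:
            raise ValueError("unsatisfiable")  # the logic-failure case
        return self

c = Interval()
c.require_ge(35)   # from (c >= 35 || ...), taking the c branch
c.require_ge(38)   # from (c >= 38 || ...), the weaker bound is absorbed
print(c.lo, c.hi)  # 38 inf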
See also this, for an example of how this is done in practice using swi-prolog. Hopefully you will find links or references to the underlying algorithms, so you can reproduce them in your platform of choice (maybe even finding ready-made libraries).
Update: I tried to reproduce your example using swi-prolog and clpfd, but didn't get the results I expected, only close ones. Here's my code:
?- [library(clpfd)].
simplify(A,B,C,D) :-
A #= 1 ,
(B #= 1 ; B #\= 0 ) ,
(C #>= 35 ; D #\= 5) ,
(C #>= 38 ; D #= 6).
And my results, on backtracking (line breaks inserted for readability):
10 ?- simplify(A,B,C,D).
A = 1,
B = 1,
C in 38..sup ;
A = 1,
B = 1,
D = 6,
C in 35..sup ;
A = 1,
B = 1,
C in 38..sup,
D in inf..4\/6..sup ;
A = 1,
B = 1,
D = 6 ;
A = 1,
B in inf.. -1\/1..sup,
C in 38..sup ;
A = 1,
D = 6,
B in inf.. -1\/1..sup,
C in 35..sup ;
A = 1,
B in inf.. -1\/1..sup,
C in 38..sup,
D in inf..4\/6..sup ;
A = 1,
D = 6,
B in inf.. -1\/1..sup.
11 ?-
So, the program yielded 8 results, among them the 2 you were interested in (5th and 8th):
A = 1,
B in inf.. -1\/1..sup,
C in 38..sup ;
A = 1,
D = 6,
B in inf.. -1\/1..sup.
The others were redundant, and maybe could be eliminated using simple, automatable logic rules:
1st or 5th ==> 5th [B == 1 or B != 0 --> B != 0]
2nd or 4th ==> 4th [C >= 35 or True --> True ]
3rd or 1st ==> 1st ==> 5th [D != 5 or True --> True ]
4th or 8th ==> 8th [B == 1 or B != 0 --> B != 0]
6th or 8th ==> 8th [C >= 35 or True --> True ]
7th or 3rd ==> 3rd ==> 5th [B == 1 or B != 0 --> B != 0]
I know it's a long way behind being a general solution, but as I said, hopefully it's a start...
P.S. I used "regular" AND and OR (, and ;) because clpfd's ones (#/\ and #\/) gave a very weird result that I couldn't understand myself... maybe someone more experienced can cast some light on it...

Convert decimal number to excel-header-like number

0 = A
1 = B
...
25 = Z
26 = AA
27 = AB
...
701 = ZZ
702 = AAA
I cannot think of any solution that does not involve loop-bruteforce :-(
I expect a function/program that accepts a decimal number and returns a string as a result.
Haskell, 78 57 50 43 chars
o=map(['A'..'Z']:)$[]:o
e=(!!)$o>>=sequence
Other entries aren't counting the driver, which adds another 40 chars:
main=interact$unlines.map(e.read).lines
A new approach, using a lazy, infinite list, and the power of Monads! And besides, using sequence makes me :), using infinite lists makes me :o
If you look carefully, the Excel representation is like a base-26 number, but not exactly the same as base 26.
In the Excel conversion, Z + 1 = AA, while in base 26, Z + 1 = BA.
The algorithm is almost the same as decimal to base-26 conversion, with just one change.
In base 26, we do a recursive call by passing it the quotient, but here we pass it quotient - 1:
function decimalToExcel(num)
    // base condition of recursion.
    if num < 26
        print 'A' + num
    else
        quotient = num / 26;   // integer division
        remainder = num % 26;
        // recursive calls.
        decimalToExcel(quotient - 1);
        decimalToExcel(remainder);
    end-if
end-function
Java Implementation
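For reference, here is a direct Python translation of the pseudocode above (a sketch of mine; the linked Java implementation is not reproduced here):

def decimal_to_excel(num):
    # Base case: a single "digit" maps straight to a letter.
    if num < 26:
        return chr(ord('A') + num)
    quotient, remainder = divmod(num, 26)
    # Pass quotient - 1, which is the one change versus plain base 26.
    return decimal_to_excel(quotient - 1) + decimal_to_excel(remainder)

for n in (0, 25, 26, 27, 701, 702):
    print(n, decimal_to_excel(n))  # A, Z, AA, AB, ZZ, AAA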
Python, 44 chars
Oh c'mon, we can do better than lengths of 100+ :
X=lambda n:~n and X(n/26-1)+chr(65+n%26)or''
Testing:
>>> for i in 0, 1, 25, 26, 27, 700, 701, 702:
... print i,'=',X(i)
...
0 = A
1 = B
25 = Z
26 = AA
27 = AB
700 = ZY
701 = ZZ
702 = AAA
Since I am not sure what base you're converting from and what base you want (your title suggests one and your question the opposite), I'll cover both.
Algorithm for converting ZZ to 701
First recognize that we have a number encoded in base 26, where the "digits" are A..Z. Set a counter a to zero and start reading the number at the rightmost (least significant) digit. Progressing from right to left, read each "digit" and convert it to a decimal number. Multiply this by 26^a and add it to the result. Increment a and process the next digit.
Algorithm for converting 701 to ZZ
We simply factor the number into powers of 26, much like we do when converting to binary. Simply take num % 26, convert it to an A..Z "digit" and append it to the converted number (assuming it's a string), then integer-divide your number. Repeat until num is zero. After this, reverse the converted number string to have the most significant digit first.
Edit: As you point out, once two-digit numbers are reached we actually have base 27 for all non-least-significant digits. Simply apply the same algorithms here, incrementing any "constants" by one. Should work, but I haven't tried it myself.
Re-edit: For the ZZ -> 701 case, don't increment the base exponent. Do, however, keep in mind that A is no longer 0 (but 1), and so forth.
Explanation of why this is not a base 26 conversion
Let's start by looking at the real base-26 positional system. (Rather, look at base 4, since it has fewer digits.) The following is true (assuming A = 0):
A = AA = A * 4^1 + A * 4^0 = 0 * 4^1 + 0 * 4^0 = 0
B = AB = A * 4^1 + B * 4^0 = 0 * 4^1 + 1 * 4^0 = 1
C = AC = A * 4^1 + C * 4^0 = 0 * 4^1 + 2 * 4^0 = 2
D = AD = A * 4^1 + D * 4^0 = 0 * 4^1 + 3 * 4^0 = 3
BA = B * 4^1 + A * 4^0 = 1 * 4^1 + 0 * 4^0 = 4
And so forth... notice that AA is 0 rather than 4 as it would be in Excel notation. Hence, Excel notation is not base 26.
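To see the difference in code, here is a rough Python sketch of the string-to-number direction, treating the letters as 1-based digits (per the re-edit note above) so that AA comes out as 26 (names are mine):

def excel_to_decimal(s):
    # Interpret each letter as a 1-based digit (A=1 .. Z=26),
    # then subtract 1 at the end to match the 0-based numbering (A=0).
    n = 0
    for ch in s:
        n = n * 26 + (ord(ch) - ord('A') + 1)
    return n - 1

for s in ("A", "Z", "AA", "AB", "ZZ", "AAA"):
    print(s, excel_to_decimal(s))  # 0, 25, 26, 27, 701, 702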
In Excel VBA ... the obvious choice :)
Sub a()
For Each O In Range("A1:AA1")
k = O.Address()
Debug.Print Mid(k, 2, Len(k) - 3); "="; O.Column - 1
Next
End Sub
Or for getting the column number in the first row of the WorkSheet (which makes more sense, since we are in Excel ...)
Sub a()
For Each O In Range("A1:AA1")
O.Value = O.Column - 1
Next
End Sub
Or better yet: 56 chars
Sub a()
Set O = Range("A1:AA1")
O.Formula = "=Column()"
End Sub
Scala: 63 chars
def c(n:Int):String=(if(n<26)""else c(n/26-1))+(65+n%26).toChar
Prolog, 109 123 bytes
Convert from decimal number to Excel string:
c(D,E):- d(D,X),atom_codes(E,X).
d(D,[E]):-D<26,E is D+65,!.
d(D,[O|M]):-N is D//27,d(N,M),O is 65+D rem 26.
That code does not work for c(27, N), which yields N='BB'
This one works fine:
c(D,E):-c(D,26,[],X),atom_codes(E,X).
c(D,B,T,M):-(D<B->M-S=[O|T]-B;(S=26,N is D//S,c(N,27,[O|T],M))),O is 91-S+D rem B,!.
Tests:
?- c(0, N).
N = 'A'.
?- c(27, N).
N = 'AB'.
?- c(701, N).
N = 'ZZ'.
?- c(702, N).
N = 'AAA'
Converts from Excel string to decimal number (87 bytes):
x(E,D):-x(E,0,D).
x([C],X,N):-N is X+C-65,!.
x([C|T],X,N):-Y is (X+C-64)*26,x(T,Y,N).
F# : 166 137
let rec c x = if x < 26 then [(char) ((int 'A') + x)] else List.append (c (x/26-1)) (c (x%26))
let s x = new string (c x |> List.toArray)
PHP: At least 59 and 33 characters.
<?for($a=NUM+1;$a>=1;$a=$a/26)$c=chr(--$a%26+65).$c;echo$c;
Or the shortest version:
<?for($a=A;$i++<NUM;++$a);echo$a;
Using the following formula, you can figure out the last character in the string:
char transform(int num)
{
    return (char)(num + 65); // Transform int to an ASCII letter; 'A' is 65.
}

char lastChar(int num)
{
    return transform(num % 26);
}
Using this, we can make a recursive function (I don't think it's brute force).
string getExcelHeader(int decimal)
{
    if (decimal >= 26)
        return getExcelHeader(decimal / 26 - 1) + transform(decimal % 26);
    else
        return transform(decimal);
}
Or.. something like that. I'm really tired, maybe I should stop answering questions and go to bed :P