Best way to convert an n-bit binary number to a decimal string?
Hi, I was making a class that can hold and compute very long numbers, like n-bit numbers: if a 32-bit int would be too little, it adds another 32-bit int in a dynamic linked list and uses that for computing. Computing the numbers in binary is very easy, but converting to a decimal string is very hard.
I've made a function that takes 2 decimal strings, adds them together, and returns the summed string. For example
sum( "2302030", "3023852" ); will return "5325882"
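In case it helps, here's a minimal sketch of how my sum() works (digit-by-digit addition with a carry, walking both strings from the right; not my exact code):

```cpp
#include <algorithm>
#include <string>

// Sketch: add two non-negative decimal strings digit by digit,
// carrying into the next column, then reverse the result.
std::string sum(const std::string& a, const std::string& b) {
    std::string result;
    int carry = 0;
    int i = static_cast<int>(a.size()) - 1;
    int j = static_cast<int>(b.size()) - 1;
    while (i >= 0 || j >= 0 || carry) {
        int d = carry;
        if (i >= 0) d += a[i--] - '0';   // next digit of a, if any
        if (j >= 0) d += b[j--] - '0';   // next digit of b, if any
        carry = d / 10;
        result.push_back(static_cast<char>('0' + d % 10));
    }
    std::reverse(result.begin(), result.end());
    return result;
}
```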
I was thinking that, for a long number that uses over 32 bits, the 33rd bit could add the string "2147483648" (2^31) twice onto the lower-order number, the 34th bit would add it 4 times, and the 35th bit 8 times. However, I quickly realized that the exponential rise in computation time isn't a good idea, so what do I do? I was hoping to make a class that could compute numbers of over 1000 bits. I know that Mathematica and other math programs seem to have no limit on their numbers, so I wanted something like that too, but I'm not sure what to search for.
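To make the idea concrete, here's a rough sketch of a bit-by-bit conversion built only on a sum() like mine (I'm assuming the bits sit in a vector&lt;bool&gt;, least significant first, just for illustration). Instead of repeating the same addition 2, 4, 8... times per bit, it keeps a running place-value string and doubles it once per bit with a single sum() call:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Decimal-string addition as described above (digit-by-digit with carry).
std::string sum(const std::string& a, const std::string& b) {
    std::string result;
    int carry = 0;
    int i = static_cast<int>(a.size()) - 1;
    int j = static_cast<int>(b.size()) - 1;
    while (i >= 0 || j >= 0 || carry) {
        int d = carry;
        if (i >= 0) d += a[i--] - '0';
        if (j >= 0) d += b[j--] - '0';
        carry = d / 10;
        result.push_back(static_cast<char>('0' + d % 10));
    }
    std::reverse(result.begin(), result.end());
    return result;
}

// Sketch: convert bits (bit 0 = least significant) to a decimal string.
// placeValue holds the decimal value of the current bit ("1", "2", "4", ...)
// and is doubled by one sum() call per bit, so the number of additions
// grows linearly with the bit count instead of exponentially.
std::string bitsToDecimal(const std::vector<bool>& bits) {
    std::string total = "0";
    std::string placeValue = "1";
    for (bool bit : bits) {
        if (bit) total = sum(total, placeValue);
        placeValue = sum(placeValue, placeValue);  // double the place value
    }
    return total;
}
```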
The problem seemed simple at first, but hmm :/
I was looking at the bits themselves, and binary 1010 equals decimal 10, so maybe there's a pattern I can use to convert? I can't see it though.
Thanks for any help!