Suppose there is a std::bitset with a size of, say, 1000 bits. The task is to display this number in decimal. Since such a number does not fit into any built-in numeric type, it has to be converted to a string (std::string or const char *, it does not matter). What are the options? Only long arithmetic comes to mind, but I would like a more pleasant solution.

  • Why do you need such a string with a decimal number? - Abyx
  • So do you need to display it on the screen, or perform arithmetic operations directly on the string representation? - ߊߚߤߘ
  • Define the operators +, -, *, / for your bitset and use the standard algorithm for converting to a string. - Ainar-G
  • std::cout << b; | python3 -c "print(int(input(), 2))" - jfs

2 answers

“A more pleasant option” here is to use a ready-made library instead of implementing arbitrary-precision arithmetic by hand.

To display a std::bitset<> on the screen in decimal, you can use the GMP library:

    #include <bitset>
    #include <iostream>
    #include <gmp.h>

    int main() {
        std::bitset<70> b;
        b[66] = 1;
        std::cout << b << std::endl;

        mpz_t z;
        mpz_init_set_str(z, b.to_string().c_str(), 2);
        std::cout << z << std::endl;
        mpz_clear(z);
    }

Example:

    $ g++ main.cc -o main -lgmpxx -lgmp
    $ ./main

Output:

    0001000000000000000000000000000000000000000000000000000000000000000000
    73786976294838206464

GMP does not use a quadratic algorithm for conversion to decimal, so you can print numbers with at least a million digits. Related: Writing huge strings in python.
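For reference, the same conversion can also go through GMP's C++ class interface (gmpxx.h). A minimal sketch, assuming a 1000-bit bitset as in the question:

    #include <bitset>
    #include <iostream>
    #include <gmpxx.h>

    int main() {
        std::bitset<1000> b;
        b[999] = 1;  // the value 2^999
        // mpz_class parses the binary string produced by std::bitset.
        mpz_class z(b.to_string(), 2);
        std::cout << z << std::endl;  // prints the 301-digit decimal value of 2^999
    }

It compiles the same way, with -lgmpxx -lgmp.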

Of course, you can manipulate the bits directly in GMP, without std::bitset<>:

    #include <iostream>
    #include <gmp.h>

    int main() {
        mpz_t b;
        mpz_init(b);
        mpz_setbit(b, 66);
        std::cout << b << std::endl;
        mpz_clear(b);
    }

Output:

 73786976294838206464 

    Why not use an array of, say, unsigned long? A string is a deliberately dead-end path: these are numbers, after all, so you will probably have to perform arithmetic and other operations on them; a rough sketch of such a conversion is given below.

    • 1000 bits will not fit into an unsigned 64-bit integer; the question has not changed since August 19. This answer does not answer the question. - nick_n_a
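A rough sketch of that “long arithmetic” conversion without any external library (the to_decimal helper here is purely illustrative): the value is kept in base-10^9 limbs and updated as value = value*2 + bit for every bit, from the most significant to the least significant. This is quadratic in the number of bits, unlike GMP's conversion, but for 1000 bits that is negligible.

    #include <bitset>
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    // Illustrative helper: convert a bitset to a decimal string by schoolbook
    // long arithmetic, keeping the value in little-endian base-10^9 limbs.
    template <std::size_t N>
    std::string to_decimal(const std::bitset<N>& b) {
        const uint32_t BASE = 1000000000;   // 10^9 per limb
        std::vector<uint32_t> limbs{0};
        for (std::size_t i = N; i-- > 0;) {
            // value = value * 2 + current bit
            uint32_t carry = b[i] ? 1 : 0;
            for (std::size_t j = 0; j < limbs.size(); ++j) {
                uint64_t cur = uint64_t(limbs[j]) * 2 + carry;
                limbs[j] = uint32_t(cur % BASE);
                carry = uint32_t(cur / BASE);
            }
            if (carry) limbs.push_back(carry);
        }
        // Most significant limb without padding, the rest zero-padded to 9 digits.
        std::string s = std::to_string(limbs.back());
        for (std::size_t j = limbs.size() - 1; j-- > 0;) {
            std::string part = std::to_string(limbs[j]);
            s += std::string(9 - part.size(), '0') + part;
        }
        return s;
    }

    int main() {
        std::bitset<70> b;
        b[66] = 1;
        std::cout << to_decimal(b) << std::endl;  // 73786976294838206464
    }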