On Fri, 23 Apr 1999, Michael Cunningham wrote:

> This would be using gcc on a x86 running linux.

Integers and long integers on most (if not all) 32 bit platforms are both 32 bits. On 64 bit architectures, you'll probably find a 64 bit long integer alongside a 32 bit plain integer. The "long long" type, which is a gcc extension and should NOT be considered portable to other compilers or even to other architectures that gcc supports, emulates a 64 bit integer.

We can represent the maximum value of an arbitrarily sized unsigned integer type with the following series:

  2^0 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1)

where n is (sizeof(type) * 8). Thanks to a property of binary numbers, we can shortcut around this: the above series is equal to the expression (2^n) - 1. Thus, the maximum unsigned value for a 32 bit number is ((2^32) - 1), or 4294967295. If you're curious about the minus one, it's because we need space for zero: n bits give 2^n distinct bit patterns, and one of them is spent representing zero, leaving (2^n) - 1 as the largest value.

Signed integers are only slightly different. One bit is reserved for keeping track of whether the number is positive or negative. Note that this reservation does not make the integer type "smaller," per se, but it does mean the type cannot represent as large a positive number. In other words, we still represent the same number of values. The range for a signed 16 bit integer is -32768 to 32767 (again, we need space for zero, which is why it doesn't go up to 32768). That gives us 65536 values, the same as for an unsigned 16 bit type (0 to 65535).

It's left up to you to calculate the maximum value for a long long. My best guess is that we're dealing with a 19 digit number.

-dak : Hopefully I didn't make basic math or counting errors...again.

+------------------------------------------------------------+
| Ensure that you have read the CircleMUD Mailing List FAQ:  |
| http://qsilver.queensu.ca/~fletchra/Circle/list-faq.html   |
+------------------------------------------------------------+
This archive was generated by hypermail 2b30 : 12/15/00 PST