Least and Most Significant Bit
 thePowerworks
 Posts: 71
 Joined: Wed Feb 20, 2008 5:40 am UTC
Least and Most Significant Bit
For an assembly programming exercise we have to write a program that does the following:
1) Computes the number of the least significant 1 bit set.
2) The Number of the most significant 1 bit set.
3) The total number of 1 bits set.
The example given is the constant 00fffff0 (hex): the LSB set is 4, the MSB set is 23, and there are 20 bits set in total.
I understand the 20 bits set total, but I don't understand the LSB being 4 and the MSB being 23.
I'm not asking for anyone to do the assignment for me, I just need help understanding MSB and LSB. How can the LSB be 4?
Thanks for any help!
Re: Least and Most Significant Bit
In a decimal number the digit furthest to the right counts units, the next to the left is tens, then hundreds, etc. Therefore the rightmost digit is the least significant of them, the leftmost digit the most significant. The same goes for the bits of binary numbers.
In fffff0 the least significant nonzero bit is the fifth bit from the right. The first four on the right form that hex 0 at the end, the next is the rightmost bit of that last f.
Numbering of bits usually starts from zero, so the rightmost bit is called bit 0, and the fifth one from the right is called bit 4. And there's your answer.
Edited to add: A good reason for the bits to be numbered from 0 is that bit n represents a value of 2^n. For example, if bit 0 is set it contributes a value of 2^0 = 1 to the total.
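To make the numbering concrete, here is a minimal C sketch (the helper names `bit` and `bit_value` are mine, not from the assignment): bit n is extracted with a shift and a mask, and contributes 2^n to the total when set.

```c
#include <assert.h>
#include <stdint.h>

/* Value (0 or 1) of bit n, numbering from 0 at the right. */
static unsigned bit(uint32_t x, unsigned n) {
    return (x >> n) & 1u;
}

/* The value bit n contributes when set: 2^n. */
static uint32_t bit_value(unsigned n) {
    return (uint32_t)1 << n;
}
```

For 0xfffff0, `bit(0xfffff0, 3)` is 0 and `bit(0xfffff0, 4)` is 1, matching the fifth-bit-from-the-right explanation above.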
Re: Least and Most Significant Bit
In real life, when you're talking about the least/most significant bit in a byte or byte in a word in an arbitrary binary representation, it depends on a property of the representation called endianness.
The MSB and LSB are basically defining the endianness of the binary representation.
Re: Least and Most Significant Bit
0xBADFEED wrote:In real life, when you're talking about the least/most significant bit in a byte or byte in a word in an arbitrary binary representation, it depends on a property of the representation called endianness.
The MSB and LSB are basically defining the endianness of the binary representation.
What you said can be misunderstood, as thePowerworks does not have to care about endianness at all.
The lsb/msb are defined by the value that a bit/byte represents. The least significant bit is by definition bit 0, the one that represents the value 2^0. In other words, the lsb/msb by definition are based on the standard representation of a binary number.
When you shift an integer left or right, you do not care about the endianness; the compiler has to do whatever it takes to make it seem as if the bits were in msb-to-lsb order. How that number is stored in memory, the endianness, is implementation-defined, is not part of the definition of lsb/msb, and is almost completely hidden by the compiler. This is why the C and C++ standards define bit shifts in terms of multiplying/dividing by powers of 2.
It is only when you access an integer not by its value directly but byte by byte from memory or file that endianness matters, and that is not happening here.
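A small C illustration of that point (hypothetical helper names): shifts act on the value of the integer, regardless of how its bytes happen to be laid out in memory.

```c
#include <assert.h>
#include <stdint.h>

/* Shifting left by k multiplies by 2^k; the byte order in memory is irrelevant. */
static uint32_t times8(uint32_t x) {
    return x << 3;   /* x * 2^3 */
}

/* Shifting right by k divides by 2^k (for unsigned values). */
static uint32_t div16(uint32_t x) {
    return x >> 4;   /* x / 2^4 */
}
```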
 Xanthir
 My HERO!!!
 Posts: 5423
 Joined: Tue Feb 20, 2007 12:49 am UTC
 Location: The Googleplex
 Contact:
Re: Least and Most Significant Bit
thePowerworks wrote:The example given is the constant 00fffff0 (hex) the LSB set is 4, the MSB set is 23 and there are 20 bits set total.
Transform 00fffff0 into binary. You get 0000 0000 1111 1111 1111 1111 1111 0000. The least significant bit set is the fifth from the end, which is 4 when you're dealing with 0-based numbering. Similarly, the most significant bit set is the 24th from the end, which is 23 when dealing with 0-based numbering.
Endianness does matter for this, but the example makes it clear that the assignment considers the MSB to be on the left, and the LSB to be on the right.
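A direct way to verify the example is a naive bit-by-bit scan; this C sketch (helper names are mine) reproduces lsb 4, msb 23, and 20 bits set for 0x00fffff0.

```c
#include <assert.h>
#include <stdint.h>

/* Index of the lowest set bit (0-based), or -1 if no bit is set. */
static int lowest_set(uint32_t x) {
    for (int i = 0; i < 32; i++)
        if ((x >> i) & 1u) return i;
    return -1;
}

/* Index of the highest set bit (0-based), or -1 if no bit is set. */
static int highest_set(uint32_t x) {
    for (int i = 31; i >= 0; i--)
        if ((x >> i) & 1u) return i;
    return -1;
}

/* Total number of set bits. */
static int count_set(uint32_t x) {
    int n = 0;
    for (int i = 0; i < 32; i++)
        n += (int)((x >> i) & 1u);
    return n;
}
```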
(defun fibs (n &optional (a 1) (b 1)) (take n (unfold '+ a b)))
 thePowerworks
 Posts: 71
 Joined: Wed Feb 20, 2008 5:40 am UTC
Re: Least and Most Significant Bit
Thanks, guys. I am programming it as we speak. I just needed clarity on the actual definitions.
Someone was telling me you had to compliment it, then logical-AND it, and all this other stuff.
But this makes more sense.
Re: Least and Most Significant Bit
The way that I worded it is a bit misleading.
It is more natural to say that the endianness of a binary format determines the LSB and the MSB for that format rather than the other way around.
jaap wrote:When you shift an integer left or right, you do not care about the endianness; the compiler has to do whatever it takes to make it seem as if the bits were in msb-to-lsb order. How that number is stored in memory, the endianness, is implementation-defined, is not part of the definition of lsb/msb, and is almost completely hidden by the compiler. This is why the C and C++ standards define bit shifts in terms of multiplying/dividing by powers of 2.
I don't know why you're talking about C/C++. The question is about some arbitrary assembly language, in which case endianness does matter. If I read some data from memory and get:
00000001
The endianness of the machine tells me if I have the MSB or LSB set.
The example the OP gave is the reverse. It tells you which is the MSB and LSB in the constant value so that you know what the orientation of the bitstring is.
All this stuff is taken care of in C/C++ because the compiler is written for a specific architecture. But we're talking about a level where this stuff is not taken care of for you.
What I'm talking about is just an aside anyway and is in no way required to solve the problem.
(And I know that endianness usually refers to byte, not bit, orientation; the bit-based example was just to make it easier to talk about the concept.)
 Yakk
 Poster with most posts but no title.
 Posts: 11129
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Least and Most Significant Bit
There is a slow and a fast solution to this.
The slow solution does 1 check operation for each bit, so if you have N bits in your number (counting all 0s and 1s), it takes N checks to solve.
The fast solution does the binary log of N checks.
(Note that most CPUs have a fast "is this number identically zero" check instruction, which you exploit.)
Doing the "1s count" probably does require a check for each and every bit (or at least those between the high and low water marks), practically.
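The fast approach can be sketched in C (a hypothetical helper, not the required assembly): halve the window each step using an "is this half zero" test, so a 32-bit word needs only 5 checks to locate the lowest set bit.

```c
#include <assert.h>
#include <stdint.h>

/* Index of the lowest set bit of a nonzero x, in log2(32) = 5 checks. */
static int lowest_set_log(uint32_t x) {
    int pos = 0;
    for (int width = 16; width > 0; width >>= 1) {
        uint32_t mask = ((uint32_t)1 << width) - 1;  /* the low `width` bits */
        if ((x & mask) == 0) {  /* low half empty: the answer is in the high half */
            x >>= width;
            pos += width;
        }
    }
    return pos;
}
```

The same halving idea, scanning from the top instead of the bottom, finds the msb in the same number of checks.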
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision  BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

 Posts: 778
 Joined: Mon Aug 11, 2008 10:58 pm UTC
 Location: Palo Alto, CA
Re: Least and Most Significant Bit
Yakk wrote:Doing the "1s count" does probably require a check for each and every bit (or at least those between the high and low water marks), practically.
On many architectures a popcount can be done with a single instruction. On architectures where it can't, it can still be done much faster than a check on every bit. There's a huge corpus of clever popcount hacks out there, if you're interested in this sort of thing. Most of them boil down to a sequence of log(N) mask + shift + adds, though there are some really clever ways to use multiply that are sometimes a performance win.
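For reference, the best-known mask + shift + add version for 32 bits looks like this (the standard SWAR popcount; the function name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* Count set bits via parallel partial sums (SWAR popcount). */
static int popcount32(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555u);                  /* 2-bit field sums */
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);  /* 4-bit field sums */
    x = (x + (x >> 4)) & 0x0f0f0f0fu;                  /* 8-bit field sums */
    return (int)((x * 0x01010101u) >> 24);             /* add the four bytes */
}
```

The final multiply is one of those "clever ways to use multiply": it sums all four byte counts into the top byte in a single instruction.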
GENERATION 16 + 31i: The first time you see this, copy it into your sig on any forum. Square it, and then add i to the generation.
 Yakk
 Poster with most posts but no title.
 Posts: 11129
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Least and Most Significant Bit
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision  BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.
Re: Least and Most Significant Bit
So, this is along similar lines; hopefully someone here can help. I'm writing a basic quicksort assembly program that works fine for all the tests I can throw at it. It's given the number (N) of total numbers to be sorted and a pointer to the numbers themselves. In grading my program (we have a self-grading program) it works fine for about 20 tests, till they throw something strange at it. The last test states this: "Testing using N with Most significant Bit on. Sort failed testcase for N with MSB on, check sort code for use of signed jumps." Mostly I'm unsure what the MSB is supposed to mean. I would assume it means that they are attempting to throw "negative" numbers at my code, but I'm really unsure about it. Any help would be greatly appreciated!
Re: Least and Most Significant Bit
Auz wrote:So, this is along similar lines; hopefully someone here can help. I'm writing a basic quicksort assembly program that works fine for all the tests I can throw at it. It's given the number (N) of total numbers to be sorted and a pointer to the numbers themselves. In grading my program (we have a self-grading program) it works fine for about 20 tests, till they throw something strange at it. The last test states this: "Testing using N with Most significant Bit on. Sort failed testcase for N with MSB on, check sort code for use of signed jumps." Mostly I'm unsure what the MSB is supposed to mean. I would assume it means that they are attempting to throw "negative" numbers at my code, but I'm really unsure about it. Any help would be greatly appreciated!
They are testing whether you treat N as an unsigned integer in your program by giving a value larger than what a signed integer could hold. Such a value has its most significant bit set (e.g. N = 2147483648 = 2^31, assuming 32-bit integers), and if you were treating that value as a signed two's-complement integer it would represent a negative value. There are two types of conditional jumps: some assume the previous instructions (such as a compare or subtract instruction) were applied to unsigned integers, and others assume they were using signed integers. If you use the wrong type, it may not work for large N.
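The same pitfall can be shown in C (hypothetical helper names; the uint32_t to int32_t cast assumes a two's-complement target): the same bit pattern orders differently depending on whether the comparison, and the jump that acts on it, is signed or unsigned.

```c
#include <assert.h>
#include <stdint.h>

/* Compare as signed values: behaves like a signed jump (e.g. jl) after a cmp. */
static int less_than_signed(uint32_t a, uint32_t b) {
    return (int32_t)a < (int32_t)b;
}

/* Compare as unsigned values: behaves like an unsigned jump (e.g. jb) after a cmp. */
static int less_than_unsigned(uint32_t a, uint32_t b) {
    return a < b;
}
```

With N = 0x80000000 (MSB on), a signed comparison sees a negative number, so a signed jump branches the wrong way for a bound check like i < N.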
Re: Least and Most Significant Bit
So yeah, really simple. It wasn't in the main section of the project, which is why I was lost. That, and I've been up for a while getting this done. Thanks for the help!
Re: Least and Most Significant Bit
thePowerworks wrote:Thanks guys.. I am programming it as we speak. I just needed clarity on the actual definitions.
Someone was telling me you had to compliment it then logical and it and all this other stuff.
But this makes more sense.
Hmm, you could also have listened to that "someone".
If you are dealing with 2's complement (not "compliment", by the way), then mathematically they were correct.
I will leave the proof and learning to you, but here is an example:
For the sake of simplicity, let's suppose we have 8-bit words instead of 16 or 32.
let N = 6 = 110, which in 2's complement is 00000110
let N* = -6 → 11111010 in 2's complement
so this "someone" was telling you to take N & N* (that is, N AND its two's complement, the same as N & -N) to find the lsb, and it works.
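In C terms (the helper name is mine), that is the familiar N & -N trick: negation flips every bit above the lowest set bit and keeps that bit itself, so the AND isolates it.

```c
#include <assert.h>
#include <stdint.h>

/* N & -N keeps only the lowest set bit of N (as a value, not an index). */
static uint32_t isolate_lsb(uint32_t n) {
    return n & (0u - n);   /* 0u - n is n's two's complement */
}
```

For N = 6 this gives 2 (binary 10), the value of the lowest set bit; a further step is still needed if you want the bit's index rather than its value.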
Another method is to use a cascading, logarithmic number of logical-AND tests.
Here is code that will find the lsb in the low 16 bits (of a 32- or 64-bit word); it's easy to extrapolate it to 32 and 64 bits [which I've done for machines that don't have built-in instructions for msb/lsb].
Code: Select all
if (y & 0x0000ffff)                  // any bits set in 0-15?
{
    if (y & 0x000000ff)              // are any bits in 0-7 set?
    {
        if (y & 0x0000000f)          // are any bits in 0-3 set?
        {
            if (y & 0x00000003)      // is bit 0 or 1 set?
            {
                if (y & 0x00000001)  // bit 0 is set
                    return 0;
                return 1;            // bit 0 not set, must be bit 1
            }
            if (y & 0x00000004)      // is bit 2 set?
                return 2;
            return 3;                // we must have bit 3 set
        }
        if (y & 0x00000030)          // is bit 4 or 5 set?
        {
            if (y & 0x00000010)      // is bit 4 set?
                return 4;
            return 5;                // we must have bit 5 set
        }
        if (y & 0x00000040)          // is bit 6 set?
            return 6;
        return 7;
    } // 0x000000ff
    if (y & 0x00000f00)              // are any of bits 8-11 set?
    {
        if (y & 0x00000300)          // is bit 8 or 9 set?
        {
            if (y & 0x00000100)      // is bit 8 set?
                return 8;
            return 9;
        }
        if (y & 0x00000400)          // is bit 10 set?
            return 10;
        return 11;
    } // 0x00000f00
    if (y & 0x00003000)              // is bit 12 or 13 set?
    {
        if (y & 0x00001000)          // is bit 12 set?
            return 12;
        return 13;
    }
    if (y & 0x00004000)              // is bit 14 set?
        return 14;
    return 15;
}
Yet another method is to utilize a lookup table of size 2^8, 2^12, or 2^16 and perform any necessary shifts and position adjustment calculations.
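A sketch of that table idea in C (all names are mine), using a 2^8-entry table: precompute the lsb index for every byte value, then probe the word one byte at a time, adjusting the result by the shift.

```c
#include <assert.h>
#include <stdint.h>

static int8_t lsb_table[256];  /* lsb index for each nonzero byte; [0] unused */

static void init_lsb_table(void) {
    for (int v = 1; v < 256; v++) {
        int i = 0;
        while (((v >> i) & 1) == 0) i++;
        lsb_table[v] = (int8_t)i;
    }
}

/* Lowest set bit of x via at most four byte probes; -1 if x == 0. */
static int lowest_set_lut(uint32_t x) {
    for (int shift = 0; shift < 32; shift += 8) {
        uint32_t byte = (x >> shift) & 0xffu;
        if (byte) return shift + lsb_table[byte];
    }
    return -1;
}
```

A 2^16 table halves the number of probes at the cost of 64 KB of memory; the trade-off depends on cache behavior.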
 Yakk
 Poster with most posts but no title.
 Posts: 11129
 Joined: Sat Jan 27, 2007 7:27 pm UTC
 Location: E pur si muove
Re: Least and Most Significant Bit
Code: Select all
struct bit_result {
    int total;
    bool zero;
    int least;
    int most;
};

template<int number=32>
bit_result find_bits( int value ) {
    bit_result lower = find_bits<number/2>(value);
    bit_result higher = find_bits<number/2>(value >> (number/2));
    bit_result retval;
    retval.zero = lower.zero && higher.zero;
    retval.least = lower.zero ? (higher.least + number/2) : lower.least;
    retval.most = higher.zero ? lower.most : (higher.most + number/2); // take the high half when it is nonzero
    retval.total = higher.total + lower.total;
    return retval;
}

template<>
bit_result find_bits<1>(int value) {
    bit_result retval;
    retval.zero = !(value & 1); // zero means the bit is NOT set
    retval.most = 0;
    retval.least = 0;
    retval.total = (value & 1);
    return retval;
}
Did I make any mistakes? You can optimize the end-recursive case as shallow or as deep as you want.
You can also write a version that passes in a mask integer that is shifted in the recursions. This version can compile down to one almost as efficient as a hand-coded one. As in:
Code: Select all
// Function templates cannot be partially specialized, so the base case
// lives in a class template specialization instead.
template<int mask = 1, int number = 32>
struct bit_finder {
    static bit_result find( int value ) {
        bit_result lower = bit_finder<mask, number/2>::find(value);
        bit_result higher = bit_finder<(mask << (number/2)), number/2>::find(value);
        bit_result retval;
        retval.zero = lower.zero && higher.zero;
        retval.least = lower.zero ? (higher.least + number/2) : lower.least;
        retval.most = higher.zero ? lower.most : (higher.most + number/2);
        retval.total = higher.total + lower.total;
        return retval;
    }
};

template<int mask>
struct bit_finder<mask, 1> {
    static bit_result find( int value ) {
        bit_result retval;
        retval.zero = !(value & mask);
        retval.most = 0;
        retval.least = 0;
        retval.total = (value & mask) ? 1 : 0;
        return retval;
    }
};
Further optimizations are of course possible.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision  BR
Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.