A binary number with the highest bit clear is considered to be a positive two's complement number.
A single byte represents unsigned numbers in the range 0..255.
There are several ways to represent signed numbers.
One is sign/magnitude, in which the upper bit is the sign (1 for negative) and the remaining bits represent the absolute value of the number. The problem with this format is that arithmetic involving negative numbers is awkward. Not to mention there are two representations for zero, positive and negative.
The most common representation, and the one supported by computer hardware, is two's complement, in which a byte represents numbers in the range -128..127. 0 is represented by $00, naturally, 1 by $01, and -1 by $FF. Adding or subtracting two's complement numbers is very easy.
There are other representations with specialized uses.
Maybe this will help to clarify:
http://www.mathcs.emory.edu/~cheung/Cou ... ess-n.html