Whether you encode the number as signed or as unsigned, the reliable region is the same: 0 <= x < 2^(bits-1), where both interpretations agree. You need to know whether it's stored signed or unsigned when you decode, but not when you encode, unless you want to throw an error for numbers that don't fit.
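A minimal C sketch of that idea, assuming 8-bit storage and a two's-complement platform (the bit width and the specific values are mine, just for illustration):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Values in 0 <= x < 2^(bits-1) (here 0..127) decode the same
       whether read back as signed or unsigned. */
    uint8_t stored = 100;
    printf("as unsigned: %u\n", (unsigned)stored);      /* 100 */
    printf("as signed:   %d\n", (int)(int8_t)stored);   /* 100 */

    /* Outside that region the two interpretations diverge:
       the same byte is 200 unsigned, -56 signed. */
    stored = 200;
    printf("as unsigned: %u\n", (unsigned)stored);      /* 200 */
    printf("as signed:   %d\n", (int)(int8_t)stored);   /* -56 on two's complement */
    return 0;
}
```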
Compare it to casting in C: cast -17 to (unsigned). The result isn't 0; you've just told the compiler to read the same stored bits as an unsigned value.
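A quick sketch of that cast, assuming the usual 32-bit int on a two's-complement machine:

```c
#include <stdio.h>

int main(void) {
    int s = -17;
    unsigned u = (unsigned)s;   /* UINT_MAX - 16, not 0 */
    printf("%u\n", u);          /* 4294967279 with 32-bit unsigned */

    /* Casting back recovers the original value: the stored bits
       never changed, only the type they were read as. */
    printf("%d\n", (int)u);     /* -17 on two's complement */
    return 0;
}
```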