I'm not quite sure it's still right to use sizeof(a)-1 as the end bias. It is consistent in one way: 0 is the first element from the start, and <0 is the first from the end.
On the other hand, it's off-by-one with respect to negative single indexing (ignoring the sign): the last element is -1 through indexing (a[-1]), but <0 through subranging (a[<0..]). Seen that way it seems more appropriate to let <1 refer to the last element.
Which way is better? How is it done in lpc? Or should < perhaps only select the negative range, so that the index still has to be negative, i.e. to select the last element one would have to write <-1?
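For reference, the three candidate meanings of an end-relative bound <i can be sketched as index arithmetic on an array of length n. This is just an illustration in Python (the helper names are made up, not part of any implementation), showing that all three conventions can address the last element, just with different spellings:

```python
# Hypothetical sketch of the three candidate meanings of <i for an
# array of length n (helper names are invented for illustration):
#   current:    <i  -> n - 1 - i        (<0  is the last element)
#   one-based:  <i  -> n - i            (<1  is the last element)
#   sign-kept:  <i  -> n + i, i < 0     (<-1 is the last element)

def from_end_current(n, i):      # <0 is the last element
    return n - 1 - i

def from_end_one_based(n, i):    # <1 is the last element
    return n - i

def from_end_negative(n, i):     # <-1 is the last element; i must be negative
    assert i < 0
    return n + i

a = [10, 20, 30, 40]
# All three conventions can pick out the last element, 40:
assert a[from_end_current(len(a), 0)] == 40
assert a[from_end_one_based(len(a), 1)] == 40
assert a[from_end_negative(len(a), -1)] == 40
```

The sign-kept variant is the only one where the number after < is written exactly like a negative single index.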
I think we should leave this open for now. Please play around with it, but be advised that it might very well change.