Which of the (plethora of) MIME functions is the best for you depends a bit on your requirements. The ones with "words" in the name support RFC 2047 (the son of RFC 1522) encoding for non-ASCII attributes, for example if you have a header such as
Content-Disposition: attachment; filename==?iso-8859-1?q?sn=F6gubbe.txt?=
where the value of the "filename" attribute contains a non-ASCII character. By using decode_words_tokenized{,_labled}, you get the character encoding ("iso-8859-1" in this case) separately, and the value still encoded in that encoding. If you instead use decode_words_tokenized{,_labled}_remapped, the values are remapped to Pike Unicode strings (losing the information about the original encoding).
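A minimal sketch of the difference, using the header value from above (the result shapes in the comments follow the description here and are illustrative; unencoded words are shown with a 0 in the charset slot):

  string h = "attachment; filename==?iso-8859-1?q?sn=F6gubbe.txt?=";

  // Charset kept separately; the text is still iso-8859-1 bytes:
  MIME.decode_words_tokenized(h);
  // => ({ ({ "attachment", 0 }), ';',
  //       ({ "filename", 0 }), '=',
  //       ({ "sn\xf6gubbe.txt", "iso-8859-1" }) })

  // Remapped; every string is now a Pike Unicode string, and the
  // knowledge that the original encoding was iso-8859-1 is lost:
  MIME.decode_words_tokenized_remapped(h);
  // => ({ "attachment", ';', "filename", '=', "sn\xf6gubbe.txt" })

(The ö looks the same in both results only because the iso-8859-1 code points coincide with the first 256 Unicode code points.)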
If you are only dealing with headers where non-ASCII text is not allowed, you can use the tokenize functions without the "decode_words" prefix.
As for the "labled" variants (sic; that is how the functions are actually spelled), these are only needed if you want to keep comments, or to distinguish between quoted and non-quoted values (even though they are semantically equivalent). This is mostly intended for GUIs with fancy display mechanisms. The variants without "labled" give you all the information you normally need: strings are tokens or quoted strings, ints are tspecials (such as '=' for the equals sign), and comments and whitespace are simply removed. So for example
text/plain; charset=us-ascii (Plain text)
and
text/plain; charset="us-ascii"
(an example from RFC 2045 of two completely equivalent header values) both yield
({ "text", '/', "plain", ';' "charset", '=', "us-ascii" })
The quote/encode_words_quoted functions work analogously but in the other direction. Note that for encode_words_quoted{,_labled}_remapped you need to specify which character encoding to remap non-ASCII strings to (either as a fixed string like "utf-8", or as a function that dynamically picks an encoding based on the string contents), and also whether to use base64 or quoted-printable.
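A sketch going the other direction. The argument order here (token array, then word encoding, then target charset) is my assumption from the description above, so check the actual signature:

  // Rebuild the header value from plain Unicode tokens.
  // "quoted-printable" selects the ?q? word encoding ("base64" gives ?b?),
  // and "iso-8859-1" is the fixed charset to remap non-ASCII strings to
  // (a function taking the string and returning a charset works too):
  MIME.encode_words_quoted_remapped(
      ({ "attachment", ';', "filename", '=', "sn\xf6gubbe.txt" }),
      "quoted-printable",
      "iso-8859-1");
  // => "attachment; filename==?iso-8859-1?q?sn=F6gubbe.txt?="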