DATA REPRESENTATION AND NUMBER SYSTEMS
ASCII AND UNICODE CHARACTER ENCODING
Question
(A) True
(B) False
(C) Either A or B
(D) None of the above
Detailed explanation-1: -Unicode can assign many more characters. The 1-byte system used in 8-bit ASCII can represent at most 256 characters. Unicode's 2-byte representation of each character means that more than 65,000 characters can be represented.
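The counts in the explanation above can be checked directly; this is a small sketch in Python (not part of the original quiz) using the standard ord() function.

```python
# An n-bit code has 2**n possible values.
assert 2 ** 8 == 256        # 1 byte: at most 256 characters (8-bit ASCII)
assert 2 ** 16 == 65536     # 2 bytes: more than 65,000 characters (Unicode)

# 'é' has Unicode code point 233 — outside 7-bit ASCII's 0–127 range,
# but comfortably inside a 2-byte code space.
assert ord("é") == 233
```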
Detailed explanation-2: -The original ASCII was a 7-bit character set (128 possible characters) with no accented letters. It was used in teletype machines. (The eighth bit was originally used to check parity, a way to look for errors.)
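The parity check mentioned above can be illustrated with a short Python sketch (the function name is invented for this example): the eighth bit is set so that the total number of 1-bits in the byte comes out even, letting the receiver detect any single flipped bit.

```python
def even_parity_bit(code_point: int) -> int:
    """Return the bit that makes the total count of 1-bits even."""
    return bin(code_point).count("1") % 2

# 'A' is 65 = 0b1000001 (two 1-bits), so its even-parity bit is 0.
assert even_parity_bit(ord("A")) == 0
# 'C' is 67 = 0b1000011 (three 1-bits), so its even-parity bit is 1.
assert even_parity_bit(ord("C")) == 1
```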
Detailed explanation-3: -Many computer systems instead use Unicode, which has millions of code points, but the first 128 of these are the same as the ASCII set.
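Because the first 128 Unicode code points coincide with ASCII, any pure-ASCII byte sequence decodes to the same text whether it is treated as ASCII or as UTF-8. A quick Python check of this claim:

```python
# All 128 ASCII byte values decode identically under ASCII and UTF-8,
# since Unicode's first 128 code points are the ASCII set.
ascii_bytes = bytes(range(128))
assert ascii_bytes.decode("ascii") == ascii_bytes.decode("utf-8")
```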
Detailed explanation-4: -ASCII uses 7 bits or 8 bits (Extended ASCII) to represent different characters. UNICODE mainly uses four character encoding schemes, namely UTF-7 (7-bit), UTF-8 (8-bit), UTF-16 (16-bit), and UTF-32 (32-bit). ASCII consumes less memory; UNICODE consumes more memory than ASCII.
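The memory difference can be seen by encoding sample characters with Python's built-in codecs (the byte-order-explicit "utf-16-be"/"utf-32-be" variants are used here so no byte-order mark is counted):

```python
# Bytes needed per character under the variable- and fixed-width encodings.
for ch in ("A", "é", "€"):
    sizes = [len(ch.encode(enc)) for enc in ("utf-8", "utf-16-be", "utf-32-be")]
    print(ch, sizes)

# An ASCII character like 'A' costs 1 byte in UTF-8 (same as ASCII)
# but a fixed 4 bytes in UTF-32 — the memory trade-off named above.
assert len("A".encode("utf-8")) == 1
assert len("A".encode("utf-32-be")) == 4
```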