FUNDAMENTALS OF COMPUTER

COMPUTER PROGRAMMING FUNDAMENTALS

WHAT IS PROGRAMMING

Question
What does this represent: 1010111011001101?
A. byte
B. bit
C. switch
D. code
Explanation: 

Detailed explanation-1: Whole numbers (integers) are usually represented with 4 bytes, or 32 bits. Historically, symbols (e.g., letters, digits) were represented with one byte (8 bits), with each symbol mapped to a number between 0 and 255.
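As a quick sketch of the sizes mentioned above (Python, not part of the original quiz): a 32-bit integer occupies 4 bytes, and a single symbol maps to a number in the 0-255 range.

```python
import struct

# A whole number (integer) packed into 4 bytes / 32 bits (big-endian here):
packed = struct.pack(">i", 1234)
print(len(packed))  # 4

# A symbol mapped to a number between 0 and 255, and back:
print(ord("A"))  # 65
print(chr(65))   # A
```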

Detailed explanation-2: In most computer systems, a byte is a unit of data eight binary digits (bits) long. A byte is the unit most computers use to represent a character such as a letter, number, or typographic symbol; bytes can also be grouped into larger units for application purposes.
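Applying this to the bit string in the question (a hypothetical Python sketch): the 16 binary digits split into exactly two 8-bit bytes, and the whole string can also be read as one integer.

```python
bits = "1010111011001101"   # the bit string from the question
print(len(bits))            # 16 binary digits

value = int(bits, 2)        # interpret the string as a base-2 integer
print(value)                # 44749

raw = value.to_bytes(len(bits) // 8, "big")
print(len(raw))             # 2 bytes of 8 bits each
```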

Detailed explanation-3: The symbol for "byte" is "B". Sometimes a lowercase "b" is used, but this use is incorrect because "b" is actually the IEEE symbol for "bit"; the IEC symbol for bit is "bit". For example, "MB" means "megabyte" and "Mbit" means "megabit".

Detailed explanation-4: The byte was originally the smallest number of bits that could hold a single character (presumably in standard ASCII). The ASCII standard is still in use, so 8 bits per character is still relevant.
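A minimal illustration of ASCII characters fitting in one byte each (Python, added for this edit, not from the quiz):

```python
# Encode a string as ASCII: each character becomes exactly one 8-bit byte.
encoded = "Hi".encode("ascii")
print(len(encoded))   # 2 bytes, one per character
print(encoded[0])     # 72, the ASCII code for 'H'
```

Note that ASCII codes only use values 0-127, which fit comfortably in an 8-bit byte.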

There is 1 question to complete.