# Integer (computer science)

In computer science, the term integer is used to refer to any data type which can represent some subset of the mathematical integers. These are also known as integral data types.

Table of contents

1. Value and Representation
2. Common integral data types
3. Pointers
4. Bytes and Octets
5. Words

## Value and Representation

The value of a datum with an integral type is the mathematical integer that it corresponds to. The representation of this datum is the way the value is stored in the computer's memory. Integral types may be unsigned (capable of representing only nonnegative integers) or signed (capable of representing negative integers as well).

The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the bits varies; see Endianness. The width or precision of an integral type is the number of bits in its representation. An unsigned integral type with n bits can represent numbers from 0 to 2^n − 1.

There are three different ways to represent negative numbers in a binary numeral system. The most common is two's complement, which allows a signed integral type with n bits to represent numbers from −2^(n−1) to 2^(n−1) − 1. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values, and because addition and subtraction do not need to distinguish between signed and unsigned types. The other possibilities are sign-and-magnitude and ones' complement.

Another, rather different representation for integers is binary-coded decimal, which was once commonly used (notably in financial applications) but is now rare.

## Common integral data types

| bits | name | uses |
|------|------|------|
| 8 | byte, octet | ASCII characters, C char (minimum), Java byte |
| 16 | word | UCS-2 characters, C short int (minimum), C int (minimum), Java char, Java short |
| 32 | word, doubleword, longword | UCS-4 characters, C int (usual), C long int (minimum), Java int |
| 64 | longword, quadword | C long int (on 64-bit machines), C99 long long int (minimum), Java long |

Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types but only a small, fixed set of widths.

The table above lists integral type widths that are supported in hardware by common processors. High level programming languages provide more possibilities. It is common to have a "double width" integral type that has twice as many bits as the biggest hardware-supported type. Many languages also have bit-field types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and range types (can represent only the integers in a specified range).

Some languages, such as Lisp, support "infinite precision" integers, also known as arbitrary precision integers or bignums. These are limited only by the size of the computer's memory, so they can represent very large (but not truly infinite) integers.

A Boolean type is a special range type that can represent only two values: 0 and 1, identified with false and true respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience.

A four-bit quantity is known as a nybble; this is a joke on the word "byte". One nybble corresponds to one digit in hexadecimal and binary-coded decimal.

## Pointers

A pointer is often, but not always, represented by an integer of specified width. This is often, but not always, the widest integer that the hardware supports directly. The value of this integer is the memory address of whatever the pointer points to.

## Bytes and Octets

The term byte initially meant "the least addressable unit of memory". In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ("bit-addressed machine"), or that could only address 16- or 32-bit quantities ("word-addressed machine"); the term "byte" was not used at all in connection with these machines.

The term octet always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate.

In modern usage "byte" invariably means eight bits, since all other sizes have fallen into disuse; "octet" has thus come to be synonymous with "byte".

Bytes are used as the unit of computer memory of all kinds. One speaks of a 50 byte text string, a 100 kB (kilobyte) file, a 128 MB (megabyte) RAM module, a 30 GB (gigabyte) hard disk. The prefixes used for byte measurements are similar to the SI prefixes used for other measurements, but they do not have the same meanings. (See SI prefix for further discussion.)

| prefix | usual meaning | meaning when applied to bytes |
|--------|---------------|-------------------------------|
| k | 10^3 = 1000 | 2^10 = 1024 |
| M | 10^6 = 1000^2 | 2^20 = 1024^2 |
| G | 10^9 = 1000^3 | 2^30 = 1024^3 |
| T | 10^12 = 1000^4 | 2^40 = 1024^4 |
| P | 10^15 = 1000^5 | 2^50 = 1024^5 |

Unscrupulous hard disk manufacturers describe their products using the power-of-1000 meanings, a practice that is the subject of a current (2003) false-advertising lawsuit.

## Words

The term word denotes the natural unit of data handled by a particular CPU, and is thus CPU- and OS-specific. One could say that the IBM 370 had 32-bit words, and the 8086 had 16-bit words. 8-, 12-, 16-, 32-, 36-, 60-, and 64-bit words have all been used. The meanings of terms derived from "word", such as "long word", "double word", and "half word", also vary with the CPU and OS.

More recently, the popularity of 16- and 32-bit operating systems based on the x86 architecture has caused a substantial population of speakers to take "word" to always refer to a 16-bit quantity.

It is wise to avoid these terms entirely, due to the potential for confusion.
