Hex to Decimal Converter — Base 16 to Base 10

Quick Answer

Hexadecimal (base 16) uses digits 0-9 and A-F to represent values 0-15; for example, 0xFF equals 255 in decimal. Hex is the standard notation for memory addresses, color codes (#FF5733), and raw binary data, because each hex digit corresponds to exactly four bits.

Also searched as: hex converter, base 16 to base 10, hexadecimal decoder, 0xff to decimal

Decimal (base 10)

Hexadecimal (base 16)

Binary (base 2)

Octal (base 8)

Signed 8/16/32-bit interpretation (2's complement)

How Hex to Decimal Conversion Works

Hexadecimal is a positional number system with base 16, using the digits 0 through 9 for values 0 through 9 and the letters A through F for values 10 through 15. To convert a hex number to decimal, multiply each digit by 16 raised to the power of its position (counting from 0 on the right), then sum the results. For the hex value 0x2A, the calculation is (2 times 16) plus (10 times 1) equals 42 in decimal. Hexadecimal is the de facto notation for raw binary data in programming because each hex digit maps cleanly to exactly four bits, making conversion between the two trivial: the hex digit A is 1010 in binary, F is 1111, and 0xFF is 11111111. This direct mapping is why hex dominates memory addresses, color codes, cryptographic hashes, and low-level data dumps. The convention became dominant with the IBM System/360 (announced in 1964), which standardized the 8-bit byte, displayed as a pair of hex digits. For related converters see our binary calculator, decimal to binary, and text to binary tools.
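The digit-by-digit method described above can be sketched in Python. The helper name hex_to_decimal is hypothetical, written for illustration; Python's built-in int(s, 16) does the same job.

```python
def hex_to_decimal(s: str) -> int:
    """Convert a hex string (with or without 0x prefix) to decimal."""
    s = s.lower().removeprefix("0x")
    digits = "0123456789abcdef"
    value = 0
    for ch in s:
        # Shift the accumulated value one hex place left, then add the digit.
        value = value * 16 + digits.index(ch)
    return value

print(hex_to_decimal("0x2A"))  # 42
print(hex_to_decimal("FF"))    # 255
print(int("0x2A", 16))         # 42 -- the built-in equivalent
```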

The Base Conversion Formula

For a number N written in base B with digits d_k d_{k-1} ... d_1 d_0, the decimal value is N = d_k * B^k + d_{k-1} * B^(k-1) + ... + d_1 * B + d_0. In hex (B=16), every position is 16 times larger than the one to its right, so 0x1234 equals (1 * 16^3) + (2 * 16^2) + (3 * 16) + 4 = 4096 + 512 + 48 + 4 = 4660 in decimal. The reverse conversion from decimal to hex uses repeated division by 16, recording the remainders from last to first: 4660 / 16 = 291 r 4, 291 / 16 = 18 r 3, 18 / 16 = 1 r 2, 1 / 16 = 0 r 1, giving 1234 when read bottom to top. For negative numbers the tool uses 2's complement at common widths (8, 16, 32 bit), the signed-integer representation used by virtually all modern CPU architectures. A worked example: 0xFFFF interpreted as unsigned 16-bit is 65,535, but as signed 16-bit it is -1 because the leading bit is set.
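Both directions of the formula can be sketched in a few lines of Python. The helper names decimal_to_hex and as_signed are hypothetical, chosen for this example.

```python
def decimal_to_hex(n: int) -> str:
    """Repeated division by 16; remainders read bottom-to-top are the hex digits."""
    if n == 0:
        return "0"
    digits = "0123456789ABCDEF"
    out = []
    while n > 0:
        n, r = divmod(n, 16)
        out.append(digits[r])
    return "".join(reversed(out))

def as_signed(value: int, bits: int) -> int:
    """Reinterpret an unsigned value as a 2's-complement signed integer."""
    if value >= 1 << (bits - 1):   # leading bit set -> negative
        value -= 1 << bits
    return value

print(decimal_to_hex(4660))   # 1234
print(as_signed(0xFFFF, 16))  # -1
print(as_signed(0x7FFF, 16))  # 32767
```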

Key Terms You Should Know

Base: the number of distinct digits in a positional number system; base 10 has digits 0-9, base 16 has 0-9 and A-F.

Positional notation: a number system where each digit's value depends on its position, multiplied by powers of the base.

Nibble: a 4-bit unit, exactly one hex digit.

Byte: 8 bits, exactly two hex digits, range 00 to FF.

Word: a CPU's native integer width, typically 16, 32, or 64 bits (4, 8, or 16 hex digits).

2's complement: the standard binary encoding for signed integers where the high bit indicates sign and negative values are stored as the bitwise NOT of the absolute value plus one.

Little-endian / big-endian: byte order within a multi-byte word; x86 is little-endian (least-significant byte first), while network protocols use big-endian.

Prefix: characters marking a number's base in source code: 0x or 0X for hex in C/Python/JavaScript, 0b for binary, 0o for octal, # for CSS colors, $ in some assemblers.
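Several of these terms map directly onto Python syntax. A short sketch showing the base prefixes and nibble extraction:

```python
# The 0x, 0b, and 0o prefixes from the glossary; all four literals are 255.
print(0xFF, 0b11111111, 0o377, 255)  # 255 255 255 255

# A nibble is one hex digit (4 bits); a byte holds two of them.
byte = 0xAB
high_nibble = byte >> 4    # 0xA == 10
low_nibble = byte & 0x0F   # 0xB == 11
print(high_nibble, low_nibble)  # 10 11
```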

Hex Value Reference Data

The table below shows common hex values and their decimal, binary, and practical meanings. These constants appear everywhere in low-level code, firmware, and network protocols. Note how 0xFF is the largest 8-bit value, 0xFFFF the largest 16-bit value, and 0xFFFFFFFF the largest 32-bit value. Wikipedia's list of file signatures corroborates the magic-number examples.

Hex        | Decimal       | Binary           | Meaning
0x0A       | 10            | 00001010         | Line feed (LF)
0x20       | 32            | 00100000         | ASCII space
0x41       | 65            | 01000001         | ASCII 'A'
0xFF       | 255           | 11111111         | Max 8-bit unsigned, -1 signed
0x7FFF     | 32,767        | 0111111111111111 | Max 16-bit signed
0xCAFEBABE | 3,405,691,582 | -                | Java class file magic
0xDEADBEEF | 3,735,928,559 | -                | Debug marker / uninitialized memory
0xFFFFFFFF | 4,294,967,295 | 32 ones          | Max 32-bit unsigned
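Every row of the table can be spot-checked with Python's int parser in base 16:

```python
# Hex string -> expected decimal value, taken from the reference table above.
table = {
    "0A": 10, "20": 32, "41": 65, "FF": 255, "7FFF": 32767,
    "CAFEBABE": 3405691582,
    "DEADBEEF": 3735928559,
    "FFFFFFFF": 4294967295,
}
for hx, dec in table.items():
    assert int(hx, 16) == dec
print("all rows check out")
```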

Practical Examples

Example 1 — CSS color code: The hex color #1a56db (the WorldlyCalc brand blue) breaks into three 8-bit channels: 0x1a = 26 red, 0x56 = 86 green, 0xdb = 219 blue. Entering 1a56db above shows decimal 1,726,171, which is how browsers store the color internally as a 24-bit integer.

Example 2 — IPv4 address: The IP address 192.168.1.1 written in hex is 0xC0A80101 because 192 = 0xC0, 168 = 0xA8, 1 = 0x01, and 1 = 0x01. Converting the full 32-bit hex to decimal gives 3,232,235,777, which is the integer representation routers use in routing tables.

Example 3 — File magic numbers: A PNG image file always starts with the 8-byte hex sequence 89 50 4E 47 0D 0A 1A 0A, where the printable portion PNG (0x50 0x4E 0x47) identifies the format per RFC 2083. Tools like file(1) on Linux inspect these magic numbers to identify file types regardless of filename extension.
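The color and IP examples above can be reproduced in Python. The helper names hex_color_to_rgb and ipv4_to_int are hypothetical, written for this sketch.

```python
def hex_color_to_rgb(color: str) -> tuple[int, int, int]:
    """Split a 6-digit hex color into its (red, green, blue) channels."""
    color = color.lstrip("#")
    return tuple(int(color[i:i + 2], 16) for i in range(0, 6, 2))

def ipv4_to_int(addr: str) -> int:
    """Pack four dotted-decimal octets into one 32-bit integer."""
    value = 0
    for octet in addr.split("."):
        value = (value << 8) | int(octet)
    return value

print(hex_color_to_rgb("#1a56db"))      # (26, 86, 219)
print(int("1a56db", 16))                # 1726171 -- the 24-bit integer
print(hex(ipv4_to_int("192.168.1.1")))  # 0xc0a80101
print(ipv4_to_int("192.168.1.1"))       # 3232235777
```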

Tips and Best Practices

Use the 0x prefix: in source code, always prefix hex values with 0x to avoid ambiguity; in text and documentation, either 0x or a subscript 16 is clear.

Always pad to byte boundaries: write 0x0F instead of 0xF so the reader immediately sees one full byte.

Be consistent about letter case: the digits A-F mean the same in either case; accept both on input, but pick one case for output and use it throughout.

Beware of sign extension: 0xFF as a signed 8-bit value is -1, but the same bits zero-extended to 16 bits (0x00FF) are 255; always specify bit width when context matters.

Use bitwise operators for mask math: hex masks like 0xFF00 are the readable way to extract byte fields from a multi-byte integer.

Do not mix bases in arithmetic mentally: convert everything to decimal or everything to hex before calculating to avoid errors.

Use a calculator with arbitrary precision for large values: JavaScript's Number type loses integer precision beyond 2^53, so this tool uses BigInt for 64-bit values to guarantee exact results.
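The mask-math tip can be sketched in Python, whose integers are arbitrary precision, so even 64-bit values stay exact:

```python
# Extract byte fields from a 16-bit word with hex masks and shifts.
word = 0xABCD
high_byte = (word & 0xFF00) >> 8  # 0xAB == 171
low_byte = word & 0x00FF          # 0xCD == 205
print(high_byte, low_byte)        # 171 205

# Python ints never lose precision, so the max 64-bit value is exact.
print(int("FFFFFFFFFFFFFFFF", 16))  # 18446744073709551615
```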

Frequently Asked Questions

How do I convert hex to decimal by hand?

To convert a hex number to decimal by hand, multiply each hex digit by 16 raised to its position power (starting at 0 on the right) and sum the results. For example, 0xFF = (15 * 16^1) + (15 * 16^0) = 240 + 15 = 255. A longer example: 0x1A3 = (1 * 256) + (10 * 16) + (3 * 1) = 256 + 160 + 3 = 419. The hex digits A through F represent 10 through 15 in decimal, so E is 14, F is 15, and 0x10 equals decimal 16. This method is the same as evaluating any positional number system; only the base changes.
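The 0x1A3 example written out term by term, so the by-hand arithmetic can be checked against Python's parser:

```python
# 0x1A3: digit 1 at position 2, A (10) at position 1, 3 at position 0.
value = (1 * 16**2) + (10 * 16**1) + (3 * 16**0)
print(value)  # 419
assert value == int("1A3", 16)
```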

What is 0xFF in decimal?

0xFF in decimal is 255, which is also the maximum value that fits in an 8-bit unsigned byte. In binary it is 11111111, and in octal it is 0377. The value 255 appears frequently in computing because a single byte can represent 256 distinct values from 0 to 255, and 0xFF is the pattern with every bit set. In web color codes, #FFFFFF is white because all three 8-bit color channels (red, green, blue) are at their maximum of 255. If you instead interpret 0xFF as a signed 8-bit integer in 2's complement format, the value is -1 because the high bit signals negative.

Why do programmers use hexadecimal?

Hexadecimal (base 16) is widely used in computing because it maps cleanly to binary: each hex digit represents exactly 4 bits, so an 8-bit byte fits in 2 hex characters, a 16-bit word in 4, and a 32-bit integer in 8. This makes it much easier to read and write raw binary data than using long binary strings. Hex is the standard notation for memory addresses, color codes, file magic numbers, and cryptographic hashes like SHA-256, whose 256-bit output is conventionally printed as 64 hex characters. IEEE Std 754 likewise defines a hexadecimal character form for representing binary floating-point values exactly.

What is 2's complement?

2's complement is the standard method used by modern computers to represent signed integers in binary. For an n-bit number, positive values are stored in normal binary, while negative values are stored as the binary complement of the absolute value plus 1. For example, in 8-bit 2's complement, -1 is 11111111 (0xFF) and -128 is 10000000 (0x80). The advantage is that addition and subtraction work identically for signed and unsigned values, eliminating the need for separate arithmetic circuits. The range of an n-bit 2's complement integer is -2^(n-1) through 2^(n-1) - 1.
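An 8-bit 2's complement round trip can be sketched in Python; the helper names encode and decode are hypothetical:

```python
def encode(n: int, bits: int = 8) -> int:
    """Wrap a signed integer into the unsigned range of the given width."""
    return n & ((1 << bits) - 1)

def decode(u: int, bits: int = 8) -> int:
    """Reinterpret an unsigned value as 2's-complement signed."""
    return u - (1 << bits) if u >= 1 << (bits - 1) else u

print(hex(encode(-1)))    # 0xff
print(hex(encode(-128)))  # 0x80
print(decode(0xFF))       # -1
print(decode(0x80))       # -128
```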

How do I convert hex color codes?

A hex color code like #FF5733 represents three separate 8-bit values for red, green, and blue. Split the 6 hex digits into three pairs: FF, 57, and 33. Convert each pair to decimal: 0xFF = 255, 0x57 = 87, 0x33 = 51. So #FF5733 is rgb(255, 87, 51), a vivid orange. The CSS Color Module Level 4 specification defines this format and permits a three-digit shorthand when the two digits in each pair are identical: #FFF is pure white, expanding to #FFFFFF, or rgb(255, 255, 255). Alpha transparency can be added as a fourth pair (#RRGGBBAA).
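The pair-splitting and shorthand expansion can be sketched in Python; parse_hex_color is a hypothetical helper name:

```python
def parse_hex_color(color: str) -> tuple[int, int, int]:
    """Parse #RRGGBB or the #RGB shorthand into (red, green, blue)."""
    color = color.lstrip("#")
    if len(color) == 3:
        # Shorthand: each digit is duplicated, e.g. #FFF -> FFFFFF.
        color = "".join(ch * 2 for ch in color)
    return tuple(int(color[i:i + 2], 16) for i in range(0, 6, 2))

print(parse_hex_color("#FF5733"))  # (255, 87, 51)
print(parse_hex_color("#FFF"))     # (255, 255, 255)
```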

Is hex and hexadecimal the same thing?

Yes. Hex is simply the shortened name for hexadecimal, the base-16 positional number system. Both words refer to the same thing, and they are used interchangeably in programming documentation, textbooks, and standards. The prefix hexa comes from the Greek word for six, and decimal comes from the Latin for ten, so hexadecimal literally means six-plus-ten, i.e. sixteen. Hex values are written with prefixes like 0x in C, Python, and JavaScript (0xFF), # in CSS colors (#FF0000), or a trailing H in assembly language (FFH); 0xFF and FFH, for example, both denote the same decimal value of 255.

Related Calculators