ASCII Converter

What Is ASCII? A Complete Guide to the American Standard Code for Information Interchange

ASCII, short for American Standard Code for Information Interchange, is one of the most important character encoding standards in the history of computing. Developed in the early 1960s by a committee of the American Standards Association (now ANSI), ASCII was first published as ASA X3.4-1963 and went through several revisions, most notably the widely cited ANSI X3.4-1968 and the final ANSI X3.4-1986, which remains current today. The encoding was designed to solve a fundamental problem: how do computers, which only understand numbers, represent text characters?

Before ASCII, different computer manufacturers used incompatible character encodings. IBM used EBCDIC, teletypes used Baudot code, and various other proprietary systems were in use. This made data exchange between systems extremely difficult. ASCII provided a universal 7-bit encoding that assigned a unique number (0 through 127) to each of 128 characters, creating a common language that all computers could agree on.

The Structure of the ASCII Table

The 128 ASCII characters are divided into several logical groups. Understanding this structure helps you work with character codes more efficiently and makes debugging encoding issues much easier.

Control characters (0-31 and 127): These 33 characters were designed for controlling hardware devices like teletypes and printers. While most are obsolete for their original purpose, several remain essential in modern computing. NUL (0) is used as a string terminator in C and many programming languages. LF (10, line feed) and CR (13, carriage return) handle line breaks — Unix systems use LF alone, Windows uses CR+LF, and classic Mac OS used CR alone. TAB (9) provides horizontal spacing, and ESC (27) is used to initiate escape sequences in terminal emulators for things like colors and cursor movement.

Printable characters (32-126): These 95 characters include everything you need for basic English text. Space (32) is the first printable character. Digits 0-9 occupy codes 48-57, uppercase letters A-Z occupy 65-90, and lowercase letters a-z occupy 97-122. The remaining positions are filled with punctuation and symbols like ! @ # $ % ^ & * ( ) and others. The careful positioning is deliberate — uppercase and lowercase letters differ by exactly 32 (one bit flip), making case conversion trivially fast in code.
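The single-bit case difference can be demonstrated in a few lines of JavaScript (the helper name `toggleCase` is illustrative):

```javascript
// Toggle the case of an ASCII letter by flipping bit 5 (value 32).
// Non-letters are returned unchanged.
function toggleCase(ch) {
  const code = ch.charCodeAt(0);
  const isLetter =
    (code >= 65 && code <= 90) || (code >= 97 && code <= 122);
  return isLetter ? String.fromCharCode(code ^ 32) : ch;
}

console.log(toggleCase("A")); // "a"  (65 ^ 32 = 97)
console.log(toggleCase("z")); // "Z"  (122 ^ 32 = 90)
console.log(toggleCase("5")); // "5"  (digits are untouched)
```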

Printable ASCII Character Reference Table

| Dec | Hex | Binary   | Oct | Char    | Description             |
|-----|-----|----------|-----|---------|-------------------------|
| 32  | 20  | 00100000 | 040 | (space) | Space                   |
| 48  | 30  | 00110000 | 060 | 0       | Digit zero              |
| 57  | 39  | 00111001 | 071 | 9       | Digit nine              |
| 65  | 41  | 01000001 | 101 | A       | Uppercase A             |
| 90  | 5A  | 01011010 | 132 | Z       | Uppercase Z             |
| 97  | 61  | 01100001 | 141 | a       | Lowercase a             |
| 122 | 7A  | 01111010 | 172 | z       | Lowercase z             |
| 126 | 7E  | 01111110 | 176 | ~       | Tilde (last printable)  |

Understanding Number Base Conversions: Decimal, Hex, Binary, and Octal

Every ASCII code point can be expressed in multiple number bases, each useful in different programming contexts. Decimal (base 10) is the most human-readable and is commonly used in documentation. Hexadecimal (base 16) is preferred in programming because each hex digit maps exactly to four binary digits, making binary-to-hex conversion trivial. For example, the letter 'A' is 0x41 in hex, which splits cleanly into binary 0100 0001. Binary (base 2) represents the actual bits stored in memory — useful when debugging bitwise operations or understanding protocol formats. Octal (base 8) was historically popular on systems with 12-bit, 24-bit, or 36-bit word lengths and still appears in Unix file permissions (chmod 755 uses octal).
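In JavaScript, `Number.prototype.toString(radix)` produces each of these bases directly; a quick sketch for the letter 'A':

```javascript
// One ASCII code point rendered in all four bases discussed above.
const code = "A".charCodeAt(0); // 65

console.log(code.toString(10));                 // "65"       decimal
console.log(code.toString(16).toUpperCase());   // "41"       hexadecimal
console.log(code.toString(2).padStart(8, "0")); // "01000001" binary
console.log(code.toString(8));                  // "101"      octal
```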

This converter displays all four representations simultaneously. When you type a string, each character is converted to its ASCII code point and displayed in decimal, hexadecimal (with uppercase letters), binary (zero-padded to 8 bits), and octal. Entering space-separated decimal codes in the reverse field converts them back to the original text.

The History and Evolution of Character Encoding

The journey from Morse code to modern Unicode is a fascinating evolution. Telegraph systems in the 1830s used Morse code, which assigned variable-length sequences of dots and dashes to letters. Baudot code (1870) introduced fixed-length 5-bit encoding, supporting 32 characters — enough for the alphabet with a shift mechanism for figures. Murray code (1901) rearranged Baudot for mechanical efficiency, and this evolved into ITA2, used well into the 1960s.

ASCII's 7-bit design in 1963 was a deliberate choice, providing 128 code points — enough for English text, digits, and essential control characters while fitting efficiently into the 8-bit bytes that were becoming standard. The eighth bit was often used for parity checking in early serial communications. As computing went global, the limitations of 128 characters became painfully apparent. ISO 8859 extended ASCII to 8 bits (256 characters) with regional variants like ISO 8859-1 (Latin-1) for Western European languages, but this approach could not scale to Asian writing systems with thousands of characters.

Unicode, first published in 1991, solved this by defining a universal character set capable of encoding every writing system. UTF-8, designed by Ken Thompson and Rob Pike in 1992, provided a backwards-compatible variable-length encoding where ASCII characters use a single byte — making it ideal for the internet. Today, UTF-8 is used by over 98% of web pages, but ASCII remains the foundation: every valid ASCII document is also a valid UTF-8 document.

Practical Uses of ASCII Conversion in Programming

Understanding ASCII values is essential for many programming tasks. Input validation often checks whether a character code falls within expected ranges — for example, verifying that user input contains only digits (codes 48-57) or letters (65-90 and 97-122). Case conversion exploits the 32-offset between uppercase and lowercase: toggling bit 5 (XOR with 32) flips the case of any letter. String sorting in most languages defaults to ASCII/Unicode code point order, which means uppercase letters sort before lowercase ones — a common source of bugs when developers expect case-insensitive alphabetical ordering.
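These range checks and the sorting pitfall are easy to see in a short JavaScript sketch (the predicate names are illustrative):

```javascript
// Simple input validation via code-point range checks.
const isDigit = (ch) => ch.charCodeAt(0) >= 48 && ch.charCodeAt(0) <= 57;
const isLetter = (ch) => {
  const c = ch.charCodeAt(0);
  return (c >= 65 && c <= 90) || (c >= 97 && c <= 122);
};

console.log(isDigit("7"));  // true
console.log(isLetter("!")); // false

// Default sort compares code points, so uppercase sorts first:
console.log(["banana", "Apple"].sort()); // ["Apple", "banana"] — 'A' (65) < 'b' (98)
```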

Cryptographic algorithms and hashing functions operate on numeric byte values, so understanding ASCII helps when debugging hash inputs. Network protocols like HTTP, SMTP, and FTP are text-based protocols that rely on ASCII for headers and commands. Data serialization formats like JSON, CSV, and XML are ASCII-compatible, meaning all structural characters (braces, brackets, commas, colons, angle brackets) fall within the ASCII range.

When debugging encoding issues — mojibake (garbled text), unexpected characters in database exports, or broken file transfers — the first step is often checking the raw byte values against an ASCII table. This converter makes that process instant and visual.

Common Control Characters You Should Know

| Dec | Hex | Abbr | Name            | Modern Use                                 |
|-----|-----|------|-----------------|--------------------------------------------|
| 0   | 00  | NUL  | Null            | String terminator in C/C++                 |
| 7   | 07  | BEL  | Bell            | Terminal alert sound                       |
| 8   | 08  | BS   | Backspace       | Delete previous character                  |
| 9   | 09  | HT   | Horizontal Tab  | Tab spacing (\t)                           |
| 10  | 0A  | LF   | Line Feed       | Newline on Unix/Linux/macOS (\n)           |
| 13  | 0D  | CR   | Carriage Return | Part of Windows newline (\r\n)             |
| 27  | 1B  | ESC  | Escape          | ANSI escape sequences for terminal colors  |
| 127 | 7F  | DEL  | Delete          | Delete character                           |

ASCII in Programming Languages

Every major programming language provides built-in functions for ASCII conversion. In JavaScript, String.fromCharCode(65) returns "A" and "A".charCodeAt(0) returns 65 — this converter uses exactly these functions. In Python, chr(65) and ord("A") serve the same purpose. C treats characters as integers natively, so char c = 65; stores the character 'A'. Java uses (char) 65 for casting and (int) 'A' for the reverse. Understanding these conversions is fundamental to working with strings at a low level.

How This ASCII Converter Works

This converter operates entirely in your browser using JavaScript. When you type text in the input field, the tool iterates through each character, extracts its Unicode code point (which equals the ASCII value for characters 0-127), and displays it in four number bases. The conversion happens in real time as you type — no server calls, no data storage, and no network latency. You can also enter space-separated decimal codes to convert numbers back to text, making it easy to decode ASCII sequences from logs, network captures, or data files.
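A minimal sketch of the two conversion directions, assuming the functions described above (the names `textToCodes` and `codesToText` are illustrative, not the tool's actual source):

```javascript
// Forward direction: text to per-character rows in all four bases.
function textToCodes(text) {
  return [...text].map((ch) => {
    const code = ch.charCodeAt(0);
    return {
      char: ch,
      dec: code,
      hex: code.toString(16).toUpperCase(),
      bin: code.toString(2).padStart(8, "0"),
      oct: code.toString(8),
    };
  });
}

// Reverse direction: space-separated decimal codes back to text.
function codesToText(input) {
  return input
    .trim()
    .split(/\s+/)
    .map((n) => String.fromCharCode(parseInt(n, 10)))
    .join("");
}

console.log(textToCodes("Hi"));
// [ { char: "H", dec: 72, hex: "48", bin: "01001000", oct: "110" },
//   { char: "i", dec: 105, hex: "69", bin: "01101001", oct: "151" } ]
console.log(codesToText("72 105")); // "Hi"
```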

Frequently Asked Questions

What is ASCII and why does it matter?

ASCII (American Standard Code for Information Interchange) is a character encoding standard that assigns numeric values to 128 characters, including letters, digits, punctuation, and control codes. It forms the foundation of virtually every modern encoding system, including UTF-8, which is backwards-compatible with ASCII for its first 128 code points. Understanding ASCII is essential for programming, debugging encoding issues, and working with network protocols.

What is the difference between ASCII and Unicode?

ASCII defines only 128 characters (7 bits), covering English letters, digits, and basic symbols. Unicode extends this to over 149,000 characters across 161 scripts, supporting every writing system in the world. The first 128 Unicode code points are identical to ASCII, which is why UTF-8 encoded ASCII text is valid UTF-8. In practice, ASCII is a subset of Unicode, and most modern systems use UTF-8 encoding, which handles both ASCII and Unicode seamlessly.
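One way to see the subset relationship concretely is to encode characters as UTF-8 bytes; `TextEncoder` is available in browsers and modern Node.js:

```javascript
// ASCII characters occupy exactly one byte in UTF-8;
// characters beyond code point 127 take two or more bytes.
const enc = new TextEncoder();

console.log(enc.encode("A")); // Uint8Array [ 65 ]            — same as its ASCII code
console.log(enc.encode("é")); // Uint8Array [ 195, 169 ]      — two bytes, outside ASCII
console.log(enc.encode("€")); // Uint8Array [ 226, 130, 172 ] — three bytes
```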

How do I convert a character to its ASCII decimal value?

Each printable ASCII character maps to a decimal number from 32 (space) to 126 (tilde). For example, 'A' is 65, 'a' is 97, and '0' is 48. You can type any character into this converter to instantly see its decimal, hexadecimal, binary, and octal representation. In code, most languages provide built-in functions like JavaScript's charCodeAt(), Python's ord(), or C's implicit char-to-int casting.

Why are there non-printable ASCII characters?

ASCII codes 0 through 31 and code 127 are control characters originally designed for hardware communication — things like carriage return (13), line feed (10), tab (9), escape (27), and null (0). While most are obsolete for their original teletype purpose, several remain essential in modern computing for text formatting and protocol signaling. Line feed and carriage return handle line breaks, tab provides horizontal spacing, null terminates strings in C, and escape initiates ANSI terminal color sequences.

How do I convert ASCII to binary or hexadecimal?

To convert an ASCII character to binary, find its decimal code point and convert that number to base 2, padded to 8 bits. For example, 'A' is decimal 65, which is 01000001 in binary. For hexadecimal, convert the decimal to base 16: 65 becomes 41 in hex. This converter displays all four representations (decimal, hex, binary, octal) simultaneously for any text you type. In programming, JavaScript uses toString(2) for binary and toString(16) for hex. Use our Base64 encoder for another common encoding conversion.
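The reverse direction, parsing a binary or hex string back into a character, is a one-liner sketch in JavaScript:

```javascript
// Parse base-2 and base-16 strings back into characters.
console.log(String.fromCharCode(parseInt("01000001", 2))); // "A"
console.log(String.fromCharCode(parseInt("41", 16)));      // "A"
console.log(String.fromCharCode(parseInt("7E", 16)));      // "~"
```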

What is the ASCII code for common special characters?

Common special character ASCII codes include: space (32), exclamation mark (33), at sign @ (64), hash # (35), dollar sign $ (36), percent % (37), ampersand & (38), asterisk * (42), parentheses (40-41), hyphen - (45), period . (46), forward slash / (47), colon : (58), semicolon ; (59), equals = (61), question mark ? (63), backslash \ (92), curly braces { } (123, 125), and tilde ~ (126). These are all in the printable ASCII range of 32-126.
