Scientific Notation Calculator
Converts any decimal number to scientific notation and back, showing the coefficient, exponent, and full expanded form.
Formula
N is the original number. a is the coefficient (also called the significand or mantissa), which must satisfy 1 ≤ |a| < 10. b is the integer exponent (power of 10). To convert: count how many places the decimal point moves to produce a coefficient in the valid range — moving left gives a positive exponent, moving right gives a negative exponent.
Source: National Institute of Standards and Technology (NIST) — NIST Guide to the SI, Section 7.9 (Expressing numbers in scientific notation).
How it works
Scientific notation is a standardised way of writing numbers that are too large or too small to be conveniently written in decimal form. Rather than writing 0.000000000167, a physicist writes 1.67 × 10⁻¹⁰ — a far more readable and manipulable expression. The system is grounded in the properties of powers of ten and is recognised internationally as the basis of the SI unit prefix system and all quantitative scientific communication.
The core formula is N = a × 10^b, where a is the coefficient (or significand) satisfying 1 ≤ |a| < 10, and b is an integer exponent. To find b, calculate the floor of log₁₀(|N|). For example, log₁₀(6,500,000) ≈ 6.813, so b = 6. The coefficient is then a = N ÷ 10^b = 6,500,000 ÷ 1,000,000 = 6.5. The result is 6.5 × 10⁶. For numbers less than 1, the exponent becomes negative: 0.00045 yields log₁₀(0.00045) ≈ −3.347, so b = −4 and a = 0.00045 ÷ 10⁻⁴ = 4.5, giving 4.5 × 10⁻⁴.
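The floor-of-log₁₀ procedure described above can be sketched in a few lines of JavaScript (the function name `toScientific` is illustrative, not the calculator's actual code):

```javascript
// Convert a finite, non-zero number to normalised scientific notation.
// Returns { coefficient, exponent } such that n = coefficient * 10^exponent
// and 1 <= |coefficient| < 10.
function toScientific(n) {
  if (n === 0) return { coefficient: 0, exponent: 0 }; // conventional placeholder
  const exponent = Math.floor(Math.log10(Math.abs(n)));
  const coefficient = n / Math.pow(10, exponent);
  return { coefficient, exponent };
}

console.log(toScientific(6500000)); // { coefficient: 6.5, exponent: 6 }
console.log(toScientific(0.00045)); // coefficient ≈ 4.5, exponent: -4
```

Note that the raw quotient may carry floating-point noise (e.g. a coefficient of 4.500000000000001 rather than 4.5); a display layer would normally round it.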
Scientific notation is used in virtually every quantitative discipline. In astronomy, distances are expressed in light-years or metres with exponents in the range of 10¹⁵ to 10²⁶. In chemistry, Avogadro's number is 6.022 × 10²³. In electronics, capacitances are often in the picofarad range (10⁻¹²). Engineers use it to express tolerances, material properties, and signal amplitudes without ambiguity. It also underpins the metric prefix system: kilo (10³), mega (10⁶), giga (10⁹), milli (10⁻³), micro (10⁻⁶), nano (10⁻⁹), and so on.
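Since those prefixes correspond directly to powers of ten, a small lookup table can render a value with its SI prefix. This sketch covers only the prefixes listed above, and the helper name `withPrefix` is hypothetical:

```javascript
// Map a multiple-of-three exponent to its SI prefix symbol.
const SI_PREFIXES = { "9": "G", "6": "M", "3": "k", "0": "", "-3": "m", "-6": "µ", "-9": "n" };

// Format `value × 10^exponent` of a given unit using the matching prefix.
function withPrefix(value, exponent, unit) {
  const prefix = SI_PREFIXES[exponent];
  if (prefix === undefined) {
    throw new RangeError("no prefix listed for exponent " + exponent);
  }
  return value + " " + prefix + unit;
}

console.log(withPrefix(4.7, 3, "Ω")); // prints 4.7 kΩ
console.log(withPrefix(22, -6, "F")); // prints 22 µF
```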
Worked example
Suppose you want to convert the number 0.0000725 into scientific notation.
Step 1 — Find the exponent: Calculate log₁₀(0.0000725) = log₁₀(7.25 × 10⁻⁵) ≈ −4.14. Take the floor: b = −5.
Step 2 — Find the coefficient: a = 0.0000725 ÷ 10⁻⁵ = 0.0000725 ÷ 0.00001 = 7.25. Verify: 1 ≤ 7.25 < 10 ✓
Step 3 — Write the result: 0.0000725 = 7.25 × 10⁻⁵
Now consider the large number 93,000,000 (approximate distance from Earth to the Sun in miles):
Step 1: log₁₀(93,000,000) ≈ 7.968. Floor = 7.
Step 2: a = 93,000,000 ÷ 10⁷ = 93,000,000 ÷ 10,000,000 = 9.3.
Step 3: 93,000,000 = 9.3 × 10⁷.
Both examples confirm: positive exponents for large numbers, negative exponents for small numbers.
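Both conversions can be replayed with the same floor-of-log₁₀ steps (a standalone JavaScript sketch, not the calculator's internals):

```javascript
// Re-run both worked examples: the small number, then the large one.
function convert(n) {
  const b = Math.floor(Math.log10(Math.abs(n))); // Step 1: exponent
  const a = n / Math.pow(10, b);                 // Step 2: coefficient
  return { a, b };
}

console.log(convert(0.0000725)); // a ≈ 7.25, b = -5
console.log(convert(93000000));  // a = 9.3,  b = 7
```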
Limitations & notes
This calculator handles standard real numbers within JavaScript's floating-point range (approximately ±5 × 10⁻³²⁴ to ±1.8 × 10³⁰⁸). Inputting zero is a special case — by convention, 0 has no standard scientific notation form since log₁₀(0) is undefined; the calculator returns 0 × 10⁰ as a practical placeholder. Very long decimal inputs may suffer from floating-point precision issues inherent to IEEE 754 double-precision arithmetic, particularly for numbers requiring more than 15–16 significant digits. This calculator does not perform arithmetic operations between two numbers in scientific notation (such as multiplication or division of two separately expressed values); it is a conversion tool only. For complex operations involving scientific notation arithmetic, a full scientific calculator or computer algebra system is recommended. Additionally, the calculator assumes standard (normalised) scientific notation; engineering notation, where the exponent is always a multiple of three, is a separate convention not computed here.
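A conversion routine that reflects these caveats might guard its inputs along these lines (a sketch under the conventions stated above; the names and the 15-digit cut-off are illustrative, not the calculator's actual code):

```javascript
// Guarded conversion: reject non-finite input, special-case zero, and round
// the coefficient to a bounded number of significant digits so floating-point
// noise (e.g. 4.500000000000001) does not leak into the displayed result.
function toScientificSafe(n, sigDigits = 15) {
  if (!Number.isFinite(n)) throw new RangeError("input must be a finite number");
  if (n === 0) return { coefficient: 0, exponent: 0 }; // placeholder convention: 0 × 10⁰
  const exponent = Math.floor(Math.log10(Math.abs(n)));
  const coefficient = Number((n / Math.pow(10, exponent)).toPrecision(sigDigits));
  return { coefficient, exponent };
}

console.log(toScientificSafe(0));       // { coefficient: 0, exponent: 0 }
console.log(toScientificSafe(0.00045)); // { coefficient: 4.5, exponent: -4 }
```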
Frequently asked questions
What is the difference between scientific notation and standard form?
In the UK curriculum and several others, 'standard form' is simply another name for scientific notation: a number expressed as a × 10^b where 1 ≤ |a| < 10. (In US elementary usage, 'standard form' can instead mean the ordinary decimal numeral, so context matters.) Both terms describe the same convention: a coefficient between 1 and 10 (inclusive of 1, exclusive of 10) multiplied by an integer power of ten.
How do you convert scientific notation back to a decimal number?
Multiply the coefficient by the appropriate power of ten. For 3.6 × 10⁴, multiply 3.6 by 10,000 to get 36,000. For 2.1 × 10⁻³, multiply 2.1 by 0.001 to get 0.0021. Moving the decimal point right for positive exponents and left for negative exponents is an equivalent shortcut.
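The reverse conversion is a single multiplication; a minimal sketch (the function name is illustrative):

```javascript
// Expand a coefficient/exponent pair back into an ordinary decimal number.
function fromScientific(coefficient, exponent) {
  return coefficient * Math.pow(10, exponent);
}

console.log(fromScientific(3.6, 4));  // 36000
console.log(fromScientific(2.1, -3)); // ≈ 0.0021
```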
What is engineering notation and how does it differ from scientific notation?
Engineering notation is a variant where the exponent is always a multiple of three (e.g. 10³, 10⁶, 10⁻³), which aligns with metric prefixes like kilo, mega, and milli. The coefficient in engineering notation therefore satisfies 1 ≤ |a| < 1000. For example, 47,000 Ω would be written as 47 × 10³ Ω or 47 kΩ in engineering notation, versus 4.7 × 10⁴ Ω in scientific notation.
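Engineering notation can be computed the same way, rounding the exponent down to a multiple of three (a sketch; `toEngineering` is an illustrative name, not a feature of this calculator):

```javascript
// Convert a number to engineering notation: the exponent is a multiple
// of three, so a non-zero coefficient satisfies 1 <= |a| < 1000.
function toEngineering(n) {
  if (n === 0) return { coefficient: 0, exponent: 0 };
  const sci = Math.floor(Math.log10(Math.abs(n)));
  const exponent = Math.floor(sci / 3) * 3; // round down to a multiple of 3
  const coefficient = n / Math.pow(10, exponent);
  return { coefficient, exponent };
}

console.log(toEngineering(47000)); // { coefficient: 47, exponent: 3 }
```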
Why is scientific notation important in chemistry and physics?
Many fundamental constants and measurements in chemistry and physics span an enormous range of magnitudes. Avogadro's number (6.022 × 10²³ mol⁻¹), the mass of a proton (1.673 × 10⁻²⁷ kg), and the speed of light (2.998 × 10⁸ m/s) are impossible to work with efficiently in plain decimal form. Scientific notation prevents errors, clarifies significant figures, and makes order-of-magnitude comparisons straightforward.
How do you multiply or divide two numbers in scientific notation?
To multiply: multiply the coefficients and add the exponents. For example, (3.0 × 10⁴) × (2.0 × 10³) = 6.0 × 10⁷. To divide: divide the coefficients and subtract the exponents. For example, (9.0 × 10⁶) ÷ (3.0 × 10²) = 3.0 × 10⁴. If the resulting coefficient falls outside the range 1–10, adjust it and update the exponent accordingly — for instance, 15.0 × 10⁵ becomes 1.5 × 10⁶.
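The multiply rule above, including the renormalisation step, can be sketched as follows (an illustrative helper, not part of this calculator; division is symmetric, dividing coefficients and subtracting exponents):

```javascript
// Multiply (a1 × 10^b1) by (a2 × 10^b2) and renormalise so 1 <= |a| < 10.
function multiplySci(a1, b1, a2, b2) {
  let coefficient = a1 * a2; // multiply the coefficients
  let exponent = b1 + b2;    // add the exponents
  // Renormalise if the coefficient left the [1, 10) range,
  // e.g. 15.0 × 10⁵ becomes 1.5 × 10⁶.
  while (Math.abs(coefficient) >= 10) { coefficient /= 10; exponent += 1; }
  while (coefficient !== 0 && Math.abs(coefficient) < 1) { coefficient *= 10; exponent -= 1; }
  return { coefficient, exponent };
}

console.log(multiplySci(3.0, 4, 2.0, 3)); // { coefficient: 6, exponent: 7 }
console.log(multiplySci(5.0, 2, 3.0, 3)); // 15 renormalised: { coefficient: 1.5, exponent: 6 }
```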
Last updated: 2025-01-15 · Formula verified against primary sources.