Core Structure

BigFloat Struct Components
  • DataBits (Mantissa): BigInteger. Binary representation of the number in two's complement.
  • Scale: int. Position of the radix point from the least significant digit.
  • _size: int (cached). Cached bit count for performance optimization.
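
A minimal sketch of that layout in C# (field names follow this documentation; the actual struct contains many more members and helpers):

using System.Numerics;

// Layout sketch only; names mirror the table above, not necessarily the source.
public readonly struct BigFloat
{
    private readonly BigInteger DataBits; // mantissa in two's complement, includes the 32 guard bits
    private readonly int Scale;           // base-2 radix-point offset from the least significant bit
    private readonly int _size;           // cached bit length of |DataBits|
}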

Visual Representation

(Figure: Parts of a BigFloat)

Component Details

📊 DataBits (Mantissa)

  • Stores the actual numeric value as a BigInteger
  • Uses two's complement for sign representation
  • Can grow up to 2 billion bits
  • Least significant 32 bits serve as guard bits

๐Ÿ“ Scale

  • Indicates radix point position from the right
  • Positive scale: shifts radix right (larger numbers)
  • Negative scale: shifts radix left (fractions)
  • Zero scale: essentially an integer

⚡ Size Cache

  • Cached value of ABS(DataBits).GetBitLength()
  • Avoids repeated expensive bit counting
  • Critical for performance in size-based algorithms
  • Updated only when DataBits changes
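
For reference, the cached quantity is the result of a call like the following (standalone illustration, not library code):

using System;
using System.Numerics;

BigInteger dataBits = BigInteger.Parse("-123456789012345678901234567890");

// The relatively expensive call whose result the _size field caches
int size = (int)BigInteger.Abs(dataBits).GetBitLength();
Console.WriteLine(size); // 97 bits for this value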

Architecture Snapshot

Data Layout

  • Mantissa + 32 guard bits stored in BigInteger.
  • Scale offsets the radix point (base-2) without requiring normalization.
  • _size cache mirrors BigInteger.GetBitLength() for fast comparisons.

Determinism

  • Immutable struct design keeps operations side-effect free and thread-safe.
  • Canonicalization removes guard bits for value comparisons and hashing.
  • Binary-first representation guarantees the same result across platforms.

Precision Model

  • Guard bits absorb rounding error through long operation chains.
  • Rounding uses round to nearest with guard awareness before truncation.
  • Precision-aware parsing and formatting preserve guard bits via the | separator.

Constructor Normalization

  • Clamp binaryPrecision so no more than 32 source bits are mapped into guard bits (e.g., doubles keep at least 21 in-precision bits, yielding the 37/16 default split).
  • Remaining source bits fill the most significant guard bits; unused guard bits are zero so _size reflects the requested precision.
  • binaryScaler offsets Scale/BinaryExponent without touching the mantissa, and zero inputs keep _size at 0 while encoding the precision budget in Scale.
  • Defaults: doubles 37/16, floats 16/8, integers default to 31/63/64 in-precision bits and widen to the payload width when needed, decimals add 96 extra in-precision bits to the 96-bit payload.

Design Principles

♾️ Arbitrary Precision

Unlike IEEE floating-point with fixed precision (32/64/128 bits), BigFloat's mantissa can grow arbitrarily large, limited only by available memory. This enables calculations with thousands or even millions of digits of precision.

🛡️ Immutable Struct Design

BigFloat is implemented as an immutable value type (struct), ensuring thread safety without locks and preventing unexpected mutations. All operations return new instances rather than modifying existing ones.

⚙️ Base-2 Internal Representation

All operations are performed in binary for maximum efficiency. While this means some decimal values cannot be represented exactly (e.g., 0.1), it aligns with hardware architecture and enables fast bit-level operations.

🔄 Two's Complement

Leverages BigInteger's two's complement representation for efficient arithmetic. This eliminates the need for separate sign handling and simplifies many operations compared to sign-magnitude representation.

Guard Bits Mechanism

The 32 Hidden Guard Bits

BigFloat maintains the 32 least-significant bits of DataBits as "guard bits": extra precision that is not considered accurate on its own but helps preserve accuracy through long chains of operations.

Example: Addition with Guard Bits

  101.011|00110011001100110011001100110011 (≈ 5.4)
+ 100.010|01100110011001100110011001100110 (≈ 4.3)
==========================================
 1001.101|10011001100110011001100110011001 (≈ 9.7)

Without guard bits: 1001.101 (loses precision)
With guard bits:    1001.110 (more accurate rounding)
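
The effect can be reproduced with a small scaled-integer simulation in plain C# (independent of the library): carrying 32 extra low-order bits through the addition lets the final result round to the nearer representable value instead of truncating.

using System;

const int keptBits = 3, guardBits = 32;
double a = 5.4, b = 4.3;

// Without guard bits: truncate each operand to 3 fractional bits before adding.
long aTrunc = (long)(a * (1 << keptBits));
long bTrunc = (long)(b * (1 << keptBits));
Console.WriteLine((aTrunc + bTrunc) / (double)(1 << keptBits));  // 9.625  (1001.101)

// With guard bits: the extra low-order bits ride along and drive round-to-nearest.
long aGuarded = (long)(a * (1L << (keptBits + guardBits)));
long bGuarded = (long)(b * (1L << (keptBits + guardBits)));
double sum = (aGuarded + bGuarded) / (double)(1L << guardBits);  // 77.6 "kept" units
Console.WriteLine(Math.Round(sum) / (1 << keptBits));            // 9.75   (1001.110)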

Benefits of Guard Bits

🎯 Cumulative Error Correction

Over many operations, rounding errors accumulate. Guard bits act as a buffer, preserving sub-precision information that improves final results.

📊 Statistical Accuracy

With proper rounding, guard bits can maintain accuracy through approximately 10²¹ operations before precision degradation becomes significant.

🔄 Rounding Improvement

Guard bits provide additional information for rounding decisions, resulting in "round to nearest" behavior rather than always truncating.

Theoretical Precision Limits

With proper rounding: ~(2³² × 2)² × 4 = 1.5 × 10²¹ operations before guard bit exhaustion
Without rounding: ~2⁴⁰ operations would affect 39 bits (exceeding guard bits)
With rounding: only ~18 bits affected on average (staying within guard bits)

Scale vs. Exponent System

BigFloat Scale vs. IEEE Exponent

  • Measurement direction: Scale is measured from the right (least significant digit); an IEEE exponent is measured from the left (most significant digit).
  • Representation: Scale is a plain integer; IEEE uses a biased exponent.
  • Normalization: not required for BigFloat; IEEE normalizes the mantissa (1.xxxx).
  • Precision control: Scale is more intuitive for decimal placement; IEEE is optimized for hardware.
  • Range: Scale spans -2³¹ to 2³¹-1; IEEE is limited by its exponent bits.

Scale Examples

DataBits: 1234  Scale: 0
Result: 1234 × 2⁰ = 1234
DataBits: 1234  Scale: 2
Result: 1234 × 2² = 4936
DataBits: 1234  Scale: -2
Result: 1234 × 2⁻² = 308.5
DataBits: 1234  Scale: -4
Result: 1234 × 2⁻⁴ = 77.125
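
Expressed as code, the relationship is simply value = DataBits × 2^Scale (a plain C# illustration that ignores guard-bit bookkeeping, not the library's conversion path):

using System;
using System.Numerics;

double ValueOf(BigInteger dataBits, int scale) => (double)dataBits * Math.Pow(2, scale);

Console.WriteLine(ValueOf(1234, 0));   // 1234
Console.WriteLine(ValueOf(1234, 2));   // 4936
Console.WriteLine(ValueOf(1234, -2));  // 308.5
Console.WriteLine(ValueOf(1234, -4));  // 77.125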

Operational Flow

The library leans on a repeatable pipeline that keeps calculations deterministic and predictable. Most arithmetic paths (addition, subtraction, and comparison-heavy code) follow this playbook:

  1. Normalize inputs: Scale alignment brings operands to a shared radix position without discarding guard bits.
  2. Compute: Use BigInteger primitives to perform the core operation in binary (with guard bits intact).
  3. Adjust precision: When shifting is needed, the helper used by RoundingRightShift retains guard bits until a canonical boundary is reached.
  4. Cache sizing: The _size field is updated to mirror the mantissa's bit length and feed size-aware optimizations.
  5. Return immutable result: A new BigFloat instance is produced, leaving operands untouched.
Why it matters: This flow ensures the same inputs always produce the same outputs, simplifies reasoning about precision, and keeps thread safety trivial.
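
A stripped-down sketch of that pipeline for addition, with tuples standing in for the real struct (the helper shape is illustrative only):

using System;
using System.Numerics;

(BigInteger bits, int scale) Add((BigInteger bits, int scale) a, (BigInteger bits, int scale) b)
{
    // Step 1: align both operands to the smaller scale without dropping guard bits.
    int scale = Math.Min(a.scale, b.scale);

    // Step 2: compute in binary with guard bits intact.
    BigInteger sum = (a.bits << (a.scale - scale)) + (b.bits << (b.scale - scale));

    // Steps 3-5: precision adjustment and _size caching would happen here in the real
    // struct, and a fresh, immutable result is returned.
    return (sum, scale);
}

Console.WriteLine(Add((13, 2), (7, 0)));  // (59, 0), i.e. 13 × 2² + 7 = 59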

Rounding & Canonicalization

Round-to-Nearest

BigFloat rounds using the guard bits before truncation, typically producing a round-to-nearest result instead of a blind truncation. This avoids systematic bias when shrinking precision.

  • Guard-aware shifts: RoundingRightShift inspects dropped bits to decide whether to increment the retained mantissa.
  • Carry-safe: Rounding can ripple into higher bits while keeping the scale unchanged.
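
A guard-aware shift can be sketched in a few lines (a hypothetical helper, far simpler than the library's optimized RoundingRightShift):

using System;
using System.Numerics;

// Round-to-nearest right shift: inspect the most significant dropped bit to decide
// whether to bump the retained mantissa. Ties round away from zero for positive values;
// the real implementation handles negatives and carries more carefully.
BigInteger RoundingRightShiftSketch(BigInteger value, int shift)
{
    if (shift <= 0) return value << -shift;

    BigInteger result = value >> shift;
    bool roundUp = !(value >> (shift - 1)).IsEven;  // top dropped bit is 1
    return roundUp ? result + 1 : result;
}

Console.WriteLine(RoundingRightShiftSketch(0b101101, 3)); // 45/8 = 5.625, rounds to 6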

Canonical Values

Comparisons, hashing, and many conversions use a canonicalized form that strips guard bits after rounding. It keeps logical equality aligned with numerical equality while still allowing higher-precision intermediates during computation.

  • Equality-friendly: Two numbers that differ only in guard bits compare as equal after canonicalization.
  • Predictable formatting: Formatting helpers render canonical values by default but expose options to include guard bits for diagnostics.

Interop Guardrails

When casting to fixed-size types (integers, double, or decimal), guard bits are rounded away first to preserve expected magnitude. Overflow paths still respect the immutable, deterministic semantics.

File Organization

Core Implementation Files

📄 BigFloat.cs

Primary Structure & Operations

  • Struct definition and fields
  • Constructors for all numeric types
  • Basic arithmetic operators (+, -, *, /, %)
  • Comparison operators and IComparable
  • Type conversions (explicit/implicit)
  • Core properties (Sign, IsZero, IsInteger)

🧮 BigFloatMath.cs

Mathematical Functions

  • Sqrt() - Newton-Plus algorithm
  • Pow() - Binary exponentiation
  • NthRoot(), CubeRoot()
  • Trigonometric functions (Sin, Cos, Tan)
  • Log2() with hardware acceleration
  • Floor/Ceiling with precision preservation

📝 BigFloatStringsAndSpans.cs

String Formatting & Display

  • ToString() with format specifiers
  • IFormattable implementation
  • Scientific notation support
  • Hexadecimal/Binary output
  • Precision masking (XXXXX notation)
  • Debug visualization

🔍 BigFloatParsing.cs

Input Processing

  • Parse() and TryParse() methods
  • Decimal string parsing
  • Hexadecimal (0x) support
  • Binary (0b) support
  • Scientific notation (1.23e+10)
  • Precision separator (123.456|789)

⚖️ BigFloatCompareTo.cs

Comparison Operations

  • Standard CompareTo()
  • CompareInPrecisionBitsTo()
  • StrictCompareTo() - exact bit comparison
  • FullPrecisionCompareTo()
  • CompareToIgnoringLeastSigBits()
  • BigInteger comparisons

🔧 BigFloatExtended.cs

Extended Functionality

  • UInt128/Int128 constructors
  • FitsInADouble() validation
  • Extended comparisons
  • Precision management utilities
  • Debug and diagnostic functions
  • Helper methods

π Constants.cs

Mathematical Constants

  • Pre-computed constants (π, e, √2, φ)
  • Up to 1M decimal digits precision
  • Base64 encoded storage
  • Lazy loading and caching
  • Category organization
  • External file support

Performance Optimizations

🚀 Newton-Plus Square Root

Custom algorithm that's 2-10x faster than traditional Newton's method. Uses adaptive precision and early termination when convergence is detected.

2-10x faster

⚡ Binary Exponentiation

Efficient O(log n) algorithm for integer powers. Minimizes the number of multiplications through bit manipulation of the exponent.

O(log n) complexity
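
The underlying square-and-multiply idea, shown here on raw BigInteger values (the library's Pow additionally manages precision and guard bits):

using System;
using System.Numerics;

BigInteger PowBySquaring(BigInteger value, int exponent)
{
    BigInteger result = BigInteger.One;
    while (exponent > 0)
    {
        if ((exponent & 1) == 1)  // multiply only when the current exponent bit is set
            result *= value;
        value *= value;           // square once per exponent bit
        exponent >>= 1;
    }
    return result;
}

Console.WriteLine(PowBySquaring(3, 10)); // 59049, using ~log2(10) squarings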

💾 Size Caching

The _size field caches the bit count of DataBits, avoiding repeated expensive GetBitLength() calls during size-based algorithm selection.

~5x speedup for comparisons

🎯 RoundingRightShift

Core primitive for all rounding operations. Optimized for common bit shift patterns with special cases for power-of-2 shifts.

Used in 90% of operations

🔄 Scale Alignment

Efficient scale alignment keeps BigInteger operations during arithmetic to a minimum, reducing unnecessary bit shifting and copying.

30% reduction in allocations

📊 Size-Based Algorithms

Different algorithms selected based on operand sizes. Small numbers use simpler algorithms while large numbers use more sophisticated approaches.

Adaptive optimization


Recent Performance Improvements (2025)

  • IsOneBitFollowedByZeroBits: 2x performance using IsPow2 instead of TrailingZeroCount
  • ToDecimal: Performance boost using cached _size instead of GetBitLength()
  • ToHexString: Complete rewrite for better performance and accuracy
  • Binary Operations: Streamlined internal calculations for better throughput

Memory Management

Memory Characteristics

Struct Overhead

BigInteger reference: 8 bytes
Scale (int): 4 bytes
_size (int): 4 bytes
Base overhead: ~16 bytes

Precision Scaling

Per precision bit: 0.125 bytes
100-bit number: ~29 bytes
1,000-bit number: ~141 bytes
1M-bit number: ~125 KB
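
These figures follow directly from roughly 16 bytes of base overhead plus one byte per eight bits of precision:

using System;

// Rough estimate implied by the figures above; real allocations round up to whole words.
double EstimatedBytes(int precisionBits) => 16 + precisionBits / 8.0;

Console.WriteLine(EstimatedBytes(100));        // ~28.5 bytes
Console.WriteLine(EstimatedBytes(1_000));      // ~141 bytes
Console.WriteLine(EstimatedBytes(1_000_000));  // ~125,016 bytes ≈ 125 KB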

Optimization Strategies

  • Immutable design: Zero heap allocation for struct itself
  • BigInteger efficiency: Leverages .NET's optimized implementation
  • Constants caching: Pre-computed values cached to avoid recomputation
  • Lazy evaluation: Constants loaded only when needed
  • Reference sharing: BigInteger internally shares buffers when possible

Memory Usage Guidelines

💡 Tip: For most applications, precision between 100-1000 bits is sufficient. Only use extreme precision (>10,000 bits) when absolutely necessary, as memory usage and computation time scale with precision.

Key Algorithms

Newton-Plus Square Root Algorithm

BigFloat's signature optimization for square root calculation. Combines Newton's method with adaptive precision scaling and convergence detection.

Key Features:

  • Starts with lower precision, gradually increases
  • Early termination on convergence
  • Bit-shift optimizations for power-of-2 cases
  • 2-10x performance improvement over traditional methods
// Simplified algorithm overview (illustrative sketch, not the tuned library code)
BigFloat Sqrt(BigFloat value)
{
    // Start with a hardware double approximation (~53 correct bits)
    double initial = Math.Sqrt((double)value);
    BigFloat x = new(initial);

    // Newton iterations: each pass roughly doubles the number of correct bits.
    // The real implementation also adjusts working precision adaptively.
    BigFloat previous;
    do
    {
        previous = x;
        x = (x + value / x) / 2;
    } while (x != previous);   // early termination once the iterate stops changing

    return x;
}

Payne-Hanek Reduction

Used for trigonometric functions to accurately reduce arguments to the primary range [-π/2, π/2], maintaining precision even for very large arguments.

Benefits:

  • Accurate for arguments with magnitude up to 2^63
  • Preserves precision for periodic functions
  • Eliminates catastrophic cancellation

Adaptive Precision Division

Division algorithm that dynamically adjusts precision based on operand sizes and required accuracy, minimizing unnecessary computation.

Optimization Strategy:

  • Analyzes operand bit patterns
  • Selects optimal algorithm (long division vs. Newton-Raphson)
  • Adjusts working precision dynamically
  • Handles special cases (power-of-2 divisors)

Design Trade-offs

Binary vs. Decimal

✅ Advantages

  • Aligns with hardware architecture
  • Efficient bit-level operations
  • Direct BigInteger integration

โš ๏ธ Trade-offs

  • Some decimal values not exact (0.1)
  • Conversion overhead for I/O
  • Less intuitive for financial calculations

Scale vs. Exponent

✅ Advantages

  • More intuitive decimal placement
  • No normalization required
  • Simpler mental model

โš ๏ธ Trade-offs

  • Different from IEEE standard
  • Requires conversion for interop
  • Less hardware optimization

Guard Bits (32)

✅ Advantages

  • Maintains accuracy over ~10²¹ ops
  • Minimal memory overhead
  • Good balance for most use cases

โš ๏ธ Trade-offs

  • Not configurable
  • May be insufficient for some edge cases
  • Gradual precision loss inevitable

Future Architecture Considerations

🔄 Repeating Digit Support

Potential addition of a _repeat field to exactly represent rational numbers with repeating binary patterns (e.g., 1/3 = 0.010101...).

🎯 Configurable Guard Bits

Allow users to specify guard bit count based on their precision requirements and operation chain length.

⚡ Hardware Acceleration

Leverage SIMD instructions and GPU computation for operations on very large precision numbers.

📊 Decimal Mode

Optional base-10 internal representation for applications requiring exact decimal arithmetic (financial, accounting).