BigFloat Architecture
Understanding the internal design and implementation
Core Structure

Component Details
DataBits (Mantissa)
- Stores the actual numeric value as a BigInteger
- Uses two's complement for sign representation
- Can grow up to 2 billion bits
- Least significant 32 bits serve as guard bits
Scale
- Indicates radix point position from the right
- Positive scale: shifts radix right (larger numbers)
- Negative scale: shifts radix left (fractions)
- Zero scale: essentially an integer
Size Cache
- Cached value of ABS(DataBits).GetBitLength()
- Avoids repeated expensive bit counting
- Critical for performance in size-based algorithms
- Updated only when DataBits changes
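Taken together, the fields above can be pictured with a sketch like the following (illustrative only; the exact declarations in BigFloat.cs may differ in naming and visibility):

using System.Numerics;

// Illustrative field layout - not the library's exact declarations
public readonly partial struct BigFloat
{
    private readonly BigInteger DataBits; // two's-complement mantissa; the 32 least-significant bits are guard bits
    private readonly int Scale;           // radix-point position, measured from the right
    private readonly int _size;           // cached bit length of ABS(DataBits)
}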
Design Principles
Arbitrary Precision
Unlike IEEE floating-point, whose precision is fixed by the format (32/64/128 bits), BigFloat's mantissa can grow arbitrarily large, limited only by available memory. This enables calculations with thousands or even millions of digits of precision.
Immutable Struct Design
BigFloat is implemented as an immutable value type (struct), ensuring thread safety without locks and preventing unexpected mutations. All operations return new instances rather than modifying existing ones.
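For example (a minimal sketch, assuming the numeric constructors described later in this document):

BigFloat a = new(5.4);
BigFloat b = a + a;   // returns a brand-new instance
// 'a' still represents 5.4 - nothing was mutated, so it can be shared across threads without locks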
Base-2 Internal Representation
All operations are performed in binary for maximum efficiency. While this means some decimal values cannot be represented exactly (e.g., 0.1), it aligns with hardware architecture and enables fast bit-level operations.
Two's Complement
Leverages BigInteger's two's complement representation for efficient arithmetic. This eliminates the need for separate sign handling and simplifies many operations compared to sign-magnitude representation.
Guard Bits Mechanism
The 32 Hidden Guard Bits
BigFloat keeps the 32 least-significant bits of DataBits as "guard bits" - extra working bits that are not counted as accurate precision but help preserve accuracy through chains of operations.
Example: Addition with Guard Bits
  101.011|00110011001100110011001100110011 (≈ 5.4)
+ 100.010|01100110011001100110011001100110 (≈ 4.3)
==========================================
 1001.101|10011001100110011001100110011001 (≈ 9.7)
Without guard bits: 1001.101 (loses precision)
With guard bits: 1001.110 (more accurate rounding)
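The same addition can be reproduced with plain integers. The sketch below (illustrative only; it uses ulong instead of BigInteger for brevity) carries the 32 guard bits through the sum and then contrasts plain truncation with round-to-nearest:

using System;

// The 38-bit patterns from the example above: 6 "real" bits followed by 32 guard bits
ulong a = Convert.ToUInt64("10101100110011001100110011001100110011", 2); // ≈ 5.4
ulong b = Convert.ToUInt64("10001001100110011001100110011001100110", 2); // ≈ 4.3
ulong sum = a + b;                                                        // ≈ 9.7, guard bits intact

ulong truncated = sum >> 32;                 // 0b1001101 -> 1001.101 = 9.625 (guard bits simply dropped)
ulong rounded   = (sum + (1UL << 31)) >> 32; // 0b1001110 -> 1001.110 = 9.750 (guard bits drive the rounding)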
Benefits of Guard Bits
Cumulative Error Correction
Over many operations, rounding errors accumulate. Guard bits act as a buffer, preserving sub-precision information that improves final results.
Statistical Accuracy
With proper rounding, guard bits can maintain accuracy through approximately 10²¹ operations before precision degradation becomes significant.
Rounding Improvement
Guard bits provide additional information for rounding decisions, resulting in "round to nearest" behavior rather than always truncating.
Theoretical Precision Limits
With proper rounding: ~(2³² × 2)² × 4 = 1.5 × 10²¹ operations before guard bit exhaustion
Without rounding: ~2⁴⁰ operations would affect 39 bits (exceeding guard bits)
With rounding: only ~18 bits affected on average (staying within guard bits)
Scale vs. Exponent System
BigFloat Scale vs. IEEE Exponent
Aspect | BigFloat (Scale) | IEEE Floating-Point (Exponent) |
---|---|---|
Measurement Direction | From right (least significant digit) | From left (most significant digit) |
Representation | Integer scale value | Biased exponent |
Normalization | Not required | Mantissa normalized (1.xxxx) |
Precision Control | More intuitive for decimal placement | Optimized for hardware |
Range | -2³¹ to 2³¹-1 | Limited by exponent bits |
Scale Examples
DataBits: 1234, Scale: 0  →  value 1234 (integer, radix point at the far right)
DataBits: 1234, Scale: 2  →  value 1234 × 2² = 4936
DataBits: 1234, Scale: -2 →  value 1234 × 2⁻² = 308.5
DataBits: 1234, Scale: -4 →  value 1234 × 2⁻⁴ = 77.125
(Guard bits aside, the represented value is DataBits × 2^Scale.)
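A quick throwaway check of these numbers (guard bits ignored, result folded into a double):

using System;

static double ValueOf(long dataBits, int scale) => dataBits * Math.Pow(2, scale);

Console.WriteLine(ValueOf(1234, 0));    // 1234
Console.WriteLine(ValueOf(1234, 2));    // 4936
Console.WriteLine(ValueOf(1234, -2));   // 308.5
Console.WriteLine(ValueOf(1234, -4));   // 77.125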
File Organization
Core Implementation Files
BigFloat.cs
Primary Structure & Operations
- Struct definition and fields
- Constructors for all numeric types
- Basic arithmetic operators (+, -, *, /, %)
- Comparison operators and IComparable
- Type conversions (explicit/implicit)
- Core properties (Sign, IsZero, IsInteger)
BigFloatMath.cs
Mathematical Functions
- Sqrt() - Newton-Plus algorithm
- Pow() - Binary exponentiation
- NthRoot(), CubeRoot()
- Trigonometric functions (Sin, Cos, Tan)
- Log2() with hardware acceleration
- Floor/Ceiling with precision preservation
BigFloatStringsAndSpans.cs
String Formatting & Display
- ToString() with format specifiers
- IFormattable implementation
- Scientific notation support
- Hexadecimal/Binary output
- Precision masking (XXXXX notation)
- Debug visualization
BigFloatParsing.cs
Input Processing
- Parse() and TryParse() methods
- Decimal string parsing
- Hexadecimal (0x) support
- Binary (0b) support
- Scientific notation (1.23e+10)
- Precision separator (123.456|789)
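Typical input handling might look like this (a sketch based on the feature list above; the exact method signatures should be checked against BigFloatParsing.cs):

BigFloat a = BigFloat.Parse("123.456");     // decimal string
BigFloat b = BigFloat.Parse("0b101.011");   // binary literal
BigFloat c = BigFloat.Parse("1.23e+10");    // scientific notation

if (BigFloat.TryParse("0x1A.8", out BigFloat d))
{
    // hexadecimal input parsed successfully
}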
BigFloatCompareTo.cs
Comparison Operations
- Standard CompareTo()
- CompareInPrecisionBitsTo()
- StrictCompareTo() - exact bit comparison
- FullPrecisionCompareTo()
- CompareToIgnoringLeastSigBits()
- BigInteger comparisons
BigFloatExtended.cs
Extended Functionality
- UInt128/Int128 constructors
- FitsInADouble() validation
- Extended comparisons
- Precision management utilities
- Debug and diagnostic functions
- Helper methods
Constants.cs
Mathematical Constants
- Pre-computed constants (π, e, √2, φ)
- Up to 1M decimal digits of precision
- Base64 encoded storage
- Lazy loading and caching
- Category organization
- External file support
Performance Optimizations
Newton-Plus Square Root
Custom algorithm that's 2-10x faster than traditional Newton's method. Uses adaptive precision and early termination when convergence is detected.
Binary Exponentiation
Efficient O(log n) algorithm for integer powers. Minimizes the number of multiplications through bit manipulation of the exponent.
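The core idea, stripped of precision management, is the classic square-and-multiply loop (a sketch only; the library's Pow() does considerably more bookkeeping):

// Assumes a non-negative integer exponent
static BigFloat PowBySquaring(BigFloat baseValue, int exponent)
{
    BigFloat result = new(1);
    BigFloat factor = baseValue;
    while (exponent > 0)
    {
        if ((exponent & 1) == 1)
            result *= factor;   // multiply in when the current exponent bit is set
        factor *= factor;       // square once per exponent bit
        exponent >>= 1;
    }
    return result;              // O(log exponent) multiplications in total
}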
Size Caching
The _size field caches the bit count of DataBits, avoiding repeated expensive GetBitLength() calls during size-based algorithm selection.
RoundingRightShift
Core primitive for all rounding operations. Optimized for common bit shift patterns with special cases for power-of-2 shifts.
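One possible shape for such a primitive (a sketch assuming round-half-up and glossing over the sign handling of the real implementation):

using System.Numerics;

static BigInteger RoundingRightShift(BigInteger value, int shift)
{
    if (shift <= 0) return value << -shift;           // nothing is discarded, so nothing to round
    BigInteger half = BigInteger.One << (shift - 1);  // 0.5 ulp at the reduced precision
    return (value + half) >> shift;                   // add half, then truncate
}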
Scale Alignment
Efficient scale alignment keeps BigInteger operations during arithmetic to a minimum, reducing unnecessary bit shifting and copying (see the sketch below).
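Conceptually, addition first brings both operands to a common scale and only then adds the DataBits (a sketch that treats the value as DataBits × 2^Scale and ignores guard-bit rounding):

using System;
using System.Numerics;

static (BigInteger dataBits, int scale) Add(BigInteger aBits, int aScale, BigInteger bBits, int bScale)
{
    int scale = Math.Min(aScale, bScale);       // common (smaller) scale
    BigInteger a = aBits << (aScale - scale);   // only the operand with the larger scale is shifted
    BigInteger b = bBits << (bScale - scale);
    return (a + b, scale);                      // result value = dataBits × 2^scale
}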
Size-Based Algorithms
Different algorithms selected based on operand sizes. Small numbers use simpler algorithms while large numbers use more sophisticated approaches.
Recent Performance Improvements (2025)
- IsOneBitFollowedByZeroBits: 2x performance using IsPow2 instead of TrailingZeroCount
- ToDecimal: Performance boost using cached _size instead of GetBitLength()
- ToHexString: Complete rewrite for better performance and accuracy
- Binary Operations: Streamlined internal calculations for better throughput
Memory Management
Memory Characteristics
Struct Overhead
Precision Scaling
Optimization Strategies
- Immutable design: Zero heap allocation for struct itself
- BigInteger efficiency: Leverages .NET's optimized implementation
- Constants caching: Pre-computed values cached to avoid recomputation
- Lazy evaluation: Constants loaded only when needed
- Reference sharing: BigInteger internally shares buffers when possible
Memory Usage Guidelines
Tip: For most applications, precision between 100-1000 bits is sufficient. Only use extreme precision (>10,000 bits) when absolutely necessary, as memory usage and computation time scale with precision.
Key Algorithms
Newton-Plus Square Root Algorithm
BigFloat's signature optimization for square root calculation. Combines Newton's method with adaptive precision scaling and convergence detection.
Key Features:
- Starts with lower precision, gradually increases
- Early termination on convergence
- Bit-shift optimizations for power-of-2 cases
- 2-10x performance improvement over traditional methods
// Simplified algorithm overview (illustrative; the production Newton-Plus code is more involved)
BigFloat Sqrt(BigFloat value)
{
    // Start with a hardware double approximation (~53 correct bits)
    double initial = Math.Sqrt((double)value);
    BigFloat x = new(initial);
    BigFloat previous;

    // Newton iterations: each step roughly doubles the number of correct bits
    do
    {
        previous = x;
        x = (x + value / x) / 2;
        // The production code also grows the working precision adaptively here
        // and terminates early once convergence is detected
    } while (x != previous);

    return x;
}
Payne-Hanek Reduction
Used for trigonometric functions to accurately reduce arguments to the primary range [-π/2, π/2], maintaining precision even for very large arguments.
Benefits:
- Accurate for arguments with magnitude up to 2^63
- Preserves precision for periodic functions
- Eliminates catastrophic cancellation
Adaptive Precision Division
Division algorithm that dynamically adjusts precision based on operand sizes and required accuracy, minimizing unnecessary computation.
Optimization Strategy:
- Analyzes operand bit patterns
- Selects optimal algorithm (long division vs. Newton-Raphson)
- Adjusts working precision dynamically
- Handles special cases (power-of-2 divisors)
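The power-of-2 special case is the simplest to picture: dividing by 2^k never touches the mantissa at all (a sketch assuming value = DataBits × 2^Scale, guard bits aside):

using System.Numerics;

// x / 2^k  ==  DataBits × 2^(Scale - k): a pure scale adjustment, no BigInteger division needed
static (BigInteger dataBits, int scale) DivideByPowerOfTwo(BigInteger dataBits, int scale, int k)
    => (dataBits, scale - k);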
Design Trade-offs
Binary vs. Decimal
Advantages
- Aligns with hardware architecture
- Efficient bit-level operations
- Direct BigInteger integration
Trade-offs
- Some decimal values not exact (0.1)
- Conversion overhead for I/O
- Less intuitive for financial calculations
Scale vs. Exponent
Advantages
- More intuitive decimal placement
- No normalization required
- Simpler mental model
Trade-offs
- Different from IEEE standard
- Requires conversion for interop
- Less hardware optimization
Guard Bits (32)
Advantages
- Maintains accuracy over ~10²¹ ops
- Minimal memory overhead
- Good balance for most use cases
Trade-offs
- Not configurable
- May be insufficient for some edge cases
- Gradual precision loss inevitable
Future Architecture Considerations
Repeating Digit Support
Potential addition of a _repeat field to exactly represent rational numbers with repeating binary patterns (e.g., 1/3 = 0.010101...).
Configurable Guard Bits
Allow users to specify guard bit count based on their precision requirements and operation chain length.
Hardware Acceleration
Leverage SIMD instructions and GPU computation for operations on very large precision numbers.
Decimal Mode
Optional base-10 internal representation for applications requiring exact decimal arithmetic (financial, accounting).