US6813742B2 and 5 Similar Patents That Improve Wireless Signal Processing

US6813742B2

Processing wireless signals quickly and efficiently is the key to reliable mobile communication, and patents like US6813742B2 make this possible.

Simply put, Patent US6813742B2 covers a decoding method to improve data speed and reduce power consumption in 3G systems. It uses a pipelined structure that enables continuous signal correction, making it well-suited for mobile devices.

Assigned to TurboCode LLC, this patent is now part of multiple infringement lawsuits. One of these cases involves National Instruments Corporation, but we will not delve into the legalities here. This article focuses on the technology behind the patent and its applications.

Using the Global Patent Search (GPS) tool, we explore five similar inventions that share key concepts with US6813742B2. Each offers a different approach to decoding speed and efficiency in wireless systems.

Understanding Patent US6813742B2

Patent US6813742B2 introduces a baseband processing subsystem tailored for third-generation (3G) wireless networks. It targets key bottlenecks in turbo code decoding by enabling iterative signal correction without sacrificing performance. 

The design emphasizes streamlined architecture, making it suitable for integration into modern mobile hardware platforms. 

[Image: turbo code decoder (Source: Google Patents)]

Its Four Key Features Are

1. Pipelined Log-MAP architecture: Two SISO Log-MAP decoders operate in a feedback loop for iterative decoding.

2. High data throughput: The decoder produces one output per clock cycle in pipeline mode.

3. Low-power hardware implementation: The use of adders instead of multipliers makes ASIC deployment more feasible.

4. Interleaver and de-interleaver memory: These modules store and route soft-decision data between decoding stages.

This decoder is designed for use in 3G systems, such as W-CDMA and CDMA2000. Its structure reduces complexity while maintaining decoding accuracy. The design enables faster, lower-power processing, making it suitable for mobile communication devices.
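
For readers who want a feel for the "adders instead of multipliers" point above: Log-MAP decoding works in the log domain, where the recurring operation is the Jacobian logarithm max*(a, b) = max(a, b) + ln(1 + e^(-|a - b|)), and hardware typically reads the correction term from a small lookup table so that only comparators and adders are needed. The Python sketch below illustrates that general idea only; the table resolution and size are arbitrary choices, and this is not the circuit claimed in US6813742B2.

```python
import math

# Lookup table for the correction term ln(1 + e^(-x)), indexed by quantized |a - b|.
# STEP and TABLE_SIZE are illustrative choices, not values from the patent.
STEP = 0.125
TABLE_SIZE = 32
CORRECTION = [math.log(1.0 + math.exp(-i * STEP)) for i in range(TABLE_SIZE)]

def max_star(a, b):
    """Jacobian logarithm max*(a, b) using only compare, subtract, add,
    and a table lookup -- the operation repeated throughout a Log-MAP decoder."""
    diff = abs(a - b)
    idx = int(diff / STEP)
    correction = CORRECTION[idx] if idx < TABLE_SIZE else 0.0  # negligible beyond the table
    return max(a, b) + correction

# Exact value is ln(e^1.2 + e^0.9) ≈ 1.754; the table-based result is close.
print(max_star(1.2, 0.9))
```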

Patents Similar to US6813742B2

To explore the decoding architecture outlined in US6813742B2, we used the Global Patent Search tool to identify closely related inventions. These references share technical strategies such as pipelined processing and optimization of BCJR/Log-MAP computations. 

Each patent presents a different solution to increase decoding speed and reduce hardware complexity in turbo decoders. Below, we highlight five patents that reflect comparable approaches to improving performance in high-throughput wireless systems.

#1. US2002062471A1

This US application, US2002062471A1, published in 2002, outlines a turbo decoder optimized for high-speed operation with minimal latency and reduced circuit complexity. It introduces a pipelined decoding method based on ACS-approximated BCJR computations. This enables higher throughput without significantly increasing circuit size or memory requirements.

[Image: turbo coder patent (Source: GPS)]

What This Patent Introduces To The Landscape

  1. ACS-based BCJR approximation – Simplifies MAP decoding using Add-Compare-Select logic blocks.
  2. Pipelined gamma metric supply – Gamma values are processed across multiple stages to enable parallel operation.
  3. Stage-wise alpha/beta metric updates – Updates every K stages to reduce memory overhead.
  4. Cascaded likelihood computation – Computes likelihoods with synchronized alpha, beta, and gamma metrics.
  5. Scalable throughput – Speed improves linearly with K, enabling tailored performance for application needs.
  6. Compact decoder design – Delivers these performance gains without increasing memory or circuit size.

How It Connects To US6813742B2

  • Both patents increase throughput using pipelined decoding strategies.
  • Each adopts a soft-in soft-out (SISO) decoding method centered on MAP-based logic.
  • US6813742B2 uses dual Log-MAP decoders with memory interleaving; this patent uses stage-level BCJR approximation.
  • Both aim to reduce decoding delay and power consumption for mobile communication systems.
  • The focus on parallelism and synchronized processing appears across both designs.

Why This Matters

This patent reinforces the use of pipelining and ACS logic in high-performance turbo decoding. It shows how internal structure optimization can deliver low-latency decoding suited for next-generation wireless communication. This aligns with the goals of US6813742B2.
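
To make the Add-Compare-Select idea concrete, the sketch below performs one forward (alpha) metric update over a toy trellis: each new state metric is the best "predecessor metric plus branch metric" candidate. The trellis connectivity and branch metrics are made up for illustration, so treat this as a generic ACS recursion rather than the architecture described in US2002062471A1.

```python
# Toy trellis: each state lists its (predecessor_state, branch_metric) candidates.
# States, connectivity, and branch metrics are invented for illustration.
NEG_INF = float("-inf")

def acs_forward_step(alpha, branches):
    """One Add-Compare-Select update of forward state metrics.

    alpha:    list of current state metrics, one per trellis state.
    branches: branches[s] is a list of (predecessor, branch_metric) pairs for state s.
    Returns the updated alpha metrics (add, then compare/select the maximum).
    """
    new_alpha = []
    for candidates in branches:
        best = NEG_INF
        for predecessor, branch_metric in candidates:
            best = max(best, alpha[predecessor] + branch_metric)  # add, compare, select
        new_alpha.append(best)
    return new_alpha

# Example: a 4-state trellis step with arbitrary branch metrics.
alpha = [0.0, -1.0, -0.5, -2.0]
branches = [
    [(0, 0.3), (1, -0.2)],   # state 0 reachable from states 0 and 1
    [(2, 0.1), (3, 0.4)],    # state 1 reachable from states 2 and 3
    [(0, -0.1), (1, 0.5)],   # state 2 reachable from states 0 and 1
    [(2, 0.2), (3, -0.3)],   # state 3 reachable from states 2 and 3
]
print(acs_forward_step(alpha, branches))  # -> [0.3, -0.4, -0.1, -0.3] (up to float rounding)
```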

#2. US2002071505A1 

This US application, US2002071505A1, published in 2002, outlines a turbo decoder that uses a parallel processing architecture to decode turbo-encoded data more efficiently. It proposes a modular system in which multiple SISO decoders and interleavers work in synchronized iterations to improve decoding throughput while reducing bottlenecks.

[Image: turbo code decoder patent (Source: GPS)]

What This Patent Introduces To The Landscape

  1. Modulo-N parallel decoding architecture – Implements multiple SISO decoders operating simultaneously, each tied to a specific modulo sequence.
  2. Cross-coupled SISO iterations – SISO modules pass extrinsic values between each other through interleavers for each decoding round.
  3. Tuple and bit interleaving schemes – Supports both grouped (tuple) and separated (bit-level) interleaving for added flexibility.
  4. Memory-optimized SISO design – Processes sub-blocks of data at a time to limit memory footprint.
  5. MAP-based decoding within SISOs – Each SISO uses the Maximum A Posteriori algorithm for soft decision decoding.
  6. Integrated address sequencing – Interleavers and deinterleavers use advanced addressing controlled by a centralized generator.
  7. Modulo-N interleaver consistency – Maintains sequence integrity during encoding and decoding across multiple interleavers.
  8. Normalization logic – Controls register growth using a low-latency logic gate for efficient alpha value handling.

How It Connects To US6813742B2

  • Both designs use iterative soft-in soft-out (SISO) decoders to improve decoding accuracy.
  • US6813742B2 uses two pipelined Log-MAP decoders; this patent expands that to multiple parallel SISOs under a modulo-N framework.
  • Each patent addresses high-speed decoding by optimizing data flow between decoders and interleavers.
  • US6813742B2 focuses on pipelined throughput; this patent offers parallelism scaling with N decoder paths.
  • Both highlight the importance of structured interleaving and efficient memory use in turbo decoding systems.

Why This Matters

This patent offers a scalable approach to turbo decoding using parallel SISO modules tied to modulo sequence logic. It complements US6813742B2’s pipelined method by showing how broader parallelism can further boost decoding performance across large data blocks.
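
One way to picture the modulo-N scheme is to partition symbol indices by index mod N and hand each partition to its own SISO worker. The sketch below does exactly that with a placeholder siso_decode_stub function; the partitioning rule, the worker interface, and the thread-based parallelism are illustrative assumptions, not the patent's exact design.

```python
from concurrent.futures import ThreadPoolExecutor

def siso_decode_stub(indices, soft_inputs):
    """Placeholder for one SISO decoder pass over its assigned sub-sequence.
    Here it simply returns the inputs unchanged, tagged with their indices."""
    return [(i, soft_inputs[i]) for i in indices]

def modulo_n_decode(soft_inputs, n_workers=4):
    """Split symbol indices by index mod N and run one SISO stub per partition."""
    partitions = [
        [i for i in range(len(soft_inputs)) if i % n_workers == k]
        for k in range(n_workers)
    ]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(lambda idx: siso_decode_stub(idx, soft_inputs), partitions)
    # Reassemble outputs in original symbol order.
    merged = sorted(pair for chunk in results for pair in chunk)
    return [value for _, value in merged]

# With pass-through stubs, the output matches the input, in order.
print(modulo_n_decode([0.9, -1.1, 0.3, -0.4, 1.2, -0.8, 0.05, -2.0], n_workers=4))
```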

#3. GB2352943A 

This UK patent application, GB2352943A, published in 2001, outlines a turbo-code decoder that reduces computational complexity while preserving decoding performance. It replaces conventional MAP decoding with Viterbi-based techniques and includes SOVA decoding, parallel SISO operation, and adaptive memory control to optimize both speed and hardware efficiency.

[Image: turbo code decoding patent (Source: GPS)]

What This Patent Introduces To The Landscape

  1. Viterbi-based soft decision decoding – Uses Viterbi-like processing to generate soft decisions without explicit channel state measurement.
  2. SOVA integration – Offers an alternate decoding method using the Soft Output Viterbi Algorithm.
  3. CRC-based iteration control – Decoding repetition is driven by error detection rather than fixed iteration counts.
  4. Parallel MAP decoder execution – Enables two soft decision modules to process sequences simultaneously for high-speed decoding.
  5. Efficient path metric storage – Reduces memory needs by updating forward and backward metrics only for select time windows.
  6. Switching between soft decision devices – Dynamically alternates decoding between first and second soft decision devices.
  7. Reduced memory overhead – Stores only a limited set of metrics (M < N), optimizing RAM usage.
  8. Hybrid decoder compatibility – Can operate with either conventional MAP decoders or simplified soft decision logic.

How It Connects To US6813742B2

  • Both patents target higher decoding speed in turbo-coded systems while minimizing hardware and memory cost.
  • US6813742B2 uses dual Log-MAP decoders in a pipelined loop; GB2352943A proposes parallel Viterbi-based soft decision paths.
  • US6813742B2 relies on interleaving between two SISO units; this patent includes switching logic and dynamic SISO coordination.
  • Each system offers reduced complexity by limiting resource use (US6813742B2 with adders, GB2352943A with metric optimization).
  • Both designs aim for compatibility with mobile communication constraints, particularly speed, cost, and power efficiency.

Why This Matters

This patent presents a pragmatic alternative to traditional turbo decoding by applying simplified logic and adaptive memory use. It offers resource-conscious strategies that align with the low-power, high-speed goals of US6813742B2.
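
CRC-based iteration control is worth a closer look: rather than always running a fixed number of turbo iterations, the decoder stops as soon as a checksum over the hard decisions passes. The sketch below wraps that stopping rule around a placeholder half-iteration function; the CRC-32 check, the iteration cap, and the stub decoder are assumptions made for illustration, not the mechanism specified in GB2352943A.

```python
import zlib

def half_iteration_stub(llrs):
    """Placeholder for one SISO pass; a real decoder would refine the LLRs here."""
    return llrs

def hard_decisions(llrs):
    """Map LLRs to bits (positive LLR -> 0, negative -> 1, a common convention)."""
    return bytes(0 if llr >= 0 else 1 for llr in llrs)

def decode_with_crc_stop(llrs, expected_crc, max_iterations=8):
    """Iterate until the CRC-32 of the hard decisions matches, or the cap is hit."""
    for iteration in range(1, max_iterations + 1):
        llrs = half_iteration_stub(llrs)
        if zlib.crc32(hard_decisions(llrs)) == expected_crc:
            return hard_decisions(llrs), iteration  # early exit on CRC pass
    return hard_decisions(llrs), max_iterations

llrs = [2.1, -1.4, 0.7, -3.0]
target = zlib.crc32(hard_decisions(llrs))  # pretend this CRC was sent with the frame
print(decode_with_crc_stop(llrs, target))  # stops after the first iteration
```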

Alongside pipelining and iterative decoding, another direction is exemplified by US6630507B1, which centers on streamlining interleaver memory and decoder alignment for compact architectures.

#4. RU2236085C2 

This Russian patent, RU2236085C2, published in 2004, describes a memory-efficient MAP decoder architecture tailored for turbo-coded wireless communication systems. It focuses on managing decoder memory across multiple windows. This enables parallel and sequential decoding to improve throughput and reduce hardware complexity.

[Image: turbo code decoder (Source: GPS)]

What This Patent Introduces To The Landscape

  1. Multi-window RAM buffering – Uses S + 1 RAM windows for sequential reading/writing during iterative decoding.
  2. Bidirectional decoding scheme – Alternates decoding between forward and reverse directions to compute reliable LLR values.
  3. State metric calculator (SMC) banks – Employs distinct or shared SMCs for flexible parallel decoding configurations.
  4. Double-buffered memory architecture – Dual memory banks allow simultaneous read/write for LLR processing.
  5. Multiplexer-controlled RAM switching – Manages addressable access to each decoding window using programmable counters.
  6. Interleaver/deinterleaver synchronization – Each iteration integrates turbo interleaving for updated metric alignment.
  7. Reduced path metric storage – Minimizes memory by storing metrics only for active decoding windows.
  8. Speed-scalable clocking – Supports fewer SMC units by increasing clock rate, balancing speed with power use.

How It Connects To US6813742B2

  • Both patents tackle high-speed turbo decoding using RAM-based interleaver/deinterleaver structures.
  • US6813742B2 proposes pipelined Log-MAP decoding with dual decoders; RU2236085C2 employs parallel memory-managed MAP decoding.
  • Each leverages soft input, soft output (SISO) computation in iterative loops, structured by memory buffers.
  • US6813742B2 focuses on ASIC-friendly design with simplified logic; RU2236085C2 optimizes RAM efficiency and LLR processing.
  • Both aim to boost data throughput while keeping memory and power usage under control for mobile environments.

Why This Matters

This patent shows how memory architecture plays a central role in enabling fast and efficient turbo decoding. It provides design principles for managing interleaving, buffering, and state metrics, which are also core elements found in US6813742B2.
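
The windowed memory idea can be pictured as decoding a long block in fixed-size windows so that state metrics are held only for the window currently being processed. The sketch below simply compares peak metric storage for whole-block versus windowed decoding; the block length, window size, and state count are arbitrary numbers, and the actual RU2236085C2 architecture involves far more machinery (double buffering, multiplexed RAM access, and so on).

```python
def peak_metric_storage(block_length, window_size, num_states=8):
    """Compare worst-case state-metric storage (in metric values) for
    whole-block decoding versus window-by-window decoding."""
    whole_block = block_length * num_states                  # keep every time step's metrics
    windowed = min(block_length, window_size) * num_states   # keep one window at a time
    return whole_block, windowed

whole, windowed = peak_metric_storage(block_length=5120, window_size=64)
print(f"whole block: {whole} metrics, windowed: {windowed} metrics")
# -> whole block: 40960 metrics, windowed: 512 metrics
```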

#5. JP2001285079A 

This Japanese patent application, JP2001285079A, published in 2001, outlines a hybrid decoding device capable of decoding both turbo codes and convolutional codes. It uses dual MAP decoders with interleaving loops for iterative decoding. It is designed to balance performance and hardware efficiency.

[Image: turbo code decoder patent (Source: GPS)]

What This Patent Introduces To The Landscape

  1. Dual MAP decoder architecture – Two decoders are linked in a loop through interleavers for iterative decoding.
  2. Turbo and convolutional decoding support – A shared hardware block decodes both coding types via dynamic switching.
  3. Likelihood-based iterative decoding – MAP algorithm calculates prior and posterior likelihood values for each bit.
  4. Switchable decoding pathways – Automatically detects coding type and selects decoding logic accordingly.
  5. Soft output generation – Produces soft decision data for enhanced error correction performance.
  6. Circuit area reduction – Unified decoding reduces gate count and LSI footprint.
  7. Flexible algorithm compatibility – Can substitute MAP with SOVA to reduce computational overhead.
  8. Embedded determination logic – Integrates decision-making blocks for final data recovery and output.

How It Connects To US6813742B2

  • Both devices apply iterative decoding using soft-output MAP logic to enhance performance.
  • JP2001285079A introduces dynamic switching between turbo and convolutional decoding; US6813742B2 is focused purely on turbo decoding with a pipelined Log-MAP architecture.
  • Each approach utilizes interleaving and deinterleaving to loop extrinsic information between decoders.
  • Both are designed to reduce hardware cost and improve suitability for mobile communications.
  • JP2001285079A emphasizes multi-mode adaptability; US6813742B2 focuses on high-throughput, ASIC-optimized turbo decoding.

Why This Matters

This patent highlights the value of decoder flexibility in mobile communications. Its dual-mode approach contrasts with US6813742B2’s speed-optimized, turbo-only design, offering a broader decoding solution.
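
The dual-mode concept boils down to a single entry point that inspects a frame's coding type and routes it to a turbo or a convolutional decoding path. The minimal dispatcher below sketches that routing; the mode flag, the frame representation, and the two stub decoders are illustrative assumptions rather than the structure claimed in JP2001285079A.

```python
from typing import List

def turbo_decode_stub(soft_bits: List[float]) -> List[int]:
    """Placeholder for the iterative dual-MAP turbo decoding path."""
    return [0 if x >= 0 else 1 for x in soft_bits]

def convolutional_decode_stub(soft_bits: List[float]) -> List[int]:
    """Placeholder for the convolutional (e.g., Viterbi-style) decoding path."""
    return [0 if x >= 0 else 1 for x in soft_bits]

def decode_frame(coding_type: str, soft_bits: List[float]) -> List[int]:
    """Route a frame to the matching decoding path based on its coding type."""
    if coding_type == "turbo":
        return turbo_decode_stub(soft_bits)
    if coding_type == "convolutional":
        return convolutional_decode_stub(soft_bits)
    raise ValueError(f"unknown coding type: {coding_type}")

print(decode_frame("turbo", [1.3, -0.2, 0.8, -1.9]))          # -> [0, 1, 0, 1]
print(decode_frame("convolutional", [-0.5, 0.4, -2.2, 1.1]))  # -> [1, 0, 1, 0]
```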

Want a wider view? US6813742B2 and its related patents show how turbo decoding underpins today’s high-speed error-correction battles.

How to Find Related Patents Using Global Patent Search

[Image: GPS Tool]

It is essential to conduct a thorough patentability search when studying turbo decoders, iterative architectures, or decoding throughput. The Global Patent Search tool streamlines this process by surfacing inventions that address similar engineering challenges in high-speed error correction and digital communications.

1. Enter the patent number into GPS: Start by entering a patent number like US6813742B2 into the GPS tool. The platform transforms it into a targeted query, which can be refined with terms like “turbo decoding,” “MAP algorithm,” or “pipelined architecture.”

2. Explore conceptual snippets: Instead of comparing features claim-by-claim, GPS now presents curated text snippets. These highlight how other inventions improve decoding speed, reduce memory load, or enhance error correction reliability.

3. Identify related inventions: The tool reveals patents describing BCJR decoders, soft-output Viterbi methods, and parallel architecture enhancements, offering insight into how decoder performance has evolved.

4. Compare systems, not legal claims: Rather than focusing strictly on claim language, GPS emphasizes technical solutions. This helps users recognize overlapping strategies in forward/backward metric calculations or concurrent pipeline designs.

5. Accelerate cross-domain insights: Whether you are in digital signal processing, hardware acceleration, or communications systems, GPS helps uncover transferable architectures that might otherwise remain siloed.

Want to get hands-on? Check out our curated list of the best patent analysis tools to support technical comparison and competitive patent scouting.

With Global Patent Search, researchers can support technology transfer and scouting by analyzing how turbo decoders are built, refined, and scaled across industries. This structured view of decoding architectures helps innovators identify optimization strategies and drive next-generation communication solutions.

Disclaimer: The information provided in this article is for informational purposes only and should not be considered legal advice. The related patent references mentioned are preliminary results from the Global Patent Search tool and do not guarantee legal significance. For a comprehensive related patent analysis, we recommend conducting a detailed search using GPS or consulting a patent attorney.