# Optical Coherent Reception and Noise Processes

Authored by: Le Nguyen Binh

# Noises in Optical Communications and Photonic Systems

Print publication date:  November  2016
Online publication date:  November  2016

Print ISBN: 9781482246940
eBook ISBN: 9781315372747

10.1201/9781315372747-4

#### Abstract

Detection of optical signals can be carried out at the optical receiver by direct conversion of optical signal power to electronic current in the photodiode (PD) and then electronic amplification. This chapter provides the fundamental understanding of coherent detection (CoD) of optical signals, which requires the mixing of the optical fields of the optical signals and that of the local oscillator (LO), a high-power laser so that its beating product would result in the modulated signals preserving both its phase and amplitude characteristics in the electronic domain. Optical preamplification in CoD can also be integrated at the front end of the optical receiver.


#### 3.1  Introduction

With the exponential increase in data traffic, especially due to the demand for ultrabroad bandwidth driven by multimedia applications, cost-effective ultrahigh-speed optical networks have become highly desirable. It is expected that Ethernet technology will not only dominate in access networks but will also become the key transport technology of next-generation metro/core networks. 100 Gigabit Ethernet (100 GbE) is currently considered to be the next logical evolution step after 10 GbE. Based on the anticipated 100 GbE requirements, a serial data rate of 100 Gbit/s per wavelength is required. To achieve this data rate while complying with current system design specifications such as channel spacing, chromatic dispersion (CD), and polarization mode dispersion (PMD) tolerance, coherent optical communication systems with multilevel modulation formats are desired, because they can provide high spectral efficiency, high receiver sensitivity, and potentially high tolerance to fiber dispersion effects [16].*


Compared to conventional direct detection in intensity-modulation/direct-detection (IMDD) systems, which detects only the intensity of the signal light, CoD can retrieve the phase information of the light and therefore can tremendously improve the receiver sensitivity.

Coherent optical receivers are important components in long-haul optical fiber communication systems and networks, improving the receiver sensitivity and thus extending the transmission distance. Coherent techniques were considered for optical transmission systems in the 1980s, when the repeater spacing was pushed from 40 km to 60 km for single-mode optical fiber at a bit rate of 140 Mb/s. However, in the late 1980s the invention of optical fiber amplifiers superseded this approach. Recently, coherent optical communications have again attracted significant research activity for ultrahigh-bit-rate dense wavelength division multiplexing (DWDM) optical systems and networks. This revival has been possible because (i) optical amplifiers in cascaded fiber spans add significant noise and thus limit the transmission distance; (ii) advances in digital signal processors (DSP), whose sampling rates now reach a few tens of giga-samples/s, allow the processing of beating signals to recover the phase (phase estimation, PE); (iii) advanced signal processing algorithms such as the Viterbi and turbo algorithms are available; and (iv) differential coding, modulation, and detection of such signals may not require an optical phase-locked loop (OPLL), so self-coherent reception and DSP can recover the transmitted signals. These technological advances, especially digital processors operating at ultrahigh sampling rates, overcome several difficulties encountered in the homodyne coherent receivers of the first coherent system generation in the 1980s.

As is well known, in a typical optical receiver the optical signals are detected by a PD (a PIN diode, an avalanche photodiode [APD], or a photon-counting device); electrons generated in the photodetector are then amplified by a front-end electronic amplifier. The electronic signals are then decoded to recover the original data. However, when the fields of the incoming optical signals are mixed with those of an LO whose frequency can be identical to or different from that of the carrier, the phase and frequency properties of the resultant signals reflect those of the original signals. Coherent optical communication systems have also been reviving dramatically owing to electronic processing and the availability of stable narrow-linewidth lasers.

This chapter deals with the analysis and design of coherent receivers with an OPLL, in which the optical signals and the LO field are mixed in the optical domain and then detected by the optoelectronic receiver. Thus, both the optical mixing and photodetection devices act as the fundamental elements of a coherent optical receiver. Depending on the frequency difference between the lightwave carrier of the optical signals and that of the LO, the CoD is termed heterodyne or homodyne detection. For heterodyne detection, there is a frequency difference, and thus the beating signal falls in a passband region in the electronic domain; all the electronic processing at the front end must operate in this passband region. In homodyne detection, on the other hand, there is no frequency difference, and thus the detection is in the baseband of the electronic signal. Both cases require locking of the LO to the carrier of the signals. An OPLL is thus treated in this chapter.

This chapter is organized as follows: Section 3.2 gives an account of the components of coherent receivers, Section 3.3 outlines the principles of optical coherent detection under heterodyne, homodyne, or intradyne techniques, and Section 3.4 gives details of the OPLL which is a very important development for modern optical coherent detection.

The design of an optical receiver depends on the modulation format of the signals sent by the transmitter. The modulation of the optical carrier can be in the form of amplitude, phase, or frequency. Furthermore, pulse shaping also plays a critical role in the detection and the bit-error rate (BER) of the receiver and hence of the transmission system. In particular, the design depends on whether the modulation is analog or digital, on a Gaussian or exponential pulse shape, on on-off keying or multiple levels, and so on.

Figure 3.1 shows the schematic diagram of a digital coherent optical receiver, which is similar to a direct detection receiver but with an optical mixer at the front end. Figure 3.2 shows the small-signal equivalent circuit of such a receiver's front end. The phase of the signals at the base or passband of the detected signals in the electrical domain is preserved in the electronic currents and voltages at the output of the electronic preamplifier. An optical front end is an optical mixer combining the fields of the local laser and the optical signals so that their envelopes beat with each other, producing terms at the sum and at the difference of the lightwave frequencies. Only the lower-frequency term, which falls within the absorption range of the photodetector, is converted into an electronic current preserving both the phase and the amplitude of the modulated signals.

Figure 3.1   Schematic diagram of a digital optical coherent receiver with an additional LO mixing with the received optical signals before being detected by an optical receiver.

Figure 3.2   Schematic diagram of an electronic preamplifier in an optical receiver of a transimpedance electronic amplifier at the front end. The current source represents the electronic current generated in the photodetector due to the beating of the local oscillator and the optical signals. Cd = photodiode capacitance.

Thus, an optical receiver front end, much the same as that of direct detection, follows the optical processing front end. It consists of a photodetector for converting lightwave energy into electronic current; an electronic preamplifier for amplifying the generated current, followed by an electronic equalizer for bandwidth extension, usually in voltage form; a main amplifier for further voltage amplification; a clock recovery circuit for regenerating the timing sequence; and a voltage-level decision circuit that samples the waveform for final recovery of the transmitted digital sequence. The optoelectronic preamplifier is therefore followed by a main amplifier with automatic control to regulate the electronic signal voltage, which is then filtered and sampled by a decision circuit synchronized by the clock recovery circuit.

An inline fiber optical amplifier can be incorporated in front of the photodetector to form an optical receiver with an optical amplifier front end to improve its receiving sensitivity. This optical amplification at the front end of an optical receiver will be treated in this chapter dealing with optical amplification processes.

The receiver thus consists of four parts: the optical mixing front end, the electronic front-end section, the linear channel of the main amplifier with automatic gain control (AGC) if necessary, and the data recovery section. The optical mixing front end sums the optical fields of the LO and the optical signals. The polarization orientation between these lightwaves is critical to maximize the beating of the additive field in the PD. Depending on whether the frequency difference between these fields is finite or null, the resulting electronic signals derived from the detector lie in the passband or the baseband, and the detection technique is termed heterodyne or homodyne, respectively.

#### 3.3  CoD

Optical CoD can be distinguished from the "demodulation" schemes of communications techniques by the following definitions: (i) CoD is the mixing between two lightwaves or optical carriers, one the information-bearing lightwave and the other an LO with an average energy much larger than that of the signals; and (ii) demodulation refers to the recovery of baseband signals from the electrical signals.

A typical schematic of coherent optical communications employing guided-wave media and components is shown in Figure 3.1, in which a narrow-band laser, incorporating an optical isolator and cascaded with an external modulator, usually forms the optical transmitter. Information is fed via a microwave power amplifier to an integrated optic modulator, commonly of the LiNbO3 or electroabsorption (EA) type. CoD is the principal feature of coherent optical communications, which can be further distinguished into heterodyne and homodyne techniques depending on whether or not there is a difference between the frequency of the LO and that of the signal carrier. An LO is a laser source whose frequency can be tuned, approximating a monochromatic source; a polarization controller is also used to match its polarization with that of the information-bearing carrier. The LO and the transmitted signal are mixed via a polarization-maintaining coupler and then detected by a coherent optical receiver. Most previous CoD schemes are implemented in a mixture of the photonic and electronic/microwave domains.

Coherent optical transmission has become a focus of research. One significant advantage is the preservation of all the information of the optical field during detection, enabling enhanced possibilities for optical multilevel modulation. This section investigates the generation of optical multilevel modulation signals. Several possible structures of optical M-ary phase-shift keying (M-ary-PSK) and M-ary quadrature amplitude modulation (M-ary-QAM) transmitters are shown and theoretically analyzed. Differences in the optical transmitter configuration and the electrical driving lead to different properties of the optical multilevel modulation signals. This is shown by deriving general expressions applicable to every M-ary-PSK and M-ary-QAM modulation format and illustrated for square 16-QAM modulation.

Coherent receivers are distinguished as synchronous or asynchronous. Synchronous detection requires an OPLL that recovers the phase and frequency of the received signals to lock the LO to the signal, so as to measure the absolute phase and frequency of the signals relative to the LO. Synchronous receivers thus allow direct mixing of the bandpass signals down to the baseband, and this technique is termed homodyne reception. For asynchronous receivers, the frequency of the LO is approximately the same as that of the received signals and no OPLL is required. In general, the optical signals are first mixed down to an intermediate frequency (IF), typically about two to three times the 3 dB passband. The electronic signals can then be recovered using an electrical PLL at the lower carrier frequency in the electrical domain. The mixing of the signals with an LO offset by an IF is referred to as heterodyne detection.

If no LO is used for demodulating the digital optical signals, then differential or self-homodyne reception may be utilized, classically termed an autocorrelation reception process or self-heterodyne detection.

Coherent communications were an important technique in the 1980s and the early 1990s, but research was interrupted by the advent of optical amplifiers in the late 1990s, which offer up to 20 dB gain without difficulty. Nowadays, however, coherent systems have once again become the focus of interest, owing to the availability of DSP and low-priced components, the partly relaxed receiver requirements at high data rates, and several advantages that CoD provides. The preservation of the temporal phase in CoD enables new methods for adaptive electronic compensation of CD. With regard to WDM systems, coherent receivers offer tunability and allow channel separation via steep electrical filtering. Furthermore, only the use of CoD permits convergence to the ultimate limits of spectral efficiency. To reach higher spectral efficiencies, multilevel modulation is required; for this, too, coherent systems are beneficial, because all the information of the optical field is available in the electrical domain. In this way, complex optical demodulation with interferometric detection (which has to be used in direct detection systems) can be avoided, and the complexity is transferred from the optical to the electrical domain. Several modulation formats based on the modulation of all four quadratures of the optical field were proposed in the early 1990s, describing the possible transmitter and receiver structures and calculating the theoretical BER performance. However, a more detailed and practical investigation of multilevel coherent optical systems for today's networks and data rates has so far been missing.

Currently, coherent reception has attracted significant interest for the following reasons: (i) The received signals of coherent optical receivers are in the electrical domain and proportional to those in the optical domain. This, in contrast to direct detection receivers, allows exact electrical equalization or exact PE of the optical signals. (ii) Using heterodyne receivers, DWDM channels can be separated in the electrical domain by electrical filters with a sharp roll-off from the passband to the cut-off band. Presently, the availability of ultrahigh-sampling-rate DSP allows filtering to be conducted in the DSP, where it can be changed with ease.

However, coherent receivers suffer some disadvantages: (i) they are polarization sensitive, which requires polarization tracking at the front end of the receiver; (ii) homodyne receivers require an OPLL, and heterodyne receivers an electrical PLL, both needing control and feedback circuitry, optical or electrical, which may be complicated; and (iii) for differential detection, compensation may be complicated owing to the differential nature of the reception.

In a later chapter, when some advanced modulation formats are presented for optically amplified transmission systems, the use of photonic components is extensively exploited to take advantage of the advanced technology of integrated optics and planar lightwave circuits. The modulation format depends on whether the amplitude, phase, or frequency of the carrier is manipulated, as mentioned in Chapter 2. In this chapter, the signals are coherently converted to the IF range in the electrical domain, together with the signal envelope. The down-converted carrier signals are detected and then recovered. Both binary-level and multilevel modulation schemes employing amplitude, phase, and frequency shift keying (FSK) are described in this chapter.

Thus, CoD can be distinguished by the difference between the central frequency of the optical channel and that of the LO. Three types can be classified as follows: (i) heterodyne, when the difference is higher than the 3 dB bandwidth of the baseband signal; (ii) homodyne, when the difference is nil; and (iii) intradyne, when the frequency difference falls within the baseband of the signal.
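This three-way classification can be captured in a short decision rule. The sketch below is illustrative only (the function name and the example frequencies are assumptions, not from the text); it treats the heterodyne threshold as the 3 dB baseband bandwidth, as defined above:

```python
def classify_detection(f_signal_hz, f_lo_hz, baseband_bw_hz):
    """Classify CoD by the carrier/LO frequency difference (Section 3.3)."""
    offset = abs(f_signal_hz - f_lo_hz)
    if offset == 0:
        return "homodyne"       # no frequency difference: baseband detection
    if offset > baseband_bw_hz:
        return "heterodyne"     # beat falls in a passband above the baseband
    return "intradyne"          # difference falls within the signal baseband

# Example: a 193.1 THz carrier with a 10 GHz baseband signal
print(classify_detection(193.1e12, 193.1e12 + 25e9, 10e9))  # heterodyne
```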

It is noted that to maximize the beating signals at the output of the photodetector, the polarizations of the LO and the signals must be aligned. In practice, this can be best implemented by the polarization diversity technique.

#### 3.3.1  Optical Heterodyne Detection

A typical schematic of a coherent optical reception subsystem is shown in Figure 3.3. Figure 3.4 shows more details of a heterodyne reception subsystem, in which the frequency of the channel carrier differs from that of the LO. The LO, whose frequency can be higher or lower than that of the carrier, is mixed with the information-bearing carrier, thus allowing down- or upconversion of the information signals to the IF range. The down-converted electrical carrier and signal envelope are received by the photodetector. The combined lightwave is converted by the PD into an electronic current signal, which is then filtered by an electrical bandpass filter (BPF) and demodulated by a demodulator. A low pass filter (LPF) is also used to remove the higher-order harmonics of the nonlinear photodetection process, the square-law detection. With an envelope detector, the process is asynchronous, hence the term asynchronous detection. If the down-converted carrier is recovered and then mixed with the IF signals, the detection is synchronous. Note that the detection is conducted in the IF range in the electrical domain, so the stability of the frequency spacing between the signal carrier and the LO must be controlled; the mixing of these carriers results in an IF carrier in the electrical domain prior to the mixing process or envelope detection that recovers the signals.

Figure 3.3   Typical arrangement of coherent optical communications systems. LD/LC is a very narrow linewidth laser diode as an LO without any phase locking to the signal optical carrier. PM coupler is polarization maintaining fiber coupled device, PC = polarization controller.

(Adapted from R. C. Alferness. IEEE J. Quantum. Elect., QE-17, 946–959, 1981; W. A. Stallard et al., IEEE J. Lightwave Technol., LT-4(7), 852–857, July 1986. With permission.)

Figure 3.4   Schematic diagram of optical heterodyne detection: (a) asynchronous and (b) synchronous, receiver structures. LPF = low pass filter, BPF = bandpass filter, PD = photodiode.

The CoD thus relies on the electric field component of the signal and the LO. The polarization alignment of these fields is critical for the optimum detection. The electric field of the optical signals and the LO can be expressed as

3.1()$E_s(t) = \sqrt{2 P_s(t)} \cos \left[ \omega_s t + \phi_s + \psi(t) \right]$
3.2()$E_{LO}(t) = \sqrt{2 P_{LO}} \cos \left[ \omega_{LO} t + \phi_{LO} \right]$

where Ps(t) and PLO are the instantaneous signal power and the average LO power, respectively; ωs and ωLO are the signal and LO angular frequencies; ϕs and ϕLO are the phases, including any phase noise, of the signal and the LO; and ψ(t) is the modulation phase. The modulation can be of the amplitude, with switching on and off of the optical power (amplitude shift keying, ASK), or of the phase or frequency, with discrete or continuous variation of the time-dependent phase term. For discrete phase it can be PSK, differential PSK (DPSK), or differential quadrature PSK (DQPSK); when the variation of the phase is continuous, we have FSK, with the rate of variation differing between the bit "1" and the bit "0."

Under an ideal alignment of the two fields, the photodetection current can be expressed by

3.3()$i(t) = \Re \left\{ P_s(t) + P_{LO} + 2 \sqrt{P_s(t) P_{LO}} \cos \left[ (\omega_s - \omega_{LO}) t + \phi_s - \phi_{LO} + \psi(t) \right] \right\}$

where the higher-frequency term (the sum) is eliminated by the photodetector frequency response, η is the quantum efficiency, q is the electronic charge, h is Planck's constant, and ν is the optical frequency.

Thus, the power of the LO dominates the shot-noise process and at the same time boosts the signal level, hence enhancing the signal-to-noise ratio (SNR). The oscillating term results from the beating between the LO and the signal inside the PD; its amplitude is proportional to the square root of the product of the LO and signal powers.
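The beating product of Equation 3.3 can be checked numerically. The sketch below (all powers, the IF, and the sampling rate are assumed illustrative values) builds the photocurrent after the sum-frequency term is removed and recovers the direct-detection term ℜ(Ps + PLO) and the beat amplitude 2ℜ√(PsPLO):

```python
import numpy as np

# Illustrative values (not from the text)
R = 1.0                   # responsivity (A/W)
Ps, Plo = 1e-6, 1e-3      # signal and LO average powers (W)
f_if = 2.5e9              # intermediate frequency |f_s - f_LO| (Hz)
fs = 50e9                 # sampling rate (Hz)
t = np.arange(0, 200e-9, 1 / fs)

# Photocurrent after square-law detection, sum-frequency term removed:
# i(t) = R[Ps + Plo + 2*sqrt(Ps*Plo)*cos(2*pi*f_if*t)]
i = R * (Ps + Plo + 2 * np.sqrt(Ps * Plo) * np.cos(2 * np.pi * f_if * t))

dc = i.mean()                        # ~ R*(Ps + Plo): direct-detection terms
beat_amp = (i.max() - i.min()) / 2   # ~ 2*R*sqrt(Ps*Plo): the useful beat term
print(dc, beat_amp)
```

Note how the 1 μW signal produces a beat amplitude of about 63 μA, far larger than the 1 μA it would yield under direct detection: the strong LO acts as a coherent gain on the weak signal.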

The electronic signal power S and shot noise Ns can be expressed as

3.4()$S = 2 \Re^2 P_s P_{LO} \quad N_s = 2 q \Re (P_s + P_{LO}) B \quad \Re = \frac{\eta q}{h \nu} = \text{responsivity}$

where B is the 3 dB bandwidth of the electronic receiver. Thus, the optical signal-to-noise ratio (OSNR) can be written as

3.5()$\mathrm{OSNR} = \frac{2 \Re^2 P_s P_{LO}}{2 q \Re (P_s + P_{LO}) B + N_{eq}}$

where Neq is the total equivalent electronic noise power at the input of the electronic preamplifier of the receiver. From this equation we can observe that if the power of the LO is increased so that the shot noise dominates over the equivalent electronic noise, thereby increasing the SNR, the sensitivity of the coherent receiver becomes limited only by the quantum noise inherent in the photodetection process. Under this quantum limit, the OSNRQL is given by

3.6()$\mathrm{OSNR}_{QL} = \frac{\Re P_s}{q B}$
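Equations 3.5 and 3.6 can be checked numerically. In the sketch below, the signal power, bandwidth, quantum efficiency, and electronic noise are all assumed illustrative values; sweeping the LO power shows the OSNR of Equation 3.5 approaching the quantum limit of Equation 3.6 as the LO shot noise comes to dominate:

```python
q = 1.602e-19      # electronic charge (C)
h = 6.626e-34      # Planck's constant (J s)
nu = 193.1e12      # optical frequency (Hz)
eta = 0.8          # quantum efficiency (assumed)
B = 10e9           # receiver 3 dB bandwidth (Hz)
Ps = 1e-6          # received signal power (W), assumed
N_eq = 1e-14       # equivalent electronic noise power (A^2), assumed

R = eta * q / (h * nu)   # responsivity (Eq. 3.4)

def osnr(P_lo):
    """Eq. 3.5: beat-signal power over shot plus electronic noise."""
    return 2 * R**2 * Ps * P_lo / (2 * q * R * (Ps + P_lo) * B + N_eq)

quantum_limit = R * Ps / (q * B)    # Eq. 3.6

for P_lo in (1e-6, 1e-4, 1e-2):     # LO power swept over 40 dB
    print(f"P_LO = {P_lo:.0e} W -> OSNR/limit = {osnr(P_lo)/quantum_limit:.3f}")
```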

Under the ASK modulation scheme, the demodulator of Figure 3.4 is an envelope detector (in lieu of the demodulator) followed by a decision circuit: the eye diagram is obtained, and a sampling instant is established with a clock recovery circuit. Synchronous detection, by contrast, requires locking between the frequencies of the carrier and the LO; the LO frequency is tuned to that of the carrier by tracking the frequency component of the beating signal. The amplitude-demodulated envelope can then be expressed as

3.7()$r(t) = 2 \Re \sqrt{P_s(t) P_{LO}} \cos \left[ \omega_{IF} t + \psi(t) \right] + n(t)$

The IF ωIF is the difference between the frequencies of the LO and the signal carrier, and nx and ny are the in-phase and quadrature noise components, which are random variables:

3.8()$n(t) = n_x(t) \cos(\omega_{IF} t) - n_y(t) \sin(\omega_{IF} t)$

#### 3.3.1.1.1  Envelope Detection

The noise terms can be assumed to follow a Gaussian probability distribution and to be independent of each other, with zero mean and variance σ²; the probability density function (PDF) is thus given as

3.9()$p(n_x, n_y) = \frac{1}{2 \pi \sigma^2} e^{-(n_x^2 + n_y^2)/2\sigma^2}$

With respect to the phase and amplitude, this equation can be written as [3]

3.10()$p(\rho, \theta) = \frac{\rho}{2 \pi \sigma^2} e^{-(\rho^2 + A^2 - 2 \rho A \cos\theta)/2\sigma^2}$

where

3.11()$\rho = \sqrt{\left[ 2 \Re \sqrt{P_s(t) P_{LO}} + n_x(t) \right]^2 + n_y^2(t)} \quad A = 2 \Re \sqrt{P_s(t) P_{LO}}$

The PDF of the amplitude can be obtained by integrating the phase-amplitude PDF over the phase range 0 to 2π and is given as

3.12()$p(\rho) = \frac{\rho}{\sigma^2} e^{-(\rho^2 + A^2)/2\sigma^2} I_0\!\left( \frac{A \rho}{\sigma^2} \right)$

where I0 is the modified Bessel function. If a decision level is set to determine the “1” and “0” level, then the probability of error and the BER can be obtained assuming an equal probability of error between the “1s” and “0s”:

3.13()$\mathrm{BER} = \frac{1}{2} P_{e1} + \frac{1}{2} P_{e0} = \frac{1}{2} \left[ 1 - Q\left( \sqrt{2\delta}, d \right) + e^{-d^2/2} \right]$

where Q is the Marcum Q-function, d is the normalized decision level, and δ is given by

3.14()$\delta = \frac{A^2}{2 \sigma^2} = \frac{2 \Re^2 P_s P_{LO}}{2 q \Re (P_s + P_{LO}) B + i_{N_{eq}}^2}$

When the power of the LO is much larger than that of the signal and the equivalent noise current power, this SNR becomes

3.15()$\delta = \frac{\Re P_s}{q B}$
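The e^{-d²/2} term in Equation 3.13 is the probability that the noise-only (bit "0") envelope exceeds the decision level d. A quick Monte Carlo check (the normalized decision level and sample count below are assumptions) confirms this Rayleigh tail:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0          # noise standard deviation (normalized)
d = 2.0              # decision level in units of sigma (assumed)
n = 1_000_000

# In-phase and quadrature noise samples (Eq. 3.9)
nx = rng.normal(0.0, sigma, n)
ny = rng.normal(0.0, sigma, n)
rho = np.hypot(nx, ny)              # envelope with no signal present

p_false_one = np.mean(rho > d * sigma)
print(p_false_one, np.exp(-d**2 / 2))  # simulated vs e^{-d^2/2} of Eq. 3.13
```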

The physical representation of the detected current and the noise currents due to quantum shot noise and the noise equivalent of the electronic preamplifier is shown in Figure 3.5. The signal current is general and can be derived from the output of the detection scheme, whether from a single photodetector or from a back-to-back (B2B) pair of photodetectors in a balanced receiver, which detects the phase difference of DPSK, DQPSK, or continuous-phase frequency-shift keying (CPFSK) signals and converts it to amplitude.

Figure 3.5   Equivalent current model at the input of the optical receiver, average signal current and equivalent noise current of the electronic preamplifier as seen from its input port.

The BER is optimized by setting its derivative with respect to the decision level d to zero; an approximate value of the optimum decision level is

3.16()$d_{opt} \cong \sqrt{2 + \frac{\delta}{2}} \Rightarrow \mathrm{BER}_{ASK\text{-}e} \cong \frac{1}{2} e^{-\delta/4}$
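The approximate BER of Equation 3.16 is easy to invert for the SNR required at a target error rate; the sketch below does so in closed form (the 10⁻⁹ target is an assumed example):

```python
import math

def ber_ask_envelope(delta):
    """Approximate envelope-detected ASK BER (Eq. 3.16)."""
    return 0.5 * math.exp(-delta / 4)

# Invert for a target BER: delta = -4 * ln(2 * BER)
target = 1e-9
delta_req = -4 * math.log(2 * target)
print(delta_req, 10 * math.log10(delta_req))  # ~80.1, i.e. about 19 dB SNR
```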

#### 3.3.1.1.2  Synchronous Detection

ASK can be detected using synchronous detection*

Synchronous detection is implemented by mixing the signals and a strong local oscillator in association with the phase locking of the local oscillator to that of the carrier.

and the BER is given by
3.17()$\mathrm{BER}_{ASK\text{-}S} \cong \frac{1}{2} \mathrm{erfc}\left( \frac{\sqrt{\delta}}{2} \right)$

#### 3.3.1.2  PSK Coherent System

Under the PSK modulation format, the detection is similar to the heterodyne detection of Figure 3.4 (see Figure 3.6), but after the BPF an electrical mixer is used to track the phase of the detected signal. The received signal is given by

3.18()$r(t) = 2 \Re \sqrt{P_s P_{LO}} \cos \left[ \omega_{IF} t + \varphi(t) \right] + n(t)$

Figure 3.6   Schematic diagram of optical heterodyne detection for PSK format.

The information is contained in the time-dependent phase term φ(t).

When the phase and frequency of the voltage-controlled oscillator (VCO) are matched with those of the signal carrier, then the received electrical signal can be simplified as

3.19()$r(t) = 2 \Re \sqrt{P_s P_{LO}}\, a_n(t) + n_x \quad a_n(t) = \pm 1$

Under the Gaussian statistical assumption, the probability density of the received signal for a "1" is given by

3.20()$p(r) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-(r-u)^2/2\sigma^2}$

Furthermore, the probabilities of a "0" and a "1" are assumed to be equal. We can obtain the BER as the total probability of error over the received "1" and "0":

3.21()$\mathrm{BER}_{PSK} = \frac{1}{2} \mathrm{erfc}\left( \sqrt{\delta} \right)$
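Equation 3.21 can be evaluated directly with the standard-library complementary error function; the SNR values swept below are illustrative:

```python
import math

def ber_psk_sync(delta):
    """Synchronous PSK BER (Eq. 3.21)."""
    return 0.5 * math.erfc(math.sqrt(delta))

for delta_db in (10, 11, 12, 13):     # SNR in dB, assumed example points
    delta = 10 ** (delta_db / 10)
    print(f"{delta_db} dB -> BER = {ber_psk_sync(delta):.2e}")
```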

#### 3.3.1.3  Differential Detection

As observed for synchronous detection, a carrier recovery circuit is required, usually implemented using a PLL, which complicates the overall receiver structure. It is possible to detect the signal by a self-homodyne process in which the carrier of one bit period beats with that of the next consecutive bit; this is called differential detection. The detection process can be modified as shown in Figure 3.7, in which the phase of the IF carrier of one bit is compared with that of the next bit, and the difference is recovered to represent the bit "1" or "0." This requires differential coding at the transmitter and an additional phase comparator for the recovery process. In later chapters on DPSK, the differential decoding is implemented in the photonic domain via a photonic phase comparator in the form of a Mach-Zehnder delay interferometer (MZDI) with a thermal section for tuning the delay time of the optical delay line. The BER can be expressed as

3.22()$\mathrm{BER}_{DPSK\text{-}e} \cong \frac{1}{2} e^{-\delta}$
3.23()$r(t) = 2 \Re \sqrt{P_s P_{LO}}\, A_k s(t) + n(t)$

Figure 3.7   Schematic diagram of optical heterodyne and differential detection for PSK format.

where s(t) is the modulating waveform and Ak represents the bit “1” or “0.” This is equivalent to the baseband signal, and the ultimate limit is the BER of the baseband signal.

The noise is dominated by the quantum shot noise of the LO, whose mean-square noise current is given by

3.24()$i_{N\text{-}sh}^2 = 2 q \Re (P_s + P_{LO}) \int_0^\infty |H(j\omega)|^2 \, d\omega$

where H(jω) is the transfer function of the receiver system, normally the transimpedance of the electronic preamplifier together with a matched filter. As the power of the LO is much larger than that of the signal, integrating over the 3 dB bandwidth of the transfer function, this current can be approximated by

3.25()$i_{N\text{-}sh}^2 \simeq 2 q \Re P_{LO} B$

Hence, the SNR (power) is given by

3.26()$\mathrm{SNR} \equiv \delta \simeq \frac{2 \Re P_s}{q B}$

The BER is the same as that of a synchronous detection and is given by

3.27()$\mathrm{BER}_{homodyne} \cong \frac{1}{2} \mathrm{erfc}\left( \sqrt{\delta} \right)$

The sensitivity of the homodyne process is at least 3 dB better than that of the heterodyne process, and the detection bandwidth is half that of its counterpart, owing to the double-sideband nature of heterodyne detection.
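The BER expressions collected so far allow a direct sensitivity comparison. The sketch below numerically inverts each formula for a 10⁻⁹ target (the target, search bracket, and iteration count are assumptions); it reproduces the familiar ordering, with synchronous PSK the most sensitive and envelope-detected ASK the least:

```python
import math

def delta_required(ber_func, target=1e-9, lo=1.0, hi=200.0):
    """Bisect for the SNR (delta) that yields the target BER."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if ber_func(mid) > target:
            lo = mid          # BER still too high: need more SNR
        else:
            hi = mid
    return (lo + hi) / 2

ber = {
    "ASK envelope": lambda d: 0.5 * math.exp(-d / 4),          # Eq. 3.16
    "DPSK":         lambda d: 0.5 * math.exp(-d),              # Eq. 3.22
    "PSK sync":     lambda d: 0.5 * math.erfc(math.sqrt(d)),   # Eq. 3.21
}

for name, f in ber.items():
    d = delta_required(f)
    print(f"{name:12s} delta = {d:5.1f} ({10 * math.log10(d):.1f} dB)")
```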

#### 3.3.1.4  FSK Coherent System

The nature of FSK is based on the two frequency components that determine the bits "1" and "0." There are a number of formats related to FSK, depending on whether the change between the frequencies representing the bits is continuous or noncontinuous: the CPFSK or FSK modulation formats, respectively. For noncontinuous FSK, the detection is usually performed by a dual-frequency discrimination structure, as shown in Figure 3.8, in which two narrow-band filters are used to extract the signals. For CPFSK, both the frequency discriminator and the balanced receiver for PSK detection can be used. Frequency discrimination is indeed preferred over the balanced receiving structure because it eliminates the phase contribution of the LO or of optical amplifiers, which may be used as optical preamplifiers.

Figure 3.8   Schematic diagram of optical homodyne detection of FSK format.

When the frequency difference between the "1" and the "0" equals a quarter of the bit rate, the FSK can be termed the minimum-shift keying (MSK) modulation scheme. At this frequency spacing, the phase is continuous between these states.

#### 3.3.2  Optical Homodyne Detection

Optical homodyne detection matches the phase of the transmitted signal to that of the LO. A schematic of the optical receiver is shown in Figure 3.9. The field of the incoming optical signals is mixed with that of the LO, whose frequency and phase are locked to those of the signal carrier waves via a PLL. The resultant electrical signal is then filtered and passed to a decision circuit.

Figure 3.9   General structure of an optical homodyne detection system. FC = fiber coupler, LPF = low pass filter, PLL = phase lock loop.

#### 3.3.2.1  Detection and OPLL

Optical homodyne detection requires matching the frequency and phase of the LO to those of the signal carrier. This type of detection gives, in principle, a very high sensitivity of 9 photons/bit. Implementation of such a system normally requires an OPLL, whose structure, from a recent development [4], is shown in Figure 3.10. The LO frequency is locked to the carrier frequency of the signals by shifting it to the modulated sideband component via an optical modulator. A single-sideband optical modulator is preferred, although a double-sideband modulator may also be used. This modulator is excited by the output signal of a voltage-controlled oscillator (VCO) whose frequency is determined by the voltage level at the output of an electronic BPF, conditioned to meet the voltage level required for driving the electrode of the modulator. The frequency of the LO is normally tuned to the region where the frequency difference with respect to the signal carrier falls within the passband of the electronic filter. When the frequency difference is zero, there is no voltage at the output of the filter and the OPLL has reached the final locking stage. The bandwidth of the optical modulator is important, as it extends the locking range between the two optical carriers.

Figure 3.10   Schematic diagram of optical homodyne detection—electrical line (dashed) and optical line (continuous and solid) using an OPLL.

Any frequency offset between the LO and the carrier is detected, and noise is filtered by the LPF. This voltage level is then fed to a VCO to generate a sinusoidal wave, which in turn drives an intensity modulator modulating the lightwaves of the LO. The output spectrum of the modulator exhibits two sidebands and the LO lightwave; one of these components is then locked to the carrier. A closed loop ensures stable locking. If the intensity modulator is biased at the minimum-transmission point and the output of the VCO is adjusted to 2Vπ, with the driving signals in π/2 phase shift with each other, then both the carrier and one sideband are suppressed. This eases the locking of the closed loop.
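The pull-in behavior described above can be illustrated with a small discrete-time simulation of a generic type-II (proportional plus integral) phase-locked loop. This is a hedged sketch, not the OPLL of Figure 3.10: the loop gains and the normalized frequency offset below are illustrative choices, not values from the text.

```python
import math

def pll_lock(freq_offset, n_steps=5000, kp=0.1, ki=0.01):
    """Simulate a discrete-time type-II PLL pulling its VCO onto an
    incoming carrier with a fixed frequency offset (rad/sample).
    Returns the final VCO frequency estimate (rad/sample)."""
    theta_in = 0.0   # incoming carrier phase
    theta_vco = 0.0  # VCO phase
    v = 0.0          # integrator state = VCO frequency estimate
    for _ in range(n_steps):
        theta_in += freq_offset
        error = math.sin(theta_in - theta_vco)  # sinusoidal phase detector
        v += ki * error                         # integral branch tracks the frequency
        theta_vco += v + kp * error             # proportional branch + VCO phase update
    return v

# A type-II loop drives the steady-state phase error to zero, so the
# VCO frequency converges to the applied offset.
residual = abs(pll_lock(0.0628) - 0.0628)
```

The integral branch is what removes the steady-state error for a fixed frequency offset, mirroring the text: when the frequency difference is zero, the filter output (here the phase-detector error) vanishes and the loop is locked.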

Under a perfect phase matching, the received signal is given by

3.28()$i_s(t) = \Re\left[p_s + P_{LO} + 2\sqrt{p_s P_{LO}}\, a_k s(t)\right]$

where $a_k$ takes the value ±1 and $s(t)$ is the modulating waveform. This is a baseband signal, and thus the error rate is the same as that of a baseband system.

The shot-noise power induced by the LO and the signal can be expressed as

3.29()$\overline{i_{NS}^2} = 2q\Re\left(p_s + P_{LO}\right)\int_0^{\infty}\left|H(j2\pi f)\right|^2 df$

where $|H(j2\pi f)|$ is the transfer function of the receiver which, under matched filtering, can be written as

3.30()$\left|H(j2\pi f)\right| = \frac{\sin(\pi f T)}{\pi f T}$

where T is the bit period. The noise power then becomes

3.31()$\overline{i_{NS}^2} = q\Re\left(p_s + P_{LO}\right)\frac{1}{T} \simeq \frac{q\Re P_{LO}}{T} \quad \text{when } p_s \ll P_{LO}$

Thus, the SNR is

3.32()$\mathrm{SNR} = \frac{2\Re p_s P_{LO}}{q\Re P_{LO}/T} = \frac{2 p_s T}{q}$

and the BER is

3.33()$P_E = \mathrm{BER} = \frac{1}{2}\,\mathrm{erfc}\left(\sqrt{\mathrm{SNR}}\right)$
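Equations 3.32 and 3.33 can be checked numerically. Writing ℜ = ηq/hν, the SNR is proportional to the number of received photons per bit, SNR = 2ηN_p; the sketch below, assuming unity quantum efficiency (η = 1, an assumption made here for illustration), recovers the 9 photons/bit homodyne sensitivity quoted earlier at a BER close to 10⁻⁹.

```python
import math

def homodyne_ber(n_photons, eta=1.0):
    """BER of homodyne BPSK detection: Eq. 3.33 with SNR = 2*eta*N_p,
    the photons-per-bit form of Eq. 3.32."""
    snr = 2.0 * eta * n_photons
    return 0.5 * math.erfc(math.sqrt(snr))

# Quantum-limit sensitivity: about 9 photons/bit for BER near 1e-9
ber_9 = homodyne_ber(9)
```

This is only a restatement of the closed-form result; its value is as a quick consistency check between the SNR expression and the quoted receiver sensitivity.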

#### 3.3.2.2  Quantum Limit Detection

For homodyne detection, a super quantum limit can be achieved. In this case, the LO is used in a very special way that matches the incoming signal field in polarization, amplitude, and frequency and is assumed to be phase locked to the signal. Assuming that the phase signal is perfectly modulated such that it acts in-phase or counter-phase with the LO, the homodyne detection would give a normalized signal current of

3.34()$i_{sC} = \frac{1}{2T}\left[\mp\sqrt{2 n_p} + \sqrt{2 n_{LO}}\right]^2 \quad \text{for } 0 \le t \le T$

Assuming further that $n_p = n_{LO}$, where $n_{LO}$ is the number of photons of the LO used for the generation of the detected signals, the current becomes $4n_p/T$ for the detection of a “1” and zero for a “0” symbol.

#### 3.3.2.3.1  Heterodyne Phase Detection

When the linewidth of the light sources is significant, the IF deviates owing to phase fluctuations; the PDF of this deviation δω of the IF is set by the linewidth. For a signal power of ps, the total probability of error is given as

3.35()$P_E = \int_{-\infty}^{\infty} P_c(p_s, \delta\omega)\, p_{IF}(\delta\omega)\, d(\delta\omega)$

The PDF of the IF under a frequency deviation can be written as [5]

3.36()$p_{IF}(\delta\omega) = \frac{1}{\sqrt{4\pi^2 \Delta\nu_B / T}}\, e^{-\delta\omega^2 T/(4\pi \Delta\nu_B)}$

where Δν_B is the full width at half maximum (FWHM) linewidth of the power spectral density and T is the bit period.

#### 3.3.2.3.2.1  DPSK Systems

DPSK detection requires an MZDI and a balanced receiver, either in the optical domain or in the electrical domain. If in the electrical domain, the beating in the PD between the incoming signal and the LO gives the beating electronic current, which is then split; one branch is delayed by one bit period, and the two branches are then summed. The heterodyne signal current can be expressed as [6]

3.37()

The phase $ϕ s ( t )$ is expressed by

3.38()$\phi_s(t) = \varphi_s(t) + \{\varphi_N(t) - \varphi_N(t+T)\} - \{\varphi_{pS}(t) - \varphi_{pS}(t+T)\} - \{\varphi_{pL}(t) - \varphi_{pL}(t+T)\}$

The first term is the phase of the data and takes the value 0 or π. The second term represents the phase noise due to the shot noise of the generated current, and the third and fourth terms are the quantum phase noise due to the signal and the LO, respectively. The probability of error is given by

3.39()$P_E = \int_{-\pi/2}^{\pi/2} \int_{-\infty}^{\infty} p_n(\phi_1 - \phi_2)\, p_q(\phi_1)\, d\phi_1\, d\phi_2$

where pn(.) is the PDF of the phase noise due to the shot noise and pq(.) is for the quantum phase noise generated from the transmitter and the LO [7].

The probability of error can be written as

3.40()

where Γ(.) is the gamma function and $I_n(.)$ is the modified Bessel function of the first kind. The PDF of the quantum phase noise can be given as [8]

3.41()$p_q(\phi_1) = \frac{1}{\sqrt{2\pi D\tau}}\, e^{-\phi_1^2 / 2D\tau}$

where D is the phase diffusion constant, and the total linewidth

3.42()$Δ υ = Δ υ R + Δ υ L = D 2 π$

is the sum of the transmitter and the LO FWHM linewidths. Substituting Equations 3.40 and 3.41 into Equation 3.39, we obtain

3.43()$P_E = \frac{1}{2} - \frac{\rho e^{-\rho}}{2} \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1}\, e^{-(2n+1)^2 \pi \Delta\nu T} \left[ I_n\!\left(\frac{\rho}{2}\right) + I_{n+1}\!\left(\frac{\rho}{2}\right) \right]^2$

This equation gives the probability of error as a function of the received power. The probability of error, plotted against the receiver sensitivity and the linewidth–bit rate product (the linewidth relative to the bit rate), is shown in Figure 3.11 for the DPSK modulation format at a 140 Mbps bit rate, with the laser linewidth varied from 0 to 2 MHz.

Figure 3.11   (a) Probability of error versus receiver sensitivity with linewidth as a parameter in MHz. (b) Degradation of optical receiver sensitivity at BER = 10–9 for DPSK systems as a function of the linewidth and bit period—bit rate = 140 Mb/s.

(Extracted from Nicholson, G., Elect. Lett., 20/24, 1005–1007, 1984. With permission.)
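The series of Equation 3.43 can be evaluated numerically. Below is a hedged stdlib sketch: the Bessel orders are taken as I_n and I_{n+1}, the form that reproduces the ideal DPSK limit of e^(−ρ)/2 at zero linewidth, and the truncation depths are choices made here, not values from the text. It reproduces the trend of Figure 3.11: the BER degrades as the linewidth–bit period product ΔνT grows.

```python
import math

def bessel_iv(v, x, k_max=60):
    """Modified Bessel function of the first kind I_v(x) by power series
    (integer order v >= 0 is all this sketch needs)."""
    return sum((x / 2.0) ** (2 * k + v) / (math.factorial(k) * math.gamma(k + v + 1))
               for k in range(k_max))

def dpsk_ber(rho, dnu_T, n_terms=40):
    """Eq. 3.43: DPSK error probability for SNR rho and linewidth-bit
    period product dnu_T (Delta-nu times T)."""
    total = 0.0
    for n in range(n_terms):
        b = bessel_iv(n, rho / 2.0) + bessel_iv(n + 1, rho / 2.0)
        total += ((-1) ** n / (2 * n + 1)) \
            * math.exp(-(2 * n + 1) ** 2 * math.pi * dnu_T) * b * b
    return 0.5 - 0.5 * rho * math.exp(-rho) * total

# Zero linewidth recovers the ideal DPSK result, roughly exp(-rho)/2;
# a nonzero linewidth degrades the BER.
```

The exponential factor exp(−(2n+1)²πΔνT) is the only place the laser linewidth enters, which is why the penalty in Figure 3.11 is governed by the product of linewidth and bit period.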

#### 3.3.2.3.3  Differential Phase Detection Under Self-Coherence

Recently, the laser linewidth requirement for DQPSK modulation with differential detection has also been studied. No LO is used, that is, the detection is self-coherent. It has been shown that a transmitter laser linewidth of up to 3 MHz does not significantly influence the probability of error, as shown in Figure 3.12 [8]. Figure 3.13 shows the maximum linewidth of a laser source in a 10 GSymbols/s system. The loose bound neglects the linewidth if its impact is to double the BER; the tighter bound neglects it only if the impact is a 0.1-dB SNR penalty.

Figure 3.12   Analytical approximation (solid line) and numerical evaluation (triangles) of the BER for the cases of zero linewidth and that required to double the BER. The dashed line is the linear fit for zero linewidth. Bit rate 10 Gb/s per channel.

(Extracted from S. Savory and T. Hadjifotiou, IEEE Photonic Technol. Lett., 16, 930–932, March 2004. With permission.)

Figure 3.13   Criteria for neglecting linewidth in a 10 GSymbols/s system. The loose bound is to neglect linewidth if the impact is to double the BER with the tighter bound being to neglect linewidth if the impact is a 0.1 dB SNR penalty. Bit rate 10 GSymbols/s.

(Extracted from Y. Yamamoto and T. Kimura, IEEE J. Quantum Electron., QE-17, 919–934, 1981. With permission.)

#### 3.3.2.3.4  Differential Phase Coherent Detection of Continuous Phase FSK Modulation Format

The probability of error of CPFSK can be derived by taking into consideration the delay line of the differential detector, the frequency deviation, and the phase noise [10]. Similar to Figure 3.8, the differential detector configuration is shown in Figure 3.14a, and the frequency-to-voltage conversion relationship in Figure 3.14b. If heterodyne detection is employed, a BPF is used to bring the signals back to the electrical domain.

Figure 3.14   (a) Configuration of a CPFSK differential detection and (b) frequency to voltage conversion relationship of FSK differential detection.

(Extracted from Iwashita, K. and Masumoto, T., IEEE J. Lightwave Technol., LT-5/4, 452–462, 1987, Figure 1. With permission.)

The detected signal phase at the shot-noise limit at the output of the LPF can be expressed as

3.44()

where τ is the differential detection delay time, Δω is the deviation of the angular frequency of the carrier for the “1” or “0” symbol, φ(t) is the phase noise due to the shot noise, n(t) is the phase noise due to the transmitter and LO quantum shot noise, and the binary data symbol takes the values ±1.

Thus, by integrating the detected phase from $-\Delta\omega\tau/2$ to $\pi - \Delta\omega\tau/2$, we obtain the probability of error as

3.45()$P_E = \int_{-\Delta\omega\tau/2}^{\pi - \Delta\omega\tau/2} \int_{-\infty}^{\infty} p_n(\phi_1 - \phi_2)\, p_q(\phi_1)\, d\phi_1\, d\phi_2$

Similar to the case of DPSK system, substituting Equations 3.40 and 3.41 into Equation 3.45, we obtain

3.46()

where ωm is the deviation of the angular frequency, with m the modulation index, and T0 is the pulse period or bit period. The modulation index β is defined as the ratio of the actual frequency deviation to the maximum frequency deviation. Figure 3.15 shows the degradation of the power penalty required to achieve the same BER as a function of the linewidth factor Δυτ and the modulation index β.

Figure 3.15   (a) Dependence of receiver power penalty at a BER of 10–9 on modulation index β (ratio between frequency deviation and maximum frequency spacing between f1 and f2). (b) Receiver power penalty at a BER of 10–9 as a function of the product of the beat bandwidth and the bit delay time—effects excluding LD phase noise.

(Extracted from Iwashita, K. and Masumoto, T., IEEE J. Lightwave Technol., LT-5/4, 452–462, 1987, Figures 3 and 4. With permission.)

Optical phase diversity receivers combine the advantage of homodyne reception, the minimum signal processing bandwidth, with that of heterodyne reception, which requires no optical phase locking. The term diversity is well known in radio transmission links, where it describes transmission over more than one path. In optical receivers, the different paths are the polarization and phase paths. In intradyne detection, the frequency difference, the IF or LOFO (LO frequency offset), between the LO and the central carrier is nonzero and lies within the bandwidth of the baseband signal, as illustrated in Figure 3.16 [11]. Naturally, the control and locking of the carrier and the LO cannot be exact, sometimes owing to jittering of the source. Most of the time, the laser frequency is stabilized by oscillating the reflection mirror, and hence the central frequency varies by a few hundred kHz. Thus, intradyne CoD is more realistic. Furthermore, the DSP in a modern coherent reception system can extract this difference without much difficulty in the digital domain [12]. Obviously, heterodyne detection requires electronic devices operating over a large frequency range, whereas homodyne and intradyne reception require simpler electronics. Either differential or nondifferential formats can be used in DSP-based coherent reception. Differential decoding gains an advantage when there are cycle slips due to walk-off of the pulse sequences over very long noncompensated fiber lines.
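The digital extraction of the LOFO mentioned above is commonly done with a fourth-power method for QPSK-type constellations: raising the samples to the fourth power strips the modulation and leaves a spectral tone at four times the offset. A minimal stdlib sketch follows; the block length, grid resolution, and noiseless channel are illustrative assumptions, and real receivers would use an FFT rather than this brute-force search.

```python
import cmath
import math
import random

def qpsk_block(n, seed=7):
    """Random QPSK symbols on the unit circle."""
    rng = random.Random(seed)
    return [cmath.exp(1j * (math.pi / 4 + rng.randrange(4) * math.pi / 2))
            for _ in range(n)]

def estimate_lofo(rx, f_max=0.1, n_grid=801):
    """Estimate a carrier frequency offset (cycles/sample, |f| < f_max)
    from QPSK samples by locating the spectral peak of rx**4."""
    v = [z ** 4 for z in rx]  # modulation removed: a tone at 4*f_offset remains
    best_f, best_p = 0.0, -1.0
    for i in range(n_grid):
        f = -f_max + 2 * f_max * i / (n_grid - 1)
        # correlate against a trial tone at 4*f
        c = sum(z * cmath.exp(-8j * math.pi * f * k) for k, z in enumerate(v))
        if abs(c) > best_p:
            best_f, best_p = f, abs(c)
    return best_f

# Apply a known normalized offset and recover it from the samples alone
f0 = 0.017
rx = [s * cmath.exp(2j * math.pi * f0 * k) for k, s in enumerate(qpsk_block(256))]
```

The factor-of-four ambiguity inherent in the method is why the search range must be restricted to a fraction of the symbol rate, consistent with intradyne operation where the LOFO stays within the signal bandwidth.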

Figure 3.16   Spectrum of CoD (a) homodyne, (b) intradyne, and (c) heterodyne

The diversity in phase and polarization can be achieved by using a π/2 hybrid coupler that splits the polarizations of the LO and of the received channels and mixes them with a π/2 optical phase shift; the mixed signals are then detected by balanced photodetectors. This diversity detection is described in the next few sections (see also Figure 3.21).

#### 3.4  Self-Coherent Detection and Electronic DSP

The coherent techniques described above offer significant improvement but face a setback due to the limited availability of a stable LO and of an OPLL for locking the frequency of the LO to that of the signal carrier.

DSP has been widely used in wireless communications and plays a key role in the implementation of DSP-based coherent optical communication systems. DSP techniques have been applied to coherent optical communication systems to overcome the difficulties of the OPLL, and also to improve the performance of transmission systems in the presence of fiber-degrading effects, including CD, PMD, and fiber nonlinearities.

Coherent optical receivers have the following advantages: (1) the shot-noise-limited receiver sensitivity can be achieved with a sufficient LO power; (2) closely spaced WDM channels can be separated with electrical filters having sharp roll-off characteristics; and (3) the ability of phase detection can improve the receiver sensitivity compared with the IMDD system [13]. In addition, any kind of multilevel phase modulation format can be introduced by using the coherent receiver. While the spectral efficiency of binary modulation formats is limited to 1 bit/s/Hz/polarization (which is called the Nyquist limit), multilevel modulation formats with N bits of information per symbol can achieve a spectral efficiency of up to N bit/s/Hz/polarization. Recent research has focused on M-ary PSK and even QAM with CoD, which can increase the spectral efficiency by a factor of log2M [14–16]. Moreover, for the same bit rate, because the symbol rate is reduced, the system can have a higher tolerance to CD and PMD.

The maximum-likelihood (ML) carrier phase estimator derived in [23] can be used to approximate ideal synchronous CoD in optical PSK systems. The ML phase estimator requires only linear computations, and thus it is feasible for online processing in real systems. It can be shown that the ML estimation receiver outperforms the Mth-power block phase estimator and conventional differential detection, especially when the nonlinear phase noise is dominant, thus significantly improving the receiver sensitivity and the tolerance to nonlinear phase noise. The ML phase estimation algorithm is expected to improve the performance of coherent optical communication systems using different M-ary PSK and QAM formats. The improvement by DSP at the receiver end can be significant for transmission systems in the presence of fiber-degrading effects, including CD, PMD, and nonlinearities, for both single-channel and DWDM systems.
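The linearity that makes the ML estimator attractive for online processing can be seen in its decision-aided form, where the block phase estimate reduces to the argument of a correlation between received samples and decided symbols. A hedged stdlib sketch follows; the block length, the noise level, and the use of known transmitted symbols in place of decisions are assumptions made here for illustration.

```python
import cmath
import math
import random

def ml_phase_estimate(received, decided):
    """Decision-aided ML carrier-phase estimate over a block: the argument
    of the correlation between received samples and decided symbols."""
    corr = sum(r * d.conjugate() for r, d in zip(received, decided))
    return cmath.phase(corr)

# QPSK block rotated by a 0.3 rad carrier phase, with additive Gaussian noise
rng = random.Random(3)
tx = [cmath.exp(1j * (math.pi / 4 + rng.randrange(4) * math.pi / 2))
      for _ in range(500)]
rx = [s * cmath.exp(1j * 0.3) + complex(rng.gauss(0, 0.05), rng.gauss(0, 0.05))
      for s in tx]
theta = ml_phase_estimate(rx, tx)  # close to the applied 0.3 rad
```

Only multiplications and one summation per block are required, which is the property the text highlights for real-time implementation.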

#### 3.5.1  Introduction

The electronic amplifier as a preamplification stage of an optical receiver plays a major role in the detection of optical signals, so that the optimum SNR, and therefore the OSNR, can be derived based on the photodetector responsivity. Under CoD, the amplifier noise must be much smaller than the quantum shot noise contributed by the high power level of the LO, which is normally about 10 dB above the signal average power.

Thus, this section gives an introduction to electronic amplifiers for wideband signals applicable to ultrahigh-speed, high-gain, and low-noise transimpedance amplifiers (TIAs). We concentrate on differential input TIAs, but address the detailed design of a single input, single output TIA with a noise suppression technique in Section 3.7, with the design strategy for achieving stability in the feedback amplifier as well as low noise and wide bandwidth. We define the electronic noise of the preamplifier stage as the total equivalent input noise spectral density; that is, all the noise sources (current and voltage sources) of all elements of the amplifier are referred to the input port of the amplifier, an equivalent current source is found, and from it the current density is derived. Once this current density is found, the total equivalent noise current at the input can be found when the overall bandwidth of the receiver is determined. When this current is known, and with the average signal power, we can obtain without difficulty the SNR at the input stage of the optical receiver, and then the OSNR. On the other hand, if the OSNR required at the receiver is determined for a specific modulation format, then with the assumed optical power of the signal available at the front of the optical receiver and the responsivity of the photodetector, we can determine the maximum electronic noise spectral density allowable for the preamplification stage, and hence design the amplifier electronic circuit.
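The budgeting procedure just described, from the input-referred noise spectral density and receiver bandwidth to the electrical SNR, can be sketched as below. The responsivity, received power, noise density, and bandwidth values are illustrative assumptions, not figures from the text.

```python
import math

def electrical_snr(p_sig_w, responsivity_a_w, i_noise_a_sqrthz, bandwidth_hz):
    """SNR at the preamplifier input: mean-square signal photocurrent over
    the equivalent noise current (density integrated over the bandwidth)."""
    i_sig = responsivity_a_w * p_sig_w                       # photocurrent, A
    i_noise_rms = i_noise_a_sqrthz * math.sqrt(bandwidth_hz)  # rms noise, A
    return (i_sig / i_noise_rms) ** 2

# Example: 10 uW on a 0.8 A/W detector, a 10 pA/sqrt(Hz) front end, 20 GHz bandwidth
snr = electrical_snr(10e-6, 0.8, 10e-12, 20e9)
snr_db = 10 * math.log10(snr)  # about 15 dB
```

Running the same calculation in reverse, from a required SNR to the maximum allowable noise density, is the design direction the paragraph describes.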

The principal function of an optoelectronic receiver is to convert the received optical signals into their electronic equivalents, followed by amplification, sampling, and processing to recover the properties of the original shapes and sequence. At first, the optical signals must be converted to an electronic current in the photodetection device, a photodetector of either p-i-n or APD type, in which the optical power is absorbed in the active region and the generated electrons and holes are attracted to the positively and negatively biased electrodes, respectively. The generated current is proportional to the power of the optical signals, hence the name “square law” detection. The p-i-n detector is structured with p+- and n+-doped regions sandwiching the intrinsic layer, in which the absorption of the optical signal occurs. A high electric field is established in this region by reverse biasing the diode, and thus electrons and holes are attracted to either side of the diode, resulting in the generation of current. Similarly, an APD works with the reverse-biasing level close to the reverse breakdown level of the pn junction (no intrinsic layer), so that the electronic carriers can be multiplied in the avalanche flow when the optical signals are absorbed.

This photogenerated current is then fed into an electronic amplifier whose transimpedance must be sufficiently high, and whose noise sufficiently low, so that an adequate voltage signal can be obtained and then further amplified by a main amplifier of the voltage-gain type. For high-speed and wideband signals, the transimpedance amplification type is preferred as it offers a much wider band than the high-impedance type, though the noise level might be higher. There are two types of TIAs: the single input, single output type, and the type with two differential inputs and a single output. The output ports can be differential with a complementary port. The differential input TIA offers a much higher transimpedance gain (ZT) and a wider bandwidth as well. This is attributed to the use of a long-tail pair at the input, and hence a reasonably high input impedance that eases the feedback stability [24–26].

In Section 3.7, a case study of a coherent optical receiver is described from design to implementation, including the feedback control and noise reduction. Although the corner frequency is only a few hundred MHz, with the limited transition frequency of the transistors, this bandwidth is remarkable. The design is scalable to ultra-wide-band reception subsystems.

#### 3.5.2  Wide Band TIAs

Two types of TIAs are described, distinguished by the terms single input TIA and differential input TIA, that is, by whether the amplifier provides a single port or a differential two-port at the input. The latter type is normally designed using a differential transistor pair, termed a “long-tail pair,” instead of the single transistor stage of the former type.

#### 3.5.2.1  Single Input, Single Output

We prefer to treat this section as a design example and describe the experimental demonstration of a wide band and low-noise amplifier. In the next section, the differential input TIA is treated with large transimpedance and reasonably low noise.

#### 3.5.2.2  Differential Inputs, Single/Differential Output

An example circuit of the differential input TIA is shown in Figure 3.17, in which a long-tail pair, or differential pair, is employed at the input stage. Two matched transistors are used to ensure maximum common-mode rejection and maximum differential-mode gain. This pair has a very high input impedance, so the feedback from the output stage can be kept stable. The feedback resistance can thus be increased up to the limit set by the stability locus of the network poles, which offers high transimpedance ZT and wide bandwidth. A typical ZT of 3000–6000 Ω can be achieved with a 30 GHz 3 dB bandwidth, as shown in Figures 3.18 and 3.19. The chip image of the TIA can be seen in Figure 3.18a. Such a TIA can be implemented in either InP or SiGe material. The advantage of SiGe is that the circuit can be integrated with a high-speed Ge-APD detector, the ADC, and the DSP. On the other hand, if implemented in InP, a high-speed p-i-n or APD can be integrated and then radio frequency (RF) interconnected with the ADC and DSP. The differential group delay may be serious and must be compensated in the digital processing domain.

Figure 3.17   A typical structure of a differential TIA [27] with differential feedback paths.

(Adapted from H. Tran, et al., IEEE J. Solid St. Circ., 39(10), 1680–1689, 2004. With permission.)

Figure 3.18   Differential amplifiers: (a) chip level image and (b) referred input noise equivalent spectral noise density. Inphi TIA 3205 (type 1) and 2850 (type 2).

(Courtesy of Inphi Inc., Technical information on 3205 and 2850 TIA device, 2012.)

Figure 3.19   Differential amplifier: frequency response and differential group delay.

#### 3.5.3  Amplifier Noise Referred to Input

There are several noise sources in any electronic system, including thermal noise, shot noise, and quantum shot noise, especially in optoelectronic detection. Thermal noise results from the random motion of electrons in resistive elements at any temperature above absolute zero; this type of noise depends on the temperature. Shot noise is due to the discrete nature of the flowing current and the random scattering of electrons, and thus depends on the strength of the flowing currents, such as the biasing currents in electronic devices. Quantum shot noise is generated by the current emitted in the optoelectronic detection process, which depends on the intensity of the optical signals or sources imposed on the detectors; this type of noise is therefore signal dependent. In the case of CoD, the mixing of the LO laser and the signals normally occurs with the strength of the LO much larger than the signal average power. Thus, the quantum shot noise is dominated by that from the LO.

In practice, an equivalent electronic noise source is the total noise referred to the input of the electronic amplifier, which can be obtained by measuring the total spectral density of the noise distribution over the whole bandwidth of the amplification devices. Thus, the total noise spectral density can be evaluated and referred to the input port. For example, if the amplifier is a transimpedance type, the transimpedance of the device is measured first, and then the measured voltage spectral density at the output port can be referred to the input; in this case, it is the total equivalent noise spectral density. The common term employed and specified for TIAs is the total equivalent spectral noise density over the midband region of the amplifying device. The midband region of any amplifier is defined as the flat gain region from DC to the corner 3 dB point of the frequency response of the electronic device.
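The referral step just described, dividing the measured output noise voltage density by the transimpedance and integrating over the flat midband, can be sketched as follows. The 3000 Ω transimpedance and 30 nV/√Hz output density are illustrative numbers of the same order as the TIAs discussed here, not measured values.

```python
import math

def input_referred_noise(v_out_density_v_sqrthz, z_t_ohm, bandwidth_hz):
    """Refer an output noise voltage density to the TIA input and
    integrate over the flat midband to get the total rms noise current."""
    i_density = v_out_density_v_sqrthz / z_t_ohm       # A/sqrt(Hz) at the input
    i_total_rms = i_density * math.sqrt(bandwidth_hz)  # total rms current, A
    return i_density, i_total_rms

# 30 nV/sqrt(Hz) at the output of a 3000-ohm TIA over a 30 GHz midband
density, total = input_referred_noise(30e-9, 3000.0, 30e9)
# density = 10 pA/sqrt(Hz); total is about 1.7 uA rms
```

This total equivalent current is the quantity compared against the signal photocurrent when forming the SNR of Equation 3.47.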

Figure 3.20 illustrates the meaning of the total equivalent noise sources referred to the input port of a two-port electronic amplifying device. A noisy amplifier with an input excitation current source, typically the signal current generated from the PD after the optical-to-electrical conversion, can be represented by a noiseless amplifier and the current source in parallel with a noise source whose strength equals the total equivalent noise current referred to the input. The total equivalent noise current can be obtained by taking the product of this total equivalent noise current spectral density and the 3 dB bandwidth of the amplifying device. Thus, the SNR at the input of the electronic amplifier is given by

3.47()$\mathrm{SNR} = \frac{\left\langle i_s^2(t) \right\rangle}{\left\langle i_{Neq}^2(t) \right\rangle}$

Figure 3.20   Equivalent noise spectral density current sources.

From this SNR referred to the input of the electronic front end, one can estimate the eye opening of the voltage signals at the output of the amplifying stage, which is normally required by the ADC for sampling and conversion to digital signals for processing. One can then estimate the required OSNR at the input of the photodetector, and hence the launch power required at the transmitter over a number of span links with given attenuation factors.

Detailed analyses of amplifier noises and their equivalent noise sources referred to the input ports are given in Annex 2. It is noted that noises have no direction of flow; they always add and do not subtract, and thus they are measured as a noise power and not as a current. Electrical spectrum analyzers are therefore commonly used to measure the total noise spectral density, that is, the distribution of noise voltages over the spectral range under consideration, which is defined as the noise power spectral density distribution.

#### 3.6.1  DSP-Assisted Coherent Detection

Over the years since the introduction of optical coherent communications in the mid-1980s, the invention of optical amplifiers left coherent reception behind until recently, when long-haul transmission suffered from the nonlinearity of dispersion-compensating fibers and of the standard single-mode fiber transmission line due to its small effective area. Furthermore, the advancement of DSP in wireless communications has also contributed to the application of DSP in modern coherent communication systems. Hence the name “DSP-assisted coherent detection”: a real-time DSP is incorporated after the optoelectronic conversion of the total field of the LO and that of the signals, the analog received signals are sampled by a high-speed ADC, and the digitized signals are processed in a DSP. Real-time DSPs are currently being intensively researched for practical implementation. The main difference between real-time and offline processing is that a real-time processing algorithm must be efficient enough to complete within the limited time available for processing.

When polarization division multiplexed (PDM) QAM channels are transmitted and received, polarization and phase diversity receivers are employed. The schematic of such a receiver is shown in Figure 3.21a. Further, the structures of such reception systems incorporating DSP with the diversity hybrid coupler in the optical domain are shown in Figure 3.21b–d. The polarization diversity section, with polarization beam splitters at the signal and LO inputs, facilitates the demultiplexing of the polarized modes in the optical waveguides. The phase diversity, using a 90° optical phase shifter, allows the separation of the inphase (I-) and quadrature (Q-) components of the QAM channels. Using a 2 × 2 coupler also enables balanced reception with a photodetector pair connected back-to-back (B2B), and hence a 3 dB gain in sensitivity. Section 2.7 of Chapter 2 described the QAM modulation scheme using I-Q modulators for single-polarization or dual-polarization multiplexed channels.

Figure 3.21   Scheme of a synchronous coherent receiver using DSP for PE for coherent optical communications. (a) Generic scheme, (b) detailed optical receiver using only one polarization phase diversity coupler, (c) hybrid 90° coupler for polarization and phase diversity, (d) typical view of a hybrid coupler with two input ports and eight output ports of structure in (c). TE_V, TE_H = transverse electric mode with vertical (V) or horizontal (H) polarized mode, TM = transverse magnetic mode with polarization orthogonal to that of the TE mode. FS = phase shifter; PBS = polarization beam splitter; and MLSE = maximum-likelihood phase estimation.

(Adapted from S. Zhang et al., A comparison of phase estimation in coherent optical PSK system. In Photonics Global ’08, Paper C3-4A-03, Singapore, December 2008; S. Zhang et al., Adaptive decision-aided maximum likelihood phase estimation in coherent optical DQPSK system. In OptoElectronics and Communications Conference (OECC) ’08, Paper TuA-4, pp. 1–2, Sydney, Australia, July 2008.)

#### 3.6.1.1  DSP-Based Reception Systems

The schematic of a synchronous coherent receiver based on DSP is shown in Figure 3.22. Once the polarization and the I- and Q-optical components are separated by the hybrid coupler, the positive and negative parts of the I- and Q-components are coupled into a balanced optoelectronic receiver as shown in Figure 3.21b. Two PDs are connected B2B so that push–pull operation can be achieved, hence a 3 dB improvement compared with single-PD detection. The current generated from the B2B-connected PDs is fed into a TIA so that a voltage signal can be derived at the output. A voltage-gain amplifier is then used to boost these signals to the level required by the ADC, so that sampling can be conducted and the analog signals converted to the digital domain. These digitized signals are then fed into the DSP, where processing in the “soft domain” can be conducted. A number of processing algorithms can be employed at this stage to compensate for linear and nonlinear distortion effects due to optical signal propagation through the optical guided medium, and to recover the carrier phase and the clock rate for resampling of the data sequence, and so on. Chapter 6 will describe in detail the fundamental aspects of these processing algorithms. Figure 3.22 shows a schematic of possible processing phases in the DSP incorporated in the DSP-based coherent receiver. Besides the soft processing of the optical phase locking described in Chapter 5, it is necessary to lock the frequencies of the LO and of the signal carrier to within a certain limit, for example ±2 GHz, for the clock recovery algorithms to function.

Figure 3.22   Flow of functionalities of DSP processing in a QAM-coherent optical receiver with possible feedback control.

#### 3.6.2.1  Sensitivity

At ultrahigh bit rates, the laser must be externally modulated, so that the phase of the lightwave is conserved along the fiber transmission line. The detection can be direct, self-coherent, or homodyne/heterodyne. The sensitivity of the coherent receiver is also important for the transmission system, especially for the PSK scheme under both homodyne and heterodyne techniques. This section gives the analysis of the receiver for synchronous coherent optical fiber transmission systems. Consider that the optical fields of the signal and the LO are coupled via a fiber coupler with two output ports 1 and 2. The output fields are launched into two photodetectors connected B2B, and the electronic current is then amplified using a transimpedance amplifier and further equalized to extend the bandwidth of the receiver. Our objective is to obtain the receiver penalty and its degradation due to imperfect polarization mixing and unbalancing effects in the balanced receiver. A case study of the design, implementation, and measurement of an optical balanced receiver electronic circuit and its noise suppression techniques is given in Section 3.7 (Figure 3.23).

Figure 3.23   Equivalent current model at the input of the optical balanced receiver under CoD, average signal current, and equivalent noise current of the electronic preamplifier as seen from its input port and equalizer. FC = fiber coupler.

The following parameters are commonly used in analysis:

| Symbol | Meaning |
| --- | --- |
| $E_s$ | Amplitude of the signal optical field at the receiver |
| $E_L$ | Amplitude of the local oscillator optical field |
| $P_s$, $P_L$ | Optical power of the signal and the local oscillator at the input of the photodetector |
| $s(t)$ | The modulated pulse |
| $\langle i_{NS}^2(t) \rangle$ | Mean square noise current (power) produced by the total optical intensity on the photodetector |
| $\langle i_s^2(t) \rangle$ | Mean square current produced by the photodetector by $s(t)$ |
| $S_{NS}(t)$ | Shot noise spectral density of $\langle i_s^2(t) \rangle$ and the local oscillator power |
| $\langle i_{Neq}^2(t) \rangle$ | Equivalent noise current of the electronic preamplifier at its input |
| $Z_T(\omega)$ | Transfer impedance of the electronic preamplifier |
| $H_E(\omega)$ | Voltage transfer characteristic of the electronic equalizer following the electronic preamplifier |

The combined field of the signal and LO via a directional coupler can be written with their separate polarized field components as

3.48()$$\begin{aligned}E_{sX} &= K_{sX} E_s \cos(\omega_s t - \phi_m(t))\\ E_{sY} &= K_{sY} E_s \cos(\omega_s t - \phi_m(t) + \delta_s)\\ E_{LX} &= K_{LX} E_L \cos(\omega_L t)\\ E_{LY} &= K_{LY} E_L \cos(\omega_L t + \delta_L)\\ \phi_m(t) &= \frac{\pi}{2} K_m s(t)\end{aligned}$$

where $\phi_m(t)$ represents the phase modulation, $K_m$ is the modulation depth, and $K_{sX}$, $K_{sY}$, $K_{LX}$, and $K_{LY}$ are the intensity fraction coefficients in the X and Y directions of the signal and LO fields, respectively.

Thus, the output fields at ports 1 and 2 of the FC in the X-plane can be obtained using the transfer matrix as

3.49()
3.50()

with α defined as the intensity coupling ratio of the coupler. Thus, the field components at ports 1 and 2 can be derived by combining the X and Y components from Equations 3.49 and 3.50; thus, the total powers at ports 1 and 2 are given as

3.51()

where ωIF is the intermediate angular frequency, which is equal to the difference between the frequencies of the LO and the carrier of the signals. ϕe is the phase offset, and ϕp − ϕe is the demodulation reference phase error.

In Equation 3.51, the total field of the signal and the LO are added, and the product of the field vector and its conjugate is then taken to obtain the power. Only the terms whose frequencies fall within the sensitivity range of the photodetector produce an electronic current. Thus, the term at the sum of the signal and LO frequencies is not detected, and only the difference-frequency product of the two terms is detected, as given.
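The square-law beating described above can be illustrated numerically. The sketch below uses assumed, heavily scaled-down frequencies (real optical carriers sit near 193 THz) and a simple moving-average filter as a stand-in for the finite PD bandwidth; only the structure of the computation, not the values, reflects a real receiver.

```python
import numpy as np

# Toy "optical" frequencies (assumed, scaled down for simulation).
f_sig, f_lo = 1.00e9, 0.99e9          # signal and LO frequencies (Hz)
f_if = abs(f_sig - f_lo)              # expected beat (IF) at 10 MHz
fs = 8e9                              # simulation sampling rate
t = np.arange(0, 20e-6, 1/fs)

E_s, E_l = 0.01, 1.0                  # weak signal, strong LO
field = E_s*np.cos(2*np.pi*f_sig*t) + E_l*np.cos(2*np.pi*f_lo*t)
power = field**2                      # square-law photodetection

# Model the PD bandwidth with a moving-average low-pass filter:
# the sum-frequency terms near 2 GHz are suppressed, the IF beat is kept.
N = 160
i_pd = np.convolve(power, np.ones(N)/N, mode='same')

# The strongest AC component of the filtered photocurrent sits at f_if.
spec = np.abs(np.fft.rfft(i_pd - i_pd.mean()))
freqs = np.fft.rfftfreq(len(t), 1/fs)
peak = freqs[np.argmax(spec)]
```

The weak-signal term at twice the signal frequency is negligible, so after low-pass filtering the dominant AC term is the signal-LO beat, as the text states.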

Now assuming a binary PSK (BPSK) modulation scheme, the pulse has a square shape with amplitude +1 or −1, the PD is a p-i-n type, and the PD bandwidth is wider than the signal 3 dB bandwidth followed by an equalized electronic preamplifier. The signal at the output of the electronic equalizer or the input signal to the decision circuit is given by

3.52()

For a perfectly balanced receiver, KB=2 and α = 0.5; otherwise KB=1. The integrals of the first line in Equation 3.52 are given by

3.53()

$V_D(f)$ is the transfer function of the matched filter for equalization, and $T_B$ is the bit period. The total noise voltage, the sum of the quantum shot noise generated by the signal and the LO and the total equivalent noise of the electronic preamplifier referred to the input of the preamplifier and observed at the output of the equalizer, is given by

3.54()$$\langle v_N^2(t)\rangle = \left[K_B\alpha S'_{IS} + (2-K_B)S_{Ix} + S_{IE}\right] K_{IS}^2 \int_{-\infty}^{\infty}\left|H_E(f)\right|^2\,df = \left[K_B\alpha S'_{IS} + (2-K_B)S_{Ix} + S_{IE}\right]\frac{K_{IS}^2}{T_B}$$

For homodyne and heterodyne detection, we have

3.55()$$\langle v_N^2(t)\rangle = \frac{\Re q P_L}{\lambda T_B}\left[K_B\alpha S'_{IS} + (2-K_B)S_{Ix} + S_{IE}\right]$$

where the spectral densities $S ′ I X , S ′ I E$ are given by

3.56()$$S'_{Ix} = \frac{S_{Ix}}{S_{IS}}, \qquad S'_{IE} = \frac{S_{IE}}{S_{IS}}$$

Thus, the receiver sensitivity for BPSK and equiprobable detection and Gaussian density distribution is given by

3.57()$$P_e = \frac{1}{2}\operatorname{erfc}\!\left(\frac{\delta}{\sqrt{2}}\right)$$

with δ given by

3.58()

Thus, using Equations 3.52, 3.55, and 3.58 we obtain the receiver sensitivity in the linear power scale as

3.59()$$P_s = \langle P_s(t)\rangle = \frac{\Re q\delta^2}{4\lambda T_B K_H^2}\cdot\frac{K_B\alpha S'_{IS} + (2-K_B)S_{Ix} + S_{IE}}{\eta K_p(1-\alpha)\alpha K_B^2 \sin^2\!\left(\frac{\pi}{2}K_m\right)\cos^2(\phi_p - \phi_e)}$$
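As a quick numerical check, reading the error probability of Equation 3.57 in its common Gaussian-noise form $P_e = \tfrac{1}{2}\mathrm{erfc}(\delta/\sqrt{2})$, the value δ = 6 reproduces the familiar 10⁻⁹ BER reference point used in sensitivity calculations:

```python
import math

def bpsk_error_probability(delta):
    """P_e = 0.5 * erfc(delta / sqrt(2)) for synchronous BPSK in Gaussian noise."""
    return 0.5 * math.erfc(delta / math.sqrt(2))

# delta = 6 is the classic operating point quoted for a BER of ~1e-9.
pe = bpsk_error_probability(6.0)
```

This is only a sanity check of the error-function arithmetic; the receiver-dependent physics enters through δ via Equations 3.52, 3.55, and 3.58.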

#### 3.6.2.2  Shot Noise-Limited Receiver Sensitivity

In the case when the power of the LO dominates the noise of the electronic preamplifier and the equalizer, the receiver sensitivity (in linear scale) is given as

3.60()$$P_s = \langle P_{sL}\rangle = \frac{\Re q\delta^2}{4\lambda T_B K_H^2}$$

This shot-noise-limited receiver sensitivity can be plotted as shown in Figure 3.24.

Figure 3.24   (a) Receiver sensitivity of coherent homodyne and heterodyne detection, signal power versus bandwidth over the wavelength. (b) Power penalty of the receiver sensitivity from the shot-noise-limited level as a function of the excess noise of the LO.

(Extracted from Hodgkinson, I., IEEE J. Lightwave Technol., 5, 573–587, 1987, Figures 1 and 2. With permission.)

#### 3.6.2.3  Receiver Sensitivity under Nonideal Conditions

Under a nonideal condition, the receiver sensitivity departs from the shot-noise-limited sensitivity and is characterized by the receiver sensitivity penalty PDT as

3.61()
3.62()$$P_{DT} = 10\log_{10}\!\left[\frac{K_B\alpha S'_{IS} + (2-K_B)S_{Ix} + S_{IE}}{K_B\alpha}\right] - 10\log_{10}\!\left[K_B(1-\alpha)\right] - 10\log_{10}\!\left(\eta K_p\sin^2\!\left(\frac{\pi}{2}K_m\right)\cos^2(\phi_p-\phi_e)\right)$$

where η is the LO excess noise factor.

The receiver sensitivity is plotted against the ratio fB/λ for homodyne and heterodyne detection, as shown in Figure 3.24a, and the power penalty of the receiver sensitivity is plotted against the excess noise factor of the LO, as shown in Figure 3.24b. The receiver power penalty can be deduced as a function of the total electronic equivalent noise spectral density and of the rotation of the polarization of the LO, as can be found in [30]. Furthermore, in [31], the receiver power penalty and the normalized heterodyne center frequency vary as a function of the modulation parameter and of the optical power ratio at the same polarization angle.

#### 3.6.3  Digital Processing Systems

A generic structure of the coherent reception and DSP system is shown in Figure 3.25, in which the DSP system is placed after the sampling and conversion from analog to digital form. The optical signal fields beat with the LO laser, whose frequency is approximately identical to the signal channel carrier. The beating occurs in the square-law photodetectors; that is, the sum of the two fields is squared and the product term decomposes into difference- and sum-frequency terms. Only the difference term falls within the baseband region and is amplified by the electronic preamplifier, which is a balanced differential transimpedance type.

Figure 3.25   Coherent reception and the DSP system.

If the signals are complex, the real and imaginary components form a pair; another pair comes from the other polarization mode channel. The digitized signals of both the real and imaginary parts are processed in real time or offline. The processors contain the algorithms to combat a number of transmission impairments, such as the imbalance between the in-phase and quadrature components created at the transmitter, recovery of the clock rate and timing for resampling, carrier phase estimation (PE) of the signal phases, and adaptive equalization for compensation of propagation dispersion effects using ML phase estimation. These algorithms are built into the hardware processors or memory and loaded into the processing subsystems.

The sampling rate must normally be twice the signal bandwidth to ensure that the Nyquist criterion is satisfied. Although this rate is very high for 25 to 32 GSy/s optical channels, the Fujitsu ADC has reached this requirement with a sampling rate of 56 to 64 GSa/s, as depicted in Figure 3.37.

The linewidth resolution of the processing for semiconductor device fabrication has progressed tremendously over the years in an exponential trend, as shown in Table 3.1. This progress has been made possible by successes in lithographic techniques using optical sources at short wavelengths such as UV, electron-beam, and X-ray lithography, with appropriate photoresists such as SU-8, which would allow the line resolution to reach 5 nm in 2020. If we plot the trend on a log-linear scale as shown in Figure 3.26, a straight line is obtained, meaning that the resolution decreases exponentially. When the gate width is reduced, the electronic speed increases tremendously; at 5 nm, the speed of a CMOS device in SiGe would reach several tens of GHz. Regarding high-speed ADCs and digital-to-analog converters (DACs), the clock speed is increased by paralleling, delaying the extracted outputs of the registers, and summing all the digitized lines to form a very high speed operation. For example, for the Fujitsu 64 GSa/s DAC or ADC, the applied sinusoidal clock waveform is only 2 GHz. Figure 3.27 shows the progress in the speed development of Fujitsu ADCs and DACs.

Figure 3.26   Semiconductor manufacturing with resolution of line resolution.

Figure 3.27   Evolution of ADC and DAC operating speed with corresponding linewidth resolution.

### Table 3.1   Milestones of Progress of Linewidth Resolution

Semiconductor Manufacturing Processes and Spatial Resolution (Gate Width)

| | UV Lithography | Electron Lithography | X-ray Lithography |
| --- | --- | --- | --- |
| 10 μm—1971 | 800 nm (0.80 μm)—1989 | 90 nm—2002 | 14 nm—approx. 2014 |
| 3 μm—1975 | 600 nm (0.60 μm)—1994 | 65 nm—2006 | 10 nm—approx. 2016 |
| 1.5 μm—1982 | 350 nm (0.35 μm)—1995 | 45 nm—2008 | 7 nm—approx. 2018 |
| 1 μm—1985 | 250 nm (0.25 μm)—1998 | 32 nm—2010 | 5 nm—approx. 2020 |
| | 180 nm (0.18 μm)—1999 | 22 nm—2012 | |
| | 130 nm (0.13 μm)—2000 | | |

#### 3.6.3.1.1  Definition

Effective number of bits (ENOB) is a measure of the quality of a digitized signal. The resolution of a DAC or ADC is commonly specified by the number of bits used to represent the analog value, in principle giving 2N signal levels for an N-bit signal. However, all real signals contain a certain amount of noise. If the converter is able to represent signal levels below the system noise floor, the lower bits of the digitized signal only represent system noise and do not contain useful information. ENOB specifies the number of bits in the digitized signal above the noise floor. Often, ENOB is also used as a quality measure for other blocks such as sample-and-hold amplifiers. This way, analog blocks can also be easily included in signal-chain calculations, as the total ENOB of a chain of blocks is usually below the ENOB of the worst block.

Thus, we can represent the ENOB of a digitalized system by

3.63()$$\mathrm{ENOB} = \frac{\mathrm{SINAD} - 1.76}{6.02}$$

where all values are given in dB, and the signal-to-noise and distortion ratio (SINAD) is the ratio of the total signal including distortion and noise to the wanted signal; the 6.02 term in the divisor converts decibels (a log10 representation) to bits (a log2 representation); and the 1.76 term comes from the quantization error in an ideal ADC (http://en.wikipedia.org/wiki/ENOB, access date: Sept. 2011).

This definition compares the SINAD of an ideal ADC or DAC with a word length of ENOB bits with the SINAD of the ADC or DAC being tested. Indeed the SINAD is a measure of the quality of a signal from a communications device, often defined as

3.64()$$\mathrm{SINAD} = \frac{P_{\mathrm{sig}} + P_{\mathrm{noise}} + P_{\mathrm{distortion}}}{P_{\mathrm{noise}} + P_{\mathrm{distortion}}}$$

where P is the average power of the signal, noise, and distortion components. SINAD is usually expressed in dB and is quoted alongside the receiver sensitivity to give a quantitative evaluation of it. Note that with this definition, unlike SNR, a SINAD reading can never be less than 1 (i.e., it is always positive when quoted in dB).

When calculating the distortion, it is common to exclude the DC components. Because of its widespread use, SINAD has acquired a few different definitions. SINAD is calculated as one of the following: (i) the ratio of (a) total received power, that is, the signal, to (b) the noise-plus-distortion power; this is modeled by the equation above. (ii) The ratio of (a) the power of the original modulating audio signal, that is, from a modulated radio frequency carrier, to (b) the residual audio power, that is, the noise-plus-distortion power remaining after the original modulating audio signal is removed. With this second definition, it is possible for SINAD to be less than 1. This definition is used when SINAD enters the calculation of ENOB for an ADC.
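The link between Equations 3.63 and 3.64 can be exercised with a minimal sketch: quantizing a full-scale sine with an ideal 8-bit converter, measuring SINAD, and converting to ENOB should return close to 8 bits. The record length and cycle count below are arbitrary choices; coherent sampling (an integer number of cycles per record) avoids spectral leakage.

```python
import numpy as np

N_BITS, N_SAMPLES, K_CYCLES = 8, 4096, 101
n = np.arange(N_SAMPLES)
x = np.sin(2*np.pi*K_CYCLES*n/N_SAMPLES)            # full-scale sine input

# Ideal N-bit quantizer over the [-1, 1] range.
levels = 2**N_BITS
xq = np.round((x + 1)/2*(levels - 1))/(levels - 1)*2 - 1
noise_dist = xq - x                                  # quantization error only

# SINAD (Eq. 3.64) in dB, then ENOB via Eq. 3.63.
sinad_db = 10*np.log10(np.mean(xq**2)/np.mean(noise_dist**2))
enob = (sinad_db - 1.76)/6.02
```

For an ideal converter the measured SINAD lands near 6.02 × 8 + 1.76 ≈ 49.9 dB, so the recovered ENOB is close to the nominal 8 bits; any extra noise in a real converter pushes it below that.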

### Example: Consider the following measurements of a 3-bit unipolar DAC with reference voltage Vref = 8 V:

| Digital input | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Analog output (V) | −0.01 | 1.03 | 2.02 | 2.96 | 3.95 | 5.02 | 6.00 | 7.08 |

The offset error in this case is −0.01 V or −0.01 LSB, as 1 V = 1 LSB (least significant bit) in this example. The gain error is (7.08 + 0.01) − 7 = 0.09 LSB. Correcting the offset and gain errors, we obtain the following list of measurements: (0, 1.03, 2.00, 2.93, 3.91, 4.96, 5.93, 7) LSB. This allows the INL and DNL to be calculated: INL = (0, 0.03, 0, −0.07, −0.09, −0.04, −0.07, 0) LSB, and DNL = (0.03, −0.03, −0.07, −0.02, 0.05, −0.03, 0.07, 0) LSB.
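The arithmetic of this example can be reproduced directly; the exact corrected values differ from the two-decimal list quoted above only by rounding, so the comparison is good to about ±0.01 LSB.

```python
import numpy as np

# Measured outputs of the 3-bit DAC example (volts; 1 V = 1 LSB here).
v_out = np.array([-0.01, 1.03, 2.02, 2.96, 3.95, 5.02, 6.00, 7.08])
ideal = np.arange(8.0)                     # ideal staircase for Vref = 8 V

offset = v_out[0] - ideal[0]               # -0.01 LSB
gain_error = (v_out[-1] - v_out[0]) - 7.0  # 0.09 LSB

# Remove the offset and rescale the measured span back to 7 LSB.
corrected = (v_out - offset) * 7.0 / (v_out[-1] - v_out[0])
inl = corrected - ideal                    # deviation from the ideal transfer curve
dnl = np.diff(corrected) - 1.0             # deviation of each step from 1 LSB
```

Each INL entry is the cumulative sum of the DNL entries up to that code, which is the relation used in the DNL/INL discussion that follows.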

Differential nonlinearity (DNL): For an ideal ADC, the output is divided into 2N uniform steps, each of width Δ, as shown in Figure 3.28. Any deviation from the ideal step width is called DNL and is measured in LSBs. For an ideal ADC, the DNL is 0 LSB. In a practical ADC, DNL error comes from its architecture. For example, in a successive-approximation-register ADC, DNL error may occur near the mid-range due to mismatching in its internal DAC.

Figure 3.28   Representation of DNL in a transfer curve of an ADC.

Integral nonlinearity (INL) is a measure of how closely the ADC output matches its ideal response. INL can be defined as the deviation in LSB of the actual transfer function of the ADC from the ideal transfer curve. INL can be estimated using DNL at each step by calculating the cumulative sum of DNL errors up to that point. In reality, INL is measured by plotting the ADC transfer characteristics. INL is popularly measured using either (i) best-fit (best straight line) method or (ii) end point method.

Best-fit INL: The best-fit method of INL measurement considers offset and gain error. One can see in Figure 3.29 that the ideal transfer curve considered for calculating the best-fit INL does not go through the origin. The ideal transfer curve drawn here depicts the nearest first-order approximation to the actual transfer curve of the ADC.

Figure 3.29   Best-fit INL.

The intercept and slope of this ideal curve can lend us the values of the offset and gain error of the ADC. Quite intuitively, the best-fit method yields better results for INL. For this reason, many times, this is the number present on ADC datasheets.

The only real use of the best-fit INL number is to predict distortion in time-variant signal applications. This number would be equivalent to the maximum deviation for an AC application. However, it is always better to use the distortion numbers than the INL numbers. To calculate the error budget, end-point INL numbers provide a better estimate, and this is the specification that is generally provided in datasheets; so one has to use it instead of the best-fit INL.

End-point INL: The end-point method provides the worst-case INL. This measurement passes the straight line through the origin and the maximum output code of the ADC (Figure 3.30). As this method provides the worst-case INL, it is more useful than the best-fit number for DC applications. This INL number is typically useful for error budget calculation, and the parameter must be considered for applications involving precision measurements and control.

Figure 3.30   End-point INL.

The absolute and relative accuracy can now be calculated. In this case, the ENOB absolute accuracy is calculated using the largest absolute deviation D, in this case, 0.08 V:

3.65()$$D = \frac{V_{\mathrm{ref}}}{2^{\mathrm{ENOB}}} \rightarrow \mathrm{ENOB} = \log_2\!\left(\frac{V_{\mathrm{ref}}}{D}\right) = 6.64\ \mathrm{bits}$$

The ENOB relative accuracy is calculated using the largest relative (INL) deviation d, in this case, 0.09 V.

3.66()$$d = \frac{V_{\mathrm{ref}}}{2^{\mathrm{ENOB}}} \rightarrow \mathrm{ENOB} = \log_2\!\left(\frac{V_{\mathrm{ref}}}{d}\right) = 6.47\ \mathrm{bits}$$

For this kind of ENOB calculation, note that the ENOB can be larger or smaller than the actual number of bits (ANOBs). When the ENOB is smaller than the ANOB, this means that some of the LSBs of the result are inaccurate. However, one can also argue that the ENOB can never be larger than the ANOB, because you always have to add the quantization error of an ideal converter, which is ±0.5 LSB. Different designers may use different definitions of ENOB!
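The closing ENOB figures of the example follow directly from the relation d = Vref/2^ENOB of Equation 3.66, that is, ENOB = log2(Vref/deviation):

```python
import math

# Accuracy ENOB for the 3-bit DAC example above.
V_REF = 8.0
D = 0.08    # largest absolute deviation (V)
d = 0.09    # largest relative (INL) deviation (V)

enob_absolute = math.log2(V_REF / D)   # about 6.64 bits
enob_relative = math.log2(V_REF / d)   # about 6.47 bits
```

Both values exceed the converter's actual 3 bits, illustrating the closing remark that this kind of ENOB can be larger than the ANOB under some definitions.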

#### 3.6.3.1.2  High-Speed ADC and DAC Evaluation Incorporating Statistical Property

The ENOB of an ADC is considered as the number of bits that an analog signal can convert to its digital equivalent by the number of levels represented by the modulo-2 levels, which are reduced due to noises contributed by electronic components in such a convertor. Thus, only an effective number of equivalent bits can be accounted for. Hence, the term ENOB is proposed.

As shown in Figure 3.31b, a real ADC can be modeled as a cascade of two ideal ADCs, additive noise sources, and an AGC amplifier [31]. The quantized levels are thus equivalent to a specific ENOB as long as the ADC is operating in the linear, nonsaturated region. If the normalized signal amplitude or power surpasses unity (the saturated region), the signals are clipped. The decision level of the quantization in an ADC normally varies following a normalized Gaussian PDF; thus, we can estimate the RMS noise introduced by the ADC as

3.67()

Figure 3.31   (a) Measured ENOB frequency response of a commercial real-time DSA of 20 GHz bandwidth and sampling rate of 50 GSa/s. (b) Deduced ADC model of variable ENOB based on the experimental frequency response of (a) and spectrum of broadband signals.

where:

• σ is the variance
• x is the variable related to the integration of decision voltage
• similarly, y for integration inside one LSB

Given the known quantity LSB2/12 by the introduction of the ideal quantization error, σ2 can be determined via the Gaussian noise distribution. We can thus deduce the ENOB values corresponding to the levels of Gaussian noise as

3.68()$$\mathrm{ENOB} = N - \log_2\!\left(\frac{\sqrt{\mathrm{LSB}^2/12 + (A\sigma)^2}}{\mathrm{LSB}/\sqrt{12}}\right)$$

where A is the RMS amplitude derived from the noise power. According to this ENOB model, the frequency response of the ENOB of the digital sampling analyzer (DSA) is shown in Figure 3.31a, with the DSA excited by sinusoidal waves of different frequencies. As observed, the ENOB varies with the excitation frequency, in the range from 5 to 5.5. Having known the frequency response of the sampling device, what is the ENOB of the device when excited with broadband signals? This indicates the different resolution of the receiver ADC when the transmission operates under different noisy and dispersive conditions; thus, an equivalent ENOB model for performance evaluation is essential. We note that the amplitudes of the optical fields arriving at the receiver vary depending on the conditions of the optical transmission line. The AGC has a nonlinear gain characteristic in which the input-sampled signal power level is normalized with respect to the saturated (clipping) level. The gain is significantly high in the linear region and saturates at high levels. The received signal R_Xin is scaled according to $R\_X_{\mathrm{out}} = R\_X_{\mathrm{in}}/\sqrt{P_{\mathrm{in\_av}}/P_{\mathrm{Ref}}}$, where the signal-averaged power Pin_av is estimated and the gain is scaled relative to the reference power level PRef of the AGC; a linear scaling factor then yields the output sampled value R_Xout. The gain of the AGC is also adjusted according to the signal energy, via the feedback control path from the DSP (see Figure 3.31b). Thus, new values of ENOB can be evaluated with noise distributed across the frequency spectrum of the signals, by an averaging process. This signal-dependent ENOB is denoted as ENOBs.
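A minimal sketch of the two operations just described: the ENOB degradation of Equation 3.68 expressed in LSB units, and the AGC scaling rule. The square root in the AGC function is one plausible reading of the scaling (consistent with normalizing the average sample power to PRef), and the function names are illustrative only.

```python
import math
import numpy as np

def enob_with_gaussian_noise(n_bits, a_sigma_lsb):
    """Eq. 3.68: ENOB of an N-bit ADC whose total noise adds an RMS term
    (A*sigma, expressed in LSBs) to the ideal quantization noise LSB/sqrt(12)."""
    ideal_rms = 1.0 / math.sqrt(12.0)                    # LSB units
    total_rms = math.sqrt(1.0/12.0 + a_sigma_lsb**2)
    return n_bits - math.log2(total_rms / ideal_rms)

def agc_normalize(r_x_in, p_ref=1.0):
    """R_Xout = R_Xin / sqrt(P_in_av / P_Ref): scale the received samples
    so that their average power equals the AGC reference level."""
    p_in_av = np.mean(np.abs(r_x_in)**2)
    return r_x_in / np.sqrt(p_in_av / p_ref)
```

With zero added noise the model returns the nominal bit count; an added RMS noise of 0.5 LSB costs exactly one effective bit, which gives a feel for how quickly ENOB erodes.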

#### 3.6.3.1.3  Impact of ENOB on Transmission Performance

Figure 3.32   (a) B2B performance with different ENOBs values of the ADC model with simulated data (8-bit ADC) and (b) OSNR versus BER under different simulated ENOBs of offline data obtained from an experimental digital coherent receiver.

(From Mao, B. N. et al., Investigation on the ENOB and clipping effect of real ADC and AGC in coherent QAM transmission system. In Proceedings of ECOC, Geneva, 2011.)

Figure 3.33   Comprehensive effects of AGC clipping (inverse of PRef) and ENOBs of the coherent receiver, for experimental transmission under (a) B2B; (b, c) DWDM operation with full CD compensation (CDC) in the linear and nonlinear regions with 4 dBm launch power; and (d, e) no CD compensation in the linear and nonlinear regions with 5 dBm launch power.

#### 3.6.3.2  Digital Processors

The structures of the DAC and ADC are shown in Figures 3.34 and 3.35, respectively. Normally, there are four DACs in an IC, in which each DAC section is clocked with a clock sequence derived from a lower-frequency sinusoidal wave injected externally into the DAC. Four units are required for the in-phase and quadrature components of the QAM-modulated polarized channels; thus the notations IDAC and QDAC are shown in the diagram. Similarly, the optical received signals of PDM-QAM are sampled by a four-phase sampler and then converted to digital form into four groups of I and Q lanes for processing in the DSP subsystem. Because of the interleaving of the sampling clock waveform, the digitized bits appear simultaneously at the end of a clock period that is sufficiently long for all the samples to be collected. For example, as shown in Figure 3.35, 1024 samples are obtained at a periodicity corresponding to a 500 MHz clock cycle for the 8-bit ADC. Thus, the clock has been slowed down by a factor of 128; equivalently, the sampling interval is 1/(128 × 500 MHz) = 1/(64 GHz). The sampling is implemented using a CHArged mode Interleaved Sampler (CHAIS).
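The interleaving arithmetic can be sketched with a toy model (hypothetical names throughout): 128 lanes clocked at 500 MHz yield a 64 GSa/s aggregate stream, and de-interleaving the lanes reproduces the fast sample order.

```python
import numpy as np

CLOCK_HZ = 500e6                           # slow external clock
N_LANES = 128                              # interleaved sampler lanes
aggregate_rate = CLOCK_HZ * N_LANES        # 64 GSa/s aggregate rate
sample_interval = 1.0 / aggregate_rate     # 15.625 ps between samples

# Stand-in for 1024 consecutive fast samples, as in the Figure 3.35 example.
fast_stream = np.arange(1024)

# Lane k holds every 128th sample, one per slow clock cycle.
lanes = [fast_stream[k::N_LANES] for k in range(N_LANES)]

# Interleaving the lane outputs reconstructs the original fast stream.
recombined = np.stack(lanes, axis=1).reshape(-1)
```

Each lane thus only needs to settle once per 2 ns slow-clock cycle, which is how a modest external clock supports the very high aggregate sampling rate.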

Figure 3.34   Fujitsu DAC structures for four-channel PDM-QPSK signals: (a) schematic diagram and (b) processing function.

Figure 3.35   ADC principles of operations (CHAIS).

Figure 3.36 shows a generic diagram of an optical DSP-based transceiver employing both DAC and ADC under QPSK modulated or QAM signals. The current maximum sampling rate of 64 GSa/s is available commercially. An IC image of the ADC chip is shown in Figure 3.37.

Figure 3.36   Schematic of a typical structure of DAC and ADC transceiver subsystems for PDM-QPSK modulation channels.

Figure 3.37   Fujitsu ADC subsystems with a dual convertor structure.

#### 3.7  Concluding Remarks

This chapter has described the principles of coherent reception and associated techniques, with noise considerations and the main functions of the DSP. The DSP algorithms are described not in a separate chapter but within the relevant chapters.

Furthermore, matching the LO laser to the carrier of the transmitted channel is very important for effective CoD; otherwise, degradation of the receiver sensitivity results. The International Telecommunication Union standard requires that, for a DSP-based coherent receiver, the frequency offset between the LO and the carrier be within ±2.5 GHz. In practice, it is expected that in network and system management the tuning of the LO is done remotely, with automatic locking of the LO using some prior knowledge of the frequency region to set the initial LO frequency. Thus, this section briefly describes optical phase locking of the LO source for an intradyne coherent reception subsystem.

#### References

A. H. Gnauck and P. J. Winzer, Optical phase-shift-keyed transmission, J. Lightwave Technol., 23, 115–130, 2005.
R. C. Alferness, Guided wave devices for optical communication, IEEE J. Quantum. Elect., QE-17, 946–959, 1981.
W. A. Stallard, A. R. Beaumont, and R. C. Booth, Integrated optic devices for coherent transmission, IEEE J. Lightwave Technol., LT-4(7), 852–857, July 1986.
V. Ferrero and S. Camatel, Optical phase locking techniques: An overview and a novel method based on single sideband sub-carrier modulation, Opt. Express, 16(2), 818–828, 21 January 2008.
I. Garrett and G. Jacobsen, Theoretical analysis of heterodyne optical receivers for transmission systems using (semiconductor) lasers with nonnegligible linewidth, IEEE J. Lightwave Technol., LT-3/4, 323–334, 1986.
G. Nicholson, Probability of error for optical heterodyne DPSK systems with quantum phase noise, Electron. Lett., 20/24, 1005–1007, 1984.
S. Shimada, Coherent Lightwave Communications Technology. Chapman and Hall, London, 1995, p. 27.
Y. Yamamoto and T. Kimura, Coherent optical fiber transmission system, IEEE J. Quantum Electron., QE-17, 919–934, 1981.
S. Savory and T. Hadjifotiou, Laser linewidth requirements for optical DQPSK optical systems, IEEE Photonic Technol. Lett., 16(3), 930–932, March 2004.
K. Iwashita and T. Masumoto, Modulation and detection characteristics of optical continuous phase FSK transmission system, IEEE J. Lightwave Technol., LT-5/4, 452–462, 1987.
F. Derr, Coherent optical QPSK intradyne system: Concept and digital receiver realization, IEEE J. Lightwave Technol., 10(9), 1290–1296, 1992.
G. Bosco, I. N. Cano, P. Poggiolini, L. Li, and M. Chen, MLSE-based DQPSK transmission in 43Gb/s DWDM long-haul dispersion managed optical systems, IEEE J. Lightwave Technol., 28(10), May 15, 2010.
D.-S. Ly-Gagnon, S. Tsukamoto, K. Katoh, and K. Kikuchi, Coherent detection of optical quadrature phase-shift keying signals with carrier phase estimation, J. Lightwave Technol., 24, 12–21, 2006.
E. Ip and J. M. Kahn, Feedforward carrier recovery for coherent optical communications, J. Lightwave Technol., 25, 2675–2692, 2007.
L. N. Binh, Dual-ring 16-Star QAM direct and coherent detection in 100 Gb/s optically amplified fiber transmission: Simulation, Opt. Quant. Electron., August 5, 2008. Accepted December 2008. Published on line Dec 2008.
L. N. Binh, Generation of multi-level amplitude-differential phase shift keying modulation formats using only one dual-drive Mach-Zehnder interferometric optical modulator, Opt. Eng., 48(4), 2009.
M. Nazarathy, X. Liu, L. Christen, Y. K. Lize, and A. Willner, Self-coherent multisymbol detection of optical differential phase-shift-keying, J. Lightwave Technol., 26, 1921–1934, 2008.
R. Noe, PLL-free synchronous QPSK polarization multiplex/diversity receiver concept with digital I&Q baseband processing, IEEE Photonics Technol. Lett., 17, 887–889, 2005.
L. G. Kazovsky, G. Kalogerakis, and W.-T. Shaw, Homodyne phase-shift-keying systems: Past challenges and future opportunities, J. Lightwave Technol., 24, 4876–4884, 2006.
J. P. Gordon and L. F. Mollenauer, Phase noise in photonic communications systems using linear amplifiers, Opt. Lett., 15, 1351–1353, 1990.
H. Kim and A. H. Gnauck, Experimental investigation of the performance limitation of DPSK systems due to nonlinear phase noise, IEEE Photonics Technol. Lett., 15, 320–322, 2003.
S. Zhang, P. Y. Kam, J. Chen, and C. Yu, Receiver sensitivity improvement using decision-aided maximum likelihood phase estimation in coherent optical DQPSK system. In Conference on Lasers and Electro-Optics/Quantum Electronics and Laser Science and Photonic Applications Systems Technologies, Technical Digest (CD) (Optical Society of America, 2008), paper CThJJ2.
P. Y. Kam, Maximum-likelihood carrier phase recovery for linear suppressed-carrier digital data modulations, IEEE Trans. Commun., COM-34, 522–527, June 1986.
E. M. Cherry and D. A. Hooper, Amplifying Devices and Amplifiers, John Wiley & Sons, New York, 1965.
E. Cherry and D. Hooper, The design of wide-band transistor feedback amplifiers, Proc. IEE, 110(2), 375–389, 1963.
N. M. S. Costa and A. V. T. Cartaxo, Optical DQPSK system performance evaluation using equivalent differential phase in presence of receiver imperfections, IEEE J. Lightwave Technol., 28(12), 1735–1744, June 2010.
H. Tran, F. Pera, D.S. McPherson, D. Viorel, and S. P. Voinigescu, 6-kΩ, 43-Gb/s differential transimpedance-limiting amplifier with auto-zero feedback and high dynamic range, IEEE J. Solid St. Circ., 39(10), 1680–1689, 2004.
S. Zhang, P. Y. Kam, J. Chen, and C. Yu, A comparison of phase estimation in coherent optical PSK system. In Photonics Global ’08, Paper C3-4A-03, Singapore, December 2008.
S. Zhang, P. Y. Kam, J. Chen, and C. Yu, Adaptive decision-aided maximum likelihood phase estimation in coherent optical DQPSK system. In OptoElectronics and Communications Conference (OECC) ’08, Paper TuA-4, pp. 1–2, Sydney, Australia, July 2008.
I. Hodgkinson, Receiver analysis for optical fiber communications systems, IEEE J. Lightwave Technol., 5(4), 573–587, 1987.
N. Stojanovic, An algorithm for AGC optimization in MLSE dispersion compensation optical receivers, IEEE Trans. Circ. Syst. I, 55, 2841–2847, 2008.
B. N. Mao et al., Investigation on the ENOB and clipping effect of real ADC and AGC in coherent QAM transmission system. In Proceedings of ECOC, Geneva, 2011.
