Detection of optical signals can be carried out at the optical receiver by direct conversion of the optical signal power into an electronic current in the photodiode (PD), followed by electronic amplification. This chapter provides a fundamental understanding of coherent detection (CoD) of optical signals, which requires the mixing of the optical field of the signals with that of the local oscillator (LO), a high-power laser, so that their beating product preserves both the phase and amplitude characteristics of the modulated signals in the electronic domain. Optical preamplification in CoD can also be integrated at the front end of the optical receiver.
With the exponential increase in data traffic, especially the demand for ultra-broad bandwidth driven by multimedia applications, cost-effective ultra-high-speed optical networks have become highly desirable. It is expected that Ethernet technology will not only dominate in access networks but will also become the key transport technology of next-generation metro/core networks. 100 Gigabit Ethernet (100 GbE) is currently considered the next logical evolution step after 10 GbE. Based on the anticipated 100 GbE requirements, a serial transmission data rate of 100 Gbit/s per wavelength is required. To achieve this data rate while complying with current system design specifications such as channel spacing, chromatic dispersion (CD), and polarization mode dispersion (PMD) tolerance, coherent optical communication systems with multilevel modulation formats will be desired, because they can provide high spectral efficiency, high receiver sensitivity, and potentially high tolerance to fiber dispersion effects [1–6].^{*}
* Synchronous detection is implemented by mixing the signals and a strong local oscillator in association with the phase locking of the local oscillator to that of the carrier.
Compared to conventional direct detection in intensity-modulation/direct-detection (IMDD) systems, which only detects the intensity of the signal light, CoD can retrieve the phase information of the light and can therefore tremendously improve the receiver sensitivity. Coherent optical receivers are important components in long-haul optical fiber communication systems and networks, improving the receiver sensitivity and thus extending the transmission distance. Coherent techniques were considered for optical transmission systems in the 1980s, when the repeater spacing for single-mode optical fiber at a bit rate of 140 Mb/s was pushed to 60 km instead of 40 km. However, the invention of optical fiber amplifiers in the late 1980s overtook this approach. Recently, interest in coherent optical communications has attracted significant research activity for ultra-high bit rate dense wavelength division multiplexing (DWDM) optical systems and networks. This revival has been possible because (i) the use of optical amplifiers in cascaded fiber spans adds significant noise and thus limits the transmission distance; (ii) advances in digital signal processors (DSP), whose sampling rates can reach a few tens of gigasamples/s, allow the processing of beating signals to recover the phase, that is, phase estimation (PE); (iii) advanced signal processing algorithms such as the Viterbi and Turbo algorithms are available; and (iv) differential coding, modulation, and detection of such signals may not require an optical phase-locked loop (OPLL), allowing self-coherent reception and DSP recovery of the transmitted signals. These technological advances, especially digital processors at ultra-high sampling rates, overcome several difficulties encountered in the homodyne coherent receivers of the first coherent system generation in the 1980s.
As is well known, a typical arrangement of an optical receiver is that the optical signals are detected by a PD (a PIN diode, an avalanche photodiode [APD], or a photon-counting device); the electrons generated in the photodetector are then amplified by a front-end electronic amplifier, and the electronic signals are decoded to recover the original format. However, when the fields of the incoming optical signals are mixed with those of an LO whose frequency can be identical to or different from that of the carrier, the phase and frequency properties of the resultant signals reflect those of the original signals. Coherent optical communication systems have also been reviving dramatically due to electronic processing and the availability of stable narrow-linewidth lasers.
This chapter deals with the analysis and design of coherent receivers with an OPLL, in which the optical signals and the LO are mixed in the optical domain and then detected by the optoelectronic receiver following this mixing. Thus, both the optical mixing and photodetection devices act as the fundamental elements of a coherent optical receiver. Depending on the frequency difference between the lightwave carrier of the optical signals and that of the LO, the CoD can be termed heterodyne or homodyne detection. For heterodyne detection, there is a difference in frequency and thus the beating signal falls in a passband region in the electronic domain; all the electronic processing at the front end must then operate in this passband region. In homodyne detection, on the other hand, there is no frequency difference and the detection is in the baseband of the electronic signal. Both cases require locking of the LO to the carrier of the signals. An OPLL is thus treated in this chapter.
This chapter is organized as follows: Section 3.2 gives an account of the components of coherent receivers, Section 3.3 outlines the principles of optical coherent detection under heterodyne, homodyne, or intradyne techniques, and Section 3.4 gives details of the OPLL which is a very important development for modern optical coherent detection.
The design of an optical receiver depends on the modulation format of the signals generated by the transmitter. The modulation of the optical carrier can be in the form of amplitude, phase, or frequency. Furthermore, pulse shaping also plays a critical role in the detection and the bit-error rate (BER) of the receiver and hence of the transmission system. In particular, the design depends on whether the modulation is analog or digital, Gaussian or exponential in pulse shape, on-off keying or multilevel, and so on.
Figure 3.1 shows the schematic diagram of a digital coherent optical receiver, which is similar to a direct detection receiver but with an optical mixer at the front end. Figure 3.2 shows the small-signal equivalent circuit of such a receiver's front end. The phase of the detected signals, at base- or passband in the electrical domain, is preserved in the electronic currents and voltages at the output of the electronic preamplifier. The optical front end is an optical mixer combining the fields of the local laser and the optical signals so that the envelopes of the optical signals beat with each other, producing components at the sum and the difference of the lightwave frequencies. Only the lower, difference-frequency term, which falls within the absorption range of the photodetector, is converted into an electronic current preserving both the phase and amplitude of the modulated signals.
Figure 3.1 Schematic diagram of a digital optical coherent receiver with an additional LO mixed with the received optical signals before detection by the optical receiver.
Figure 3.2 Schematic diagram of an electronic preamplifier in an optical receiver of a transimpedance electronic amplifier at the front end. The current source represents the electronic current generated in the photodetector due to the beating of the local oscillator and the optical signals. Cd = photodiode capacitance.
Thus, an optical receiver front end, very much the same as that of direct detection, is connected following the optical mixing front end and consists of: a photodetector for converting lightwave energy into electronic currents; an electronic preamplifier for further amplification of the generated electronic current; an electronic equalizer, usually operating on voltages, for bandwidth extension; a main amplifier for further voltage amplification; a clock recovery circuit for regenerating the timing sequence; and a voltage-level decision circuit for sampling the waveform for the final recovery of the transmitted digital sequence. The optoelectronic preamplifier is therefore followed by a main amplifier with automatic gain control to regulate the electronic signal voltage, which is then filtered and sampled by a decision circuit synchronized by the clock recovery circuitry.
An inline fiber optical amplifier can be incorporated in front of the photodetector to form an optical receiver with an optical amplifier front end to improve its receiving sensitivity. This optical amplification at the front end of an optical receiver will be treated in this chapter dealing with optical amplification processes.
The receiver thus consists of four parts: the optical mixing front end, the electronic front-end section, the linear channel of the main amplifier with automatic gain control (AGC) if necessary, and the data recovery section. The optical mixing front end sums the optical fields of the LO and the optical signals. The polarization orientation between these lightwaves is critical to maximize their beating in the PD. Depending on whether the frequency difference between these fields is finite or zero, the electronic signals derived from the detector fall in the passband or the baseband, and the detection technique is termed heterodyne or homodyne, respectively.
Optical CoD can be distinguished from the “demodulation” schemes of communications techniques via the following definitions: (i) CoD is the mixing between two lightwaves or optical carriers, one an information-bearing lightwave and the other an LO with an average energy much larger than that of the signals; and (ii) demodulation refers to the recovery of baseband signals from the electrical signals.
A typical schematic diagram of a coherent optical communications system employing guided-wave media and components is shown in Figure 3.1, in which a narrow-linewidth laser incorporating an optical isolator, cascaded with an external modulator, usually forms the optical transmitter. Information is fed via a microwave power amplifier to an integrated-optic modulator, commonly of LiNbO_{3} or electroabsorption (EA) type. CoD is the principal feature of coherent optical communications and can be further distinguished into heterodyne and homodyne techniques, depending on whether there is a difference between the frequencies of the LO and the carrier of the signals. An LO is a laser source whose frequency can be tuned and which is approximately equivalent to a monochromatic source; a polarization controller is also used to match its polarization with that of the information-bearing carrier. The LO and the transmitted signal are mixed via a polarization-maintaining coupler and then detected by a coherent optical receiver. Most previous CoD schemes are implemented in a mixture of the photonic and electronic/microwave domains.
Coherent optical transmission has become the focus of research. One significant advantage is the preservation of all the information of the optical field during detection, leading to enhanced possibilities for optical multilevel modulation. This section investigates the generation of optical multilevel modulation signals. Several possible structures of optical M-ary phase-shift keying (M-ary PSK) and M-ary quadrature amplitude modulation (M-ary QAM) transmitters are shown and theoretically analyzed. Differences in the optical transmitter configuration and the electrical driving lead to different properties of the optical multilevel modulation signals. This is shown by deriving general expressions applicable to every M-ary PSK and M-ary QAM modulation format and is illustrated for Square 16-QAM modulation.
Coherent receivers are distinguished as synchronous or asynchronous. Synchronous detection requires an OPLL that recovers the phase and frequency of the received signals to lock the LO to the signal, so as to measure the absolute phase and frequency of the signals relative to those of the LO. Synchronous receivers thus allow direct mixing of the bandpass signals down to baseband, and this technique is termed homodyne reception. For asynchronous receivers, the frequency of the LO is only approximately the same as that of the received signals and no OPLL is required. In general, the optical signals are first mixed down to an intermediate frequency (IF), which is about two to three times the 3 dB passband; the electronic signals can then be recovered using an electrical PLL at this lower carrier frequency in the electrical domain. The mixing of the signals with an LO offset by an IF is referred to as heterodyne detection.
If no LO is used for demodulating the digital optical signals, then differential or self-homodyne reception may be utilized; this is classically termed an autocorrelation reception process or self-heterodyne detection.
Coherent communications were an important technique in the 1980s and the early 1990s, but research was then interrupted by the advent of optical amplifiers in the late 1990s, which offer up to 20 dB gain without difficulty. Nowadays, however, coherent systems have once again become the focus of interest, due to the availability of DSP and low-priced components, the partly relaxed receiver requirements at high data rates, and the several advantages that CoD provides. The preservation of the temporal phase in CoD enables new methods for adaptive electronic compensation of CD. With regard to WDM systems, coherent receivers offer tunability and allow channel separation via steep electrical filtering. Furthermore, only the use of CoD permits convergence to the ultimate limits of spectral efficiency. To reach higher spectral efficiencies, multilevel modulation is required. For this, too, coherent systems are beneficial, because all the information of the optical field is available in the electrical domain. In this way, the complex optical demodulation with interferometric detection that has to be used in direct detection systems can be avoided, and the complexity is transferred from the optical to the electrical domain. Several different modulation formats based on the modulation of all four quadratures of the optical field were proposed in the early 1990s, describing the possible transmitter and receiver structures and calculating the theoretical BER performance. However, a more detailed and practical investigation of multilevel coherent optical systems for today's networks and data rates has so far been missing.
Currently, coherent reception has attracted significant interest for the following reasons: (i) The signals received by coherent optical receivers are in the electrical domain and are proportional to those in the optical domain. In contrast to direct detection receivers, this allows exact electrical equalization or exact PE of the optical signals. (ii) Using heterodyne receivers, DWDM channels can be separated in the electrical domain by electrical filters with a sharp roll-off from the passband to the cutoff band. Presently, the availability of ultra-high sampling rate DSP allows the filtering to be conducted in the DSP, where it can be changed with ease.
However, coherent receivers also suffer disadvantages: (i) they are polarization sensitive, requiring polarization tracking at the front end of the receiver; (ii) homodyne receivers require an OPLL, and heterodyne receivers an electrical PLL, both of which need control and feedback circuitry, optical or electrical, which may be complicated; and (iii) for differential detection, compensation may be complicated due to the differential nature of the reception.
In a later chapter, when advanced modulation formats are presented for optically amplified transmission systems, the use of photonic components is extensively exploited to take advantage of the advanced technology of integrated optics and planar lightwave circuits. The modulation format depends on whether the amplitude, the phase, or the frequency of the carrier is manipulated, as mentioned in Chapter 2. In this chapter, the signals are coherently converted to the IF range in the electrical domain, and the down-converted carrier signals and signal envelope are detected and then recovered. Both binary-level and multilevel modulation schemes employing amplitude, phase, and frequency shift keying (FSK) are described.
Thus, CoD can be distinguished by the difference between the central frequency of the optical channel and that of the LO. Three types can be classified as follows: (i) heterodyne, when the difference is higher than the 3 dB bandwidth of the baseband signal; (ii) homodyne, when the difference is nil; and (iii) intradyne, when the frequency difference falls within the baseband of the signal.
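These three cases can be captured in a small helper function; a sketch in which the function name and the example frequencies are illustrative assumptions, not values from the text:

```python
def detection_type(f_signal_hz: float, f_lo_hz: float, baseband_bw_hz: float) -> str:
    """Classify coherent detection by the offset between the channel carrier
    and the LO, following the three cases above."""
    offset = abs(f_signal_hz - f_lo_hz)
    if offset == 0:
        return "homodyne"       # no frequency difference
    if offset > baseband_bw_hz:
        return "heterodyne"     # beat falls above the 3 dB baseband
    return "intradyne"          # offset inside the signal baseband
```

For example, an LO tuned 25 GHz away from a carrier whose baseband occupies 10 GHz yields heterodyne operation, while a 4 GHz offset yields intradyne operation.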
It is noted that to maximize the beating signals at the output of the photodetector, the polarizations of the LO and the signals must be aligned. In practice, this can be best implemented by the polarization diversity technique.
A typical schematic of a coherent optical reception subsystem is shown in Figure 3.3. Figure 3.4 shows more details of a heterodyne reception subsystem, in which the frequency of the channel carrier is different from that of the local oscillator. The LO, whose frequency can be higher or lower than that of the carrier, is mixed with the information-bearing carrier, allowing down- or up-conversion of the information signals to the IF range. The combined lightwave is converted by the PD into an electronic current signal, which is then filtered by an electrical bandpass filter (BPF) and demodulated by a demodulator. A low pass filter (LPF) is also used to remove the higher-order harmonics of the nonlinear photodetection process, the square-law detection. With an envelope detector, the process is asynchronous, hence the term asynchronous detection. If the down-converted carrier is recovered and then mixed with the IF signals, the detection is synchronous. Note that the detection is conducted in the IF range in the electrical domain, hence the stability of the frequency spacing between the signal carrier and the LO must be controlled: the mixing of these carriers results in an IF carrier in the electrical domain prior to the mixing process or envelope detection used to recover the signals.
Figure 3.3 Typical arrangement of a coherent optical communications system. LD/LC is a very narrow-linewidth laser diode acting as an LO without any phase locking to the signal optical carrier. PM coupler = polarization-maintaining fiber coupler, PC = polarization controller.
Figure 3.4 Schematic diagram of optical heterodyne detection: (a) asynchronous and (b) synchronous, receiver structures. LPF = low pass filter, BPF = bandpass filter, PD = photodiode.
The CoD thus relies on the electric field components of the signal and the LO. The polarization alignment of these fields is critical for optimum detection. The electric fields of the optical signals and the LO can be expressed as

E_{s}(t) = √(2P_{s}(t)) cos[ω_{s}t + ϕ_{s} + ψ(t)]
E_{LO}(t) = √(2P_{LO}) cos[ω_{LO}t + ϕ_{LO}]

where P_{s}(t) and P_{LO} are the instantaneous power of the signals and the average power of the LO, respectively; ω_{s} and ω_{LO} are the signal and LO angular frequencies; ϕ_{s} and ϕ_{LO} are the phases, including any phase noise, of the signal and the LO; and ψ(t) is the modulation phase. The modulation can be in amplitude, by switching the optical power on and off (amplitude shift keying—ASK), or in phase or frequency, by discrete or continuous variation of the time-dependent phase term. For discrete phase, it can be PSK, differential PSK (DPSK), or differential quadrature PSK (DQPSK); when the variation of the phase is continuous and the rate of variation differs between bit “1” and bit “0,” we have FSK.
Under ideal alignment of the two fields, the photodetection current can be expressed as

i(t) = (ηq/hν)[P_{s}(t) + P_{LO} + 2√(P_{s}(t)P_{LO}) cos((ω_{s} − ω_{LO})t + ϕ_{s} − ϕ_{LO} + ψ(t))]

where the higher-frequency (sum) term is eliminated by the photodetector frequency response, η is the quantum efficiency, q is the electronic charge, h is Planck's constant, and ν is the optical frequency.
Thus, the power of the LO dominates the shot-noise process and at the same time boosts the signal level, enhancing the signal-to-noise ratio (SNR). The oscillating term results from the beating between the LO and the signal inside the PD; its amplitude is proportional to the square root of the product of the LO and signal powers.
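This beating product can be checked numerically; a sketch in which the responsivity, powers, and beat frequency are assumed values for illustration only:

```python
import numpy as np

R = 1.0                   # photodiode responsivity (A/W), assumed
P_s, P_lo = 1e-6, 1e-3    # signal and LO powers (W); LO is much stronger
f_if = 1e9                # beat (intermediate) frequency (Hz), assumed
t = np.arange(0.0, 20e-9, 1e-12)

# Square-law detection keeps the DC terms and the difference-frequency beat;
# the sum-frequency term lies outside the PD response and is dropped.
i_pd = R * (P_s + P_lo + 2.0 * np.sqrt(P_s * P_lo) * np.cos(2 * np.pi * f_if * t))

beat_amp = (i_pd.max() - i_pd.min()) / 2.0   # amplitude of the oscillating term
```

The recovered amplitude equals 2ℜ√(P_{s}P_{LO}): the weak signal is effectively amplified by the strong LO in the beating product.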
The electronic signal power S and shot noise N_{s} can be expressed as

S = 2ℜ²P_{s}P_{LO};  N_{s} = 2qℜ(P_{s} + P_{LO})B

where ℜ = ηq/(hν) is the responsivity of the photodiode and B is the 3 dB bandwidth of the electronic receiver. Thus, the optical signal-to-noise ratio (OSNR) can be written as

OSNR = 2ℜ²P_{s}P_{LO} / [2qℜ(P_{s} + P_{LO})B + N_{eq}]

where N_{eq} is the total electronic noise equivalent power at the input of the electronic preamplifier of the receiver. From this equation, we can observe that if the power of the LO is increased so that the shot noise dominates over the equivalent electronic noise, the SNR increases at the same time, and the sensitivity of the coherent receiver is then limited only by the quantum noise inherent in the photodetection process. Under this quantum limit, the OSNR_{QL} is given by

OSNR_{QL} = ℜP_{s}/(qB) = ηP_{s}/(hνB)
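As a numerical illustration of this quantum limit (all parameter values below are assumptions), the shot-noise-limited heterodyne SNR reduces to approximately ηP_{s}/(hνB) once the LO shot noise dominates:

```python
import math

h_planck = 6.62607015e-34   # Planck's constant (J*s)
eta = 0.8                   # quantum efficiency, assumed
nu = 193.1e12               # optical frequency (Hz), ~1550 nm carrier
B = 10e9                    # receiver 3 dB bandwidth (Hz), assumed
P_s = 1e-6                  # received signal power (W), assumed

# Quantum-limited SNR: the shot noise of the dominant LO is the only noise left.
snr_ql = eta * P_s / (h_planck * nu * B)
snr_ql_db = 10.0 * math.log10(snr_ql)
```

With these illustrative numbers the quantum-limited SNR is about 625 (roughly 28 dB), showing how the limit scales linearly with received signal power and inversely with bandwidth.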
Under the ASK modulation scheme, the demodulator of Figure 3.4 is an envelope detector (in lieu of the generic demodulator) followed by a decision circuit: the eye diagram is obtained, and a sampling instant is established with a clock recovery circuit. Synchronous detection, on the other hand, requires locking between the frequencies of the carrier and the LO; the LO frequency is tuned to that of the carrier by tracking the frequency component of the beating signal. The amplitude-demodulated envelope can then be expressed as
The IF ω_{IF} is the difference between the frequencies of the LO and the signal carrier, and n_{x} and n_{y} are the orthogonal noise components, which are random variables.
The noise components can be assumed to follow a Gaussian probability distribution, independent of each other, with zero mean and variance σ²; the joint probability density function (PDF) can thus be given as

p(n_{x}, n_{y}) = (1/2πσ²) exp[−(n_{x}² + n_{y}²)/(2σ²)]
With respect to the phase and amplitude, this equation can be written as [3]

p(r, θ) = (r/2πσ²) exp[−(r² + A² − 2rA cos θ)/(2σ²)]

where r and θ are the amplitude and phase of the envelope and A is the peak amplitude of the detected IF signal.
The PDF of the amplitude alone can be obtained by integrating the phase–amplitude PDF over the range 0 to 2π and is given as

p(r) = (r/σ²) exp[−(r² + A²)/(2σ²)] I_{0}(rA/σ²)
where I_{0} is the modified Bessel function of the first kind and zeroth order. If a decision level δ is set to determine the “1” and “0” levels, then the probability of error and the BER can be obtained, assuming an equal probability of error between the “1s” and “0s”:
where Q is the Marcum Q-function and δ is given by
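These decision-level error probabilities can also be evaluated numerically from the Rayleigh (“0”) and Rice (“1”) envelope statistics; a sketch with σ normalised to 1 and the decision level fixed at δ = A/2, both simplifying assumptions for illustration:

```python
import numpy as np

def ask_envelope_ber(snr_db: float, n: int = 20000) -> float:
    """Approximate BER of ASK with envelope detection: the '0' error is the
    Rayleigh tail above delta; the '1' error integrates the Rice PDF below
    delta. sigma = 1 and delta = A/2 are simplifying assumptions."""
    sigma = 1.0
    A = np.sqrt(2.0 * 10 ** (snr_db / 10.0)) * sigma   # IF peak amplitude
    delta = A / 2.0
    p_e0 = np.exp(-delta**2 / (2 * sigma**2))          # Rayleigh tail above delta
    r = np.linspace(1e-9, delta, n)
    rice = (r / sigma**2) * np.exp(-(r**2 + A**2) / (2 * sigma**2)) * np.i0(r * A / sigma**2)
    dr = r[1] - r[0]
    p_e1 = float(np.sum((rice[:-1] + rice[1:]) / 2.0) * dr)  # trapezoid rule
    return 0.5 * (p_e0 + p_e1)
```

As expected, the error probability falls monotonically as the SNR grows; the exact optimum threshold differs slightly from A/2 at low SNR.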
When the power of the LO is much larger than that of the signal and the equivalent noise current power, this SNR becomes
The physical representation of the detected current and the noise currents due to quantum shot noise and the noise equivalent of the electronic preamplifier is shown in Figure 3.5. The signal current here is general: it can be derived from the output of a single photodetector or from a back-to-back (B2B) pair of photodetectors in a balanced receiver used for detecting the phase difference of DPSK, DQPSK, or continuous-phase frequency-shift keying (CPFSK) signals and converting it to amplitude.
Figure 3.5 Equivalent current model at the input of the optical receiver, average signal current and equivalent noise current of the electronic preamplifier as seen from its input port.
The BER is minimized by setting its derivative with respect to the decision level δ to zero; an approximate value of the optimum decision level can be obtained as
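Equivalently, the optimum decision level can be located by a direct numerical search over candidate thresholds; a sketch for equal-variance Gaussian “1”/“0” levels, with all values assumed:

```python
import math

def optimum_threshold(mu1: float, mu0: float, sigma: float, steps: int = 2000) -> float:
    """Scan decision levels between the '0' and '1' means and return the one
    minimising the total error probability, assuming equal-variance Gaussian
    statistics; for equal sigmas the optimum is the midpoint."""
    def q(x: float) -> float:          # Gaussian tail probability Q(x)
        return 0.5 * math.erfc(x / math.sqrt(2.0))
    best_d, best_ber = mu0, 1.0
    for k in range(steps + 1):
        d = mu0 + (mu1 - mu0) * k / steps
        ber = 0.5 * (q((mu1 - d) / sigma) + q((d - mu0) / sigma))
        if ber < best_ber:
            best_ber, best_d = ber, d
    return best_d
```

With unequal noise variances on the two levels (the usual case after LO beating), the same search applies but the optimum moves toward the less noisy level.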
ASK can be detected using synchronous detection^{*}
* Synchronous detection is implemented by mixing the signals and a strong local oscillator in association with the phase locking of the local oscillator to that of the carrier.
and the BER is given by

Under the PSK modulation format, the detection is similar to the heterodyne detection of Figure 3.4 (see Figure 3.6), but after the BPF an electrical mixer is used to track the phase of the detected signal. The received signal is given by
Figure 3.6 Schematic diagram of optical heterodyne detection for PSK format.
The information is contained in the timedependent phase term φ(t).
When the phase and frequency of the voltage-controlled oscillator (VCO) are matched to those of the signal carrier, the received electrical signal can be simplified as
Under the Gaussian statistical assumption, the probability of the received signal of a “1” is given by
Furthermore, the probability of the “0” and that of the “1” are assumed to be equal. We can obtain the BER as the total probability of the received “1” and “0”:
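Under these assumptions the total error probability takes the familiar form P_e = ½ erfc(√ρ), with ρ the SNR; a minimal sketch:

```python
import math

def psk_sync_ber(snr_linear: float) -> float:
    """BER of synchronous PSK detection under the Gaussian-noise assumption,
    P_e = 0.5 * erfc(sqrt(SNR))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear))
```

For example, ρ = 9 (about 9.5 dB) gives a BER near 1.1 × 10⁻⁵, and the BER falls steeply as the SNR grows.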
As observed, synchronous detection requires a carrier recovery circuit, usually implemented using a PLL, which complicates the overall receiver structure. Alternatively, the signal can be detected by a self-homodyne process in which the carrier of one bit period beats with that of the next consecutive bit; this is called differential detection. The detection process can be modified as shown in Figure 3.7, in which the phase of the IF carrier of one bit is compared with that of the next bit and the difference is recovered to represent bit “1” or “0.” This requires differential coding at the transmitter and an additional phase comparator in the recovery process. In later chapters on DPSK, the differential decoding is implemented in the photonic domain via a photonic phase comparator in the form of a Mach–Zehnder delay interferometer (MZDI) with a thermal section for tuning the delay of the optical delay line. The BER can be expressed as
Figure 3.7 Schematic diagram of optical heterodyne and differential detection for PSK format.
where s(t) is the modulating waveform and A_{k} represents bit “1” or “0.” This is equivalent to a baseband signal, and the ultimate limit is the BER of the baseband system.
The noise is dominated by the quantum shot noise of the LO, with its mean-square noise current given by
where H(jω) is the transfer function of the receiver system, normally the transimpedance of the electronic preamplifier combined with a matched filter. As the power of the LO is much larger than that of the signal, integrating over the 3 dB bandwidth of the transfer function, this current can be approximated by
Hence, the SNR (power) is given by
The BER is the same as that of a synchronous detection and is given by
The sensitivity of the homodyne process is at least 3 dB better than that of the heterodyne process, and the required detection bandwidth is half that of its counterpart because of the double-sideband nature of heterodyne detection.
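The 3 dB figure follows directly from the beat-term powers; a quick numerical check in which the responsivity and powers are assumed values (the ratio is independent of them):

```python
import math

R, P_s, P_lo = 1.0, 1e-6, 1e-3          # responsivity (A/W) and powers (W), assumed
q, B = 1.602176634e-19, 10e9            # electron charge (C), receiver bandwidth (Hz)

n_shot = 2.0 * q * R * P_lo * B         # LO-dominated shot-noise term
snr_het = 2.0 * R**2 * P_s * P_lo / n_shot   # heterodyne beat power: 2*R^2*Ps*Plo
snr_hom = 4.0 * R**2 * P_s * P_lo / n_shot   # homodyne beat power is doubled

advantage_db = 10.0 * math.log10(snr_hom / snr_het)
```

The result is 10·log₁₀(2) ≈ 3.01 dB, regardless of the assumed powers, since only the factor of two in the beat power differs between the two schemes.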
FSK is based on two frequency components that determine the bits “1” and “0.” There are a number of FSK-related formats depending on whether the change between the frequencies representing the bits is continuous or noncontinuous: the FSK and CPFSK modulation formats. For noncontinuous FSK, detection is usually performed by a dual-frequency discriminator structure, as shown in Figure 3.8, in which two narrow-band filters are used to extract the signals. For CPFSK, either the frequency discriminator or the balanced receiver used for PSK detection can be employed. The frequency discriminator is indeed preferred over the balanced receiving structure, because it eliminates the phase contribution of the LO or of optical amplifiers, which may be used as optical preamplifiers.
Figure 3.8 Schematic diagram of optical homodyne detection of FSK format.
When the frequency deviation from the carrier for the “1” and “0” equals a quarter of the bit rate (that is, a tone spacing of half the bit rate), the FSK scheme is termed minimum-shift keying (MSK). At this frequency spacing, the phase is continuous between the two states.
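A quick check of this phase continuity: with a deviation of a quarter of the bit rate on either side of the carrier, each bit advances the excess phase by exactly ±π/2, so the trajectory never jumps. A sketch with a normalised bit rate:

```python
import math

rb = 1.0                       # bit rate, normalised
t_bit = 1.0 / rb
tone_spacing = 0.5 * rb        # MSK: half the bit rate between the two tones
deviation = tone_spacing / 2   # +/- a quarter of the bit rate about the carrier

# Excess phase accumulated over one bit at the upper (or lower) tone:
phase_step = 2.0 * math.pi * deviation * t_bit   # equals pi/2

# Accumulate the phase over a few bits; it is continuous by construction.
bits = [1, 0, 0, 1, 1]
phase = 0.0
trajectory = []
for b in bits:
    phase += phase_step if b else -phase_step
    trajectory.append(phase)
```

The per-bit increment of exactly π/2 is what makes MSK the smallest tone spacing at which the two symbol waveforms stay orthogonal with continuous phase.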
Optical homodyne detection matches the phase of the transmitted signal to that of the LO. A schematic of the optical receiver is shown in Figure 3.9. The field of the incoming optical signals is mixed with that of the LO, whose frequency and phase are locked to those of the signal carrier via a PLL. The resultant electrical signal is then filtered and passed to a decision circuit.
Figure 3.9 General structure of an optical homodyne detection system. FC = fiber coupler, LPF = low pass filter, PLL = phase lock loop.
Optical homodyne detection requires matching the frequency and phase of the LO to those of the signal carrier. This type of detection can, in principle, give a very high sensitivity of 9 photons/bit. Implementation normally requires an OPLL, whose structure, following a recent development [4], is shown in Figure 3.10. The LO is locked to the carrier frequency of the signals by shifting its frequency to a modulated sideband component via an optical modulator. A single-sideband optical modulator is preferred, although a double-sideband one may also be used. This modulator is driven by the output of a VCO whose frequency is determined by the voltage at the output of an electronic BPF, conditioned to meet the voltage level required to drive the modulator electrode. The frequency of the LO is normally tuned so that its difference with respect to the signal carrier falls within the passband of the electronic filter. When the frequency difference is zero, there is no voltage at the output of the filter and the OPLL has reached its final locking stage. The bandwidth of the optical modulator is important, as it sets the locking range between the two optical carriers.
Figure 3.10 Schematic diagram of optical homodyne detection—electrical line (dashed) and optical line (continuous and solid) using an OPLL.
Any frequency offset between the LO and the carrier is detected, and the noise is filtered by the LPF. This voltage is then fed to a VCO to generate a sinusoidal wave, which drives an intensity modulator acting on the lightwaves of the LO. The output spectrum of the modulator exhibits two sidebands and the LO lightwave; one of these components is then locked to the carrier, and the closed loop ensures stable locking. If the intensity modulator is biased at the minimum-transmission point and the output of the VCO is adjusted to 2V_{π} with driving signals π/2 out of phase with each other, then both the carrier and one sideband are suppressed. This removes ambiguity in the closed-loop locking.
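The closed-loop behaviour can be sketched as a highly simplified discrete-time model: the residual beat frequency at the filter output steers the VCO until the offset vanishes. The loop gain and step count below are illustrative assumptions, not values from the text:

```python
def opll_residual_offset(f_offset_hz: float, loop_gain: float = 0.2,
                         steps: int = 200) -> float:
    """First-order discrete-time loop: the detected LO/carrier beat frequency
    is fed back to the VCO each iteration, pulling the LO sideband onto the
    signal carrier. Returns the residual offset after `steps` iterations."""
    f_vco = 0.0                        # frequency shift applied via the modulator
    for _ in range(steps):
        error = f_offset_hz - f_vco    # beat frequency seen after mixing
        f_vco += loop_gain * error     # integrator steers the VCO
    return f_offset_hz - f_vco
```

An initial 1 GHz offset decays geometrically (by a factor of 1 − loop gain per step) toward zero; a real OPLL must additionally contend with loop delay and laser phase noise, which bound the usable loop bandwidth.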
Under a perfect phase matching, the received signal is given by
where a_{k} takes the values ±1 and s(t) is the modulating waveform. This is a baseband signal, and thus the error rate is the same as that of a baseband system.
The shotnoise power induced by the LO and the signal power can be expressed as
where $\left|H(j\omega)\right|$ is the transfer function of the receiver, whose expression under matched filtering can be
where T is the bit period. Then the noise power becomes
Thus, the SNR is
and the BER is
For homodyne detection, a super quantum limit can be achieved. In this case, the LO is used in a very special way: it matches the incoming signal field in polarization, amplitude, and frequency, and is assumed to be phase locked to the signal. Assuming that the phase is perfectly modulated so that the signal acts in phase or in counter-phase with the LO, homodyne detection gives a normalized signal current of
Assuming further that ${n}_{p}={n}_{\text{LO}}$, the number of photons of the LO used to generate the detected signals, the current can be replaced with 4n_{p} for the detection of a "1" and zero for a "0" symbol.
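The 9 photons/bit quantum limit quoted above can be checked numerically. The sketch below uses the standard shot-noise-limited homodyne BPSK result BER = 0.5 erfc(√(2n_p)), equivalently Q(2√n_p); this closed form is an illustrative textbook expression rather than a formula reproduced from this section.

```python
import math

def homodyne_bpsk_ber(n_p: float) -> float:
    """BER of ideal shot-noise-limited homodyne BPSK with n_p photons/bit.
    BER = Q(2*sqrt(n_p)) = 0.5*erfc(sqrt(2*n_p))."""
    return 0.5 * math.erfc(math.sqrt(2.0 * n_p))

# The quantum limit: ~9 photons/bit gives BER ~ 1e-9
print(homodyne_bpsk_ber(9))  # ~1e-9
```

Evaluating the formula at 9 photons/bit indeed yields a BER of about 10^-9, consistent with the sensitivity limit stated earlier.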
When the linewidth of the light sources is significant, the IF deviates due to a phase fluctuation, and the PDF is related to this linewidth conditioned on the deviation δω of the IF. For a signal power of p_{s}, the total probability of error is given as
The PDF of the IF under a frequency deviation can be written as [5]
where Δυ is full linewidth at the full width half maximum (FWHM) of the power spectral density and T is the bit period.
DPSK detection requires an MZDI and a balanced receiver, implemented in either the optical or the electrical domain. If in the electrical domain, the beating in the PD between the incoming signals and the LO gives the beating electronic current, which is then split: one branch is delayed by one bit period, and the two branches are then summed. The heterodyne signal current can be expressed as [6]
The phase $\varphi_{s}(t)$ is expressed by
The first term is the phase of the data and takes the value 0 or π. The second term represents the phase noise due to shot noise of the generated current and the third and fourth terms are the quantum shot noise due to the LO and the signals. The probability of error is given by
where p_{n}(.) is the PDF of the phase noise due to the shot noise and p_{q}(.) is for the quantum phase noise generated from the transmitter and the LO [7].
The probability of error can be written as
where Γ(.) is the gamma function and is the modified Bessel function of the first kind. The PDF of the quantum phase noise can be given as [8]
where D is the phase diffusion constant, and the standard deviation from the central frequency
is the sum of the transmitter and the LO FWHM linewidth. Substituting Equations 3.40 and 3.41 into Equation 3.39, we obtain
This equation gives the probability of error as a function of the received power. The probability of error is plotted against the receiver sensitivity and the product of the linewidth and the bit rate (i.e., the laser linewidth relative to the bit rate) in Figure 3.11 for the DPSK modulation format at a 140 Mbps bit rate, with the laser linewidth varied from 0 to 2 MHz.
Figure 3.11 (a) Probability of error versus receiver sensitivity with linewidth as a parameter in MHz. (b) Degradation of optical receiver sensitivity at BER = 10–9 for DPSK systems as a function of the linewidth and bit period—bit rate = 140 Mb/s.
Recently, the laser linewidth requirement for DQPSK modulation with differential detection has also been studied. No LO is used, that is, self-coherent detection. It has been shown that a transmitter laser linewidth of up to 3 MHz does not significantly influence the probability of error, as shown in Figure 3.12 [8]. Figure 3.13 shows the maximum linewidth of a laser source in a 10 GSymbols/s system. The loose bound neglects linewidth if its impact is to double the BER; the tighter bound neglects linewidth if its impact is a 0.1 dB SNR penalty.
Figure 3.12 Analytical approximation (solid line) and numerical evaluation (triangles) of the BER for the cases of zero linewidth and that required to double the BER. The dashed line is the linear fit for zero linewidth. Bit rate 10 Gb/s per channel.
Figure 3.13 Criteria for neglecting linewidth in a 10 GSymbols/s system. The loose bound is to neglect linewidth if the impact is to double the BER with the tighter bound being to neglect linewidth if the impact is a 0.1 dB SNR penalty. Bit rate 10 GSymbols/s.
The probability of error of CPFSK can be derived by taking into consideration the delay line of the differential detector, the frequency deviation, and phase noise [10]. Similar to Figure 3.8, the differential detector configuration is shown in Figure 3.14a, and the conversion of frequency to voltage relationship in Figure 3.14b. If heterodyne detection is employed, then a BPF is used to bring the signals back to the electrical domain.
Figure 3.14 (a) Configuration of a CPFSK differential detection and (b) frequency to voltage conversion relationship of FSK differential detection.
The detected signal phase at the shotnoise limit at the output of the LPF can be expressed as
where τ is the differential detection delay time, ∆ω is the deviation of the angular frequency of the carrier for the "1" or "0" symbol, φ(t) is the phase noise due to the shot noise, n(t) is the phase noise due to the transmitter and LO quantum shot noise, and the binary data symbol takes the values ±1.
Thus, by integrating the detected phase from $\frac{\Delta\omega}{2}\tau$ to $\pi-\frac{\Delta\omega}{2}\tau$, we obtain the probability of error as
Similar to the case of DPSK system, substituting Equations 3.40 and 3.41 into Equation 3.45, we obtain
where ω_{m} is the deviation of the angular frequency, with m the modulation index, and T_{0} is the pulse period or bit period. The modulation index β is defined as the ratio of the actual frequency deviation to the maximum frequency deviation. Figure 3.15 shows the degradation of the power penalty required to achieve the same BER as a function of the linewidth factor Δυτ and the modulation index β.
Figure 3.15 (a) Dependence of receiver power penalty at a BER of 10–9 on modulation index β (ratio between frequency deviation and maximum frequency spacing between f1 and f2). (b) Receiver power penalty at a BER of 10–9 as a function of the product of the beat bandwidth and the bit delay time—effects excluding LD phase noise.
Optical phase diversity receivers combine the advantage of homodyne reception, its minimum signal processing bandwidth, with that of heterodyne reception, which requires no optical phase locking. The term diversity is well known in radio transmission links, where it describes transmission over more than one path. In optical receivers, the paths are the different polarization and phase paths. In intradyne detection, the frequency difference, the IF or LOFO (LO frequency offset), between the LO and the central carrier is nonzero and lies within the bandwidth of the baseband signal, as illustrated in Figure 3.16 [11]. Naturally, the control and locking of the carrier and the LO cannot be exact, sometimes owing to jittering of the source. Most of the time, the laser frequency is stably locked by oscillating the reflection mirror, and hence the central frequency varies by a few hundred kHz. Thus, intradyne CoD is more realistic. Furthermore, the DSP in a modern coherent reception system can extract this difference without much difficulty in the digital domain [12]. Obviously, heterodyne detection requires electronic devices operating over a large frequency range, whereas homodyne and intradyne reception require simpler electronics. Either differential or nondifferential formats can be used in DSP-based coherent reception. For differential-based reception, differential decoding is advantageous when there are bit-cycle slips due to walk-off of the pulse sequences over very long noncompensated fiber transmission lines.
Figure 3.16 Spectrum of CoD (a) homodyne, (b) intradyne, and (c) heterodyne
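Since the LOFO in intradyne reception lies within the signal bandwidth, the DSP can estimate it digitally, for example with the widely used FFT-based Mth-power method. The sketch below is an illustrative assumption, not a method from this text: a noiseless QPSK field at an assumed 10 GBaud with an assumed 0.8 GHz offset; raising the samples to the 4th power strips the modulation, leaving a tone at four times the offset.

```python
import numpy as np

rng = np.random.default_rng(1)
Rs = 10e9            # symbol rate (assumed), 1 sample/symbol
f_off = 0.8e9        # LO frequency offset to be estimated (assumed)
N = 4096
sym = np.exp(1j * (np.pi/4 + np.pi/2 * rng.integers(0, 4, N)))  # QPSK symbols
t = np.arange(N) / Rs
r = sym * np.exp(2j * np.pi * f_off * t)   # received field with LOFO (noise omitted)

# Raising to the 4th power removes the QPSK modulation (sym**4 is constant),
# leaving a spectral tone at 4*f_off; locate it with an FFT
spec = np.fft.fft(r**4)
freqs = np.fft.fftfreq(N, d=1/Rs)
f_est = freqs[np.argmax(np.abs(spec))] / 4
print(f_est)  # close to 0.8 GHz
```

The FFT bin spacing limits the resolution to Rs/(4N); in practice the estimate is refined by interpolation or a feedback loop, and noise averaging is needed.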
The diversity in phase and polarization can be achieved by using a π/2 hybrid coupler that splits the polarizations of the LO and the received channels, mixes them with a π/2 optical phase shift, and detects the mixed signals with balanced photodetectors. This diversity detection is described in the next few sections (see also Figure 3.21).
The coherent techniques described above offer significant improvement but face a setback: they require a stable LO and an OPLL for locking the LO frequency to that of the signal carrier.
DSP has been widely used in wireless communications and plays a key role in the implementation of DSP-based coherent optical communication systems. DSP techniques have been applied to coherent optical communication systems to overcome the difficulties of the OPLL and to improve transmission performance in the presence of fiber-degrading effects, including CD, PMD, and fiber nonlinearities.
Coherent optical receivers have the following advantages: (1) shot-noise-limited receiver sensitivity can be achieved with sufficient LO power; (2) closely spaced WDM channels can be separated with electrical filters having sharp roll-off characteristics; and (3) phase detection can improve the receiver sensitivity compared with the IMDD system [13]. In addition, any kind of multilevel phase modulation format can be introduced by using the coherent receiver. While the spectral efficiency of binary modulation formats is limited to 1 bit/s/Hz/polarization (the Nyquist limit), multilevel modulation formats carrying N bits of information per symbol can achieve a spectral efficiency of up to N bits/s/Hz/polarization. Recent research has focused on M-ary PSK and even QAM with CoD, which can increase the spectral efficiency by a factor of log_{2}M [14–16]. Moreover, for the same bit rate, because the symbol rate is reduced, the system has a higher tolerance to CD and PMD.
However, one of the major challenges in CoD is to overcome the carrier phase noise when an LO is used to beat with the received signals to retrieve the modulated phase information. Phase noise originating in the lasers causes a power penalty to the receiver sensitivity. Self-coherent multisymbol detection of optical differential M-ary PSK has been introduced to improve system performance; however, it requires higher analog-to-digital conversion resolution and more DSP power than a digital coherent receiver [17]. Further, differential encoding is also necessary in this scheme. As for the coherent receiver, an OPLL is one option to track the carrier phase with respect to the LO carrier in homodyne detection. However, an OPLL operating at optical wavelengths in combination with distributed feedback lasers may be quite difficult to implement because the product of the laser linewidth and the loop delay is too large [18]. Another option is to use an electrical PLL to track the carrier phase after down-converting the optical signal to an IF electrical signal in a heterodyne detection receiver, as mentioned above. Compared to heterodyne detection, homodyne detection offers better sensitivity and requires a smaller receiver bandwidth [19]. On the other hand, coherent receivers employing high-speed analog-to-digital converters (ADCs) and high-speed baseband DSP units are becoming increasingly attractive alternatives to an OPLL for demodulation. A conventional block Mth-power phase estimation (PE) scheme is proposed in [13,18], raising the received M-ary PSK signals to the Mth power to estimate the phase reference in conjunction with a coherent optical receiver. However, this scheme requires nonlinear operations, such as taking the Mth power and resolving the ±2π/M phase ambiguity, which incur a large latency in the system. Such nonlinear operations limit the scheme's potential for real-time processing.
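A minimal sketch of the block Mth-power PE scheme mentioned above, for QPSK (M = 4). The laser phase random walk, noise level, block length, and π/4-offset constellation are all assumed illustrative values, not parameters from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L = 4, 2048, 32
phase = np.cumsum(rng.normal(0, 0.01, N))                  # laser phase random walk
data = np.exp(1j * (np.pi/4 + np.pi/2 * rng.integers(0, M, N)))
r = data * np.exp(1j*phase) + 0.05*(rng.normal(size=N) + 1j*rng.normal(size=N))

# Block Mth-power estimator: data**M = -1 for this pi/4-offset QPSK alphabet,
# so -sum(r**M) over a block points along exp(j*M*phase); unwrap resolves the
# +/-2pi/M ambiguity between successive blocks
blocks = (r**M).reshape(-1, L)
theta = np.unwrap(np.angle(-blocks.sum(axis=1))) / M
rec = r * np.exp(-1j * np.repeat(theta, L))                # de-rotated symbols

k = np.round((np.angle(rec) - np.pi/4) / (np.pi/2))        # hard decisions
dec = np.exp(1j * (np.pi/4 + np.pi/2 * k))
ser = np.mean(np.abs(dec - data) > 1e-3)
print(ser)  # close to 0 when the estimator tracks the phase
```

Note the nonlinear operations the text warns about: the Mth power, the arg(.), and the unwrap step resolving the phase ambiguity, all of which add latency in hardware.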
In addition, nonlinear phase noises always exist in longhaul systems due to the Gordon–Mollenauer effect [20], which severely affect the performance of a phasemodulated optical system [21]. The results in [22] show that such Mth power PE techniques may not effectively deal with nonlinear phase noise.
The maximum-likelihood (ML) carrier phase estimator derived in [23] can be used to approximate ideal synchronous CoD in optical PSK systems. The ML phase estimator requires only linear computations, and thus it is more feasible for online processing in real systems. The ML estimation receiver outperforms the Mth-power block phase estimator and conventional differential detection, especially when nonlinear phase noise is dominant, thus significantly improving the receiver sensitivity and the tolerance to nonlinear phase noise. The ML phase estimation algorithm is expected to improve the performance of coherent optical communication systems using various M-ary PSK and QAM formats. The improvement offered by DSP at the receiver end can be significant for transmission systems in the presence of fiber-degrading effects, including CD, PMD, and nonlinearities, for both single-channel and DWDM systems.
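The linear nature of an ML phase estimator can be illustrated with a toy model (this is a generic decision-directed/pilot-aided sketch, not the specific estimator of [23]): given reference symbols, the ML estimate of a common carrier phase is the angle of the correlation between the received and reference symbols. All numerical values below are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000
s = np.exp(1j * np.pi/2 * rng.integers(0, 4, N))   # known (or decided) QPSK symbols
theta_true = 0.7                                   # common carrier phase to estimate
r = s * np.exp(1j*theta_true) + 0.1*(rng.normal(size=N) + 1j*rng.normal(size=N))

# ML estimate under AWGN: angle of the correlation between received and
# reference symbols -- a single linear sum followed by one arg(.)
theta_ml = np.angle(np.sum(r * np.conj(s)))
print(theta_ml)  # ~0.7
```

Only one multiply-accumulate per symbol is needed, which is why the text describes the ML estimator as more amenable to real-time processing than the Mth-power scheme.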
The electronic amplifier as a preamplification stage of an optical receiver plays a major role in the detection of optical signals, so that the optimum SNR, and therefore the OSNR, can be derived based on the photodetector responsivity. Under CoD, the amplifier noise must be much less than the quantum shot noise contributed by the high power level of the LO, which is normally about 10 dB above the average signal power.
This section therefore gives an introduction to electronic amplifiers for wideband signals applicable to ultra-high-speed, high-gain, and low-noise transimpedance amplifiers (TIAs). We concentrate on differential-input TIAs, but address the detailed design of a single-input single-output TIA with a noise suppression technique in Section 3.7, together with the design strategy for achieving stability in the feedback amplifier as well as low noise and wide bandwidth. We define the electronic noise of the preamplifier stage as the total equivalent input noise spectral density; that is, all the noise sources (current and voltage sources) of all elements of the amplifier are referred to the input port of the amplifier, yielding an equivalent current source from which the current density is derived. Once this current density is found, the total equivalent noise current at the input can be obtained once the overall bandwidth of the receiver is determined. With this current and the average signal power, we can obtain without difficulty the SNR at the input stage of the optical receiver, and then the OSNR. Conversely, if the OSNR required at the receiver is specified for a particular modulation format, then, given the optical signal power available at the front of the receiver and the responsivity of the photodetector, we can determine the maximum electronic noise spectral density allowable for the preamplification stage, and hence design the amplifier electronic circuit.
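As a numerical illustration of this budget (all component values below are assumed, not taken from the text), the input-referred SNR follows directly from the equivalent noise current density, the receiver bandwidth, the responsivity, and the average optical power:

```python
import math

# Assumed example values for illustration only
i_n_density = 15e-12   # A/sqrt(Hz): total equivalent input noise current density
B = 30e9               # Hz: receiver 3 dB bandwidth
R = 0.8                # A/W: photodiode responsivity
P_avg = 100e-6         # W: average optical signal power at the photodetector

i_noise_rms = i_n_density * math.sqrt(B)   # total input-referred noise current
i_sig = R * P_avg                          # mean signal photocurrent
snr_db = 20 * math.log10(i_sig / i_noise_rms)
print(snr_db)  # about 30 dB for these assumed values
```

Running the budget in reverse, a required SNR fixes the maximum allowable i_n_density for a given signal power, which is the design constraint on the preamplifier described above.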
The principal function of an optoelectronic receiver is to convert the received optical signals into their electronic equivalents, followed by amplification, sampling, and processing to recover the original shapes and sequence. First, the optical signals must be converted to an electronic current in the photodetection device, a photodetector of either the pin or APD type, in which the optical power is absorbed in the active region and the generated electrons and holes are attracted to the positively and negatively biased electrodes, respectively. The generated current is thus proportional to the power of the optical signals, hence the name "square-law" detection. The pin detector is structured with p+- and n+-doped regions sandwiching the intrinsic layer in which the absorption of the optical signal occurs. A high electric field is established in this region by reverse biasing the diode, so the electrons and holes are attracted to either side of the diode, generating the current. Similarly, an APD works with the reverse-biasing level close to the reverse breakdown level of the pn junction (no intrinsic layer), so that the electronic carriers are multiplied in an avalanche flow when the optical signals are absorbed.
This photogenerated current is then fed into an electronic amplifier whose transimpedance must be sufficiently high and whose noise sufficiently low that an adequate voltage signal can be obtained and further amplified by a main amplifier of the voltage-gain type. For high-speed and wideband signals, the transimpedance amplification type is preferred, as it offers a much wider bandwidth than the high-impedance type, though its noise level may be higher. TIAs come in two types: single-input single-output, and differential-input single-output. The output ports can be differential, with a complementary port. The differential-input TIA offers a much higher transimpedance gain (Z_{T}) and a wider bandwidth as well. This is attributed to the use of a long-tail pair at the input, and hence a reasonably high input impedance that eases feedback stability [24–26].
In Section 3.3, a case study of a coherent optical receiver is described from design to implementation, including the feedback control and noise reduction. Although the corner frequency is only a few hundred MHz, given the limited transition frequency of the transistors, this bandwidth is remarkable. The design is scalable to ultra-wideband reception subsystems.
Two types of TIAs are described, distinguished by the terms single-input TIA and differential-input TIA, that is, by whether the amplifier provides a single port or a differential two-port at its input. The latter type is normally designed using a differential transistor pair, termed a "long-tail pair," instead of the single-transistor input stage of the former type.
We prefer to treat this section as a design example and describe the experimental demonstration of a wideband and low-noise amplifier. In the next section, the differential-input TIA is treated, with large transimpedance and reasonably low noise.
An example circuit of the differential-input TIA is shown in Figure 3.17, in which a long-tail pair, or differential pair, is employed at the input stage. Two matched transistors are used to ensure maximum common-mode rejection and differential-mode operation. This pair has a very high input impedance, so the feedback from the output stage can be stable. The feedback resistance can thus be increased up to the limit set by the stability locus of the network poles, which offers high transimpedance Z_{T} and wide bandwidth. A typical Z_{T} of 3000–6000 Ω can be achieved with a 30 GHz 3 dB bandwidth, as shown in Figures 3.18 and 3.19. The chip image of the TIA can be seen in Figure 3.18a. Such a TIA can be implemented in either InP or SiGe. The advantage of SiGe is that the circuit can be integrated with a high-speed Ge-APD detector, an ADC, and a DSP. If implemented in InP, a high-speed pin detector or APD can be integrated and then radio-frequency (RF) interconnected with the ADC and DSP. The differential group delay may be serious and must be compensated in the digital processing domain.
Figure 3.17 A typical structure of a differential TIA [27] with differential feedback paths.
Figure 3.18 Differential amplifiers: (a) chip level image and (b) referred input noise equivalent spectral noise density. Inphi TIA 3205 (type 1) and 2850 (type 2).
Figure 3.19 Differential amplifier: frequency response and differential group delay.
There are several noise sources in any electronic system, including thermal noise, shot noise, and, especially in optoelectronic detection, quantum shot noise. Thermal noise results from the random thermal motion of electrons in the resistive elements of electronic devices at any operating temperature above absolute zero; it depends on the temperature. Shot noise arises from the discrete, randomly scattered flow of electrons, and thus depends on the strength of the flowing currents, such as the biasing currents of electronic devices. Quantum shot noise is generated by the current emitted in the optoelectronic detection process, which depends on the intensity of the optical signals or sources imposed on the detectors; this type of noise is therefore signal dependent. In CoD, the mixing of the LO laser and the signal normally occurs with the LO much stronger than the average signal power, so the quantum shot noise is dominated by that of the LO.
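The three mechanisms can be compared through their standard one-sided spectral densities: 4k_BT/R for thermal noise, 2qI for shot noise on a bias current, and 2qℜP_LO for the LO-induced quantum shot noise. The component values below are assumed for illustration only.

```python
# Physical constants
k_B = 1.380649e-23     # J/K
q = 1.602176634e-19    # C

# Assumed example values
T = 300.0              # K: operating temperature
R_f = 500.0            # ohm: resistive element (e.g., feedback resistor)
I_bias = 5e-3          # A: device bias current
R_pd = 0.8             # A/W: photodiode responsivity
P_LO = 1e-3            # W: LO power (0 dBm), ~10 dB above the signal power

S_thermal = 4 * k_B * T / R_f     # thermal noise current density, A^2/Hz
S_shot = 2 * q * I_bias           # shot noise of the bias current, A^2/Hz
S_LO = 2 * q * R_pd * P_LO        # LO-induced quantum shot noise, A^2/Hz
print(S_thermal, S_shot, S_LO)
```

Note that S_LO scales linearly with P_LO, which is why, with a strong LO, the coherent receiver can be pushed toward the shot-noise (quantum) limit provided the electronic noise densities stay below S_LO.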
In practice, an equivalent electronic noise source is the total noise referred to the input of the electronic amplifier, which can be obtained by measuring the total spectral density of the noise distribution over the whole bandwidth of the amplification device. The total noise spectral density can thus be evaluated and referred to the input port. For example, if the amplifier is of the transimpedance type, the transimpedance of the device is measured first, and the measured voltage spectral density at the output port is then referred to the input; in this case, it is the total equivalent noise spectral density. The common term employed and specified for TIAs is the total equivalent noise spectral density over the midband region of the amplifying device, defined as the flat-gain region from DC to the 3 dB corner of the frequency response of the electronic device.
Figure 3.20 illustrates the meaning of the total equivalent noise sources referred to the input port of a two-port electronic amplifying device. A noisy amplifier with an input excitation current source, typically the signal current generated by the PD after optical-to-electrical conversion, can be represented by a noiseless amplifier with the current source in parallel with a noise source whose strength equals the total equivalent noise current referred to the input. The total equivalent noise current is obtained by taking the product of this total equivalent noise current spectral density and the 3 dB bandwidth of the amplifying device. Thus, the SNR at the input of the electronic amplifier is given by
Figure 3.20 Equivalent noise spectral density current sources.
From this SNR referred to the input of the electronic front end, one can estimate the eye opening of the voltage signals at the output of the amplifying stage, which is normally required by the ADC for sampling and conversion to digital signals for processing. One can then estimate the required OSNR at the input of the photodetector and hence the launch power required at the transmitter over several spans with given attenuation factors.
Detailed analyses of amplifier noise and the equivalent noise sources referred to the input ports are given in Annex 2. Note that noise has no direction of flow; noise powers always add and do not subtract, so noise is measured as a power and not as a current. Electrical spectrum analyzers are therefore commonly used to measure the total noise spectral density, that is, the distribution of the noise voltage over the spectral range under consideration, defined as the noise power spectral density distribution.
In the years after the introduction of optical coherent communications in the mid-1980s, the invention of optical amplifiers left coherent reception behind, until recently, when long-haul transmission suffered from the nonlinearity of dispersion-compensating fibers and of standard single-mode fiber transmission lines due to their small effective areas. Furthermore, the advancement of DSP in wireless communications has also contributed to the application of DSP in modern coherent communication systems, hence the name "DSP-assisted coherent detection": a real-time DSP is incorporated after the optoelectronic conversion of the combined field of the LO and the signals; the analog received signals are sampled by a high-speed ADC, and the digitized signals are then processed in a DSP. Currently, real-time DSPs are being intensively researched for practical implementation. The main difference between real-time and offline processing is that the real-time processing algorithm must be efficient because of the limited time available for processing.
When polarization-division multiplexed (PDM) QAM channels are transmitted and received, polarization and phase diversity receivers are employed. The schematic of such a receiver is shown in Figure 3.21a. Further, the structures of such reception systems incorporating DSP, with the diversity hybrid coupler in the optical domain, are shown in Figure 3.21b–d. The polarization diversity section, with polarization beam splitters at the signal and LO inputs, facilitates the demultiplexing of the polarized modes in the optical waveguides. The phase diversity, using a 90° optical phase shifter, allows the separation of the in-phase (I) and quadrature (Q) components of the QAM channels. Using a 2 × 2 coupler also enables balanced reception with a photodetector pair connected B2B, and hence a 3 dB gain in sensitivity. Section 2.7 of Chapter 2 described the QAM modulation scheme using IQ modulators for single-polarization or dual-polarization multiplexed channels.
Figure 3.21 Scheme of a synchronous coherent receiver using DSP for PE for coherent optical communications. (a) Generic scheme, (b) detailed optical receiver using only one polarization phase diversity coupler, (c) hybrid 90° coupler for polarization and phase diversity, (d) typical view of a hybrid coupler with two input ports and eight output ports of structure in (c). TE_V, TE_H = transverse electric mode with vertical (V) or horizontal (H) polarized mode, TM = transverse magnetic mode with polarization orthogonal to that of the TE mode. FS = phase shifter; PBS = polarization beam splitter; and MLSE = maximumlikelihood phase estimation.
The schematic of a synchronous coherent receiver based on DSP is shown in Figure 3.22. Once the polarizations and the I- and Q-optical components are separated by the hybrid coupler, the positive and negative parts of the I and Q components are coupled into a balanced optoelectronic receiver, as shown in Figure 3.21b. Two PDs are connected B2B so that push–pull operation can be achieved, hence a 3 dB improvement compared with single-PD detection. The current generated by the B2B-connected PDs is fed into a TIA to derive a voltage signal at the output. A voltage-gain amplifier then boosts this signal to the level required by the ADC, so that the analog signals can be sampled and converted to the digital domain. These digitized signals are then fed into the DSP, where processing in the "soft domain" is conducted. A number of processing algorithms can be employed at this stage to compensate for the linear and nonlinear distortion effects arising from optical signal propagation through the guided medium, and to recover the carrier phase and the clock rate for resampling of the data sequence, and so on. Chapter 6 describes in detail the fundamental aspects of these processing algorithms. Figure 3.22 shows a schematic of the possible processing phases in the DSP incorporated in the DSP-based coherent receiver. Besides the soft processing of the optical phase locking described in Chapter 5, it is necessary to lock the frequencies of the LO and the signal carrier to within a certain limit, for example ±2 GHz, within which the clock recovery algorithms can function.
Figure 3.22 Flow of functionalities of DSP processing in a QAMcoherent optical receiver with possible feedback control.
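The I/Q recovery performed by the hybrid coupler and the B2B photodetector pairs can be sketched for one polarization branch. The model below is idealized (unit responsivity, lossless couplers, phase-locked LO, no noise) and the field values are assumptions; it shows why balanced detection of the four hybrid outputs returns the complex signal field.

```python
import numpy as np

# One polarization branch of a 90-degree hybrid with balanced photodetection.
rng = np.random.default_rng(3)
N = 8
sym = np.exp(1j * np.pi/2 * rng.integers(0, 4, N))   # QPSK field samples
E_s = 0.01 * sym                                     # weak signal field (a.u.)
E_lo = 1.0                                           # strong LO, phase locked

# Balanced pairs: |E_s + E_lo|^2 - |E_s - E_lo|^2 = 4*Re{E_s*conj(E_lo)},
# and with the LO shifted by 90 degrees the difference gives the Im part
I = (np.abs(E_s + E_lo)**2 - np.abs(E_s - E_lo)**2) / 4
Q = (np.abs(E_s + 1j*E_lo)**2 - np.abs(E_s - 1j*E_lo)**2) / 4
rec = I + 1j*Q
print(np.allclose(rec, E_s))  # balanced detection recovers the complex field
```

The subtraction also cancels the direct-detection terms |E_s|^2 and |E_lo|^2, which is the origin of the push–pull benefit noted above.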
At ultra-high bit rates, the laser must be externally modulated so that the phase of the lightwave is conserved along the fiber transmission line. The detection can be direct, self-coherent, or homodyne/heterodyne. The sensitivity of the coherent receiver is also important for the transmission system, especially for the PSK scheme under both homodyne and heterodyne techniques. This section gives an analysis of the receiver for synchronous coherent optical fiber transmission systems. Consider that the optical fields of the signals and the LO are coupled via a fiber coupler with two output ports, 1 and 2. The output fields are launched into two photodetectors connected B2B, and the electronic current is then amplified with a transimpedance amplifier and further equalized to extend the bandwidth of the receiver. Our objective is to obtain the receiver penalty and its degradation due to imperfect polarization mixing and unbalancing effects in the balanced receiver. A case study of the design, implementation, and measurement of an optical balanced receiver electronic circuit, with noise suppression techniques, is given in Section 3.7 (Figure 3.23).
Figure 3.23 Equivalent current model at the input of the optical balanced receiver under CoD, average signal current, and equivalent noise current of the electronic preamplifier as seen from its input port and equalizer. FC = fiber coupler.
The following parameters are commonly used in the analysis:
E_{s} = amplitude of the signal optical field at the receiver
E_{L} = amplitude of the local oscillator optical field
P_{s}, P_{L} = optical powers of the signal and the local oscillator at the input of the photodetector
s(t) = the modulated pulse
$\langle i_{NS}^{2}(t)\rangle$ = mean square noise current (power) produced by the total optical intensity on the photodetector
$\langle i_{s}^{2}(t)\rangle$ = mean square current produced by the photodetector in response to s(t)
S_{NS}(t) = shot-noise spectral density of $\langle i_{s}^{2}(t)\rangle$ and the local oscillator power
$\langle i_{Neq}^{2}(t)\rangle$ = equivalent noise current of the electronic preamplifier at its input
Z_{T}(ω) = transfer impedance of the electronic preamplifier
H_{E}(ω) = voltage transfer characteristic of the electronic equalizer following the preamplifier
The combined field of the signal and LO via a directional coupler can be written with their separate polarized field components as
where $\varphi_{m}(t)$ represents the phase modulation, K_{m} is the modulation depth, and K_{sX}, K_{sY}, K_{LX}, and K_{LY} are the intensity fraction coefficients in the X and Y directions of the signal and LO fields, respectively.
Thus, the output fields at ports 1 and 2 of the FC in the Xplane can be obtained using the transfer matrix as
with α defined as the intensity coupling ratio of the coupler. Thus, the field components at ports 1 and 2 can be derived by combining the X and Y components from Equations 3.49 and 3.50; thus, the total powers at ports 1 and 2 are given as
where ω_{IF} is the intermediate angular frequency, which is equal to the difference between the frequencies of the LO and the carrier of the signals. ϕ_{e} is the phase offset, and ϕ_{p} − ϕ_{e} is the demodulation reference phase error.
In Equation 3.51, the total field of the signal and the LO are added, and the product of the field vector and its conjugate is taken to obtain the power. Only the terms whose frequencies fall within the responsive range of the photodetector produce electronic current. Thus, the term at the sum of the signal and LO optical frequencies is not detected, and only the difference-frequency product of the two terms is detected, as given.
Now assuming a binary PSK (BPSK) modulation scheme, the pulse has a square shape with amplitude +1 or −1, the PD is a pin type, and the PD bandwidth is wider than the signal 3 dB bandwidth followed by an equalized electronic preamplifier. The signal at the output of the electronic equalizer or the input signal to the decision circuit is given by
For a perfectly balanced receiver, K_{B}=2 and α = 0.5; otherwise K_{B}=1. The integrals of the first line in Equation 3.52 are given by
V_{D}(f) is the transfer function of the matched filter for equalization, and T_{B} is the bit period. The total noise voltage, the sum of the quantum shot noise generated by the signal and the LO and the total equivalent noise of the electronic preamplifier referred to the preamplifier input and the equalizer output, is given by
For homodyne and heterodyne detection, we have
where the spectral densities $S'_{IX}$ and $S'_{IE}$ are given by
Thus, the receiver sensitivity for BPSK and equiprobable detection and Gaussian density distribution is given by
with δ given by
Thus, using Equations 3.52, 3.55, and 3.58 we obtain the receiver sensitivity in the linear power scale as
In the case when the power of the LO dominates the noise of the electronic preamplifier and the equalizer, the receiver sensitivity (in linear scale) is given as
This shot-noise-limited receiver sensitivity can be plotted as shown in Figure 3.24.
Figure 3.24 (a) Receiver sensitivity of coherent homodyne and heterodyne detection: signal power versus bandwidth over the wavelength. (b) Power penalty of the receiver sensitivity relative to the shot-noise-limited level as a function of the excess noise of the LO.
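As a hedged aside on the Gaussian-noise sensitivity analysis above: for BPSK with equiprobable symbols, the bit-error rate is the standard Q-function of the decision parameter δ. The snippet below captures only that final step; how δ maps to optical powers follows the chapter's derivation (Equations 3.55 through 3.58), not this code.

```python
import math

# BPSK bit-error rate under equiprobable symbols and Gaussian noise,
# expressed through the decision parameter delta (the Q-function
# argument). The mapping of delta to received signal and noise powers
# is given by the derivation in the text.
def ber_bpsk(delta: float) -> float:
    return 0.5 * math.erfc(delta / math.sqrt(2.0))
```

At δ = 0 the receiver performs at chance (BER = 0.5), while δ ≈ 6 gives roughly the 10^−9 BER traditionally used to quote receiver sensitivity.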
Under a nonideal condition, the receiver sensitivity departs from the shotnoiselimited sensitivity and is characterized by the receiver sensitivity penalty PD_{T} as
where η is the LO excess noise factor.
The receiver sensitivity is plotted against the ratio f_{B}/λ for homodyne and heterodyne detection in Figure 3.24a, and the power penalty of the receiver sensitivity against the excess noise factor of the LO in Figure 3.24b. The receiver power penalty can also be deduced as a function of the total electronic equivalent noise spectral density and of the rotation of the polarization of the LO, as found in [30]. Furthermore, in [31], the receiver power penalty and the normalized heterodyne center frequency are shown to vary as functions of the modulation parameter and of the optical power ratio at the same polarization angle.
A generic structure of the coherent reception and DSP system is shown in Figure 3.25, in which the DSP system is placed after the sampling and conversion from analog to digital form. The optical signal fields beat with the LO laser, whose frequency is approximately identical to that of the signal channel carrier. The beating occurs in the square-law photodetectors; that is, the sum of the two fields is squared, and the product term decomposes into difference- and sum-frequency terms. Only the difference term falls back into the baseband region and is amplified by the electronic preamplifier, which is of a balanced differential transimpedance type.
Figure 3.25 Coherent reception and the DSP system.
If the signals are complex, the real and imaginary components form a pair; another pair comes from the other polarization mode channel. The digitized signals of both the real and imaginary parts are processed in real time or offline. The processors contain the algorithms to combat a number of transmission impairments, such as the imbalance between the in-phase and quadrature components created at the transmitter, recovery of the clock rate and timing for resampling, carrier phase estimation (PE) of the signal phases using maximum-likelihood (ML) estimation, adaptive equalization for compensation of propagation dispersion effects, and so on. These algorithms are built into the hardware processors or memory and loaded into the processing subsystems.
The sampling rate must normally be at least twice the signal bandwidth to satisfy the Nyquist criterion. Although this rate is very high for 25–32 GSy/s optical channels, the Fujitsu ADC has met this requirement with a sampling rate of 56–64 GSa/s, as depicted in Figure 3.37.
The line-width resolution of semiconductor device fabrication processes has progressed tremendously over the years in an exponential trend, as shown in Table 3.1. This progress has been made possible by successes in lithographic techniques using short-wavelength optics such as UV, electron-beam, and X-ray lithography with appropriate photoresists such as SU-8, which are expected to allow the line resolution to reach 5 nm in 2020. If we plot the trend on a log–linear scale, as shown in Figure 3.26, a straight line is obtained, meaning that the resolution shrinks exponentially. When the gate width is reduced, the electronic speed increases tremendously; at 5 nm, the speed of electronic CMOS devices in SiGe would reach several tens of GHz. Regarding high-speed ADCs and digital-to-analog converters (DACs), the clock speed is increased by paralleling, delaying the extracted outputs of the registers, and summing all the digitized lines to form a very high speed operation. For example, for the Fujitsu 64 GSa/s DAC or ADC, the applied sinusoidal clock waveform is only 2 GHz. Figure 3.27 shows the progress in the speed development of the Fujitsu ADC and DAC.
Figure 3.26 Semiconductor manufacturing with resolution of line resolution.
Figure 3.27 Evolution of ADC and DAC operating speed with corresponding linewidth resolution.
Table 3.1 Semiconductor Manufacturing Processes and Spatial Resolution (Gate Width)

10 μm—1971
3 μm—1975
1.5 μm—1982
1 μm—1985
800 nm (0.80 μm)—1989: UV lithography
600 nm (0.60 μm)—1994
350 nm (0.35 μm)—1995
250 nm (0.25 μm)—1998
180 nm (0.18 μm)—1999
130 nm (0.13 μm)—2000
90 nm—2002: electron lithography
65 nm—2006
45 nm—2008
32 nm—2010
22 nm—2012
14 nm—approx. 2014: X-ray lithography
10 nm—approx. 2016: X-ray lithography
7 nm—approx. 2018: X-ray lithography
5 nm—approx. 2020: X-ray lithography
Effective number of bits (ENOB) is a measure of the quality of a digitized signal. The resolution of a DAC or ADC is commonly specified by the number of bits used to represent the analog value, in principle giving 2^{N} signal levels for an N-bit signal. However, all real signals contain a certain amount of noise. If the converter is able to represent signal levels below the system noise floor, the lower bits of the digitized signal only represent system noise and do not contain useful information. ENOB specifies the number of bits in the digitized signal above the noise floor. Often, ENOB is also used as a quality measure for other blocks such as sample-and-hold amplifiers. In this way analog blocks can also be easily included in signal-chain calculations, as the total ENOB of a chain of blocks is usually below the ENOB of the worst block.
Thus, we can represent the ENOB of a digitized system by

ENOB = (SINAD − 1.76)/6.02

where all values are given in dB; the signal-to-noise and distortion ratio (SINAD) is the ratio of the total signal power, including noise and distortion, to the noise-plus-distortion power; the 6.02 term in the divisor converts decibels (a log_{10} representation) to bits (a log_{2} representation); and the 1.76 term comes from the quantization error in an ideal ADC^{*}
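The SINAD-to-ENOB relation above can be captured in a few lines (a sketch; the function names are mine, and `sinad_db` stands for a value measured in a sine-wave test):

```python
def enob_from_sinad(sinad_db: float) -> float:
    """ENOB given a measured SINAD in dB (ideal-quantization model)."""
    return (sinad_db - 1.76) / 6.02

def ideal_sinad_db(n_bits: int) -> float:
    """SINAD of an ideal N-bit converter driven by a full-scale sine."""
    return 6.02 * n_bits + 1.76
```

For an ideal 8-bit ADC the two functions invert each other, returning an ENOB of exactly 8; a real converter's measured SINAD is lower, so its ENOB falls below the nominal bit count.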
* http://en.wikipedia.org/wiki/ENOB (access date: Sept. 2011).
This definition compares the SINAD of an ideal ADC or DAC with a word length of ENOB bits with the SINAD of the ADC or DAC being tested. Indeed, SINAD is a measure of the quality of a signal from a communications device, often defined as

SINAD = (P_signal + P_noise + P_distortion)/(P_noise + P_distortion)

where each P is the average power of the signal, noise, or distortion component. SINAD is usually expressed in dB and is quoted alongside the receiver sensitivity to give a quantitative evaluation of the receiver sensitivity. Note that with this definition, unlike SNR, a SINAD reading can never be less than 1 (i.e., it is always positive when quoted in dB).
When calculating the distortion, it is common to exclude the DC components. Because of its widespread use, SINAD has collected a few different definitions. SINAD is calculated as one of the following: (i) the ratio of (a) total received power (signal plus noise plus distortion) to (b) the noise-plus-distortion power; this is modeled by the equation above. (ii) The ratio of (a) the power of the original modulating audio signal, that is, from a modulated radio-frequency carrier, to (b) the residual audio power, that is, the noise-plus-distortion power remaining after the original modulating audio signal is removed. With this definition, it is possible for SINAD to be less than 1. This definition is used when SINAD enters the calculation of ENOB for an ADC.
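The two SINAD definitions enumerated above differ only in what is placed in the numerator; a small sketch (the power values in the checks are illustrative, and the function names are mine):

```python
def sinad_total(p_signal: float, p_noise: float, p_dist: float) -> float:
    # Definition (i): total received power over noise-plus-distortion.
    # The numerator always contains the denominator, so this ratio is
    # never below 1 (never negative in dB).
    return (p_signal + p_noise + p_dist) / (p_noise + p_dist)

def sinad_residual(p_signal: float, p_noise: float, p_dist: float) -> float:
    # Definition (ii): original signal power over residual
    # noise-plus-distortion power; this version can fall below 1 and is
    # the one used in ENOB calculations.
    return p_signal / (p_noise + p_dist)
```

With, say, 99 units of signal power and 0.5 each of noise and distortion, definition (i) gives 100 (20 dB), while a noisy channel with residual power exceeding the signal drives definition (ii) below 1.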
Digital input      000    001   010   011   100   101   110   111
Analog output (V)  −0.01  1.03  2.02  2.96  3.95  5.02  6.00  7.08
The offset error in this case is −0.01 V, or −0.01 LSB, as 1 V = 1 LSB (least significant bit) in this example. The gain error is (7.08 + 0.01) − 7 = 0.09 LSB. Correcting the offset and gain error, we obtain the following list of measurements: (0, 1.03, 2.00, 2.93, 3.91, 4.96, 5.93, 7) LSB. This allows the INL and DNL to be calculated: INL = (0, 0.03, 0, −0.07, −0.09, −0.04, −0.07, 0) LSB, and DNL = (0.03, −0.03, −0.07, −0.02, 0.05, −0.03, 0.07, 0) LSB.
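The arithmetic of this worked example can be reproduced directly from the table above (a sketch; the variable names are mine):

```python
import numpy as np

codes = np.arange(8)                         # ideal 3-bit outputs, in LSB
meas = np.array([-0.01, 1.03, 2.02, 2.96, 3.95, 5.02, 6.00, 7.08])

offset = meas[0]                             # -0.01 LSB offset error
gain_error = (meas[-1] - offset) - codes[-1] # 0.09 LSB over full scale

# Remove the offset, then rescale so the end points land on 0 and 7 LSB
corrected = (meas - offset) * codes[-1] / (meas[-1] - offset)

inl = corrected - codes        # deviation from the ideal transfer curve
dnl = np.diff(corrected) - 1   # deviation of each step width from 1 LSB
```

The resulting INL peaks near −0.09 LSB at code 100, matching the list quoted above up to rounding; note that a strict `diff` yields seven step widths for eight codes.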
Differential nonlinearity (DNL): For an ideal ADC, the output is divided into 2^{N} uniform steps, each of width Δ, as shown in Figure 3.28. Any deviation from the ideal step width is called DNL and is measured in counts (LSBs). For an ideal ADC, the DNL is 0 LSB. In a practical ADC, DNL error comes from its architecture. For example, in a successive-approximation-register ADC, DNL error may occur near the midrange owing to mismatching in its internal DAC.
Figure 3.28 Representation of DNL in a transfer curve of an ADC.
Integral nonlinearity (INL) is a measure of how closely the ADC output matches its ideal response. INL can be defined as the deviation in LSB of the actual transfer function of the ADC from the ideal transfer curve. INL can be estimated using DNL at each step by calculating the cumulative sum of DNL errors up to that point. In reality, INL is measured by plotting the ADC transfer characteristics. INL is popularly measured using either (i) bestfit (best straight line) method or (ii) end point method.
Bestfit INL: The bestfit method of INL measurement considers offset and gain error. One can see in Figure 3.29 that the ideal transfer curve considered for calculating the bestfit INL does not go through the origin. The ideal transfer curve drawn here depicts the nearest firstorder approximation to the actual transfer curve of the ADC.
Figure 3.29 Bestfit INL.
The intercept and slope of this ideal curve can lend us the values of the offset and gain error of the ADC. Quite intuitively, the bestfit method yields better results for INL. For this reason, many times, this is the number present on ADC datasheets.
The only real use of the best-fit INL number is to predict distortion in time-variant signal applications; this number would be equivalent to the maximum deviation for an AC application. However, it is always better to use the distortion numbers than INL numbers. For error-budget calculations, end-point INL numbers provide the better estimate, and this is the specification generally provided in datasheets; one should therefore use it instead of the best-fit INL.
End-point INL: The end-point method provides the worst-case INL. This measurement passes the straight line through the origin and the maximum output code of the ADC (Figure 3.30). As it yields the worst-case INL, this number is more useful for DC applications than the best-fit value. The end-point INL is typically used for error-budget calculation, and this parameter must be considered for applications involving precision measurement and control.
Figure 3.30 Endpoint INL.
The absolute and relative accuracy can now be calculated. In this case, the ENOB absolute accuracy is calculated using the largest absolute deviation D, in this case, 0.08 V:
The ENOB relative accuracy is calculated using the largest relative (INL) deviation d, in this case, 0.09 V.
For this kind of ENOB calculation, note that the ENOB can be larger or smaller than the actual number of bits (ANOBs). When the ENOB is smaller than the ANOB, this means that some of the LSBs of the result are inaccurate. However, one can also argue that the ENOB can never be larger than the ANOB, because you always have to add the quantization error of an ideal converter, which is ±0.5 LSB. Different designers may use different definitions of ENOB!
The ENOB of an ADC can be viewed as the number of bits to which an analog signal is effectively converted to its digital equivalent: the nominal number of binary levels is reduced by the noise contributed by the electronic components of the converter, so only an effective number of equivalent bits can be accounted for. Hence the term ENOB.
As shown in Figure 3.31b, a real ADC can be modeled as a cascade of two ideal ADCs with additive noise sources and an AGC amplifier [31]. The quantized levels are thus equivalent to a specific ENOB as long as the ADC operates in the linear, nonsaturated region. If the normalized signal amplitude/power surpasses unity (the saturated region), the signals are clipped. The decision level of the quantization in an ADC normally varies following a normalized Gaussian PDF; thus, we can estimate the RMS noise introduced by the ADC as
Figure 3.31 (a) Measured ENOB frequency response of a commercial realtime DSA of 20 GHz bandwidth and sampling rate of 50 GSa/s. (b) Deduced ADC model of variable ENOB based on the experimental frequency response of (a) and spectrum of broadband signals.
where:
Given the known quantity LSB^{2}/12 by the introduction of the ideal quantization error, σ^{2} can be determined via the Gaussian noise distribution. We can thus deduce the ENOB values corresponding to the levels of Gaussian noise as
where A is the RMS amplitude derived from the noise power. According to the ENOB model, the frequency response of the ENOB of the digital sampling analyzer (DSA) is shown in Figure 3.31a, with the DSA excited by sinusoidal waves of different frequencies. As observed, the ENOB varies with the excitation frequency, in the range from 5 to 5.5. Having known the frequency response of the sampling device, what is the ENOB of the device when excited with broadband signals? This points to the differing resolution of the receiver ADC when the transmission operates under different noisy and dispersive conditions; thus, an equivalent ENOB model for performance evaluation is essential. We note that the amplitudes of the optical fields arriving at the receiver vary depending on the conditions of the optical transmission line. The AGC has a nonlinear gain characteristic in which the input-sampled signal power level is normalized with respect to the saturated (clipping) level. The gain is significantly high in the linear region and saturates at high levels. The received signal R_X_{in} is scaled with a gain coefficient according to R_X_{out} = R_X_{in}/√(P_{in_av}/P_{Ref}), where the signal-averaged power P_{in_av} is estimated and the gain is scaled relative to the reference power level P_{Ref} of the AGC; a linear scaling factor is then used to obtain the output sampled value R_X_{out}. The gain of the AGC is also adjusted according to the signal energy, via the feedback control path from the DSP (see Figure 3.31b). Thus, new values of ENOB can be evaluated with noise distributed across the frequency spectrum of the signals, by an averaging process. This signal-dependent ENOB is denoted as ENOBs.
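The AGC normalization just described can be sketched as follows (a minimal model; the function and variable names are assumptions, not the chapter's implementation):

```python
import numpy as np

def agc_scale(r_x_in: np.ndarray, p_ref: float) -> np.ndarray:
    """Scale samples so their average power equals the AGC reference.

    Implements R_Xout = R_Xin / sqrt(P_in_av / P_Ref) from the text.
    """
    p_in_av = np.mean(np.abs(r_x_in) ** 2)   # estimated average power
    return r_x_in / np.sqrt(p_in_av / p_ref)

# After scaling, the mean output power equals p_ref regardless of the
# input level, giving the downstream ADC a stable loading.
samples = np.array([0.5, -1.0, 2.0, -0.25])
scaled = agc_scale(samples, p_ref=1.0)
```

Raising `p_ref` pushes more of the waveform toward the clipping level of the quantizer, which is exactly the trade-off explored in the BER contours of Figure 3.33.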
Figure 3.32a shows the BER variation with respect to the OSNR under B2B transmission using the simulated samples at the output of the 8-bit ADC, with ENOBs and full ADC resolution as parameters. The difference is due to the noise distribution (Gaussian or uniform). Figure 3.32b depicts the variation of BER versus OSNR with ENOBs as the variation parameter in the case of offline data, with the ENOB of the DSA shown in Figure 3.31a. Several more tests were conducted to ensure the effectiveness of our ENOB model. When the sampled signals presented to the ADC are of different amplitudes, controlled and nonlinearly gain-adjusted by the AGC, different degrees of clipping effect are introduced. Thus, the clipping effect can be examined for ADCs of different quantization levels but with identical ENOBs, as shown in Figure 3.33a for the B2B experiment. Figure 3.33b–e shows, with BER as a parameter, the contour plots of the variation of the adjusted reference power level of the AGC and ENOBs for 1500 km long-haul transmission with full CD compensation and with no CD compensation, operating in the linear regime (0 dBm launch power in both links) and in the nonlinear regimes of the fibers with launch powers of 4 and 5 dBm, respectively. When the link is fully CD compensated, the nonlinear effects further contribute to the ineffectiveness of the ADC resolution, and hence only moderate AGC freedom in the performance is achieved. On the other hand, in the case of the non-CD-compensated link (Figure 3.33d and e), the dispersed pulse-sampled amplitudes are lower with less noise, allowing the resolution of the ADC to be higher via the nonlinear gain of the AGC; thus, effective PE and equalization can be achieved. We note that the offline data sets, employed prior to the processing using ENOBs to obtain the contours of Figure 3.33, produce the same BER contours of 2 × 10^{−3} for all cases. Hence, a fair comparison can be made when the ENOBs model is used.
The opening distance of the BER contours indicates the dynamic range of the ENOBs model, especially of the AGC. It is obvious from Figure 3.33a–e that the dynamic range of the model is higher for the non-compensating link than for fully CD-compensated transmission, and even than for the B2B case. However, in the nonlinear scenario for both cases, the requirement on ENOBs is higher for the dispersive channel (Figure 3.33c and e). This may be due to cross-phase modulation effects of adjacent channels, and hence more noise.
Figure 3.32 (a) B2B performance with different ENOBs values of the ADC model with simulated data (8bit ADC) and (b) OSNR versus BER under different simulated ENOBs of offline data obtained from an experimental digital coherent receiver.
Figure 3.33 Comprehensive effects of AGC clipping (inverse of P_{Ref}) and ENOBs of the coherent receiver, experimental transmission under (a) B2B; DWDM with full CD compensation (CDC) in the (b) linear and (c) nonlinear regions with 4 dBm launch power; and non-CD compensation in the (d) linear and (e) nonlinear regions with 5 dBm launch power.
The structures of the DAC and ADC are shown in Figures 3.34 and 3.35, respectively. Normally, there are four DACs in an IC, each DAC section clocked with a clock sequence derived from a lower-frequency sinusoidal wave injected externally into the DAC. Four units are required for the in-phase and quadrature components of the QAM-modulated polarized channels; hence the notations I_{DAC} and Q_{DAC} in the diagram. Similarly, the received optical signals of PDM-QAM are sampled by a four-phase sampler and then converted to digital form into four groups of I and Q lanes for processing in the DSP subsystem. Because of the interleaving of the sampling clock waveform, the digitized bits appear simultaneously at the end of a clock period that is long enough for the number of samples to be sufficiently large to capture all values. For example, as shown in Figure 3.35, 1024 samples are obtained per period of a 500 MHz clock for the 8-bit ADC. Thus, the clock has been slowed down by a factor of 128; alternatively, the sampling interval is 1/(128 × 500 MHz) = 1/(64 GHz). The sampling is implemented using a CHArged-mode Interleaved Sampler (CHAIS).
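The interleaving arithmetic quoted above checks out as follows (a trivial sketch of the numbers, not of the CHAIS circuit itself):

```python
# 128 interleaved sampling phases driven by a 500 MHz clock give the
# 64 GSa/s aggregate rate of the example, i.e., a 15.625 ps interval
# between successive samples.
clock_hz = 500e6
interleave_factor = 128

aggregate_rate = clock_hz * interleave_factor   # samples per second
sample_interval = 1.0 / aggregate_rate          # seconds between samples
```

This is why a modest externally injected clock can drive a converter whose effective sampling rate is two orders of magnitude higher.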
Figure 3.34 Fujitsu DAC structures for four-channel PDM-QPSK signals: (a) schematic diagram and (b) processing functions.
Figure 3.35 ADC principles of operations (CHAIS).
Figure 3.36 shows a generic diagram of an optical DSPbased transceiver employing both DAC and ADC under QPSK modulated or QAM signals. The current maximum sampling rate of 64 GSa/s is available commercially. An IC image of the ADC chip is shown in Figure 3.37.
Figure 3.36 Schematic of a typical structure of DAC and ADC transceiver subsystems for PDM-QPSK modulation channels.
Figure 3.37 Fujitsu ADC subsystems with a dual convertor structure.
This chapter has described the principles of coherent reception and associated techniques, with noise considerations and the main functions of the DSP. The DSP algorithms are described not in a separate chapter but in context throughout the chapters.
Furthermore, matching the LO laser frequency to that of the carrier of the transmitted channel is very important for effective CoD; otherwise, degradation of the receiver sensitivity results. The International Telecommunication Union standard requires that, for a DSP-based coherent receiver, the frequency offset between the LO and the carrier be within ±2.5 GHz. Furthermore, in practice, network and system management is expected to tune the LO remotely, with automatic locking of the LO using some prior knowledge of the frequency region to set its initial frequency. Thus, this section briefly describes optical phase locking of the LO source for an intradyne coherent reception subsystem.