For 40G/100G network deployment, the primary channel requirement is link attenuation: the optical power lost across the passive fiber optic components between two transceivers. The HSSG defines an OM3/OM4 link with a total connector attenuation of no more than 1.5 dB. The structured cabling standard TIA-942 recommends up to four connector pairs in a link between active equipment interfaces, so in a link with a 1.5 dB attenuation budget and four connector pairs, each pair should contribute no more than 0.375 dB.
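The budget arithmetic above can be sketched as a small calculation; the function name is illustrative, and the 1.5 dB / four-pair figures come from the HSSG and TIA-942 numbers quoted in the text:

```python
# Split a total link attenuation budget evenly across connector pairs.
# Hypothetical helper for illustration; values from the text above.

def per_connector_budget(total_budget_db: float, connector_pairs: int) -> float:
    """Maximum allowed loss per connector pair for a given link budget."""
    return total_budget_db / connector_pairs

# A 1.5 dB budget spread over 4 connector pairs:
print(per_connector_budget(1.5, 4))  # → 0.375
```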
In addition to attenuation, bandwidth is another key factor when building a 40G/100G transport network. To guarantee the bandwidth these rates require, the HSSG specifies the supported fiber types and their transmission distances. OM3 multimode fiber has long been the fiber of choice for data centers and will continue to be used in 40G/100G deployments; in fact, the HSSG specifies OM3 fiber for these higher-speed applications, supporting 40GBASE-SR4/100GBASE-SR4 over at least 100 m. The HSSG has also incorporated the recently TIA-approved OM4 fiber, which reaches 125 m at these rates. The HSSG considers these multimode distances sufficient for mainstream data center applications, and the standard additionally includes single-mode fiber for links longer than 125 m.
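A minimal sketch of the reach rules above as a lookup, assuming only the multimode distances quoted in the text (the names and structure are illustrative):

```python
# Rated 40GBASE-SR4/100GBASE-SR4 reach for the multimode fiber types
# discussed above, in meters. Figures taken from the text.
REACH_M = {"OM3": 100, "OM4": 125}

def link_supported(fiber_type: str, distance_m: float) -> bool:
    """True if the requested span fits within the fiber's rated reach."""
    return distance_m <= REACH_M[fiber_type]

print(link_supported("OM3", 90))   # → True
print(link_supported("OM3", 120))  # → False
print(link_supported("OM4", 120))  # → True
```

Spans beyond 125 m would fall to the single-mode options the standard also includes.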
Another factor that affects link performance is optical skew, the delay difference between parallel lanes (often loosely called jitter). Recall that 40G/100G networks use parallel optics, meaning an optical signal is split across multiple fibers for transmission and recombined at the far end, so the lanes must arrive within a tight delay window. The draft IEEE 802.3ba standard recommends that cabling skew be no greater than 79 ns. 100G optical products that pass internal skew testing must also comply strictly with the 0.75 ns delay skew defined by the InfiniBand standard. Deploying cabling with stringent skew performance ensures the infrastructure remains compatible with a wide variety of network applications: a fiber cabling plant designed for 40G/100G should meet not only the 0.75 ns requirement but also InfiniBand and future 32G and higher Fibre Channel data rates. A low-skew cabling solution also demonstrates the process quality and consistency of the fiber cable design and termination, supporting long-term reliable operation.
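The 0.75 ns figure above is a limit on the worst-case delay difference across the parallel lanes of one link. A minimal sketch of such a check, with made-up per-lane delay values:

```python
# Check the worst-case lane-to-lane delay difference of a parallel-optics
# link against a limit. Delay values below are invented for illustration;
# the 0.75 ns limit comes from the text.

def max_skew_ns(lane_delays_ns):
    """Worst-case delay difference between the fastest and slowest lane."""
    return max(lane_delays_ns) - min(lane_delays_ns)

delays = [500.10, 500.35, 500.62, 500.71]  # four lanes of a 40G link, in ns
print(max_skew_ns(delays) <= 0.75)  # → True
```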
Future of 40G/100G Networks
Data center designers and managers voice similar requirements: the infrastructure must offer high reliability, manageability, flexibility, and scalability, and these requirements will not change with the transition to the next generation of higher Ethernet speeds. Some notable changes will, however, affect the infrastructure. During the evolution to 40G/100G, MPO technology will carry multi-lane transceiver channels, and this change involves several factors. First, systems supporting 40G/100G must use appropriate MPO-based fiber as backbone cabling. If higher data rate networks are not being deployed immediately, these backbone cables can be part of a modular, pre-terminated data center solution that initially serves only 1G/10G Ethernet. Second, the fiber distribution frame, which is the distribution point between the backbone cable and the active equipment, should accommodate both patching methods: duplex (two-fiber) modules and MPO adapter panels. Following these guidelines, migrating to a higher-speed network is straightforward; the previously mentioned MPO jumpers and breakout jumpers then complete the entire link.
The density of fiber terminations per rack matters greatly to the data center and grows in importance as the data center scales. 40G/100G networks replace the original duplex connector with a 12- or 24-fiber MPO connector, substantially increasing the fiber count behind each equipment port. Figure 7 illustrates the importance of density in supporting Ethernet's migration to higher speeds. A typical current design handles up to 2880 fibers in a single rack through ten independent 288-fiber distribution frames. For a typical 100G switch with 8 cards per chassis and 16 ports per card, this capacity rises to 3072 fibers, which fits in a 4U rack height accommodating the required 128 MPO adapters in the fiber distribution frame.
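The density figures above can be re-derived from the chassis configuration (8 cards per chassis, 16 ports per card, a 24-fiber MPO per port); the variable names are illustrative:

```python
# Re-derive the per-chassis fiber count quoted in the text.
cards_per_chassis = 8
ports_per_card = 16
fibers_per_mpo = 24          # 24-fiber MPO connector per port

ports = cards_per_chassis * ports_per_card   # MPO adapters needed
fibers = ports * fibers_per_mpo              # total fibers behind them

print(ports, fibers)  # → 128 3072
```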
With so many fibers in a chassis, a clear management method is critical, especially for the MPO jumpers and breakout jumpers running between the front panel of the fiber distribution frame and the active equipment ports. A related problem is trunk cabling: such high fiber counts demand high-core-count trunk cables, yet a single trunk is typically limited to 144 fibers. The 100G chassis described above uses 3072 fibers, so roughly 22 144-core trunk cables must enter the rear panel of the distribution frame. Using trunk cables with a higher core count reduces the cable count: equipping the same chassis with 288-core trunks requires only 11 cables at full configuration, freeing considerable space in the distribution frame for cable entry channels, strain relief, and slack storage for the high-core-count trunks.
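The trunk-cable counts above follow from dividing the chassis fiber count by the trunk core count and rounding up; a minimal sketch, with the 144- and 288-core sizes taken from the text:

```python
import math

def trunks_needed(total_fibers: int, cores_per_trunk: int) -> int:
    """Number of trunk cables required to carry a given fiber count."""
    return math.ceil(total_fibers / cores_per_trunk)

print(trunks_needed(3072, 144))  # → 22
print(trunks_needed(3072, 288))  # → 11
```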