Connecting with Light - IEEE Spectrum



The movement of data in a computer is almost the opposite of the movement of traffic in a city. In the city center, inside a crowded microprocessor core, bits fly at breathtaking speed. But farther out, on the comparatively wide copper traces that connect one processor to another and one circuit board to the next, things slow down. The Pentium 4 introduced this spring runs at 2.4 GHz, yet data reaches it over a bus running at only 400 MHz. Out on the highway of the world's fiber-optic telecommunications network, however, speeds pick up again. Clearly, the closer engineers can bring the optical highway to the microprocessor, the fewer copper bottlenecks remain, as if you could pull onto the freeway straight from your own driveway.

So some researchers say that in just a few years, many of the copper connections in computers will give way to high-speed optical interconnects, in which photons rather than electrons carry signals between boards, between chips, or even from one part of a chip to another [see picture above].

The idea is simple in principle and much like a telecommunications system. An electrical signal from the processor modulates a miniature laser beam, which travels through air or a waveguide to a photodetector, which converts the signal back into electronics. Although communicating with light is currently more expensive than communicating with electric current, one day only optics may be able to keep up with ever more powerful microprocessors, just as optics is now the only sensible way to move the world's Internet traffic.
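To make that principle concrete, here is a toy sketch in Python of such a link: a bit stream on-off keys a laser, the light crosses a lossy path (air or a waveguide), and a photodetector thresholds the received power back into bits. All of the numbers (laser power, loss, detection threshold) are made up for illustration and do not describe any real device discussed in this article.

```python
# Toy model of an optical interconnect link: an electrical bit stream
# on-off keys a laser, the light crosses a lossy path, and a
# photodetector thresholds the received power back into bits.
# All values here are illustrative only.

def transmit(bits, laser_mw=1.0):
    """Modulate: a '1' turns the laser on, a '0' leaves it off."""
    return [laser_mw if b else 0.0 for b in bits]

def channel(powers_mw, loss_db=3.0):
    """Attenuate the optical power by a fixed loss (in dB)."""
    gain = 10 ** (-loss_db / 10)
    return [p * gain for p in powers_mw]

def detect(powers_mw, threshold_mw=0.25):
    """Photodetector plus comparator: power above threshold reads as '1'."""
    return [1 if p > threshold_mw else 0 for p in powers_mw]

data = [1, 0, 1, 1, 0, 0, 1]
received = detect(channel(transmit(data)))
assert received == data
print("recovered bits:", received)
```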

"We have predicted that for some system requirements, where the data rate is high enough and the link length is long enough, we will have to use optics," said Modest Oprysko, senior manager of communications technology at IBM's Thomas J. Watson Research Center (Yorktown Heights, N.Y.).

The bit rate a wire can carry is dominated by its parasitic resistance, capacitance, and inductance. At low frequencies, the series resistance and parallel capacitance of a circuit-board trace determine its behavior: they set the transition (rise and fall) times and thus limit its data rate. At the higher frequencies found on today's circuit boards, the wire's series inductance overtakes resistance as the limiting factor, but the end result is the same: there is a ceiling on the rate at which the trace can carry pulses.

All these parasitics depend largely on the geometry of the wire, especially its length. Resistance, for example, is proportional to the wire's length and inversely proportional to its cross-sectional area. Because of this dependence on geometry, the ultimate bit rate of a simple line is proportional to its cross-section and falls with the square of its length. So thinner, and especially longer, means a lower bit rate. On-chip wiring adds a few caveats to this story, but the conclusion is the same.
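The scaling described above can be captured in a one-line rule of thumb. In the sketch below, the proportionality constant B0 is hypothetical and chosen only to make the relative comparison visible; the point is the shape of the relationship, not the absolute numbers.

```python
# Rule of thumb from the paragraph above: the achievable bit rate of a
# simple electrical line rises with its cross-sectional area A and falls
# with the square of its length L, roughly B = B0 * A / L**2.
# B0 lumps together material and driver details; the value is arbitrary.

B0 = 1e16  # bit/s when A/L^2 = 1; illustrative constant only

def max_bit_rate(area_m2: float, length_m: float, b0: float = B0) -> float:
    """Rule-of-thumb bit-rate limit of an electrical line."""
    return b0 * area_m2 / length_m ** 2

short_trace = max_bit_rate(area_m2=1e-8, length_m=0.05)  # 5 cm board trace
long_trace = max_bit_rate(area_m2=1e-8, length_m=0.10)   # 10 cm board trace
print(f"5 cm trace: {short_trace:.1e} b/s, 10 cm trace: {long_trace:.1e} b/s")
print(f"doubling the length cuts the rate by {short_trace / long_trace:.0f}x")
```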

The switching-time limit can be pushed back by driving the line harder, but that is not a good solution: it adds noise, raises power requirements, and worsens already serious thermal-management problems. Alternatively, the wires can be made fatter, but then you run out of space. Photons suffer none of these limits; their biggest problems are absorption and attenuation, which are no concern inside a computer or even across an entire room.

In telecommunications, transmission slows as data moves from the all-optical long-haul network to lower-bandwidth metropolitan networks and finally creeps into the home over copper. Just as carriers hope to raise total bandwidth by pushing all-optical transmission closer to your house, designers of optical interconnects want to get as much bandwidth as possible as close to the microprocessor as possible. Optics already links computer systems less than 300 meters apart, and backplane setups are in the works that will speed data from one board to another inside a computer. Further off are schemes for boosting the bandwidth between two microprocessors, or between stacks of chips used for massively parallel computing. All the pieces are in place: cheap lasers, sensitive detectors, and the means to get light from one to the other. The only questions are when optical interconnects will perform well enough, and when their production costs will fall low enough, for them to displace copper wires.

Many developers of optical interconnect schemes guide signals through some kind of waveguide. DaimlerChrysler Research (Ulm, Germany), for example, is developing waveguide-based optical backplanes to connect computers in aircraft or to route signals inside computers used in telecommunications [see above]. As in most optical interconnect schemes, the photons come from a vertical-cavity surface-emitting laser (VCSEL) [see "Optical Array"]. The light passes through a lens, which expands the beam to a diameter of a few millimeters. The expanded beam is easier to aim at the waveguide entrance; at the waveguide exit, a mirror redirects it through another lens, which focuses it onto the detector. DaimlerChrysler says polymer waveguides are easier to integrate into systems than optical fibers.

Most optical interconnect schemes envision a vertical-cavity surface-emitting laser (VCSEL) as the light source and one of two kinds of semiconductor detectors. All three technologies use III-V materials such as gallium arsenide rather than silicon, so they cannot be fabricated directly alongside the silicon CMOS devices they talk to. Nevertheless, lasers and detectors based on silicon CMOS are an active area of research that could smooth the introduction of optical interconnects.

Many detector designs use p-i-n photodiodes, sandwiches of p-type, intrinsic, and n-type semiconductor material that generate a current when struck by photons. A metal-semiconductor-metal sandwich makes an MSM photodetector, which works in much the same way and is faster than a p-i-n photodiode but less sensitive. Researchers use both in their designs, depending on which attribute matters most to them.

GaAs VCSELs have become commercial products over the past few years. These devices are diode lasers, but unlike most of those used in long-haul telecommunications, they emit from the surface of the laser rather than the edge, so they can sit on top of a chip. It is relatively easy to build arrays of hundreds or thousands of them on a plane and to make them approach the size of a transistor, a few hundred nanometers. A few individual VCSELs in an array, however, may not work. Bonding a dud array to an expensive microprocessor wastes a lot of money, so if optical interconnects are to take off, VCSEL makers must cut the failure rate.
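A quick calculation shows why even a small per-device failure rate matters for large arrays. The yields in this sketch are made-up numbers, used only to illustrate the point in the paragraph above.

```python
# If each laser works with probability p, an n-element array is fully
# functional only with probability p**n. Illustrative yields only.

def array_yield(per_device_yield: float, n_devices: int) -> float:
    """Probability that every VCSEL in an n-element array works."""
    return per_device_yield ** n_devices

for p in (0.999, 0.9999):
    for n in (100, 1000):
        print(f"per-device yield {p:.2%}, {n:4d} lasers -> "
              f"array yield {array_yield(p, n):.1%}")
# e.g. 99.90% per device and 1,000 lasers -> only ~37% of arrays are good.
```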

As chips keep shrinking, transistors are squeezed closer together, and so are the VCSELs that communicate for them. That spacing raises the specter of a problem familiar from wires: as channels get closer and run at higher speeds, the threat of electrical interference between neighbors grows. The light from the lasers is immune to crosstalk, but the currents driving them are not.

The power that drives a VCSEL also generates heat, which is harder on lasers than on electronic circuits. As GaAs heats up, its material properties change, and the laser detunes, drifting to a different wavelength. Above about 80 °C, VCSELs stop working altogether. The challenge, therefore, is to improve their efficiency so they can be driven at lower power.

Perhaps the biggest difficulty with VCSELs is GaAs itself; in fact, any semiconductor that lases has a crystal lattice that does not match silicon's. That means the lasers must be made in a separate process, which adds greatly to production costs. They can be bonded to CMOS chips, but they expand with heat at a different rate than silicon, which can throw off alignment or even damage the devices. Photodetectors, also GaAs-based, share these shortcomings, though some researchers are working on silicon-based detectors.

Naturally, optoelectronic engineers would love to make VCSELs from silicon, but the material does not give up light easily. Unlike semiconductors such as GaAs, silicon has an indirect bandgap, which makes its free electrons more likely to shed their energy as heat than to recombine with holes and emit photons.

But Wilfred Pinfold, technical director of Intel's Microprocessor Research Lab, sees some hope in work done in the United Kingdom and Australia last year. In March 2001, Kevin Homewood of the University of Surrey (Guildford, England) reported making a silicon light-emitting diode (LED). He did it by bombarding silicon with boron ions, which create dislocations in the silicon lattice that confine electrons to a small region, increasing the odds that they will find holes to recombine with.

Last August, Martin Green of the University of New South Wales (Sydney, Australia) reported a new kind of silicon LED. The device has inverted pyramids etched into it, which reflect some of the emitted photons back into the silicon, where they can stimulate further emission. It converts electrons into photons with an efficiency of 1 percent: very low, but 100 times better than earlier silicon LEDs.

Unfortunately, LEDs are not as useful as more powerful, more tightly focused lasers. And after decades of struggle, some researchers doubt silicon's prospects. "If God had wanted ordinary silicon to emit light efficiently, he would not have given us gallium arsenide," said Elias Towe of Carnegie Mellon University (Pittsburgh).

Last March, a prototype carrying 1 Gb/s per channel over distances of less than 100 cm was successfully tested. The company is supplying development samples with 2.5-Gb/s channels and has a system with 10-Gb/s channels under development in the laboratory.

Another waveguide-based approach belongs to two-year-old Primarion (Tempe, Ariz.). The company is in the early stages of making optical circuitry that moves data over short distances at 10 Gb/s; the goal is to bring fiber all the way to the processor. In its design, the electrical signal travels across the circuit board to a laser-driver chip, which conditions the signal and prepares it to be sent. Attached to the laser driver is a module containing an array of 12 VCSELs. The laser beams travel over optical fiber to a similar device on another board, where photodetectors pass the signal to a receiver chip that reconditions it and converts it back into an electrical signal.

John Burns, Primarion's vice president of technology development, believes compact optical I/O devices will appear in computers in just two to three years. These combine processing circuitry and a laser or receiver in a single package with electrical contacts on the bottom and fiber-optic sockets on the side. Eventually, the company envisions a printed optical board, just like a printed circuit board but with polymer waveguides embedded in it instead of copper traces. Signal-processing circuitry could then sit next to whatever device needs the data, and signals would cross the board as light, minimizing the time data spends as electrons.

Fiber-based interconnects have already made their commercial debut, though not yet inside computers themselves. In late 2000, Agilent Technologies (Palo Alto, Calif.) began selling a fiber-optic connector for linking computers up to 300 meters apart. A version of the plug-in package, developed in cooperation with Anthony F. J. Levi, head of the advanced interconnect and network technology project at the University of Southern California (USC, Los Angeles), carries data at an aggregate rate of 10 Gb/s. It contains four transmitters and four receivers and connects to a similar package over four optical fibers. A 12-channel version runs at up to 40 Gb/s but transmits in only one direction at a time. Other companies now offer similar products for rack-to-rack connections among servers, data-storage centers, and network and telecom switches.

Agilent's next goal, according to David Dolfi, the company's manager of optical interconnects and networking, is to do something similar for the backplanes inside computers. To keep up with faster processors and more complex systems, he says, backplane bandwidth will have to grow by an order of magnitude, and that will take optical links. Barring some breakthrough in electrical links, Dolfi expects optical interconnects to move inside the box within three to five years.

Through the air

While the nearer-term, less complex interconnect schemes send light through waveguides, engineers interested in high-performance computing are developing chip-to-chip optical communication through the air. Such free-space interconnects could multiply the data throughput between chips a thousandfold. The gain comes from combining optics' high data rates with the fact that optical channels can be packed more densely than electrical ones. For one thing, the channels can form a two-dimensional array, rather than the one-dimensional row of copper traces or fibers on a board. For another, closely spaced optical channels do not suffer crosstalk.

Sadik Esener, a professor of electrical engineering at the University of California, San Diego, and his colleagues have developed a system in which three stacks of 16 chips each communicate with one another optically. A 16-by-16 VCSEL array sends beams through the air, where integrated microlenses focus them and a diffraction grating redirects them. The result is a tightly interconnected 3-D array of chips for supercomputing and other applications [see image below].

Most researchers believe such free-space interconnects lie further in the future than schemes that guide light through fibers or waveguides, partly because they demand alignment precision beyond what plug-in cards in an ordinary desktop computer can provide. And anyone who has opened up a home computer and seen the dust inside knows another obstacle.

Nevertheless, free-space optics attracts engineers who think about computing in revolutionary terms. It has long been an active research field for high-performance computing in North America, Japan, and especially Europe. The appeal is simple. Suppose you want a supercomputer that can be optimized for more than one task. You arrange the processors to work best with a given algorithm, perhaps for image processing, but when you switch to weather modeling, that architecture is no longer ideal. With free-space optical interconnects, you could reconfigure how the processors talk to one another simply by redirecting the lasers to different detectors.

John Neff, director of the Optoelectronic Computing Systems Center at the University of Colorado (Boulder), is studying microelectromechanical systems (MEMS) that might steer the beams. In Neff's experimental system, a microscopic mirror is attached to two metals with different thermal properties. Passing a current through them heats them; one expands more than the other, bending the mirror in the desired direction. But that is a slow process, topping out at switching speeds of about a kilohertz. Using electrostatic forces instead of heat to tilt the mirror yields about 10 kHz, and Neff thinks he can push that to 100 kHz.

Exactly when the economics of moving data will favor optical interconnects is a matter of debate. According to USC's Levi, when microprocessors reach speeds of 10 GHz, their inability to talk quickly with memory or with other chips in a multiprocessor system will begin to strangle their usefulness. "By then, [it will] become more and more difficult... [for] Intel [to] use these processors to make useful systems," Levi said.

Intel, of course, is not yet ready to sound the death knell for electrical interconnects. "We still see many opportunities to continue to push the development of copper signaling," said Wilfred Pinfold, technical director of the company's Microprocessor Research Lab.

For one thing, engineers can redesign chip and circuit-board architectures to shorten the distances signals must travel. Designers can add more pins to increase the number of I/O links, though not indefinitely. Intel can already move data at 10 GHz over the copper traces on a board and should be able to push that to at least 20 GHz. "We don't know where it runs out," Pinfold said. Still, his lab is pursuing advances in both electrical and optical interconnects, as are IBM, Advanced Micro Devices, and other competitors.

Much of the U.S. research on optical interconnects has been funded by a US $70 million program run by the Defense Advanced Research Projects Agency (DARPA, Arlington, Va.). The massively parallel computing systems it has in mind would be used to simulate nuclear reactions, but Ravindra Athale, who manages the research, expects the technology to trickle down to consumers as well. Progress should run from expensive, dense computing systems to simpler, cheaper ones, and from longer distances to shorter ones. Rack-to-rack connections have already arrived, board-to-board links may appear within a few years, and chip-to-chip interconnects perhaps in a decade or so.

On optical connections between one part of a single chip and another, most experts are vague. USC's Levi believes that lies beyond the reach of today's technology. But DARPA's Athale says that learning whether on-chip optical interconnects are practical is one of the reasons for doing advanced research; the answer, he suggests, is not yet final.

NEIL SAVAGE, a technology writer based in Lowell, Mass., wrote the profile of IEEE Medal of Honor recipient Herwig Kogelnik that appears earlier in this June issue of Spectrum.

The June 2000 issue of the Proceedings of the IEEE was devoted almost entirely to optical interconnects. It contains reports on free-space- and waveguide-based systems developed in laboratories in North America, Japan, and Europe, as well as an article by David A. B. Miller laying out the basic arguments for moving from electrical to optical interconnects.

The Georgia Institute of Technology hosts the Interconnect Focus Center, a consortium of universities that studies interconnect-related problems. Its website is at http://www.ifc.gatech.edu/.

Rodrigo Liang explains the philosophy and future of this $5 billion AI startup

Artificial intelligence, especially the huge neural networks meant to understand and interact with humans, does not fit well with the computer architectures that have dominated for decades. A number of startups recognized this in time to develop chips and, sometimes, the computers those chips power. Among them, Palo Alto-based SambaNova Systems is a leader. This summer the startup crossed the $1 billion mark in venture funding, lifting its valuation to $5 billion. It aims to handle the largest, most data-hungry neural networks with a custom technology stack spanning software, computer systems, and processors, and to sell it as a service rather than as a product. IEEE Spectrum spoke with SambaNova CEO Rodrigo Liang in October 2021.

IEEE Spectrum: What was the original idea behind SambaNova?

Rodrigo Liang: This is the biggest transition since the Internet. Most AI work is done on legacy platforms, [processor] architectures that have been around for 25 or 30 years. (These architectures tend to support the flow of instructions rather than the flow of data.) We thought, let's go back to first principles. We completely flip the paradigm: stop worrying about instructions and start worrying about data, about making sure the data is where it needs to be. Remember, today you have very little control over how data moves through the system. In a legacy architecture, you have no control over where the data sits or which cache it lands in.

"Once we create the hardware, it provides an opportunity to truly explore models such as GPT-3." — Rodrigo Liang, CEO SambaNova

So let's go back to first principles and ask what AI actually wants, rather than what other architectures force AI to be. What it wants is to create networks that are constantly changing: neural networks whose data paths connect and reconnect as the algorithms change.

We break things down into a set of different operators. Today the typical operators are add, subtract, multiply, divide, load, and store. Here, you need operators that help data flow, such as map, reduce, and filter. These are oriented around data rather than instructions.
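As a rough illustration of the distinction Liang is drawing, here is a toy contrast in Python between instruction-style code and the same computation expressed as a pipeline of dataflow operators (map, filter, reduce). It has nothing to do with SambaNova's actual software stack; it only shows the style of expression.

```python
# Toy contrast between instruction-style and dataflow-style code.
# Task: sum the squares of the positive activations in a list.
from functools import reduce

activations = [0.5, -1.2, 2.0, -0.3, 1.5]

# Instruction-oriented: an explicit loop of loads, compares, multiplies, adds.
total = 0.0
for a in activations:
    if a > 0:
        total += a * a

# Dataflow-oriented: the same computation as a pipeline of operators,
# which a compiler could map onto hardware so data streams between stages.
pipeline = reduce(
    lambda acc, x: acc + x,                     # reduce: accumulate
    map(lambda a: a * a,                        # map: square each value
        filter(lambda a: a > 0, activations)),  # filter: keep positives
    0.0,
)

assert abs(total - pipeline) < 1e-9
print(total, pipeline)  # 6.5 6.5
```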

Once you understand what these programs want to look like and how they want to flow, you can work out which basic units you need and how much software controllability you need for the networks to interconnect and flow most efficiently. Once you reach that point, you realize you can actually implement it in a processor: a dense, efficient, high-performance piece of silicon whose single purpose is to run artificial intelligence well. That is what we are building here at SambaNova.

Is this an example of the hardware-software co-design I hear about more and more?

Liang: One hundred percent. The first step is that you take the software, you break it down, and you see what you want it to do. Then we build the hardware. What the hardware lets us do is explore bigger problems than we could before. In a developer's lab, things stay small because you can't handle production-scale data sets. But once we created the hardware, suddenly there was an opportunity to really explore models such as GPT-3. People use thousands of GPUs to run those models, and hundreds of people to manage them. That is really impractical. How many companies can afford to hire hundreds of people to manage one model and to interconnect thousands of GPUs to run one thing?

SambaNova Systems' Cardinal SN10 Reconfigurable Dataflow Unit (RDU) is the industry's next-generation processor. The RDU is designed to let data flow through the processor the way the model was designed to run: freely, without bottlenecks. SambaNova

So we asked, "How do we automate all of this?" Today, we are deploying GPT-3 on behalf of our customers and operating models for them. We provide hardware as a software service. These customers subscribe to it and pay our monthly fee for the forecast.

So now we can ask: How is the software running? How is the hardware running? With each generation you iterate, and you get better and better. That is the opposite of traditional hardware design, where once you build a microprocessor you throw it over the fence, somebody uses it to do something, and maybe you eventually hear something back about it. Maybe you don't.

Because we define it from the software, build the hardware, deploy the software, and earn revenue from those services, the feedback loop is closed. We use what we build, and if something doesn't work well, we know very quickly.

"We are not trying to be everything for everyone. We have chosen some routes that we are very good at and really focus on the production of artificial intelligence."

So you are developing a new chip that incorporates the feedback from your experience so far?

Liang: Yes. We are constantly building hardware, and we are always building software; new software releases do different things and support new models that people may only just be hearing about. We have close ties to university research by professors at Stanford, Cornell, and Purdue. We stay ahead and can see what is coming, so our customers don't have to. They trust us to help them choose the right models, the ones about to take off.

Is this full-stack model, hardware plus software as a service, the future of computing companies in this field?

Liang: We are the only ones doing this today, for a few different reasons. First, to provide these differentiated services you really need a differentiated chip, so you start with people who can build high-performance silicon for this kind of computation, which takes certain skills. But also having the skills to build the software stack, then the skills to create models on behalf of customers, and then the skills to deploy on their behalf: those are genuinely hard things to do, and it is a lot of work.

For us, it is possible because we are very focused on specific workloads, specific kinds of models, and the specific use cases that are most valuable to enterprises. Then we focus on putting those into production. We are not trying to be everything to everyone. We have chosen some lanes that we are very good at, and we really focus on putting artificial intelligence into production.

"How will [small and medium-sized companies] compete in the next era of artificial intelligence? They need someone to come in and provide them with a lot of infrastructure so they don't have to build it themselves."

For natural-language models, for example, we take them to certain use cases and put them into production. For image models, we look only at high resolution. Today, the resolution the AI world works at is shockingly low. [Today's computers] cannot train on high-resolution images; they have to downsample them. We are the only ones today who can work at true, original resolution and train on the images as they are.

It sounds like your company must have employees who understand the complete technology stack, from software down to chips.

Liang: Yes. That is one of the most differentiating advantages we have. Chip companies know how to build chips, but they don't understand the stack. AI companies know how to do AI, but they can't do silicon. And compiler technology: think about how few companies actually write languages. It is hard to find people who truly understand these technologies across those gaps. We were able to put together a team that really does. If you want to do hardware-software co-design, you have to understand things across disciplines, because if you don't, you won't capture the advantage.

I think the other thing you are addressing is the expertise inside the customer's own house. Once you get beyond the Fortune 50, most companies don't have an AI department with 200 A-level data scientists; they might have five. Given that expertise gap between the giants and the rest of the Fortune 500, how will they compete in the next era of artificial intelligence? They need someone to come in and provide them with a lot of the infrastructure so they don't have to build it themselves. Most companies don't want to become AI shops. They have a perfectly healthy business selling whatever it is they sell. They just want the capabilities that AI brings.

SambaNova Systems DataScale is an integrated software and hardware system optimized for dataflow from algorithms to silicon. SambaNova DataScale is the core infrastructure for organizations that want to quickly build and deploy next-generation artificial intelligence technologies at scale. SambaNova

We do it on their behalf. Because everything is automated, we can service our systems and platforms more efficiently than anyone else could. Other services companies would have to staff up on their customers' behalf, and that is not practical. Along with the semiconductor shortage, there is a shortage of AI experts. If I had to hire as many people as each of my customers would otherwise hire, I couldn't scale the business. But because I can do it automatically and more efficiently, they don't have to hire all those people, and neither do I.

"Give me the entire data set; don't cut it."

What is the next milestone you are looking forward to? What are you working on?

Liang: Well, we have raised more than US $1 billion in venture capital at a $5 billion valuation, but the company is still very young. We are about to celebrate our fourth anniversary, so we have many ambitions, above all to help a wider customer base. Like I said, if you look at how many companies have actually put AI into production, it is still a small percentage. So we are very focused on getting customers into production with AI and getting our solutions into people's hands. You will see us talk a lot about big data and big models. If your problem is hard because you have too much data and the model you need is too large, that is our wheelhouse. We make no secret of our positioning: when you have a large enterprise model with a lot of data, let us solve that problem for you. We will keep deploying bigger and bigger models and bigger and bigger solutions for people.

Tell me about a result that surprised you. What is one of the coolest things you have seen the system do?

Liang: One of our partners, Argonne National Laboratory, is working on a project to map the universe. Can you imagine? They are drawing a map of the universe.

They had done a lot of work trying to map the universe, training AI on the very high resolution images they have collected over the years. But as you know, atmospheric artifacts cause a lot of problems, and the accuracy was actually not very good. You have to downsample the images and stitch them together, and then you still have all that atmospheric noise.

There are scientists much smarter than I am who can solve all of those problems. But we came in, shipped the system, plugged in the power, and within 45 minutes they were training. They mapped the whole thing without resizing the images, at higher accuracy than they had achieved a few years ago, and in far less time.

We are really proud of that. It is the kind of thing you are confident your technology can do, and then you watch great customers do things you didn't expect and get results that good.

Like I said, we are built for big. In e-commerce, with all your users and all your products: give me the entire data set; don't chop it up. Today they have to cut it down because the infrastructure won't allow it. In banking, all the risk you face across all your entities: let me see all the data. Across all these different use cases, more data produces better results. We firmly believe that if you have more data, you will actually get better outcomes, and that is what we are aiming at.

The pandemic helped propel electric bikes into the mainstream

Lawrence Ulrich is an award-winning automotive writer and the former chief auto critic for The New York Times and the Detroit Free Press.

When the U.S. House of Representatives passed the Build Back Better Act last week, a little-noticed provision set aside approximately $4.1 billion in tax credits to further stimulate an already booming electric-vehicle market that Elon Musk hasn't even set foot in.

Electric bicycles, or e-bikes, have shifted from novelty to mainstream at a remarkable rate. During the pandemic they have been a boon for hard-working delivery riders (and their impatient customers) and for commuters who would rather not arrive sweaty. Although ridicule tends to focus on the "purity" of cycling, the idea that e-bike riders are somehow lazy cheaters, electric assist is actually getting people off the couch for healthy exercise. It is especially popular with older or less-fit riders (which describes a lot of people) who might otherwise skip cycling altogether, put off by forbidding hills or long distances.

Although the powerful "Class 3" models test the patience of pedestrians especially in crowded cities such as New York, with an assisted speed of close to 30 mph, electric bicycles have now become the front and center of discussions about future urban transportation. They are a way to potentially free up valuable street space, provide alternatives to cars, and reduce energy consumption and harmful emissions. California, through its powerful Air Resources Board, recently allocated a $10 million rebate to electric bike buyers, which is a small-scale version of state or federal tax relief for electric car buyers. A variety of possibilities are inspiring cool technological ideas, including rainproof cargo bikes with a roof; to embedded wireless chargers and automatic stabilization systems on the road to help advanced riders. CityQ is accepting pre-orders for a four-wheel freight "bicycle" that it boasted as a "Tesla-like" bike.

An estimated 500,000 electric bicycles were sold in the United States alone in 2020, compared with about 210,000 plug-in vehicles.

According to market research firm NPD Group, the pandemic helped e-bike sales grow 145 percent from 2019 to 2020, more than double the growth of traditional bicycles. Exact industry sales figures are hard to pin down, but The New York Times cited experts who said Americans bought roughly 500,000 e-bikes in 2020, versus about 210,000 plug-in vehicles.

Industry analysts expect adoption to keep climbing. A report by a business research firm projects that the global e-bike market will grow from $32.5 billion last year to $53 billion in 2025, a compound annual growth rate of 9.9 percent. Even in bicycle-saturated Europe, e-bike sales grew 23 percent in 2019. Deloitte estimates that by 2023 there will be 300 million e-bikes on the world's streets. That is a lot of bikes, batteries, and muscle power saved for grateful riders. If you're not up to speed on e-bikes, or want to try one, here is a look at the technology, the terminology, and the players:

The technology behind electric bicycles falls into two broad categories, even if choosing between them is not so simple. A hub motor builds the motor directly into the center of a wheel (front or rear), forming a closed system independent of the bicycle's chain and pedal drive. There are two main types. Geared hub motors contain internal planetary gears for speed reduction, letting the motor spin efficiently at high speed while the wheel turns more slowly. Gearless hub motors connect the motor's stator directly to the bicycle axle, eliminating a key weak point, the gears; aside from the bearings, there are no moving parts to wear out.

Hub motors are relatively inexpensive, low maintenance, and mass-produced by the millions. Do-it-yourselfers can find complete 800-to-1,000-watt hub-motor kits for about $200, where a mid-drive setup might cost three to five times as much. A hub motor adds no extra stress or wear to the chain or derailleur, and it has another advantage over a mid-drive setup: if the hub motor dies, you can still pedal home, and vice versa; if the chain or a pedal breaks, the rider can carry on under electric power. The drawbacks? Nearly every hub motor has a single gear ratio, fine on flat ground but poor for climbing, because the motor cannot downshift the way a mid-drive can to fight gravity, and it may even overheat on long ascents. Hub motors can also make a bike feel unbalanced and awkward to steer, as if it were being pushed or pulled rather than pedaled. And with a motor mounted in the wheel, changing a tire is harder.

Some e-bike companies claim ranges of 80 or even 100 miles on a single charge, but experienced riders say that's possible only if most of those miles are downhill.

In contrast, "medium speed" bicycles have the motor positioned inside the frame and between the pedals on the bottom bracket. The motor power is transmitted to the rear wheels through a chain drive. Like electric cars, these motors have become lighter, stronger, quieter and more affordable. The biggest advantage-there is also a parallel disadvantage-is the transmission of power through the traditional chain and gear seat: the motor can provide the main torque at a lower gear and a higher speed on a steep hill or from a standstill. Like your pedal.

That thrilling surge of power keeps coming even on long climbs. The main drawback is pushing all that extra power through a humble chain: a professional cyclist can sustain roughly 400 watts for an hour, and most ordinary thighs can't manage even half that, yet an e-bike motor can put out as much as 750 watts continuously. That's why most mid-drive e-bikes come with upgraded chains. And if the chain breaks, you're going nowhere, just as on an old-fashioned bicycle.

On the plus side, the newer mid-drive motors are significantly smaller and lighter than hub units. Tucked into the frame, they make some e-bikes look so stealthy that onlookers can't tell they're electric.

For both types, a speed sensor or torque sensor detects pedal force or wheel rotation and activates the motor to provide useful forward thrust. Riders can usually adjust the level of electric assist, or simply pedal harder to coax a correspondingly bigger grunt from the motor. But the mid-drive brings another advantage: a true torque sensor detects the human force on the pedal cranks and dials in electric assist smoothly. Hub motors typically use a simple cadence sensor at the wheel and can produce jerky or unpredictable motor acceleration, especially uphill.
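The difference between the two sensing schemes can be sketched in a few lines of Python. The control laws and numbers below are illustrative assumptions, not how any particular controller actually works: the point is only that a torque-based controller scales with rider effort, while a cadence-only controller just switches a fixed assist level on or off.

```python
import math

def torque_sensor_assist(rider_torque_nm, cadence_rpm,
                         assist_factor=1.0, max_motor_watts=250):
    """Mid-drive style: assist scales with the rider's own pedaling power."""
    rider_watts = rider_torque_nm * cadence_rpm * 2 * math.pi / 60
    return min(assist_factor * rider_watts, max_motor_watts)

def cadence_sensor_assist(cadence_rpm, assist_level=0.5, max_motor_watts=250):
    """Basic hub-motor style: only checks whether the pedals are turning."""
    return assist_level * max_motor_watts if cadence_rpm > 0 else 0.0

# Soft pedaling vs. hard pedaling: the torque-based controller ramps
# smoothly with effort, while the cadence-based one jumps to a fixed output.
print(torque_sensor_assist(10, 60), cadence_sensor_assist(60))  # ~63 W vs 125 W
print(torque_sensor_assist(40, 80), cadence_sensor_assist(80))  # 250 W vs 125 W
```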

A big complication in estimating an e-bike's range is the number of variables: rider weight, wind and tire resistance, terrain and topography. Some e-bike companies claim ranges of 80 or even 100 miles on a single charge, but experienced riders say that's possible only if most of those miles are downhill. As a rule of thumb, a throttle-operated e-bike pairing a 500-to-750-watt motor with a 480-watt-hour (Wh) battery can cover only about 20 miles on battery power alone, a bit under 25 watt-hours per mile. Pedal-assist bikes go farther: roughly 15 watt-hours per mile, or about 32 miles on the same 480-Wh battery with the assist set around "medium." The price of electric assist is weight: a lithium-ion battery typically adds 6 to 8 pounds to a bike, weight your legs must haul once the energy runs out.
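The back-of-the-envelope math behind those figures is simply battery energy divided by energy used per mile. The sketch below recomputes the article's own rule-of-thumb numbers; the consumption values are those rough estimates, not measurements.

```python
# Range = battery energy / energy per mile, using the article's rules of thumb.

def range_miles(battery_wh: float, wh_per_mile: float) -> float:
    """Estimated range for a given battery size and consumption rate."""
    return battery_wh / wh_per_mile

battery = 480  # watt-hours
print(f"throttle (~25 Wh/mile):     {range_miles(battery, 25):.0f} miles")  # ~19-20
print(f"pedal assist (~15 Wh/mile): {range_miles(battery, 15):.0f} miles")  # ~32
```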

As the speed demons of the e-bike world, Class 3 models are usually allowed only on roads or in on-street bike lanes and are kept off bike paths and multi-use trails shared with pedestrians.

The battery can be mounted on a rear rack for easy access and removal, at the cost of a less-than-ideal location: too high and too far back, which hurts handling. A battery mounted externally on the down tube, the frame tube running down toward the cranks, avoids that problem, keeping the weight low and along the bike's central axis. Batteries integrated inside the down tube make for the sleekest look, leaving these e-bikes less bulky and more like traditional bicycles.

Driven by the national advocacy organization and industry trade association PeopleForBikes, more than 30 states have adopted a three-class system to standardize electric bicycles by the type of assist and the speed to which it will push you. All three classes limit the motor's output to 750 watts, or 1 horsepower.

However, the infrastructure available to each class remains a crazy quilt of local, state, and national regulations. As the speed demons of the e-bike world, Class 3 models are usually allowed only on roads or in on-street bike lanes and are kept off bike paths and multi-use trails shared with pedestrians. In Europe, electric mountain bikes, or eMTBs, are popular on non-motorized trails. American riders should note that the U.S. Forest Service, Bureau of Land Management, and National Park Service treat eMTBs no differently from dirt bikes, ATVs, or other motor vehicles, so even Class 1 bikes are banned from non-motorized trails. Some states, including Pennsylvania, Utah, and Colorado, have made exceptions for trails in state parks.
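For reference, the commonly adopted class definitions can be summarized as a small lookup table. The speed caps below (20 mph for Classes 1 and 2, 28 mph for Class 3) are the ones most states use; the article itself specifies only the shared 750-watt limit and the roughly 28-to-30-mph Class 3 ceiling, and where each class may ride still varies by jurisdiction, as noted above.

```python
# The three-class e-bike framework most U.S. states have adopted.
EBIKE_CLASSES = {
    1: {"assist": "pedal assist only", "throttle": False, "max_assist_mph": 20},
    2: {"assist": "throttle allowed",  "throttle": True,  "max_assist_mph": 20},
    3: {"assist": "pedal assist only", "throttle": False, "max_assist_mph": 28},
}
MAX_MOTOR_WATTS = 750  # roughly 1 horsepower, common to all three classes

for cls, spec in EBIKE_CLASSES.items():
    print(f"Class {cls}: {spec['assist']}, assist cuts out at "
          f"{spec['max_assist_mph']} mph, motor capped at {MAX_MOTOR_WATTS} W")
```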

The players, and what you'll pay

Electric bikes range in price from $1,200 for the Aventon Pace 350 Step-Through to $7,500 (or more) for "connected" bicycles such as the Stromer ST3 Sport. Stromer's luxurious "e-commuter" pairs a powerful rear hub motor (600 watts and 44 newton-meters of torque) with fat Pirelli tires and connected features such as GPS, remote locking and unlocking, statistics readouts, and wireless updates. Most of the biggest names in bicycles have embraced e-bikes: Giant, Trek, Specialized, Schwinn. Even automakers such as BMW, eager to broaden their mobility portfolios, are getting involved. Last week, Porsche acquired a majority stake in Greyp, a high-end Croatian e-bike company founded by Mate Rimac, the electric-supercar entrepreneur and creator of the $2.4 million Rimac Nevera. Rimac himself controls Bugatti Rimac, the newly merged dream-car maker in which Porsche holds a minority stake. That is lofty company for a bicycle maker: imagine technology trickling down from seven-figure electric Rimacs and Bugattis to the bike you ride to work or for fun.
