According to Dr. Meindl, future opportunities for gigascale integration,
or GSI, will be governed by a hierarchy of limits. The biggest challenge
Dr. Meindl sees is interconnect technology.
"The intrinsic switching delay of a 1-micron MOSFET is 10 picoseconds.
For a 100-nanometer (or 0.10-micron) MOSFET, it's 1 picosecond. If
I look at the response time of a 1 millimeter long low volt interconnect-
-or minimum geometry interconnect--for a 1-micron technology, it's
1 picosecond. In other words, the transistor takes 10 times longer
to switch than the interconnect.
"But for 100-nanometer (0.10-micron) technology, the interconnect
is going to take 100 times longer to switch than the transistor. Right
now, we're in a transition time where we're going from being transistor
switching time dominated to interconnect switching time dominated,
and we need some clever new architectures to overcome these problems."
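The arithmetic behind that reversal can be sketched with a simple first-order
scaling model (an illustration, not a calculation presented by Dr. Meindl):
assume the intrinsic gate delay shrinks roughly in proportion to feature size,
while the resistance-capacitance delay of a fixed-length 1-millimeter wire grows
roughly as the inverse square of feature size as its cross-section shrinks. The
sketch below is calibrated to the 10-picosecond and 1-picosecond figures quoted
above.

    /* Illustrative first-order scaling sketch, calibrated to the figures
     * quoted above (10 ps gate delay and 1 ps for a 1 mm wire at the
     * 1-micron generation).  Assumptions, not data from the talk: gate
     * delay scales linearly with feature size, and the RC delay of a
     * fixed-length wire scales as the inverse square of feature size.
     */
    #include <stdio.h>

    int main(void)
    {
        const double ref_gate_ps = 10.0;  /* intrinsic 1-micron MOSFET delay */
        const double ref_wire_ps = 1.0;   /* 1 mm interconnect at 1 micron   */
        const double nodes_um[]  = { 1.0, 0.5, 0.25, 0.10 };

        for (unsigned i = 0; i < sizeof nodes_um / sizeof nodes_um[0]; ++i) {
            double s       = nodes_um[i] / 1.0;      /* scale vs. 1 micron       */
            double gate_ps = ref_gate_ps * s;        /* gate delay ~ s           */
            double wire_ps = ref_wire_ps / (s * s);  /* fixed-length RC ~ 1/s^2  */
            printf("%4.2f um: gate %5.2f ps, 1 mm wire %6.2f ps, wire/gate %6.1f\n",
                   nodes_um[i], gate_ps, wire_ps, wire_ps / gate_ps);
        }
        return 0;
    }

Under these assumptions, the wire sits at a tenth of the gate delay at the
1-micron generation and comes out a hundred times longer at 0.10 micron, the
reversal Dr. Meindl describes.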
In addition to interconnect technology, the chip system itself is
subject to numerous limits, according to Dr. Meindl. "System-level
limits are countless. But I'd like to suggest to you that--after looking
at the various limits in the context of the hierarchy at the system
level--I think there are five generic limits. The first limit is imposed
by the architecture of the chip. A second is imposed by the switching
energy of the semiconductor technology; the third by the heat-removal
capability of the packaging; the fourth by the clock frequency, or timing,
you'd like to achieve. And finally, there is a limit imposed by the sheer
size of the chip."
Where we have headroom, he said, is in improving switching performance,
because transistors are going to get better and better. But where we
don't have any headroom is in improving interconnect technology. "What's
needed are very smart ideas and architectures, for one thing, to keep
interconnects short."
Dr. Meindl said that he sees a "flattening out" of the downward curve
of minimum feature size that has enabled regular, predictable die
shrinks and consequent performance improvements and cost reductions.
Discussing the industry's track record in reducing minimum feature
size (in microns) by calendar year, he noted, "It was about 25 microns
in 1960, 2.5 microns in 1980, and there's no doubt that the minimum
feature size in the year 2000 for commercial products is going to be
in the 0.25-micron to 0.18-micron range.
"Now after that I show three possible scenarios which I call the 125-
nanometer pessimistic scenario, the 62-nanometer realistic scenario,
and the 31-nanometer optimistic scenario. (Regarding the 62-nanometer
scenario) When we reach a 62-nanometer minimum feature size, corresponding
to a 50-nanometer minimum channel length for bulk technology, we're
going to flatten out and stop scaling down. Why? Because we will be
softly colliding with the minimum allowable dimensions of bulk MOSFETs
and no matter how much money we invest we're not going to defeat the
laws of physics."
The long-discussed move away from optical lithography was cited as
another factor in the impending slowdown of the industry's ability
to reduce feature size.
"I'm showing a slowdown in the rate of scaling after the 125-nanometer
generation, and this slowdown is to about half the rate of the historical
scaling. Why is this slowdown starting at the 125-nanometer level?
The reason is that, I don't know of anyone in the microlithography
community who is saying we can carry optical lithography to feature
sizes below 125 nanometers.
"I would say that the biggest mistake I've seen repeated every few
years for many years now is a projection of when optical lithography
would run out of gas. But once again, I think that it's so universal
now, it's so agreed upon," that it is going to happen in the near
future, Dr. Meindl said.
"I'm suggesting that once we get to the 125-nanometer dimension we'
re not going to be able to use optical lithography. That means we
will need a new lithography technology." That, in turn, means the
industry will need, "a new masking technology, new resist technology,
and even new metrology. I think those technological problems and
associated economic problems are going to slow down the rate of scaling
after we hit 125 nanometers."
Finally, if physics and lithography don't slow things down, Dr. Meindl
said, economics might.
"If we feel very confident that physics allows us to continue to scale
downward, and we feel very confident that we have a sub-optical lithography
that is manufacturing-worthy, then is it going to be a tolerable business
risk to invest $5 billion in a new factory in the year 2000? And it
will only go up after that."
Despite this pessimistic-sounding projection, Dr. Meindl also predicts
that the industry will achieve a 1-billion-transistor chip by the year
2000 and a trillion-transistor device by 2020.
"Even for the pessimistic scenarios, we're going to have a billion-transistor
chip by the year 2000. We're going to see gigabit memory chips by that
year. As we look out at the year 2020, we're going to be approaching
a trillion-transistor chip, or terascale integration. I think that's
the prospect that we have."
Also at the ninth annual Hot Chips Symposium:
* During the High-End CPUs session, Brad Burgess, chief architect
for Motorola at the Somerset Microprocessor Design Center, provided
additional technical details on the recently introduced MPC750 PowerPC
processor (EN, Aug. 4). Mr. Burgess noted that while the device has
four stages, "from a programming perspective it's a three-stage pipeline."
In the device's load-store unit, store gathering is supported for word-size,
cache-inhibited, and write-through stores. "This is to help the
bandwidth of the 64-bit architecture get utilized better," Mr. Burgess
said.
* David Papworth at Intel provided a deeper look at the Pentium II,
introduced earlier this year, which can be described as Intel's implementation
of MMX multimedia technology on the Pentium Pro architecture. MMX
consists of 57 new instructions that were added to provide multimedia
capability. (A brief illustrative sketch of the packed-integer style
of operation behind those instructions appears after this list.) When
asked whether Intel is looking at the possibility of adding more multimedia
instructions to the MMX set, Mr. Papworth responded, "Yes, we are always
looking at ways to enhance performance." He noted, however, that "We
have to consider the relative cost and inertia effects" of changing
the MMX instruction set at this point.
* Kevin Normoyle, lead engineer for the "I" series UltraSparc microprocessors
at Sun Microsystems, discussed Sun's planned UltraSparc IIi (EN, Antenna,
Oct. 7, 1996), code-named "Sabre." The UltraSparc IIi is the first
chip in Sun's "I" series, designed to provide a balance of price/performance
and ease of use. It features four-way superscalar instruction issue,
a synchronous external L2 cache, and two separate clock domains, with
the internal clock logic running at 132 MHz.
* John Wharton, head of Applications Research of Palo Alto, acted
as panel moderator for a session titled "If I Were Defining 'Merced,'"
during which panelists Keith Diefendorff, director of microprocessor
architecture at Apple; Bruce Lightner, VP of development at Metaflow;
John Novitsky, VP of marketing at MicroModule Systems; Martin Reynolds,
senior analyst at Dataquest; and Pete Wilson from Motorola took a
sometimes serious and sometimes tongue-in-cheek look at the planned
Intel-Hewlett-Packard CISC/RISC architecture due out in the next couple
of years.
For example, Mr. Lightner noted that Microsoft (and its customers)
are still using 16-bit code by and large; that full 32- to 64-bit conversion
will be hard, slow, impossible; that software will have to be rewritten
for 64-bit APIs; that we need to start sometime; and that 64 bits STILL
won't be enough.
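As noted in the Pentium II item above, here is a minimal sketch of the
packed-integer style of operation the MMX instructions provide. It uses the
standard mmintrin.h compiler intrinsics (build on x86 with, for example,
gcc -mmmx); it is an illustration of the instruction class, not code shown
at the symposium.

    /* Minimal illustration of MMX-style packed-integer arithmetic using
     * the mmintrin.h intrinsics (an example of the instruction class,
     * not code from the symposium).  One instruction adds four 16-bit
     * values at once.
     */
    #include <stdio.h>
    #include <string.h>
    #include <mmintrin.h>

    int main(void)
    {
        __m64 a   = _mm_set_pi16(4, 3, 2, 1);     /* packs 1,2,3,4 (low to high) */
        __m64 b   = _mm_set_pi16(40, 30, 20, 10); /* packs 10,20,30,40           */
        __m64 sum = _mm_add_pi16(a, b);           /* PADDW: four adds in one op  */

        short out[4];
        memcpy(out, &sum, sizeof out);
        _mm_empty();                              /* EMMS: release the MMX state */

        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
        return 0;
    }

The MMX set covers this kind of packed add along with packed multiplies,
pack/unpack, shifts, and logical operations, all on 64-bit MM registers.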
COPYRIGHT 1997 Cahners Publishing Company
DeTar, Jim. "Gigascale integration hitting the wall?" (1997 Hot Chips Symposium in Palo Alto), Electronic News, Vol. 43, Sept. 1, 1997, pp. 14(2).