Quote: FiringSquad have completed an article which outlines the rise and fall of the former graphics giant 3dfx. An interesting aspect of this article is the "behind-the-scenes" images which accompany it. Buried within the article was this intriguing piece of information.
Fear - The first part based on 3dfx and Gigapixel technology. Fear actually consisted of two separate parts: Fusion and Sage II. Fusion was derived from combining 3dfx and Gigapixel technology. This was a part targeted at DirectX 8-9 (though the specification was nowhere near final). Being from Gigapixel, it was a deferred rendering architecture. At the time of 3dfx closing shop, Fusion was considered RTL complete and tape-out was expected in March of 2001. Sage II was slightly behind Fusion, but it was gaining ground.
Introduction
We've chronicled the humble beginnings of industry titans ATI and NVIDIA in the past, but for today's article we're doing something a bit different. Rather than discuss the origins of a 3D company you're familiar with, 3dfx, we were given the unique opportunity to learn more about what was going on within the company around the time of its sudden downfall. However, unlike previous industry articles we've published, this one comes straight from the horse's mouth!
For obvious reasons our source would like to remain anonymous, but we've known him for quite a while and can assure you that he is indeed legit. He will briefly go over the early days of 3dfx before going into detail on each of the company's products. From the original Voodoo Graphics chipset all the way to unannounced parts such as Fearless and Mojo, it's all covered here. So without further ado, let's hear what he has to say!
In the beginning...
It was a sad loss for the entire graphics industry when 3dfx announced they were closing their doors. Within the last year and a half there have been several articles on the subject of 3dfx's demise, looking into both what went wrong and the next generation of products that would have been. Unfortunately, these authors were ill-informed on the subject, getting facts wrong and missing key points. This article will attempt to clear up some of those facts. It will not present every single event that occurred at 3dfx, as that would take an entire book. Rather, it will highlight key events from across the life of the company.
With the initial introduction of the Voodoo Graphics chipset, 3dfx was given a substantial performance lead. As one of the first true 3D accelerators, its competition was Rendition's Verite, S3's ViRGE and NVIDIA's NV1. PowerVR soon followed with a part, but it was plagued with compatibility issues. Among these competitors, only the Verite was another true 3D accelerator, with S3's "decelerator" ViRGE taking a large part of the OEM market. Having achieved the performance lead, 3dfx was crowned the winner, and the market was theirs.
While it was a product that was not originally scheduled, Voodoo2 soon followed. Voodoo2, much like every other product that followed, was created to fill a gap in 3dfx's product cycle. Voodoo2 again took the market in performance, more than doubling Voodoo Graphics' performance in SLI configurations. Yet through all this, the goal was to deliver Rampage.
Voodoo Banshee
Management changes
It was sometime in between the Voodoo/Voodoo2 period that Greg Ballard came onto the scene as CEO. He was there for marketing, and he was good at it, though something was missing when it came to technology. He pushed a variety of 3dfx marketing campaigns that helped bring 3dfx to the top. Problems apparently came from his lack of understanding of how the graphics industry functioned. Ballard wanted to deliver a single-chip 2D/3D solution, as the competition had already done.
This would allow 3dfx to enter the mainstream and OEM markets, increasing revenue. It would also renew trust in 3dfx as their ill-fated Voodoo Rush (a multi-chip 2D/3D solution with a separate vendor's 2D core) had created doubters. With limited engineering resources at the time, the only option for this to occur was to remove staff from another project and dedicate them to this. Thus Rampage lost vital engineering resources and Banshee was created.
Voodoo Banshee
With the release of Voodoo Banshee, 3dfx was able to offer a solid 2D/3D solution. Unfortunately, all was not pleasant in the land of 3dfx. With the second texture unit removed from Banshee's pixel pipeline, multi-texturing performance was below that of a single Voodoo2 solution.
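The multi-texturing penalty described above is simple pass arithmetic: a chip needs one rendering pass per group of textures its texture units can apply, and each extra pass divides effective fill rate. The sketch below illustrates this; the clock speeds and the two-texture workload are illustrative assumptions, not measured figures.

```python
from math import ceil

def effective_fill_rate(clock_mhz, pipelines, texture_units, textures_per_pixel):
    # Each pass applies at most `texture_units` textures, so a dual-textured
    # pixel on a single-TMU chip must be drawn twice.
    passes = ceil(textures_per_pixel / texture_units)
    return clock_mhz * pipelines / passes  # effective megapixels/sec

# Dual-textured scene (e.g. base texture + lightmap), illustrative clocks:
banshee = effective_fill_rate(100, 1, 1, 2)  # one TMU  -> two passes
voodoo2 = effective_fill_rate(90, 1, 2, 2)   # two TMUs -> one pass
```

Even with a higher core clock, the single-TMU part falls behind the moment games lean on dual texturing, which is exactly the situation the article describes.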
Additionally, NVIDIA had for the first time become a real competitor with their TNT graphics core. NVIDIA's TNT offered similar performance to Banshee (in some cases slower, in others faster). Several additional features were built into TNT that were not included in Banshee, such as 32-bit color and textures, as well as high-resolution textures. This made it a somewhat more appealing solution for consumers and developers, and 3dfx began losing market share and developer confidence.
With all this having taken place, and Banshee having already consumed much of Rampage's resources, 3dfx was forced to take Rampage back to the drawing board. It was no longer the high-end part they had hoped for. The anticipated market-leading performance and feature set no longer existed, as NVIDIA had gained considerable ground and the part was simply taking too long to deliver. Thus, with Rampage substantially delayed, another stopgap product would be required, this one in the form of Avenger.
Avenger/Delays
Avenger becomes Voodoo3
Avenger, which later became known as Voodoo3, was 3dfx's follow-up to Banshee. Originally this product was to be named Banshee2, for that is really what it was. However, 3dfx management knew that the Voodoo name carried much greater brand recognition, so they opted for that name. Voodoo3's feature set was identical to that of Banshee; it was simply a higher-clocked version of the previous chip with a second texture unit installed. Performance was definitely competitive, with NVIDIA's TNT2 and TNT2 Ultra often falling behind, but the lack of new features made NVIDIA's solution more appealing once again. This hurt 3dfx's sales and caused them to further lose market share and developer confidence.
Just prior to the launch of Avenger, the merger with STB Systems was announced. STB had been an add-in board manufacturer and they had pretty much dominated the OEM market with products in nearly all the major OEM systems. For 3dfx, the hope was to get their products into OEM systems. For STB the hope was to finally have a say in each chip's feature set.
Many would say a mistake 3dfx made in all this was cutting off supply to other board manufacturers. With several of those companies having strong brand recognition in the United States and Europe, this reduced potential sales. Additionally, Asian board makers, which typically had a niche with Asian system builders, were cut off. This hurt 3dfx's sales throughout the remainder of the company's existence.
Product delays
With the oncoming merger almost complete, many at STB were under the impression that 3dfx's next part, Rampage, was all but taped out. This would have been true had 3dfx not decided to make some last-minute changes to the design. These were not minor changes either, but major feature introductions. The most important new addition was SLI support. Had SLI not been an included feature, what would be called VSA-100 would, in its original form, have been nothing more than a TNT2 Ultra. 3dfx knew this would not be an appealing solution, so Rampage was redesigned to allow for multi-chip boards, theoretically doubling performance (or more, depending on how many chips were used). Additionally, 3dfx engineers added FXT-1 texture compression.
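The multi-chip scaling mentioned above is easy to picture: in scan-line interleave, each chip renders a subset of the screen's lines, so ideal fill rate scales with the chip count. The sketch below assumes a simple every-Nth-line split and an illustrative per-chip fill rate; the actual interleave granularity 3dfx used may have differed.

```python
def sli_assignment(height, num_chips):
    """Map each scanline to the chip that renders it (every Nth line)."""
    return {line: line % num_chips for line in range(height)}

def theoretical_fill_rate(per_chip_mpixels, num_chips):
    # Ideal scaling: each chip only touches 1/N of the screen.
    return per_chip_mpixels * num_chips

lines = sli_assignment(8, 2)               # chip 0 takes even lines, chip 1 odd
dual_chip = theoretical_fill_rate(333, 2)  # illustrative per-chip fill rate
```

Geometry work is not split this way, which is one reason real SLI scaling fell short of the theoretical doubling.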
Adding technology meant additional delays. Delays came not only from adding features, but also from the new issues these additions spawned. Problems crept up along the development path and delays mounted. Officials within 3dfx did not help matters either: there were serious delays from simple miscommunications within the company.
One example was somebody apparently forgetting to go to Asia to pick up the first batch of completed VSA-100 chips. Another was a mistake in QA: Quake3 was repeatedly locking up their test system on Voodoo5 and they could not determine the cause. After a two-week delay the cause was found to be a bad Ghost image that had been used repeatedly. These and other problems set VSA-100 back by weeks.
GeForce vs. Voodoo5
NVIDIA launches GeForce
While all this was developing, NVIDIA was coming on strong. They had released their GeForce256 chip, which took a nice performance lead over Voodoo3. As a follow up, NVIDIA brought the GeForce2 to market. These two parts offered a considerable number of additional features that 3dfx did not provide with Voodoo5. While 3dfx did offer anti-aliasing that was considerably superior to NVIDIA's, they had a tough time selling it due to NVIDIA's aggressive marketing and technology demos. From this, 3dfx lost the majority of their developer support and a considerable amount of consumer confidence.
Voodoo5 6000 problems
In the end, Voodoo5 was a fairly successful product. However, the high-end board, Voodoo5 6000, was forever delayed. There were many happenings with this board, but it boils down to this: 3dfx did not consider the design well enough before the board was announced.
The AGP specification simply was not designed with this type of product in mind. Many attempts were made to work around this, even completely changing the board design and the bridge chip used. Yet in the end, Voodoo5 6000 was canceled in the last weeks of 3dfx.
The specific issue that resulted in the final cancellation was an AGP problem with certain motherboards. While most motherboards did function, several did not quite meet the AGP spec, resulting in the boards not working. Though a BIOS fix on those motherboards would likely have resolved the issue, and though the incompatible boards were few in number, 3dfx chose not to release the product. And thus they again failed to retake the performance crown they so badly wanted, and lost even more consumer confidence.
While all these events were occurring, 3dfx was losing money. The board manufacturing plant in Mexico was never at capacity, reducing profits on each graphics board sold by roughly 10% from the intended 25% margin. Only in 3dfx's final months did management decide to start selling out the remaining factory space, filling the product lines. This brought the board plant to near profitability on its own, but this was just one change that was too little, too late.
Money/Rampage
More inside details
3dfx was notorious for spending money. In the last year or so, roughly $30,000-$50,000 was spent monthly on lunches. This did not include the additional snacks and drinks provided to employees. Hiring didn't stop until the last few weeks, with all of us holding out hope that the company would pull through. Of course, that did not happen.
Could 3dfx have lasted? Perhaps. They were offered a line of credit, but the board opted not to accept it, as they would not agree to the terms. Rumors within the company also circulated that an investor had expressed strong interest in the company but backed out over a simple "goof" on the board's part (specifically, it was said to be their mentioning to the investor the possibility of a buyout by another company). But what would the future have held for 3dfx?
Next generation parts
Daytona - 3dfx's first low-end OEM part. Daytona was effectively a VSA-100 part with a DDR memory controller and a 64-bit memory bus. The idea was to deliver a cheaper version of the VSA-100, with the 64-bit bus making a notable dent in cost. Daytona simply could not be finalized, though. It would tape out and a bug would be found, then tape out again and another bug would be found. Fortunately, a chip was not fabricated after every tape-out, with the final revision being A7 silicon. In the end, this resulted in considerable delays, and final Daytona silicon never came to life.
(https://www.forumzone.it/public/uploaded/re-voodoo/3dfx-testing.jpg)
Rampage (Spectre) - 3dfx's next high-end graphics part was capable of quad-chip support. Rampage silicon had come back from the fab just weeks before the announcement of 3dfx's demise. Sage, Rampage's geometry processor, had recently taped out as well, so expectations were high. The first revision of Rampage silicon was able to achieve 200 MHz clock frequencies without active cooling. Originally, the expectation had been to ship it at 200 MHz, but with this capability, there was nothing limiting it from 250+ MHz clock speeds.
Inside The Demise of 3dfx
September 26, 2002 - Brandon Bell
Rampage bringup
This is what you get when your socket isn't connecting well
Screenshot of Quake 3 running on Rampage
Rocket fire
Of interest are the two bugs that did exist in Rampage silicon. The first was the DAC being flipped, reversing the color channels. It is hard to be certain how this bug slipped through, but one possible reason it went undetected is that this was one of the few places on the chip that had not been simulated. The temporary fix was an interesting little board attached between the monitor cable and the VGA connector that flipped the color channels back, making the display correct.
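The pass-through board works because a channel flip is its own inverse: applying the same reversal a second time restores the original order. A minimal sketch (the sample pixel value is illustrative):

```python
def flip_channels(rgb):
    # Reverse the channel order, as the flipped DAC effectively did.
    r, g, b = rgb
    return (b, g, r)

pixel = (255, 64, 0)                  # intended color
on_screen = flip_channels(pixel)      # what the buggy DAC put out
corrected = flip_channels(on_screen)  # what the pass-through board restored
```

Since the flip touches only the ordering of analog channels, a purely passive rewiring in the cable path was enough; no driver or silicon change was needed for bring-up.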
The second bug was an AGP issue that initially caused some problems but was corrected for bring-up boards by FIB'ing the chips (reworking them with a focused ion beam).
Here are the specs on Rampage, and its companion chip, Sage:
Rampage
200+ MHz Core
Approximately 30 million transistors
4 Pixel Pipelines
8 textures per-pass
DX 8 Pixel Shader 1.0
Quad-Chip support
Sage
50 million triangles/sec sustained
150 million triangles/sec real world
DX8 1.0 Vertex Shader
Approx. 20 million transistors
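Working the listed numbers through: headline pixel fill rate is core clock times pixel pipelines, and the quad-chip figure below assumes ideal scaling (real multi-chip boards would scale less than linearly).

```python
# Rampage figures taken from the spec list above.
core_mhz = 200
pipelines = 4
textures_per_pass = 8

pixel_fill = core_mhz * pipelines             # megapixels/sec per chip
texel_rate = pixel_fill * textures_per_pass   # peak texels/sec if all 8 textures applied
quad_chip_fill = pixel_fill * 4               # quad-chip support, assuming ideal scaling
```

At the 250+ MHz clocks mentioned earlier, the same arithmetic scales each figure up by 25% or more.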
Next generation cores
Tantrum - A single-chip combination of Rampage and Sage. Targeted at the OEM market, its performance would be lower than a Rampage-Sage combination, at a considerably reduced cost.
Fear - The first part based on 3dfx and Gigapixel technology. Fear actually consisted of two separate parts: Fusion and Sage II. Fusion was derived from combining 3dfx and Gigapixel technology. This was a part targeted at DirectX 8-9 (though the specification was nowhere near final). Being from Gigapixel, it was a deferred rendering architecture. At the time of 3dfx closing shop, Fusion was considered RTL complete and tape-out was expected in March of 2001. Sage II was slightly behind Fusion, but it was gaining ground.
Fusion
250+ MHz Core
Approx 60 Million transistors
4 pixel pipelines
8 textures per-pass via loopback
Deferred Rendering Architecture
DX8-DX9 Pixel Shader
Sage2
100 Million Triangles/sec Sustained
300 Million Triangles/sec Theoretical
DX8-DX9 Vertex Shader
Fearless - A single-chip Fusion-Sage2 part; comparable to what Tantrum was to Rampage.
Mojo - The distant future of 3dfx. This was based on an entirely new generation of design and was considered the next generation of deferred rendering. Targeted at DX9 and higher, it had a considerably extensive feature set. With Fear's anticipated performance at such a high level, the raw performance specifications of Mojo were actually slightly lower. Unlike Fear and Spectre, Mojo was a single-chip solution, integrating the geometry processor with the pixel pipeline.
Conclusion
Did 3dfx sell out? Perhaps. Many within the company thought so. Many fans of the company felt let down as well. Members of the board are reported to have received notable perks from the sale of 3dfx's name and IP and the dissolution of the company. And of course the end of an era came. Certainly it was a fun era, but as they say, all good things must come to an end.
Here at the end he's talking about the new generation of cores, maybe..................
Aleeeeeeeeeeeeeeeeeee!!!!
Posted by - re-voodoo on 06 October 2002 at 19:11:20
TRANSLATE IT! :D :h
Or are they maybe talking about the past?!?!?
I don't understand some of the sentences, is anyone better than me at translating?????
Posted by - re-voodoo on 26 September 2002 21:25:45
:lin: MIND-BLOWING :lin:
Give me a bit of time, I'll post it translated in a few days, DO YOU WANT IT?????
if you're not interested I'll spare myself the effort!!!!
I'm very interested, especially if it talks about the future...
it should be the history of 3dfx and of what Sage and Rampage would have been.... I'd already seen it...
I'm interested too.
the problem is I don't know English!!!! :(:(:(
bye ;)
Posted by - davide on 26 September 2002 23:44:21
OK, then give me a few days, I'll hand it to my girlfriend, who eats English for breakfast!!!!!
(she's practically an English teacher)
I could help you, since I finished last year with a 9, but I don't have time! If I do get time though, I'll translate the Mojo part :D
The 3dfx name is well known to many graphics enthusiasts: this American company was the first to revolutionize the world of video games with its Voodoo 3D accelerator, which transformed gaming graphics in the second half of the '90s.
On the FiringSquad site, at this address, an extremely interesting retrospective on 3dfx has been published, with plenty of information on what 3dfx's future products were meant to be before the company was bought by nVidia.
What a shame though, at the beginning I really was..............and by the end......
re-voodoo, in any case, whatever the news of the future may be, I don't think a company called 3dfx will ever be recreated... maybe someone else will make use of its research and designs, but sadly the 3dfx name is destined to disappear....
all you need is a translator
Glide must know some very important things about this whole affair.
oh yes........:h :h :h
ok, as promised I have the translation, just give me a bit of time to write it up, and once it's done I'll post it!!
BYE
Quote: Product delays
With the oncoming merger almost complete, many at STB were under the impression that 3dfx's next part, Rampage, was all but taped out. This would have been true had 3dfx not decided to make some last-minute changes to the design. These were not minor changes either, but major feature introductions. The most important new addition was SLI support. Had SLI not been an included feature, what would be called VSA-100 would, in its original form, have been nothing more than a TNT2 Ultra. 3dfx knew this would not be an appealing solution, so Rampage was redesigned to allow for multi-chip boards, theoretically doubling performance (or more, depending on how many chips were used). Additionally, 3dfx engineers added FXT-1 texture compression.
Adding technology meant additional delays. Delays came not only from adding features, but also from the new issues these additions spawned. Problems crept up along the development path and delays mounted. Officials within 3dfx did not help matters either: there were serious delays from simple miscommunications within the company.
One example was somebody apparently forgetting to go to Asia to pick up the first batch of completed VSA-100 chips. Another was a mistake in QA: Quake3 was repeatedly locking up their test system on Voodoo5 and they could not determine the cause. After a two-week delay the cause was found to be a bad Ghost image that had been used repeatedly. These and other problems set VSA-100 back by weeks.
Thanks a lot for taking the trouble to translate the text. Anyway guys, don't despair, something similar will come back...:cool:
no problem, anyway I've only posted a part.
since it's long I'll only post the most interesting parts, OK!!!!
who can translate this for me:
3DFX's last words:
'...we deeply regret [this action]. Again, we want to extend our sincerest thanks to every one of you who helped 3dfx revolutionize 3D graphics and 3D gaming on the PC. Rest assured, the 3dfx legacy will live on through the combined strengths of these two great companies. Sincerely, Scott Sellers 3dfx Founder and CTO'
bye and thanks
aH AH!!!!!! I almost forgot!!!!!!!!
Quote: Editor's Note: Tim Zegers is the owner of Rashly Productions, and creator of arguably the most potent 3dfx fan site on the internet. We recently contacted Tim for his opinion on 3dfx's past product releases and business decisions. Tim took the liberty of going one step further and hypothesizing on future nVidia offerings making use of 3dfx technology as well. The ensuing article is the product of our request, as presented by renowned 3dfx fan and aficionado Tim Zegers. It's a very interesting read that we hope you thoroughly enjoy.
Background
Ever since the demise of 3dfx on December 15, 2000, there has been great speculation on what 3dfx could have done if they were still in the market. What if they had had another year? Would the infamous Rampage have bailed them out of financial turmoil?
A little primer (for those of you who don't know): 3dfx was the biggest pioneer in 3D gaming on the PC. Their Voodoo 3D accelerator revolutionized PC gaming, making it a viable platform able to compete with the console systems of its day. Sure, 3dfx wasn't the first to offer 3D acceleration on the PC, but they made it more mainstream than ever before. With the release of the Voodoo Graphics chipset (and later the Voodoo2), 3dfx had put a headlock on the then-feeble competition. Everything in 3dfx-land was dandy until some questionable business decisions came into play following the release of the Voodoo2.
Many are not aware that 3dfx's infamous Rampage (next-generation) graphics chip was scheduled to be released immediately following the Voodoo2. During this time 3dfx acquired STB, and was undergoing a transition in which they would be working with new employees and manufacturing video cards in-house (rather than selling graphics chips to third-party board manufacturers).
As a direct result of the pending changes at 3dfx, there were some delays in the production of the next-generation Rampage chips. With the ensuing threat of a 32-bit, marketing-pumped nVidia TNT2 release, 3dfx couldn't sit out this product cycle entirely. They released the Voodoo3 (essentially as a placeholder) until Rampage was ready for the big-time.
It should be stated here that the Voodoo3 was a solid product. At the time of its release, it had the industry's fastest 3D, and arguably the best 2D image quality as well. What single-handedly hindered the Voodoo3's success was the release of nVidia's TNT2 several months after Voodoo3-based video cards hit store shelves.
The TNT2 wasn't as fast as the Voodoo3 on paper, but it offered faster Direct3D and (many times) OpenGL framerates than 3dfx's offering. Perhaps more importantly, 3dfx lost the marketing battle fought by nVidia in regards to 32-bit color support. Even though the TNT2 couldn't necessarily run games at desirable framerates in 32-bit mode, many consumers saw "32-bit" and noticed the numerical value to be higher than 3dfx's claimed "16-bit" rendering and based their buying decision upon this fact.
Despite 3dfx's attempts to stress that 32-bit rendering was not necessary at that time in gaming, and their efforts to stress that the Voodoo3 made use of a quality 22-bit post filter, the general public bought into nVidia's side and 3dfx lost substantial market share to nVidia. Thankfully for 3dfx, Voodoo3 video cards still sold well, and 3dfx wasn't in any real trouble, yet.
3dfx's tumble downwards
In purchasing STB, 3dfx immediately lost the OEM backing they once had in the past. Seemingly every OEM that 3dfx had left behind placed an order with nVidia for their graphics chips. Indirectly, 3dfx had dug themselves into a hole. They would be forced to produce all of their newer video cards in-house or sell to third-parties at lower prices. They chose to stick with in-house production just as they'd done with their Voodoo3 cards.
Unfortunately, 3dfx was having trouble releasing their next line of chips, the VSA-100 series, in a timely fashion. The original plan was to release the VSA-100-based Voodoo4 and Voodoo5 in early 2000, but they weren't ready for the spotlight in January, or February, or March... (you get where this is going). Not until the summer did consumers find VSA-100-based products on store shelves.
Instead of competing with nVidia first-generation GPU video cards, 3dfx was going head-to-head with nVidia's refined GeForce2 product line. To 3dfx's credit, the VSA-100-based video cards had significantly better image quality than their GeForce2 counterparts. (3dfx had vowed to improve the image quality beyond their competition following all the snubbing incurred from the 32-bit vs 16-bit debate months earlier). But the masses were looking for speed alone, and as such, the GeForce2 handily snatched the crown from 3dfx's kingdom. 3dfx had lost the speed game they so handily dominated years before.
To add to the misery, 3dfx didn't ship nearly as many Voodoo5 video cards as they had initially projected. They had a surplus of memory chips and circuitry left over because demand was not increasing for their products as time progressed.
Despite the purchase of GigaPixel in March of 2000, the hyped tile-based rendering technology wasn't implemented soon enough to help alleviate memory bandwidth issues. 3dfx continued to use the traditional "brute force" approach, loading their cards with an unheard-of 64MB and (proposed) 128MB of video RAM. With sales of Voodoo5 cards lagging, 3dfx had purchased too many RAM chips and soon found themselves in debt to memory companies.
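For contrast with the brute-force approach, here is a minimal sketch of the tile-based idea GigaPixel was known for: bin triangles by the screen tiles they touch, then render one tile at a time out of small on-chip memory instead of streaming the whole frame through external RAM. The tile size and data structures are illustrative assumptions, not details of GigaPixel's hardware.

```python
from collections import defaultdict

TILE = 32  # pixels per tile side (illustrative)

def bin_triangle(bins, tri_id, bbox):
    """bbox = (x0, y0, x1, y1) in pixels; record the triangle in every tile
    its bounding box overlaps. Rendering then walks one tile at a time."""
    x0, y0, x1, y1 = bbox
    for ty in range(y0 // TILE, y1 // TILE + 1):
        for tx in range(x0 // TILE, x1 // TILE + 1):
            bins[(tx, ty)].append(tri_id)

bins = defaultdict(list)
bin_triangle(bins, tri_id=0, bbox=(10, 10, 40, 20))  # spans two tiles
```

Because a whole tile's depth and color samples fit on-chip, overdraw and anti-aliasing traffic never hits external memory, which is why deferred tilers needed far less raw bandwidth than brute-force designs.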
The $186 million purchase of GigaPixel was, in my opinion, a bad move by 3dfx because they had already missed their proposed deadline on the Voodoo5 and were in no position to purchase another company's assets at that point.
The Voodoo5 6000 as a commercial product was scrapped in the fall of 2000, and 3dfx made one last hurrah in the OEM market by licensing their chips to PowerColor. It was already too late and 3dfx had one last weapon that they hoped would save them from what appeared to be (and eventually was) a bleak future.
Going on a Rampage
Rampage. Say this word to any avid 3dfx fan, and they'll surely tell you something along the lines of "it would have saved 3dfx."
Rampage was the internal name for 3dfx's next-generation part intended for direct competition with nVidia's GeForce3 product line. 3dfx learned from their mistakes and pulled out all the stops while making this card. It was to have had the power and image quality to remain unmatched by the competition's offerings. As far as performance is concerned, the Rampage would have been slightly speedier than a GeForce3 offering. However, the Rampage's image quality and feature set would have easily separated it from the competition.
The high-end Spectre offering was to make use of two Rampage chips and one Sage unit. The Sage unit wasn't a "smart chip" of some sort; rather, it was 3dfx's external transformation and lighting unit, which supported twenty-five hardware lights. Since the AGP specification can only detect one chip on any given video card, the AGP slot would detect the Sage chip, which would be supported by the Rampage chip(s). The DDR memory bandwidth would have been rated at 12.8GB/sec, easily slaughtering the GeForce3's 7.36GB/sec.
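Both bandwidth figures follow from the standard DDR formula (bytes per second = bus width in bytes, times clock, times two transfers per clock). The bus widths and memory clocks below are assumptions chosen because they reproduce the quoted numbers, not confirmed board specifications.

```python
def ddr_bandwidth_gb(bus_bits, clock_mhz):
    # DDR transfers data on both clock edges: GB/sec = bits/8 * MHz * 2 / 1000.
    return bus_bits / 8 * clock_mhz * 2 / 1000

spectre  = ddr_bandwidth_gb(256, 200)  # assumed 256-bit bus at 200 MHz -> 12.8
geforce3 = ddr_bandwidth_gb(128, 230)  # assumed 128-bit bus at 230 MHz -> 7.36
```

Note the quoted advantage comes almost entirely from bus width: doubling the bus doubles bandwidth at the same memory clock.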
One of the most-advertised features of Matrox's new Parhelia offering is its 10-bit per RGBA component processing. Well, Rampage would have laid the smackdown on Matrox as well. Rampage was set to feature 13 bits per RGBA component, allowing it 52-bit internal color processing. It still made use of 32-bit output, but the quality and precision of colors would have been of a much higher standard than anything offered by today's commercial video card products.
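The color-depth arithmetic is straightforward; this sketch just makes the claimed numbers explicit (bits per component times four RGBA channels, with each extra bit doubling the representable levels per channel):

```python
def internal_depth(bits_per_component, components=4):
    # RGBA: four components, each carried at the given precision.
    return bits_per_component * components

rampage_depth = internal_depth(13)    # 52-bit internal color, as claimed
parhelia_depth = internal_depth(10)   # 40-bit for Parhelia's 10-bit components
levels_8bit = 2 ** 8                  # levels per channel at typical 32-bit output
levels_13bit = 2 ** 13                # internal levels before rounding to output
```

The point of the extra internal precision is that multi-pass blending rounds at each step; accumulating in 13 bits and rounding to 8 only at scan-out keeps banding out of the final image.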
3dfx's beloved FSAA was planning a return as well. It would still have used RGMS (rotated-grid multisampling), but Rampage's texturing abilities would have directly affected how anti-aliasing worked. The M-Buffer (similar to the Voodoo5's T-Buffer) would have allowed for 4-sample AA per clock with no pixel rate loss, unlike the older T-Buffer, which took a 4x performance hit when AA was enabled.
The only downside would have been that multisampling takes the same texel coordinates and jitters them, so there is no texture cleanup. 3dfx implemented advanced anisotropic filtering to remedy this once and for all; Rampage could do up to 128-tap anisotropic filtering. If quad texturing was used, Rampage would provide (performance-hit) free FSAA. When dual texturing was used with anisotropic filtering, Rampage again would have provided (performance-hit) free FSAA.
Several other features would have been brilliant as well. Rampage would have continued to support the cinematic effects of the T-Buffer, and probably would have included additional effects not seen in Voodoo5 products. It would have featured photorealistic rendering as an extension of Rampage's VTA. Finally, it was planned to include hardware Photoshop-style effects (toon shading).
I honestly believe this would have revolutionized the industry because other chipmakers surely would have followed suit. Unfortunately, 3dfx went out of business just when Rampage-based products began initial testing in 3dfx's labs.
Beyond Rampage
Other chips were planned for production after Rampage's release. Unfortunately, little is known about them, and all that is heard about them is rumor. After Spectre was released, the next product line was said to be known simply as Fear. Fear was basically a refresh of Rampage, just as the GeForce4 is of the GeForce3. It would have made use of Sage II as its bus master. After Fear had its run in the market, 3dfx was said to have planned to release Mojo to the world. Mojo would have made use of a radically new architecture created in large part from technology developed by Gigapixel. It is supposed that Mojo would have made use of an advanced tile-based rendering technique and most likely kicked some serious b00ty.
3dfx's life after death
The NV30 from nVidia will be the first chip to incorporate 3dfx technology acquired in the buyout. When nVidia got ahold of 3dfx's technology, they mixed nVidia and (former) 3dfx staff, breaking them up into two groups and having each group argue for different technologies to include in a radically new video card product. One group argued for technology nVidia had been cultivating in-house, while the other argued for technology created by 3dfx engineers. In this way, former 3dfx employees and longtime nVidia veterans collaborated on creating the most revolutionary and arguably most effective product possible.
My expectations for the NV30 do not include seeing the advanced tile-based rendering (developed by GigaPixel) in action. The technology is great, but it's too drastic a change from nVidia's current chips and architecture. NV30 will likely feature an external hardware transformation and lighting (T&L) unit (like the proposed 3dfx Sage), and it will be as powerful as, or more powerful than, Sage would ever have been.
I would surely expect to see NV30 make use of some of 3dfx's FSAA technology. The FSAA on the Voodoo5 is just beginning to be equaled today by the GeForce4. nVidia will have to figure out a way to use Rampage's FSAA (optimized by nVidia algorithms of course), and get it to work with the NV30's new texturing unit.
The NV30 will be SLI capable, but nVidia will surely leave the SLI solutions to the workstation market in the form of their Quadro follow-up. nVidia definitely won't be keeping the "GeForce" name, as it would imply that the NV30 is a new product based upon the same "GeForce" architecture (which it definitely is not). On a side note, no matter how cool it would be to see it happen, nVidia won't be adding Glide support to the NV30, so don't stock up on the original Unreal just yet.
These are of course things that nVidia should do. It's entirely possible that nVidia will be stubborn, and not use any of the purchased 3dfx technology in the NV30. Of course, this would make the buyout all but pointless (oh wait, they eliminated their biggest competition) and I, for one, will be very disappointed, as will many 3dfx users, I'm sure.
3dfx lovers unite!
To this day, the 3dfx community is quite strong. More than just users looking to solve Windows XP-Voodoo3 compatibility issues, these people have actually become close friends. The hub of the 3dfx community would have to be the x3dfx forums, where any and all 3dfx-related questions can be answered.
As far as drivers are concerned, have no fear. To this day, all new games work with the Voodoo5 (including the soon-to-be-released Unreal Tournament 2003) and are playable as long as they are tweaked properly and teamed with a fast CPU.
The team at 3dfx Underground has taken on the task of essentially writing new drivers from the ground up. Thus far, the beta drivers are amazing. There is full Windows XP support, and (soon) DirectX 9 and OpenGL 1.3 compatibility!
The 3dfx empire revolutionized 3D graphics forever. If it weren't for them, graphics performance and quality would be at least two years behind where they are today. It's just too bad that Rampage never made it out. For 3dfx, its release was so close, and yet so far.
3dfx's role within nVidia
(https://www.forumzone.it/public/uploaded/Neo/dave_kirk.jpg)
David Kirk has been nVidia's Chief Scientist since 1997, a position that keeps him deeply involved in everything related to the development of new 3D graphics technologies. On April 29th I had the opportunity to meet him in Milan, during a stop on a European press tour, and to trade a few questions in a one-on-one conversation.
As someone privy to all of nVidia's upcoming technological and market developments, Dave Kirk is the ideal person for a journalist to interview; this, however, also means it is very difficult to extract from him any information that would touch on NDAs (Non-Disclosure Agreements), agreements between manufacturers that forbid discussing various key topics. In concrete terms, from Dave we will learn nothing about nVidia's new NV30 chip, except what nVidia itself wants known about this new architecture.
What was the contribution of 3dfx engineers to the NV20 and NV25 projects?
By acquiring 3dfx we brought into nVidia numerous new technologies it had developed; several former 3dfx engineers also joined nVidia's research and development team, bringing their own contribution. For this reason it is very hard for me to say what the specific contribution of former 3dfx engineers was to the NV25 project, while in the case of NV20 they joined the company at a time when the project was essentially complete (winter 2000).
I like to think that at nVidia every engineer is part of one big family, and therefore that everyone's contributions are fundamental to reaching the common goal, regardless of professional background.
How much weight did the future projects 3dfx was working on carry in the definition of your video chips?
I cannot reveal much about the projects 3dfx was working on when it was acquired in the fall of 2000; it was undoubtedly very interesting technology, which started from different points than ours of that period. Coming into contact with that technological development undoubtedly allowed us to further improve our own work. What we have now, the GeForce 4 chip, is certainly at a higher level than what 3dfx would have been able to deliver, even absent the financial problems it ran into, and better still than what we would have been able to develop on our own.
What is there of 3dfx today inside the GeForce 4 boards? Surely 3dfx's work on Full Screen Anti-Aliasing makes itself strongly felt.
Undoubtedly some of the Full Screen Anti-Aliasing implementations we currently see in GeForce 4 and GeForce 4 MX boards are the "children" of former 3dfx engineers; beyond that, the video processing engine of the GeForce 4 MX chip was also developed starting from 3dfx technology.
NV30 and DirectX 9
The GeForce 3 and, above all, GeForce 4 chips have delivered performance levels unthinkable just a few years ago; with many titles it is now possible to play smoothly at 1600x1200 @ 32 bits. Guaranteeing more frames per second at high resolutions is undoubtedly an interesting goal, but not the final one: what, at the moment, is nVidia's objective with the current and future generation of 3D video chips? Or, put another way, what do you want to achieve with the NV30 chip?
As is well known, the GeForce 4 Ti series gained a considerable performance margin with Full Screen Anti-Aliasing over the previous generation of nVidia video chips, the GeForce 3 series. It is now possible to enable FSAA even at high resolutions, with a performance impact that is not excessively high and, in any case, still preserves scene fluidity and playability.
With the next generation of video chips we intend to continue in this direction, offering the ability to enable Full Screen Anti-Aliasing at all resolutions while keeping the overall impact on frames per second as small as possible.
In recent years the average gaming resolution has steadily increased, until, with the latest generations of video cards, smooth performance has been achieved at 1600x1200. Our goal at this point is not to deliver a greater number of pixels, but to increase the overall quality of every pixel displayed.
The reality that surrounds us, and that we try to reproduce with our GPUs, is extremely complex; for this reason every pixel our GPUs produce must be processed and enriched so as to reproduce reality as faithfully as possible. nVidia's goal with future generations of video chips is precisely to reach a level of realism, in 3D rendering, equal to what can be seen today in animated films such as Toy Story and Final Fantasy, all of it, of course, in real time and at full screen.
What memory controller will the NV30 chip use? There are rumors of a 256-bit memory bus architecture.
Obviously I cannot go into technical detail about the NV30's memory bus, but let me answer anyway, starting with an observation. A wider memory bus yields extremely high video memory bandwidth, but at a price: a 256-bit architecture is very complex, and therefore expensive to build at the moment. High-end video cards already carry very high prices today; asking an even higher price to integrate a 256-bit memory bus seems excessive to me, at least for now.
That said, I do not mean that a memory bus of that width is necessarily wrong: the real question, in my view, is whether such technology is actually required and used today. Let me explain: if, to improve the quality of the 3D experience, we try not to generate more pixels but to work on each individual pixel to bring it as close to reality as possible (not more pixels, but better pixels), then we need ever more complex and faster GPUs, not higher video memory bandwidth. If the number of pixels does not change, additional video bandwidth does not improve the quality of the generated image; it would help performance if more pixels were used, but at the moment, with 1600x1200 @ 32 bits taken as the reference point for 3D games, we do not see the need for more video memory bandwidth than what we can already provide with the GeForce 4 Ti4600 board.
At least one of your competitors will soon introduce a high-end video solution with a 256-bit memory bus, and consequently very high video memory bandwidth.
This competitor has probably chosen such a solution not, in my view, with better pixels in mind, but to deliver an ever greater quantity of them. It is a matter of different approaches and, personally, I believe ours is the winning one.
Do you therefore believe it will be possible to have DDR memory running at very high clock speeds in the coming months? The current limit is between 700 and 750 MHz, with the most recent 2.6ns Samsung chips yet to make their market debut.
Yes, I believe the next generations of video cards will use faster memory. The problem, in any case, is not so much having high video memory bandwidth as making the best use of it.
Hardware Displacement Mapping in DirectX 9: what can you tell us?
The name DirectX 9 commonly identifies the new generation of Microsoft APIs; obviously at nVidia we know many things about these new APIs but, having signed an NDA with Microsoft, I cannot tell you more. (note: a prolonged smile follows, inviting the next question ;))
nVidia has followed the path traced by Matrox, implementing dual-monitor support in its video cards. Do you foresee particular evolutions of the nView technology and, in particular, future support for 3 displays on a desktop-class video card?
I believe nView is an interesting feature of our most recent products, and that is why we chose to implement it across the GeForce 4 line. I believe, however, that our main focus is not on multi-monitor solutions. For this reason I see little interest in developing simultaneous 3-monitor support: there will surely be an interesting market for that kind of product, but in my view it remains a very small and well-defined niche. nVidia does, however, produce a professional video card with simultaneous support for 4 monitors, so our interest in multi-monitor configurations extends beyond consumer and desktop platforms into the professional market.
A criticism often leveled at nVidia video cards in the past concerns their poor 2D image quality, especially at the highest resolutions. To be fair, with the GeForce 4 Ti and GeForce 4 MX boards the situation has improved considerably, but as you well know there is always room for better. Why does 2D quality still represent a limitation of nVidia-based boards today?
nVidia does not build video cards but video chips; in our chips, particularly those of the GeForce 4 family, the part used to generate 2D images is of excellent quality, and there is nothing in it that degrades the image. The signal, however, is strongly influenced by the components mounted on the video card: our partners choose capacitors, transistors and other components on their own, even though nVidia releases reference guides to try to steer manufacturers toward the best possible final quality. This freedom of choice obviously means that some GeForce 4 boards have markedly better 2D image quality than others from different manufacturers, even though they are based on the same video chip.
To try to give our users a quality product, we have launched a program with the main manufacturers of nVidia-based video cards, to help them build boards that maximize quality, especially with regard to high-resolution 2D output.
At CeBIT 2002 in Hannover, last March, Socket A systems based on the nForce chipset with DDR333 and DDR400 memory support were shown. When will we see these solutions on the market?
The CeBIT announcements served, first of all, to show that nVidia is aligned with the other Socket A chipset manufacturers, and therefore ready to support and use DDR333 and DDR400 memory. From my point of view, however, the real obstacle to the spread of platforms with these characteristics is the new memory itself: at the moment there are still some problems tied both to JEDEC's ratification of the various standards and to optimizing modules and platforms to operate at such high clock speeds.
For our part, we have shown that, technologically, support for the new DDR memory standards is there and that, therefore, the nForce platform remains at the top of the Socket A chipset market. It would be more useful to ask JEDEC if and when DDR400 memory will become a market standard.
Will there be Pentium 4 platforms based on the nForce architecture? Its dual parallel memory bus lends itself very well to use with that processor.
Over the last year we have formed a very strong bond with AMD; we consider AMD an extremely important partner, with whom it will be possible to build extremely interesting solutions in the future. It is true that it would be very interesting to verify the potential of the nForce platform with Pentium 4 processors, but at the moment we have chosen to dedicate ourselves exclusively to AMD systems.
Put another way: on your side there is interest in an nForce platform with a Pentium 4 processor, but the royalties Intel demands for use of the Pentium 4 bus are an obstacle that is hard to overcome?
I cannot comment much on this, but it is undoubtedly a major obstacle. In any case, if we see the possibility of an nForce chipset for Pentium 4 processors and there is room for an agreement with Intel, this platform will certainly make its market debut.
Some rumors have floated the possibility of a Hammer platform with an nForce chipset. With the memory controller integrated into the CPU core, what can nVidia offer as a distinguishing feature?
As you rightly said, these are rumors. If you had the answer to this question, which I personally do not, I would consider you extremely lucky and, above all, potentially very rich. (followed by hearty laughter from both me and Dave Kirk ;))
Written by davide on 02 November 2002, 18:26:29