NVIDIA announced the Ampere-based GeForce cards earlier this month and, given the feedback we’ve been receiving from social media, it’s obvious people are hyped about this product. Jensen Huang came out with a brief presentation demonstrating the performance improvement that Ampere brings to the table. Building on what the Turing-based RTX 20 series did in 2018, the RTX 30 series of graphics cards continues in that direction, focusing on raytracing as well as DLSS to push performance in raytraced games and applications even further.
And that has been the noise in the background during the build-up to this launch. NVIDIA promises twice the performance of the RTX 2080 Ti or more at a lower cost, which sent shockwaves through the market, with users already scrambling to sell their RTX 20 series graphics cards. Still, the question rings on: do we really get twice the performance of the previous generation?
Today the veil lifts as reviews for the NVIDIA GeForce RTX 3080 Founders Edition go live. As the debut card for the RTX 30 series, it is armed with the latest 8nm Ampere GPU forged by Samsung, baked into a compact new PCB designed exclusively for the Founders Edition, making it the first custom card out of the gate and the template AICs should strive for. The new reference card features a radical flow-through approach to cooling, which is certainly quite the take but also raises some questions. And speaking of questions, much like you we want these answered, but with the global situation right now that has proven quite problematic, as NVIDIA has pushed this launch later than previously stated.
With that being said, this is an incomplete review. I will present my standard gaming tests, which consumed the 7 days I had to test this card along with 9 other GPUs to provide updated comparative data, but given the findings from these tests, it must be stressed that there is still a lot missing before this review can be called complete. Still, my gaming test suite is done and it is my standard gauntlet, so as not to waste your time any further we’ll skip the technical details and jump straight to testing.
In this review, we’ll pit the RTX 3080 head-to-head against the RTX 2080 Ti and a few other RTX 20 series cards plus a custom RX 5700 XT to see where the RTX 3080 stands.
| ||RTX 2080 Ti FE||RTX 3080 FE|
|Boost frequency||1635 MHz||1710 MHz|
|Memory||11GB GDDR6||10GB GDDR6X|
|Memory frequency||14 Gbps||19 Gbps|
Closer Look – RTX 3080 Founders Edition
Much of the discussion about the GeForce RTX 3080 has centered around its power connection. The new 12-pin MicroFit standard is NVIDIA’s new power connector, and an adapter to 2x 8-pin power is included; more on this in a few photos. Going back to the subject at hand, the NVIDIA GeForce RTX 3080 Founders Edition is the first custom-PCB Founders Edition card from NVIDIA, and they are making it quite a spectacle by introducing a radical cooler design, a compact PCB and a new power connector. The packaging itself is not as special.
At a hair past 11 inches long and approximately 4 inches wide, the NVIDIA GeForce RTX 3080 Founders Edition is the slimmest design of all the currently announced RTX 3080 models. As teased before, this card features a new flow-through design with a fan on each end of the cooler: one moves air through the card and towards the back, while the other vents it towards the side. The design aims to create more optimal airflow for the case internals. We’ll look into this further in upcoming coverage.
The cooler itself looks quite good and is a welcome departure from the silver and black of older cards. Among all the Founders Edition designs, I like both the GTX 10 series and RTX 30 series looks, but this RTX 3080 cooler is growing on me more and more.
Here are more shots of the new 12-pin MicroFit connector. As you can see, it’s practically the width of a single 8-pin connector. NVIDIA says they simply wanted a new connector for their own use but have opened up the standard to other companies. The connector is rated beyond the standard 150W of each 8-pin connector and allows NVIDIA to pair a compact connector with their compact PCB. The problem is that the angled socket really doesn’t lend itself well to cable management with the stock adapter.
Companies like FSP and Seasonic have come out with their own cables that connect the new cards directly to their power supplies.
PCIe Gen3 vs. Gen4
We’ll start off with the most frequently asked question: do I need PCIe Gen4 to maximize the RTX 3080? Given that PCI Express Gen4 is currently only present on AMD platforms, this test will answer two questions: 1) does Gen4 have an impact, and 2) should we test with Intel or AMD?
|Test Setup: Intel|
|Processor||Intel Core i9-10900K|
|Motherboard||ASUS ProArt Z490 Creator 10G|
|RAM||G.Skill TridentZ RGB DDR4-3600 32GB (8GBx4) CL16|
|VGA||NVIDIA RTX 3080 Founders Edition|
|Storage||Patriot Viper VPN4100 1TB|
|Power Supply||Seasonic Platinum 1050W|
|Test Setup: AMD|
|Processor||AMD Ryzen 9 3900X|
|Motherboard||ROG Crosshair VIII Formula|
|RAM||G.Skill TridentZ RGB DDR4-3600 32GB (8GBx4) CL16|
|VGA||NVIDIA RTX 3080 Founders Edition|
|Storage||Patriot Viper VPN4100 1TB|
|Power Supply||Seasonic P1000 Platinum 1000W|
Here are the two test benches that we maintain for situations like this. I was already considering switching to AMD earlier this year, but with Intel still holding on to the performance crown by a small margin, we stay for one more generation. Back to our test: these two systems will help us decide which platform to use for this review.
For testing, we use the Final Fantasy XV benchmark. I find this benchmark more realistic than 3DMark Time Spy but speaking of 3DMark, here’s a quick summary of some tests done with 3DMark for our Gen3 vs Gen4 comparison:
This test goes through the feature tests as well as the more intense benchmarks from the 3DMark suite. We have a 2080 Ti on the Intel system for a quick comparison in case we do get a massive jump. There is a PCI Express test in 3DMark, and we do see an improvement there, confirming Gen4 is indeed active, but in actual GPU performance Gen4 does not really change anything, and the Intel system proves the better system in pure synthetics.
Back to Final Fantasy XV, testing on the same system with Gen3 against Gen4 we get the benchmark results below:
In an AMD vs AMD setup, the difference can be accounted for by run-to-run variance (standard deviation). Here are some further charts for your reference:
The final set of charts shows that we still get a minimal advantage on the Intel system, which ultimately led me to stick with it. This testing was done 3 days ago, and thankfully the results came out in my favor, as otherwise I would have needed to retest the entire lot.
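To make the "within standard deviation" call above concrete, here is a minimal sketch of the kind of sanity check we apply: if the gap between two configurations' average FPS is smaller than a couple of pooled standard deviations of the individual runs, we treat it as noise. The run values below are hypothetical placeholders for illustration, not our measured data.

```python
# Sketch: is a Gen3 vs Gen4 FPS delta larger than run-to-run noise?
from statistics import mean, stdev

def within_noise(runs_a, runs_b, sigmas=2.0):
    """True if the mean difference is within `sigmas` pooled standard deviations."""
    delta = abs(mean(runs_a) - mean(runs_b))
    pooled = (stdev(runs_a) + stdev(runs_b)) / 2
    return delta <= sigmas * pooled

gen3_runs = [141.2, 142.0, 140.8]  # hypothetical avg FPS per benchmark run
gen4_runs = [141.9, 142.4, 141.1]
print(within_noise(gen3_runs, gen4_runs))  # -> True: the gap is just noise
```

A difference that survives this check across several runs is one we would report as real; anything inside it gets called a tie, as in the AMD vs AMD comparison above.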
Power Draw, Clock Speed and Temperature
We’ll switch things up and open with the power and temperature behavior of the graphics card. We use the Final Fantasy XV benchmark to simulate a gaming workload, but for those looking for extreme loads, we do put our cards through Kombustor on first installation to stress test for stability. For our reviews, though, we use Final Fantasy XV to simulate a true gaming scenario. Power draw is captured inline via PCAT or Powenetics so no other component affects the readings. Readings are averaged over 15-minute runs for both idle and load.
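For transparency, the averaging step is nothing exotic: the inline tools export a log of wattage samples, and we take the mean over the capture window. A minimal sketch, assuming a CSV export with a hypothetical `total_w` column (the actual PCAT/Powenetics column names differ by tool and version):

```python
# Sketch: average a wattage column from an inline power-log CSV export.
# The file name and column layout here are assumptions for illustration.
import csv

def average_power(path, column="total_w"):
    """Return the mean of a wattage column from a CSV power log."""
    with open(path, newline="") as f:
        readings = [float(row[column]) for row in csv.DictReader(f)]
    return sum(readings) / len(readings)
```

The same routine is run over the idle capture and the load capture to produce the two numbers in the chart.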
Let’s take a look at clock behavior versus temperature:
Sorry about the music. Homelander was cussing the entire time in the background.
Test Setup and Methodology
Processor: Intel Core i9 10900K
Memory: G.Skill TridentZ DDR4-3600 32GB
Storage: WD Blue SSD 1TB SATA
PSU: Seasonic Platinum 1050W
Cooling: Corsair H150i Pro 360mm AIO
Monitor: ROG PG27UQ 4K 144Hz HDR1000
For a full-hardware workout, visit https://benchmarks.ul.com for our system warm-up and stress test of choice.
For benchmarking methodology please see our game benchmark method guide.
Test results are gathered and produced on CapFrameX.
Since this is a GPU review, we benchmarked the area of the games that put heavy load on the GPU.
All our test runs are repeatable; click the links below for area and details. Read our benchmarking methodology.
- DOTA2 – Kiev Major Grand Finals Game 5: OG vs Virtus.Pro (54:05 – 55:05)
- Counter-Strike: Global Offensive: FPS Benchmark Workshop Map
- The Witcher 3 – Woesong Bridge
- Grand Theft Auto V – Palomino Highlands
- Rainbow Six: Siege – Benchmark Mode
- Shadow of the Tomb Raider – Kuwaq Yaqu
- Call of Duty Modern Warfare 2019 – Fog of War
- Monster Hunter World: Iceborne – Wildspire Waste
- F1 2020 – Benchmark Mode
See our Youtube playlist for benchmark sequences.
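Since our charts lean on CapFrameX captures, here is a sketch of how the two headline numbers, average FPS and the 1% low, fall out of raw frametimes. Note that "1% low" has more than one definition in the wild; this sketch uses one common variant (the average of the slowest 1% of frames), which may differ slightly from what a given tool reports.

```python
# Sketch: average FPS and 1% low from a list of frametimes in milliseconds.
def fps_metrics(frametimes_ms):
    """Return (average FPS, 1% low FPS) from raw frametimes."""
    avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
    worst = sorted(frametimes_ms, reverse=True)       # slowest frames first
    slice_1pct = worst[: max(1, len(worst) // 100)]   # slowest 1% of frames
    low_1pct = 1000.0 * len(slice_1pct) / sum(slice_1pct)
    return avg_fps, low_1pct
```

The 1% low is the number to watch for the frametime-sensitive games in this suite, as a high average with a poor 1% low still feels stuttery in play.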
Note: Some proprietary NVIDIA technologies like PCSS, HBAO+, and HairWorks work on AMD GPUs, BUT to maintain uniformity amongst GPUs, these have been turned OFF.
You can click on any of the benchmark charts to enlarge them. You can also move forwards and backwards to quickly navigate through our charts via gallery view. For this test, only the out-of-box normal mode will be tested.
Counter-Strike: Global Offensive (CSGO)
Counter-Strike: Global Offensive, popularly known as CSGO, competes for the title of Steam’s most popular game. It has found a resurgence in popularity, peaking in player count in 2020. Based on Valve’s Source Engine, the game has received major asset overhauls in the years since its inception nearly 10 years ago. Still, it’s a light game and can be played on fairly modest systems, but the competitive scene has seen average players demand high FPS from their machines, earning CSGO favorable standing with GPU vendors from that demand alone. CSGO is a game that can easily go past 500 FPS on enthusiast systems at maximum settings. We’re including CSGO as requested by our community.
API: DirectX9 (default)
Maximum In-Game Settings
Texture Streaming Disabled
DOTA 2
Note: JUNE 2020 – DOTA 2 has recently transitioned from DirectX9 to DirectX11, and new installs of the game prompt users to switch from DX9 to DX11. With that said, we are testing DOTA 2 in DX11 from now on.
In contention for the most popular game on Steam and the biggest competition in eSports, DOTA 2 is powered by the Source 2 engine. The game is fairly light on low to medium settings, but maxed out, with heavy action on screen especially during clashes, it can really stress most systems. This is a game where frame times matter, as responsiveness is critical in high-stakes competition. We’re looking for consistently low frametimes in this game for the best experience.
Our test uses an actual game replay, using the segment from game 5 of the Kiev Major 2017 Grand Finals between OG and VP. The clash from 54:05 to 55:05 is a nice example of how much a system gets punished during intense team fights in DOTA 2.
You can watch the replay of the actual game used in the benchmark on YouTube or download the replay file for your DOTA 2 client: Game 3149572447 (save it to your DOTA 2 replays folder).
API: DirectX11 (default)
Best-Looking slider setting (Ultra)
Rainbow Six: Siege
Nearly 4 years in, Rainbow Six: Siege has become a phenomenon after a lukewarm beginning. The massive shift in the game’s focus saw it stepping into eSports territory, and the excellent mix of gameplay mechanics, good design and a dedicated dev team has put R6: Siege in a position it couldn’t have pictured at launch. Rainbow Six: Siege focuses heavily on tactical and creative gameplay, and its vertical levels and highly destructible maps encourage players to be quick on their feet so the action is always going. Powered by Ubisoft’s own AnvilNext 2.0 engine, which powers some of Ubi’s recent visual masterpieces, R6: Siege also features excellent graphics and can get very taxing at high detail settings. The game also offers an Ultra HD texture pack download for those who want higher resolution textures, though it will of course demand more from the system.
API: DirectX 11
Anti Aliasing: TAA
Ultra HD Texture pack not installed
Ambient Occlusion: SSBC
The Witcher 3: Wild Hunt
CD Projekt Red’s latest installment in the Witcher saga is one of the most graphically intense offerings the company has made to date. As Geralt of Rivia, slay monsters, beasts and men as you unravel the mysteries of your past. Vast worlds and lush sceneries make this game a visual feast and promise to make any system crawl at its highest settings. The game has found a great resurgence in its playerbase thanks to the release of Netflix’s The Witcher series.
API: DirectX 11
Frame Rate: Unlimited
Nvidia HairWorks: Off
Motion Blur: Off
Ambient Occlusion: SSAO
Depth of Field: On
Chromatic Aberration: Off
Light Shafts: On
Grand Theft Auto V
The fifth and most successful installment to date in the highly controversial Grand Theft Auto series brings a graphical overhaul to the PC version of GTA V, which many have lauded as a superior approach to porting a console game to PC. Featuring large areas and dense detailing, GTA V is a highly challenging application in terms of scene complexity.
Our benchmark uses a run from the Palomino Highlands, driving through a lush area to a remote road and on to a neighborhood to simulate multiple scene changes.
API: DirectX 11
Very High settings
Anisotropic Filtering: 16x
Motion Blur disabled
Advanced Graphics enabled
Shadow of the Tomb Raider
Shadow of the Tomb Raider is the latest installment in the rebooted run of the classic Tomb Raider franchise, picking up shortly after the events of the previous game. Technology-wise, the game uses the Foundation engine, updated to meet developer Eidos Montreal’s demand to push it to its limits. The game supports DirectX 12 and was one of the launch titles to support RTX technology, namely DLSS, which arrived a couple of months after release.
API: DirectX 12
Graphics Settings Preset: Highest
Texture Quality: Ultra
Texture Filtering: 8x Anisotropic
Raytraced Shadow: OFF
Call of Duty Modern Warfare (2019)
Call of Duty Modern Warfare is a reboot of the original Call of Duty 4: Modern Warfare storyline, set in a different world where you, along with Captain Price, have to stop the world from going to war. Call of Duty Modern Warfare reignites the franchise by introducing full crossplay support, letting Xbox and PS4 players play together with PC players. On PC, the game features a new engine pushing photorealism for COD far beyond what the older engine was capable of. The new engine also introduces raytracing, and the AI is designed to perceive light as well. With a revitalized multiplayer arena, the game will require fast frame rates.
API: DirectX 12
Render Resolution: 100%
Texture Resolution: High
Texture Filter Anisotropic: High
Particle Quality: High
Shadow Map Resolution: Extra
Particle Lighting: Ultra
DirectX Raytracing: OFF
Ambient Occlusion: Both
Anti-Aliasing: Filmic SMAA T2X
World Motion Blur: Off
Shaders Installed before benchmarks*
Monster Hunter World: Iceborne
Monster Hunter World is easily Capcom’s most successful game to date. Available on both consoles and PC, it ranks among Steam’s most played games on the platform. The 2020 Iceborne update brings the game to a new frontier on PC, introducing DirectX 12 support. The game features rich graphical detail settings and an Ultra HD texture pack for high-end gamers. MHW features fast-paced action with traditional RPG trappings and has captured a new market thanks to its transition from portable.
Our benchmark for this game uses an expedition track starting at the Wildspire Waste Southwest Camp (Area 1) and finishing in the Rathian nest in the caves at Area 12. This run takes us from a barren area, to a watery area with lush vegetation, to a cave, replicating the varied nature of exploration and monster combat in MHW.
API: DirectX 12
Graphical Settings: Manual (customized from High)
All variable settings set to High
Image Quality: High
Max LOD Level: No Limit
Volume Rendering Quality: High
Motion Blur: Off
DLSS and AMD FidelityFX: OFF
F1 2020
The latest iteration of the F1 series from Codemasters features support for DirectX 12 as well as more photorealistic graphics than ever. Now heavily featured in the official F1 esports scene, much attention has been given to the development of this game, particularly for added realism.
API: DirectX 12
Settings: Ultra High
There is a known issue between the beta drivers and F1 2020. This is corrected in the public drivers, and we will fill in this section once retesting is complete.
DLSS and Raytracing Quick Test
There’s no denying it: we’re not seeing the performance many of us initially hoped for, and that’s the crux of the matter for the RTX 3080. It is a generational leap in performance, but the phrase “greatest generational leap” is a bit circumstantial.
Most of our benchmarks and games are selected primarily to neutralize any proprietary technology. As the chart above shows, DLSS on the RTX 3080 suddenly puts us in 100FPS+ territory over the 2080 Ti. This puts NVIDIA’s promise into perspective: more than twice the performance of the RTX 2080 Ti is indeed possible, but only in situations where the RTX 3080’s technology can be taken advantage of.
That puts us in a dilemma. From a price perspective, the RTX 3080 does present a visible performance uplift in most games, but in general this is a small increase compared to what was initially expected, and NVIDIA might have anticipated this and used the pricing to offset expectations. Given that, the main takeaway is that NVIDIA is pushing for DLSS and raytraced games, judging by the number of such titles they provided us during testing. And that leaves us at an impasse, which I’ll discuss in the closing of this article.
The RTX 3080 is the amalgamation of lessons learned in the past, and many of those lessons have been addressed: better Founders Edition cooling, improved raytracing performance as well as DLSS, and a few others. Let’s get one thing clear: the NVIDIA GeForce RTX 3080 is a powerful piece of hardware. Unfortunately, it is limited by the software of its time.
As a graphics card heavily geared towards accelerating NVIDIA’s own technology, its value assumes that games actually utilize tech like RTX and DLSS, but in 2020 only a fraction of games support either DLSS or raytracing, and no single game readily supports both in their fullest form. There are good technological examples and demos, but right now NVIDIA is hinging itself on the availability of these games, primarily looking for the one game that can set raytracing off; for AAA, the next candidate is Cyberpunk 2077, while on the more mainstream side it’s Minecraft RTX and Fortnite. And that’s the major problem here. Esports has never been representative of the majority of PC gamers, many of whom are AAA gamers, but both camps would agree that building an RTX PC around RTX in Fortnite or Minecraft is like buying a Lamborghini for your kid. Now, I don’t want to discriminate; everyone’s free to play what they want, but again, the argument is directed at NVIDIA and how they shaped this release.
Normally I’d call this section the conclusion, but sadly that’s not the case. Even though time wasn’t on our side, I did manage to complete our regular benchmark suite, which is usually enough for a graphics card review. Unfortunately, it’s only when analyzing the data, which I did exactly 18 hours ago as of this writing, that it becomes quite clear that NVIDIA’s promises are framed around a simple fact: raytracing and DLSS performance receive a massive uplift, while traditional raster performance, which is what our game benchmarks are designed to reflect, shows only a marginal improvement. This puts us at an impasse: with NVIDIA pursuing proprietary tech for their graphics cards, it’s actually unfair to them not to test games with those features turned on. With more and more games utilizing NVIDIA’s tech, the traditional view of performance as brute-force throughput is at a fork in the road, as this approach doesn’t look like it will stop, and it will ultimately lead to NVIDIA cards performing one way and the rest of the market performing another.
That said, straight-up performance is still incredible for the RTX 3080, as expected from a new generation of cards. Unlike the RTX 2080 against the unkillable GTX 1080 Ti of days gone by, the RTX 3080 manages to put a significant gap between itself and the RTX 2080 Ti in most games, averaging around 20% over the RTX 2080 Ti. Barring any and all marketing, that alone would easily warrant the upgrade, and the RTX 3080 Founders Edition performs cool and quiet throughout our load scenarios. This performance improvement does come at a 70W premium, but overall it’s still a significantly faster card in pure raster performance.
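For readers curious how a single "averaging around 20%" figure is condensed from many games, a minimal sketch of the usual approach: take the per-game FPS ratios and aggregate them with a geometric mean, which treats ratios symmetrically. The FPS pairs below are hypothetical placeholders, not our measured results.

```python
# Sketch: aggregate per-game uplift ratios into one headline number
# via the geometric mean. Sample pairs are hypothetical.
from math import prod

def geomean_uplift(pairs):
    """pairs: list of (new_fps, old_fps); returns average uplift as a fraction."""
    ratios = [new / old for new, old in pairs]
    return prod(ratios) ** (1 / len(ratios)) - 1

sample = [(120, 100), (90, 75), (144, 120)]  # each pair is exactly +20%
print(f"{geomean_uplift(sample):+.0%}")      # -> +20%
```

A plain arithmetic mean of the ratios gives a similar number when uplifts are uniform, but the geometric mean avoids letting one outlier game dominate the average.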
Again, I’d like to stress that this is not a complete conclusion. The NVIDIA GeForce RTX 3080 is the glue that binds multiple technologies making up a gaming ecosystem we’ve barely scratched the surface of. Aside from raytracing and DLSS, there is also RTX IO, Broadcast, and the aspect of system latency that will be the focus of upcoming 360Hz monitors equipped with the NVIDIA Reflex Latency Analyzer and the accompanying games that utilize NVIDIA Reflex. Many of those things come together to present the true value of what the RTX 30 series can do, and the GeForce RTX 3080, as the central card for enthusiasts, serves as an acceleration tool for these technologies.
The brand-new Founders Edition is another improvement from NVIDIA. As the first custom-PCB Founders Edition card, NVIDIA has excelled this time around, proving that the 2-slot, flow-through cooler has its merits. Its slim profile allows it to sit in your case and complement airflow while keeping the card cool. Admittedly, the Ampere silicon looks to run cooler than its predecessors, and this gives the RTX 30 series Founders Edition some wiggle room, as the RTX 3080 sports a compact PCB as well, showing AICs what can be done with such a minimal footprint. Still, >300W is a lot of heat to dissipate, but the cooler does enough to keep the card below 75°C in a 28°C ambient room. No small feat given that most AICs are bringing their largest cards ever to market this generation.
You are probably reading through this looking for the part where I say the NVIDIA GeForce RTX 3080 totally decimates any previous generation graphics card and you should upgrade immediately. There’s also a good number of you waiting for me to say that the RTX 3080 is a bust and that those lucky bastards who picked up a brand new RTX 2080 SUPER for $350 made out well. At the beginning of this section I mentioned that this is a first impression, not a conclusion. I feel there’s still a dimension where the NVIDIA GeForce RTX 3080 can flex its muscles further, but as it stands, in a more traditional gaming scenario, the RTX 3080 only presents an average 20% improvement in the games we tested. In games that WE DID NOT test, though, with DLSS and raytracing enabled, the data speaks for itself: with these technologies, the RTX 3080 can do more. Right now I am advancing through the games in our library to the point where a suitable benchmark area can be used. I’m not one to just fire up a game and use the first area for testing, or fire up a canned benchmark and use the info presented there. Going forward, I will look into a dedicated raytracing/DLSS benchmark suite in preparation for competitor cards that may soon have raytracing. For the other technologies, we’ve yet to shape that.
So in closing, at $699, the NVIDIA GeForce RTX 3080 Founders Edition is priced the same as the GeForce RTX 2080 was two years ago. If you bought that card back then and are looking to upgrade from your RTX 2080 now, the easy answer is definitely to go for it, as you seem to be an early adopter. Speaking to early adopters again, the performance uplift is sound for the cost, and even with NVIDIA overselling the card, at the end of the day we do get an improvement, albeit one that will go underutilized by some. Speaking now to those concerned about value, it’s a situation where you need to weigh the cost against what you get. I can easily say the RTX 3080 has amazing performance up to 4K, but if you mostly play COD Warzone at 1440p coming off an RTX 2080 Ti or RTX 2080 SUPER, you’re only getting +/- 20FPS. This is where you contemplate whether that 20FPS is worth the entire $699 you will spend on this upgrade. The same logic applies to pretty much any other game.
The NVIDIA GeForce RTX 3080 Founders Edition is packed with technology, looks good, and earns a lot more positive adjectives besides. It’s also an extremely forward-looking graphics card. It is promising, and Jensen Huang’s presentation oversold that premise. At $699 we’re not being robbed, so this is a situation of bad marketing rubbing off on a good product. The NVIDIA RTX 3080 is good, and as a fan of DLSS and a 4K 144Hz gamer, I am fervently waiting for the day DLSS becomes a real-time technology that works in all games. Today is not that day, but the RTX 3080 is a good peek into what to expect.