>The following results are based on AMD's official data which has been presented to the media.
Will probably need to wait till independent reviews come out
They're probably accurate - AMD's benches for the RX 5000 and 6000 GPUs, to date, have been accurate. It is, however, common sense that you wait for independent third-party reviews from people who aren't sponsored by the vendor.
Accurate isn't the same as representative.
Companies pick benchmarks that show their products in the best light. That doesn't mean they aren't accurate.
Just look at those ray tracing results for the 6950 vs 3090. They're trying to give the impression that AMD 6000 series is a few % behind, or even trading blows with NVidia for raytracing performance. We all know that that is simply not an accurate summary. I think the article was EXTREMELY charitable to AMD when they stated:
>...there's a reason why AMD didn't focus that much on raytracing performance...
They cherry picked the few results that show AMD cards in a decent light (pun intended) and attempted to pass that performance off as typical. It's disingenuous and just further reinforces the reality that none of these companies are your friend and they all participate in misleading marketing. Early/Mid Ryzen was such a breath of fresh air. A defeated AMD came out and said the truth:
>We got these chips that are [OK to great, depending on the generation] for gaming, they're really great for productivity and multi-tasking, and we're sellin' 'em cheap enough to be compelling. Also, we're going to have 5 years of socket support so that'll be nice.
That was enough for me to buy a Ryzen 5 1600, Ryzen 5 2600x, Ryzen 5 3600, Ryzen 7 3700x, Ryzen 9 5900x, Ryzen 9 5950x over that period. The BS marketing is a turn off after being spoiled by a relatively honest 6-7 years from AMD.
RTX is cool and nice, but very unnecessary imho. The performance hit is not worth it for the visual improvements. Right now it is a concept tech for domestic consumers at best, even on nvidia. It needs more performance development with ENERGY EFFICIENCY, but what the rumours indicate is that they are brute forcing it to deliver the performance the market is expecting for the next generation of RTX. 490~600W just for the GPU is a joke.
the belief is raytracing is what will bring game visuals to the next level. rtx has to start somewhere and obviously right now it's in a primitive state with little benefit relative to performance. there are many technologies that we use today that were viewed as "unnecessary" at the time of release but are now viewed as standard.
Yeah but lots of things on GPU started this way. Remember when AA was an absolute performance killer?
I agree it's still too new for full adoption and for it to make enough sense in most games. It will however become huge and completely change the standard rendering pipeline down the road.
I agree that ray tracing is still in the "window dressing" stage at this point. Lumen in UE5 might even kill it off. However, that still doesn't excuse AMD for being misleading in their marketing. Why not just take what they have and call it good. They have the strongest 1080p and 1440p (possibly 4k until 4090) performance outside of ray tracing and DLSS, and RSR is significantly closer to DLSS 2.x than before, but not quite as good, BUT it's more readily available. Oh, and it costs 2/3 the price of a 3090. Those are all strengths and realistic ones. But, Nooo, they had to go and pretend like their pitiful ray accelerators even come close to matching rt cores. It's OK for them to lose the fight vs. NVidia's proprietary tech, but they want their cake and they want to eat it, too.
I was basing my observations on that demo released about 8 months ago I think. I didn't realize that they had ray tracing enabled while showcasing their rt competitor. That sucks. I really thought they had a rt competitor with significantly less overhead.
The whole RT thing is a common misconception based upon Nvidia cards outperforming AMD ones in Nvidia sponsored titles (where Nvidia pay for optimisation priority), but if you use non Nvidia sponsored titles then competing cards are on par with each other.
AMD literally can't pick anything but cherrypicked results because non cherry picked results would give Nvidia an unfair advantage. Why do you think Nvidia always use sponsored titles in their advertisement benchmarks?
Nvidia has more parts of the ray tracing pipeline in hardware so it makes sense they perform better. Not sure how you could claim otherwise.
At best AMD comes close to them in total frame time but never in ray throughput.
Easy proof of your statement is the SAM marketing. AMD says 'up to 15%' more performance and shows off a few games, the lowest one being 14%, but in actual real world tests with 36 games there are games like Apex that have a 10% performance loss... And when you average the performance all together it only ends up 3% faster on average (a quick illustration of that averaging is sketched below the links).
https://www.amd.com/en/technologies/smart-access-memory
https://www.techspot.com/article/2178-amd-smart-access-memory/
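Not AMD's or TechSpot's numbers, just a minimal sketch of how "up to 15%" and a ~3% average can both be true once you average the per-game deltas, regressions included (all values below are made up purely for illustration):

```python
# Hypothetical per-game SAM uplifts in percent -- made-up values for illustration,
# NOT the TechSpot or AMD figures. A few big wins, many flat results, one regression.
per_game_uplift_pct = [15, 14, 10, 2, 1, 0, 0, 0, -1, -10]

headline = max(per_game_uplift_pct)                             # what marketing quotes: "up to X%"
average = sum(per_game_uplift_pct) / len(per_game_uplift_pct)   # what you get across the whole suite

print(f"up to {headline}% faster")    # up to 15% faster
print(f"average: {average:.1f}%")     # average: 3.1%
```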
Well, software today wasn't developed with this tech in mind. If you put a 12 core cpu to run old games optimized for heavy single thread/core usage, like Crysis, you will have performance issues too. I would wait for upcoming games / software to make better use of the many new technologies being released today before deciding if it's good or not.
Not sure why folks are down voting, this is accurate. SAM isn't a magic extra performance button, software has to be designed to use it for there to be any advantage.
Depends on the game selection. The 6900xt is the fastest GPU in most titles, especially if you crank up things like Field of View in games, which harms Nvidia badly as it increases CPU load and they have more CPU overhead.
Like I said in another thread, I see 3-5% improvement at most, IDK where people kept getting up to 10% from.
It's $100 more for a couple percent, but at least things are at MSRP now.
Sure, but GPU's overall are selling for over MSRP. What could happen is that AIB pricing won't change since the market won't care about the difference in performance, so AMD would just be getting a bigger slice of the pie.
Imo I'd rather AMD get the extra money than the AIB's since AMD does the most work. AMD wasn't benefitting at all by Gigabyte charging 900 for a 6700 XT last year.
Also a smaller price jump than Nvidia made too.
It's poor marketing, reviews will be holding the higher MSRP against these cards. AMD could just keep the official MSRPs while slightly upping the price board partners pay. It's a tradeoff between pissing off board partners or getting a worse end-user perception.
Inflation, lol. We are lucky that cryptomining seems to have taken a dive in the past couple months, otherwise these babies would have been +25% or more vs their predecessors.
**EDIT** -- Are sarcasm and humor simply lost on most people these days? Or is it just miners being salty?
The gtx 470 launched for $349 in 2010, which is $460 in today's money. When 12 years of inflation doesn't justify current prices, 2 years of it certainly doesn't either. Also it's funny how inflation is suddenly a major factor in GPU pricing, when it hasn't been one for 10 years.
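For what it's worth, that $349-to-roughly-$460 figure checks out if you just compound an average US inflation rate over 2010-2022; the ~2.3% per year below is my rough assumption, not an official CPI series:

```python
# Rough sanity check on the inflation-adjusted GTX 470 price.
launch_price_2010 = 349        # GTX 470 launch MSRP
years = 12
avg_annual_inflation = 0.023   # assumed average annual rate, approximation only

adjusted = launch_price_2010 * (1 + avg_annual_inflation) ** years
print(f"${adjusted:.0f} in 2022 dollars")   # ~$459, in line with the ~$460 quoted above
```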
That'd be interesting, but to be quite frank the 6950XT performs as well as I expect it to. The Infinity Cache does such a good job that it's up to core clocks to make up the rest.
AMD's focus on memory architecture/hierarchy makes it easy to find where performance is limited.
Edit: By performing well I meant the weak performance increase on the refresh. Memory was never a bottleneck for RX 6800 and up; just like the Radeon VII and R9 Fury series.
How was memory not a bottleneck if increasing memory clocks always yields better performance, even at stock settings...
Memory was absolute dog shit and a bottleneck on RDNA2. Check benchmarks and see how the 6900XT LC with 18Gbps scores against normal 6900XTs... The benefit from faster memory was gigantic, especially at higher resolutions (Time Spy Ultra/Extreme for instance).
Having owned a Radeon VII (RMA'd twice), I got improved performance by raising memory clocks, but it was not "proportional" to the increase in bandwidth. Faster memory increases bandwidth and reduces latency. Reduced latency means the shader cores wait less and can execute their work faster, which can free resources for other threads to use. This effectively allows the shader cores to do more work and improves performance, provided the data has spilled to GPU global memory.
So if you want to put it that way, my RVII was limited by memory speed, but to be quite frank these cards get much better performance out of core clockspeeds, which indicates an ALU bottleneck (RX 6800 and up) or a ROP bottleneck (RVII).
Another key takeaway is the Vega 56 bios mod/flash. After flashing a compatible Vega 64 bios (which I did on a V56), the card frequently performed within 1% of the Vega 64. This indicated the card was not (particularly) limited by compute power.
TLDR: You can always improve performance by making something good better but why not fix the weaknesses first where you have more to gain?
I had a Vega 56 Pulse flashed to a 64.
Performance increased drastically because the 64 vbios ran the memory at 1.356V rather than 1.29V (iirc), which meant you could increase memory speed by a few hundred MHz (>500 MHz iirc).
However, performance was nowhere near the "within 1% of the 64" you claim. Go check the benchmarks, the flashed 56 was still way behind the 64s in most tasks.
The behaviour on RDNA2 is the same. I've played with my 6900XT a lot; mine was volt modded, currently ranked amongst the top 30 on 3DMark bench suites (sold my PC 6 months ago), and it has been ranked top 1 and top 5 on most of the bench suites for a while. I want to believe I know these cards.
Want proof of how memory is a gigantic bottleneck? People flashed the Ref LC vbios (18Gbps) onto their XTXH cards (16Gbps) just so that they could get access to relaxed vram timings, which allowed them to actually overclock memory to >17Gbps stable and increase performance by a gigantic fuckton while the core remained the same.
More proof can be seen in mining: the core is so fast you only need around 900-1000 MHz to feed the memory running at 16.5Gbps (in my case). This meant you could mine at 120-140W.
Either way I understand your point of view, but RDNA2 could be extremely good with faster memory, which is a shame. That's why the ref LC model was very sought after by some XOCers; volt mod them and you get access to the best of both worlds, fast core and fast memory (rough bandwidth math below).
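Rough math on why that 18 Gbps LC vbios matters, assuming Navi 21's 256-bit GDDR6 bus: peak bandwidth scales linearly with the effective data rate, so 16 to 18 Gbps is a flat 12.5% more raw bandwidth before you even touch timings. A quick sketch:

```python
# Back-of-the-envelope GDDR6 bandwidth, assuming a 256-bit memory bus (Navi 21).
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int = 256) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width, divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

for rate in (16.0, 18.0):   # stock XTX(H) vbios vs. reference LC vbios
    print(f"{rate} Gbps -> {peak_bandwidth_gb_s(rate):.0f} GB/s")
# 16.0 Gbps -> 512 GB/s, 18.0 Gbps -> 576 GB/s: +12.5% raw bandwidth from the flash alone
```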
Unfortunately I'm having issues finding the specific V56 mod review with the results I was talking about. Regardless, the source is about 5 years old and may not hold up to today's standards. The Vega 56 flashed to 64 provided an uplift (some showed it within 2%; a tiny overclock would put it at parity with the V64). Gamers Nexus in particular modded the V56 and it "killed" the stock V64. Once again, 5 years is a long time for hardware, so I can see the reasoning that the V64 held up better as games demand more compute power, plus driver optimizations.
Regarding the 6900XT it's all relative. I was able to find the post making the claims about memory overclocking with the liquid cooled bios significantly improving performance, but the settings/benchmark weren't posted. Thing is, when you run the core clocks beyond the manufacturer's settings, it puts additional pressure on the memory system, as the increased clock speed allows the cores to make more memory requests/work faster. You can, effectively, increase core clockspeeds enough that you start hitting a memory wall and then claim it's "bandwidth-limited" or "needs more bandwidth."
Which is exactly what I think is happening to you if you are ranked top 30 in 3DMark; the volt mod being mentioned also gives away that the card is being run out of spec. My analysis was largely based on stock configurations (comparing the performance of the RX 6800, 6800XT, and 6900XT) and the average overclocking done by reviews where multiple games/synthetics are tested. At 4K (where bandwidth matters) I found a very strong correlation between increased core count and performance; the same also applied to clockspeed. The cut ROPs (96 vs 128) on the 6800 had very little impact on performance, unless my methodology was incorrect.
I'll admit to overlooking heavy overclocking which can skew the performance characteristics of a card. However by the metrics of stock configurations memory bandwidth is a non-issue. As mentioned in an earlier post you will always get scaling off of faster memory speeds (it does two things at once) but the RX 6800 and higher scale incredibly well to shader core count and clockspeeds. Also the best way to check for bandwidth bottlenecks is running a title/benchmark that uses hardware MSAA (shader based MSAA renders the test useless) and cranking up the MSAA resolve. If you are NOT limited by memory bandwidth then MSAA (at the chosen level) will have no performance impact on the rendered scene.
Considering the regular 6600XT beats the 3060 and the non-XT 6600 trades blows with it, the 6650XT beating it for $400 isn't anything to write home about
If it beat or matched the 3060ti then that would be interesting, but I don't think it will, making it a kinda obsolete product, especially as stock is more and more readily available
Which makes sense, since it's the only AAA title so far which requires RT to function. Everything else just uses RT as optional window dressing for their rasters, rather than an integral part of the rendering pipeline.
Correct. Fully raytraced lighting is actually simpler to implement and compute in realtime than the compounded layers required for a good raster. Adding RT effects on top of a good raster kills performance, but replacing the raster entirely improves performance on adequate hardware.
Because there'd be a huge uproar from gamers who have pre-RT hardware. Considering the (admittedly recovering, but still...) state of the gpu market, it's good that hasn't happened yet.
Fair point, but why not remaster all games with it like metro did, removing every raster and leaving only ray tracing doesn't sound too hard (I have no idea how easy or hard it would be)
Because PC gamers with RT-capable GPUs are still a minority, and devs still want to release on PS4 and XB1. When RT-capable PCs and PS5/XBS become more common, we will see more games require RT. The transition may be slowed by the limited RT capabilities of the current consoles, though.
Most gamers will be ok with the RDNA 2 cards. However, I would wait for RDNA 3 and build more slowly with a Phoenix APU before buying the 7900xt or rtx 4090 for 8k gaming, as prices also start to drop on 55 inch TVs made for 8k. Once Asus and Gigabyte make 8k 120Hz PC gaming monitors using HDMI eARC (for a sound bar for games like God of War) or DP cables that can handle native 8k gaming, then your future proofing is done for a few years, until the late 2020s.
> Fully raytraced lighting is actually simpler to implement and compute in realtime than the compounded layers required for a good raster.
i thought new features like mesh shaders made it so raster performance was also a moving performance target. totally agree with you, but it seems like it's gonna take many many generations for raytracing to wholesale supersede raster implementations.
Judging by the performance uplift in Metro Exodus Enhanced, I think it will come as soon as RT hardware becomes ubiquitous. For shadows and reflections in particular, it is far simpler to implement an RT solution than anything else. For global illumination, the Metro devs also found it easier to implement with RT than their original solution. They did a great deep dive article and video on it.
Actually, you're right. Playing Control and seeing little reflections in wine glasses is the whole reason I went and bought Metro, now that I think back.
It's fine to not care about graphics but most people who spend a ton of money on a GPU tend to care. It's probably why AMD sticks much closer to Nvidia in pricing at the lower pricepoints where RT is less relevant but not so much at the top end.
I think even just proper reflections, especially when you combine it with a low roughness cut-off, without all the limits & artifacts of SSR are a massive improvement. Even in a game like RE8 with limited & low res RT the difference can be quite substantial imo.
it's a bit of a hot take, but i think moving forward amd's raytracing implementation will be the game developers' target, since it's amd RT that lives in consoles.
Raytracing is raytracing. The hardware solution is different but similar at the same time. We're already 1.5 years into the gen though, with plenty of console first games that include RT, and it hasn't changed anything. Not even in games that get sponsored by AMD. They just use less RT so the hit to performance is smaller. Even still, Nvidia sponsors tons of games anyway.
Hopefully RDNA3 will bridge the gap somewhat.
A lot of the new console games are cross gen. We still haven’t seen any of the heavy hitters move their engines or games to next gen only yet. There will be a shift when that happens.
Shift how? This is the same stuff people were saying in the PS4 gen about AMD in general.
I feel like AMD sponsored games that still run better on Nvidia with RT showcase that these claims will be bogus. This either means Nvidia is better at RT in general, or that it's very hard to optimize for AMD's solution and without optimization performance is still better on Nvidia, since an AMD sponsored game isn't going to be optimized for Nvidia either.
Also I don't believe AMD is going to stick with their current solution. They'll definitely improve it so potential RDNA2 optimizations won't matter in a couple of years.
According to Hardware Unboxed, the RTX 3060ti and the RX 6700XT have around the same performance
https://youtu.be/pnZRuY-jFVM
I doubt that the RX 6650XT would match the RX 6700XT
Also, according to data provided to reviewers through AMD's official guide, the RX 6650XT is just 2% faster than the RX 6600XT on average
https://videocardz.com/newz/official-radeon-rx-6x50xt-series-gaming-performance-leaks-out-rx-6950xt-is-4-faster-than-rx-6900xt
Depends on the games. Some games the 6700xt is better than the 3070, some games it falls a bit behind the 3070, and some games it gets a double digit beating by the 3070, putting it squarely in the 3060Ti category.
Although, I've learned that relying on graphs instead of in-game performance shows two different stories. When you look at frame times and actual in-game FPS, the 3070 and 6700XT actually trade blows. Of course it tilts in favor of the 3070, but not by as big of a margin as Hardware Unboxed supposedly shows. That, and a lot of reviewers don't specify whether SAM is used or not, nor are games that get patches to fix SAM performance re-tested.
The problem I see with graph reports is most use the in-game benchmark, or situations that aren't common in game, to gauge performance. For instance, with AC Valhalla the in-game benchmark shows me at 88.6 FPS, but when I'm roaming the countryside I sit between 96-110 FPS, in small villages it sits at roughly 92-98 FPS, and in Ravensthorpe I get between 82-94 FPS. So I see the graphs as a "rough idea," but they don't show frame times. I watched a video comparing the 3070Ti and 6700XT and the frame times on the 6700XT were superior; while the 3070Ti was pulling better FPS, the 6700XT played smoother.
>RX 6650 XT vs RX 6600 XT = 2% Faster
...it says. That still looks far away from the 3060ti to me, even at 1080p. Even a $350 price tag is hard to justify for a card with no dedicated machine learning capability, and RT compute per core being around half that of the competition. This looks like a shameful release, and I'd imagine they won't sample these cards to any reviewer, because they look like a joke.
I agree: no impactful performance increase over a 3060ti, which has DLSS and stronger RT, and it's not any cheaper
For people in the west where prices are starting to equalise this product doesn't make sense
This product is gonna fall so hard on its face
Any difference of 5% or less is usually within the actual variance of most tests, so it's effectively a null result or a tie. So saying card x is 2% faster than card y is BS marketing; effectively they are the same.
So here AMD is saying they are selling you the same card for more money... disappointing.
It has 17.5gbps memory, and does boost higher. It's not the same card. Maybe if you only run two tests it's down to variance, and reviewers usually do 3 or less, but there is going to be a testable difference if they really wanted to commit and minimize variance by running the same tests a few dozen times. I can flip a coin 8 times and likely won't get a 50/50 result, and might be way off, but if you flip it a thousand times the chances of not getting a relatively close 50/50 score are astronomically low.
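The coin-flip argument, as a minimal simulation. The ~3% per-run noise is an assumed figure for illustration, and "card B" is defined as genuinely 2% faster; the point is only that a real 2% gap is hard to see in 3 runs and easy to see in hundreds:

```python
import random

def measure_avg_fps(true_fps: float, runs: int, noise_sd: float = 0.03) -> float:
    """Average FPS over `runs` noisy benchmark passes (assumed ~3% run-to-run noise)."""
    return sum(random.gauss(true_fps, true_fps * noise_sd) for _ in range(runs)) / runs

random.seed(0)
for runs in (3, 30, 300):
    card_a = measure_avg_fps(100.0, runs)   # hypothetical card A, true 100 fps
    card_b = measure_avg_fps(102.0, runs)   # hypothetical card B, truly 2% faster
    print(f"{runs:3d} runs: measured gap = {(card_b / card_a - 1) * 100:+.1f}%")
# With only 3 runs the measured gap can land almost anywhere within a few percent;
# with hundreds of runs it settles close to the true +2%.
```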
It is the same card; the only thing changed is the memory chips, those are actually different. The rest is just a factory overclock and a raised power limit. My point is that every test has a margin of error (+/-) and any result within the margin of error is not something that we should be paying extra for. Most reviewers that are reputable (HUB, GN, LTT) won't consider any results within the margin as valid. They won't name one better than the other, because there isn't a true difference.
I feel like that's a distortion of what margin of error really means. And margin of error only exists to a large degree if you don't do enough tests. Most reviewers (HUB, GN, LTT) won't view those results as valid because they don't have time to test it a hundred times, and testing it like 3 times is an insufficient data set to draw conclusions from. They are only telling you to take their results with a grain of salt, because they don't have a large enough data set. They are only warning you that they can't be sure of their results because of time constraints.
It's no different than if you're in a medical study with only two dozen subjects: the results are not good enough to legalize a drug, but if you have 120,000 subjects it's much more reliable and enough to legalize it. People will tell you to be skeptical of a drug that only has a couple dozen test subjects, but not of a drug that's been around for two decades.
But I just don't think that a card that is objectively faster in every way is just relying on margin of error. That just can't be the case.
It's not only the small sample size but also the inaccuracies of the tests themselves. We are dealing with averages: the average frame rate of a given card running a given game. That's why small percentage differences aren't very impressive. They are meaningless. If a 3070 does 120fps in a game, a 6700xt does 121fps, and the 6750xt does 124, you're not going to see a difference... essentially they are the same
But I don't view them as the same. If the difference is measurable with enough data, and it's only 2%, you can argue it's not worth it. If your argument is that it's not worth it, I agree. But when Hardware Unboxed, after a 40 game benchmark run with 3 runs each, 120 total, says they are the same, then that's not true. If they said the difference doesn't really matter enough to spend extra, then sure. I'm more of a fan of Gamers Nexus because I think their understanding of margins of error is much more accurate and scientific, and they don't throw those words around that much to make everything look like a wash. It just makes them seem more precise.
Most of the reviewers I follow say there is anywhere from a 3-5% +/- margin... That means anything in a 10% range is functionally the same. So if you're telling me a card is 2% faster, that is an impossible difference to prove because the test isn't accurate enough to prove it. Your test could show a 2% win but it could actually be 5% faster, or equally it could be 3% slower; the test just can't give you an accurate enough result.
So when a company, AMD or Nvidia or Intel says it's 2% faster it's BS marketing. Now when the margin is +/-5% and you get a difference of 10 or 15%, that is a measurable result and they can say x card is faster than y card....
In the end, if these are indeed the launch MSRPs, they are way overpriced for what we are getting... I'm disappointed
>Most of the reviewers I follow say there is anywhere from a 3-5% +/- margin
I've heard them say that when we're talking about a single game being tested a couple of times, but not as a whole over a hundred benchmark runs total over multiple games. Hardware Unboxed had no trouble pointing out how the 6700xt is 3% slower than the 3070 on average.
>test isn't accurate enough to prove it.
when you do enough runs, and get enough sample data it becomes accurate.
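That's essentially the standard-error argument: even if a single run wobbles by a few percent, the uncertainty on the average shrinks with the square root of the run count. A small sketch, assuming ±3% per-run noise (an assumption, not a measured figure):

```python
import math

per_run_noise_pct = 3.0   # assumed run-to-run spread of one benchmark pass

for total_runs in (3, 30, 120):
    std_error = per_run_noise_pct / math.sqrt(total_runs)   # uncertainty of the averaged result
    print(f"{total_runs:3d} runs: average good to about +/-{std_error:.1f}%")
# 3 runs -> ~1.7%, 30 runs -> ~0.5%, 120 runs -> ~0.3%:
# a 2% gap is within the noise of 3 runs but resolvable across ~120 total runs.
```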
6600XTs are too expensive where I am. They cost around $600. If there is a GPU that costs $300, I am game. I am really thinking about getting a 3050 as an upgrade. It costs upwards of $450, but it is what it is.
So literally just the same price-to-performance as everything else for the last 5 years. Where every other technology gets cheaper over time for the same performance, GPUs haven't budged in half a decade.
Because it's all determined based on cryptomining potential and what miners are willing to pay. Mining really needs to die already so GPU pricing can finally get unfucked.
The average price of GPUs and smart phones is rising faster than inflation. At least with smart phones there is a mid range, but today's budget GPUs cost more than yesterday's midrange.
I usually spend $200ish on a gpu every 4-5 years but my $200 1060 has absolutely nothing to replace it in that price range and nothing on the horizon. I’ve been looking at a 6600 but even $300 is a big stretch. The 3050 is barely an upgrade and the 6500xt is junk. Sad state for pc gaming.
> So literally just the same price-to-performance as everything else for the last 5 years
Show me a card from 5 years ago with the same performance as a 3070 for $500. Or even $600. Or for that matter, as the 3060ti for $400/$500. The 2080 was $700 IIRC, and the 3060ti seems to be on par with it, and the 3070 beats it, while trading blows IIRC with the much more expensive 2080ti. Heck, even at $1000 for a 12GB 3080 that ties the 5-year-old 2080ti on launch price, with a notable jump in performance. The 6900XT for $1000 also beats the 2080ti.
Obviously prices are currently inflated, but at the moment there are still better values than 5 years ago, in terms of price to perf.
That's because people always like to complain. The people where I live think they should be able to buy a house today at the same prices as 10 years ago. That's why there is something called a market; if you don't like it, then don't buy it or just get a console instead.
The 6800 MSRP was $580 and it beats a 3070ti, and AMD wants you to think a 6750xt, which is supposedly faster than a regular 3070, is a good deal at $550?? Literally I cannot stand AMD this past year
Absolute trash. Raising the prices that much on cards that don't perform more than 5% better on average than their older counterparts? Ridiculous AMD, absolutely ridiculous.
These prices sound bad. Performance is epsilon better than the originals and price is more than epsilon higher. The 6750XT looks like the relatively best upgrade, but even there I can have a 6700XT today for $499 and it's 10% more dollars for 7% more performance at some point in the future.
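Spelling out that value math with the figures above ($499 today for a 6700XT vs a $549 6750XT for roughly +7%, AMD's own number): you end up paying slightly more per frame than just buying a 6700XT now. A quick check:

```python
# Quick perf-per-dollar comparison using the prices and uplift quoted above.
old_price, new_price = 499, 549   # 6700XT price today vs. 6750XT MSRP
perf_gain = 0.07                  # ~7% faster, per AMD's slides (not independently verified)

price_increase = new_price / old_price - 1                  # ~ +10%
value_change = (1 + perf_gain) / (1 + price_increase) - 1   # change in performance per dollar

print(f"price: +{price_increase:.1%}, perf per dollar: {value_change:+.1%}")
# price: +10.0%, perf per dollar: -2.7%
```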
Beats the 3090 at what though? At 4K? At 1080p? At ray tracing?
6900xt already beat the 3090 most of the time at 1080p and 1440p. But at 4K or ray tracing the 3090 ran away. I can’t imagine a minor clock bump changes that much at all.
The 6x50 refreshes aren't priced relative to Nvidia's MSRP prices, but relative to their street prices. You won't find a 3060 for $329, so that's not the price AMD is trying to match.
Both amd and nvidia can sit on their 2 year old tech priced at 50-100% above msrp as far as I'm concerned. Prices are finally falling, and unless shit hits the fan again these cards will lose value in the following year unlike pretty much any other generation, so unless you absolutely have to buy a gpu today I wouldn't do it.
Apparently, there is no reference design from AMD for the RX 6650XT. Only the RX 6750XT and RX 6950XT will be sold by the AMD web store
https://videocardz.com/newz/amd-radeon-rx-6950xt-to-cost-1099-rx-6750xt-549-rx-6650xt-399
I used to comment here loads and get right into the detail of all of this stuff, was super exciting.
Sad to see that there's consistently little to be excited about regarding Radeon anymore (or GeForce, really). Not a slight on AMD as a business - I understand why they're doing what they're doing, but I can't help but mourn the days where the shit they put out was *exciting*, y'know?
well depends on the game. Heavy RT games fuck no lol. Dying light 2 or cp2077 is a complete wash. Some games with lighter implementations you can see equal performance
No. They have pretty much abandoned the 6800 and 6800xt, with yields being good enough to sell all navi21 chips as 6900xt/6950xt. Defective chips are so rare at this point they would have to cut down fully functional chips to meet demand... they aren't going to do that
No, take it back. I know perfectly well what you're trying to do. You are so confident in your 7900XT that you're justifying a price creep now. Stop it, AMD. I love your CPUs as much as you do, but this is ridiculous.
Man, those are some EXTREMELY misleading ray tracing results. What a joke. Even if, and that's a **big if**, they somehow managed to get that close in their cherry picked subset of games with the minimal "performance" ray tracing ( aka, throw away 20% of your FPS for literally no noticeable difference in most games), presenting it this way was clearly intended to mislead prospective customers into believing that AMD cards handle ray tracing almost as well as nvidia cards. We all know that that is absolutely inaccurate. I liked AMD better when they just told us like it was over the last 5 years "Ryzen is OK for gaming (and getting better every generation up until 5000 series was actually trading blows with Intel 10 and 12 series), great for productivity, and we're selling them cheap with a long-lasting socket platform."
Can we talk about how stupid this "catchy" phrase is?
No, these are good GPUs. Honestly, with the exception of 6500XT these are all pretty good GPUs. It's the prices that are bad.
Good GPUs ruined by greed? Yes! Waste of sand? Go home, you're drunk.
These are cards that are 2-7% better than their predecessors, it's not pointless. If they came at the same MSRP you'd be cheering for the upgrade. It's all about the cost.
I believe one can overclock the 6600xt to get the same results as the 6650xt.
The youtuber 'Ancient Gameplays' made some videos comparing the rtx 3060 with OC+rebar with the 6600xt with OC+SAM.
Those 6600xt results look a lot like the results from 6650xt
How is it cool at all? They jacked the price up by 30%+ for single digit gains? Now instead of overpaying scalpers, new buyers can overpay AMD directly. Price to performance of this refresh is trash.
30%? It's more like 12% if anything and that's the MSRP which means nothing. The market sets the price of these, so if there's barely any performance difference then the price won't change.
This is AMD stealing the scalper cut from AIB's. Only people going for reference cards will be hit with this increase.
AMD's main issue is, and has always been, their drivers. Team green has kicked their @$$ in that area for years.
Also, availability is non-existent for AMD. I had little choice but to buy a 3060, because there are no 6700's around.
So raytracing was stupid and a gimmick when AMD sucked at it, now when it's trying to catch up, it's a cool feature? Maybe more Xs in product names will sell more units.
I think it's also worth considering that these cards are not just a small performance boost; they remain more power efficient, generate less heat, and push the heat out of the case instead of into it. These are benefits that might be worth paying for, depending on your build.
Wasn't the 6750 XT rumoured just a couple of days ago to cost $499?
With the release supposedly 4 days from now, I'll wait for reviews. The pre-release numbers are mostly underwhelming, except for the 3DMark results, which were 10%-20% faster than the original x00 parts.
Yes, and this current $550 number is also just a rumor. It could be completely different at launch day. The price of a product is usually fluid right up until the last possible second.
From what I've heard from GN videos the actual MSRP is set super late. Sure they have a general range the whole time but the exact figure can be decided minutes before the announcement.
I thought this was going to be an opportunity for AMD to refresh their lineup to a price to performance level that actually makes sense after they've been jacking up their MSRPs 10% higher than the cards are actually worth. You can already find the 6600 and 6600xt selling for around MSRP.
Is AMD not afraid of Intel's upcoming stuff? Do they know something we don't? Is it really a big failure? Or are they planning to drop prices by $50-100 in the next 6 months when ARC for desktop releases?
Does my 5700XT still hold up? Are any of these really worth dropping the money for right now or should I just keep waiting for the market to correct and be happy for a while?
The 6750XT being faster than a 3070 would mean it needs to be at least 20% faster than a 6700XT; if that is the case, then it must be a good enough refresh that performs reasonably well over the standard 6700XT.
But as always, I am going to wait for official third party benchmarks before concluding this story.
Waits for rtx 3090 prices to drop to $1399 and then $1299 USD MSRP respectively in response to this AMD GPU. This is why competition is needed again, like 10 years ago with Intel in the CPU market.
It was already uncommon knowledge that in non Nvidia sponsored titles (where they pay for more optimisation, especially with RT) AMD and Nvidia cards were on par with each other in most cases, with only margin of error splitting them. It would seem that a 2-12% performance increase puts them over the line in non Nvidia sponsored titles and probably brings them closer to on par in Nvidia sponsored titles.
Stupid products that nobody wanted or asked for.
If they're going to do a mid-cycle refresh, then fine. But they should do what Nvidia did and replace the older cards with the new ones like they did with the 2070 and 2080 being replaced by the 2070S and 2080S at the same price point.
Instead, AMD tries to squeeze more cash out of us by giving us stupid mid-cycle refreshes on a process that's going to be obsolete in 6 months.
I personally gave up and just grabbed a Sapphire Pulse RX 6600 for 300€ without tax, and I'll hang on to it until the end of next gen because I don't see them being cheap anytime soon.
AMD is deliberately discontinuing older cards and raising MSRP even more.
Those thots are gonna kill PC gaming if they continue with these practices
I know right. Never trust this until you get independent reviews
Lumen in UE5 looks very bad without hardware raytracing to back it up actually. just an fyi i guess.
Regardless it's a thing that will not materialise within the cycle (active marketing+sales) of this product.
You can probably extrapolate the other results by comparing AMD benches to 3rd party benches for the original cards
AMD says the 6950XT is 4% faster than the 6900XT at 4K while at the same time claiming it's 11% faster than the 3090. Accurate my ass lol
Huge understatement there.
Yeah, whatever, wake me up when anyone makes a video card under $200
6400! Wait... no don't!
ABORT! Put him back to sleep!
But but, he would have his eternal rest!!!
>RX 6650 XT vs RX 6600 XT = 2% Faster

So this justified a price increase according to AMD? Ridiculous.
+10% $$$ for +4% performance
+15% $$$ for +7% performance
+5% $$$ for +2% performance
Bad deal.
Agreed, the price hike to performance uplift ratio is bad on all three.
I guarantee that trend is going to continue, if not end up worse with the 7000 series.
Apparently you didn't get the /s. The plague on mankind known as crypto-mining is the actual reason.
There is no /s in your comment. How am I supposed to detect sarcasm from text?
I wouldn't say a dive. It's slowed down a little. We'll know when it actually takes a dive.
Can't happen soon enough.
I'd like to see 6950xt vs 6900xt @ 1440p and 1080p
I thought the core clocks and power limit were raised on that card as well.
At stock speeds I'm 100% with you, memory isn't an issue.
To be fair, it’s looking like the 6650xt will hang with a 3060ti at 1080p, but it obviously will lose ground as resolution goes up.
Or when any RT is used.
Yeah but for me Metro Exodus is the only title where I’ve gained anything from ray tracing
Can’t argue with that. What’s crazy though, is even though it’s fully ray traced, it still runs better than most of the half assed implementations.
So why isn't it used in every new game?
Fair point, but then why not just remaster old games with it, both versions can exist
Control looks way different with raytracing as well. The amount of reflective glass in the game makes a night and day difference with RT.
Control is amazing with RT on the glass. Best use case I've seen
You can also zoom in on NPC eyes in photo mode to see your reflection.
I did see that. It's pretty crazy to see it in real time when you do photo mode
Yes, you are correct. As for the matter of SAM, Hardware Unboxed said that it was turned on when benchmarking all the games
>6650xt will hang with a 3060ti at 1080p

lol not even close.
>RX 6650 XT vs RX 6600 XT = 2% Faster

...it says. That still looks far away from the 3060 Ti to me, even at 1080p. Even a $350 price tag is hard to justify for a card with no dedicated machine learning capability and RT compute per core around half that of the competition. This looks like a shameful release, and I'd imagine they won't sample these cards to any reviewer, because they look like a joke.
I agree: no impactful performance increase over a 3060 Ti, none of the DLSS and RT features, and not any cheaper. For people in the West, where prices are starting to equalise, this product doesn't make sense. It's going to fall so hard on its face.
Any difference of 5% or less is usually within the actual variance of most tests, so it's effectively a null result or a tie. So saying card X is 2% faster than card Y is BS marketing; effectively they are the same. So here AMD is saying they will sell you the same card for more money... disappointing.
It has 17.5 Gbps memory and does boost higher; it's not the same card. Maybe if you only run two tests it's down to variance, and reviewers usually do 3 or fewer, but there is going to be a testable difference if they really wanted to commit and minimize variance by running the same tests a few dozen times. I can flip a coin 8 times and likely won't get a 50/50 result, and might be way off, but if you flip it a thousand times the chances of not getting a relatively close 50/50 split are astronomically low.
It is the same card; the only thing changed is the memory chips, which are actually different. The rest is just a factory overclock and a raised power limit. My point is that every test has a margin of error (+/-), and any result within the margin of error is not something we should be paying extra for. Most reputable reviewers (HUB, GN, LTT) won't consider any results within the margin as valid. They won't name one better than the other, because there isn't a true difference.
I feel like that's a distortion of what margin of error really means. Margin of error only exists to a large degree if you don't do enough tests. Most reviewers (HUB, GN, LTT) won't view those results as valid because they don't have time to test a hundred times, and testing something like 3 times is an insufficient data set to draw conclusions from. They are only telling you to take their results with a grain of salt because they don't have a large enough data set; they are warning you that they can't be sure of their results due to time constraints. It's no different from a medical study: with only two dozen subjects the results aren't good enough to approve a drug, but with 120,000 subjects it's much more reliable and enough to approve it. People will tell you to be skeptical of a drug that only has a couple dozen test subjects, but not of a drug that's been around for two decades. But I just don't believe a card that is objectively faster in every way comes down to margin of error. That just can't be the case.
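(A minimal sketch of the sample-size point being argued here, with made-up numbers: assume a card's true average is 100 fps with roughly 3% run-to-run noise, then see how far a reported average can stray at 3 runs versus a few dozen or a few hundred. Purely illustrative; nothing here is a measured figure.)

```python
import random
import statistics

# Made-up illustration: a card's "true" average is 100 fps with ~3% run-to-run noise.
TRUE_FPS = 100.0
RUN_NOISE_SD = 3.0  # standard deviation of a single benchmark pass, in fps

def reported_average(runs: int) -> float:
    """Average fps a reviewer would report after `runs` noisy benchmark passes."""
    return statistics.mean(random.gauss(TRUE_FPS, RUN_NOISE_SD) for _ in range(runs))

random.seed(0)
for runs in (3, 30, 300):
    # Repeat the whole "review" 2000 times and track the worst miss from the true value.
    worst_miss = max(abs(reported_average(runs) - TRUE_FPS) for _ in range(2000))
    print(f"{runs:>3} runs: worst observed error over 2000 trials = {worst_miss:.2f} fps")
```

With 3 runs the reported average can easily land several percent off; with a few hundred runs it barely moves, which is the coin-flip argument above in miniature.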
It's not only the small sample size but also the inaccuracies of the tests themselves. We are dealing with averages: the average frame rate of a given card running a given game. That's why small percentage differences aren't very impressive; they are meaningless. If a 3070 does 120 fps in a game, a 6700 XT does 121, and the 6750 XT does 124, you're not going to see a difference... essentially they are the same.
But I don't view them as the same. If the difference is measurable with enough data and it's only 2%, you can argue it's not worth it. If your argument is that it's not worth it, I agree. But when Hardware Unboxed, after a 40-game benchmark with 3 runs each (120 total), says they are the same, that's not true. If they said the difference doesn't matter enough to spend extra, then sure. I'm more of a fan of Gamers Nexus because I think their understanding of margins of error is more accurate and scientific, and they don't throw those words around so much that everything looks like a wash. It just makes them seem more precise.
Most of the reviewers I follow say there is anywhere from a 3-5% +/- margin... That means anything within a roughly 10% range is functionally the same. So if you're telling me a card is 2% faster, that's an impossible difference to prove, because the test isn't accurate enough to prove it. Your test could show a 2% win, but the card could actually be 5% faster or equally 3% slower; the test just can't give you an accurate enough result. So when a company, AMD or Nvidia or Intel, says it's 2% faster, it's BS marketing. Now, when the margin is +/-5% and you get a difference of 10 or 15%, that is a measurable result and they can say card X is faster than card Y... In the end, if these are indeed the launch MSRPs, they are way overpriced for what we are getting... I'm disappointed.
>Most of the reviewers I follow say there is anywhere from a 3-5% +/- margin

I've heard them say that when a single game is tested a couple of times, not across a hundred-plus benchmark runs aggregated over multiple games. Hardware Unboxed had no trouble pointing out that the 6700 XT is 3% slower than the 3070 on average.

>test isn't accurate enough to prove it.

When you do enough runs and gather enough sample data, it becomes accurate.
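(Rough back-of-the-envelope for "enough runs", under the simplifying assumption that every run is an independent sample with ~3% spread; real games differ from one another, so treat this only as an illustration of why a 120-run average is far tighter than a 3-run one.)

```python
import math

# Assumed per-run noise of ~3% (an illustrative figure, not a measured one).
PER_RUN_SD_PCT = 3.0

for n_runs in (3, 30, 120):
    # Standard error of the mean shrinks as 1/sqrt(n); roughly 2x SEM is a
    # crude "can we tell two cards apart" band.
    sem = PER_RUN_SD_PCT / math.sqrt(n_runs)
    print(f"{n_runs:>3} runs: average is good to roughly +/-{2 * sem:.1f}%")

# ~+/-3.5% at 3 runs vs ~+/-0.5% at 120 runs: a 2% gap is unresolvable in the
# former case and comfortably resolvable in the latter.
```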
6600 XTs are too expensive where I am; they cost around $600. If there were a GPU that cost $300, I'd be game. I am really thinking about getting a 3050 as an upgrade. It costs upwards of $450, but it is what it is.
So literally just the same price-to-performance as everything else for the last 5 years. Where every other technology gets cheaper over time for the same performance, GPUs haven't budged in half a decade. Because it's all determined based on cryptomining potential and what miners are willing to pay. Mining really needs to die already so GPU pricing can finally get unfucked.
The average price of GPUs and smartphones is rising faster than inflation. At least with smartphones there is a midrange, but today's budget GPUs cost more than yesterday's midrange.
I usually spend $200ish on a gpu every 4-5 years but my $200 1060 has absolutely nothing to replace it in that price range and nothing on the horizon. I’ve been looking at a 6600 but even $300 is a big stretch. The 3050 is barely an upgrade and the 6500xt is junk. Sad state for pc gaming.
Couldn't agree more. As someone in the same situation with an RX 580, dropping $500 on a 3060 Ti is really not that appealing.
Same
Almost the same situation. RX 570 owner here, but the prices just don't make sense right now. Midrange GPUs are just so overpriced nowadays.
As demand goes up with static or not-growing-fast-enough supply, prices go up or don't go down as quickly as they normally would.
Performance per dollar is up lots over 5 years. A 3070 Ti today is available for 1080 Ti’s MSRP and it absolutely smokes that part.
> So literally just the same price-to-performance as everything else for the last 5 years Show me a card from 5 years ago with the same performance as a 3070 for $500. Or even $600. Or for that matter, as the 3060ti for $400/$500. The 2080 was $700 IIRC, and the 3060ti seems to be on par with it, and the 3070 beats it, while trading blows IIRC with the much more expensive 2080ti. Heck, even at $1000 for a 12GB 3080 that ties the 5-year-old 2080ti on launch price, with a notable jump in performance. The 6900XT for $1000 also beats the 2080ti. Obviously prices are currently inflated, but at the moment there are still better values than 5 years ago, in terms of price to perf.
That's because people always like to complain. People where I live think they should be able to buy a house today at the same prices as 10 years ago. That's why there is something called a market; if you don't like it, don't buy it, or just get a console instead.
The 6800's MSRP was $580 and it beats a 3070 Ti, and AMD wants you to think a 6750 XT, which is supposedly faster than a regular 3070, is a good deal at $550?? I literally cannot stand AMD this past year.
Absolute trash. Raising the prices that much on cards that don't perform more than 5% better on average than their older counterparts? Ridiculous, AMD, absolutely ridiculous.
sure, but we all know they will drop in the market at 1.5x that price
coulda called it the 6969xt
6969xt: Nice Edition
These prices sound bad. Performance is epsilon better than the originals and the price is more than epsilon higher. The 6750 XT looks like the best upgrade, relatively speaking, but even there I can have a 6700 XT today for $499, versus 10% more dollars for 7% more performance at some point in the future.
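(Quick sanity check on that trade-off, using the commenter's own rough figures of ~10% more money for ~7% more performance; the prices here are assumptions from the thread, not confirmed MSRPs.)

```python
# Rough figures from the comment above; the 6700 XT at $499 is the baseline.
base_price, base_perf = 499.0, 1.00
new_price, new_perf = 499.0 * 1.10, 1.07  # ~10% more money for ~7% more performance

change = (new_perf / new_price) / (base_perf / base_price) - 1
print(f"performance per dollar: {100 * change:+.1f}%")
# Comes out to roughly -2.7%: the refresh is a slightly *worse* value per dollar,
# so the only reason to wait for it is the extra absolute performance.
```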
The 6750 XT is actually the worst, since it's just $30 shy of a 6800.
6800s aren't available anywhere close to alleged MSRP. Realistically it's at least a $700 product.
and you think a 6750xt will be close to MSRP? partner cards will sell this for $700 or over
As the 6700 XT is selling at $499 today, I can't imagine many 6750 XTs selling for $700.
Yeah these are essentially identical. Maybe the 18Gbps G6 costs an extra dollar per GB
This is more AMD stealing the scalper margin from AIB's than anything.
It's actually cheaper, if the rumours are anything to go by. 18 Gbps is the most commonly made now.
Beats the 3090 at what though? At 4K? At 1080p? At ray tracing? 6900xt already beat the 3090 most of the time at 1080p and 1440p. But at 4K or ray tracing the 3090 ran away. I can’t imagine a minor clock bump changes that much at all.
At 3dmark. 3dmark loves RDNA2
>Beats the 3090 at what though? At 4K? At 1080p? At ray tracing?

Cherry-picked AMD-sponsored games.
6900XT does not beat the 3090 "most of the time" at 1440p.
>I can’t imagine a minor clock bump changes that much at all.

Doesn't it also have faster memory than the 6900XT?
Yea, you’re right. Should help a bit at 4K. I missed that.
Isn't the 3060 MSRP $329? I hope the 6650XT beats it for over 20% more cost.
The 6x50 refreshes aren't priced relative to Nvidia's MSRP prices, but relative to their street prices. You won't find a 3060 for $329, so that's not the price AMD is trying to match.
Both AMD and Nvidia can sit on their 2-year-old tech priced at 50-100% above MSRP as far as I'm concerned. Prices are finally falling, and unless shit hits the fan again these cards will lose value over the following year, unlike pretty much any other generation. Unless you absolutely have to buy a GPU today, I wouldn't do it.
6650 XT looks like a VHS tape. Kind of like it, though.
Radeon Beta-max

Edit: Oh, god damn it! That is a blower fan design!
i really wanted a 6650XT reference to put into my SFFPC
Apparently, there is no reference design from AMD for the RX 6650 XT. Only the RX 6750 XT and RX 6950 XT will be sold by the AMD web store: https://videocardz.com/newz/amd-radeon-rx-6950xt-to-cost-1099-rx-6750xt-549-rx-6650xt-399
I'd love to see the RT performance on the 6750 XT. Stronger to me means a clear-cut winner, not just faster raster.
I used to comment here loads and get right into the detail of all of this stuff, was super exciting. Sad to see that there's consistently little to be excited about regarding Radeon anymore (or GeForce, really). Not a slight on AMD as a business - I understand why they're doing what they're doing, but I can't help but mourn the days where the shit they put out was *exciting*, y'know?
And they still have no fucking Deep Learning software support....
But is it faster while doing heavy ray tracing, or just at regular old rendering?
Well, depends on the game. Heavy RT games, fuck no lol. Dying Light 2 or CP2077 is a complete wash. In some games with lighter implementations you can see equal performance.
I doubt it will make much difference, they aren't adding any more CUs
These mfs gonna be at least +60% msrp tho so I wouldn’t get my hopes up too high just yet my friends
You could probably overclock the non-6x50 XT cards for similar results.
Is there a 6800 or 6850 XT?
No. They have pretty much abandoned the 6800 and 6800 XT, with yields being good enough to sell all Navi 21 chips as 6900 XT/6950 XT. Defective chips are so rare at this point that they would have to cut down fully functional chips to meet demand... they aren't going to do that.
Thanks!
No, take it back. I know perfectly well what you're trying to do. You are so confident in your 7900XT that you're justifying a price creep now. Stop it, AMD. I love your CPUs as much as you do, but this is ridiculous.
Looking at the data, no, it is not faster than my RTX 3060.
Great, but will it crash Davinci Resolve?
Yikes. GPU market is so trash
Add $400 to each of these and you have the real price, sadly.
Maybe 3 months ago you'd be right. 6700 XTs are on Newegg for $490.
Maybe in the US, but the rest of us don't have it anywhere close that good. Cheapest 6700 XT in my corner of the world is still $750.
absolute dogshit pricing
Man, those are some EXTREMELY misleading ray tracing results. What a joke. Even if, and that's a **big if**, they somehow managed to get that close in their cherry-picked subset of games with the minimal "performance" ray tracing setting (aka, throw away 20% of your FPS for literally no noticeable difference in most games), presenting it this way was clearly intended to mislead prospective customers into believing that AMD cards handle ray tracing almost as well as Nvidia cards. We all know that is absolutely inaccurate. I liked AMD better when they just told us like it was over the last 5 years: "Ryzen is OK for gaming (and it kept getting better every generation, until the 5000 series was actually trading blows with Intel's 10th and 12th gen), great for productivity, and we're selling it cheap on a long-lasting socket platform."
these GPU's, in the words of Tech Jesus, are a waste of sand
Can we talk about how stupid this "catchy" phrase is? No, these are good GPUs. Honestly, with the exception of 6500XT these are all pretty good GPUs. It's the prices that are bad. Good GPUs ruined by greed? Yes! Waste of sand? Go home, you're drunk.
It's not that they're "bad", it's that their existence is pretty pointless. Basically the same cards for a worse price.
These are cards that are 2-7% better than their predecessors, it's not pointless. If they came at the same MSRP you'd be cheering for the upgrade. It's all about the cost.
Exactly. I’m glad we’re on the same page
I believe one can overclock the 6600 XT to get the same results as the 6650 XT. The YouTuber 'Ancient Gameplays' made some videos comparing the RTX 3060 with OC + ReBAR against the 6600 XT with OC + SAM, and those 6600 XT results look a lot like the results from the 6650 XT.
Pretty distorted headline that misleads about what is actually being claimed here: single-digit gains over the original hardware.
So... more stagnation? Guess I'll be skipping another generation. What a waste of sand
Oh yeah, I'm already imagining the tagline Steve is going to come up with for this round of reviews... Do we know when the embargo lifts?
This is cool for new buyers but I'll just overclock my reference 6900xt and wait for the 7000 series.
How is it cool at all? They jacked the price up by 30%+ for single-digit gains. Now instead of overpaying scalpers, new buyers can overpay AMD directly. The price-to-performance of this refresh is trash.
30%? It's more like 12%, if anything, and that's the MSRP, which means nothing. The market sets the price of these, so if there's barely any performance difference then the street price won't change. This is AMD stealing the scalper cut from AIBs. Only people going for reference cards will be hit with this increase.
It's cool they get a faster card, and $99 extra over the 6900 XT is nothing like what the scalpers charge.
AMD is done. They milked the 5600X at $300 until ADL took sales back, and now this garbage. They're killing the PC community.
AMD's main issue is, and has always been, their drivers. Team green has kicked their @$$ in that area for years. Also, availability is non-existent for AMD. I had little choice but to buy a 3060, because there are no 6700s around.
Meh, nothing exciting about a late-cycle launch. Maybe 5% better, and it will be obsolete in a few months.
Nice. Newer tech being cheaper with more performance than older tech? Shocking... Lezgo AMD.
Disappointing, Lovelace will stomp them
So raytracing was stupid and a gimmick when AMD sucked at it, now when it's trying to catch up, it's a cool feature? Maybe more Xs in product names will sell more units.
Me who bought my 6700 XT from Micro Center for $879.99 6 months ago: 🥲🥲
Yeah but you've been gaming for 6 months
Time is money. Honestly, if you had mined with your card for the past 6 months, you would have easily gotten $360 of that back (avg $2/day x 30 days x 6 months).
The RX 6600 XT is already a bit faster than the 3060, so the RX 6650 XT must be almost at 3060 Ti level.
I think it's also worth considering that these cards are not just a small performance boost; they remain more power efficient, generate less heat, and push the heat out of the case instead of into it. Those are benefits that might be worth paying for, depending on your build.
Wasn't the 6750 XT rumoured just a couple of days ago to cost $499? With the release supposedly 4 days from now, I'll wait for reviews. The pre-release numbers are mostly underwhelming, except for the 3DMark results, which were 10%-20% faster than the original 6x00 parts.
Yes, and this current $550 number is also just a rumor. It could be completely different at launch day. The price of a product is usually fluid right up until the last possible second.
From what I've heard from GN videos the actual MSRP is set super late. Sure they have a general range the whole time but the exact figure can be decided minutes before the announcement.
Did they add more ray tracing cores (is that the correct name?), or is this a refreshed card with higher clocks and faster VRAM?
Well, I just went ahead and bought a Sapphire Nitro+ 6900 XT, so either way I'm in it for the long haul.
I doubt the prices, but I will keep my hopes up.
I thought this was going to be an opportunity for AMD to refresh their lineup to a price-to-performance level that actually makes sense, after they've been setting their MSRPs 10% higher than the cards are actually worth. You can already find the 6600 and 6600 XT selling for around MSRP. Is AMD not afraid of Intel's upcoming stuff? Do they know something we don't? Is it really a big failure? Or are they planning to drop prices by $50-100 in the next 6 months, when Arc for desktop releases?
Does my 5700XT still hold up? Are any of these really worth dropping the money for right now or should I just keep waiting for the market to correct and be happy for a while?
wait 5 more months for a 7700XT
The 6750 XT being faster than a 3070 would mean it needs to be at least 20% faster than a 6700 XT; if that is the case, then it must be a good enough refresh that performs reasonably well over the standard 6700 XT. But as always, I am going to wait for official third-party benchmarks before concluding this story.
Any single-fan 6650 XTs on the list?
Where does AMD's current video card lineup stand against Nvidia's?
By faster, I bet they mean 2-5% lol.
What about an RX 6850 XT?
Waits for RTX 3090 prices to drop to $1,399 and then $1,299 MSRP, respectively, in response to this AMD GPU. This is why competition is needed again, like it was 10 years ago against Intel in the CPU market.
I'm waiting till the next gen comes out.
Damn, if only I had known a couple of months ago that these were coming, I would have waited to buy the 6700 XT.
No 6850? And what about using 3D V-Cache for extra Infinity Cache? Or is that being saved for the 7000 series?
Will board partner cards sell for MSRP? 🤣
>6650 XT Faster Than RTX 3060 For $399

Since the RTX 3060's MSRP is $349, I guess it should be faster for $399 :P
The only way I’ll upgrade now is if they release a 6969xt
What a bargain 😒
Huge if we can buy these at MSRP
I just bought an rx 6600, did I get fucked?
It was already uncommon knowledge that in non-Nvidia-sponsored titles (Nvidia pays for extra optimisation in the titles it sponsors, especially with RT), AMD and Nvidia cards were on par with each other in most cases, with only margin of error splitting them. It would seem that a 2-12% performance increase puts them over the line in non-Nvidia-sponsored titles and probably brings them closer to par in Nvidia-sponsored ones.
Stupid products that nobody wanted or asked for. If they're going to do a mid-cycle refresh, fine, but they should do what Nvidia did and replace the older cards with the new ones, like the 2070 and 2080 being replaced by the 2070 Super and 2080 Super at the same price points. Instead, AMD tries to squeeze more cash out of us with mid-cycle refreshes on a process that's going to be obsolete in 6 months.
I personally gave up and just grabbed a Sapphire Pulse RX 6600 for 300€ before tax, and I'll hang on to it until the end of next gen, because I don't see them being cheap anytime soon. AMD is deliberately discontinuing older cards and raising MSRPs even more. Those thots are gonna kill PC gaming if they continue with these practices.
I have a 1060 6GB, and that 3060 is looking sweet for $399.
More awesome cards I won’t be able to find in stock or close to MSRP :(