trillykins

Lol, since they had the rig on the table I expected them to do some actual benchmarks, but I suppose they don't have a compatible system yet. Interesting about the higher latency.


[deleted]

[deleted]


SkullBrian

Based on this week's WAN Show, it kinda sounded like Linus might not have as of Friday.


[deleted]

[deleted]


SkullBrian

No, the segment was explicitly about the tardiness of review samples from brands, but what you said is also true. He indicated he cannot say whether or not he even has it, but he said it wouldn't be the first time consumers get their hands on something before LMG does to even START their review process.


SolidoTY

They are under NDA so can't post anything for a few more days.


Nin021

Thought the exact same, but I believe it's because 12th gen isn't released yet. Can't remember the term for what it's called again.


UlrikHD_1

Review embargo?


Nin021

Thanks, that's it! I'm not a native English speaker so I somewhat lost it there :)


Darkomax

Another term is NDA, for non-disclosure agreement (though I don't know if a review embargo and an NDA are the same thing, but it's the same concept).


Mr_Figtree

An NDA is a contract you sign, usually with penalties attached; a review embargo is an informal agreement where the reviewer only stakes their reputation for being able to keep a secret. Reviewers who break embargoes don't get to see things before they launch anymore.


SolidoTY

NDA is the contract they sign and the embargo is part of it.


DeadLikeYou

I was eyeing that noctua GPU the whole time. Kinda stupid, but it’s my favorite design of a gpu so far.


trillykins

Oh, didn't even notice it was the Noctua variant. Curious how much better it is than the regular GPU coolers.


TimeForGG

There are reviews out already.


GarbageFeline

There you go https://www.youtube.com/watch?v=Hpk4UM1VQOY


trillykins

Ah, cool. Continues to surprise me just how massive it is. Might actually be about twice as tall as my Asus 3080 card which is already massive.


DeadLikeYou

Exactly what I want to know as well.


HoneyBadgerSloth1337

Was the same from DDR3 to DDR4


Quigat

Next week: water cooling DDR5


betercallsaul

Are you trying to get a job at LTT? Because that's how you get a job at LTT.


AnimeAlt44

RIP that one RED cam.


sk9592

Did you miss the follow up? It took them a year, but they were eventually successful in water cooling the Red camera. And then converting it back to a stock Red camera as well.


AnimeAlt44

I did catch them finally succeeding with the water cooling project but missed them converting that back to a usable camera.


sk9592

There was never a video dedicated to them converting it back. Linus just mentioned it in passing during a WAN show.


AnimeAlt44

Ah I see. I still enjoy LMG but my days of following every piece of content they release are long gone so I miss these things.


Draakon0

They have an LMG Clips channel if you don't want to watch the full show and instead like to hear snippets here and there on topics you are interested in.


Lower_Fan

I love LTT, but I don't keep up with everything. Too many channels now, and the WAN Show some weeks is very redundant.


warenb

> the wan show some weeks is very redundant

Lately every WAN Show main topic be like "MORE thoughts on...".


[deleted]

[deleted]


Devgel

But I want to water cool my water loop?


Maimakterion

You can with a multi-loop heat exchanger sandwiching a TEC or heat pump.


Ivanovitch_k

now I want to watercool my heatpump.


[deleted]

[deleted]


psychosikh

That's what Microsoft did with their data center in Scotland. They just put it into the sea.


AK-Brian

Just daisy chain each loop's radiator into an infinite series of increasingly large buckets. Easy peasy.


1RedOne

They should combine a water cooler loop with a window AC unit for icy cold temps, if it's possible without condensation damage


RBeck

That's basically how a nuclear power plant works.


CassandraVindicated

That's basically how any steam-driven power plant works.


ZhaitanK

> Next week: ~~water cooling DDR5~~ Connecting the individual DRAM sticks to the water cooled room.


yaosio

Two weeks from now: Full submersion in moving mineral oil.


Rentta

That was already a thing in the early 00's, and so was watercooling PSUs and HDDs.


crawlerz2468

I swear if there's no RGB


kedstar99

It would be cool to know the different types of ECC in detail. He chose the words 'basic ECC'. Why not full, proper, fully fledged ECC, and is there a specific difference between the types of ECC?


wrathek

The ECC DDR5 supports is simply on-stick correction, which is totally invisible to the OS/CPU. “Full ECC” which is used in/important for servers is done at both ends - it does what consumer DDR5 does on stick, and then it also does it at the CPU, so that any errors that may occur in transport are caught and fixed as well.


Slyons89

My basic understanding: Full fledged ECC memory attempts error correction and reports the errors back to the CPU and OS, and those can be logged/reviewed/affected by software. The ‘basic’ ECC functionality attempts to error-correct on the RAM itself and doesn’t report the errors back to the system. This is similar to how GDDR6X operates, it self error corrects but doesn’t report back. You can overclock it really far but eventually performance starts to decline massively, because of all the required error correction, but it still prevents a crash.


phire

GDDR6X actually has the opposite partial ECC to DDR5. GDDR6 can detect errors in data transfers (between the memory die and the GPU's memory controller). It can't correct them, but it can report and retry the transfer. But it can't even detect if the data itself in memory gets corrupted.

DDR5 has on-die ECC. It can detect if there was an error while the data was stored, and even transparently fix it. But when the data is being transferred across the bus to the memory controller, it's not protected anymore.

DDR5 also supports real ECC on top of that, where each memory stick has two extra memory chips and the channels are widened to 40 bits, with 8 extra bits of correction data. The CPU's memory controller can then detect, report and correct any errors.
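To make the correction mechanism concrete, here is a toy Hamming(7,4) sketch in Python. It is much smaller than the real codes (8 check bits per 128 data bits on-die, or the 8 extra bits per 32-bit subchannel described above), but the principle is the same: parity bits computed over overlapping subsets of the data let the reader locate and flip a single corrupted bit.

```python
def hamming74_encode(d1, d2, d3, d4):
    """Pack 4 data bits into a 7-bit codeword with 3 parity bits."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(codeword):
    """Locate and fix a single flipped bit, then return the 4 data bits."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, otherwise the bad position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c[2], c[4], c[5], c[6]

stored = hamming74_encode(1, 0, 1, 1)
stored[4] ^= 1                                      # a bit flips while the data sits in the array
assert hamming74_correct(stored) == (1, 0, 1, 1)    # the "scrub" recovers the original data
```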


crab_quiche

DDR5 and DDR4 have CRC like GDDR; they can detect issues in data transfer. DDR4 only has it during writes, while DDR5 also has it during reads.


VenditatioDelendaEst

So with DDR5, the only window for undetected corruption is when the data is in the DRAM chip's buffer? If so, I am suddenly less annoyed about DDR5 ECC needing 10 chips instead of 9.


crab_quiche

Yes, but as someone who designs DDR, buffers from the dqpads to the arrays and the arrays to the dqpads are the most likely place for things to go wrong, especially when overclocking.


ikea2000

So are we talking about what he refers to as “Basic DDR5” (standard)? While full ECC protects data all the way: transfer, storage and buffers as well?


crab_quiche

By “basic” I believe he means on-die ECC. So when we load into the array, done in 128 bits, we also store 8 more bits for on-die ECC that will be checked and fixed when we read it. I would not consider this protection. This was added so that manufacturers could get more yield: if we have one bit that is bad, we don’t have to use a different redundant row or column, because the ECC will fix it.

I don’t remember the exact numbers, but we are using about 10 fewer total columns in DDR5 with the same process and bit failure rates as DDR4. 10 doesn’t sound like much, but that’s about 1% fewer columns, so 1% less die area, or 1% cheaper per bit, which really adds up when you sell a couple quadrillion bytes per month.

Normal ECC works by adding an extra chip to the rank and sending error-correcting data to it instead of normal data. So once we read everything, we correct it (if necessary) on the memory controller.

CRCs are calculated by the controller based on the data being transferred and get added onto the end of a data transfer, then compared on-chip to what was transferred. If it doesn’t line up, a signal is sent to the controller and the data is resent.

The buffers are not really protected. You can design them to be sort of protected by CRC, but you can still have issues with wrong data being stored into the banks or sent out over the DQs if not designed properly. Because DRAM processes are designed to maximize memory bits/area, the transistors are really weak for general logic and can have some huge variances, plus everything after receiving the data is generally asynchronous, so if everything is not timed perfectly stuff can go wrong.

You don’t have to use CRC, but I believe it is generally used when using ECC: even though there is a small chance that you can have multiple bit flips that will be undetectable, there becomes an exponentially smaller chance that something won’t be detected if it is also protected with CRC.


COMPUTER1313

There was probably a cost-benefit calculation done to determine that the extra binning for DDR5 without any ECC was more expensive than using an extra chip, so that more of the memory dies can be used instead of going into lower speed (and less profitable) sticks or the scrap bin.

For HDDs, about 10% of their capacity is just used for ECC. It might be great to "disable" ECC to get an extra 400GB of capacity out of a 4TB HDD... right up until all of your files get corrupted. https://en.wikipedia.org/wiki/Hard_disk_drive#Error_rates_and_handling

> Modern drives make extensive use of error correction codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity.[69] For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data.[70]

> 2013 specifications for enterprise SAS disk drives state the error rate to be one uncorrected bit read error in every 10^16 bits read,[75][76]

> 2018 specifications for consumer SATA hard drives state the error rate to be one uncorrected bit read error in every 10^14 bits.[77][78]

And it's also likely the same reason why GDDR uses ECC: at a certain speed and capacity, it became cheaper to use extra processing/capacity to make a memory chip run at full speed than to sell it at half speed.
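As a quick sanity check on that 93 GB figure: the per-sector ECC size isn't given in the quote, but if you assume something on the order of ~50 Reed–Solomon bytes per 512-byte sector (an assumption for illustration, not a number from the article), the arithmetic lands in the same ballpark:

```python
# Rough check of the quoted figure. The ~50 ECC bytes per 512-byte sector
# is an assumption for illustration, not a number from the article.
sectors = 1_000_000_000_000 // 512     # ~1.95 billion sectors on a 1 TB drive
ecc_bytes = sectors * 50
print(ecc_bytes / 1e9)                 # ~97.7 GB, same ballpark as the quoted 93 GB
print(100 * 50 / (512 + 50))           # ~8.9% of raw platter capacity spent on ECC
```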


Slyons89

Great explanation, thanks!


0OOO00000OO00O0O0OOO

When I'm on the lookout for real ECC DDR5, what would the labeling be on websites that sell them?

* 512 GB Crosshair DDR5 RAM with ECC and Real ECC?


Nicholas-Steel

Basic ECC can fix errors in the memory banks but not errors for data in the process of being transmitted. Full ECC covers both scenarios. That's my understanding.


f3n2x

"Basic" and "full" is a bit misleading. AFAIK conventional ECC doesn't do any error correction on the module, they just have an additional chip on which the memory controller stores checksums for the rest of the data. This can correct both on-chip as well as transfer errors but only when the CPU actually reads the data. DDR5 ECC is a regular on-chip-sweep silently catching and correcting bit flips as part of the refresh cycle. This doesn't catch transfer errors but it also doesn't cost any bandwidth and doesn't let bit flips accumulate over time to the point where they might become unrecoverable if not read for an extended period of time.


Nicholas-Steel

So... technically I'm right but I've oversimplified it. Thanks for the additional information.


Noreng

This is correct.


Noreng

> This is similar to how GDDR6X operates, it self error corrects but doesn’t report back.

No, GDDR6X doesn't have error correction. Nvidia implemented a method to preserve stability by implementing error detection and retransmitting. If a memory transfer fails on GDDR6X, it's simply rerun. This is different from ECC, which will correct the result on the fly.


VenditatioDelendaEst

I thought that had been around since GDDR5?


Noreng

Not the rerunning solution as far as I know. I suspect GDDR6X is prone to some erroneous data transfers even when running "stock", which could explain why it's implemented.


NoCSForYou

It's a parity bit. It's been around since around the start of digital signal transfer.


VenditatioDelendaEst

The concept of parity bits has. Data-in-flight checksums for video card memory, specifically, [were added in GDDR5](http://www.hwstation.net/img/news/allegati/Qimonda_GDDR5_whitepaper.pdf).

> A new feature of GDDR5 is the capability for detection of transmission errors occurring on the high speed signal lines. As graphics systems store increasingly more code in the DRAM, error detection becomes essential, as random bit fails associated with any high speed data transmission would lead to unacceptable system failures.

> In GDDR5 the transmitted data is secured via CRC (cycle redundancy check) using an algorithm that is well established within high quality communication environments like ATM networks. The algorithm detects all single and double errors with 100% probability. The CRC scheme is implemented on a per byte basis, securing all DQ and DBI# lines. A eight bit checksum is calculated by the DRAM on each data burst (8 DQs + 1 DBI# x burst of 8 = 72 bit) and returned to the controller via dedicated EDC pins. When the DRAM controller detects an error, the command that caused the error can be repeated. Error detection can be used to trigger re-training of the data transmission line which allows the system to dynamically adapt to changing conditions like e.g. temperature and voltage drift.
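For a feel of how that per-burst checksum and retry works, here is a minimal Python sketch. It uses the ATM HEC polynomial x^8 + x^2 + x + 1 only because the whitepaper says "an algorithm well established within ATM networks"; the exact polynomial, init value and bit ordering GDDR5 uses aren't specified here, so treat those details as assumptions.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 (MSB-first, init 0). poly=0x07 is x^8 + x^2 + x + 1."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def send_burst(burst: bytes, flip_a_bit: bool = False) -> bytes:
    """Stand-in for the DQ lines: optionally corrupt one bit in transit."""
    return bytes([burst[0] ^ 0x01]) + burst[1:] if flip_a_bit else burst

# Controller side: checksum the burst before sending, re-checksum what arrived,
# and repeat the command on a mismatch (real hardware compares checksums
# returned over dedicated EDC pins instead).
burst = bytes(range(8))                  # an 8-byte stand-in for a 72-bit burst
expected = crc8(burst)

received = send_burst(burst, flip_a_bit=True)
if crc8(received) != expected:           # error detected...
    received = send_burst(burst)         # ...so the transfer is simply rerun
assert crc8(received) == expected
```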


TiL_sth

The on-die ECC is there because the error rate of DDR5 is too high without it. I don't think we should expect higher reliability with normal DDR5 compared to non-ECC DDR4, for instance.


JerryD2T

https://www.msi.com/blog/all-you-need-to-know-about-ddr5-memory-modules The ODECC section here should clear it up if anyone’s looking for a short answer. It’s error checking but only on an on-chip level. Not during transmission.


Larrythesphericalcow

Which modules you buy is going to be a lot more important now that the VRMs are on the DIMMs themselves. It used to be that the only difference between more and less expensive modules was the heatspreaders/RGB. Now it will actually affect performance.


fistymcbuttpuncher

Market segmentation achieved! -RAM Manufacturers


Larrythesphericalcow

You have to wonder if G.skill, Kingston, Corsair, etc pushed to have this be part of the spec.


thatoneguyyouknow3

Eh, it makes sense from a spec perspective. It's a W all around if you ask me: if you want an ultra high performance kit, the RAM manufacturer can make 100% sure it has the power delivery it needs. And now motherboard manufacturers don't have to add extra power stages for RAM either.


Larrythesphericalcow

I would agree. But as Linus points out, motherboard manufacturers aren't actually going to cut prices. It means you're going to have to spend more on RAM than you otherwise would. I think more enthusiasts are willing to spend extra on a motherboard than extra on RAM. A nicer motherboard potentially gives you better CPU overclocking, networking, audio, USB connectivity, etc. Spending more money on RAM just gets you better RAM overclocks. None of this matters that much. I'm still interested in DDR5. But it is mildly annoying.


PJ796

I mean this is still the better way to do it, as they're reducing the current loop which means less overall inductance in the AC current path (the current that comes from the bulk capacitors), which means better transient performance


Larrythesphericalcow

That's certainly an advantage to doing it this way. I won't dispute that.


Khaare

The main winners of this, and the main reason why it's being done, are servers, where you can now pay-as-you-go on the RAM power delivery instead of always paying for 4TB or whatever worth of RAM power delivery on every motherboard.


Larrythesphericalcow

Good point.


VenditatioDelendaEst

Er, I'm pretty sure the more expensive ones have been binned for performance ever since XMP came out, at least.


Larrythesphericalcow

The DRAM chips themselves sure. But now you're probably going to have to pay extra on top of that to get VRMs that can handle those speeds.


VenditatioDelendaEst

The manufacturers have zero incentive to sell unbalanced configurations. If you make a kit with chips that could do 7200 MT/s with a power supply that's only good for 6400 MT/s, you can't sell it as 7200 XMP, so you have wasted your expensive (because rare) high bin chips.


Larrythesphericalcow

Disagree. They already sell kits that are rated for speeds virtually no one will be able to hit just for marketing.


PlankWithANailIn

90% of them are going to use the same off-the-shelf parts. A 5V to 1.1V linear or buck converter is hardly cutting edge stuff.


Kougar

Memory chips are more noise sensitive than the average circuit, though. We still can't rely on motherboard vendors to implement VRMs that are stable and able to meet base Intel spec without throttling. And apparently we can't rely on GPU vendors to have good soldering, since most still claim Ampere failures are just from soldering problems. We can't even rely on PSU makers not to switch out and downgrade the buck converters and other parts of the PSU to related parts that can't meet their own label spec because of supply disruptions. If it's possible for vendors to find ways to cut a corner, then some companies are going to cut it.


Khaare

If you've followed the latest Buildzoid videos, he's speculating that the Ampere failures are likely down to how Nvidia designed their power delivery. Manufacturing issues could be involved, but the design itself seems to be riding very close to the edge and could leave open opportunities for certain workloads to brick the cards.


Kougar

Aye, again, I said "GPU vendors...still claim"; I don't subscribe to the explanation myself. I could've phrased that reply way better. Buildzoid made a pretty convincing case that the real problem is many Ampere cards simply have a poorly implemented VRM design where most of the assumed safety features are simply not there. Any regulation that adjusts itself retroactively after the VRM was already overdrawn/power spiked is terrible and guarantees all cards will fail eventually once enough damage has been done to the power components.


VenditatioDelendaEst

Suppose you get a kit of memory that can't run a (reasonable) XMP. Are you going to RMA the motherboard, or the RAM? Making the memory vendor responsible for the memory voltage regulator has better incentive alignment than making the motherboard vendor do it.


Kougar

Don't get me wrong, even if I don't see cost savings on the motherboard (and I don't expect that I will) I am still in favor of moving the voltage regulation onto the modules! Just ended a long, drawn-out affair with a dodgy 32GB DDR3 kit from a company I thought was the most reputable manufacturer of the lot, and it's something I'd really not want to ever have to deal with again. If nothing else, moving the power regulation to the module means it's more likely to be the module that's at fault, and I'm fine with that.


Larrythesphericalcow

The parts aren't that expensive but they are still going to charge a premium for nicer ones.


mantrain42

Oh god, I am not looking forward to a second source of VRM hysteria.


Larrythesphericalcow

What was the first?


mantrain42

Motherboard VRM? :)


Larrythesphericalcow

Oh, gotcha. For some reason I read your comment as a second "round" of VRM hysteria and I thought you were talking about a specific product. What you actually said makes more sense.


Aos77s

Yay a video showing it but no benchmarks cause nda :(


Vitosi4ek

I'm all for speed improvements, but the capacity improvements don't sound that useful right now. At the risk of sounding like Bill Gates in the 80s... who needs 128GB of RAM on a regular desktop/laptop? I currently have 32 in my system and that's spectacularly excessive for regular use/gaming, and will become even less important once DirectStorage becomes a thing and the GPU could load assets directly from persistent storage. One use case I can come up with is pre-loading the *entire OS* into RAM on boot, but that's about it.


RonLazer

You're not seeing the whole picture. Part of the reason why such high capacities couldn't be utilized effectively was bandwidth limitations. There's no point designing your code around using so much memory if actually filling it would take longer than just recalculating stuff as and when you need it. DDR5 is set to be a huge leap in bandwidth from DDR4, and so the usable capacity from a developer perspective is going to go up.

To put it in perspective, I use a scientific code which calculates millions of integrals each "cycle". It has multiple settings which allow it to store the integral results on disk and read them back each cycle, or to entirely recalculate them each time. There isn't even an option to store them in memory, because if they could fit in memory then that part of the calculation would be so trivially quick as to be irrelevant, and if there were enough of them to make it faster to cache them then they wouldn't fit in memory. Now the tradeoff might not be required: with 512GB of memory (or more) we can just store every single integral in memory, and then when we need to read them we can pull data from memory faster than we can recalculate.

If you don't care because you're just a gamer, imagine being able to pre-load every single feature of a level, and indeed adjacent levels, and instead of needing to pull them from disk (slow) just fishing them out of RAM. No more loading screens, no more pop-in (provided DirectStorage comes into play as well, of course); everything the game needs and more can be written and read from memory without much overhead.
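The cache-vs-recompute trade-off being described is basically memoization, and it only pays off if the results fit in RAM and a lookup is cheaper than recomputation. A toy Python sketch (not the actual quantum chemistry code; the "integral" here is just a stand-in for something expensive):

```python
import functools
import math

def integral(i: int, j: int) -> float:
    """Stand-in for an expensive per-cycle quantity; the real code computes
    millions of these per cycle, this just burns some CPU."""
    return sum(math.sin(i * k) * math.cos(j * k) for k in range(50_000))

# With enough RAM, caching turns every repeated evaluation into a dict lookup;
# without it, you either recompute or round-trip to disk each cycle.
cached_integral = functools.lru_cache(maxsize=None)(integral)

for cycle in range(3):   # cycle 0 pays the compute cost, later cycles mostly don't
    total = sum(cached_integral(i, j) for i in range(20) for j in range(20))
```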


____candied_yams____

> To put it in perspective, I use a scientific code which calculates millions of integrals each "cycle". It has multiple settings which allow it to store the integral results on disk and read them back each cycle, or to entirely recalculate them each time. There isn't even an option to store them in memory, because if they could fit in memory then that part of the calculation would be so trivially quick as to be irrelevant, and if there were enough of them to make it faster to cache them then they wouldn't fit in memory.

Fun. You doing MCMC simulations? Mind quickly elaborating? I'm no expert, but from playing around with Stan/PyMC3, it's amazing how much RAM the chains can take up.


RonLazer

Nah, Quantum Chemistry stuff.


KaidenUmara

this is code for "he's trying to use quantum computing to make the ultimate boner pill"


Lower_Fan

I'm genuinely surprised that billions are not poured each year into penis enlargement research. Edit: Wording


myfakesecretaccount

Billionaires don’t need to worry about the size of their bird. They can get nearly any woman they want with that kind of money.


Lower_Fan

I mean for profit it would sell like hotcakes


Roger_005

What about the size of their penis?


KaidenUmara

lol I've joked about patenting a workout supplement called "riphung". It would of course have protein, penis enlargement pill powder and boner pill powder inside. If weed gets legalized at the federal level, might even add a small amount of THC in it just for fun lol.


Ballistica

But don't you already have that? We have a relatively small-fry operation in my lab, but we have several machines with 1TB+ RAM already for that exact purpose. Would DDR5 just make it cheaper to build such machines?


RonLazer

Like I explained, it's not just that the capacity exists but whether or not its bandwidth is enough to be useful. High capacity DIMMs at 3200MHz are expensive (like $1000 per DIMM) and still run really slowly. 32GB or 64GB DIMMs tend to be the only option to still get high memory throughput, and on an octa-channel configuration that caps out at 256GB or 512GB. Using a dual socket motherboard that's a 1TB machine, but you're also using two 128-thread CPUs and suddenly it's 4GB of memory per thread, which isn't all that impressive. Of course it depends on your workload; some use large datasets with infrequent access, some use smaller datasets with routine access.
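The per-thread arithmetic, spelled out (one DIMM per channel is an assumption for this sketch):

```python
dimm_gb  = 64          # largest "fast" DIMM size mentioned above
channels = 8           # octa-channel, one DIMM per channel assumed
sockets  = 2
threads  = 2 * 128     # two 128-thread CPUs

total_gb = dimm_gb * channels * sockets   # 1024 GB, i.e. "a 1TB machine"
print(total_gb / threads)                 # 4.0 GB per thread
```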


GreenFigsAndJam

Sounds like something that's not going to occur this generation, when it's going to require $1000 worth of RAM at least for more typical users.


bogglingsnog

It will likely happen [quicker than you think](https://cosmicconnexion.com/pics/ram-prices-over-time-14.png)


arandomguy111

That graph isn't showing what you think it is, due to the scale. If you look at the end of it you can clearly see a significant decline in the downward trend starting in the 2010s. See this analysis by Microsoft, for example, focused more on the post-2010s and why this generation of consoles had a much lower memory jump: https://images.anandtech.com/doci/15994/202008180215571.jpg


bogglingsnog

That just means we're just about primed for a new memory technology :)


RonLazer

Prices will come down pretty quickly, though tbh we already buy $10k Epyc CPUs and socket 2 of them in a board; even if memory were $1000 vs $500 it would be a rounding error for our research budget.


Allhopeforhumanity

Exactly. Even in the HEDT space, maxing out a Threadripper system with 8 DIMMs is a drop in the bucket when your FEA and CFD software licenses are $15k per seat per year.


wankthisway

DDR5 is in its early days. Prices will come down, although with the silicon shortage who knows at this point.


JustifiedParanoia

First or second gen of DDR5 systems (2022 or 2023)? Maybe not. 2024 and beyond? Possibly. DDR3 went from base speeds of 800 to 1333/1600MHz over 2-3 years, and the cost came down pretty fast too. DDR4 did the same over its first 2-3 years with 2133-2666, then up to 3200. And we also expanded from 2-4GB as the general RAM amount to 16-32GB. If DDR5 starts at 4800, by 2024 you could be running 64GB at 6800 or 7200 MT/s, which offers a hell of a lot more options than current, as you could load 30GB of a game at a time if need be, for example...


gumol

> for more typical users

Who's that, exactly?


TheZephyrim

It won’t change anything right away, but once consoles start using this sort of tech then game devs will suddenly start to develop around the sudden lack of limitations. Same with direct storage etc. Like imagine the next Elder Scrolls not having load screens or pop-in. That could be a reality if Bethesda gets early enough access to a dev console that has DDR5 and foregoes releasing on the PS5/Series X. Same with other new games.


gumol

Plenty of people need 128 GB of RAM and more. Computer hardware isn’t just about gamers.


Allhopeforhumanity

DDR5 will be fantastic for a lot of HEDT FEA and CFD tools. I routinely chunk through 200+ GB of memory usage in even somewhat simple subsystems with really optimized meshes once you get multiphysical couplings going. Bring on 128GB per dimm in a threadripper-esque 8-dimm motherboard please.


happyhappypeelpeel

Yep. I've bumped against memory limits many times running multiphysics sims. I should be set for my needs for now since I upgraded to 64GB, but I have pretty basic sims at the moment.


insanityisforthemeek

Those people already have access to platforms which support 128GB of RAM and more; they've had access to these platforms for years now. The question was related to regular "desktop/laptop"s, which is fair because there is very little use for such an amount of memory on mainstream platforms these days. It's been like this for a long time: 8 is borderline OK, 16 is just fine and 32 is overkill for most. If you're really interested in 128GB of RAM and more, you've probably invested in some HEDT platform already.


pixel_of_moral_decay

Relatively speaking... gaming doesn't stress computer hardware terribly much. It's just the most intensive thing people casually do, so it's a benchmark. Same way the Big Mac isn't the worst food you can eat by a huge margin... but it's the benchmark for how food is compared because of its familiarity. Most software engineering folks in any office push their hardware way harder than most gamers ever can. But compiling on multiple cores, for example, isn't as relatable as framerates in games from a PR perspective.


KlapauciusNuts

Compiling isn't actually that stressful to hardware. In the sense that while it is a highly parallel task (depending on the code flow), it offers little opportunity for instruction level parallelism and certainly makes no use of SIMD, so while it busies a core, it only uses a fraction of its logic and does not consume that much power, compared to, for example, rendering or transcoding video.


useless_it

> Compiling isn't actually that stressful to hardware.

It actually is, for RAM at least. Compiling a huge project in a *tmpfs* is the only reliable way I could detect some faulty memory sticks (not even a weekend-long memtest could trigger those issues).


KlapauciusNuts

That's true. Ordinarily not that much, but if you are using tmpfs you should be maxing the controller. But consider the following: the RAM might have been perfectly fine, and the fault might have been in software. Linux does not like it a lot when tmpfs uses more than 25% of memory.


useless_it

> Linux does not like a lot when tmpfs uses more than 25% of memory

Care to cite any source on that? I regularly use 90% of the memory for *tmpfs*.


KlapauciusNuts

Gentoo wiki. Old article. Probably not online anymore or relevant nowadays.


Seanspeed

> Relatively speaking... gaming doesn't stress computer hardware terribly much.

For CPUs or memory, no. For GPUs, yes.


pixel_of_moral_decay

Even GPU’s… machine learning for example are way more taxing.


limitless350

I’m hoping that with the extra space available, things will be made to use it more than before. We were under some restrictions before about how much RAM was readily available. I remember floods of comments about how much of a pig Google Chrome is for RAM, but now, who cares. Take more, work faster and better; a massive abundance of RAM will be open for use. Maybe games can load nearly every region into RAM and loading zones will not exist at all. For now they’re probably gonna be gobbled up for server use, but once games and PCs start using more RAM there should be advantages to it.


JamClam225

> I'm all for speed improvements, but the capacity improvements don't sound that useful right now.

For servers they can be. Getting double the capacity without the need for one additional server will save a lot of money if you're doing RAM-heavy workloads. On the flip side, having a tiny SFF PC with 256GB of RAM is nuts.


MasterShiftposter

> At the risk of sounding like Bill Gates in the 80s

He never said the "640k..." thing.


Devgel

> who needs 128GB of RAM on a regular desktop/laptop?

You never know, mate! Back in the 90s people were debating 8 vs 16 'megs' of RAM, as you can see in this Computer Chronicles episode of 1993 [here](https://www.youtube.com/watch?v=2EBaj3kJNGI&lc=Ugw-DDzuZ96GAkro1yp4AaABAg). Nowadays we are still debating 8 vs 16, although instead of megs we are talking about gigs! I mean, who would've thought?! Maybe in 30 years our successors will be debating 8 vs 16 "terabytes" of memory, although right now it sounds absolutely absurd, no doubt!


Geistbar

First PC I built had 512MB of RAM. It's entirely believable that we'll see consumer CPUs with that much cache within a decade. It's easy for people to miss, but we consistently see arguments for why the computing resources of today are "good enough" and no one will ever need more. Whether it's resolution, refresh rates, CPU cores, CPU performance, RAM, storage space, storage speed... Software finds a way to use it. Or our perception of "good enough" changes as we experience something better. As you say, give it 10 years and people will scoff at 32GB of RAM as wholly insufficient.


Xanthyria

Within a decade? In a couple months we’ll already be at like 256! The claim isn’t wrong, but it might be half that time :D


Geistbar

I like to play it safe. We don't know the future of AMD's V-Cache. It could be that within a generation or two AMD will conclude it isn't a good idea from an economical standpoint, at which point we'll be back to "traditional" cache scaling. Or they could double down on it and we'll be there in 3 years. The future is often unpredictable.


FlipskiZ

I highly doubt AMD won't continue with the cache. Memory this close to the CPU is incredibly useful, and seems to be a low hanging fruit for 3D chips. A big problem with CPUs is not being able to feed it data fast enough for it to process, which stuff like cache partially solves.


Geistbar

That's my assumption as well. But as I said in the first sentence: I like to play it safe.


AnimeAlt44

There is one thing that is different between now and then, though, which is the state of years-old hardware. In the past, while people were debating the longevity of high-end hardware, couple-year-old hardware was already facing obsolescence. Now though, several-year-old high-end or even midrange hardware is still chugging along quite happily.


happyhappypeelpeel

I had an i7-2700k that lasted 11 years @ 5.2GHz. Still kicking, now it's the dedicated lab PC.


Aggrokid

Except iOS devices for some reason, which can still get by swimmingly with 3GB RAM.


xxfay6

In 2003, 16MB would've been completely miserable and the standard was somewhere around 256MB I presume (can't find hard info). But 10 years ago was 2011, where 4GB was *enough* but 8GB was plenty and enough for almost anything. Nowadays... 8GB is still good enough for the vast majority of users. Yes, my dual-core laptop is using 7.4GB (out of 16GB) and all I have open is 10 tabs in Firefox, but I remember my experience on 8GB was still just fine.


HolyAndOblivious

I dunno what eats so much RAM.


KlapauciusNuts

RAM is extremely useful because we can always find new uses for it. There are all sorts of files, databases and transient objects that can be left in memory to access them very quickly, improving efficiency. But you are right, I don't think we will see many people go above 32GB; most will stick with 16 if not 8. (I'm not talking gaming here). But, anyway, this is a huge boon to anyone using the Adobe suite, and software like AutoCAD. I am, however, quite excited at the idea of replacing my homelab "servers" with a single computer with DDR5 and 128GB. Maybe 192. Plus Meteor Lake and Zen 4D / Zen 5 both look like they may offer some exciting stuff for my particular use case. But that is going to have to wait at least until mid 2024.


SirActionhaHAA

> At the risk of sounding like Bill Gates in the 80s... But there wasn't any recorded proof that he said it and he denied it many times, calling it a stupid uncited quote


vriemeister

Here's the actual quote (I hope):

> I have to say that in 1981, making those decisions, I felt like I was providing enough freedom for 10 years. That is, a move from 64k to 640k felt like something that would last a great deal of time. Well, it didn’t – it took about only 6 years before people started to see that as a real problem.
>
> -- Bill Gates


Seanspeed

It might surprise you to learn that you can do things with your PC other than game. Also DirectStorage has almost nothing to do with system memory demands, and is entirely about VRAM. It will also not be loading directly from storage, it still has to be copied through system RAM.


LeeroyGarcia

They did say "regular desktop/laptop" tho


Seanspeed

Still applies. The vast majority of work computers are 'normal' PC's, for instance.


LeeroyGarcia

Fair enough!


mik3w

With 128GB of RAM you could fit the OS and entire 'smaller' games in there, so there should be fewer reads from the hard drive. (Since some games are over 100GB, especially with 4K texture packs and such.) It's great news for the server/cloud world and creators/developers that need more RAM. When 32GB, 64GB and higher become the norm, OS and app developers will find ways to utilise it.


HolyAndOblivious

OSes used to be 128MB and completely functional. I want that back. Specifically the being-functional part.


mckirkus

DirectStorage moves data from SSD->DRAM->VRAM. If you have a metric ass-ton of DRAM, you wouldn't need to use the disk except at load time. You could have an old-school spinning platter HDD and it would take a while to load, but then it would only get used for game saves. Now that's not how it actually works, which is why an SSD is required, but I suspect game devs could, if enough DRAM is detected, just dump all assets to DRAM on game load. Given game sizes these days I suspect you'd need 128GB+ of DRAM to pull it off consistently.
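Something like that "dump everything to DRAM if there's room" idea could look roughly like the sketch below. This is an illustration only, with a hypothetical asset folder, and it uses psutil just to query free memory; it is not anything DirectStorage actually exposes.

```python
import os
from typing import Optional

import psutil  # third-party; used here only to query available RAM

ASSET_DIR = "assets/"  # hypothetical game asset folder (assumed flat folder of files)

def preload_assets() -> Optional[dict]:
    """Read every asset into RAM up front if there's comfortable headroom,
    otherwise return None and fall back to streaming from disk."""
    paths = [os.path.join(ASSET_DIR, name) for name in os.listdir(ASSET_DIR)]
    total_size = sum(os.path.getsize(p) for p in paths)
    if psutil.virtual_memory().available < total_size * 2:  # keep 2x headroom
        return None
    return {p: open(p, "rb").read() for p in paths}
```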


jesta030

My home server installs the OS (a Linux distro) straight to RAM on every boot. Then it runs Windows 10 and another Linux distro as virtual machines with 16 and 4 gigs of allocated RAM respectively, and a bunch of Docker containers as well. 32 gigs is still plenty.


BFBooger

> docker

LOL, and here I am with a docker container that needs 40GB.


infernum___

Freelance Houdini artists will LOVE it.


happyhappypeelpeel

For the foreseeable future I imagine only professional customers. Complex engineering simulations can certainly eat up huge amounts of RAM, usually after running for 4 hours before crashing with an "Out of Memory" error. I imagine rendering 3D or complex video effects can also use a substantial amount of memory but I have no real insight in that industry. I suppose you can also run large, superfast RAM disks without spending a million dollars, so there's that! NVMe has certainly closed the performance gap between RAM and hard drives in terms of raw data transfer speeds, but random I/O is off the charts.


yuhong

AFAIK the launch doesn't even include any capacity improvements; those will come later.


Golden_Lilac

Windows will cache/page everything into memory if it's available. That alone drastically speeds up your computer. Basically it's storing everything in memory (freed up as needed), so if you close something and open it again it will be significantly faster. Things kept in memory won't have to be dropped as much either. To a point it's overkill, but I can confirm that Windows will use all of 32 gigs for it. So going higher stands to benefit the overall "feel" and responsiveness.


00Koch00

I'm running short at 16 gigs. 16 gigs was absolute overkill when I bought it 5 years ago...


1leggeddog

So in a nutshell:

* Double the bandwidth
* Double the price
* Motherboards still just as expensive, if not more so. Because FU.


AnimeAlt44

I mean really it’s “because new tech” like it has always been during every new generation but I guess the persecution complex works too.


mycall

The irony is that my first IBM PC with CGA was $3500. Tech is cheaper at some societal level.


Larrythesphericalcow

Oppression is when I have to get a job to buy a 3090. /s


100GbE

Yeah sucks, it should be:

- Faster in every way.
- Cheaper, at least half price or lower.
- Able to wash your car.
- Start at 128GB module size, up to 1TB each.


Zerasad

You joke, but faster at the same price used to be the norm before.


Snoo93079

RDRAM would like a word


100GbE

https://imgur.com/a/zLFFJfr


g3t0nmyl3v3l

-It pays you to use it


returnsfourohfour

You forgot one: slower than DDR4 at the same speeds.


bossman118242

So should I stop upgrading my AM4 system? I want to upgrade to a 5950X and will be on AM4 for 10 years, probably.


trillykins

Depends on what you're planning on using it for. The difference between DDR3 and DDR4 for gaming was minimal, and I think the difference in transfer speeds was similar. Of course it might be too early to say for sure.


RplusW

Wait for the V-Cache refresh on AM4 in 2022.


winzarten

I was on DDR3L until this summer, still perfectly fine, and I was gaming at 1440p medium-high settings in most games. I switched because the MB died. If something would make you move from AM4, it wouldn't be the memory. Also keep in mind that even today we're not going for top DDR4 performance, and most builds use 3200-3600MHz RAM sticks, not 4000+... because the price difference is not worth it in most applications.


bossman118242

Thanks for the reply. I'm sticking with my current system then and getting the 5950X like I've been wanting to. I mainly upgrade because I like tech and I like the bleeding edge sometimes, so I can stay where I'm at for a while.


iliasdjidane

I think the 5950X is pretty futureproof for the next 5 years for gaming and general productivity, but it would depend on what you want to use it for. I'm on AM4 with a 5800X as well; I work in CAD, graphic design and rendering software, and I honestly feel my rig is overkill for now.


greggm2000

I doubt you will be. CPUs are going to change a lot faster than you might expect, now that Intel is properly competing. Ten years from now, an AM4 gaming system will be used for retro computing, nothing more. (ok, ok, hyperbole there, but it’s still mostly true)


DependentAd235

As far as games go, the current console generation will be a buffer for gaming requirements. He’s got at least 5 years if not more before something new appears.


greggm2000

That’s a good point. He might not get all the “visual bells and whistles” that PC games often have over their console equivalents, but they’ll still run well… except maybe for some PC-only games. 10 years though, that’s really stretching it. 5 I can agree on. 10, with what’s coming? No way, not even close.


Serenikill

Should be one last AM4 CPU next year with more cache.


azardak

I'm moving away from my 5950X because I've had nothing but issues with the platform. From not detecting second NVMe drives that work perfectly on a Z590 board, to USB dropping randomly, etc.


Disturbed2468

Your motherboard is most likely defective. Current speculation is that there's an issue with the motherboard chipset itself, but no guarantees.


azardak

I’ve used 3 different motherboards across Asus/MSI/Gigabyte.


Reallycute-Dragon

Mobo is probably the issue; what model is it? It took Gigabyte multiple BIOS updates to get my X570 board to a good state. I had constant issues with fans randomly stopping beforehand. Real fun when all your fans and pumps stop. It's mostly working now. Just make sure you are on the latest BIOS and all that fun stuff.


BRlEN

"3080ti for 1200 bucks is a good deal"


RplusW

I mean it actually is though


BRlEN

Yeah, a few percent more performance for 70% more cost, yep!


TuristGuy

Still a good deal, since you can't find anything else that performs the same.


BRlEN

lol, almost paying double = good deal...


rootbeerfetish

Considering it'll be a couple of years before DDR5 is worth it, I think I made the right call getting a brand new DDR4 system a year ago with 32GB of 3600MHz CAS 16 stuff. I've been very pleased.


dan1991Ro

If it's 50 percent more expensive, then what's the damned point?


fishymamba

Prices will go down over time. DDR4 was much more expensive than DDR3 on release. I think I got 16GB DDR3 for less than $100 back in 2012. 16GB DDR4 kits on release were over $200.


kony412

Nothing, keep using DDR 1


kirmm3la

Does anyone know if this means RAM previews in Adobe AE would be twice as fast with DDR5?


BaconMirage

I like that they're potentially faster, in more ways than just MHz, but I really doubt that it'll make any sort of difference for my use cases. What are some cases where these improvements in DDR5 might excel, more so than just... loading up a game a fraction of a second faster or something?


justarandomuser10

Servers, 4-8K editing, code builds.


Aggrokid

I have a noob question. Does the PMIC on the module make achievable memory speeds more independent of motherboard quality and generation? This is for future memory upgradability. As-is, motherboards have different top supported memory speeds and QVLs.


GreysideBoss94

Thought you meant Dance Dance Revolution 5. No idea why. That's cool though. Good times.


soda-pop-lover

I am looking for SODIMM DDR4 3200MHz dual-rank x8 CL20 memory specifically, and it costs like $250 in my country for a 2x16 kit. I just want to get it below $150 :( Hope DDR4 prices decrease drastically this year.