What to do about the current Nvidia issues?

If anyone has been following the news (or watching GPU prices), Nvidia GPUs are in a bad spot right now. I'm not sure how much of PI's performance relies on CUDA cores, but as far as affordability is concerned, I'm a little worried. I'm still running an RTX 2080 Super in my astro PC, and at current Nvidia prices, upgrading to an RTX 30 or 40 series card doesn't seem like a sound move.

Has anyone done any testing in PI to see how well AMD GPUs perform versus Nvidia GPUs of the same generation? And has anyone tested whether there is a significant performance gain going from ~3K CUDA cores to 8-9K CUDA cores?

I like the idea of faster processing (who doesn't?), but if the performance increase going from a 2080 to a 3080 or 4080 is negligible even though the CUDA core count roughly triples, then I may have to jump ship and consider the latest AMD GPUs. There is no dire need to upgrade my GPU at the moment, but if something happens (Murphy's Law) and I suddenly need one, I want to be sure I'm not buying a card whose performance in PI doesn't justify its price.
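For reference, this is the kind of crude timing test I have in mind (just a big TensorFlow matrix multiply, a rough proxy only, not anything PI itself does):

    # crude, hypothetical benchmark sketch: times a large matmul on
    # whatever device TensorFlow picks (a CUDA GPU if one is visible,
    # otherwise the CPU). PI's own processes may scale very differently.
    import time
    import tensorflow as tf

    x = tf.random.normal((4096, 4096))
    tf.linalg.matmul(x, x)  # warm-up run so one-time initialization isn't timed

    t0 = time.perf_counter()
    for _ in range(10):
        y = tf.linalg.matmul(x, x)
    _ = y.numpy()  # forces any async GPU work to finish before stopping the clock
    print(f"10 matmuls: {time.perf_counter() - t0:.2f} s")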

-Crimsus
 
there is very little to no dependency on CUDA right now.

the main use of CUDA is if you manage to get a CUDA-enabled tensorflow going for StarNet / StarNet2.
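a quick sanity check, if you want one (a rough sketch; assumes you run it in the same python / tensorflow environment that the StarNet process is pointed at):

    # lists the physical GPUs this tensorflow build can actually use;
    # an empty list means StarNet falls back to the CPU
    import tensorflow as tf

    print(tf.__version__)
    print(tf.test.is_built_with_cuda())           # was this build compiled against CUDA?
    print(tf.config.list_physical_devices('GPU')) # can it see the card right now?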

rob
 
... and (if I understand it correctly), there would be no benefit to installing an AMD GPU as far as execution times in PixInsight are concerned.

Bernd
 
rob said: "there is very little to no dependency on CUDA right now. the main use of CUDA is if you manage to get a CUDA-enabled tensorflow going for StarNet / StarNet2."

Thank you for the info. Of course I wouldn't be very diligent if I did not ask for clarification. You said, "...right now". Are there plans to do more processing using CUDA, or is the team sticking with what is tried and true for the core of PI?

-Crimsus
 
@Crismus GPU acceleration has been a WIP for a long time. most of us on PTeam are not necessarily developers, so i don't have any insight into it beyond what has been posted here. i think the problem is that there's no good cross-platform solution for GPU compute acceleration, so it's hard to come up with a single codebase for it. IIRC someone was working on it, but the project has a lower priority now, with the WBPP changes taking precedence.
 
@pfile thank you for the insight. Part of my background is programming, and I remember what dealing with the CUDA SDK on Windows is like. Nvidia doesn't play well with Linux (a historical feud that continues today), so I can only imagine the monumental task facing anyone trying to build a reliable cross-platform GPU solution that behaves well on both Linux and Windows.
 
whoops, sorry i misspelled your handle @Crimsus.

there's also OSX; nvidia and apple have been enemies for a long time now, and apple has gone their own way with MPS (Metal Performance Shaders). so the whole space is very fragmented.
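just to illustrate the fragmentation: even in python-land, every cross-platform app ends up hand-rolling its own backend picker, something like this (an illustrative pytorch sketch, not anything PI actually does):

    # illustrative sketch only, not PixInsight code: pick whichever
    # GPU backend this machine happens to support, else fall back to CPU
    import torch

    if torch.cuda.is_available():             # nvidia CUDA
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():   # apple Metal Performance Shaders
        device = torch.device("mps")
    else:
        device = torch.device("cpu")          # no GPU backend at all
    print(device)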
 
@pfile no worries (my real last name also gets butchered all the time). I was aware of the OSX version, but at this point I'm not really worried. Apple fell off the tree and rotted a long time ago. :ROFLMAO:

I'm satisfied with the responses and support you've provided. I think we can call this one 'done' and call it a day. THANK YOU!
 