AGEIA PhysX

The magic smoke.

Moderators: phlip, Moderators General, Prelates

Amnesiasoft
Posts: 2573
Joined: Tue May 15, 2007 4:28 am UTC
Location: Colorado

AGEIA PhysX

Postby Amnesiasoft » Sat Nov 24, 2007 8:19 am UTC

Because there isn't a thread about it.

What are people's thoughts about it?

My personal feeling is that physics is overrated as it stands. Nothing I've seen done in practice with the PhysX card has been all that spectacular. Cell Factor is a fun distraction, but runs perfectly fine on a quad-core machine. Unreal Tournament 3 has some maps by AGEIA that are intended for PhysX cards only, and they run at a whopping 5 FPS on my computer. The sad part is that they actually look less spectacular than the physics in Crysis or Stranglehold.

The only way I see physics processors taking off is if some other company begins producing ones that are compatible with AGEIA's; there needs to be some competition to drive prices down. And they have to be interchangeable products; nobody would want to buy TWO physics cards just to play two different games.

As it stands, I see nothing spectacular that it can do that a second GPU is not capable of, but GPU physics is probably going to fail too.

Angstrom
Posts: 140
Joined: Sat Nov 24, 2007 9:14 pm UTC
Location: Texas

Re: AGEIA PhysX

Postby Angstrom » Sat Nov 24, 2007 9:39 pm UTC

It's a great concept to offload physics calculations to a separate device, but from what I've seen, they have failed to implement it correctly.

IMHO it would be better to develop the use of other CPU cores.

SeekerDarksteel
Posts: 2
Joined: Tue Nov 27, 2007 2:04 am UTC

Re: AGEIA PhysX

Postby SeekerDarksteel » Tue Nov 27, 2007 2:13 am UTC

The problem with running physics on extra CPU cores rather than on a separate co-processor is that specialized cores are much better suited to the highly data-parallel floating-point operations that are common in physics processing. And there doesn't seem to be any trend in the market towards heterogeneous multi-core processors outside of Cell, so you'd be running the physics thread on a standard core. The amount of benefit you'd gain from simply concentrating physics processing into a thread running on a standard core isn't going to touch the benefit you could get from a core with vector processing capabilities.

Axman
Posts: 2124
Joined: Mon Sep 10, 2007 6:51 pm UTC
Location: Denver, Colorado

Re: AGEIA PhysX

Postby Axman » Tue Nov 27, 2007 3:08 am UTC

I'd say that the problem is that the hardware developers have chosen sides: http://www.custompc.co.uk/news/601680/a ... page1.html

I mean, if there wasn't enough friction between the two physics groups already...

Furthermore, the games that would benefit most from physics modeling, specifically the action and FPS genres, are almost designed from the ground up to be incompatible with real-time physics, because they're best played multiplayer, which means the physics needs to happen on the server* and get pushed out to the players. Single-player action games are rare to the point of extinction, and putting that kind of hardware overhead as a requirement on a phantom genre is the final nail in the coffin for physics acceleration.

This is what people call the death of innovation. Those people are wankers, though, because software physics, run on GPU cores**, will take this unspent success and roll it into the mainstream right under the pundits' noses.

*The first major game engine to require this stuff gets a donut.
**Props to ATI/AMD and MS DX10.1 for expanding on this. NVIDIA might have CUDA, but that's riding coattails. This would require that AMD put more Linux-directed pressure on their ATI staff, which is something they've been good at lately.

Amnesiasoft
Posts: 2573
Joined: Tue May 15, 2007 4:28 am UTC
Location: Colorado

Re: AGEIA PhysX

Postby Amnesiasoft » Tue Nov 27, 2007 8:02 pm UTC

SeekerDarksteel wrote:so you'd be running the physics thread on a standard core. The amount of benefit you'd gain from simply concentrating physics processing into a thread running on a standard core isn't going to touch the benefit you could get from a core with vector processing capabilities.

I didn't say run physics in a single thread on a single core. The key here is multithreading your physics. If it can be done in parallel on a separate card, then it can be done in parallel on a regular processor, or even better, your GPU, which people already have. While, yes, a piece of specialized hardware is almost infinitely superior at doing this, it's expensive and unnecessary.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI

Re: AGEIA PhysX

Postby EvanED » Tue Nov 27, 2007 9:54 pm UTC

Amnesiasoft wrote:I didn't say run physics in a single thread on a single core. The key here is multithreading your physics. If it can be done in parallel on a separate card, then it can be done in parallel on a regular processor, or even better, your GPU, which people already have. While, yes, a piece of specialized hardware is almost infinitely superior at doing this, it's expensive and unnecessary.

But your GPU is busy doing other stuff. ;-)

It's not a *completely* boneheaded idea. My impression, too, is that the dedicated physics engines are more flexible than trying to do GPGPU programming, though I'm not sure I believe this with the latest generation of GPUs.

tinyterror
The scariest piece of toast you ever did see
Posts: 78
Joined: Thu Nov 29, 2007 2:58 pm UTC

Re: AGEIA PhysX

Postby tinyterror » Thu Nov 29, 2007 3:37 pm UTC

PPUs are interesting pieces of hardware, to be sure. That said, I don't really see them ever being anything other than a niche product.

It is true that specialized physics hardware is vastly superior to general-purpose CPUs for running physics code. The problem is that if a developer has to choose between writing physics code that runs on multi-core machines or on a specialized card with extremely low market penetration, I think we know how that choice is going to go. AGEIA's hardware is going to be relegated to servicing bundled games (see Matrox and early 3dfx) and gimmicky tie-ins until either one of the big chip makers adds core extensions that make the extra hardware irrelevant, or computing power reaches the point where general-purpose hardware can do just as well.

As for physics hardware not being effective in FPS games because of player/net syncing issues, this is not really that big an issue. There are plenty of situations where eye-candy physics effects that have no impact on gameplay could be handled client-side. Obviously, anything that is going to affect player performance will have to be at least verified by the server to prevent weird syncing issues, but that's really not that much. I think you would be really surprised (and amused) at how much reliance is put on client-side physics computation in heavily multiplayer games. As long as my special physics hardware doesn't make me gain or lose an advantage over people who don't have it, who cares if my explosions are chunkier or the boobs bounce more realistically?

Larson
Posts: 335
Joined: Thu Aug 16, 2007 2:26 am UTC
Location: The Nerd Cave

Re: AGEIA PhysX

Postby Larson » Thu Nov 29, 2007 6:05 pm UTC

I look forward to adding an Ageia PhysX card to my collection of interesting hardware, next to a Killer NIC. They are both interesting ideas, but.....

tinyterror
The scariest piece of toast you ever did see
Posts: 78
Joined: Thu Nov 29, 2007 2:58 pm UTC

Re: AGEIA PhysX

Postby tinyterror » Thu Nov 29, 2007 6:32 pm UTC

Killer NICs are also pretty interesting. Are they worth it for games? Probably not.

Really, the only difference between a Killer NIC and any normal NIC is the TCP offload engine built onto it and a bit of QoS stuff. Normally the network card doesn't give a damn what layer 3 or 4 protocol is coming in over the line; all of the work required for maintaining the IP stack is handled by the OS and the CPU it runs on. Killer NICs have what they call an "NPU" running on them, which is essentially a processor that maintains its own IP stack. In theory, the TCP offload engine will reduce CPU load and decrease the time it takes for incoming data to be available to the application.

This all sounds fine until you really think about it. If you are a hardcore gamer who is into squeezing every last ms out of his ping, why would you be running anything network-related besides your game and maybe Ventrilo or something? The network traffic for most games is small enough to fit into a single TCP segment (one ~1500-byte Ethernet frame in most cases) or to only require a few packets. That is not a lot of packet reassembly going on. TCP offload engines are much better suited to really high-performance GigE and 10GigE NICs, where the traffic reassembly would seriously bog down the CPU. Any gamer who is pushing a solid gigabit of traffic while gaming has the kind of problems a Killer NIC will not fix.

What really gets me is that the Killer NIC is a PCI card. Running a PCI NIC with a ToE is like putting spinners on a minivan. It is usually done by idiots who don't know any better.

The NPU can also do some QoS traffic shaping, but such things can just as easily be handled by a halfway decent router to far greater effect, or even by client-side software.

So yeah, the Killer NIC is an interesting bit of hardware, but it probably won't yield any significant speed increase in games. You would be much better off investing in a nice PCI-X or PCIe NIC with better OS support and less marketing foolishness.

SeekerDarksteel
Posts: 2
Joined: Tue Nov 27, 2007 2:04 am UTC

Re: AGEIA PhysX

Postby SeekerDarksteel » Thu Nov 29, 2007 8:27 pm UTC

Amnesiasoft wrote:
SeekerDarksteel wrote:so you'd be running the physics thread on a standard core. The amount of benefit you'd gain from simply concentrating physics processing into a thread running on a standard core isn't going to touch the benefit you could get from a core with vector processing capabilities.

I didn't say run physics in a single thread on a single core. The key here is multithreading your physics. If it can be done in parallel on a separate card, then it can be done in parallel on a regular processor, or even better, your GPU, which people already have. While, yes, a piece of specialized hardware is almost infinitely superior at doing this, it's expensive and unnecessary.


Ah, that's what you're misunderstanding. If we could just snap our fingers and multithread stuff, we wouldn't have any problems at all. The fundamental problem is that it's extremely difficult to multithread particular applications, especially in a scalable manner, using the thread primitives available on general-purpose processors.

That's the very reason vector and stream processors exist: it's much easier to exploit the parallelism available in certain types of applications by using single-instruction/multiple-data (SIMD) instructions than to actually multithread the application. While pretty much any parallelism can be reduced to thread-level parallelism, it is often unreasonable or infeasible to do so in an efficient, scalable manner.

Amnesiasoft
Posts: 2573
Joined: Tue May 15, 2007 4:28 am UTC
Location: Colorado

Re: AGEIA PhysX

Postby Amnesiasoft » Thu Nov 29, 2007 8:44 pm UTC

SeekerDarksteel wrote:Ah, that's what you're misunderstanding. If we could just snap our fingers and multithread stuff, we wouldn't have any problems at all. The fundamental problem is that it's extremely difficult to multithread particular applications, especially in a scalable manner, using the thread primitives available on general-purpose processors.

That's the very reason vector and stream processors exist: it's much easier to exploit the parallelism available in certain types of applications by using single-instruction/multiple-data (SIMD) instructions than to actually multithread the application. While pretty much any parallelism can be reduced to thread-level parallelism, it is often unreasonable or infeasible to do so in an efficient, scalable manner.

Maybe it's just me, but is OpenMP really that difficult to use? No, it's not.

EvanED
Posts: 4331
Joined: Mon Aug 07, 2006 6:28 am UTC
Location: Madison, WI

Re: AGEIA PhysX

Postby EvanED » Fri Nov 30, 2007 12:35 am UTC

Amnesiasoft wrote:Maybe it's just me, but is OpenMP really that difficult to use? No, it's not.

But OpenMP will never be the best solution for problems that map nicely to GPU/PPU-style, massively parallel vector engines. For the relatively near future (at least several years), you're not going to be able to do as well on a CPU as you can on a GPU. The CPU and GPU have far different and largely contradictory design goals, which means there is a significant class of applications that will perform much better on the GPU. OpenMP isn't a suitable API for that sort of programming in the least (at least not when it's done on the GPU; if you wanted a program that does the same thing and runs on the CPU, OpenMP may be a fine choice).

Anpheus
I can't get any worse, can I?
Posts: 860
Joined: Fri Nov 16, 2007 10:38 pm UTC
Location: A privileged frame of reference.

Re: AGEIA PhysX

Postby Anpheus » Sat Dec 01, 2007 4:14 am UTC

This is all irrelevant nowadays. The operations a modern stream-processor-based graphics card does very, very fast are the same operations you would perform to model physics. The Folding@Home project realized this and put ATI cards to use a while ago; I think a specific driver let the Radeon X1K line of cards run Folding@Home on the GPU, producing dozens to hundreds of times the output of a CPU alone. Those cards still lacked the sort of stream-processor mentality that would have sped up the process even more.
Spoiler:

Code: Select all

  /###\_________/###\
  |#################|
  \#################/
   |##┌         ┐##|
   |##  (¯`v´¯)  ##|
   |##  `\ ♥ /´  ##|
   |##   `\¸/´   ##|
   |##└         ┘##|
  /#################\
  |#################|
  \###/¯¯¯¯¯¯¯¯¯\###/

'; DROP DATABASE;--
Posts: 3284
Joined: Thu Nov 22, 2007 9:38 am UTC
Location: Midwest Alberta, where it's STILL snowy

Re: AGEIA PhysX

Postby '; DROP DATABASE;-- » Mon Dec 03, 2007 4:35 am UTC

I wonder if future graphics cards won't simply include a physics processor on board, or have a system in place specifically to do physics on the GPU.
poxic wrote:You suck. And simultaneously rock. I think you've invented a new state of being.

Anpheus
I can't get any worse, can I?
Posts: 860
Joined: Fri Nov 16, 2007 10:38 pm UTC
Location: A privileged frame of reference.

Re: AGEIA PhysX

Postby Anpheus » Mon Dec 03, 2007 6:55 am UTC

That's the idea with future DirectX implementations.

Sc4Freak
Posts: 673
Joined: Thu Jul 12, 2007 4:50 am UTC
Location: Redmond, Washington

Re: AGEIA PhysX

Postby Sc4Freak » Tue Dec 04, 2007 2:46 am UTC

'; DROP DATABASE;-- wrote:I wonder if future graphic cards won't simply include a physics processor on-board, or have a system in place specifically to do physics on the GPU.

See:
Close to Metal
CUDA
HavokFX
RapidMind
Intel's Larrabee

Amnesiasoft
Posts: 2573
Joined: Tue May 15, 2007 4:28 am UTC
Location: Colorado

Re: AGEIA PhysX

Postby Amnesiasoft » Tue Dec 04, 2007 7:01 am UTC

CUDA is general GPGPU more than physics specifically. I think you're looking for Quantum Effects, which is NVIDIA's physics implementation on the GPU. Though that's not to say you can't do physics with CUDA.

Axman
Posts: 2124
Joined: Mon Sep 10, 2007 6:51 pm UTC
Location: Denver, Colorado

Re: AGEIA PhysX

Postby Axman » Tue Dec 04, 2007 8:04 pm UTC

They're both essentially the same thing, from the hardware's design perspective. QE is just an existing kit developed by NVIDIA for physics; CUDA is a bunch of, what, C libraries developed by NVIDIA that you can put together like Lego for GPGPU.

CTM is all open-source, and a lot more flexible, at least according to the F@H team.

Midnight
Posts: 2170
Joined: Mon Dec 10, 2007 3:53 am UTC
Location: Twixt hither and thither. Ergo, Jupiter.

Re: AGEIA PhysX

Postby Midnight » Tue Dec 18, 2007 3:34 am UTC

I feel there's too little support with not enough gain. I felt just about the same way in regard to physics: neat and handy, but not something you should specifically offload onto something else...
Then I played around with The Orange Box, messing with the gravity gun and playing Portal... and I thought, hey, physics is really neat. There's a lot of potential here.
Splice that with playing Crysis and cranking the physics settings from low to high, and I go, "Whoa. That's pretty bloody amazing."
And, factoring in some slowdowns I get while big physicsy explosions are happening in the aforementioned computer-raper, I think the physics card MIGHT work. Might.
If it added incredibly detailed smoke, dents from shrapnel, and realistic chunks out of wood and such, without slowing down the computer one bit, I think I'd go for it. But that would have to be on most games, 'cause I buy my games regardless of AGEIA support, and I still would if I had the card, so it'd have to keep up.
uhhhh fuck.

Tei
Posts: 63
Joined: Fri Nov 30, 2007 2:58 pm UTC

Re: AGEIA PhysX

Postby Tei » Wed Dec 19, 2007 1:37 pm UTC

I doubt that idea will fly for long.

If it's ever even slightly successful, the next graphics cards (NVIDIA, ATI, etc.) will include another core dedicated to physics.

And in 3 years you may be running your games on CPUs with 16 cores, so reserving 3 cores for this type of stuff may make sense.

Anyway, it's cool that some people are trying. Testing new ideas is good for everyone else :D

