Was looking for a card to replace the aging GTX 260 and decided on the HD 5870. Found the new DX11 NVIDIA cards coming out a bit too power-hungry and hot-running.
Came across the Lucid Logix website, which claims their technology can mix NVIDIA and ATI cards in the same setup to accelerate game rendering. The claim is that they break down the image not by sections but per object, and it all wraps up in the end and displays nice and dandy. They also claim you can pair a GeForce GTX 275 896MB with a Radeon HD 4770 512MB and the system will use both to build the frame buffer faster. Judging by benchmark results, this appears to be true (by a few FPS) in some instances. The same benchmarks also show that with some games it is a disaster and actually lowers the FPS, that gamer holy grail that I imagine is putting some dents in household finances.
The system already runs on a demo board called the MSI Big Bang Fuzion. Reviews find it does not accelerate rendering most of the time and consider the benchmarks on the Lucid Logix website way too optimistic. Reviewers also find it really hard to set up at times (depending on the game), and it may even involve shuffling the graphics cards between slots until you find a winning combination; sadly, it appears you may have to do this per game for some configurations to work properly.
Gimmicky or not, it is a glimpse of a possibility. With OpenCL support around the corner, all those discarded graphics cards may one day not have to be retired so soon, and with multi-SLI/CrossFire configurations more and more supported on multi-PCIe motherboards (think four PCI Express x16 slots), buying cards from different generations could perhaps be justified just to cut down render times on a workstation. Like giving your renders a power-up once in a while, with the best card you can buy regardless of the brand you currently use. My guess is that with DirectX out of the usage equation, it could work.
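For what it's worth, here is a rough sketch of how vendor-agnostic this already looks from the OpenCL side: the API exposes each vendor's driver as a separate "platform", so an NVIDIA and an ATI card in the same box should both show up in one enumeration loop. I haven't tested this on mixed hardware, so treat the driver behavior as an assumption; error checking is omitted.

/* Rough sketch: list every OpenCL platform (one per vendor driver,
 * e.g. NVIDIA and ATI/AMD side by side) and the GPUs each exposes. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id plats[8];
    cl_uint nplats = 0;
    clGetPlatformIDs(8, plats, &nplats);

    for (cl_uint p = 0; p < nplats; p++) {
        char pname[256];
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME,
                          sizeof(pname), pname, NULL);
        printf("Platform %u: %s\n", p, pname);

        cl_device_id devs[8];
        cl_uint ndevs = 0;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 8, devs, &ndevs);
        for (cl_uint d = 0; d < ndevs; d++) {
            char dname[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                            sizeof(dname), dname, NULL);
            printf("  GPU %u: %s\n", d, dname);
        }
    }
    return 0;
}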
Hoping that some builder picks up on the idea and develops it so that cards from different generations, and even from different GPU makers, can be used on the same board as a building block for DirectCompute- and OpenCL-enabled software, giving systems a better life span.
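Taking it one step further, a hypothetical sketch of what "use whatever GPUs are in the box" could look like for a compute job: gather every GPU from every platform, then hand each device its own slice of the data. The dbl kernel and the even per-device split are placeholders I made up; a real renderer would weight the split by device speed and overlap the queues instead of running them one by one.

/* Hypothetical sketch: split one compute job across every GPU OpenCL
 * can see, regardless of vendor. Error checks and releases omitted. */
#include <stdio.h>
#include <CL/cl.h>

#define N 1024

static const char *src =
    "__kernel void dbl(__global float *x) {"
    "    size_t i = get_global_id(0);"
    "    x[i] = 2.0f * x[i];"
    "}";

int main(void)
{
    float data[N];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    /* Gather GPUs from every platform (e.g. one NVIDIA, one ATI). */
    cl_platform_id plats[4];
    cl_uint nplats = 0;
    clGetPlatformIDs(4, plats, &nplats);

    cl_device_id devs[8];
    cl_uint ndevs = 0;
    for (cl_uint p = 0; p < nplats; p++) {
        cl_uint n = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU,
                           8 - ndevs, devs + ndevs, &n) == CL_SUCCESS)
            ndevs += n;
    }
    if (ndevs == 0) { printf("no GPUs found\n"); return 1; }

    size_t chunk = N / ndevs;   /* naive even split; assumes it divides */
    for (cl_uint d = 0; d < ndevs; d++) {
        /* One context and queue per device; each gets its own slice. */
        cl_context ctx = clCreateContext(NULL, 1, &devs[d], NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, devs[d], 0, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &devs[d], NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "dbl", NULL);

        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    chunk * sizeof(float), data + d * chunk, NULL);
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        clEnqueueNDRangeKernel(q, k, 1, NULL, &chunk, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, chunk * sizeof(float),
                            data + d * chunk, 0, NULL, NULL);
    }
    printf("data[%d] = %f (expected %f)\n", N - 1,
           data[N - 1], 2.0f * (N - 1));
    return 0;
}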
Also this caught my attention:
http://www.genomeweb.com/informatics...nce-?page=show
It mainly talks about early adopters of GPU computing and how the brands are trying to get ahead on support, and how users are trying to work with both major builders so they don't get locked in, or even worse, so one brand doesn't get a major head start and kill the competition.
It is written from the standpoint of users of GPU-enabled life science applications, but it extrapolates well. After all, it is looking like a duopoly nowadays.
Something to think about at the dawn of what could turn out to be as cool as: "Hey, this render is taking too long, I am going to go out and pick up another GPU so we can keep the deadline, be back in a bit," without a care for brand-related issues.
Maybe you'll have ATI and NVIDIA running side by side on the same workstation, and not be bothered with what to use to fill the extra PCIe slots when upgrade time comes.