LivePlace Stunning Video Leaked
King Buzzo
post Aug 14 2008, 03:30 PM
Post #1


Capable
Group Icon

Group: Staff
Posts: 2,199
Joined: 9-November 05
Member No.: 10





TechCrunch reported spotting the following video, posted on LivePlace.com, a very secretive project at the moment. Not much is known other than that the site is registered to Brad Greenspan, one of the pioneers of MySpace, and that the technology behind it is developed by OTOY. One thing is for sure: the rendering is very impressive.

Shortly after the leak, the video was taken down. Right now, the site says "Live or Virtually Live? Coming very soon!" and has a space to sign up for updates.

OTOY is described as follows: "OTOY is developing technology for delivering real-time “cinematic quality” 3D rendering through the browser. The technology has the potential to be used for online games, 3D virtual communities, instant messaging and social networking applets, among other things. OTOY has partnered with AMD to work on several of their promotional events."

Dino
post Aug 14 2008, 04:28 PM
Post #2


Experienced
Group Icon

Group: Worker
Posts: 2,500
Joined: 22-September 06
From: Oakland, CA
Member No.: 339





Wow!
King Buzzo
post Aug 14 2008, 05:09 PM
Post #3


Capable
Group Icon

Group: Staff
Posts: 2,199
Joined: 9-November 05
Member No.: 10





Forgot to mention in the original write-up: rumor has it that it will be browser-based as well...
Darkscorp
post Aug 14 2008, 05:24 PM
Post #4


Competent
Group Icon

Group: Worker
Posts: 1,650
Joined: 17-December 05
From: Orlando, Florida
Member No.: 80





Holy moly... that's insane!!! And it's within the browser???

Can't wait to see what develops.
Erasmus
post Aug 14 2008, 08:19 PM
Post #5


Apprentice
Group Icon

Group: Worker
Posts: 609
Joined: 16-January 07
From: Florida
Member No.: 479





Sure is stunning!

But ...

With everything being rendered on the server, response to moving the camera is going to be subject to Internet delay. Short of being one hop from the server, there is no way to achieve this degree of smooth movement within the current Internet infrastructure. With average response times, moving the camera is going to look like a relatively slow animation, certainly nothing even close to what the video is showing.
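
That latency point can be put in rough numbers. A minimal sketch, with assumed (not measured) round-trip times, of how many frames behind the user's input a server-rendered view would lag:

```python
# How far behind the user's input a server-rendered view lags, in frames.
# The RTT figures below are illustrative assumptions, not measurements.

def frames_of_lag(rtt_ms: float, fps: float = 30.0) -> float:
    """Frames of delay between moving the camera and seeing the updated view."""
    frame_budget_ms = 1000.0 / fps   # ~33 ms per frame at 30 fps
    return rtt_ms / frame_budget_ms

print(f"one hop from the server (5 ms RTT): {frames_of_lag(5.0):.2f} frames")
print(f"typical cross-country path (100 ms RTT): {frames_of_lag(100.0):.2f} frames")
```

At a 100 ms round trip the view is a full three frames stale, which is exactly the "slow animation" effect described above.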

Buffering? Maybe, but that would limit the camera movement and zoom range, with a similar end result.

As for user creation, despite everything said in the video, it seems to be limited to 2D, i.e. image rendering, not 3D object rendering. You can change the wallpaper on the wall, but you can't change or move the wall. That will seriously limit creative interest.

All in all, a spectacular video rendering, but at least 5 years away, if not 10, from something you can actually do on the web ... it needs point-to-point fiber, faster switches than the current infrastructure has, etc.

Client-side rendering may be limited by the capacity of the end-user's computer, but it goes a long way toward overcoming, or hiding, Internet lag - and we are stuck with that lag for many more years.
Wistrel
post Aug 18 2008, 04:38 PM
Post #6


Trained
Group Icon

Group: Staff
Posts: 1,230
Joined: 16-January 06
From: UK
Member No.: 155





OK not entirely following Erasmus's points but yes this is vaguely the issue or even wholly it... ok not making sense am I?

Right start again.

Yes their model is "we do all the 3d processing at our end and stream the output to the user's PC as a video, youtube style".

Looking at the practicalities, how do they do this?

Well, imagine they have a gf8800 graphics card to make it look that pretty. They can probably use that, especially as they only need low-res output. Some CPU time can be spent compressing the vid too.

Then they send it to each user's computer. There are many users.

We know that youtube works with millions of users so it IS possible to do this but there are some key differences.

1. youtube sometimes pauses.
2. youtube has to buffer video before it starts (for a game this wouldn't be possible)
3. the videos are prerecorded

So for this to work, we need the equivalent of a youtube video that can start playing instantly, plus a big graphics card at the server end - then it is possible.

The problem I see is that they need one per user so how do they do that?

My suspicion is that the people who say "won't work, can't work" aren't giving these people enough credit. The problems they state are pretty darn obvious if you ask me, so it seems ludicrous to think these guys haven't thought them through before starting... surely?

So let's imagine for a second that they aren't idiots and are in fact very, very clever. TBH I doubt any of us have ever thought properly about the problem they are facing. What is different from them, say, giving each user a gf8800 for their own PC?

Well the difference is that the gf8800 is a generic card capable of doing anything. They don't need this.

So how about they flip the problem on its head and instead invest a shitload of cash in dedicated hard-wired gear JUST for that engine/game? Maybe then they can run things a shitload faster? They only have to design it once; then adding new machines to cope with rising user numbers is cheap. Not saying this IS how they could do things, of course, but maybe they have something similar up their sleeve?

Alternatively, maybe they have found a way of funding an outlay of one new gfx card per user? VC money or advertising can really produce big bucks sometimes. Look at Amazon: it was years before they started making a profit, but by then they ruled the market.
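
The economics here can be sense-checked with a quick calculation. A sketch with an assumed per-stream bitrate (roughly YouTube-quality in 2008); none of these figures come from OTOY:

```python
# Aggregate outbound bandwidth needed to stream one live rendered video
# per user. The 1 Mbps per-stream figure is an assumption for low-res video.

def total_bandwidth_gbps(users: int, per_stream_kbps: float) -> float:
    """Total server-side outbound bandwidth for `users` concurrent streams."""
    return users * per_stream_kbps / 1_000_000.0

print(f"{total_bandwidth_gbps(10_000, 1000):.0f} Gbps for 10k users at 1 Mbps each")
```

Ten thousand concurrent users already mean roughly 10 Gbps of sustained outbound video before any rendering hardware is counted, which is why the per-user cost question matters.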

Hope that made sense

Wist


--------------------
"It is not important what's happening on the screen. It is important what's happening in your head." - possibly Mr McC ,-)
Erasmus
post Aug 18 2008, 06:22 PM
Post #7


Apprentice
Group Icon

Group: Worker
Posts: 609
Joined: 16-January 07
From: Florida
Member No.: 479





The problem is not so much the processing power on their end, but the response-time delay between the end-user and the server. The server renders a new view upon receiving the user's view position and sends it off. But the server does not know where the user is moving next, or zooming, or panning, etc. It cannot render the next image set until it receives that new info from the user. That is where the live-video effect breaks down.

It is nothing like feeding a linear video, where the server never has to wait for the user to say "OK, now turn a bit to the left and zoom in", render that image, send it off, and wait for the next request. In most real-world conditions, the round-trip Internet delay far exceeds the duration of a single frame. The server would spend more time waiting for requests than serving them.

Now, bring in fiber and ultra-fast network switches (they are coming) and the end-to-end network delay falls within the video frame interval. Requests could then be received and served back at, say, 12-24 per second per user, with a delay of maybe just one frame. At that point it becomes possible to wait for the user to receive the view, then say "OK, now turn a bit to the left", get that request, process the view and send it off in real time.
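
The cap described here is simple to compute: if each camera update must complete a full round trip before the next view can be requested, the achievable update rate is bounded by 1/RTT. A sketch with assumed round-trip times:

```python
# Upper bound on view updates per second under a strict
# request -> render -> reply loop: one update per round trip.

def capped_fps(rtt_ms: float) -> float:
    """Maximum view updates per second given a network round-trip time."""
    return 1000.0 / rtt_ms

for label, rtt_ms in [("2008-era consumer broadband", 80.0),
                      ("fast fiber path", 40.0)]:
    print(f"{label}: at most {capped_fps(rtt_ms):.1f} updates/s")
```

An 80 ms round trip caps the loop at 12.5 updates per second, and halving the RTT doubles the ceiling.
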
Noggin
post Aug 19 2008, 01:23 PM
Post #8


Apprentice
Group Icon

Group: Stealth Project Orange
Posts: 631
Joined: 3-February 06
From: UK
Member No.: 188





Very nice! It's not a bad idea if it works fast enough - I think a lot of Erasmus' reservations about it are pertinent, although I have a feeling we might all be surprised how good it might be when we eventually see it.

Server-side rendering has a lot of practical drawbacks, but so does client-side. Until our internet connections are as fast and low-latency as your typical wired network these days, this is always going to be an issue.

From what the experts say, real-time ray tracing (what's shown in this video) won't be too far away client-side. This is the only real reason I think this product could be rather short-lived. If we can generate extremely realistic graphics on our PCs within the next 3-4 years, your average FPS will be more realistic than you care to imagine.

Even now, to render 24 FPS at 640x480 with a simple lighting scheme on an optimised engine, you'd only need one of the latest dual-core processors. In the not-too-distant future we'll be seeing 8-core PCs with optimised OSes that actually take advantage of all the cores, and 8-16 GB of RAM; I should imagine they could chuck out more complex scenes at a reasonable frame rate.
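
The arithmetic behind that estimate is straightforward, assuming one primary ray per pixel and no secondary bounces (an assumption for illustration, not a description of OTOY's workload):

```python
# Primary rays per second for a single-bounce trace at the stated
# resolution and frame rate (one ray per pixel, no reflections/shadows).

width, height, fps = 640, 480, 24
primary_rays_per_sec = width * height * fps

print(f"{primary_rays_per_sec:,} primary rays per second")  # 7,372,800
```

Around 7.4 million ray tests per second leaves a budget of a few hundred CPU cycles per ray on GHz-class cores, which is what makes the dual-core claim plausible for simple scenes.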

As soon as the likes of nVidia and ATI (and Intel) get on the real-time raytracing bandwagon, we'll see the next revolution in graphics, and a new generation of MMOs to suit. If they look anything as cool as that video, I can't wait!
Noggin
post Aug 19 2008, 01:50 PM
Post #9


Apprentice
Group Icon

Group: Stealth Project Orange
Posts: 631
Joined: 3-February 06
From: UK
Member No.: 188





QUOTE (Wistrel @ Aug 18 2008, 04:38 PM) *
OK not entirely following Erasmus's points but yes this is vaguely the issue or even wholly it... ok not making sense am I?


Wist, sorry - I somehow missed your post before I replied, and only noticed it when I refreshed the page.

What you say does make total sense, and if that were actually the case I would totally agree with you.

Unfortunately there is something else entirely going on here. The wonderful graphics you saw there were not the product of an amazing graphics card in some machine somewhere. They are something else entirely.

3D graphics as we experience them today are generated by our 3D graphics accelerator cards (such as the nVidia 8800 series you suggested). They have a rich instruction set designed to create realistic looking 3D scenes.

As you will no doubt have noticed, the scenes are often stunning (see Crysis, for example), but they are far from perfect. Shadows often bear no relation to the direction and type of light, reflections don't usually resemble what they should be reflecting, etc.

This is because the cards use a selection of neat tricks to make things look more realistic, but don't actually go to the trouble of calculating what the scene should really look like. This is mostly left to the graphic artists, who put a lot of effort into making the graphics look as realistic as they can.

The video on this thread actually demonstrates a process known as ray tracing. This is where a beam of light is traced from a source (i.e. the sun or a light bulb), and the computer calculates each surface it hits on its journey, eventually producing an image. It's extremely processor-intensive (imagine calculating the result of a million beams of light hitting objects in a complex scene), but the images it produces are far beyond the relatively simple trickery a conventional graphics accelerator uses.
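
The core operation is just geometry: for each ray, find the nearest surface it hits. A toy sketch of the ray-sphere intersection at the heart of any ray tracer (one ray, one sphere; a real renderer fires millions of rays per frame against complex scenes):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere intersection, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for the
    smallest positive t (a quadratic in t).
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4.0*a*c
    if disc < 0:
        return None                       # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0*a)  # nearer of the two roots
    return t if t > 0 else None

# Camera at the origin looking down +z at a unit sphere centred 5 units away:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A full tracer repeats this test against every object, then recurses for shadows and reflections, which is where the enormous processing cost comes from.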

This is why in that video they show various light sources changing, and the resulting shadow effects, not to mention the accurate lens-flares, focus blurs, etc. It's what the next generation of graphics will look like.

These people are obviously trying to get there first, using a grid of powerful servers to generate these impressive images - a bit beyond what most PCs are capable of these days. It's a great idea too, and I'm sure it will be quite the graphical showcase.
Dino
post Aug 19 2008, 05:27 PM
Post #10


Experienced
Group Icon

Group: Worker
Posts: 2,500
Joined: 22-September 06
From: Oakland, CA
Member No.: 339





After those last few posts I just wanted to add something intelligent, but couldn't think of anything, so I'll go photoshop a pancake on my head to post here.
Svetlana
post Aug 20 2008, 10:18 PM
Post #11


Good
Group Icon

Group: Staff
Posts: 2,978
Joined: 11-November 05
From: Seattle, USA
Member No.: 13





I think this is a 'convenient' leak ;)

I also wonder how something might progress to this point without taking into account the current and near-future situations that will have a direct impact upon it. The server and client issues would indeed need to be addressed... maybe they know something we do not. Or perhaps they're creating this for the sheer, extraordinary beauty and the potential power that it can harness one day. Either way, it's stunning.


--------------------
"The future is here, it's just not widely distributed yet." - William Gibson
P-J
post Aug 25 2008, 02:08 PM
Post #12


Beginner
Group Icon

Group: Worker
Posts: 287
Joined: 7-February 06
From: UK
Member No.: 195





QUOTE (Svetlana @ Aug 20 2008, 10:18 PM) *
I think this is a 'convenient' leak ;)

I also wonder how something might progress to this point without taking into account the current and near-future situations that will have a direct impact upon it. The server and client issues would indeed need to be addressed... maybe they know something we do not. Or perhaps they're creating this for the sheer, extraordinary beauty and the potential power that it can harness one day. Either way, it's stunning.



This isn't an area I know a great deal about (so I'll join Dino), however: just WOW... the rendering looks fantastic.

While reading the RSS feed "Tera Nova Speachless" on the forum, I noticed this quote:

"He said that AMD's new chip - the Radeon HD 4870 X2 - was able to process 2.4 teraflops of information per second, meaning it had a capability similar to a computer that - only 12 years ago - would have filled a room. AMD's chip fits inside a standard PC.

But he said that the line between what was real and what was rendered would not be blurred completely until 2020. "

So it sounds like this sort of rendering won't be possible for a while yet....

This post has been edited by P-J: Aug 25 2008, 02:15 PM
Guest_skeptic_*
post Aug 25 2008, 08:31 PM
Post #13





Guest






Keep making it look more and more realistic, so that those individuals who can't or won't form real relationships can continue to believe they are not just playing make-believe. It's such a short leap for some people, and virtual reality, gaming and social networking sites are just another way of making unhappy people believe they don't need real relationships to be happy. You all should be ashamed - wake up and live in the real world, no matter how unpleasant or difficult it may be!!
Domochevsky
post Aug 25 2008, 09:23 PM
Post #14


Qualified
Group Icon

Group: Worker
Posts: 1,094
Joined: 5-June 06
From: Germany
Member No.: 289





Yeah, I'm totally ashamed to be communicating with fictional people long-range over the Internet.
I mean, what was I thinking, having a big part of my social life online and not shutting myself in to collect stamps or something.
It's not like those are actual people or anything, no sir. I'm also totally sure that graphics are an integral part of relationships and that we're all here because we're unhappy.

This post has been edited by Domochevsky: Aug 25 2008, 09:30 PM


Noggin
post Aug 26 2008, 07:38 AM
Post #15


Apprentice
Group Icon

Group: Stealth Project Orange
Posts: 631
Joined: 3-February 06
From: UK
Member No.: 188





QUOTE (P-J @ Aug 25 2008, 02:08 PM) *
This isn't an area I know a great deal about (so I'll join Dino), however: just WOW... the rendering looks fantastic.

While reading the RSS feed "Tera Nova Speachless" on the forum, I noticed this quote:

"He said that AMD's new chip - the Radeon HD 4870 X2 - was able to process 2.4 teraflops of information per second, meaning it had a capability similar to a computer that - only 12 years ago - would have filled a room. AMD's chip fits inside a standard PC.

But he said that the line between what was real and what was rendered would not be blurred completely until 2020. "

So it sounds like this sort of rendering won't be possible for a while yet....


Complete realism is one thing; I think 2020 might be a conservative estimate for such a feat. However, the type of rendering you saw in that demo will be possible on a high-end PC within the next two or three years.

In fact, if we were happy with a one-pass, single-light-source ray trace and a 640x480 image, I'm pretty sure it could be optimised to run on a high-end 8-core workstation (dual quad-core), and I'm fairly sure 25 FPS or more would be attainable too.

The big difficulty with real-time ray tracing is that different scenes can require hugely disparate amounts of processing, meaning one scene could take ten times as long as another. This is obviously a problem in gaming, where fluidity matters.

However, this is not so different from the early days of 3D acceleration, where there were hurdles just as big.

Processors will become powerful enough to produce photo-realistic images in real time - it's just a waiting game. Software (namely the operating systems) has quite a jump to make between now and then to cope with it all.
Svetlana
post Aug 27 2008, 06:49 AM
Post #16


Good
Group Icon

Group: Staff
Posts: 2,978
Joined: 11-November 05
From: Seattle, USA
Member No.: 13





Just to throw it out there: we can't really neglect Moore's Law here. What took a decade to reach just ten years ago takes only a fraction of that time today (transistor counts, and with them computing capability, roughly double every two years). It's certainly interesting to ponder when we'll reach these levels, but history suggests it will be much sooner than we expect... I think.
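
As a quick sketch of that doubling in numbers (taking "every two years" at face value):

```python
# Capability growth under a fixed doubling period: 2 ** (years / period).

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """How many times over capability multiplies in `years`."""
    return 2.0 ** (years / doubling_period)

print(growth_factor(12))  # 64.0 -- the "room-sized computer" gap P-J quoted
```

Twelve years of two-year doublings is a 64x gap, which lines up with a chip today matching a room-filling machine from 12 years ago.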


--------------------
"The future is here, it's just not widely distributed yet." - William Gibson
Domochevsky
post Aug 27 2008, 05:51 PM
Post #17


Qualified
Group Icon

Group: Worker
Posts: 1,094
Joined: 5-June 06
From: Germany
Member No.: 289





...or a lot longer than we expect. ;)

(I want my Jetpack!)

This post has been edited by Domochevsky: Aug 27 2008, 05:52 PM


Svetlana
post Aug 27 2008, 08:37 PM
Post #18


Good
Group Icon

Group: Staff
Posts: 2,978
Joined: 11-November 05
From: Seattle, USA
Member No.: 13





QUOTE (Domochevsky @ Aug 27 2008, 10:51 AM) *
...or a lot longer than we expect. ;)

(I want my Jetpack!)


I cannot argue with that, though I do breathe a sigh of relief knowing that I won't be pummelled on the sidewalks by jet-packing commuters as they come in for a landing.


--------------------
"The future is here, it's just not widely distributed yet." - William Gibson
Dino
post Aug 28 2008, 03:27 PM
Post #19


Experienced
Group Icon

Group: Worker
Posts: 2,500
Joined: 22-September 06
From: Oakland, CA
Member No.: 339





Where's my flying car?
Domochevsky
post Aug 28 2008, 04:41 PM
Post #20


Qualified
Group Icon

Group: Worker
Posts: 1,094
Joined: 5-June 06
From: Germany
Member No.: 289





I also wonder what happened to virtual reality and VR goggles... Is it dead? Back then it looked friggin' ugly and those things were heavy as hell, but today we have the technology! So where is it? :S


