Visions of a Freeman - April 2, 2013
Digital body for faster internet.

I am working on a project with digital 3D characters, and I will share some of the concepts I have been thinking about.

I was looking at television sent through the internet, and it occurred to me that a new type of television could be made for the internet: one that requires far fewer resources.

The human body does not have any part that can rotate 360 degrees; 180 degrees seems to be the maximum angle that any part of a 3D cartoon's body might need. Some parts of the body do not need 180 degrees: a finger articulation, for example, needs less, though in some animations, depending on the mesh, a thumb might need more than 180 degrees.

Each bone articulation can have a minimum and a maximum angle in the software that plays the animation. The software stores these limits.

Let's say the index finger can bend to a maximum of 50 degrees, but the software always transmits the angle scaled so that 180 represents the bone's maximum bend.

If we receive the number 140 from the software, for example, and we know that the bone can only move up to a 50-degree angle, we use a rule of three to find the exact angle for the finger:

50 = 180 (maximum)
 ? = 140 (new value)

That is (50 multiplied by 140) divided by 180 ≈ 38.9 degrees.

Why do I do this conversion so that 180 is the maximum number? Because it gives more detailed movement to articulations with small ranges, such as eyes, mouth and fingers.
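
Here is a minimal sketch of that conversion in Python (the function names are mine, and the 50-degree limit is just the finger example from above):

    def to_wire(angle_deg, max_deg):
        # Scale a real joint angle so the bone's maximum bend maps to 180.
        return round(angle_deg * 180.0 / max_deg)

    def from_wire(wire_value, max_deg):
        # Recover the real angle from the 180-based value (the rule of three).
        return wire_value * max_deg / 180.0

    # The example from above: a finger limited to 50 degrees, wire value 140.
    print(from_wire(140, 50))  # 38.888... degrees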

That being said, the next step is to store the information.

I use a byte, which can hold 256 values per character, to indicate the bone number first, then I follow it with the 180-based angle, also as a byte. That gives me 2 numbers per bone, and I separate the frames with the character 13 in binary:

[Bone Number][Bone 180 base][13]

So if I had many bones in a frame I would simply use more pairs of numbers, like this:

[Bone Number][Bone 180 base][Bone Number][Bone 180 base][Bone Number][Bone 180 base][13]

That is 3 bones moving in the same frame.
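
As a sketch, this is how such a frame could be packed in Python (the bone numbers and angles are invented for the example):

    def encode_frame(bones):
        # Pack (bone_number, angle_180) pairs as bytes, then the separator 13.
        data = bytearray()
        for bone_number, angle_180 in bones:
            data.append(bone_number)  # 1 byte: which bone
            data.append(angle_180)    # 1 byte: 180-based angle
        data.append(13)               # frame separator
        return bytes(data)

    # Three bones moving in the same frame:
    print(list(encode_frame([(3, 45), (5, 32), (6, 70)])))
    # [3, 45, 5, 32, 6, 70, 13]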

Now let's make it a little more complex; I am going step by step.

Since we are working in pairs of binary numbers, we will need to find a pair for character 13.

We follow character 13 with the frame number.

So if we were to start frame number 1 we would send:

[13][1]

We don't need to initiate a frame if it is not going to have any data.

Observe a new example.

First line sends:

[13][1][Bone Number][Bone 180 base][Bone Number][Bone 180 base][Bone Number][Bone 180 base][13][2][Bone Number][Bone 180 base]

Second line sends:
[Bone Number][Bone 180 base][Bone Number][Bone 180 base][Bone Number][Bone 180 base][13][3]

Third line sends:
[Bone Number][Bone 180 base][Bone Number][Bone 180 base]

The computer will join all the data from all the lines, so it needs a combination that tells it the transmission is over. We can use any character in a pair to indicate that; let's say we use character 13 twice:

[13][13]

Binary numbers look like arbitrary characters when shown one unit at a time, so I will use brackets and convert them to decimal to show how it would look in a friendly debugging form:

[13][1][3][45][5][32][6][70][13][2][6][43][7][33][13][13]

There you can see an example of 2 frames, [13][1] and [13][2], each followed by bone numbers and their 180-based values.

It is possible to send a lot of animation data this way; it's very fast and great for internet transmissions.
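
To make the format concrete, here is a small Python sketch that decodes a stream like the one above. It only handles the [13][frame] separator and the [13][13] terminator described so far:

    def decode_stream(data):
        # Split a byte stream into frames: [13][n] starts frame n,
        # then (bone, angle) pairs follow; [13][13] ends the transmission.
        frames = {}
        current = None
        for i in range(0, len(data) - 1, 2):
            a, b = data[i], data[i + 1]
            if a == 13 and b == 13:   # end of transmission
                break
            if a == 13:               # new frame, b is the frame number
                current = b
                frames[current] = []
            else:                     # a bone/angle pair
                frames[current].append((a, b))
        return frames

    stream = bytes([13, 1, 3, 45, 5, 32, 6, 70, 13, 2, 6, 43, 7, 33, 13, 13])
    print(decode_stream(stream))
    # {1: [(3, 45), (5, 32), (6, 70)], 2: [(6, 43), (7, 33)]}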

We need a special character to tell the software where an animation starts, followed by the animation's number.

We use character 181 for that.

[181][1][13][1][3][45][5][32][6][70][13][2][6][43][7][33][181][2][13][1][3][45][5][32][6][70][13][2][6][43][7][33][13][13]

This says that animation 1 and animation 2 are equal, ending the data with [13][13].

A special character can tell the software to use an animation that it already has stored in memory. Let's make it character 182.

So we say:

[182][6]

That would tell the software to jump to animation 6.

Let's say we want it to run another animation and then return to the new animation we just sent, in order to continue it.

We use character 183 as the "go sub" character.

[183][6] + [13][1][3][45][5][32][6][70][13][2][6][43][7][33][13][13]

It tells the software to jump to animation 6, execute it, and then continue until [13][13] is met.

Now we need a "jump to frame" character for the animation the software is currently on.

We use 184 for that.

Example:

[183][6][184][3]

That tells it to "go sub" and play all of animation 6, return, and then jump to frame 3 of the last selected animation (not the one we called with go sub).

That means we need a "refer to animation" character.

That would be 185.

[185][5]

It tells the software to work on animation 5. You can then use character 184 to jump to frames in that animation; when you are done calling frames from another animation, you close with [185][0], which returns to data input mode in case more information is received for new animations.
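
A rough sketch of how playback software might dispatch these control codes. Only the code numbers come from the protocol described so far; the player object and its method names are hypothetical:

    # Control codes described so far (provisional numbers, as noted later).
    ANIM_START = 181  # [181][n]: the data that follows belongs to animation n
    ANIM_JUMP  = 182  # [182][n]: jump to stored animation n
    ANIM_GOSUB = 183  # [183][n]: play animation n, then return and continue
    FRAME_JUMP = 184  # [184][n]: jump to frame n of the current animation
    ANIM_REFER = 185  # [185][n]: work on animation n; [185][0] returns to input mode

    def dispatch(code, arg, player):
        # Route one control pair to a (hypothetical) player object.
        handlers = {
            ANIM_START: player.begin_animation,
            ANIM_JUMP:  player.jump_to_animation,
            ANIM_GOSUB: player.gosub_animation,
            FRAME_JUMP: player.jump_to_frame,
            ANIM_REFER: player.refer_to_animation,
        }
        handlers[code](arg)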

We would also need a "remake animation" character (186), followed by the animation number.

We might also need a "remake frame" character once 185 is called; that could be character 190, since 187 to 189 are used below for time delays.

Then there is a way to send an animation frame that DOES NOT need to be stored as an animation at all; I call that a streaming frame.

Streaming frames start with character [200] followed by a [1] to indicate streaming mode is starting, and character [200] followed by a [0] to indicate streaming mode is over.

We will need a time delay character as well. We can use characters 187, 188 and 189 followed by the delay; if we need more than 255 as a value we just add another time delay and the software adds them up. 187 can be used for milliseconds, 188 for seconds and 189 for minutes.
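
For example, a 400-millisecond delay splits into two pairs that the software adds up. A small sketch, assuming the unit codes above:

    MS, SEC, MIN = 187, 188, 189  # delay codes: milliseconds, seconds, minutes

    def encode_delay(code, amount):
        # Split delays over 255 into repeated pairs that the player sums up.
        out = bytearray()
        while amount > 255:
            out += bytes([code, 255])
            amount -= 255
        out += bytes([code, amount])
        return bytes(out)

    print(list(encode_delay(MS, 400)))  # [187, 255, 187, 145] -> 400 ms total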

Reminder: all this is for communication purposes over the internet. The software takes the protocol data and feeds it into the appropriate arrays, classes and variables as the programmer sees fit. This is only a movement communication protocol.

It's in the idea phase; I'm just thinking it through, and it can be perfected.

The idea is to create a protocol to communicate the movement of 3D rigged objects.

Then, of course, there is the need to actually move an object in space. We can use numbers from 200 upward for that.

You can tell the software to stop taking in data in pairs of numbers with a code like [200][1], and to return to the pair protocol with the code [200][0]. That way you can feed numbers separated by commas for the X, Y and Z positions of an object.

To indicate which bone we want to work on, we can use the number 201 followed by a 1 to start working on the movement of that bone, and later followed by a 0 to stop moving it.

The code 202 can be used to receive the X, Y and Z positions all at the same time. Just make sure to follow it with character 200, as indicated above.

The codes 203, 204 and 205 can set the X, Y and Z of the object separately. They need character 200 as well.
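
Here is a sketch of how an object's position might be sent with these codes; the exact byte layout is my reading of the description, not a fixed standard:

    FREEFORM_ON  = bytes([200, 1])  # stop reading pairs; text follows
    FREEFORM_OFF = bytes([200, 0])  # return to the pair protocol
    SET_XYZ      = 202              # receive X, Y and Z at the same time

    def encode_position(x, y, z):
        # Send [202], then the coordinates as comma-separated text in
        # freeform mode, then switch back to the pair protocol.
        text = f"{x},{y},{z}".encode("ascii")
        return bytes([SET_XYZ]) + FREEFORM_ON + text + FREEFORM_OFF

    print(list(encode_position(10, 0, -3)))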

It is also possible to skip the 180-based maximum and work directly with the angle itself. That should not break the standards of the protocol, but it does mean less detail.

To create a 360-degree spin, two instructions would be needed.

Now let's make it a little more complex.

Up until now I have talked about the bone moving in only one axis, be it X, Y or Z.

Now we need a character to indicate which axis we are going to work on when moving a bone.

We can use character 0 for that.

[0][1] To tell it we want to change the X axis.
[0][2] To tell it we want to change the Y axis.
[0][3] To tell it we want to change the Z axis.

If you move the same bone twice in the same frame, the movements just add up; this is useful for objects that can rotate 360 degrees.
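
Putting the axis character and this add-up rule together, here is a sketch of the 360-degree spin mentioned earlier as two instructions. It assumes the direct-angle mode described above (the byte is the angle itself), and the bone number is invented:

    AXIS = 0  # [0][1] = X axis, [0][2] = Y axis, [0][3] = Z axis
    bone = 7  # an invented bone number for the example

    # Two moves of the same bone in one frame add up: 180 + 180 = 360.
    spin = bytes([AXIS, 2,     # work on the Y axis
                  bone, 180,   # first half of the turn (direct angle)
                  bone, 180,   # second half; the software adds them up
                  13])         # end of frame
    print(list(spin))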

To specify the names of the bones we can use the code [210], followed by the number of a bone, followed by the [200] code for freeform data. We can use [211] in a similar way to name animations.
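
For example (my own illustration of the idea), [211][4][200][1]Walk[200][0] could name animation 4 "Walk".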

All this data is sent at high speed over the internet; it's the software programmer who decides how the data will be used inside the software itself, free to make the array structures as necessary. It is only a communication protocol for the movement of 3D objects.

You can use any numbers for now; later on, an industry standard will get people working on the same page.

Codes to call other objects' animations and to name them are also a good idea; it is possible to run multiple animations that way. Codes 212 to 220 can do that.

I have not built any software with this protocol idea; this is only about the protocol itself.

It can be done in so many ways that it's impossible to pin down right now, since consensus and agreed standards are what will shape the final form.

This is about the general concept of, and the need for, a 3D movement communication protocol.

So fine, we now have the protocol; next comes the text-to-speech protocol, but that is too advanced for now and beyond the limits of this text.

News of the future.

Once we have the 3D objects and the protocol to move them, plus decent text-to-speech, we can have things like television news presented by 3D characters instead of humans.

The idea is that, done the way I have described, the amount of data needed to deliver the same news is far less than what streaming video requires. Even a slow internet connection can show a 3D news broadcast, cartoon or even a movie, and even a slow connection on a cellular phone can read the protocol from web pages as well as from streaming data.

You could script an entire 3D program on a web page after it airs and watch it later... That is the ultimate in data management for news.

The quality of the model used to display the transmitted movements can vary with the computer, anywhere from boxy robots to photorealistic television presenters, but to do this a fast protocol for moving 3D objects is needed. I think I am the first one to think of this; I am sure I did not copy the idea from anywhere. It is entirely my own, and I came up with it while thinking about a way for users to control the movements of game avatars in multiplayer games.

Yet this new approach can and will revolutionize the internet, making it far more efficient than streaming incredible amounts of data only to watch a simple television program, and I have not even talked about the costs of video streaming...

This solution can store an entire news broadcast in only a few downloadable megabytes, and it adds the further value that the voice script is searchable and can be consulted as text later on... something that normal, primitive television does not allow.

But in order to have 3D television a special communication protocol is needed, and that is the purpose of this text.

I added a little video to this page on that idea of computers doing TV. I have not seen anything like it lately, but the technology that exists today makes it possible to get images so realistic that they can rival the "real world" on the TV set.

It's a video about Max Headroom; this protocol takes us one step closer to making such 3D characters part of a new type of television.

So here it is:
[Embedded video: Max Headroom]

Note: Don't be afraid to experiment with 3D movement communication protocols. There are probably thousands of ways to do it; it just needs to be done.

Not to mention that a 3D character has possibilities a normal television presenter could not even dream of. For example, it can warp anywhere in a 3D scene, have any studio setup, or even hold live interviews with other 3D characters around the world as if everyone were right there on the spot. One day they could be in Caracas for an interview and three minutes later in Tokyo for another live interview, and you would not see much difference from actually being there, because computers keep getting more detailed and realistic.

V-Ray Architectural demo:
[Embedded video: V-Ray architectural demo]

Add to that:
[Embedded video]

And there we have the 3D people, and we have the 3D architecture...

I read in the BBC News that making a human-like animation would be incredibly complex, but they are wrong. It is not necessary to emulate emotions; leave that to the actors themselves. All that is needed is a good mesh that can animate and the communication protocol that moves the bones in the mesh. The realism can come from a human actor attached to special 3D devices that mimic their movements in the 3D world. That will change the shape of future television studios, which could be the size of a small room yet hold a huge 3D world.

To make the person walk, all you have to do is have a robot follow their feet at floor level; that way they can jump, run and walk in any direction while suspended in the air, giving the ultimate realism. The little floor pieces for each foot can be moved with magnets, left foot on one level and right foot on another, like two little towers with spherical wheels for each step but with different altitudes for their magnetic parts. Magnets suspended from cables can move the pieces around; electric current switches the magnets on and off, moving the two "floors" anywhere under the actor's feet and providing great realism.

The face movements can be captured with special sensors and software that uses normal cameras, similar to face recognition software.

The only thing needed is the protocol to communicate the movements. If you have a powerful computer you can see it in much greater detail; if you have a weak computer you see fewer details, but everyone can watch. It is cheaper than streaming video, it provides more transparent content in the news, it is far easier to save for later viewing, and it lets TV cartoon programs (at least for now) be seen on cellular phones without huge bandwidth: a whole news broadcast can take just 2 megabytes of protocol data.
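
As an illustrative back-of-the-envelope check (my numbers, purely for scale): at 20 bones per frame, 2 bytes per bone and 15 frames per second, that is 20 × 2 × 15 = 600 bytes per second, or roughly 2 megabytes per hour of animation, in line with that figure.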

The future does not come on its own; we have to make it, and this is a Vision to make the future. Like I said, there are probably thousands of ways to do it; I just haven't found anyone who has done it, or even talked about it, so I am doing it.

Welcome to 3D Television. The communication protocol is only the first step.

Later on, members of the audience would even be able to PARTICIPATE with the TV host using a regular webcam and good software, but let's leave it at that for now. The purpose of this vision is to highlight that it is time for 3D movement communication protocols.
