  3D Engine design
 
Jeffrey Lee Message #85162, posted by Phlamethrower at 11:55, 28/12/2000
What I've been able to work out so far is that:
* We all want a new 3D engine
* Not all of us have the skill to do it ourselves (Or some people believe we don't)
* None of us have the time to do it ourselves, or even in a group

So although we can't actually program it, we should at least do something towards making one, for example laying out the features we want, so if one of us has time we might be able to do some real work towards it.
I'm avoiding OpenGL here, since giving a list of features for something we would just be porting over would be useless.

Features:
* Perspective corrected and non-perspective corrected texture mapping
* Anti-aliased textures
* Support for flat-shaded polygons
* Polygons are triangles
* Mainly 16 bit; possibly 24 bit, and 8 bit with reduced features
* Support for Risc PCs and above (RISC OS 3.5 and above)
* Optimised for plotting textures in the same colour depth they were made in
* Volumetric objects (e.g. fog)
* Support for virtually any screen size
* No voxels (for first version at least)
* Transparent textures
* Skeletal and vertex based model animations
* Support for a hierarchical file system (e.g. PAK and normal files, scanned in different orders, like Quake)
* Perhaps support for hardware rendering in the future (Once it's actually available)
* VIS-ing to cut down polygons during rendering
* Reflections/wavy water effects (Like Iron Dignity)
* Real-time lighting, and fixed maps like Quake
* Lens flares
* Modular system, so things like light maps aren't used if they aren't needed (to speed things up and cut down on memory)

If you have any changes to make to that list, then make them!

[Edited by Phlamethrower at 14:34, 28/12/2000]

 
Mark Quint Message #85163, posted by ToiletDuck at 15:33, 28/12/2000, in reply to message #85162
yup sounds good grin
about the models bit though - d'you mean a little like Half-Life models????
 
Jeffrey Lee Message #85164, posted by Phlamethrower at 20:16, 28/12/2000, in reply to message #85163
Yep. Some kind of skeletal basis or something, so that you just set a variable to a certain value and the model moves accordingly (Like the player's mouth)
 
Nick Wright Message #85165, posted by nick2 at 22:04, 28/12/2000, in reply to message #85164
Modular - now that's a good idea.
 
Shane Message #85166, posted by Ramuh at 22:49, 28/12/2000, in reply to message #85165
Pardon me if this is a stupid question, as I'm sure people have already done some homework on this subject but...

Rather than writing something from scratch, what about porting an engine? There are a few 3D engines floating around out there, some of which are public domain, or written by hobbyists - would it be possible to port them over to RISC OS? Most of them probably use OpenGL or DirectX, but I would have thought that the sources are in C/C++ (at least I know some of them are), so is this possible?

I had a look at this page:
http://www.f10.parsimony.net/forum15628/index.htm

And there are a few links to engines, here's a couple of them:
http://www.radonlabs.de/
http://www.geocities.com/a_licu/

Not being as good an ARM coder as I used to be, I couldn't do it, not without a lot more practice, but is this a possible alternative?

 
Steve Allen Message #85167, posted by [Steve] at 14:30, 29/12/2000, in reply to message #85166
www.3dfiles.com has some useful info on 3d engines if you hunt around a bit.
 
Jeffrey Lee Message #85168, posted by Phlamethrower at 15:53, 29/12/2000, in reply to message #85167
Me:
I'm avoiding OpenGL here, since giving a list of features for something we would just be porting over would be useless.


Shane:

Rather than writing something from scratch, what about porting an engine?

Porting bits of engines might be useful though, to cut down on the amount of work that we are doing.

 
Jeffrey Lee Message #85169, posted by Phlamethrower at 11:59, 2/1/2001, in reply to message #85168
OK, I think I've got the basis of the system worked out. It will be based around a module as the kernel, and other modules/programs will register with it to provide polygon drawing, object handling, etc. This basically makes it quite easy to expand, and cuts down tremendously on the work we have to do to get a working engine out the door.

These are the basic routines that would be filled in by the plug-ins then:

* Mass vertex work (e.g. working out where all vertices are in the world/on screen)
* Polygon list creation (e.g. working out what can be seen from a VIS list)
* Scene draw handler (What type of drawing - e.g. raytracing)
* Polygon drawer - Simply draws it on screen
* Polygon line drawer - Draws a horizontal line
* Collision checking for objects
* File handling (For the hierarchical system)
* Perhaps vertex maths if an expandable vertex system was given (e.g. 32 bit and 64 bit co-ords)

So in order to draw the screen, this would be done:

* Call to set all the vertices world positions
* Call to set vertices screen positions
* Call to generate polygon list (Now we know what can be seen)
* Call to scene renderer

And a simple scene renderer may just be this:
* Sort polygons into distance order
* Call the polygon drawer for each poly

While more complex ones would work with an 'active edge list' and draw the screen line by line, only drawing the polygons that are in front.

To stop vertices being calculated multiple times, a frame count could be used. If the frame count matches the stored one for that vert, then it is not re-calculated.
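
A minimal C sketch of that frame-count idea (the structure and field names below are invented for illustration, not part of any agreed design):

#include <stdio.h>

typedef struct {
    int x, y, z;          /* world-space position (fixed point)            */
    int sx, sy;           /* cached screen position                        */
    unsigned frame_seen;  /* frame number when sx/sy were last calculated  */
} Vertex;

static unsigned current_frame;

/* Project a vertex at most once per frame; polygons that share the
   vertex simply reuse the cached screen coordinates. */
static void project_vertex(Vertex *v)
{
    if (v->frame_seen == current_frame)
        return;                       /* already done this frame */
    v->sx = v->x;                     /* placeholder for the real projection maths */
    v->sy = v->y;
    v->frame_seen = current_frame;
}

int main(void)
{
    Vertex v = { 100, 200, 300, 0, 0, 0 };
    current_frame = 1;                /* start of a new frame */
    project_vertex(&v);               /* does the work */
    project_vertex(&v);               /* no-op: frame count matches */
    printf("screen: %d,%d\n", v.sx, v.sy);
    return 0;
}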

If anyone has any problems with the above, then say so! Also can someone think about how the mirrors would work - would it just be a case of copying the correct polygons to behind the mirror, making sure only the ones that can be seen are copied? Would this need multiple vertex/poly lists?

Also does anyone know of any good drawing methods - would brute force and a VIS list do? Any simple speed increases we could make to it?

After that, we need to decide on the data and structure that will be passed around through all the calls unhappy

 
Lee Johnston Message #85170, posted by johnstlr at 16:44, 2/1/2001, in reply to message #85169
There are a lot of messages here so quoting one won't make much sense. Therefore here are my thoughts on what has been said so far.

Just about everyone who reads Acorn Arcade must have a "wish list" for a 3D Engine but simply stating it doesn't get us any closer. In the first post Jeff states that he won't consider OpenGL. Well why not? To achieve everything in the list you need support for at least some of the features that OpenGL offers. All the plotting ideas are covered by OpenGL. For hierarchical and skeletal animation you need a method of composing and utilising matrix stacks - OpenGL provides these. Hardware rendering is a case of being able to swap drivers. This would easily be achieved by putting the API in a module. Note that you wouldn't want to swap a whole 3D engine so either the subset offered by hardware is placed in a swappable module or the 3D engine supports some sort of dynamically loaded libraries.

All the fancy features mentioned, such as lens flares, require the facilities built into an API like OpenGL.

In answer to Shane's point about porting an engine, the problem is twofold. Firstly engines on other platforms invariably use OpenGL or Direct3D. A direct port would, at the very least, require the subset of the API used to be ported as well. Also the engines tend to be optimised for non-ARM hardware - read lots of floating point. By the time you've become intimately familiar with an engine in order to port it properly you probably could have written one - look how long Martin Piper's port of the Quake engine took, and we never saw it.

If you're looking for information on 3D Engines then THE place to look is

http://cg.cs.tu-berlin.de/~ki/engines.html

a very good tutorial is available with the 3DGPL engine at

http://www.cs.mcgill.ca/~savs/3dgpl/

and if you're looking for a modular engine design then the only place worth looking at appears to be the Grand Unified Game Engine at

http://www.gauge3d.org/

Finally a great site, updated on a daily basis is

www.flipcode.com

 
Lee Johnston Message #85171, posted by johnstlr at 17:16, 2/1/2001, in reply to message #85170
OK, I think I've got the basis of the system worked out. It will be based around a module as the kernel, and other modules/programs will register with it to provide polygon drawing, object handling, etc. This basically makes it quite easy to expand, and cuts down tremendously on the work we have to do to get a working engine out the door.

When you say module based do you mean as a series of RISC OS modules, a series of DLLs or even a series of object code libraries that can be linked at link time in order to customise the build? If you envisage building a series of modules how will they advertise their services to other modules and how will those modules use the new services without knowing the SWI format beforehand? This last one is something of a research project in the Reflection community and is known as introspection cool

These are the basic routines that would be filled in by the plug-ins then:

* Mass vertex work (e.g. working out where all vertices are in the world/on screen)
* Polygon list creation (e.g. working out what can be seen from a VIS list)

So transforming the object coordinates to object space and culling by distance, then transforming to view space and culling against the view volume. The problem is that this is very dependent on the internal structure of your engine. Will you have a completely free form environment which will make this trickier / slower or something like Portals which are only of use on indoor scenes? While you can specify an API across this kind of service different techniques require vastly different data structures within the engine. This is why APIs like OpenGL are very general and don't offer scene handling facilities - there are too many to choose from.

* Scene draw handler (What type of drawing - e.g. raytracing)

Assuming you don't seriously mean ray-tracing I guess you mean what techniques do we use such as ray-casting, straight polygon plotting or something else? Well due to the route taken by hardware I would guess that anything other than straight polygon plotting is a non-starter.

* Polygon drawer - Simply draws it on screen
* Polygon line drawer - Draws a horizontal line

Straight plotting? Z-Buffer? Active Edge Table? C-Buffer? Each needs to be coded in different ways. Straight plotting suffers from overdraw. Z-buffer is slow but as the number of polygons in the scene increases the time to render doesn't increase that much as there is zero overdraw.

Active Edge Table is faster than z-buffer at low polygon counts and prevents overdraw. It's also difficult to write efficiently - I've done it, C source code available on request cool - and the complexity of managing the active edges increases with the polygon count.

C-Buffer (or coverage buffer). Turns the AET on its head. Instead of storing the polygon spans that need drawing it stores the areas of the screen that have been covered. This dramatically reduces the management overhead of complex scenes. There's a tutorial on it on www.flipcode.com.
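
For comparison, here is a bare-bones z-buffered span fill in C - the per-pixel-test baseline that the AET and C-buffer approaches try to beat (buffer sizes and the 16bpp format are made up for illustration):

#include <limits.h>
#include <string.h>

#define WIDTH  320
#define HEIGHT 256

static unsigned short frame[HEIGHT][WIDTH];  /* 16bpp frame buffer */
static int            zbuf[HEIGHT][WIDTH];   /* depth per pixel    */

static void clear_buffers(void)
{
    memset(frame, 0, sizeof frame);
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            zbuf[y][x] = INT_MAX;            /* "infinitely" far away */
}

/* Draw one horizontal span, interpolating depth across it. Every pixel
   is depth-tested, so there is no overdraw - but there is per-pixel
   work even for spans that end up completely hidden. */
static void draw_span(int y, int x0, int x1, int z0, int z1,
                      unsigned short colour)
{
    if (x1 <= x0) return;
    int dz = (z1 - z0) / (x1 - x0);
    for (int x = x0, z = z0; x < x1; x++, z += dz) {
        if (z < zbuf[y][x]) {                /* closer than what is already there? */
            zbuf[y][x]  = z;
            frame[y][x] = colour;
        }
    }
}

int main(void)
{
    clear_buffers();
    draw_span(100, 10, 310, 100, 5000, 0x7FFF);  /* one test span */
    return 0;
}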

* Collision checking for objects

Tricky. You really need to ensure all the math routines are working perfectly before attempting this one.

* Perhaps vertex maths if an expandable vertex system was given (e.g. 32 bit and 64 bit co-ords)

Ideally the underlying math routines should be capable of 64-bit math. On a StrongARM the UMULL and SMULL instructions make this easy and efficient. Lower processors will need to be catered for though.
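
A small C illustration of why the long multiply matters: a 16.16 fixed-point multiply needs a 64-bit intermediate result, which a compiler targeting the StrongARM can turn into a single SMULL (this sketch is mine, not code from the thread):

#include <stdio.h>

typedef int fixed;               /* 16.16 fixed point   */
typedef long long fixed64;       /* 64-bit intermediate */

#define TO_FIXED(x)   ((fixed)((x) * 65536))
#define FROM_FIXED(x) ((double)(x) / 65536.0)

/* Multiply two 16.16 numbers without losing the top bits. On a
   StrongARM the 64-bit product maps onto SMULL; processors without
   long multiply need a slower multi-step routine instead. */
static fixed fx_mul(fixed a, fixed b)
{
    return (fixed)(((fixed64)a * b) >> 16);
}

int main(void)
{
    fixed a = TO_FIXED(3.5), b = TO_FIXED(-2.25);
    printf("%f\n", FROM_FIXED(fx_mul(a, b)));   /* prints -7.875000 */
    return 0;
}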

So in order to draw the screen, this would be done:

* Call to set all the vertices world positions
* Call to set vertices screen positions
* Call to generate polygon list (Now we know what can be seen)
* Call to scene renderer

The problem is that each of these "calls" involves a lot of work.

And a simple scene renderer may just be this:
* Sort polygons into distance order
* Call the polygon drawer for each poly

Again this depends on the type of engine you have. BSP and Portal based engines remove the need for sorting polygons, because the polygons are effectively pre-sorted by the data structures.

To stop vertices being calculated multiple times, a frame count could be used. If the frame count matches the stored one for that vert, then it is not re-calculated.

I'm not quite sure what you mean by this. Any object will contain a list of vertices and list of polygons. The polygons will contain pointers to the vertices they use. To transform the object you merely transform the vertices. Each vertex should only be transformed once from local to object space, from object space to view space and finally view to screen space. Each polygon will then automatically use the transformed vertices.
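
In C terms, roughly this - one shared vertex list transformed in a single pass, with polygons holding indices into it (floats are used for brevity where a real RISC OS engine would use fixed point; all names are illustrative):

#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

typedef struct {
    int v[3];                /* indices into the object's vertex arrays */
} Triangle;

typedef struct {
    int       nverts, ntris;
    Vec3     *local;         /* model-space positions    */
    Vec3     *transformed;   /* filled in once per frame */
    Triangle *tris;
} Object;

/* Transform every vertex exactly once; every triangle that shares a
   vertex automatically picks up the transformed copy via its index. */
static void transform_object(Object *o, float m[3][4])
{
    for (int i = 0; i < o->nverts; i++) {
        Vec3 p = o->local[i];
        o->transformed[i].x = m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3];
        o->transformed[i].y = m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3];
        o->transformed[i].z = m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3];
    }
}

int main(void)
{
    static Vec3 local[3] = { {0,0,0}, {1,0,0}, {0,1,0} };
    static Vec3 world[3];
    static Triangle tri = { {0, 1, 2} };
    Object o = { 3, 1, local, world, &tri };
    float identity[3][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0} };
    transform_object(&o, identity);
    return 0;
}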

Note that it is possible to use APIs like OpenGL in ways that would mean multiple transformations and ways that would be single transformations. It depends on whether you want to do it quick and dirty or efficiently.

If anyone has any problems with the above, then say so! Also can someone think about how the mirrors would work - would it just be a case of copying the correct polygons to behind the mirror, making sure only the ones that can be seen are copied? Would this need multiple vertex/poly lists?

I would imagine that a mirror would consist of one (or more) polygons. You would take the normal which is perpendicular to the front face of the polygon and trace along it until you hit something and then render part of what you hit in the mirror. What you're really looking for is "environment mapping". Note Tomb Raider doesn't have proper mirrors. Also note that the process is similar to certain lighting models. If you have a good set of math routines it should be easy to add once you understand the principles fully. I don't claim to understand them though.

Also does anyone know of any good drawing methods - would brute force and a VIS list do? Any simple speed increases we could make to it?

Once your renderers are as fast as they can go then it all comes down to minimising the number of polygons you draw. If you're dealing with a lot of polygons then you have to prevent overdraw. The rest is really all culling at object and view space level.

After that, we need to decide on the data and structure that will be passed around through all the calls unhappy

Before you do that ask yourself what kind of game you wish to write. If you're happy with indoor only games you need to think about techniques like BSP trees and Portals. If you want outdoor landscapes then you need to think about techniques such as Quad-trees and Oct-trees. They're very different data structures and engines like the new Unreal engine actually have both indoor and outdoor engines in them.

 
Jeffrey Lee Message #85172, posted by Phlamethrower at 19:26, 2/1/2001, in reply to message #85171
Arg! Why did you have to do such a big reply?!?!

Just about everyone who reads Acorn Arcade must have a "wish list" for a 3D Engine but simply stating it doesn't get us any closer.

That's why I'm working on designing the thing, breaking each point down into its elements and then working out how to program it! Read the thing before doing silly replies like that!

In the first post Jeff states that he won't consider OpenGL. Well why not? To achieve everything in the list you need support for at least some of the features that OpenGL offers.

If I was to consider OpenGL, then all the engine would be is a port of it. Instead I am designing it myself, and if it happens to share some features then it's just because they are both 3D engines!

For hierarchical and skeletal animation you need a method of composing and utilising matrix stacks - OpenGL provides these.

There's no point porting OpenGL because someone else is likely to try it (e.g. the Omega API), and my version is likely to clash with theirs. And if 'matrix stacks' are what I need then I'm sure I can work out how to do it myself, or whoever writes that part of the engine can just rip it from OpenGL if they want.

Note that you wouldn't want to swap a whole 3D engine so either the subset offered by hardware is placed in a swappable module or the 3D engine supports some sort of dynamically loaded libraries.

That's the idea! Read before reply!

All the fancy features mentioned, such as lens flares, require the facilities built into an API like OpenGL.

How many times do I have to say that I'm not porting OpenGL?!?!?!?!

Also the engines tend to be optimised for non-ARM hardware - read lots of floating point. By the time you've become intimately familiar with an engine in order to port it properly you probably could have written one - look how long Martin Piper's port of the Quake engine took, and we never saw it.

Finally you see the light! Porting OpenGL would mean a lot of work!

When you say module based do you mean as a series of RISC OS modules, a series of DLLs or even a series of object code libraries that can be linked at link time in order to customise the build?

A single RISC OS module as the kernel. Any other program should be able to link with it.

If you envisage building a series of modules how will they advertise their services to other modules and how will those modules use the new services without knowing the SWI format beforehand?

Eh?
Won't the other modules be programmed with the aim to work with the 3D module?

This last one is something of a research project in the Reflection community and is known as introspection cool

You finally realise that I don't want to spend my entire life making a fully-featured engine! All I really want is to get a working engine out the door, with the facility to be easily expanded in the future without anyone shouting at me to change the source!

So transforming the object coordinates to object space and culling by distance, then transforming to view space and culling against the view volume. The problem is that this is very dependent on the internal structure of your engine. Will you have a completely free form environment which will make this trickier / slower or something like Portals which are only of use on indoor scenes?

It should be pretty free-form, where if an object that uses any kind of VISing exists, then that object's code will be called to handle it.

While you can specify an API across this kind of service different techniques require vastly different data structures within the engine. This is why APIs like OpenGL are very general and don't offer scene handling facilities - there are too many to choose from.

My engine will also be very general, so I don't have to do much work.

Assuming you don't seriously mean ray-tracing I guess you mean what techniques do we use such as ray-casting, straight polygon plotting or something else? Well due to the route taken by hardware I would guess that anything other than straight polygon plotting is a non-starter.

Ray-tracing could be implemented, remember this is an open-ended system. I don't tell the user which plotting method to use.

I am leaning towards fixed vertex formats, probably just 32,64 and perhaps 128 bit. All would be fixed point, up to whatever degree the host program wants (Since that doesn't affect the maths much).

So in order to draw the screen, this would be done:

* Call to set all the vertices world positions
* Call to set vertices screen positions
* Call to generate polygon list (Now we know what can be seen)
* Call to scene renderer

The problem is that each of these "calls" involves a lot of work.

I'm thinking of making one call do multiple things, e.g. one call to do all the vertex and polygon list work. Of course this means more work for the person who writes those bits of code.

To stop vertices being calculated multiple times, a frame count could be used. If the frame count matches the stored one for that vert, then it is not re-calculated.

I'm not quite sure what you mean by this. Any object will contain a list of vertices and list of polygons. The polygons will contain pointers to the vertices they use. To transform the object you merely transform the vertices. Each vertex should only be transformed once from local to object space, from object space to view space and finally view to screen space. Each polygon will then automatically use the transformed vertices.

The point of storing a flag (the frame count) recording how up to date each position (e.g. world space, screen position) of a vertex is, is to stop a vertex from being calculated twice in a frame when there is a chance of that happening. In reality this may not happen, so instead perhaps a simple method where the object is trusted to update its own vertices is needed - e.g. if the object hasn't moved then the world vertices won't be recalculated, but if the camera has moved then the screen ones will.

Note that it is possible to use APIs like OpenGL in ways that would mean multiple transformations and ways that would be single transformations. It depends on whether you want to do it quick and dirty or efficiently.

The engine is going to keep track of the world positions, so that collision checking and things can be done by the host program. In terms of multiple transformations on a single vertex (e.g. a skeletal animation system), it depends on who writes the skeletal system code (Because it would just be a plug-in).

I would imagine that a mirror would consist of one (or more) polygons. You would take the normal which is perpendicular to the front face of the polygon and trace along it until you hit something and then render part of what you hit in the mirror.

I suppose that's one way of doing it, but is likely to be rather slow (What were you saying about ray-tracing?)

If you have a good set of math routines it should be easy to add once you understand the principles fully. I don't claim to understand them though.

Adding mirrors is likely to be left to some other poor soul, but what we need is the facility to support it.

After that, we need to decide on the data and structure that will be passed around through all the calls unhappy

Before you do that ask yourself what kind of game you wish to write. If you're happy with indoor only games you need to think about techniques like BSP trees and Portals. If you want outdoor landscapes then you need to think about techniques such as Quad-trees and Oct-trees. They're very different data structures and engines like the new Unreal engine actually have both indoor and outdoor engines in them.

The renderers are all going to be plugins, so it's up to someone else to work out things like that (Unless I decide to code some, of course).

I know a rather simple portal system, which should work quite well for indoors areas. The method is basically splitting the level up into cube or cuboid shaped areas (All the same size to make working out where the camera is easier), and making a polygon list for each area. The 'portals' between the areas would then be calculated, and ray casting/tracing would be used to work out which portal can see where. If a portal can see a portal, then the two corresponding areas are added to the VIS list for the two areas the source portal connects. The system would try tracing one extra level (area) through at a time, until it runs into a dead-end.
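
A rough C sketch of the data side of that idea (the ray casting that fills in the VIS lists would be done offline and is omitted; every name here is invented for illustration):

#include <stdio.h>

#define MAX_AREAS 64

typedef struct {
    int first_poly, npolys;        /* this area's slice of the level's polygon list */
    int visible[MAX_AREAS];        /* indices of areas this area can see            */
    int nvisible;
} Area;

typedef struct {
    Area areas[MAX_AREAS];
    int  narea;
    int  cell_size;                /* areas are equal-sized cuboids */
} VisData;

/* Equal-sized cells make "which area is the camera in?" a simple divide. */
static int area_of(const VisData *v, int x, int y, int z, int cells_x, int cells_y)
{
    int cx = x / v->cell_size, cy = y / v->cell_size, cz = z / v->cell_size;
    return (cz * cells_y + cy) * cells_x + cx;
}

/* At run time, draw only the camera's area plus whatever its
   pre-computed VIS list says it can see. */
static void draw_visible(const VisData *v, int camera_area)
{
    const Area *a = &v->areas[camera_area];
    printf("draw polys %d..%d\n", a->first_poly, a->first_poly + a->npolys - 1);
    for (int i = 0; i < a->nvisible; i++) {
        const Area *b = &v->areas[a->visible[i]];
        printf("draw polys %d..%d\n", b->first_poly, b->first_poly + b->npolys - 1);
    }
}

int main(void)
{
    static VisData v;
    v.cell_size = 512;
    v.narea = 2;
    v.areas[0] = (Area){ 0, 100, {1}, 1 };    /* area 0 can see area 1   */
    v.areas[1] = (Area){ 100, 60, {0}, 0 };   /* area 1 sees only itself */
    draw_visible(&v, area_of(&v, 100, 0, 0, 2, 1));
    return 0;
}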

I hope you understand that, because it's difficult to explain without a diagram smile

I hope to do two main things with this engine once useable - one would be a space game (No VISing, just distance checks), and the other would be an indoors game of some description (Using a VIS system like the one above).

About the plugins. There are likely to be two types - Run-time plugins, which would just be linked where they are in memory, and file-based ones (e.g. from an application I might make to hold the 3D engine, to keep everything held together). An application could make things like hardware rendering easier to manage, since you would only have to set the main program to do hardware rendering rather than each game that uses the module (Hint hint PC's)

Hmmm... Lighting... How could it be implemented? Bear in mind that not everything would want to use lighting though. Perhaps give the lightmap object/plugin the responsibility of setting an object's light info?

Also that raises the question of properties themselves - How would an object have its properties set? Would it just be a case of the object having to be compatible, or would there be some system governed by the module, where a property is only set/read if it is in the target object?

That would then tie into the global object format which everything would have to adhere to.

Hmm.. Perhaps nicking someone else's designs (e.g. OpenGL) might be easier after all....

ESPECIALLY SINCE NO-ONE ELSE SEEMS TO BE INTERESTED IN DOING SOMETHING TO HELP DESIGN/MAKE THIS THING (Except for Lee Johnston, who just gives me lots of things to think about, making me realise bits that I haven't thought out properly and generally degrading my morale unhappy)

[Edited by Phlamethrower at 19:35, 2/1/2001]

 
Jeffrey Lee Message #85173, posted by Phlamethrower at 19:49, 2/1/2001, in reply to message #85172
Hmm...

http://www-solar.mcs.st-and.ac.uk/~davidb/Mesa/index.html

Seems that there is an OpenGL-compatible 3D engine already available for RISC OS. I'll take a look, and if it runs pathetically slow then I'll either speed it up (Hopefully without altering the main Mesa code), create a modified version of the code, or continue with this COMPLETELY NON-OPENGL 3D engine.

 
Mark Quint Message #85174, posted by ToiletDuck at 22:26, 2/1/2001, in reply to message #85173
right, if I'm getting it right, you are planning to write/port a 3D engine for RISC OS??
if so, <Applause> Thank You </Applause>
On the matter of help, I would be happy to help with what I can, as I think that the ideas discussed here are going in the right direction.
Perhaps the easiest way of approaching it would be to set up a small team to work on it.
 
Lee Johnston Message #85175, posted by johnstlr at 09:42, 3/1/2001, in reply to message #85174
Arg! Why did you have to do such a big reply?!?!

Would you rather that I was less comprehensive? cool

That's why I'm working on designing the thing, breaking each point down into its elements and then working out how to program it! Read the thing before doing silly replies like that!

Sorry, I've seen too many people state wish lists who don't have the ability or inclination to back it up. It makes me a little skeptical.

If I was to consider OpenGL, then all the engine would be is a port of it. Instead I am designing it myself, and if it happens to share some features then it's just because they are both 3D engines!

No because OpenGL isn't a 3D engine. It's an API offering facilities to ease the creation of applications that render 3D scenes. Maybe I should explain my apparent fixation with it. One of the hardest things of any piece of software is coming up with a decent API for it. It's often easier to "copy" one than design your own simply because the hard work has already been done for you. Obviously this means you have to understand the API that you intend to copy and have some idea of how to implement it efficiently. I guess it all depends on which route you wish to take.

There's no point porting OpenGL because someone else is likely to try it (e.g. the Omega API),

And this is the exact reason why, despite having the impulse to start looking at this stuff again, I haven't. I want to see what Microdigital propose. We don't want a situation where we have multiple incompatible APIs. If only we could get some information out of them....

That's the idea! Read before reply!

The only reason I'm making these points is that I've been there, tried it and invariably found it required more time / expertise than I could put in. I'm not trying to knock your efforts - I would never do that - merely putting my thoughts down.

Eh?
Won't the other modules be programmed with the aim to work with the 3D module?

So the 3D kernel module provides all these facilities statically and if you want to create a different driver you replace the 3D kernel?

You finally realise that I don't want to spend my entire life making a fully-featured engine! All I really want is to get a working engine out the door, with the facility to be easily expanded in the future without anyone shouting at me to change the source!

*grins*

It should be pretty free-form, where if an object that uses any kind of VISing exists, then that object's code will be called to handle it.

So is the object format described as part of the kernel or is it customisable by the application? If the latter I assume the application will provide some callback the kernel can call to carry out specific actions. This is one area I've always been stuck on with modules (although I'll admit to not really applying myself to working it out). How does a RISC OS module call back user code? As I tend to code in C/C++ I've always relied on function pointer callbacks or predefined interfaces that are inherited by application objects.

My engine will also be very general, so I don't have to do much work.

Ok I think I'm building up a picture of what you're getting at now.

I am leaning towards fixed vertex formats, probably just 32,64 and perhaps 128 bit. All would be fixed point, up to whatever degree the host program wants (Since that doesn't affect the maths much).

The fixed format approach was the one taken by Direct3D until DirectX 8 and is still used by OpenGL. The advantage is that the application can specify which vertex fields it uses and you can reconfigure your transformation pipeline accordingly. Direct3D now uses pixel shaders and you really don't want to go there without hardware support cool

The point of storing a flag (the frame count) recording how up to date each position (e.g. world space, screen position) of a vertex is, is to stop a vertex from being calculated twice in a frame when there is a chance of that happening. In reality this may not happen, so instead perhaps a simple method where the object is trusted to update its own vertices is needed - e.g. if the object hasn't moved then the world vertices won't be recalculated, but if the camera has moved then the screen ones will.

So if the object hasn't moved or rotated you don't have to recalculate the object space coords. Yes my old engine did this. I also allowed an application to specify (dynamically) whether an object was "static" or "active". Static objects didn't move or rotate so the object space coords were always pre-calculated. Objects could have their type changed at any time. The "changed" flag was either set by the engine or application depending on whether the application let the engine manage the object or supplied a callback to handle its behaviour.

The engine is going to keep track of the world positions, so that collision checking and things can be done by the host program. In terms of multiple transformations on a single vertex (e.g. a skeletal animation system), it depends on who writes the skeletal system code (Because it would just be a plug-in).

Hmm I've not written a hierarchical animation system but I don't see why vertices would be transformed twice. All the vertices could be stored as a list in the top level object and then child objects would just point to the ones they use. I don't know how easy this would be to do in practice though.

I suppose that's one way of doing it, but is likely to be rather slow (What were you saying about ray-tracing?)

No because what I described was ray-casting, not ray-tracing cool

Adding mirrors is likely to be left to some other poor soul, but what we need is the facility to support it.

Delegation is good cool

The renderers are all going to be plugins, so it's up to someone else to work out things like that (Unless I decide to code some, of course).

Yes but different 3D engine types require different data structures at the global level. I guess the kernel could know about all the possible data structures and plugins could cast data structures to the right type.

I hope you understand that, because it's difficult to explain without a diagram smile

Yes although I understand Portals slightly differently. I figure each "room" has a list of adjoining rooms and polygons which are "doors". When you generate the polygon list for a room if the "door" polygon is included you take all the polygons from the joining room and clip them against the "door" and add them to the list. Of course if any of these clipped polygons are "doors" then you repeat the process.
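
In outline, something like this (a C sketch with the clipping itself reduced to stubs, and every name invented purely for illustration):

#include <stdio.h>

#define MAX_DOORS 8

typedef struct Room Room;

typedef struct {
    Room *leads_to;                 /* the adjoining room          */
    int   polygon;                  /* index of the "door" polygon */
} Door;

struct Room {
    const char *name;
    int   first_poly, npolys;
    Door  doors[MAX_DOORS];
    int   ndoors;
};

/* Stubs: the real versions would test the door polygon against the
   current clip region and compute the narrower region seen through it. */
static int door_visible(int door_poly, int clip)      { (void)door_poly; return clip > 0; }
static int clip_through_door(int door_poly, int clip) { (void)door_poly; return clip - 1; }

/* Walk rooms through their doors, narrowing the clip region each time,
   collecting the polygons that can contribute to the final scene. */
static void gather_room(const Room *r, int clip)
{
    printf("add %d polys from %s\n", r->npolys, r->name);
    for (int i = 0; i < r->ndoors; i++)
        if (door_visible(r->doors[i].polygon, clip))
            gather_room(r->doors[i].leads_to,
                        clip_through_door(r->doors[i].polygon, clip));
}

int main(void)
{
    Room hall  = { "hall",  0, 20, { {0} },          0 };
    Room lobby = { "lobby", 20, 35, { { &hall, 7 } }, 1 };
    gather_room(&lobby, 2);
    return 0;
}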

I hope to do two main things with this engine once useable - one would be a space game (No VISing, just distance checks), and the other would be an indoors game of some description (Using a VIS system like the one above).

Well if the space game isn't doing much in the way of visibility checks then a subset of a portal engine might be workable.

About the plugins. There are likely to be two types - Run-time plugins, which would just be linked where they are in memory, and file-based ones (e.g. from an application I might make to hold the 3D engine, to keep everything held together). An application could make things like hardware rendering easier to manage, since you would only have to set the main program to do hardware rendering rather than each game that uses the module (Hint hint PC's)

Well if everything is a module they could be installed into !System and then you only need to install the appropriate version for your hardware. However an application may prove better. I don't really know as I don't know how the new !Boot sequence works.

Hmmm... Lighting... How could it be implemented?

Depends on what you want cool

Bear in mind that not everything would want to use lighting though. Perhaps give the lightmap object/plugin the responsibility of setting an object's light info?

Perhaps each object should carry a flag as to whether it needs lighting or not. You then need some way to specify lights in a world and light against them.

Also that raises the question of properties themselves - How would an object have its properties set? Would it just be a case of the object having to be compatible, or would there be some system governed by the module, where a property is only set/read if it is in the target object?

I would guess that you'd want all objects to have all properties. If stored as individual bits they won't take up too much room. The application should know if a property "means" something in a given context. Mutually exclusive properties could even share bits.
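
For instance, something along these lines - the flag names are invented purely to illustrate the shared-bitfield idea:

#include <stdio.h>

enum {
    PROP_SOLID       = 1u << 0,
    PROP_TRANSPARENT = 1u << 1,
    PROP_LIT         = 1u << 2,   /* object wants the lighting plug-in  */
    PROP_TRIGGER     = 1u << 3    /* meaning depends on the application */
};

typedef struct {
    unsigned props;               /* every object carries every flag    */
    /* ...geometry, position, etc... */
} Object;

int main(void)
{
    Object wall = { PROP_SOLID | PROP_LIT };
    if (wall.props & PROP_LIT)
        printf("hand this object to the lighting plug-in\n");
    if (!(wall.props & PROP_TRIGGER))
        printf("no trigger behaviour in this context\n");
    return 0;
}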

ESPECIALLY SINCE NO-ONE ELSE SEEMS TO BE INTERESTED IN DOING SOMETHING TO HELP DESIGN/MAKE THIS THING (Except for Lee Johnston, who just gives me lots of things to think about, making me realise bits that I haven't thought out properly and generally degrading my morale unhappy)

Sorry, that's not my intention cool

 
Richard Goodwin Message #85176, posted by rich at 09:47, 3/1/2001, in reply to message #85175
Some very good points being made, but....
* Perhaps vertex maths if an expandable vertex system was given (e.g. 32 bit and 64 bit co-ords)
Ideally the underlying math routines should be capable of 64-bit math. On a StrongARM the UMULL and SMULL instructions make this easy and efficient. Lower processors will need to be catered for though.

Why cripple things before you start by requiring a 3D engine that works on anything less than a StrongARM(/XScale)? I mean, I still have older Acorn machines (right back to my original Electron), and still use many of them, but for a serious game we're already well past the sell-by date of anything less than 200MHz - even that's a bit slow for this type of game (taking into account the lack of peripheral hardware such as 3D graphics cards and floating point in most machines). I know the argument about selling to as many people as possible, but I think that's impossible when you start talking about 3D (<flamebait>have you ever played Doom on an ARM 3 machine, and that's not even "proper" 3D compared to things like Quake?</flamebait> smile )
 
Lee Johnston Message #85177, posted by johnstlr at 11:35, 3/1/2001, in reply to message #85176
Rich, I actually do agree with you. My point about the 64-bit math routines was because:

* like it or not there are a fair few ARM7500 machines out there

* a developer may not actually have a SA machine but may be writing a game for one. If all that's required to get the engine working on the lower processors is to write a few math macros and then dynamically switch the ones used depending on the underlying processor then I don't see that it's a huge overhead. Put another way, say I was to decide that my needs for a RISC OS laptop outweighed my needs for a SA machine. Without those few measly routines I wouldn't be able to aid development of the engine. However I would certainly not expect to play any games using it.

I'd certainly say though that RISC OS 3.5 should be an absolute minimum. It's not difficult to provide screen handling for 3.5+ machines and 3.1- machines, but it is more work - especially dealing with requests for screen modes with colour depths that simply aren't possible on lower machines.

 
Mark Quint Message #85178, posted by ToiletDuck at 12:11, 3/1/2001, in reply to message #85177
yup, about the backwards compatibility, it's got to be there if you plan to have a decent market available for the engine/game that uses the engine.
If you get a machine now, it's going to be either StrongARM or ARM7500FE, so there is going to be a bigger market to support, especially as Set-Top boxes, Inet TVs, & NCs (most of which use RISC OS) catch on. If you could support those, then you've built a product that can be sold to normal "pc" users, as well as to the tiny RISC OS market.
Getting back to the engine stuff, you're going to build a 3D engine, right? Not a 3D API (please say engine), as it's the engine that developers are now looking for.
As you say, OpenGL is just an API, and so the game engine must be built under that.
If you could build a 3D engine that could render indoor scenes, with decent lighting (I would suggest pre-compiling/VISing the scenes/maps like the Quake engine does) and nice texture handling, then developers will finally be able to produce some good 3D games.
I also have plans to write a 3D game (primarily indoor based, in 1st person), and so if I had a good engine & tools for it then I might actually be able to attempt it (but I need the tools as I can't program unhappy )
RO3.5+??? - damn, it won't work on our old A4000 then grin

good luck

Mark smile

p.s. lo Nathan, when are we having this IRC conf???? - i need to sort out some things like these game plans, the EMD levels & some Overcast stuff!!!, mail me sometime plz

[Edited by toiletduck at 12:13, 3/1/2001]

 
Jeffrey Lee Message #85179, posted by Phlamethrower at 12:26, 3/1/2001, in reply to message #85178
OK, thanks for the response people.

I think that for the design aspect at least it would be best if I work on it myself, only calling on you lot for help on specific aspects.
Once the kernel has been designed, you lot are free to offer your services to write lots of fancy plugins for it.

I think I've worked out the basic 'object' format that would be used. Each object would have a type word, a pointer to its source data (e.g. having a room full of the same monster would only need 1 copy of the polygon definitions in memory), and any extra raw data that would be needed.

The format of the raw data would be laid out in a type definition, which says things like 'the origin is at this offset and is a 32bit vertex', laying out the format. The source data would have a definition as well. SWI calls would be used to read a variable, and if the variable is not found in the object itself then its source will be searched. Adding variables to objects is likely to be impossible, so the layout must be pre-defined.

I'm thinking that each object could be made up of sub-objects - a model would be made from an array of polygon objects. This could make managing objects easier, hopefully without making it take up any more memory (Since it would just be raw data, the object definition saying 'X number of polygon objects').
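
Very roughly, in C terms - this is only one reading of the format described above, with every field name invented:

#include <string.h>
#include <stddef.h>

typedef struct {
    int         type;          /* object type word                             */
    const void *source;        /* shared definition, e.g. one copy of the
                                  polygon data for a room full of identical
                                  monsters                                     */
    size_t      raw_size;      /* per-instance raw data follows, laid out as
                                  the type definition describes                */
} ObjectHeader;

/* A type definition maps variable names onto offsets in the raw data,
   e.g. "the origin is at this offset and is a 32-bit vertex". */
typedef struct {
    const char *name;
    size_t      offset;
    int         size_bits;
} FieldDef;

/* Look a variable up in the object's own definition; if it isn't found
   the caller falls back to searching the source's definition instead. */
static const FieldDef *find_field(const FieldDef *defs, int ndefs, const char *name)
{
    for (int i = 0; i < ndefs; i++)
        if (strcmp(defs[i].name, name) == 0)
            return &defs[i];
    return NULL;
}

int main(void)
{
    static const FieldDef defs[] = { { "origin", 0, 32 }, { "frame", 12, 32 } };
    return find_field(defs, 2, "origin") ? 0 : 1;
}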

There is something else I've been thinking of, and that is a run-time compiled language. With all the work on polygons and things that the plugins would be making, accessing the polygon and vertex code through SWIs or a table of pointers may be too slow. If it was a run-time compiled language, then the polygon and vertex code could be inserted directly into the plug in (which would be written in the language), like an assembler macro. Of course this doesn't have to be a complete language, all it could be is a simple series of macros that are output as the script says, or even something like an AOF file where they are inserted into the precompiled code.

Of course any of these is very tricky, so it might be best to work on standard vertex formats, and simply have the vertex handling code released so that the plug in writers can use it directly. Anybody got any ideas on that?

As to the machine and OS, I'm likely to be using the extended BASIC compiler, which has support for the SA instructions, so support for older machines may be available through a simple macro or two, but since I have no older machine (RPC 600/700/ARM7500) to test it on, I will have no idea whether it runs at an acceptable speed.

I may soon have my hands on an Osaris, so can anyone who has info on them tell me what the 'plus pack' CTA are offering includes? Or even just what the basic version has? All I'm really after is a wall plug, the connection cable and maybe the web software. Once I get one I should be able to do a lot more programming, even though I won't be able to test it until I transfer it to the RPC. Also does anyone know if it comes in a range of colours? ;-)

 
Jeffrey Lee Message #85180, posted by Phlamethrower at 12:33, 3/1/2001, in reply to message #85179
If you get a machine now, it's going to be either StrongARM or ARM7500FE, so there is going to be a bigger market to support, especially as Set-Top boxes, Inet TVs, & NCs (most of which use RISC OS) catch on. If you could support those, then you've built a product that can be sold to normal "pc" users, as well as to the tiny RISC OS market.

I won't really be aiming to support those, since I have no idea how the OS is different smile

Getting back to the engine stuff, you're going to build a 3D engine, right? Not a 3D API (please say engine), as it's the engine that developers are now looking for.

It will essentially be a 3D API, but I will write at least some simple 3D code to make sure that it works. Advanced things like textured polygons can just be ripped from someone else's code, and stuck in with the official release as extra plug-ins.

If you could build a 3D engine that could render indoor scenes, with decent lighting (I would suggest pre-compiling/VISing the scenes/maps like the Quake engine does) and nice texture handling, then developers will finally be able to produce some good 3D games.

A VIS program is likely to be made at some point

I also have plans to write a 3D game (primarily indoor based, in 1st person), and so if I had a good engine & tools for it then I might actually be able to attempt it (but I need the tools as I can't program unhappy )

The tools are likely to be the first program I write for the engine

RO3.5+??? - damn, it won't work on our old A4000 then grin

Go and order an Omega now, damnit!

 
Mark Quint Message #85181, posted by ToiletDuck at 14:43, 3/1/2001, in reply to message #85180
When you were talking about polygons you must work out how you are going to manipulate them, and how they are operated within the engine.

The way I was thinking it could work would be:

To use faces instead of polygons - one face is going to be up to 6 times quicker, with the only "disadvantage" being that they may be harder to build maps with, but even then you will be able to just use the faces you want to, and if a prefab system (which could be customised for the game) was introduced to the editor then you also have a time saving route for the developer.

Each face should be either a rectangle/square or a triangle, with each angle being able to be altered from the editor. (this would allow you to build complicated shapes from any number of planes)

With the idea of using faces, each face should be customisable, so that when you place your face in the editor, it perhaps starts life with the settings of it being transparent, and that it is not solid. You decide that this face should form a wall, so you then map an appropriate texture to it (from a texture library as part of the editor), and change its state to solid. The settings that could be applied to each face could reach further, perhaps settings like its transparency, or reflectiveness.

With this kind of system, any function within the game could be used, for example if you wanted an event to happen, you could create a "trigger" face, which triggers the event when touched. This could all be done by applying the different settings through the editor (BTW the editor should look like Worldcraft by Valve on Windows - it was used to create the levels in Half-Life).

The idea of applying functions for faces, or groups of faces (e.g. a cube) would help solve some of the problems like lighting, where a group of faces could be assigned to have a light texture, and then give off X amount of light, and be X colour.

One good source of ideas, and understanding of methods that could be applied to a 3D engine/game, is to look at how levels are made/constructed in games using the Quake engine (in that you can "create" 2 different objects: either a brush, which is the basis of objects such as walls and can have textures applied to it, or an entity, which creates a "fixed sized brush" that will not appear in the game, but can hold information such as triggers, or a light).

For lighting, once a map has been completed, it should be compiled, during which light maps are created from the information given from the faces, so that when the engine is running the compiled map, it just "pastes" the light map onto the appropriate face, leaving the poor old StrongARM to actually get on with displaying the game at a decent framerate, rather than having to VIS the level on the fly.
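
The run-time half of that really is cheap - roughly this per texel (illustrative RGB555 maths, not anyone's actual plotter):

#include <stdio.h>

/* Scale a 16bpp texel by a pre-computed lightmap value (0-255). All
   the expensive work - tracing the lights to build the lightmap -
   happened when the map was compiled. */
static unsigned short light_texel(unsigned short texel, unsigned char light)
{
    unsigned r = ((texel >> 10) & 31) * light / 255;
    unsigned g = ((texel >> 5)  & 31) * light / 255;
    unsigned b = ( texel        & 31) * light / 255;
    return (unsigned short)((r << 10) | (g << 5) | b);
}

int main(void)
{
    printf("%04x\n", light_texel(0x7FFF, 128));   /* white at half brightness */
    return 0;
}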

hope that kind of helps

Mark smile (sorry for going on for too long grin)

hmmm Omega - I would if I had the money unhappy - but hey, I'm 1.5 years off going to Uni, & I might be able to pull off making a commercial RISC OS game....


[Edited by toiletduck at 14:49, 3/1/2001]

 
Jeffrey Lee Message #85182, posted by Phlamethrower at 16:14, 3/1/2001, in reply to message #85181
Hmm, not quite what I was after there Mark.

Certainly the editor could use faces, but I'm pretty sure that the engine will stick to triangles.

When I get round to writing an actual game engine for the module rather than just the stuff to draw the screen, I probably will do stuff like Quake and HL can.

And I'll make sure that polygons/brushes/whatever can be either textured, solid, none or both now that you've reminded me.

 
Mark Quint Message #85183, posted by ToiletDuck at 11:06, 7/1/2001, in reply to message #85182
what happened to the Scorpion 3D engine??
I just stumbled over its website: http://www.scs.leeds.ac.uk/stu/Scorpion/
but it hadn't been updated since February 2000. unhappy
the screenshots look good though smile
 
Lee Johnston Message #85184, posted by johnstlr at 19:57, 7/1/2001, in reply to message #85183
Last I heard (and I can't remember exactly where I heard it, so I could be wrong) was that Stu had got a job and hasn't the time to get Scorpion into a state that could be distributed. Story of the RISC OS market really.
 
Nathan Message #85185, posted by Wrath at 07:19, 8/1/2001, in reply to message #85184
Last I heard (and I can't remember exactly where I heard it, so I could be wrong) was that Stu had got a job and hasn't the time to get Scorpion into a state that could be distributed. Story of the RISC OS market really.

I probably told you. Lee is right, Stu apparently got a job and doesn't have time for it anymore. I think it's the new way of getting a job - writing a 3D engine that is heavily optimised and showing it to a computer games company. Same thing happened to Paul Thomson.

 
Message #85186, posted by chrisbazley at 16:47, 8/1/2001, in reply to message #85185
Getting back to the engine stuff, you're going to build a 3D engine, right? Not a 3D API (please say engine), as it's the engine that developers are now looking for.
As you say, OpenGL is just an API, and so the game engine must be built under that.

OpenGL isn't "just an API". An implementation of OpenGL under RISC OS would be incredibly valuable.

Also, 3D engines are not built "under" OpenGL, they are built on top of it. That is why it is a pre-requisite for modern games.

The point of OpenGL is that every single game developer doesn't have to write their own texture mapping routine. It cuts out the code which would otherwise be written time and time again. It isn't some ubiquitous game engine which all companies use.

Programmers decide upon a design for a game engine, and implement it USING OpenGL. OpenGL itself does not dictate the design of all game engines!

 
Message #85187, posted by chrisbazley at 17:03, 8/1/2001, in reply to message #85186
There's no point porting OpenGL because someone else is likely to try it (e.g. the Omega API),

And this is the exact reason why, despite having the impulse to start looking at this stuff again, I haven't. I want to see what Microdigital propose. We don't want a situation where we have multiple incompatible APIs. If only we could get some information out of them....

LOL! As if Microdigital have the programmers and time to implement OpenGL under RISC OS. They haven't even done the sound/network drivers for Mico yet!!!

(Update: Well, apparently they have done the sound drivers, now. Also, in clarification, I mean that they will probably not be writing an implementation of OpenGL in *software*. I wouldn't want to imply that there will be no drivers to support their hardware - which isn't really the subject of this thread.)

If a group of people started a decisive (and visible) effort to set out a RISC OS OpenGL API, then Microdigital would be foolish to ignore it.

I don't see that the idea of implementing OpenGL need be in any way in conflict with Jeffrey's proposal of writing a game engine. He can write it in any way he likes, it just means he wouldn't have to write low-level rendering routines.

[Edited by chrisbazley at 21:23, 11/1/2001]

 
Mark Quint Message #85188, posted by ToiletDuck at 19:59, 8/1/2001, in reply to message #85187
hehe u like ur OpenGL grin
 
Lee Johnston Message #85189, posted by johnstlr at 10:45, 9/1/2001, in reply to message #85188
hehe u like ur OpenGL grin

That's because it's quite a nice API cool

There are issues though. Implementing it as a module could have serious drawbacks - OpenGL relies on lots of little API calls that do relatively little. The SWI overhead will eat you alive. Of course there is always the option of exporting the module interface as a jump table but (and be honest here) does anyone really know how to do this?

There are also other issues, especially if you support multiple applications at once (desktop, multiple games loaded etc). You need some way of identifying the context for each application. Simply passing in a handle to each call, while easy, would mean diverging from the OpenGL API. I guess you could add an extension call like "SetApp(AppHandle_t handle)" which the app could call before it called any other functions. This would work pretty well in our co-operative multitasking environment. It'd fall flat on its face in a pre-emptive environment.
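
On the library side that hypothetical SetApp extension might look something like this (a sketch only - the state kept per context is invented):

#include <stdio.h>

typedef int AppHandle_t;

typedef struct {
    int bound_texture;           /* ...plus the rest of the per-application state */
} Context;

#define MAX_APPS 8
static Context contexts[MAX_APPS];
static Context *current = &contexts[0];

/* The one extension call: select which application's context all
   subsequent calls operate on. Fine under co-operative multitasking;
   unsafe if another task could run between SetApp and those calls. */
void SetApp(AppHandle_t handle)
{
    current = &contexts[handle];
}

void BindTexture(int tex)        /* stand-in for an ordinary API call */
{
    current->bound_texture = tex;
}

int main(void)
{
    SetApp(0); BindTexture(5);
    SetApp(1); BindTexture(9);
    printf("%d %d\n", contexts[0].bound_texture, contexts[1].bound_texture);
    return 0;
}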

Speculation's good right?

 
Alex Macfarlane Smith Message #85190, posted by aardvark at 11:54, 9/1/2001, in reply to message #85189
<quote>

There are issues though. Implementing it as a module could have serious drawbacks - OpenGL relies on lots of little API calls that do relatively little. The SWI overhead will eat you alive. Of course there is always the option of exporting the module interface as a jump table but (and be honest here) does anyone really know how to do this?
</quote>

I think QTM allows you to export the module interface, and the source for this is at http://www.qthemusic.free-online.co.uk/ iirc.

Alex.

 
Message #85191, posted by chrisbazley at 13:41, 9/1/2001, in reply to message #85190
How about this then: Have a SWI interface for language independence (e.g. for use in BASIC, and for testing purposes).

Secondly, the module would have the ability to export the addresses of the OpenGL routines to applications written using the C library stubs. Like usage of the Shared C Library.

It would have to refuse to die (RMKill, RMReinit) if there were client programs running to which it had exported the interface.

I think SWI instructions operate like this:
ARM Instruction decoded -> Interrupts program and goes to OS -> OS decides which module SWI belongs to -> Module SWI handler calls appropriate routine in module.

Surely exporting the module interface would only be marginally simpler?

Application BLs to correct place in exported jump table -> Jump table B(ranch)s to OpenGL routine in module.

This still involves breaking the processor pipeline a few times... would it be significantly quicker?
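
In C terms the exported interface is just a table of function pointers handed out once, after which every call is an ordinary branch rather than a SWI (a sketch; the real thing would involve assembler veneers and module workspace, all glossed over here):

#include <stdio.h>

/* The table the module fills in once and hands to the application.
   After that, calling a routine is an indirect branch - no SWI decode
   and no trip through the OS dispatcher. */
typedef struct {
    void (*begin)(void);
    void (*vertex)(int x, int y, int z);
    void (*end)(void);
} GLTable;

/* "Module" side: the real routines. */
static void my_begin(void)                 { printf("begin\n"); }
static void my_vertex(int x, int y, int z) { printf("v %d %d %d\n", x, y, z); }
static void my_end(void)                   { printf("end\n"); }

/* Stand-in for the single SWI: "give me your jump table". */
static void get_table(GLTable *t)
{
    t->begin = my_begin; t->vertex = my_vertex; t->end = my_end;
}

int main(void)                              /* application side */
{
    GLTable gl;
    get_table(&gl);                         /* one slow call...           */
    gl.begin();                             /* ...then cheap direct calls */
    gl.vertex(0, 0, 0);
    gl.vertex(100, 0, 0);
    gl.end();
    return 0;
}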

I think we need to talk to someone who knows about the Shared C Library, like the guy who maintains the SharedCLib stubs for GNU C. And Microdigital, of course...
