OpenSceneGraph (OSG) FAQs 51 thru 100

FAQs 1 thru 51 | FAQs 51 thru 100 | FAQs 101 thru 150 | FAQs 151 thru 200


    This unofficial Open Scene Graph (OSG) FAQ Part 2 (51 thru 100) is provided by Gordon Tomlinson. I hope you find the information contained in these FAQs useful and helpful. If you have a tip, FAQ or code snippet you would like to share, you can send it to me at faqs@3dscenegraph.com and I will add it to the FAQ; or if you spot an error in a FAQ, or a change in OSG that makes a topic out of date, let me know and I will get the items updated.


     

  1. Integrated Graphics Card Problems
  2. Why Does my Application only Run at 1-2 Hz
  3. What is a Pixel Format
  4. What does LOD stand for and What is a LOD
  5. What is a Symmetric Viewing Frustum
  6. What is an Asymmetric Viewing Frustum
  7. What is an Orthographic Viewing Frustum
  8. What are Isectors
  9. What is a Line Segment
  10. What are the Differences between Real-time and Animation
  11. How Can I Convert my 3d Models to an OSG Format
  12. How do I Calculate the Vertical Field of View
  13. How do I Calculate the Horizontal  Field of View
  14. Why Does glReadPixels Capture other Windows
  15. How can I Force my Graphics Window on Top
  16. How to stop a Window being On Top after using &wndTopMost
  17. How can I contribute to the Open Scene Graph
  18. How can I make osg::ParticleEffect run forever
  19. Best way to achieve Picking with OSG
  20. What are Reference Pointers
  21. How can I retrieve XYZ from scene view matrix
  22. How do I disable the automatic DOF animations
  23. How do I enable the automatic DOF animations
  24. What are Render Bins
  25. Can I share scene handlers between multiple camera
  1. What is the OSG co-ordinate system
  2. How to load a model without textures
  3. How to replay a recorded animation path in osgViewer
  4. How can I make and use a Perlin noise texture in OSG
  5. What is Producer
  6. Where can I find more information on Producer
  7. What is osgviewer
  8. What are the command line options for osgviewer
  9. How to set the background color in OSG
  10. Windows equivalent of GLX_SAMPLES_SGIS & GLX_SAMPLES_BUFFER_SGIS
  11. Why is there no cursor showing when I use Producer
  12. Texture not showing on my polygons why
  13. How to Print the SceneGraph out to a file
  14. How to specify the amount of Video memory to use
  15. Can I tell osgconv to use only the lowest LOD's from OpenFlight  files
  16. How to get the Normals from an Intersection test
  17. How can I dynamically turn a camera On or Off
  18. Where are the Producer Camera Config Examples
  19. Why do some of my DDS textures appear inverted
  20. Does OSG have an equivalent of glPushAttrib
  21. How can I search the OSG Mailing List

 


 

* 51  *  Integrated Graphics Card Problems

     

    Q: I'm using an Integrated Graphics Card and I'm having problems with artefacts appearing, and my frame rate is very poor.

    This is quite a common problem when using an integrated graphics card (such as the Intel 82845).

    Most OpenGL-based programs such as Vega, Performer and OSG will more than likely have problems when they are used with integrated graphics such as the common Intel 82845 chipset.

    The first thing to do is to visit the manufacturer's web site or contact their support channels to obtain the latest graphics driver for the card.

    Installing the newest graphics driver normally helps to some extent; make sure you select at least 24-bit or 32-bit colour.

    Also make sure to allocate as much RAM to the card as possible; you will need at least 64 MB, and the more the card supports the better. If you only have 32 MB then your performance will not be good.

    The performance of an integrated card will in most cases be a lot worse than a dedicated graphics card, as integrated cards mostly use system RAM, which slows them down and also places a lot of the graphics processing on the machine's normal CPU.

    To be honest, integrated cards are terrible for 3D real-time graphics; they're fine for normal desktop activities, but not for real-time rendering. The best recommendation I can give is to install a dedicated graphics card; you can get a very reasonable card these days for around $100 that will blow away the integrated card.

     

* 52  *  Why Does my Application only Run at 1-2 Hz

     

    There can be many reasons that your simulation/application is running at only 1-2 Hz or less.

    Typically this indicates that you may have dropped into software rendering mode on your graphics card. This can happen when you set up OpenGL and request the pixel format for your OpenGL window: normally it means you have asked for a format or a setting that the card cannot support or does not support. I say cannot support as it may be that the resources of the card are limited at the time you request the pixel format; for example your resolution is too big for a 32-bit Z buffer, or another OpenGL application has already consumed most of the resources. Or you are requesting a setting not supported by your graphics card. You can find the formats supported by your card with the following:

    On Irix you can use findvis on the command line to display the available bit plane configurations supported on the Irix system

    On Windows you can use a program from Nvidia to show the available bit plane configurations

    Otherwise it might be the case that you are trying to:

    • display too much
    • too many polygons
    • too high a screen resolution
    • too many textures
    • textures that are too big (for real-time try to stick below 512x512)
    • too many state changes
    • an application that is too complicated, making you CPU bound
    • being I/O bound
    • etc.

    In this case you have to try to simplify your application: reduce the data, reduce the application's workload, get a faster machine, maybe use a multi-processor machine, get a better graphics card, reduce your resolution, etc.

     

* 53  *  What is a Pixel Format

 

    Each OpenGL window uses a frame buffer, which is a collection of bit planes storing the information about each pixel. The organization of these bit planes defines the quality of the rendered images and is known as a Pixel Format.

    Pixel formats are made up of different bit planes which are allocated for features such as:
     

      • colour information (RGB)
      • Alpha (transparency)
      • depth buffer(Z-bits)
      • Samples
      • Accumulation RGB Buffer
      • Accumulation Alpha Buffer
      • Stencil
      • Buffer Mode (Single/Double(default))

    Note that support for the various pixel format configurations and combinations is not uniform across different Windows graphics cards, Linux systems and Irix systems.

    Vega will ask the system for a bit plane specification supplied through the LynX Windows panel settings or through code; the request may not be granted. When the notification level (in the Systems panel) is set to Info or higher, messages tell the user which bit plane configuration is actually being used.

    There are generally two methods of specifying a bit plane configuration.
     

    • The first is to request individual assignments in each bit plane category by selecting values from a collection of option menus
    • The second method is to specify the value of the OpenGL Pixel Format that contains a configuration acceptable to your application

    On Irix you can use findvis on the command line to display the available bit plane configurations supported on the Irix system

    On Windows you can use a program from Nvidia to show the available bit plane configurations http://developer.nvidia.com/object/nvpixelformat.html

    Color RGB
    Specifies the given number of bits for each of the Red, Green, and Blue components of each picture element. Larger values take longer to clear, and this may impact performance. Larger values per R,G,B produce smoother images because more colors are available. Vega runs in RGB or RGBA mode, not color index mode. 

    Alpha
    Some systems have storage of a fourth component, called alpha. This is used for transparency blending and storage of transparency values. It is possible to do some transparency methods without alpha planes. Alpha planes are required for any transparency method which requires that the current transparency level of a pixel be stored for later use.

    Depth Buffer Z Bits
    Depth information is stored in a Z-buffer. Larger numbers of bits allocated for the Z-buffer improve depth calculations and reduce "Z fighting". This effect occurs when the distance between two surfaces cannot be resolved within the numeric resolution. The image then flickers between the two surfaces. The near and far clipping plane distances also influence how the Z-bits store the depth information

    Samples
    Some systems support multi-sampling. This technique allows each screen pixel to be resolved from a set of pixel fragments rendered at a higher resolution. This allows smoother, less jagged images to be rendered. These images are referred to as anti-aliased. Aliasing is an artifact of digital sampling. The higher the number of samples supplied for multi-sampling, the higher the quality of the images. The number of multi-samples available on a system is influenced by the resolution to which the display is set. On Windows systems this may need to be set at the graphics driver level first.

    Stencil
    The Stencil value defines the number of bit planes allocated for the stencil buffer. The statistics screen describing the fill rate of an application requires that there be at least 4 stencil planes allocated for the window's visual.

    Accumulation
    Specifies the number of red, green, and blue bits for an accumulation buffer, and also the number of alpha planes in the accumulation buffer. Some machines have hardware storage for accumulation buffers.

 

 

* 54  *  What does LOD stand for and What is a LOD


    LOD is an acronym for Level Of Detail.

    Basically the idea behind LOD processing is that objects which are barely visible don't require a great amount of detail to be shown in order to be recognizable.

    Objects are typically barely visible either because they are located a great distance from the eye point or because atmospheric conditions are obscuring visibility.

    Both atmospheric effects and the visual effect of perspective minimize the importance of objects at ever increasing ranges from the observer's eye point: the perspective foreshortening of objects makes them appear to shrink in size as they recede into the distance.

    To improve performance and to save rendering time, objects that are visually less important in a frame can be rendered with less detail.

    The LOD approach optimizes the display of complex objects by constructing a number of progressively simpler versions of an object and selecting one of them for display as a function of range.

    An undesirable effect called popping occurs when the sudden transition from one LOD to the next LOD is visually noticeable.

    To remedy this, SGI graphics platforms offer a feature known as Fade Level of Detail that smoothes the transition between LODs by allowing two adjacent levels of detail to be sub-sample blended. This is now supported by most scene graphs, as long as their graphics support multi-sampling.

    Here's a link to a Practical overview of an LOD 

 

* 55  *  What is a Symmetric Viewing Frustum


    The Symmetric frustum defines the perspective projection applied to all scene elements processed for a channel. The near clipping distance is used to form a plane called the near clipping plane. The far distance defines the far clipping plane.

    For the symmetric frustum, both these planes are perpendicular to the line of sight of the viewer. The horizontal and vertical FOV's (fields of view) determine the radial extent of the view into the scene. FOV's are entered as degrees for the full width of the view desired. Entering a -1 for either but not both FOV causes the system to aspect match that FOV axis.

    For example suppose the horizontal FOV is 45 degrees and the vertical is set to -1. Once the window and channel are sized, the system selects the appropriate FOV degree for the vertical FOV to maintain an aspect ratio equal to that of the channel viewport.

    See vpChannel and the Vega Prime Programmers Guide for further details.

 

* 56  * What is an Asymmetric Viewing Frustum


    An Asymmetric frustum or oblique projection is similar to the symmetric projection, except that the line connecting the center of the near face of the frustum with the eyepoint is not perpendicular to the near plane. That is, the line of sight is off-axis. This is useful for creating video walls, or matching the visual system to a specific video projection system, like a dome where the projection device is off axis to the screen.

     

    This type of perspective frustum requires six values to define it. Clicking on the Asymmetric Frustum option displays the six entry fields. The near and far values are the same as the symmetrical frustum.

    The left, right, bottom, and top values define the side planes of the frustum. They are the angle offset in degrees for the plane they represent.

    See vpChannel and the Vega Prime Programmers Guide for further details.

* 57  *  What is an Orthographic Viewing Frustum



    The Orthographic projection is a non-perspective projection. This means objects appear to be the same size no matter what their distance is from the viewer. It is generally used for a map view or HUD overlay view.

    The sides of the frustum are parallel to the line of sight of the viewer. The Near and Far distances define the near and far clipping planes.

    The Left, Right, Bottom, and Top values define the frustum side planes. These values bear a direct relationship to the scale of the object being viewed.

    See vpChannel and the Vega Prime Programmers Guide for further details.

    Also see the following Viewing Frustum Overview Image

 

* 58  *  What are Isectors

     

    Isectors provide the ability to handle collision detection between objects within a scene graph and are an essential part of most visual simulations.

    For example, a typical need is to obtain the current Height Above Terrain (HAT) in a flight simulator or a driving simulator; this is determined by firing a vertical line segment from the aircraft or vehicle towards the terrain/ground and calculating the distance between the aircraft or vehicle and the intersection point on the ground.

    Another example is the use of an Isector to pick or select things in the scene; this is typically done using a Line of Sight (LOS) isector.

     

* 59  *  What is a Line Segment


    Generally a line segment is talked about and used as part of an Isector, which is used for collision detection.

    A line segment in this case is defined by 2 XYZ vectors, a Begin and an End position. A vpIsector class such as vpIsectorLOS will position and orientate the line segment.

    Basically speaking, the Isector will traverse its target scene graph and test each node's bounding sphere against the line segment. If no intersection is found, then the node and all the node's children are rejected; this allows for fast collision detection.

    If an intersection hit is encountered with the bounding sphere, the test can then become a more fine-grained test of each child node for an intersection until the leaf geometry node is reached. Data on the collisions detected can then be stored, such as pointers to the node, the position of the intersection, the normal perpendicular to the intersection, etc. (This is of course an oversimplification of a more complicated process.)

     

* 60  *  What are the Differences between Real-time and Animation Applications


    Animations are typically used for films, high resolution renderings, images for print, and pre-programmed demonstrations.

    Real-time applications are used where responding to user input is part of the simulation, for example during flight training and interactive architectural demonstrations. Both real-time and animation applications simulate real and imaginary worlds with highly detailed models, produce smooth continuous movement, and render at a certain number of frames per second.

    Some of the main differences are: 

    • Real-time application frames are rendered in real time, which means the frames are continuously recalculated and rendered as the user changes direction and chooses where to move through the scene
    • Animation frames are pre-rendered, which means the animator sets the order of the frames and chooses the parts of the scene to view. Each frame can take hours to render
    • Real-time applications are highly interactive, and the user controls the movement of objects within the scene; animations do not allow for human interaction, and the user is a passive participant 
    • The typical emphases of real-time applications are interactivity and purpose. Models in real-time applications typically have less detail than models used in animations, to increase the rendering speed and shorten the latency period, which is the time delay from user input until the application makes an appropriate response. To achieve realistic real-time simulations, the latency period must be too short for the user to perceive
    • The emphases of animations are almost always non-interactive aesthetics and visual effects. Models in animations usually have much more detail, mainly because frames are pre-rendered (which can take hours or days), so the effect on drawing speed can be pre-determined
    • Real-time applications typically require a frame rate of around 60 frames per second, though this may change depending on application goals and scene complexity
    • Animation-based applications usually display at a standard 24 frames per second for every pre-rendered sequence of images (each frame of which can take hours to render, compared to 16.666 milliseconds per frame for real-time at 60 Hz)

 

* 61  * How Can I Convert my 3d Models to an OSG Format


    This would depend on what format your source models are in.

    Typically you should  be able to use a format conversion program such as Polytrans or Deep Exploration. These offer a good selection of import formats and the ability to output OpenFlight models.
     

* 62  *  How do I Calculate the Vertical Field of View


    Here's one way that you could do this, along the following lines:
     

Formula :

    aspect_ratio = channel_width / channel_height

    width = tan ( horizontal_fov / 2.0 ) * 2.0 * near_clip

    height = width / aspect_ratio

    vertical_fov =  2.0 * atan(  height / ( 2.0 * near_clip  ))

     

 

     

* 63  * How do I Calculate the Horizontal Field of View


    Here's one way that you could do this, along the following lines:

     

Formula :

    aspect_ratio  =  channel_height / channel_width

    height = tan ( vert_fov / 2.0 ) * 2.0 * near_clip

    width = height / aspect_ratio

    horizontal_fov =  2.0 * atan( width / ( 2.0 * near_clip  ))

     

 

     

* 64  * Why Does glReadPixels Capture other Windows


    Q: When I do a glReadPixels and write this out as an image file or to an AVI file, I get other windows captured. Why?

    Presuming that when you call glReadPixels you have other windows overlapping the graphics window, then it is likely that you will see the other windows in your capture.

    Unfortunately this is not so much a platform issue as it is a consequence of the OpenGL specification.

    Paraphrasing section 4.1.1 "Pixel Ownership Test": ...if a pixel in the frame buffer is not owned by the GL context, the window system decides the fate of the incoming fragment; possible results are discarding the fragment...  Note that no mention is made of whether front or back buffer; it's entirely the window system's call. Any code depending on a particular implementation's behaviour is very non-portable.

    This seems to be more of a problem for Windows users and not as much on X11-based OSes (although not guaranteed).

    On Windows you can force your application to stay on top (see FAQ 65) and then glReadPixels will capture just the application's window.

     

* 65  * How can I Force my OSG Window on Top


    On Windows this is quite straightforward using the following on your window (MFC CWnd member functions shown; note wpos must be initialised first, e.g. via GetWindowRect, although with SWP_NOMOVE | SWP_NOSIZE the position and size arguments are ignored anyway):
     

Code :

--


    RECT wpos;
    GetWindowRect( &wpos );

    SetFocus();

    BringWindowToTop();

    SetWindowPos( &wndTopMost,
                  wpos.left,
                  wpos.top,
                  wpos.right  - wpos.left,
                  wpos.bottom - wpos.top,
                  SWP_NOMOVE | SWP_SHOWWINDOW | SWP_NOSIZE );

     

 

     

     

* 66  *  How Can I Stop My Window being On Top after using &wndTopMost

     

    How can I stop my OSG window from being on top after using &wndTopMost as in FAQ 65?

    On Windows this is quite straightforward using the following on your window (MFC style again; as in FAQ 65, wpos must be initialised first, e.g. via GetWindowRect):
     

Code :

--


    RECT wpos;
    GetWindowRect( &wpos );

    SetWindowPos( &wndNoTopMost,
                  wpos.left,
                  wpos.top,
                  wpos.right  - wpos.left,
                  wpos.bottom - wpos.top,
                  SWP_NOMOVE | SWP_SHOWWINDOW | SWP_NOSIZE );

     

 

     

     

* 67  *  How can I contribute to the Open Scene Graph


    You need to send the changes or additions as whole files to the osg-submissions mailing list along with an explanation of the changes or new features.  Please do not send diffs or copy and paste extracts of the changes in emails, as these will be simply discarded by the integrators, as they are too unreliable for review and merging and can lead to too many errors.

    Alternatively you can also post changes and submissions under the Community section of the Open Scene Graph web site. This is particularly appropriate for complete new functionality such as NodeKits and plug-ins; you can then inform the world of the entry via the osg-users or osg-submissions lists.

     

* 68  * How can I make osg::ParticleEffect run forever


    Simply set the lifetime of the ParticleEffect to '0' and the effect should then continue to run until you stop the effect or kill the application:

    Particle::setLifeTime( 0 );

     

* 69  * Best way to achieve Picking with OSG


    Q: I want to do picking in OSG. Should I use the osgpick demo, which uses the IntersectVisitor, or would I be better using OpenGL directly, such as in Max Rheiner's GLPick demo?

    Using IntersectVisitor is more efficient than using GL pick. The reason for this is that GL pick requires a round trip to the graphics pipeline, which is I/O bound and generally an expensive operation. The IntersectVisitor does ray/line segment intersections very efficiently by means of the scene graph, nodes' bounding spheres and trivial rejection, all in CPU, memory-bound operations. 

    Note that what is lacking in the current IntersectVisitor implementation is the ability to pick lines and points. There are only ray intersections, which can intersect triangles and bounding volumes.

     

* 70  *  What are Reference Pointers


    Q: I see references to "Reference Pointers" all through the OSG source, but what are Reference pointers?

    "Reference Pointers" are also known as "Smart Pointers" or "Auto Pointers"; they may have different functionality but are principally the same thing.

    In brief, "Reference Pointers" are C++ objects that simulate normal pointers by implementing operator-> and the unary operator*. In addition to sporting pointer syntax and semantics, "Reference Pointers" often perform useful tasks such as memory management, reference counting and locking, all under the covers, thus freeing the application from carefully managing the lifetime of pointed-to objects.

    Here's a link to a great write-up on Reference Pointers with OSG by Don Burns; this should give you more than enough information to get by:

    http://dburns.dhs.org/OSG/Articles/RefPointers/RefPointers.html

    Other Articles by Don can be found here http://dburns.dhs.org/OSG/Articles

     

* 71  *  How can I retrieve XYZ from the scene view matrix

     

    To retrieve the current XYZ from a SceneView's view matrix you can do something along the lines of:

     

Code :

--

     

    osg::Matrix matrix = m_sceneView->getViewMatrix();

    // Translation part of the view matrix:
    osg::Vec3 xyz( matrix(3,0), matrix(3,1), matrix(3,2) );

    // Note: for the eye position in world coordinates you normally want
    // the translation of the *inverse* view matrix:
    osg::Vec3 eye = osg::Matrix::inverse( matrix ).getTrans();

     

 

 

* 72  *  How do I disable the automatic DOF animations


    For a single DOF transformation node you need to find or get a pointer to the node and then simply call:
     

Code :

--

     

    osgSim::DOFTransform *m_myDofTransform = findMyDofNode( ... );

    m_myDofTransform->setAnimationOn( false );

     

 

    For multiple DOF nodes you need to load your model or grab a pointer to a node, then traverse it with a custom NodeVisitor and call 'doftransform->setAnimationOn( false )' on all the DOF transform nodes found. Code for the node visitor would look something like the following:

     

Code :

--

     

     

    class CswitchOffAnimation : public osg::NodeVisitor
    {
    public :

        CswitchOffAnimation() :
            osg::NodeVisitor( osg::NodeVisitor::TRAVERSE_ALL_CHILDREN ),
            m_aniState( false ) {}

        // osg::NodeVisitor has no apply() overload for osgSim::DOFTransform,
        // so catch all Transforms and test the type with a dynamic_cast
        virtual void apply( osg::Transform &transform )
        {
            osgSim::DOFTransform *doftransform =
                dynamic_cast< osgSim::DOFTransform * >( &transform );

            if ( doftransform != NULL )
                doftransform->setAnimationOn( m_aniState );

            traverse( transform );
        } // func apply

        void setState( const bool state ) { m_aniState = state; }

    private :

        bool m_aniState;

    }; // class CswitchOffAnimation

     

     

 


    You can then call the node visitor on your node or newly loaded model:

     

Code :

--

     

    CswitchOffAnimation  setAniVisitor;

     

    //

    // Switch the animations OFF

    //

    setAniVisitor.setState( false );

     

    m_myNode->accept( setAniVisitor );

     

     

     

    //

    // Switch the animations On

    //

    setAniVisitor.setState( true );

     

    m_myOtherNode->accept( setAniVisitor );

     

     

 

     

* 73  * How do I enable the automatic DOF animations

     

    For a single DOF transformation node you need to find or get a pointer to the node and then simply call:
     

Code :

--

     

    osgSim::DOFTransform *m_myDofTransform = findMyDofNode( ... );

    m_myDofTransform->setAnimationOn( true );

     

 

    For multiple DOF nodes you need to load your model or grab a pointer to a node, then traverse it with a custom NodeVisitor and call 'doftransform->setAnimationOn( true )' on all the DOF transform nodes found.

    See the code for the node visitor in FAQ 72.

     

* 74  *  What are Render Bins


    A quick overview of render bins:

    During the cull traversal, Open Scene Graph can rearrange the order in which geometry is rendered for improved performance and image quality. It does this by binning and sorting the geometry.

    Binning is the act of placing geometry into specific bins, which are rendered in a specific order. Open Scene Graph provides two default bins:

    • one for opaque geometry
    • one for blended, transparent geometry

    The opaque render bin is drawn before the transparent render bin so that transparent surfaces can be properly blended with the rest of the scene.

    Open Scene Graph applications are free to add new render bins, and to specify arbitrary render bin orderings and the type of sorting within the render bins themselves.

    A good source of information on scene graph components, traversals and render bins can be found in the SGI Performer online documentation.

     

* 75  *  Can I share scene handlers between multiple cameras

     

    Unfortunately, currently the answer is no, you cannot share scene handlers between cameras. 

    Each camera must have a unique SceneHandler, each SceneHandler will have its own SceneView, and each SceneView's State should have a unique contextID.

    The pseudo code goes something like this:

     foreach camera N:
           cameraN = create Camera
           sh      = create osgProducer::OsgSceneHandler
           sh->getSceneView()->getState()->setContextID( N )
           cameraN->setSceneHandler( sh )

     

* 76  *   What is the OSG co-ordinate system

     

    The short answer is that most of the Open Scene Graph examples and manipulators adhere to what is found in most simulation packages, which is:

    • X+ = "East"
    • Y+ = "North"
    • Z+ = "Up"

     

    The orientation is imposed by the osgGA matrix (and therefore camera) manipulators.  By default the osg core does not impose anything on the OpenGL default which is:

    •  X+  "to the right"
    •  Y+  "up"
    •  Z+  "out of the screen"

     

* 77  *  How to load a model without textures


    Currently there is no formal way of loading a model into Open Scene Graph and having the loader ignore (not load) the model's textures.

    You could possibly:

    • Set the file paths so the textures are not found
    • Rename your texture directory
    • Create a copy of your model with no textures
    • Load the model into OSG, traverse the tree and remove all texture state, then save it as .osg/.ive

     

    Also see FAQ 46, How to disable Textures on an osg::Node

     

* 78  *  How to replay a recorded animation path in osgViewer

     

    To replay a previously recorded animation path in osgviewer you can do the following:

    osgviewer mymodel.osg -p saved_animation.path

     

* 79  *  How can I make and use a Perlin noise texture in OSG

     

    You can find an example of generating a Perlin noise texture in the osgshaders example; it is used for the marble and erode effects.

    Here's one link to an article on Perlin noise texture usage:

    http://www.engin.swarthmore.edu/~zrider1/advglab3/advglab3.htm

    Try a Google search for more links, there are many :)

    http://www.google.com/search?q=perlin+noise+opengl

     

* 80  *  What is Producer


    Producer (more correctly Open Producer) is a cross-platform C++/OpenGL library that is focused on Camera control. Producer's Camera provides projection, field of view, viewpoint control, and frame control, and is used by Open Scene Graph.

    Producer can be used in a multi-tasking environment to allow multiple Cameras to run in parallel, supporting hardware configurations with multiple display subsystems. Threading, Camera synchronization and frame rate control are simplified in the Producer programming interface.

    Producer Cameras have an internal rendering surface that can be created automatically, programmatically, or provided by the programmer, such that Producer can fit into any windowing system or graphical user interface. Producer manages multiple rendering contexts in a windowing system independent manner.

    Producer provides a simple, yet powerfully scalable approach for real-time 3D applications wishing to run within a single window to large, multi-display systems.

    Producer is highly  portable and has been tested on Linux, Windows, Mac OSX, Solaris and IRIX.  Producer works on all Unix based OSes (including Mac OSX) through the X11 Windowing system, and through the native win32 on Windows.

    Producer is written with productivity, performance and scalability in mind, by adhering to industry standards and employing advanced software engineering practices.

    Software developers wishing to produce 3D rendering software that can display on a desktop, and then move to a large or clustered system of displays by simply changing a configuration file, can depend on Open Producer to handle all the complexity for them.

     

* 81  *  Where can I find more information on Producer

* 82  *  What is osgviewer

     

    osgviewer is a basic scene graph viewing application that is distributed with Open Scene Graph.

    osgviewer's primary purpose is to serve as an example of how to write a simple viewer using the Open Scene Graph API, but it is also functional enough to use as a basic 3D graphics viewer

     

* 83  *  What are the command line options for osgviewer

     

    To print out the command line options available, in a console window type:

    osgviewer --help

    Options:

    --dem <filename>

    Load an image/DEM and render it on a HeightField

    --display <type>

    MONITOR | POWERWALL | REALITY_CENTER | HEAD_MOUNTED_DISPLAY

    --help-all

    Display all command line options, env vars and keyboard & mouse bindings

    --help-env

    Display environmental variables available

    --help-keys

    Display keyboard & mouse bindings available

    --image <filename>

    Load an image and render it on a quad

    --rgba

    Request a RGBA color buffer visual

    --run-till-elapsed-time

    Specify the amount of time to run

    --run-till-frame-number <integer>

    Specify the number of frames to run

    --stencil

    Request a stencil buffer visual

    --stereo

    Use the default stereo mode, which is ANAGLYPHIC if not overridden by environment variable

    --stereo <mode>

    ANAGLYPHIC | QUAD_BUFFER | HORIZONTAL_SPLIT | VERTICAL_SPLIT | LEFT_EYE | RIGHT_EYE | ON | OFF

    -O <option_string>

    Provide an option string to reader/writers used to load databases

    -c <filename>

    Specify camera config file

    -e <extension>

    Load the plugin associated with handling files with specified extension

    -h or --help

    Display command line parameters

    -l <library>

    Load the plugin

    -p <filename>

    Specify camera path file to animate the camera through the loaded scene

     

* 84  *  How to set the background color in OSG

     

    To set the clear (background) color in an Open Scene Graph application you can use something along the lines of the following (assuming you are using Producer):
     

Code :

--

     

    osgProducer::Viewer viewer;
    viewer.setClearColor( osg::Vec4( 1.0f, 1.0f, 1.0f, 1.0f ) );

     

 

     

* 85  *  Windows equivalent of GLX_SAMPLES_SGIS & GLX_SAMPLES_BUFFER_SGIS

     

    Q: I'm trying to use multi-sample anti-aliasing and want to use GLX_SAMPLES_SGIS & GLX_SAMPLES_BUFFER_SGIS, but I cannot find these in GWLExtension.h or anywhere else on my machine

     

    Firstly note that GLX_SAMPLES_SGIS & GLX_SAMPLES_BUFFER_SGIS are actually SGI (Silicon Graphics) extensions and are only available on SGI big iron

    What you need to do is look through  GWLExtension.h for the equivalent #defines for your graphics driver and operating system

    Also you can go and check out the OpenGL Extension Registry hosted at SGI's web site:

    http://oss.sgi.com/projects/ogl-sample/registry

    In this case take a look at the following, which shows the equivalent #defines that you need:

     

    WGL_SAMPLE_BUFFERS_ARB       0x2041

     

    WGL_SAMPLES_ARB        0x2042

     

* 86  *  Why is there no cursor showing when I use Producer

     

    Some people are having issues with the cursor not showing inside of OSG/Producer; Christopher K Burns offered the following workaround, which has helped some users

    One solution is to force the loading of the cursor resource. In the Producer project, specifically the file "RenderSurface_Win32.cpp", change the function _setCursor to read:

     

Code :

--

     

    void RenderSurface::_setCursor( Cursor cursor )
    {
        if( _useCursorFlag == false )
        {
            ::SetCursor( _nullCursor );
        }
        else
        {
            _currentCursor = cursor ? cursor : _nullCursor;

            // Fall back to the standard arrow cursor if none was set
            if( _currentCursor == NULL )
            {
                _currentCursor = ::LoadCursor( NULL, IDC_ARROW );
            }

            ::SetCursor( _currentCursor );
        }
    } // RenderSurface::_setCursor

     

     

 

    Now re-compile and re-link, and hopefully you should once again be able to see the cursor

     

* 87  *  Why is the texture not showing on my polygons

     

    The first thing to check is that texturing is enabled for the scene graph or the node tree your polygons are attached to; see FAQ 47 on how to enable texturing

    If you are generating your own geometry then make sure you create or share an osg::StateSet with setTextureAttributeAndModes, and attach it to your osg::Geode

    Also remember that you must give your geometry texture coordinates (UVs), which tell OSG/OpenGL how to map the texture onto the polygons; if you have no texture coordinates then you will not see any texture on the geometry

    You can create your texture coordinates yourself, or you can use OpenGL's glTexGen to create the texture coordinates for you (see the OpenGL Programming Guide for more information on textures and texturing state set-up)
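    Putting the steps above together, a minimal sketch of hand-built textured geometry might look like the following (the quad coordinates and the file name "image.png" are placeholders):

```cpp
// Sketch: a textured quad built by hand, assuming the osg::Geometry /
// osg::Texture2D API - the file name and coordinates are placeholders.
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>
#include <osgDB/ReadFile>

osg::Geode* createTexturedQuad()
{
    osg::Geometry* geom = new osg::Geometry;

    // Quad vertices
    osg::Vec3Array* verts = new osg::Vec3Array;
    verts->push_back( osg::Vec3( 0.0f, 0.0f, 0.0f ) );
    verts->push_back( osg::Vec3( 1.0f, 0.0f, 0.0f ) );
    verts->push_back( osg::Vec3( 1.0f, 0.0f, 1.0f ) );
    verts->push_back( osg::Vec3( 0.0f, 0.0f, 1.0f ) );
    geom->setVertexArray( verts );

    // Texture coordinates (UVs) - without these no texture will appear
    osg::Vec2Array* uvs = new osg::Vec2Array;
    uvs->push_back( osg::Vec2( 0.0f, 0.0f ) );
    uvs->push_back( osg::Vec2( 1.0f, 0.0f ) );
    uvs->push_back( osg::Vec2( 1.0f, 1.0f ) );
    uvs->push_back( osg::Vec2( 0.0f, 1.0f ) );
    geom->setTexCoordArray( 0, uvs );

    geom->addPrimitiveSet( new osg::DrawArrays( osg::PrimitiveSet::QUADS, 0, 4 ) );

    // Attach the texture to texture unit 0 via the Geode's StateSet
    osg::Texture2D* texture = new osg::Texture2D;
    texture->setImage( osgDB::readImageFile( "image.png" ) );

    osg::Geode* geode = new osg::Geode;
    geode->addDrawable( geom );
    geode->getOrCreateStateSet()->setTextureAttributeAndModes(
        0, texture, osg::StateAttribute::ON );
    return geode;
}
```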

     

* 88  *  How to Print the SceneGraph out to a file

     

    This is quite easy to do, as the OSG file format is an ASCII-based format

    So you can simply load your file/files into your OSG application and write your scene or node out to an OSG file using the .osg plug-in, e.g. osgDB::writeNodeFile( *my_node, "my_node.osg" )

    Also you can simply load your file/files into the osgviewer application found in the OSG distribution and then press the "o" key to write out the whole SceneGraph (saves to "saved_model.osg")
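    As a small sketch of the programmatic route (the file names here are placeholders):

```cpp
// Sketch: load a model and write it back out as ASCII .osg.
// "input.flt" and "scene_dump.osg" are placeholder file names.
#include <osg/Node>
#include <osg/ref_ptr>
#include <osgDB/ReadFile>
#include <osgDB/WriteFile>

void dumpScene()
{
    osg::ref_ptr<osg::Node> node = osgDB::readNodeFile( "input.flt" );
    if( node.valid() )
    {
        // The .osg plug-in is selected by the output file extension
        osgDB::writeNodeFile( *node, "scene_dump.osg" );
    }
}
```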

     

* 89  *  How to specify the amount of Video memory to use

     

    Q: Is there any way to tell OSG to use all of the available video memory, or to specify the maximum amount of video memory to use?

    In a nutshell, at this time there is no way to do this. It is beyond the control of OSG and would reside at the graphics driver level, and currently I'm not aware of any driver (OpenGL/DirectX etc.) that offers this ability

     

* 90  *  Can I tell osgconv to use only the lowest LODs from OpenFlight files

     

    Q: Is it possible, when using osgconv, to tell the program that when it encounters a LOD node it should use only the lowest level of detail?

    Currently there is no direct way to force the osgconv application to use only the lowest LODs.

    What you could do is use the LODScale on SceneView/OsgCameraGroup to force the selection of lower LOD levels, but this cannot guarantee the lowest LOD.

    To do this you will have to write your own code to traverse the scene graph, modify it manually to remove the LOD children you don't want, and then write out your modified model

    Note you will also need to watch out for additive LODs, as the lowest level there is probably not what you want
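    The manual approach could be sketched with a NodeVisitor along these lines, assuming the osg::LOD range API (getMaxRange, setRange, removeChildren); additive LODs would need extra handling, as noted above:

```cpp
// Sketch: a visitor that strips each osg::LOD down to its coarsest
// child (the one shown furthest away, i.e. with the largest max range).
#include <cfloat>
#include <osg/LOD>
#include <osg/NodeVisitor>

class KeepLowestLODVisitor : public osg::NodeVisitor
{
public:
    KeepLowestLODVisitor()
        : osg::NodeVisitor( osg::NodeVisitor::TRAVERSE_ALL_CHILDREN ) {}

    virtual void apply( osg::LOD& lod )
    {
        // Find the child with the largest maximum viewing range
        unsigned int lowest = 0;
        for( unsigned int i = 1; i < lod.getNumChildren(); ++i )
        {
            if( lod.getMaxRange( i ) > lod.getMaxRange( lowest ) )
                lowest = i;
        }

        // Remove every other child; iterating downward keeps the
        // remaining indices valid as children are removed
        for( unsigned int i = lod.getNumChildren(); i-- > 0; )
        {
            if( i != lowest ) lod.removeChildren( i, 1 );
        }

        // Make the surviving child visible at all distances
        if( lod.getNumChildren() == 1 )
            lod.setRange( 0, 0.0f, FLT_MAX );

        traverse( lod );
    }
};
```

    After running this visitor over the loaded scene, the result can be written out with osgDB::writeNodeFile as usual.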

     

* 91  *  How to get the Normals from an Intersection test

     

    This is really very easy if you are using osgUtil::IntersectVisitor as the basis for your intersection testing

    The osgUtil::IntersectVisitor::Hit stores the normal at the intersection in the variable _intersectNormal
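    A minimal sketch, assuming the osgUtil::IntersectVisitor API of this era (the fallback normal returned on a miss is a placeholder choice):

```cpp
// Sketch: run a line-segment intersection test against a scene and
// retrieve the normal at the first hit point.
#include <osg/LineSegment>
#include <osg/ref_ptr>
#include <osgUtil/IntersectVisitor>

osg::Vec3 getHitNormal( osg::Node* scene,
                        const osg::Vec3& start, const osg::Vec3& end )
{
    osg::ref_ptr<osg::LineSegment> segment =
        new osg::LineSegment( start, end );

    osgUtil::IntersectVisitor iv;
    iv.addLineSegment( segment.get() );
    scene->accept( iv );

    if( iv.hits() )
    {
        // The first Hit for this segment; the Hit holds the
        // intersection normal (the _intersectNormal member)
        osgUtil::Hit& hit = iv.getHitList( segment.get() ).front();
        return hit.getLocalIntersectNormal();
    }
    return osg::Vec3( 0.0f, 0.0f, 1.0f ); // placeholder fallback on a miss
}
```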

     

* 92  *  How can I dynamically turn a Camera On or Off

     

    It is very straightforward to turn a Camera on or off. First you need to get a pointer to your camera

    e.g. Producer::Camera *camera = getPointerToMyCamera();

    then to turn the camera On call camera->enable();

    else to turn the camera Off call camera->disable();

     

* 93  *  Where are the Producer Camera Config Examples

     

    Q: I cannot seem to find any documentation about Producer::CameraConfig or any examples of how to use a camera config; is there any documentation or examples?

    The documents can be found in the Producer distribution:

    Producer/doc/CameraConfig.bnf

    Producer/doc/CameraConfig.example

    There are also other configuration (.cfg) file examples, look for them in doc/Tutorial/SourceCode/*/*.cfg

    Also note that some of the Producer documentation is only available after you have actually built the software

     

* 94  *  Why do some of my DDS textures appear Inverted

     

    What you are seeing is not really a bug, but is correct behaviour for DDS (Direct Draw Surface) textures and depends on how you generated them.

    DDS was designed by Microsoft and thus originally intended for DirectX, which has its image origin in the upper-left corner, while OpenGL has its origin in the lower-left corner.

    Basically you need to tell your DDS generation tool to invert the image; many tools can do this, some cannot

    In Open Scene Graph you can flip the DDS imagery at load time by passing the ReaderWriter::Options string "dds_flip" into readImageFile( file, options )
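    A short sketch of passing that options string through osgDB (the function name and file handling here are illustrative):

```cpp
// Sketch: load a DDS image flipped for OpenGL's lower-left origin by
// handing the "dds_flip" options string to the reader plug-in.
#include <string>
#include <osg/Image>
#include <osg/ref_ptr>
#include <osgDB/ReadFile>
#include <osgDB/Registry>

osg::Image* loadFlippedDDS( const std::string& filename )
{
    osg::ref_ptr<osgDB::ReaderWriter::Options> options =
        new osgDB::ReaderWriter::Options( "dds_flip" );
    return osgDB::readImageFile( filename, options.get() );
}
```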

     

* 95  *  Does OSG have an equivalent of glPushAttrib

     

    There is no equivalent to glPushing and glPopping of specifically categorized State attributes.  

    The application of state attributes occurs via a lazy state update after all StateSets have been accumulated in a traversal of the scene graph; a StateSet is applied only at the drawable. Pushing and popping of StateSets occur during the recursion of the graph traversal.

    So if you set a state attribute that affects line width (the osg::LineWidth state attribute), for example, it will be pushed when encountered on the scene graph and popped when returning from the traversal for any subgraph where the StateSet is applied

    osg::StateSet is a set of both modes and attributes, and is something you attach to the scene graph to specify the OpenGL state to use when rendering.

    osg::State is a per-graphics-context object that tracks the current OpenGL state of that graphics context. It tracks the state in terms of OSG objects and modes rather than purely raw OpenGL terms, although OpenGL modes themselves are tracked directly.

    osg::State exists to implement lazy state updating, and also acts as a way of getting the current OpenGL state without requiring a round trip to the graphics hardware
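    The line-width example above could be sketched like this; attaching the attribute to a subgraph's StateSet is the rough equivalent of wrapping that subgraph's draw calls in glPushAttrib/glPopAttrib (the function name is illustrative):

```cpp
// Sketch: scope a line-width change to one subgraph via its StateSet.
// OSG "pushes" the attribute when the traversal enters the subgraph
// and "pops" it when the traversal returns.
#include <osg/Group>
#include <osg/LineWidth>
#include <osg/StateSet>

void setThickLines( osg::Group* subgraph )
{
    osg::StateSet* ss = subgraph->getOrCreateStateSet();

    // Affects only drawables under this subgraph; everything else
    // keeps the default line width
    ss->setAttributeAndModes( new osg::LineWidth( 3.0f ),
                              osg::StateAttribute::ON );
}
```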

     

* 96  *  How can I search the OSG Mailing List

* 97  *

    .

* 98  *

    .

* 99  *

    .

* 100  *

    .
 
 
 

 

 

© Copyright 2004-2005 Gordon Tomlinson  All Rights Reserved.

All logos, trademarks and copyrights in this site are property of their respective owner.