Bump map

From H3D.org


In this example we do bump mapping using the X3D MultiTexture node.

Note: This tutorial refers to the source code. You can download it from SVN at the H3D release branch, or find it at H3D/H3DAPI/examples/bumpmap/.
 
<!-- bumpmap.x3d -->
  <Scene>
    <IMPORT inlineDEF='H3D_EXPORTS' exportedDEF='HDEV' AS='HDEV' />
    <TransformInfo DEF="TRANS" />
    <Shape>
      <Appearance>
        <Material />
        <MultiTexture DEF="MT" source="DIFFUSE" mode='"DOTPRODUCT3" "MODULATE"' >
          <ImageTexture url="stone_wall_normal_map.bmp" />
          <ImageTexture url="stone_wall.bmp" />
        </MultiTexture>
      </Appearance>
      <IndexedFaceSet coordIndex="0 1 2 3" solid="false">
        <Coordinate DEF="COORD" point="0.15 0.15 0, 0.15 -0.15 0, -0.15 -0.15 0, -0.15 0.15 0" />
 

We begin by importing the haptics device from H3D_EXPORTS and setting HDEV as our reference to it.

As is usual with Shapes, we add the Appearance node to define the appearance of the geometry. Here, MultiTexture is used as the value of the texture field of the Appearance node, as shown above. There are two ImageTextures that we will apply to the geometry and blend to create the appearance of depth: the first contains a normal map and the second a brick wall image. We specify the source as DIFFUSE to indicate that we will use the diffuse colour as the texture argument when blending.

IndexedFaceSet is used as our geometry.

If you run the code as it is now (with the proper closing tags, of course) you will see that the brick wall already appears to have depth. This is obvious when compared to the source image. To create this effect, the normal map is blended with the default colour of the IndexedFaceSet using the DOTPRODUCT3 mode, and the result of that is then blended with the brick wall image using the MODULATE mode.
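To make the two blend stages concrete, here is a plain-Python sketch (not H3D code) of what DOTPRODUCT3 and MODULATE compute for a single texel, assuming the standard encoding where a colour channel c in [0, 1] represents the vector component 2c - 1:

```python
def decode(c):
    """Map a colour channel in [0, 1] back to a vector component in [-1, 1]."""
    return (c - 0.5) * 2.0

def dot3(normal_rgb, light_rgb):
    """DOTPRODUCT3: dot product of the two decoded vectors, clamped to [0, 1]."""
    d = sum(decode(a) * decode(b) for a, b in zip(normal_rgb, light_rgb))
    return max(0.0, min(1.0, d))

def modulate(intensity, texture_rgb):
    """MODULATE: multiply the previous stage's result into the texture colour."""
    return tuple(intensity * c for c in texture_rgb)

# A texel whose normal points straight out (+z), lit from straight ahead:
normal = (0.5, 0.5, 1.0)   # encodes (0, 0, 1)
light  = (0.5, 0.5, 1.0)   # encodes (0, 0, 1)
shaded = modulate(dot3(normal, light), (0.8, 0.4, 0.3))  # full brightness
```

When the encoded normal and light directions agree, dot3 returns 1 and the brick colour passes through unchanged; as they diverge, the texel darkens, which is what produces the depth cue.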

From left to right: normal map, brick wall image and multitextured bump map

Up until now, the light source of the scene has been a global light. Suppose we model a torchlight attached to the haptics device. When we direct this light onto the brick wall we would expect to see the shading change accordingly. The remaining code below creates the proper shading effects for this multi-textured bump mapping.

 
<!-- bumpmap.x3d -->
        <Color DEF="COLOR" />
      </IndexedFaceSet>
    </Shape>
 

We specify a Color node for the IndexedFaceSet. Normally, the Color node is used to apply colours to the IndexedFaceSet. However, in this example we use the color field to hold the direction of the light from each vertex of the IndexedFaceSet. These values are generated in the python script bumpmap.py.

 
<!-- bumpmap.x3d -->
    <PythonScript DEF="PS" url="bumpmap.py" />
    <ROUTE fromNode="HDEV" fromField="trackerPosition" toNode="PS" toField="toColor" />
    <ROUTE fromNode="TRANS" fromField="accInverseMatrix" toNode="PS" toField="toColor" />
    <ROUTE fromNode="COORD" fromField="point" toNode="PS" toField="toColor" />
 
    <ROUTE fromNode="PS" fromField="toColor" toNode="COLOR" toField="color" />
  </Scene>
 

The python script is added to the scene graph. Since we are modelling a torchlight, the light position is the same as the position of the haptics device tracker. We need this value, hence the routing of trackerPosition to the python script. accInverseMatrix and point are also routed to the script. The result from the script is then routed to the color field.

 
#bumpmap.py
from H3DInterface import *
 
# Field that converts the tracker position, the accumulated inverse
# matrix and the geometry points into one colour per point encoding
# the local light direction.
class SFVec3fToColor( TypedField( MFColor, ( SFVec3f, SFMatrix4f, MFVec3f ) ) ):
  def update( self, event ):
    inputs = self.getRoutesIn()
    light_pos_global = inputs[0].getValue()
    acc_inverse_matrix = inputs[1].getValue()
    points = inputs[2].getValue()
    # Transform the light position from world to local coordinates.
    light_pos = acc_inverse_matrix * light_pos_global
    res = []
    for p in points:
      # Direction from the point towards the light, remapped from
      # [-1, 1] to [0, 1] so it can be stored as an RGB colour.
      v = light_pos - p
      v.normalize()
      v = v * 0.5 + Vec3f( 0.5, 0.5, 0.5 )
      res.append( RGB( v.x, v.y, v.z ) )
    return res
 
toColor = SFVec3fToColor()
 

In the python script, we get the incoming routes and store them in the variables light_pos_global, acc_inverse_matrix and points. The trackerPosition value that we routed in (now stored in light_pos_global) is in world coordinates. We convert it to coordinates local to the IndexedFaceSet by multiplying it with acc_inverse_matrix, and store the result in light_pos.
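Outside of H3D, the same world-to-local conversion can be sketched in plain Python. The matrix below is a hypothetical example (the inverse of a translation by 0.1 along x), not a value taken from the scene:

```python
def transform_point(m, p):
    """Apply a 4x4 row-major matrix m to a 3D point p (implicit w = 1)."""
    x, y, z = p
    return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3] for r in range(3))

# Hypothetical accumulated inverse matrix: undoes a translation by (0.1, 0, 0).
acc_inverse = [[1.0, 0.0, 0.0, -0.1],
               [0.0, 1.0, 0.0,  0.0],
               [0.0, 0.0, 1.0,  0.0],
               [0.0, 0.0, 0.0,  1.0]]

light_pos_global = (0.25, 0.0, 0.1)               # tracker position, world space
light_pos = transform_point(acc_inverse, light_pos_global)  # local space
```

This is exactly what the expression acc_inverse_matrix * light_pos_global does in the script, with H3D's Matrix4f type doing the point multiplication for us.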

Final bump map 1
Final bump map 2

To calculate the direction of the light from every point on the IndexedFaceSet, we iterate through each point, do a vector subtraction (light_pos - p) and normalize the result.

Since we need to store these direction vectors as RGB values bounded by 0 and 1, we remap each component from [-1, 1] to [0, 1] (v * 0.5 + 0.5) before appending them as RGB values to the list res.

res is then returned, and as defined in the X3D file, will be routed to the color field.

With the Color node present, the DOTPRODUCT3 mode is now applied between the normal map and the "colours" of the IndexedFaceSet specified by Color. This dot product is therefore between the normal map and the vectors representing the light direction from every point on the surface of the IndexedFaceSet. The result is then blended with the brick wall image using the MODULATE mode.
