SoundSpectrum Python-Based Drawing API Documentation
Last updated: January 2011
About
SoundSpectrum maintains a Python-based 3D drawing API for both public and internal use. The purpose of this API is to offer rich content authoring capabilities by exposing SoundSpectrum’s high-performance C++ based graphics engine to an embedded Python interpreter. SoundSpectrum develops and maintains each of these layers and currently targets OpenGL, OpenGLES1, OpenGLES2, and Direct3D on Windows, OS X, and iOS.
This documentation describes how to create content using SoundSpectrum’s Python-based drawing API. Because Python is an interpreted language, special care has been taken in designing this API to maximize performance.
Table of Contents
Concepts:
2D Canvas, 3D Worlds and Cameras
Reusing VertexLists Using Matrix Transformations
API Reference:
Aeon includes a template scene that can be used to quickly start developing content. The template file is located in your SoundSpectrum folder (Aeon/Home/Scenes/Dev/template.py). To view this file in the scene list, open Aeon/Home/Library/SceneData.py and set ‘devScenes = 1’. This will add any scene files in the Dev/ folder to the scene list.
Python is a rich and powerful language, but for the purposes of this introduction, we're going to focus on basic concepts that will allow you to write Python code that interacts with the drawing engine. Once you are comfortable with the basic interactions between Python and the drawing engine, you can learn more about the advanced features of Python in The Python Documentation.
Python is an object-oriented language, which means that data and code are generally held in structures called objects. Without going into details, an object is a way to group together related pieces of source code and data in one place, and the word class refers to a certain type of object. In order to program the drawing engine with Python, we'll need to create a Python class which the drawing engine can communicate with. The following code shows the bare skeleton of a scene script.
from SoundSpectrumDrawing import *

class MyDrawClass:

    def __init__( self ):
        # Custom initialization goes here.
        ...

    def Draw( self, fft, time ):
        # Custom drawing goes here.
        ...

SetDrawClass( MyDrawClass )
In the sample code shown above, the class MyDrawClass defines two methods: __init__, and Draw. When writing a scene, we will write code for these two methods which tell the visualizer what to do when the scene starts (__init__) and then how the scene should be drawn during each frame (Draw).
2D Canvas, 3D Worlds and Cameras
For simplicity, we sometimes refer to a 2D perspective as a "canvas", because drawing is conceptually similar to laying paint down on a flat canvas. A 3D perspective is referred to as a "world", because drawing is conceptually similar to setting up the positions of objects in a 3D space. There's no real difference between the 2D canvas and the 3D world, except in how they are typically used. It is perfectly acceptable to do "flat" rendering with a 3D world or "3D" rendering on a flat canvas, but framing the scene can become a bit more complicated.
When using a 2D canvas, the screen behaves like a piece of graph paper with the center representing the origin (0, 0). Moving left and right represents motion on the X-axis, while up and down represents the Y-axis. Because the drawing window size may change and because the size may not be the same size along the X- and Y-axis, there are several possible ways to interpret the length of the X- and Y-axis. The SoundSpectrum drawing engine provides three options for specifying how the lengths are interpreted. These options are passed as an argument to the function Set2DCamera:
ss_Stretch
Stretches both the X- and the Y-axis so that each runs from -1 to 1. This has the effect of stretching out all drawing commands when one axis is longer than the other.
ss_Crop
The larger axis runs from -1 to 1. The smaller axis has an equal portion cropped on both sides so that it fits into the space provided.
ss_Letterbox
The smaller axis runs from -1 to 1. The larger axis runs past the -1 to 1 boundaries.
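As an illustration of the three modes, the following helper (not part of the drawing API, just plain Python) shows how each mode maps a window of a given size to coordinate space:

```python
# Illustrative sketch: the visible extent of each axis under the three
# Set2DCamera modes, for a window of the given pixel size.
def axis_extents( width, height, mode ):
    aspect = float( width ) / height
    if mode == "stretch":
        # Both axes always map to [-1, 1], distorting shapes when aspect != 1.
        return ( -1.0, 1.0 ), ( -1.0, 1.0 )
    if mode == "crop":
        # The larger axis spans [-1, 1]; the smaller axis shows a cropped
        # sub-range so one drawing unit is the same size on both axes.
        if aspect >= 1.0:
            return ( -1.0, 1.0 ), ( -1.0 / aspect, 1.0 / aspect )
        return ( -aspect, aspect ), ( -1.0, 1.0 )
    # letterbox: the smaller axis spans [-1, 1]; the larger axis runs past it.
    if aspect >= 1.0:
        return ( -aspect, aspect ), ( -1.0, 1.0 )
    return ( -1.0, 1.0 ), ( -1.0 / aspect, 1.0 / aspect )
```

For a 200x100 window, for example, ss_Crop shows Y only from -0.5 to 0.5, while ss_Letterbox lets X run from -2 to 2.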
HSL (Hue Saturation Luminosity) Color Space
The HSL color space represents each color as its hue (a "pure" color value with no added white or black), saturation (intensity of the hue) and the luminosity (a bias towards white or black), with each number in the range 0.0 to 1.0. The hue value determines where on the color spectrum a color lies and is the value which is associated with the color names, such as "red", "blue", "yellow", etc. The saturation is a value which determines how vivid the color shade is, ranging from 0.0 (grey) to 1.0 (the pure hue). The luminosity specifies how light or dark a color shade is ranging from 0.0 (black) to 1.0 (white), with 0.5 representing the hue without any additional bias towards black or white.
Though the descriptions of the three values may not be intuitive, the values themselves are quite intuitive and easy to understand. By modifying existing HSL color values, you will quickly get a feel for how each value affects the resulting color.
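Python's standard library happens to implement the same color space (it calls it "HLS" and takes hue, luminosity, saturation in that order), which makes it a handy way to experiment with these values outside the engine:

```python
import colorsys

# hue 0.0 (red), luminosity 0.5 (no bias toward white or black), saturation 1.0
pure_red = colorsys.hls_to_rgb( 0.0, 0.5, 1.0 )

# zero saturation ignores the hue entirely and yields grey
grey = colorsys.hls_to_rgb( 0.0, 0.5, 0.0 )

# full luminosity washes any hue out to white
white = colorsys.hls_to_rgb( 0.0, 1.0, 1.0 )
```

Varying one value at a time this way quickly builds the intuition described above.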
RGB (Red Green Blue) Color Space
The RGB Color space represents a color as some blending of red, green and blue values ranging from 0.0 to 1.0 in each channel. Web programmers will be familiar with the RGB color space because HTML uses RGB to express colors, though with hexadecimal (“hex”) values from 00 to FF instead of numbers from 0.0 to 1.0. The RGB color space is the most natural expression of color for TVs and computer monitors, but is less natural for people to use than the HSL color space. In this color space, pure red is represented as ( 1.0, 0.0, 0.0 ), pure green as ( 0.0, 1.0, 0.0 ) and pure blue as ( 0.0, 0.0, 1.0 ), and different "brightness" values of each color are easily achieved by using intensities other than 1.0. For other colors, however, which are not made up of a single component, the RGB color space is less intuitive: yellow is ( 1.0, 1.0, 0.0 ).
RGB vs. HSL
HSL is generally more effective than RGB when it comes to “human” manipulation of color such as darkness, lightness, color saturation, and the color itself (hue). So, if you plan to alter an existing color, HSL is usually the best choice. If you intend to manually combine/blend two colors with one another (and not edit the colors themselves), RGB is usually the better choice.
Color Modulation
All of the drawing routines provided by the API allow a "modulation color" to be specified, letting you change the color of what you’re drawing without having to step through every vertex and manually edit each vertex’s color. You can specify the modulation color as either RGB or HSL, though behind the scenes, the mathematics of color modulation takes place in the RGB color space.
For each channel (red, green, blue), the drawing color is computed as the original color (taken from the vertex data or texture) multiplied by the modulation color:
RedFinal = RedModulate * RedOriginal
GreenFinal = GreenModulate * GreenOriginal
BlueFinal = BlueModulate * BlueOriginal
AlphaFinal = AlphaModulate * AlphaOriginal
The math is shown here because it has an important implication on the way modulation colors can be used. Because all color values are between 0.0 and 1.0, the modulation values have the power to reduce the amount of light in each channel, but cannot increase it. This is analogous to placing a colored filter in front of a spot-light: it's possible to use a red filter to make a white spotlight turn red, but it's not possible to use a "white" filter to turn a red spotlight white. This means that to get the most flexibility from your textures or vertex data, with respect to the modulation color, they should be as close to white as possible.
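The spotlight analogy can be checked directly with a plain-Python sketch of the per-channel multiply (nothing here is engine API):

```python
# Illustrative sketch of color modulation: multiply each RGBA channel of the
# source color by the corresponding channel of the modulation color.
def modulate( original, mod ):
    return tuple( o * m for o, m in zip( original, mod ) )

white_texel = ( 1.0, 1.0, 1.0, 1.0 )
red_texel = ( 1.0, 0.0, 0.0, 1.0 )
red_filter = ( 1.0, 0.0, 0.0, 1.0 )
white_filter = ( 1.0, 1.0, 1.0, 1.0 )

became_red = modulate( white_texel, red_filter )   # white texel turns red
stays_red = modulate( red_texel, white_filter )    # a "white" filter cannot
                                                   # restore the missing channels
```

The second result is why near-white textures and vertex data give you the most flexibility: channels that start at zero stay at zero.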
The Alpha Value
In addition to HSL and RGB, all of the color objects and color vertex objects also take an alpha value, which has the same meaning in both color spaces. The alpha value represents the transparency of a drawing with a value of 1.0 being fully opaque and 0.0 being fully transparent. You can use the alpha values to fade objects in and out, and to render other "special effects". Note that the alpha is also subject to color modulation, so changing the modulation color argument is an easy way to change the opacity of a rendered object.
Color Objects
In the SoundSpectrum Python environment, you can express a color in the following ways:
RGBA( R, G, B, A )
HSLA( H, S, L, A )
White( A )
The valid range for each of these values is 0.0 to 1.0, so if you specify a value outside of this range, the value will be “clamped” to 0.0 or 1.0. For convenience, note that if you don’t specify an alpha value (the “A” value above), it will default to 1.0. Also note that specifying White( A ) is equivalent to RGBA( 1, 1, 1, A ).
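The clamping and the default alpha can be sketched in plain Python (make_rgba below is a hypothetical stand-in for the engine's RGBA object, not part of the API):

```python
def clamp01( value ):
    # Out-of-range values are clamped to the valid 0.0 to 1.0 range.
    return max( 0.0, min( 1.0, value ) )

# Hypothetical stand-in for the engine's RGBA(), illustrating the clamping
# and the default alpha of 1.0 described above.
def make_rgba( r, g, b, a = 1.0 ):
    return tuple( clamp01( v ) for v in ( r, g, b, a ) )

make_rgba( 1.5, -0.2, 0.5 )  # -> ( 1.0, 0.0, 0.5, 1.0 )
```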
Some configs written using the SoundSpectrum engine are intended as "standalone" configs with complete control over artistic decisions such as color, but more often than not it is necessary for a config to fit in to an existing scene. Furthermore, the SoundSpectrum drawing engine is used both for background and foreground drawing. In order to provide the best possible visual experience, background drawings must coordinate with the foreground by making use of background colors which the foreground provides. The background colors are generated by the SoundSpectrum engine based on current foreground, and are continuously updated by the engine as the foreground changes. In order to provide the best possible coordination between background and foreground, background configs should therefore make use of the background colors specified by the engine at every frame.
For newer SoundSpectrum applications (Aeon and later), color palettes are the primary method of applying a colorscheme to the scene. See “Color Palettes” for more information.
For an application such as the WhiteCap backgrounds, the SoundSpectrum engine provides three background colors: a primary, a secondary and a highlight color, which are to be used in an approximate ratio of 50%/30%/20%. Because these values can change at each frame, configs should avoid choosing these colors during initialization, and should instead pick a color index during initialization which will refer to one of the elements from the color list. The color can then be computed at each frame using the color index.
A background config does not necessarily need to use the background color exactly as it is provided. Variation can be introduced by making small changes to the elements of the HSLA object, but a config should do its best to conform to the spirit of the colorscheme.
Functions are available below for retrieving a list of colors that should be used in the current config.
A VertexList is an ordered list of vertex data objects used for the purpose of drawing lines and polygons. A VertexList can contain any number of vertex data elements, but every vertex data object must be in the same format. When you create a VertexList, you must specify a format which specifies what kind of data will be stored in each element and the order it will be stored. The vertex format that you provide takes the form of a list of any of the following constants:
Vertex Component Type | Number of Values
ss_XY | 2
ss_XYZ | 3
ss_Alpha | 1
ss_RGB | 3
ss_HSL | 3
ss_TexXY | 2
ss_NormalXYZ | 3
ss_Width | 1
ss_Height | 1
ss_Angle | 1
When you create a VertexList, you must also specify how often you plan to edit the data it contains. This allows the drawing engine to manage your VertexList in a way that maximizes drawing performance. Behind the scenes, the drawing engine uses a copy of your VertexList’s vertex data whenever you draw with it, so the VertexList type you select basically tells the engine how often it needs to refresh its internal copy of the vertex data. Since the update process can take time, it’s preferable to select the type of VertexList that requires it to be updated the least number of times:
ss_Static
This means that the VertexList’s vertex data doesn’t change after it’s initially set. This is the most preferable kind of VertexList because it allows the engine to boost performance by skipping repetitious computation each frame. In the event that you need to change vertex data in a VertexList declared using ss_Static, you must call Update() (note that this usage case is rare: static vertex data typically doesn’t need to be changed once it’s initially set).
ss_ChangesEachFrame
This means that every frame, you intend to change the VertexList’s vertex data and then draw one or more times with it. Basically, this mode tells the engine to update its copy of your vertex data the first time it’s told to draw using that VertexList (for each frame).
ss_ChangesEachUse
This means that you intend to change the VertexList’s vertex data, then draw with it, then change the data again, then draw with it again, and so on. This mode tells the engine to update its copy of your vertex data every time it’s told to draw using that VertexList.
ss_UpdatedManually
If you’re maintaining a VertexList and you only change a small percentage of its data each frame, using ss_ChangesEachFrame or ss_ChangesEachUse will result in the engine doing unnecessary work that could reduce performance if the VertexList is large. If ss_UpdatedManually is used, the engine will never automatically update its internal copy of your vertex data unless you explicitly tell it to do so using Update(). To learn more about Update(), see the declaration of VertexList in SoundSpectrumDrawing.py.
Creating a VertexList
Creating a VertexList requires a vertex format (specified as a list of vertex component types) and a VertexList type, which tells the engine how often to automatically update its copy of your vertex data:
# Use ss_Static for vertex data that won't change once we set it
circle = VertexList( [ ss_XY, ss_Alpha ], ss_Static )

# Use ss_ChangesEachFrame if the VertexList's data will change from frame to frame
circle2 = VertexList( [ ss_XYZ, ss_RGB, ss_Alpha ], ss_ChangesEachFrame )
Once you create a VertexList, you’ll want to add vertex data objects so that you can start storing your data, which can be done in a number of ways for your convenience. You create Vertex objects (using the correct number of arguments implied by the vertex format of the VertexList) and add them to the VertexList:
for i in xrange( N ):
    angle = 2 * pi * i / N
    circle.append( Vertex( cos( angle ), sin( angle ), 1.0 ) )
Under the hood, a Vertex object is merely an array of floating point values, so it only exists as a convenient way to express a floating point array of a certain size. More often than not, you’ll want to separate the creation of your vertex data arrays from their initialization:
for i in xrange( N ):
    angle = 2 * pi * i / N
    circle.AppendVertex()
    circle[ i ][ 0 ] = cos( angle )
    circle[ i ][ 1 ] = sin( angle )
    circle[ i ][ 2 ] = 1.0
For performance and self-documentation reasons, the use of global utility functions and batch routines is strongly recommended:
def SetCirclePt( vertex, theta, alpha ):
    vertex[ 0 ] = cos( theta )
    vertex[ 1 ] = sin( theta )
    vertex[ 2 ] = alpha

...

# AppendVertex() and InsertVertex() can be told to create multiple Vertex objects at once
circle.AppendVertex( N )
for i in xrange( N ):
    angle = 2 * pi * i / N
    SetCirclePt( circle[ i ], angle, 1.0 )
Also note the use of append() above. VertexList inherits from Python’s native list class, so VertexLists have all the properties of a Python list object, allowing you to use any of Python’s standard list routines:
squares = VertexList( [ ss_XY ], ss_Static )
for i in xrange( M ):
    r = 1.5 * ( i + 2.0 )
    squares.extend( [
        Vertex( r, r ),
        Vertex( -r, r ),
        Vertex( -r, -r ),
        Vertex( r, -r ) ] )
VertexList Modifiers
Often, you’ll want to use a VertexList to draw something but modify or apply a property to the entire VertexList. For example, suppose you wanted to draw using the circle VertexList in the above sample code but wanted to change its color each time you drew it. You could add ss_RGB to its format, make it of type ss_ChangesEachUse and proceed to edit every element’s color values each time you were ready to draw it, or you could simply keep it as it is above and use SetModColor() right before each draw. It’s important to note that VertexList modifier routines are always better than editing VertexList data directly, since they will usually allow you to use ss_Static instead of ss_ChangesEachFrame, or ss_ChangesEachFrame instead of ss_ChangesEachUse, for your VertexLists. The following VertexList modifier routines are available:
# Binds the given texture object (or None) to this VertexList, and sets a modulation color if specified (given as an RGBA or HSLA object),
# effectively multiplying the RGBA of every vertex by the RGBA value of inModulationColor.
SetTexture( self, inTexture, inModulationColor = None ):
# Sets the global scale of every ss_Width vertex value in this VertexList. If this VertexList's
# vertex format doesn't contain ss_Width, each vertex implicitly has a width of 1.0.
# When a VertexList is created, its global width scale starts off as 1.0.
SetWidthScale( self, inValue ):
# Identical to SetWidthScale() except that ss_Height is the affected field.
SetHeightScale( self, inValue ):
# Sets the global offset of every ss_Angle vertex value in this VertexList by the given angle (in radians). If this VertexList's
# vertex format doesn't contain ss_Angle, each vertex implicitly has an "angle" of 0.
# When a VertexList is created, its global angle starts off as 0.
SetAngleOffset( self, inRadianOffset ):
# Sets the default normal for every vertex in this VertexList. This vector is used when this VertexList doesn't
# contain ss_NormalXYZ values (or when the supplied or implied normal is degenerate)
SetDefaultNormal( self, inX, inY, inZ ):
Drawing a VertexList
DrawTriangles( inFlags = 0, inNumStrips = -1, inVerticiesPerStrip = 0, inStartVertex = 0, inVertexStride = 1, inStripVertexStride = 0 )
Draws triangle “strips” such that inVerticiesPerStrip specifies how many vertices will be used for each strip. After the 2nd vertex, each triangle’s vertices are v[i], v[i-inVertexStride], and v[i-2*inVertexStride], resulting in a total of inVerticiesPerStrip-2 triangles drawn per strip. This is typically used to draw a series of connected triangles that do not share a common vertex. If you pass the ss_TriangleFan flag in inFlags, after the 2nd vertex, each triangle’s vertices instead are v[i], v[i-inVertexStride], and v[0]. This is typically used to draw a series of connected triangles that share a common vertex, such as a closed polygon with a shared point in the center.
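The indexing rule can be sketched in plain Python (illustrative only; inVertexStride is taken as 1 for brevity):

```python
def strip_triangle_indices( verts_per_strip, fan = False ):
    # After the 2nd vertex, each new vertex i completes one triangle:
    # (i, i-1, i-2) for a strip, or (i, i-1, 0) for a fan.
    tris = []
    for i in range( 2, verts_per_strip ):
        if fan:
            tris.append( ( i, i - 1, 0 ) )
        else:
            tris.append( ( i, i - 1, i - 2 ) )
    return tris

strip_triangle_indices( 4 )              # [(2, 1, 0), (3, 2, 1)]
strip_triangle_indices( 4, fan = True )  # [(2, 1, 0), (3, 2, 0)]
```

Either way, a strip of N vertices yields N-2 triangles; only the third corner of each triangle differs between the two modes.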
DrawLines( inFlags = 0, inNumStrips = -1, inVerticiesPerStrip = 0, inStartVertex = 0, inVertexStride = 1, inStripVertexStride = 0 )
Draws a number of small triangles to give the appearance of drawing a sequence of line segments. Although DrawLines() is often used to draw familiar thin lines, it can be surprisingly powerful for drawing things that would otherwise be verbose and time-consuming to draw using DrawTriangles(). For example, using circle defined in the above sample code, the following draws a disk of a given width:
def DrawDisk( self, diskWidth ):
    self.circle.SetWidthScale( diskWidth )
    self.circle.DrawLines()
DrawBillboards( inNumBillboards = -1, inStartVertex = 0, inVertexStride = 1 )
Draws “front-facing” squares (typically textured), that always appear flat and facing the camera, regardless of the camera’s global orientation or position. Drawing billboards is useful when you have a set of objects that are defined by a 2D or 3D position and a height and width and are represented as a single image.
All of the draw functions incorporate a start vertex index and a vertex stride. This allows you to use vertices from a VertexList that appear at any index or element stride. DrawTriangles() and DrawLines() also incorporate a group size and stride, giving you more flexibility in how the vertices in your VertexList are arranged.
A Geometry object is used for loading and drawing static 3D geometrical objects, usually created in an external 3D modeling program. Multiple 3D objects can be ‘placed’ into one final object, allowing for algorithmic generation of static 3d geometry. Currently, the WaveFront Object file-format (.obj) is the only supported loadable file format.
Simple Loading and Drawing
The quickest way to load and draw geometry:
# in the __init__ function of your main scene class
self.geom = Geometry( "../common/cube.obj" )
# in the Draw() function of your main scene class, after the camera is positioned
self.geom.Draw()
This example creates a Geometry object, specifying the standard cube file to load and later render. The Geometry class offers a lot of useful functionality, discussed in the next subsections:
Texturing and Modulation Colors
You will probably want to eventually apply a texture or a modulation color to your geometry. To do this, simply call SetTexture( texture, color ), or SetModColor( color ):
self.geom = Geometry( )
# Any of the following calls are valid
self.geom.SetTexture( texture )         # sets the modulation color to None (full modulation)
self.geom.SetTexture( None, color )
self.geom.SetTexture( texture, color )
self.geom.SetModColor( color )          # doesn't overwrite the currently set texture
SetTexture() can be called at any point in the script. You can re-use the same objects with varying textures and colors each frame. Newer SoundSpectrum applications also support SetPalette() – see SetPalette() documentation below.
Grouping Concepts
Each Geometry object has the capability of loading specific objects from a multi-object source file into a specific destination object. Given an object file (items.obj) containing 3 different named objects (“cube”, “sphere”, “car”), to place these into one or more static geometry groupings you would do the following:
self.geom = Geometry()
self.geom.Place( "items.obj", "cube", "groupA" )
self.geom.Place( "items.obj", "car", "groupA" )
self.geom.Place( "items.obj", "sphere", "groupB" )
This example creates two internal groups within self.geom, named “groupA” and “groupB”. groupA contains the cube and car objects, and groupB contains a sphere. You now have several options when it comes time to draw:
# Draws all objects placed into groupA
self.geom.Draw( “groupA” )
# Draws all objects placed into groupB
self.geom.Draw( “groupB” )
# Draws all groups in self.geom
self.geom.Draw()
For more information, see documentation of Place() below.
Building Static Objects from Multiple Source Objects
If you wish to draw many smaller objects in a scene that are transformed as a group, you can significantly increase performance (and reduce code complexity) by building one static object out of those many source objects. To do this, you will find matrix operations useful. Geometry objects support all matrix operations (not including texture matrix operations) the canvas supports. Push() and Pop() are the equivalents of PushMatrix() and PopMatrix(). For more information, see Matrix Manipulation documentation below.
self.geom = Geometry( )
# Push the geometry object’s matrix
self.geom.Push()
# Add an object
self.geom.Place( "common/cube.obj", "", "result" )
# Apply transformations and add other objects
for i in xrange( 4 ):
    self.geom.Translate( 3, 0, 0 )
    self.geom.Rotate( 0, 0, 1, 3.14 * 0.5 )
    self.geom.Place( "common/cube.obj", "", "result" )

self.geom.Pop()
Note that you may place any number of objects before you draw with that specific Geometry object. Once you draw, that object is internally compiled into a high-performance static mesh, and can no longer be added to. Finalize() performs this internal compilation as well, but is not necessary.
The SoundSpectrum drawing API allows you to create custom textures from JPG and PNG image files. In general, most image files can be loaded and used without any special preparation, but for maximum performance and quality, some texture preparation will be helpful. This section provides some guidelines for creating textures with the best possible performance and quality.
"Power-of-Two" Textures
Though the drawing API allows you to use textures of any size, the preferred format is a square image with each side being a power-of-two (2**N for some N). Practically speaking, the most common texture sizes are 32x32, 64x64, 128x128 and 256x256. There are many performance and quality benefits that come automatically from using square power-of-two textures, which will not be available otherwise.
If your texture is not a square, you can "pad" it with transparent pixels to make it a power-of-two square using most paint or drawing programs.
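A couple of small plain-Python helpers (not part of the drawing API) make the sizing rule concrete: checking whether a side is a power of two, and finding the smallest power-of-two square that can hold an arbitrary image:

```python
def is_power_of_two( n ):
    # A positive integer is a power of two iff it has exactly one bit set.
    return n > 0 and ( n & ( n - 1 ) ) == 0

def padded_size( width, height ):
    # Smallest power-of-two square that can contain a width x height image.
    size = 1
    while size < max( width, height ):
        size *= 2
    return size
```

For example, a 100x60 image would be padded into a 128x128 square.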
Image Size
When creating a texture, it is important to consider how much space it will occupy on screen, and how many copies of the texture are likely to be drawn at once. Unless the texture occupies a large portion of the screen and only one copy is drawn per frame, it's likely a good idea to use a relatively low-resolution texture (64x64 or 128x128) instead of a higher resolution (256x256 or 512x512).
Though it is sometimes desirable to use larger textures (512x512 or higher), such as for a non-moving background that fills the entire window, it is often not necessary. When in doubt, try a lower resolution texture first and switch to a higher resolution texture only if the quality is not good enough.
“White” Textures
It is often desirable to render a single texture with many different colors. This can be done by calling SetModColor() on a VertexList (which is internally passed to all of the drawing routines). The modulation color will change the color of the drawing by multiplication (in RGB color space) with the texture's real color. The modulation color can reduce the color in each channel, but cannot increase it. Therefore, to get the maximum flexibility with a texture (with respect to color modulation), it is best to use a white shape on a transparent background. Portions of the texture may be other colors (to simulate shading or shadows), but only the white regions will show the actual modulation color.
Blend Modes
When drawing occurs on top of an area of the screen that has already been drawn, the drawings must be blended. The blend mode is a global state that can be modified to change how this blending occurs. In general, the default value of ss_AlphaBlend gives the most realistic blending behavior, but other modes can be used to achieve different special effects.
· ss_AlphaBlend: the default value. The blending is done in proportion to the alpha of the new object so that new drawing is semi-transparently blended with the existing drawing.
· ss_NoBlend: no blending is done. The alpha of the new object is used as the new object's transparency, but the existing drawing is not blended in.
· ss_LuminosityBlend: use the luminosity of the new object to determine its opacity. Brighter regions of the object will be more opaque, and darker regions will be more transparent.
· ss_LightMode: the new object is drawn with additive blending on top of the old object. The new color is added to the existing color such that the result is never darker than either the new or the old color. This blending mode gives an "illumination" effect.
Reusing VertexLists Using Matrix Transformations
It is often necessary to issue multiple drawing commands for objects which differ in position, size or orientation, but which are otherwise the same. This can happen, for example, when drawing multiple objects, or when drawing one copy of an object that changes position, orientation or size over time.
One way to handle this situation is to update each VertexList for each drawing command, but this makes the code more verbose and is inefficient for the drawing engine. Instead of editing your vertex data each time a change is needed, there’s a good chance that you’ll be able to reuse your VertexList over and over by using matrix transformations. Matrix transformations allow you to scale, rotate and move (also called "translate") your coordinate data, without changing the data itself. As an example, consider the following code, which draws the VertexList that we created in the above sample code twice: once relative to (0, 0, 0), and then again shifted .2 units along the X-axis:
self.circle.DrawLines()
Translate( .2, 0, 0 )
self.circle.DrawLines()
Matrix transformations are cumulative, meaning that each transformation is applied in addition to the previous transformation. So translating twice by (.1, 0, 0), for example, will have the same effect as translating once by (.2, 0, 0). To elaborate on the last example, we can draw the second object shifted to the right, and twice as large as the first, by using a cumulative translate and scale:
self.circle.DrawLines()
Translate( .2, 0, 0 )
Scale( 2.0, 2.0, 2.0 )
self.circle.DrawLines()
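The cumulative behavior can be sketched with plain-Python 3x3 matrices acting on 2D points (illustrative only; the engine's matrices are internal, and OpenGL-style composition is assumed here, under which the most recently issued transform reaches the vertex data first):

```python
def identity():
    return [ [ 1.0, 0.0, 0.0 ], [ 0.0, 1.0, 0.0 ], [ 0.0, 0.0, 1.0 ] ]

def multiply( a, b ):
    # Standard 3x3 matrix product.
    return [ [ sum( a[ i ][ k ] * b[ k ][ j ] for k in range( 3 ) )
               for j in range( 3 ) ]
             for i in range( 3 ) ]

def translate( tx, ty ):
    return [ [ 1.0, 0.0, tx ], [ 0.0, 1.0, ty ], [ 0.0, 0.0, 1.0 ] ]

def scale( sx, sy ):
    return [ [ sx, 0.0, 0.0 ], [ 0.0, sy, 0.0 ], [ 0.0, 0.0, 1.0 ] ]

def apply( m, x, y ):
    return ( m[ 0 ][ 0 ] * x + m[ 0 ][ 1 ] * y + m[ 0 ][ 2 ],
             m[ 1 ][ 0 ] * x + m[ 1 ][ 1 ] * y + m[ 1 ][ 2 ] )

# Translate(.2, 0) followed by Scale(2, 2): each transform composes onto the
# current matrix, so the scale is applied to a vertex before the translate.
current = identity()
current = multiply( current, translate( 0.2, 0.0 ) )
current = multiply( current, scale( 2.0, 2.0 ) )
point = apply( current, 1.0, 0.0 )  # a vertex at (1, 0) lands at (2.2, 0)
```

This is why the second circle is both shifted and enlarged: the scale doubles every vertex, and the translate then moves the whole result.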
As you use matrix transformations, you’ll quickly find it’s very useful to save and restore matrix states. This lets a piece of drawing code manipulate the transformation matrix however it wishes, while still preserving the state of the matrix before the code was called. To save a state, we "push" a matrix onto a stack. To restore the previous state, we "pop" a matrix from a stack. Consider the following code in which we render multiple particles at different positions:
for ball in self.ballList:
    PushMatrix()
    Translate( ball.x, ball.y, ball.z )
    self.ballShape.DrawTriangles()
    PopMatrix()
In this example, we use a matrix transformation in anticipation that ballShape draws a ball centered at (0,0,0). Note that because transformations are cumulative, this technique would not work without the PushMatrix()/PopMatrix() calls. Note that each call to "push" a matrix must be balanced by exactly one call to "pop" the matrix. If this is not done, you will get an error.
To restore the default state of no transformations, you can also call SetIdentity(), which loads the identity matrix. Finally, in addition to the coordinate geometry transforms described above, it is also possible to apply transformations to texture coordinates by using "Tex" versions of the matrix manipulation commands described here (e.g. TexTranslate(), TexScale(), etc). Applying transformations to the texture matrix is less common than manipulating the geometry matrix, but it allows for effects such as scrolling and changing the size of textures.
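The push/pop bookkeeping amounts to an ordinary stack. A minimal plain-Python sketch (illustrative only, with an opaque value standing in for the matrix) shows why every push needs a matching pop:

```python
class MatrixStack:
    def __init__( self ):
        self.current = "identity"   # stand-in for the current transform
        self.saved = []

    def push( self ):
        # PushMatrix(): save a copy of the current state.
        self.saved.append( self.current )

    def pop( self ):
        # PopMatrix(): restore the most recently saved state.
        if not self.saved:
            raise RuntimeError( "pop without a matching push" )
        self.current = self.saved.pop()

stack = MatrixStack()
stack.push()
stack.current = "ball transform"   # Translate()/Scale()/Rotate() would go here
stack.pop()                        # back to the state before the push
```

An unbalanced pop leaves nothing to restore, which is exactly the error condition the engine reports.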
The provided graphical APIs place no restrictions on the number or complexity of objects in a scene. Practically speaking, the only limit on the complexity of a scene is performance. By following some simple performance guidelines, scene rendering can be made highly efficient, thus allowing for more complex and visually rich scenes. These considerations are especially important if you are building configs to share with others. Bear in mind that your scenes might be run on machines much slower than your own!
Reuse textures and VertexList Objects
Many scenes create additional Python objects (in addition to the standard draw class object) to organize the drawing of multiple scene objects. This is a powerful technique which simplifies the drawing process, but it can also lead to trouble if each object creates its own resources. It is never appropriate to create multiple copies of the same texture data or vertex list. If multiple objects require the same texture or vertex list, they should all share the same texture or vertex list instead of loading or creating it themselves. This is typically accomplished by having the main draw object pass the texture or vertex list to the new object upon creation or during each frame. In this way, the main draw object becomes the exclusive manager of the resource.
Batch Drawing
You should try to do as much drawing as possible with as few draw commands as possible, rather than using a Python loop and drawing in small pieces. For example, nested loops containing draw commands tend to degrade performance quickly because of the overhead associated with calling a draw command. Each draw command was designed for mass (or “batch”) drawing by allowing you to specify an index offset and stride-related parameters, giving you a lot of flexibility. For example, if you had a VertexList that contained an N*N list of vertices representing a 2D (N by N) mesh, you could draw it as a grid with just two DrawLines() calls:
# Draws all the “horizontal” lines
self.mesh.DrawLines( 0, N, N, 0, 1, N )
# Draws all the “vertical” lines
self.mesh.DrawLines( 0, N, N, 0, N, 1 )
Use “Static” VertexLists
When drawing objects with a large number of vertices that do not change frequently, you can use a vertex buffer to speed up rendering. A vertex buffer is simply a special object that replaces a vertex list and allows the drawing engine to avoid reprocessing the entire vertex list each frame. Remember that applying matrix transformations instead of modifying the VertexList may provide a similar benefit.
Precompute When Possible
Many scenes use complex computations to determine the positions of objects on screen, and often these computations need to be updated at each frame to properly animate the objects. As more and more objects are placed onscreen, the computation at each frame can cause the scene to slow down. It is often possible to take parts of the per-frame computation and move them to the initialization code. The computed values can be stored in the draw object's variables as numbers or as lists of data which can then be accessed at each frame. The same principle applies to loops which execute the same code many times per frame: if computation inside the loop can be moved outside of it so that it executes only once per frame, the scene's drawing performance will improve.
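For instance (a generic sketch; the class and table names are hypothetical), trigonometric tables can be built once at initialization and merely indexed each frame:

```python
import math

class RingDrawer:
    N = 256   # number of points around the ring

    def __init__( self ):
        # Expensive per-point trig is done ONCE here, not every frame.
        self.cosTable = [ math.cos( 2 * math.pi * i / self.N ) for i in range( self.N ) ]
        self.sinTable = [ math.sin( 2 * math.pi * i / self.N ) for i in range( self.N ) ]

    def PointAt( self, i, radius ):
        # Per-frame cost is now just a lookup and a multiply.
        return ( radius * self.cosTable[ i ], radius * self.sinTable[ i ] )
```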
Use matrix transformations and color modulation instead of generating new vertex data
It is frequently desirable to render multiple objects that are similar, but not identical. In a particle system, for example, one may render hundreds of objects with different positions, sizes and colors. A first attempt at such a system might involve a loop in which a list of color vertex objects is created for each particle, but this is highly inefficient: creating hundreds of vertex lists can waste a lot of time. Instead, the vertex list should be created only once (preferably when the render object is created) and the position and size should be changed via matrix transformations. Color changes can be handled by using white (RGB values of 1, 1 and 1) for the master vertex list, and then setting the Color argument taken by the draw method.
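The idea looks like the following sketch. PushMatrix, Translate, Scale, and PopMatrix are the matrix functions documented below; the vertex list and its draw method are stand-ins stubbed here so the example is self-contained (the real draw method's name and Color argument depend on the primitive being drawn):

```python
ops = []   # records the call sequence, standing in for the engine

def PushMatrix():          ops.append( "push" )
def PopMatrix():           ops.append( "pop" )
def Translate( x, y, z ):  ops.append( ( "translate", x, y, z ) )
def Scale( x, y, z ):      ops.append( ( "scale", x, y, z ) )

class WhiteQuadList:
    # Stand-in for ONE vertex list built once with white (1, 1, 1) colors.
    def Draw( self, Color ):
        ops.append( ( "draw", Color ) )

quad = WhiteQuadList()     # created once, reused for every particle
particles = [ { "pos": ( i, 0.0, 0.0 ), "size": 1.0 + i, "color": ( 1, 0, 0, 1 ) }
              for i in range( 3 ) ]

for p in particles:
    PushMatrix()
    Translate( *p[ "pos" ] )                         # position via the matrix
    Scale( p[ "size" ], p[ "size" ], p[ "size" ] )   # size via the matrix
    quad.Draw( Color = p[ "color" ] )                # color via modulation
    PopMatrix()
```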
How do I copy vertices, colors or lists of vertices or colors?
Python typically passes objects around by reference, meaning that when an object is assigned to more than one variable, all of the variables refer to the same copy of the object. This means that any changes made to the object will affect all of the variables which are pointing to it. Consider the following example:
list1 = [ 1, 2, 3 ]   # Create a simple list
list2 = list1         # Assign list2 to list1
list1[ 0 ] = 5        # Modify list1
print list2           # Print list2
The result:
[ 5, 2, 3 ]           # list2 has been modified!
In this example, the variable list2 is never explicitly changed, but ends up with new values because it is still referencing the same object as list1. Because of this, it is important to be careful when assigning objects like vertices, colors or lists. If you wish to modify one set of the data without modifying the others, you will need to copy the data instead of simply assigning it. In Python, this can be done using the standard copy module as shown in the example below:
import copy
listRef = list1                  # listRef is a reference to list1
listCopy = copy.copy( list1 )    # listCopy is a COPY of list1
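One caveat worth knowing: copy.copy() makes a shallow copy, so for nested data (such as a list of vertex lists) the inner objects are still shared. copy.deepcopy() copies everything:

```python
import copy

verts = [ [ 0, 0 ], [ 1, 0 ], [ 1, 1 ] ]   # a list of 2-element point lists
shallow = copy.copy( verts )        # new outer list, SAME inner lists
deep = copy.deepcopy( verts )       # new outer list AND new inner lists

verts[ 0 ][ 0 ] = 5
# The shallow copy sees the change; the deep copy does not.
```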
Why are far away objects drawn in front?
When drawing multiple objects which vary in terms of camera distance, you may notice that objects far away appear to be drawn in front of the scene. This is because, by default, objects are placed on-screen in the order they are drawn, without regard for objects which may be in front of others. The best way to solve this problem, if possible, is to render objects in back-to-front order so that the objects closest to the camera are drawn last. Depending on how much the objects in your scene move around, this may be a simple matter of rearranging your drawing code; if the ordering changes at runtime, you can instead sort the objects before drawing so that they appear in the correct order.
If rendering objects back-to-front is not practical, another option is to use a feature called the depth buffer, though using the depth buffer is associated with its own set of complications. See the next question on "incorrect alpha blending with the depth buffer enabled" for more information on these issues, and see the documentation for SetDepthBuffer for more information on using the depth buffer.
Why do I get incorrect alpha blending with depth buffer enabled?
Though the depth buffer is very useful for solving the problem of objects being drawn in the correct order, it simply does not produce the correct results when alpha blending is being used. Alpha blending depends on objects being drawn in back-to-front order and will not produce the right results when the depth buffer is being used to ensure that objects in the back do not appear in the front of the scene. Note that even if you aren't using transparency explicitly by specifying semi-transparent colors, you may be using it implicitly through textures containing an alpha channel.
The Drawing API
This section contains a reference to all of the functions which can be used for drawing and communication with the SoundSpectrum Drawing Engine.
Some of the arguments to the drawing functions are shown with "=", followed by a value. This means that the argument has a default value and need not be specified unless another value is desired. This greatly simplifies many of the function calls which otherwise look somewhat complicated; often the default values will suffice.
SetDrawClass( PythonClass )
Sets the Python class to be called for drawing. This is typically the only class in a drawing file. See the included demos for a sample of how this class is defined and used. The only requirement for a draw class is that it has an instance method named Draw, which takes two arguments: the current time and a list of FFT values. It is in this method where all drawing occurs. More advanced drawing classes typically also implement other methods to perform computation or to draw sub-sections of a scene. The following code sample shows a minimal drawing config which defines a drawing class and informs the engine of the object. In this example, no drawing is done:
from SoundSpectrumDrawing import *

class SimpleDrawer:
    def Draw( self, time, fft ):
        pass

SetDrawClass( SimpleDrawer )
Set3DCamera( CamXYZ, LookXYZ = XYZ( 0, 0, 0 ), UpXYZ = XYZ( 0, 1, 0 ), FieldOfView = .785, ZNear = 0.5, ZFar = 100.0 )
Sets up the camera for a 3D scene.
Only the CamXYZ argument, which specifies where the camera is located, is required; the others are typically left at their default values.
The next two camera position arguments are optional, and in most cases the default values are sufficient. The "LookXYZ" argument specifies where the camera is pointing. This value defaults to the center of the world, XYZ(0, 0, 0). The "UpXYZ" argument specifies which direction is considered "up". This value defaults to the Y-axis pointing up, (0, 1, 0).
The near and far clipping planes define the front and rear boundaries of the scene. Depending on where your objects are placed in a scene, you may wish to change these values. Note, however, that changes to the near and far clipping planes will affect the perspective in the scene. When the depth buffer is enabled, the near and far clipping planes define the range of depths that the depth buffer must keep track of. If the planes are too far apart, rendered objects start "fighting" in the Z-buffer, which leads to ugly artifacts in the scene. It is thus wise to keep the distance between the clipping planes as small as possible when using the depth buffer.
Set2DCamera( Mode = ss_Stretch, ZNear = 1, ZFar = -1 )
Sets up the camera for a 2D scene.
Mode specifies how the aspect ratio should be managed for this scene. Acceptable values are ss_Stretch, ss_Crop, ss_Letterbox. If not specified, the value of ss_Stretch is used. Camera values are not preserved between frames, so if a non-default camera view is desired, it should be set during each frame. ZNear and ZFar specify the clipping plane boundaries. Any objects drawn outside of these boundaries will not be seen.
GetViewExtents()
Returns a 2-element tuple for the x- and y- frame extents. The frame extents specify the coordinate-space distance from the center of the screen to the boundary of the screen. With a 2D canvas, this means half the width and height of the canvas. With a 3D camera scene, the value is computed at the near clipping plane.
These values should be used to lay out config drawing within the visible portion of the screen.
Note that the view extents change when Set2DCamera or Set3DCamera is called, so this function should be called after setting up the camera.
SetDepthBuffer( Value, Clear = 0 )
Sets the behavior of the depth buffer, and optionally clears the depth buffer. The depth buffer is initially disabled, and will be cleared every frame. This function may be called multiple times during a scene.
The depth buffer is used to disable drawing of surfaces that are obscured by other surfaces in a scene, based on distance from the camera. The depth buffer is initially disabled, meaning that objects are rendered to screen in the order they are drawn, without regard for objects that may be obscuring them. This can lead to unrealistic effects in which far away objects are drawn in front of close up objects. The depth buffer can be used to resolve this problem by keeping track of where objects are drawn and discarding fragments which are drawn behind existing objects.
ss_DepthDisabled – Disables the depth buffer for testing and writing. Objects will be drawn on top of previously drawn objects.
ss_DepthEnabled – Enables the depth buffer for testing and writing. Objects will only be drawn if they are closer to the camera than previously drawn objects (that were written to the depth buffer).
ss_DepthReadEnabled – The depth buffer will be tested against, but will not be modified. Objects will only be drawn if they are closer to the camera than previous objects whose depth values were written to the depth buffer.
ss_DepthWriteEnabled – The depth buffer will be written to, but not tested against. Objects will be drawn on top of previously drawn objects, and will occlude future objects drawn behind them when the depth buffer is tested against.
A major caveat with the depth buffer is that it cannot account for alpha blending. If a config makes use of alpha blending, either explicitly by using semi-transparent alpha colors or implicitly by using textures with alpha masks, then using the depth buffer will not produce the expected results. Caution should be used when rendering transparent objects with the depth buffer enabled.
GetBackgroundColors()
Returns a list of 3 HSLA() color objects representing primary, secondary and highlight colors.
GetRandomBackgroundColorIndex( ExcludePrimary )
Returns a value between 0 and 2 indicating an index to be used with the color list returned by GetBackgroundColors(). The index value returned is chosen with a probability weighted using the 50%/30%/20% ratio described above. If the ExcludePrimary argument is set to 1, the primary color is excluded and the remaining colors are returned with a 60%/40% probability weighting.
CombineColors( inColorA, inColorB, inFactor )
Returns an interpolated color from inColorA at inFactor = 0, to inColorB at inFactor = 1.
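Conceptually this is a per-channel linear interpolation. The following pure-Python sketch (not the engine's implementation) shows the math on plain RGBA tuples:

```python
def combine_colors( a, b, factor ):
    # factor = 0 returns a; factor = 1 returns b; values between blend.
    return tuple( ca + ( cb - ca ) * factor for ca, cb in zip( a, b ) )

red    = ( 1.0, 0.0, 0.0, 1.0 )
blue   = ( 0.0, 0.0, 1.0, 1.0 )
purple = combine_colors( red, blue, 0.5 )   # halfway between the two
```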
ConvertToRGBA( inColor )
Returns an RGBA-formatted color, regardless of inColor’s format.
ConvertToHSLA( inColor )
Returns an HSLA-formatted color, regardless of inColor’s format.
SetBlendMode( BlendMode )
Sets the blend mode for future drawing commands. The blend mode determines how newly drawn polygons are combined with the pixels already on screen.
Note: The blend mode defaults to ss_AlphaBlend at each frame. The blend mode must therefore be changed during each frame if another value is desired.
Color Palettes are available to newer SoundSpectrum applications (Aeon and later). Color Palettes are not supported in WhiteCap. VertexLists and Geometry objects each have a SetPalette() function, while RTT objects (render to texture) support specifying a palette when drawing with Draw(). Palettes use a supplied colormap that maps the luminance values of each drawn pixel to an alternate color.
SetPalette( inPalette, in0 = 0.0, in1 = 1.0 )
Sets a Palette for a VertexList or Geometry object.
The standard is to set any foreground config objects to ss_PaletteFore, and any background configs to ss_PaletteBack. This will preserve the intended behavior of the colormap throughout all scenes. In some cases, it may be useful to use ss_PaletteFull (usually when your foreground draws fully to the screen, and occludes the background). Note that SetPalette only needs to be called once per object, and usually should be set in the __init__ function.
CreateTextureFromImage( inPathname, inTexFlags = 0 )
Loads a texture from an image file.
CreateTextureFromText( String, Font = "", Size = 24 )
Creates a texture by rendering text.
CreateTextureFromDot( Falloff = .3, Size = 128 )
Creates a texture from a dot.
CreateTextureFromEvaluator( inProgram, inPixelX, inPixelY, inPixelZ = 1, inFlags = None )
Creates a texture from an evaluation script.
CreateTextureForReadback( Flags = 0 )
Creates a texture suitable for rendering to texture.
BeginRenderToTexture( Texture, ClearColor, SizeX, SizeY )
Begins rendering to the specified texture.
Note that you can stack render-to-texture calls with different textures. SoundSpectrumDrawing.py provides a utility class for fullscreen drawing (class RTT).
EndRenderToTexture()
Ends rendering to the previously specified texture.
The following functions are available to Geometry objects. Any parameter shown with an equals sign has a default value and does not need to be specified.
Place( inObjURL, inSrcNameFilter = "", inDstGroupName = "" )
Places one or more objects from a file into the Geometry object, using the current Geometry object’s internal transformation matrix.
Note that Place() can be called any number of times before Finalize() or Draw() is called. After Finalize() or Draw() is called, no more objects can be placed in this Geometry object.
SetTexture( inTexture, inModulationColor = None )
Sets the object’s texture and modulation color.
This texture/modulation color will apply to all grouped objects in this Geometry object. The object’s texture/modulation color persists between frames, and can be changed between Draw() calls.
SetModColor( inModulationColor )
Sets the object’s modulation color.
Has the same effect as calling SetTexture( t, inModulationColor ) where t is the currently set texture for this object.
SetPalette( inPalette, in0 = 0.0, in1 = 1.0 )
Sets the object’s palette. See “Color Palettes” in the API documentation below.
Draw( inGroupName = "" )
Draws one or more geometry groups within this Geometry object.
Finalize()
Finalizes geometry object, internally compiling all placed objects into their defined groups. Once this is called, no more objects may be placed in this Geometry object.
Calling Draw() finalizes the object as well, so this function is optional but leads to less ambiguous code in some cases.
GenFromScript( inScriptText )
Generates grouped geometry using a text script.
Internally, the Place() function and the matrix functions generate a script (see SoundSpectrumDrawing.py for details). You may optionally generate your own script and pass it here.
GenFromURL( inObjURL, inSrcNameFilter = "", inDstGroupName = "" )
Generates grouped geometry from a source object. Note that this has exactly the same effect as calling Place() with the same arguments and then calling Finalize().
Push() / Pop() / Translate( x, y, z ) / Scale( x, y, z ) / Rotate( x, y, z, angle )
These matrix functions affect the internal transformation matrix of the Geometry object. See “Matrix Manipulation Functions” in the API specification.
Push() and Pop() are equivalent to the scene’s PushMatrix() and PopMatrix().
PushMatrix()
Pushes the world matrix stack. Any transformations done to the world matrix will be undone once PopMatrix() is called.
PopMatrix()
Pops the world matrix stack, setting it to its state before the last PushMatrix() was called.
Note that there should be one PopMatrix() call for every PushMatrix() call.
Rotate( x, y, z, angle )
Applies a rotation to the world matrix. x, y, z specifies a vector, and angle specifies an angle (in radians) to rotate around the given vector.
Translate( x, y, z )
Applies a translation to the world matrix. Moves the current world matrix by the vector (x, y, z).
Scale( x, y, z )
Performs a scaling transformation on the world matrix. The world matrix is scaled by the x, y, and z values.
Note that scaling by zero is not recommended, as z-buffer fighting may occur.
Also note that using Scale( v ) is equivalent to Scale( v, v, v ) when scaling the scene matrix (not a geometry-script matrix).
TexPushMatrix()
Pushes the texture matrix stack. Any transformations done to the texture matrix will be undone once TexPopMatrix() is called.
TexPopMatrix()
Pops the texture matrix stack, setting it to its state before the last TexPushMatrix() was called.
Note that there should be one TexPopMatrix() call for every TexPushMatrix() call.
TexRotate( x, y, z, angle )
Applies a rotation to the texture matrix. x, y, z specifies a vector, and angle specifies an angle (in radians) to rotate around the given vector.
TexTranslate( x, y, z )
Applies a translation to the texture matrix. Moves the current texture matrix by the vector (x, y, z).
TexScale( x, y, z )
Performs a scaling transformation on the current texture matrix. The texture matrix is scaled by the x, y, and z values.
SetPointLight( inPosition, inLightNum = 0, inAttenuation = 1 )
Enables lighting and positions a point light in the scene.
The positions and colors of lights are not preserved between frames. The positions and colors of lights must therefore be set during each frame.
SetDirectionalLight( inPosition, inLightNum = 0 )
Enables lighting and positions a directional light in the scene.
The positions and colors of lights are not preserved between frames. The positions and colors of lights must therefore be set during each frame.
SetSpotLight( inPosition, inPointAt, inLightNum = 0, OuterAngle = 45, inFalloff = 0.0 )
Enables lighting and positions a spotlight in the scene.
The positions and colors of lights are not preserved between frames. The positions and colors of lights must therefore be set during each frame.
SetLightColor( inLightNumber, inDiffuse )
Enables lighting and sets the color of a light in the scene.
Note: The positions and colors of lights are not preserved between frames. The positions and colors of lights must therefore be set during each frame.
SetSpecular( inLightNum, inShininess = 0.2, inColor = ss_White, inColorScale = 1.0 )
Enables or Disables specular lighting and sets the parameters of specular highlights.
Note: The specular state is not preserved between frames (specular lighting is turned off by default). The specular state must therefore be set during each frame.
SetLighting( inEnabled )
Enables or Disables lighting. Does not need to be called before lights are set (as they will enable lighting as well).
SetAmbientColor( inAmbientColor )
Sets the ambient color applied to objects. Lighting will be additive from this base color.
SetFogState( inEnabled, inStart, inFinish, inColor )
Enables or Disables fog and sets fog properties.
Note that the fog color appears only where objects are drawn. If some areas of the screen are not drawn to while fog is on, you will probably want to set the clear color to the fog color. Fog does not persist between frames, and must be set during each frame.
SetClearColor( inColor )
Specifies the color drawn when the screen is cleared each frame. This only needs to be set once, and will persist between frames.