PSM Tutorial #2-1 : Shader Programs

2012-12-05 2:36 AM

We’ll start out by breaking down Sony’s example project for drawing an image on screen. Once we understand what’s going on, we can move on to making our own sprite rendering system, which will be a lot more flexible.

Let’s take a look at the code, then go through it section by section as in the previous tutorial. You can also open this code up and run it yourself; it’s located in the “Tutorial\Sample02_01” folder of your PSM samples directory. Here is the code:

public class AppMain
{
    static protected GraphicsContext graphics;
    static ShaderProgram shaderProgram;
    static Texture2D texture;

    static float[] vertices = new float[12];

    static float[] texcoords = {
        0.0f, 0.0f, // 0 top left.
        0.0f, 1.0f, // 1 bottom left.
        1.0f, 0.0f, // 2 top right.
        1.0f, 1.0f, // 3 bottom right.
    };

    static float[] colors = {
        1.0f, 1.0f, 1.0f, 1.0f, // 0 top left.
        1.0f, 1.0f, 1.0f, 1.0f, // 1 bottom left.
        1.0f, 1.0f, 1.0f, 1.0f, // 2 top right.
        1.0f, 1.0f, 1.0f, 1.0f, // 3 bottom right.
    };

    const int indexSize = 4;
    static ushort[] indices;

    static VertexBuffer vertexBuffer;

    // Width of texture.
    static float Width;

    // Height of texture.
    static float Height;

    static Matrix4 unitScreenMatrix;

    public static void Main (string[] args)
    {
        Initialize ();

        while (true) {
            SystemEvents.CheckEvents ();
            Update ();
            Render ();
        }
    }

    public static void Initialize ()
    {
        graphics = new GraphicsContext();
        ImageRect rectScreen = graphics.Screen.Rectangle;

        texture = new Texture2D("/Application/resources/Player.png", false);
        shaderProgram = new ShaderProgram("/Application/shaders/Sprite.cgx");
        shaderProgram.SetUniformBinding(0, "u_WorldMatrix");

        Width = texture.Width;
        Height = texture.Height;

        vertices[0] = 0.0f;  // x0
        vertices[1] = 0.0f;  // y0
        vertices[2] = 0.0f;  // z0

        vertices[3] = 0.0f;  // x1
        vertices[4] = 1.0f;  // y1
        vertices[5] = 0.0f;  // z1

        vertices[6] = 1.0f;  // x2
        vertices[7] = 0.0f;  // y2
        vertices[8] = 0.0f;  // z2

        vertices[9] = 1.0f;  // x3
        vertices[10] = 1.0f; // y3
        vertices[11] = 0.0f; // z3

        indices = new ushort[indexSize];
        indices[0] = 0;
        indices[1] = 1;
        indices[2] = 2;
        indices[3] = 3;

        // vertex pos, texture, color
        vertexBuffer = new VertexBuffer(4, indexSize, VertexFormat.Float3, VertexFormat.Float2, VertexFormat.Float4);

        vertexBuffer.SetVertices(0, vertices);
        vertexBuffer.SetVertices(1, texcoords);
        vertexBuffer.SetVertices(2, colors);

        vertexBuffer.SetIndices(indices);
        graphics.SetVertexBuffer(0, vertexBuffer);

        unitScreenMatrix = new Matrix4(
            Width * 2.0f / rectScreen.Width, 0.0f, 0.0f, 0.0f,
            0.0f, Height * (-2.0f) / rectScreen.Height, 0.0f, 0.0f,
            0.0f, 0.0f, 1.0f, 0.0f,
            -1.0f, 1.0f, 0.0f, 1.0f
        );
    }

    public static void Update ()
    {
    }

    public static void Render ()
    {
        graphics.Clear();

        graphics.SetShaderProgram(shaderProgram);
        graphics.SetTexture(0, texture);
        shaderProgram.SetUniformValue(0, ref unitScreenMatrix);

        graphics.DrawArrays(DrawMode.TriangleStrip, 0, indexSize);

        graphics.SwapBuffers();
    }
}

Here is what it looks like when you run the code:

resources/images/2012/12/Tutorial2GameScreen.png

There is a lot more to look at in this tutorial. We’ll be examining what OpenGL is really doing. There are a number of concepts that I’ll touch on only briefly, as they run quite deep, so I will provide some supplementary reading for those who want to dig further into the topics we cover.

Let’s start by looking over the declarations:

public class AppMain
{
    static protected GraphicsContext graphics;
    static ShaderProgram shaderProgram;
    static Texture2D texture;

    static float[] vertices = new float[12];

    static float[] texcoords = {
        0.0f, 0.0f, // 0 top left.
        0.0f, 1.0f, // 1 bottom left.
        1.0f, 0.0f, // 2 top right.
        1.0f, 1.0f, // 3 bottom right.
    };

    static float[] colors = {
        1.0f, 1.0f, 1.0f, 1.0f, // 0 top left.
        1.0f, 1.0f, 1.0f, 1.0f, // 1 bottom left.
        1.0f, 1.0f, 1.0f, 1.0f, // 2 top right.
        1.0f, 1.0f, 1.0f, 1.0f, // 3 bottom right.
    };

    const int indexSize = 4;
    static ushort[] indices;

    static VertexBuffer vertexBuffer;

    // Width of texture.
    static float Width;

    // Height of texture.
    static float Height;

    static Matrix4 unitScreenMatrix;

We already know what a graphics context is from the previous tutorial (http://levelism.com/psp-vita-opengl-tutorial-1-part-1-explaining-the-code-in-a-new-project/), so let’s move on to the ShaderProgram class:

static ShaderProgram shaderProgram;

A shader program is a class that encapsulates shader code. A shader is a piece of code that instructs the GPU on how to handle the graphics data you send it from your program. The way this works is that you first prepare your data in your application code, then send it to the vertex processing unit of the GPU. The vertex shader program is run once for every vertex you send to the vertex processing unit. This is where the geometric calculations are performed and data is prepared to be handed to the next step of the process, the fragment unit.

The fragment unit takes the vertex data and then draws every fragment onto the screen. A fragment is a pixel on a piece of geometry. This is where you perform per-pixel operations. If you have ever seen the filters and color adjustments you can apply in an image editing program such as Photoshop or GIMP, this is where you would perform similar effects.

The vertex step of the process can be used for many useful things; one example is transforming all of your vertex data. One of the most common uses: suppose you have a large world stored as an array of vertices. You send those vertices to the vertex unit along with some mathematical information describing where a camera sits inside your world, and the vertex processing unit uses that camera data to move the entire world so that, when rendered, everything is positioned relative to the camera. In other words, the geometric math required to have a camera is performed on the vertex processing unit.
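
To make that concrete, here is a minimal sketch of the game-code side, using the same Matrix4 constructor and uniform calls that appear in the sample listing above. The cameraX/cameraY values and the idea of a camera offset are purely for illustration and are not part of the sample:

// Sketch only: build a "camera" matrix that shifts the whole world
// by (-cameraX, -cameraY) and hand it to the vertex shader as a uniform.
// cameraX and cameraY are made-up values for illustration.
float cameraX = 50.0f;
float cameraY = 20.0f;

Matrix4 worldMatrix = new Matrix4(
    1.0f,     0.0f,     0.0f, 0.0f,
    0.0f,     1.0f,     0.0f, 0.0f,
    0.0f,     0.0f,     1.0f, 0.0f,
    -cameraX, -cameraY, 0.0f, 1.0f
);

shaderProgram.SetUniformBinding(0, "u_WorldMatrix"); // once, after loading the shader
shaderProgram.SetUniformValue(0, ref worldMatrix);   // each frame, before drawing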

Here is a very very simple vertex shader.

void main(float4 in a_Position    : POSITION,
          float4 out v_Position   : POSITION,
          uniform float4x4 u_WorldMatrix)
{
    v_Position = mul(a_Position, u_WorldMatrix);
}

What we have here in the main function parameters is an in variable, which represents the current vertex we have sent to the vertex processing unit.

The word “in” marks the data as incoming from the program code. Next we have an out variable, which represents the transformed vertex data that we send to the next stage of the GPU pipeline (the fragment shader). The word “out” says that this data will be sent onwards. In the case of vertex positions, the out parameter doesn’t send the data directly to the fragment shader: there are internal stages that determine whether or not the data reaches the fragment shader by calculating its depth from the camera and whether or not it actually falls inside the space viewable by the camera. We will discuss this in more depth in a later tutorial when we deal with more advanced fragment operations. For the most part, other data such as colors and texture coordinates is passed straight through to the fragment shader.

Next we have a uniform matrix variable. A uniform is a piece of data from our application code that is marked to be passed into the vertex shader; uniforms are how we hand information from the game code to the shader. In this case you can think of the uniform matrix as the camera we discussed earlier. Finally we have the line:

v_Position = mul(a_Position, u_WorldMatrix);

This performs a matrix multiplication to calculate the new position of the vertex based on the camera data passed in. Matrix and vector operations are built into the shader language, so luckily we do not need full knowledge of how they work internally. Essentially, if you multiply the vertex by the “camera transformation matrix” you get a vertex that is now in the correct position relative to the camera.
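
If you are curious what the mul call actually computes, here is a plain C# sketch of the arithmetic, treating the position as a row vector multiplied by a 4x4 matrix stored as rows of four floats. This is purely illustrative; the real work happens on the GPU and the storage convention of Matrix4 is handled for you by the runtime.

// Illustrative only: the arithmetic behind mul(a_Position, u_WorldMatrix),
// with a row vector (x, y, z, w) and a 4x4 matrix laid out row by row.
static float[] MulRowVectorMatrix(float[] v, float[] m)
{
    var result = new float[4];
    for (int col = 0; col < 4; col++) {
        result[col] = v[0] * m[0 * 4 + col]
                    + v[1] * m[1 * 4 + col]
                    + v[2] * m[2 * 4 + col]
                    + v[3] * m[3 * 4 + col];
    }
    return result;
}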

Some other examples of things you can do on the vertex processing unit include skeletal character animation and model deformations. While it’s true you could do some of these operations in your application code, it’s almost always advisable to do them on the vertex processing unit. Because it is a specialized processor built specifically for geometric operations, you will almost always get much better performance doing this work on the GPU rather than the CPU.

Once the vertex processing unit has finished processing its data, it is sent to the fragment unit (also known as the pixel shader). As stated before, a fragment is a pixel on a piece of geometry. The fragment program is run once for every fragment. At its most basic level, you simply receive an incoming fragment and send it back to the screen with a color. In reality, depending on what you are doing, you might be sampling a texture for a 3D model or applying a filter to give the game an artistic impressionist effect or toon style. Here is an example of a very simple fragment shader.

void main(float4 out color : COLOR)
{
    color = float4(0, 1.0, 0, 1.0);
}

As you can see, here we have one out parameter, which is the color of the pixel we will be rendering to the screen. Usually we would have some information coming in from the vertex shader, but for the sake of simplicity we will just color the incoming fragment green and paint it to the screen. Coloring the fragment green is done on this line:

color =  float4(0, 1.0, 0, 1.0);

The color out variable is assigned a float4. A float4 is a vector of four floating point values. In this case we interpret it as a color in RGBA format, which stands for Red, Green, Blue, Alpha. The RGB portion should be fairly self-explanatory. The alpha represents how transparent the pixel is: 0 is fully transparent and 1 is fully opaque. So in our case we have R=0, G=1.0, B=0 and A=1.0, which is why we get a fragment that is green and fully opaque.
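
If you are more used to 0–255 color values, here is a quick C# sketch (not part of the sample) that converts them into the normalized 0.0–1.0 floats a float4 color expects:

// Illustration only: convert a 0-255 RGBA color to normalized floats.
static float[] ToFloat4(byte r, byte g, byte b, byte a)
{
    return new float[] { r / 255.0f, g / 255.0f, b / 255.0f, a / 255.0f };
}

// ToFloat4(0, 255, 0, 255) gives { 0.0f, 1.0f, 0.0f, 1.0f } -- the same
// opaque green the fragment shader above writes out.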

So to reiterate the process of how graphics get drawn to the screen I’ve written this small schematic to reinforce the concept.

resources/images/2012/12/GPUPathdiagram2.png

In the next part of this tutorial we will look at the shaders included with this example, which are slightly more complex. It will be a shorter part. Hopefully you now have a solid idea of how the graphics pipeline works. If you have any feedback, questions or comments, leave a comment on this post and I will do my best to address them.

I would also like to thank Bruce Sutherland for reviewing this tutorial and making some corrections regarding how vertex out parameters in the vertex shader are not sent directly to the fragment shader. Here is his explanation of what happens to a vertex out parameter after the vertex processing unit:

The output of the vertex shader is still a vertex in 3D space.

In OpenGL and DirectX the vertices are mapped to what’s called the canonical view volume. In OpenGL that’s a cube which goes from -1, -1, -1 to 1, 1, 1.

You then have another stage in the GPU which carries out clipping and then screen mapping. These stages are carried out in Geometry Shaders on newer versions of DirectX and OpenGL on the desktop but haven’t made it to mobile GPUs yet.

Some people can get confused by vertex shaders when they don’t realise what the output they are trying to generate actually is.

Extra vertex data like texture co-ordinates would mostly go untouched, except for clipped vertices; there, the texture co-ord sent to the fragment shader would probably be whatever the GPU calculates the coordinate to be at the point of intersection.

Bruce also has a blog with tutorials and topics on programming, including Android-related topics, if you are interested in that. You can visit his site here:

http://brucesutherland.blogspot.com.au/

Tags: psm_tutorial_series_2

PSM Tutorial #1 : Explaining the code in a New Project

2012-12-01 4:41 AM

Before we can start drawing an object on the screen we need to setup our Update/Render loop and our graphics context.

Most realtime games happen inside a big loop. In the Update/Render loop pattern, the logic that drives the game happens in the Update step of the loop. This includes things like checking user input, moving objects, starting and stopping sounds, AI routines and almost anything not related to drawing that happens in a game.

The render step of the game loop pretty much only handles one thing: Drawing objects and graphics on the screen.

So let’s get started. Open up PlayStation Mobile Studio and start a new solution. Choose “PlayStation Mobile Application”, give your project a name and then hit OK.

At this point you can run the program and you will see a black screen. Exciting! The screen is actually being redrawn many times per second; of course, since it’s redrawing a black canvas every time, it looks like nothing is happening.

On the left side of the screen you will see the solution explorer. There is only one source code file here, called “AppMain.cs”. Open this file up.

Let’s take a look at the basic program structure for “AppMain.cs”.

using System;
using System.Collections.Generic;

using Sce.PlayStation.Core;
using Sce.PlayStation.Core.Environment;
using Sce.PlayStation.Core.Graphics;
using Sce.PlayStation.Core.Input;

namespace Tutorial01_01_GameLoop
{
    public class AppMain
    {
        private static GraphicsContext graphics;

        public static void Main (string[] args)
        {
            Initialize ();

            while (true) {
                SystemEvents.CheckEvents ();
                Update ();
                Render ();
            }
        }

        public static void Initialize ()
        {
            // Set up the graphics system
            graphics = new GraphicsContext ();
        }

        public static void Update ()
        {
            // Query gamepad for current state
            var gamePadData = GamePad.GetData (0);
        }

        public static void Render ()
        {
            // Clear the screen
            graphics.SetClearColor (0.0f, 0.0f, 0.0f, 0.0f);
            graphics.Clear ();

            // Present the screen
            graphics.SwapBuffers ();
        }
    }
}

Now let’s analyze the individual portions of this code:

private static GraphicsContext graphics;

public static void Main (string[] args)
{
    Initialize ();

    while (true)
    {
        SystemEvents.CheckEvents ();
        Update ();
        Render ();
    }
}

public static void Initialize ()
{
    // Set up the graphics system
    graphics = new GraphicsContext ();
}

First we have a static member of type GraphicsContext defined. The graphics context is a system that handles many graphics-related functions; you need it to have any sort of display. Here is a list of some of the things it can do:

  • Give you the width and height of the current device screen (in pixels)
  • Clear the screen
  • Draw objects on the screen
  • Enable and disable graphical features such as transparency
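
As a quick example of the first item, here is a minimal sketch using the same call the Sample02_01 code uses to query the screen size; the Console output is just for illustration:

// Sketch: ask the graphics context for the screen dimensions in pixels.
ImageRect rectScreen = graphics.Screen.Rectangle;
Console.WriteLine("Screen is " + rectScreen.Width + "x" + rectScreen.Height + " pixels");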

Next we have the main entry point of the program. The first method called is Initialize(). Inside Initialize() we only do one thing: instantiate our GraphicsContext object. It’s important to note that you only want to do this once in your program. Instantiating a GraphicsContext twice will cause an exception to be thrown and crash your program.
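
One simple way to be defensive about that (not part of the default template, just a sketch) is to guard the creation with a null check:

public static void Initialize ()
{
    // Only create the graphics context if it doesn't exist yet,
    // so a second accidental call to Initialize() is harmless.
    if (graphics == null) {
        graphics = new GraphicsContext ();
    }
}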

Next we have an infinite while loop. This runs continuously until some other event ends the loop, causing the end of Main to be reached and the program to shut down. Inside this loop we have three methods being called: CheckEvents(), Update() and Render().

SystemEvents.CheckEvents ();

This checks whether any Vita (or Android/Windows) specific system events have occurred and allows them to be processed. This call must be in your main game loop.

Update();

This is the update portion of a game loop. Almost all of your game specific code should be in this method. Things like moving objects around, handling user input and playing sounds should be done here.

Render();

The render portion of the game loop should really only be used for drawing things to the screen. So after you have moved an object and handled all the code for your game for this iteration of the loop you draw everything.

Now let’s take a look at the Update() and Render() methods in depth.

public static void Update ()
{
    // Query gamepad for current state
    var gamePadData = GamePad.GetData (0);
}

public static void Render ()
{
    // Clear the screen
    graphics.SetClearColor (0.0f, 0.0f, 0.0f, 0.0f);
    graphics.Clear ();

    // Present the screen
    graphics.SwapBuffers ();
}

The default Vita project’s Update method really does nothing at all, but let’s take a look anyway. The code is one line:

var gamePadData = GamePad.GetData (0);

This gets the current state of the first gamepad connected to the system. In the case of the portable Vita, with only one controller, you will most likely only ever be reading the first gamepad. I suspect the option to select a gamepad exists in case PSM is extended in the future to support other systems like the PS3 (hint hint, Sony).

The code is only fetching the gamepad data. It’s not checking it for anything, so it’s essentially doing nothing.
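
To show what a real check might look like, here is a minimal sketch (not in the default template) that tests whether the Cross button is held down, assuming the GamePadButtons flags enum from Sce.PlayStation.Core.Input (check the API reference for the exact member names):

var gamePadData = GamePad.GetData (0);

// Test a single button bit out of the flags returned by the gamepad.
if ((gamePadData.Buttons & GamePadButtons.Cross) != 0) {
    // Cross is currently held down -- jump, shoot, etc.
}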

One other note: you may notice that the keyword “var” is used rather than an explicit declaration of the type the GetData() method returns. In some languages this can mean the type is determined at runtime, but in C# the compiler infers the type at compile time. This means typing something like:

var gamePadData = GamePad.GetData (0);

is exactly the same as writing:

GamePadData gamePadData = GamePad.GetData (0);

Finally we move on to the Render() method. This is where some stuff actually happens:


public static void Render ()
{
    // Clear the screen
    graphics.SetClearColor (0.0f, 0.0f, 0.0f, 0.0f);
    graphics.Clear ();

    // Present the screen
    graphics.SwapBuffers ();
}

First we set the clear color of our graphics context. This sets the color the graphics context clears the screen to when you call the Clear() method. Parameters are passed as Red, Green, Blue, Alpha, so if you were to pass in 1f, 0f, 0f, 0f you would get a red screen. This call could actually be moved into the Initialize() method after creating the graphics context, since all it does is change the color of subsequent Clear() calls; calling it in Render() every frame is redundant.
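
Here is a sketch of that refactor; the behaviour is identical, the clear color is just set once:

public static void Initialize ()
{
    graphics = new GraphicsContext ();
    // Set the clear color once; it stays in effect for every Clear() call.
    graphics.SetClearColor (0.0f, 0.0f, 0.0f, 0.0f);
}

public static void Render ()
{
    graphics.Clear ();
    graphics.SwapBuffers ();
}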

The next call is Clear(). This wipes everything off the screen and gives you a blank screen in the color you last passed to SetClearColor().

Finally, the last call is SwapBuffers(). When you are drawing in PSM there are two images: the one you are looking at on the screen and the one you are drawing to, which is not visible. So in a Render() method you are actually drawing to an invisible image in the background, and when you have finished all your drawing you call SwapBuffers(). This swaps the image you are viewing on the screen with the image you just drew to, so only a completely drawn image is ever shown on the screen. This is done to avoid the visual errors and flickering that can occur if rendering were done directly to the visible image.

That’s everything for this portion of the tutorial. If you spot any issues, inaccuracies or would like me to better explain a concept please let me know in the comments.

No source code is included with this tutorial because this is the default code PSM will give you when you create a new PSM application project.

In part 2 we will do something more interesting: draw an image on the screen.

Tags: psm_tutorial_series_1

New Flash Game I’m Working on : Lil’ Commando

2012-12-01 7:02 AM

So recently Sony support has really been giving me a very difficult time over the fact that I can’t add funds to my Canadian PSN account to pay for my PSM developer license from Japan. It’s been quite frustrating. I would, however, like to point out that this does not reflect on the PSM Vita development team. They are awesome and very helpful to the extent they have the authority to be.

So anyway, since my PSM projects are temporarily on hold, I’ve gotten into some less stressful Flash game development again. I decided to go for a game type similar to some of the old arcade games where the player sits at the bottom of the screen, can only move left and right, and uses a reticle to aim their gun upwards at enemies. I’m not sure if this genre has an exact name. Perhaps Aiming-Platformer? If anyone knows, please let me know.

It’s been really fun just working on the game rather than the framework or too much art. I’m going with the Cactus philosophy for this game, which is essentially to cut out everything that prevents you from focusing on the game itself. The result is a project that is always a lot of fun to work on. Anyway, in the vein of #ScreenshotSaturday, I present a lone screenshot of the current progress of Lil’ Commando:

resources/images/2012/12/LilCommando.png

Tags: