Wednesday, June 3, 2009

Worldspace versus Screenspace

In Metaplace, you have two canvases on which to work: the "worldspace", where all of your objects sit, and the "screenspace", where the UI elements sit. There are a few key differences to note: worldspace is seen by everyone, whereas screenspace may or may not be shared; worldspace is affected by zoom, whereas screenspace is not; and screenspace is always drawn on top of worldspace.

These three differences can be used to create all sorts of interesting effects that aren't immediately obvious to world builders. The fact that screenspace can be on a per-user basis means that it can help implement worlds where not everyone has the same knowledge -- fog-of-war and different kinds of "vision" (infravision, see-invisibility) can be supported through clever uses of screenspace instead of worldspace. Basic menus and dialog boxes inherently take advantage of the fact that the UI doesn't zoom while the worldspace does, although that same behavior can work against you -- zoom way out in a world (that supports it) with nametags on the avatars: the nametag soon dwarfs the puny little figure.

There might also be times when you want to have an effect between two objects: my Masses and Springs world needed to draw a line between two objects, but the line is in screenspace while the objects are in worldspace. (If you go see the world, it currently suffers from a bug, which I've just reported -- it works okay if you click down-and-right of the face. Also, it only really works if you get out of full-screen mode, for reasons mentioned below.)

So how do you go about matching up screenspace to worldspace?


There are four methods to attach UI (our screenspace objects) in Metaplace: two that are seen by everyone, and two that are per-user. UiAttachWorld() attaches UI to the viewport itself, treating the top-left corner as the origin, and its UI is seen by everyone. UiAttachUser() is the per-user equivalent. UiAttachObject() attaches UI to a specific worldspace object, based on the origin of that object (this is important later), and its UI is seen by anyone who has that object on-screen. UiAttachUserObject() is the per-user equivalent.

Each of these functions has its use: UiAttachWorld() is good for a high score board or a population monitor; UiAttachUser() is good for a HUD with user-specific values; UiAttachObject() is good for nametags; and UiAttachUserObject() is good for context menus. For the latter two, the UI "follows" the object as it moves (or as you move away), which is what you likely want, and is all nicely left up to the client; if the object moves off-screen, you don't see the UI, and that's probably good design. Any UI attached to an object will hopefully be an appropriate size, such as a context menu that is reasonably sized to fit its menu choices, or a nametag that is readable but not covering up half of the screen.
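To make the distinction concrete, here's a rough sketch of all four attach styles. The only call whose argument order appears later in this post is UiAttachUserObject(user, object, ui), and UiRect()'s arguments mirror the call used later as well; the parameter orders for the other three functions, and the user and avatar variables, are my assumptions, not confirmed Metaplace signatures:

local scores = UiRect(0,"high scores",10,10,200,60)
UiAttachWorld(scores) -- assumed signature: pinned to the viewport, seen by everyone
local hud = UiRect(0,"hud",10,80,200,60)
UiAttachUser(user,hud) -- assumed signature: pinned to the viewport, seen only by this user
local nametag = UiRect(0,"nametag",0,-40,120,20)
UiAttachObject(avatar,nametag) -- assumed signature: follows the object, seen by anyone who sees it
local menu = UiRect(0,"context menu",20,0,140,90)
UiAttachUserObject(user,avatar,menu) -- follows the object, seen only by this user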

But is the nametag an appropriate size? What about when you zoom way out, like I mentioned before? And in the case of UiAttachWorld() and UiAttachUser(), the screenspace objects don't follow a specific object, but sit on-screen until removed, taking up real estate -- but how much?

The scaling and positioning of screenspace objects relative to worldspace objects is a two-part problem: zoom and screen size. The zoom can be read with GetPlace().zoom, but that only retrieves the "set" value of zoom, and can't account for any mouse-wheel zooming done by the user. This means that any screenspace-to-worldspace mapping requires the world to enable zoom lock. There's no way to set zoom from script (anymore -- we used to be able to fake it with OutputToUser()), so we can't even force a zoom level on users if they change it, or if we want different zooms for different cases. Screen size is even worse: there's no function to request a user's screen size (which could vary, just like the zoom), and there's no support for the undocumented P_VIEWPORT tag to force a client to a specific size.

Why is this such a problem? To do any sort of mapping between screenspace and worldspace, whether it's position or sizing, we need one or both of these values. Sizing, of course, only requires knowing zoom, so provided we enable zoom lock, we could ensure that our nametags are scaled appropriately to our avatars by using GetPlace().zoom to resize them. But since we have no way of changing zoom, I suppose this is really a one-time calculation anyway. Not too useful. But for positioning, and especially for screenspace items that might span between two worldspace objects, both position and size matter.

To map position between screenspace and worldspace, we need to know at least one point that maps between the two. Metaplace has the idea of a "camera", which is where the center of the viewport is focused; for most worlds, this is usually centered on the player's avatar, but Metaplace also allows the camera to be focused on a specific point in the world. The important thing to note is that this value -- either the player's position in the world or the camera's focal point -- is a worldspace coordinate that we know, and it maps to the center of the viewport. The fact that this is the center is important because we don't know how tall or wide our viewport is, which means that we still have no way of determining where in the worldspace the top-left of our viewport maps to. This means that we must always attach the screenspace items based on the center... but we don't know the center of the viewport, either!

So what do we do? If the camera is focused on the player, then we just use them as our center, using UiAttachObject() or UiAttachUserObject(); and if the camera is focused on a specific worldspace coordinate, we have to figure out how far away a known object is from that worldspace location (the player, or perhaps a more stationary object), convert this distance to screenspace (which we still haven't figured out how to do), and then attach to that object.

Sounds easy, right? Let me make it worse: the view type will also affect your calculations! Metaplace currently supports a bunch of views: top-down and side-view (which thankfully use the same scaling); isomorphic, stepped and sloped isomorphic; and rotated (or "UO") view. Also, as we hinted at earlier, the origin of the object to which we're attaching can matter, if our screenspace effect is related to the object itself.

Let's try to make sense of all this. First, let's think about scale. A zoom of 1.0 in Metaplace means that a tile is 64 pixels wide, regardless of which view we're in. Hurray for small blessings! This means that for every 64 pixels on the x-axis that we move something in screenspace, it will shift the equivalent of one tile over in worldspace, at zoom 1.0. A zoom of 2.0 makes everything twice as big, which means that one tile is now 128 pixels wide, and so our general formula is

screenspacewidth = (number_of_tiles_wide)*64*zoom

or

number_of_tiles_wide = screenspacewidth / (64*zoom)

The height of a tile depends on which view we're in: top-down and side-view use square tiles, so the size is also 64 pixels high at zoom 1.0; rotated view's tiles are diamonds, but are proportional, so they, too, are 64 pixels high; and the isomorphic views are nicely at an angle where the tiles are half the height of their width, so 32 pixels high at zoom 1.0. Given this, we now know how to scale screenspace elements to their worldspace counterparts; if we want a UI window, button, etc. to be two tiles wide and two tiles high, we could write a function like this:

function tiles_to_pixels(user,w,h)
    local tilewidth=64   -- a tile is always 64 pixels wide at zoom 1.0
    local tileheight=64

    local view=user.place.view
    if view==2 or view==3 or view==4 then -- the isomorphic views use half-height tiles
        tileheight=32
    end

    local zoom=user.place.zoom

    return w*zoom*tilewidth,h*zoom*tileheight
end

and then might use this function in such a way:

w,h=tiles_to_pixels(self,2,2) -- pass the user as the first argument, as in UiAttachUserObject() below
local win=UiRect(0,"blank window",w/2,h/2,w,h)
UiAttachUserObject(self,self,win)

to draw a big square on top of the player. Big deal? It will be the same relative size, no matter what you set the Place's zoom level to. That's the big deal.
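Going the other way -- from screenspace pixels back to tiles -- is just the division from the earlier formula. Here's a minimal sketch, making the same assumptions about user.place.view and user.place.zoom as tiles_to_pixels():

function pixels_to_tiles(user,w,h)
    local tilewidth=64
    local tileheight=64
    local view=user.place.view
    if view==2 or view==3 or view==4 then -- isomorphic
        tileheight=32
    end
    local zoom=user.place.zoom
    return w/(zoom*tilewidth), h/(zoom*tileheight)
end

That's the sizing half of the problem in both directions; the positioning half is what the rest of this post is about.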


But what about position? This is all fine and good if we want scaled screenspace items that are a given distance from the player, but what if we don't know that distance all the time, because the player can move about?

Let's say that the player is at (10,10) in our world, but we want a screenspace item to appear at (4,6), because, perhaps, they have a treasure map marking X, and the player's Treasure Spotting skill is high enough that a glowing X should appear in the distance. One way would be to just drop a glowing X object into the world and be done with it, but other players would then be able to see it, even if they don't have the Treasure Spotting skill. Alternately, we could drop an invisible object in that location, and then attach a glowing X to it using UiAttachUserObject(), so only the Treasure Spotting player sees it; this would work for most cases, but some people will point out that the object information is still being sent to everyone else's client, and thus they can still know that the object is there, even without a glowing X. If that's not enough of a concern for you, let's say instead that we want a glowing line drawn from the player to the location of the treasure -- there's no UI function to attach a line from one object to another, so we're back to going through all of this nonsense anyway.

It sounds easy, that we can take our current position (10,10), and the desired position (4,6), find the difference (-6,-4), turn those into pixels (-384,-256) or (-384,-128) depending on view (at zoom 1.0), and voila, we have our offsets from our player's position in screenspace. And that's true -- if we're in top-down or side-view.

But this is not the case for any of the other views:

[Diagram: rotated (UO) view]

[Diagram: isomorphic ("iso") view]


See how, going from (10,10) to (9,10), it sounds like we're moving one tile "left", but we're actually moving both left and up? This is the "rotated" part of the view, where the X and Y axes of the worldspace don't align with the X/Y axes of screenspace. This means even more figuring, to determine how far each worldspace tile slides along these diagonals in screenspace.

Let's look at the rotated view first. I think it's pretty clear from the diagram that as we move along the X axis (from (9,10) to (10,10)), we're moving half-a-tile to the right, and half a tile down, or (32,32) pixels in screenspace (at zoom 1.0) for every increase along the X axis in worldspace. And, as we move along the Y axis (from (10,9) to (10,10)), we move half-a-tile to the left, and half-a-tile down, or (-32,32) pixels in screenspace for every increase along the Y axis in worldspace.

This means that our previous attempt, where we determined that our offset (our difference) was (-6,-4), can use these offsets to figure out where we're going: since the difference was -6 on the X axis, we can multiply that by the (32,32) we figured out and get (-192,-192); the difference on the Y axis was -4, multiplied by our (-32,32) gives us (128,-128). Add these two together, and we get (-64,-320) instead of the (-384,-256) we originally (naively) came up with. If you're still not convinced, try it again going from our (10,10) to (6,6) -- looking at the diagrams above, you should realize that if we step back on both the X and Y at the same time ((10,10) to (9,9)) the actual screen movement is straight upwards. Your result should have zero for the X, and a negative number for Y.

And what about the isomorphic view? It's still the same formula of "half-a-tile right" and "half-a-tile down", but remember that the height of the tile in screenspace is the only thing that changes between the rotated view and the isomorphic view; so if we work things out in terms of "tiles right" and "tiles down", we can convert those to pixels using our function above.
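Putting the three cases together, here's a minimal sketch of the whole offset conversion. GetPlace().zoom and the isomorphic view constants (2, 3 and 4) come from earlier in the post; reading GetPlace().view is my assumption (mirroring user.place.view), and the rotated-view constant is a placeholder, since I don't know its real value:

local ROTATED_VIEW=5 -- placeholder: substitute the real constant for the rotated (UO) view

function world_offset_to_pixels(dx,dy)
    local zoom=GetPlace().zoom
    local view=GetPlace().view
    local tile=64*zoom -- a tile is 64 pixels wide at zoom 1.0, in every view

    if view==2 or view==3 or view==4 then
        -- isomorphic: same half-tile-right, half-tile-down shape, but tiles are 32 pixels high
        return (dx-dy)*(tile/2), (dx+dy)*(tile/4)
    elseif view==ROTATED_VIEW then
        -- rotated: +X is (32,32) and +Y is (-32,32) in screenspace, at zoom 1.0
        return (dx-dy)*(tile/2), (dx+dy)*(tile/2)
    else
        -- top-down and side view: worldspace axes line up with the screen
        return dx*tile, dy*tile
    end
end

Running the treasure example through it, world_offset_to_pixels(-6,-4) gives (-64,-320) in the rotated view and (-384,-256) in top-down or side view, matching the numbers above.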


The calculations above let us figure out the offset of the glowing X from the player, in the player's screenspace. But the player is likely moving. If we didn't take that into account, the glowing X would stay the same distance away, "moving" along the ground as the player did. Using this approach, we'd have to re-adjust the X (using UiPosition()) as the player moved, hooking into path_begin() and/or path_end(), or on a tick-based timer.
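As a hypothetical sketch of that re-adjusting, something like the following could be called from your path_begin()/path_end() handlers or a tick timer. UiPosition()'s exact argument order is an assumption, as are the avatar's .x and .y fields; world_offset_to_pixels() is the sketch from above:

function reposition_marker(avatar,marker_ui,target_x,target_y)
    -- recompute the screenspace offset from the avatar to the treasure tile
    local px,py = world_offset_to_pixels(target_x - avatar.x, target_y - avatar.y)
    UiPosition(marker_ui,px,py) -- assumed: UiPosition(ui,x,y) moves an attached screenspace item
end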

Alternately, we could attach it to a stationary object; the X-marks-the-spot code might find a tree, rock or seashell near the (4,6) location, and use that as the attach point. The drawback is that the code would have to do a search for an object, instead of just lazily using the player, but the savings would be in not having to keep repositioning the X. As long as the code can determine that an object isn't likely to move, this might be worthwhile.

So where's the final code? Err, well, I had plans on posting the two functions that I use often -- worldSpaceToScreenSpace() and screenSpaceToWorldSpace() -- but realized as I was writing this post that I don't handle all views (the "rotated" view had been removed when I wrote it, so I don't support it), nor do I handle the camera not being attached to the user (the code makes a bunch of assumptions in this regard). Once I add this extra support, I'll post the code to the Marketplace.

I didn't talk about how to handle that -- the camera being fixed on a location instead of the player. It's really just one more offset to be calculated, though: we already know how to make a screenspace object appear at a given worldspace location when the player is at the center, so as long as we can also convert the offset between the player's location and the camera's location, we can still figure out the correct overall offset. I realize as I type this that diagrams might be helpful, but trust me when I tell you that drawing those two pictures above taxed my artistic skill.

The last thing I glossed over is the origin of an object. Metaplace allows the origin to be defined anywhere on the sprite that represents the object: three "fixed" modes (top-left, center and bottom-center), as well as a free-form "percentage" system. While this doesn't matter if you're just trying to find another worldspace location -- the player is at (10,10), regardless of how the sprite is drawn -- there are times when you might want to know the exact screenspace coordinates at which an object's sprite is drawn. This can depend on the scale of the sprite, and the zoom as well. One example might be drawing a box around a sprite; knowing the height and width isn't enough if you don't know where to start drawing. I have a function for this, too, which returns the top-left, center and bottom-middle screenspace coordinates of any object; it's part of the same screenspace/worldspace library.

I'm surprised how often my world ideas require mapping values between the two spaces; my UO world needed it for dropping items in-hand, and that's where I first wrote it. But things like the nametags should scale in size and location, regardless of zoom, as should speech bubbles. And attaching UI effects to objects (damage numbers, highlighting or selector graphics) could also take advantage of this to good effect.
