Monday, June 29, 2009

OutputToUser()

Back from vacation, and it looks like our last server release has lots of goodies in it. One of the best ones, in my opinion, is "Added support for per-user camera control from script." It's the "per-user" part that makes me happy. I have mentioned before the loss of the OutputToUser() function, which was the ultimate way to do per-user effects, so anything that adds them back is welcome indeed.

MetaMarkup

The Metaplace servers -- specifically, a Metaplace world -- communicate with a user's client by way of MetaMarkup tags, occasionally also called "game markup language." These are plain-text messages, such as these samples pulled from my time in the PlainOldChat world:

[O_HERE]|10013|0:308|5|4|0|0|Crwth|0:1|0| |0
[P_ZOOM]|1.000000
[W_SPRITE]|24080:135|0.375|0.140625|255|255|255|http://assets.metaplace.com/worlds/0/24/24079/assets/images/dwnbtnpress.png|dwnbtnpresspng_|2|0|.|4|0|0
[S_CONFIRM]|Sprite data fetched successfully.
[UI_CAPABILITY]|2165|drag
[INV_GOLD]|241|13829|fc239cc9bf4ef7e148aae958c258bbb4|25|284503

The first part in the [brackets] defines the type of the tag, and the rest of the data, separated by |pipes|, makes up the parameters of the tag. If you're curious, you can visit any Metaplace world in which you're an administrator, go to Advanced|Debug, and click on the "log" tab; perhaps uncheck the "commands" box as well. Everything that the client needs to know -- appearance/disappearance of objects, movement of objects, UI popping in and out -- comes through here.

As world-builders, we have control over most of what comes through, by using the API in scripts attached to objects or by just building our world with the tools. Thus, calling CreateObject() will make an [O_HERE] tag appear (to most people - see below), and anything UI-based will give you one or more [UI_...]-type tags.

OutputToUser() let you "hand-craft" these tags, such as

OutputToUser(self,"[P_ZOOM]|"..self.myzoom)

or

for _,user in ipairs(GetUsersInPlace()) do
  if user.cansee then
    OutputToUser(user,"[O_HERE]|"..getHereParams(self))
  end
end

Why is this useful? Right now, the Metaplace API and system is set up mainly to support the idea of a shared-world view. If you go to Metaplace Central, everyone gets to see the same tiles on the ground and the same stationary objects, and sees the same look for everyone they encounter. But what if this isn't what you want? Two types of games that easily come to mind, where players should have different views, are RPGs (Role Playing Games) and RTSes (Real Time Strategy). Both of these can have requirements that certain players have different/extra knowledge about the game world compared to others. With OutputToUser(), you could, with some work, code this up any way you wanted. Now, you're at the mercy of what the API permits.

UI

So what DOES the API permit, on a per-user basis? Well, from the beginning, UI has been doable per-user, which I've mentioned previously. This makes sense, as support for things such as pop-up dialogs is important pretty much anywhere, and just because I'm being asked "are you sure?" doesn't mean that everyone else should also see that message. So from day one (at least, my day one in beta) we've had per-user UI, and there was no need to use OutputToUser() for it. Right?

Perhaps, but I can actually think of reasons you might want to hand-craft the [UI_...] tags: sometimes it's easier to have strings premade with drop-in values (using string formatting), or to send a variable number of UI commands based on computations and iteration through tables, instead of a bunch of conditional code. Fellow tester LunarRaid, as I recall, was doing something fancy with OutputToUser() and changing the art of UI elements.
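To make the premade-strings idea concrete: a menu whose buttons come from a table could have been emitted in a loop. A sketch of that old pattern, with the caveat that the [UI_BUTTON] tag layout here is invented for illustration and isn't real MetaMarkup:

buttontag="[UI_BUTTON]|%d|%s|%d|%d" -- invented layout: id, label, x, y
for i,label in ipairs(menuitems) do
  OutputToUser(user,string.format(buttontag,i,label,20,20+i*30))
end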

Place Settings

There are a bunch of MetaMarkup tags that tell the user's client about the Place they're in, such as the location of the camera, the zoom level, and the type of View (top-down, isomorphic, etc.). Some of these, such as the View, probably make sense as shared, universal settings for all users (though I did hand-craft [P_VIEW] tags in a world to allow visitors to change the View -- for themselves alone -- to see how certain code behaved in different Views). Others, though, might have a legitimate need to be different between users.

I mentioned in a previous post that certain calculations depended on knowing the zoom level of a Place, and thus it was strongly suggested that the zoom be locked, preventing the user from scrolling with the mouse wheel. But if you could set the zoom level per user, you could not only override any mouse-wheel zooming by forcing the zoom every second, half-second or quarter-second; you could also provide a little zoom bar with which the player can legitimately change their zoom level in a way that zoom-dependent code will know about. (Zooming with the mouse wheel is all client-side, so nothing gets sent to the server, and thus scripts can't know that it has been done.)
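The forcing trick would have looked much like the [P_ZOOM] example above. A sketch, assuming the pre-removal OutputToUser() and some repeating timer that fires a zoomtick Trigger every half-second (the timer setup isn't shown):

Trigger zoomtick()
  -- stomp whatever mouse-wheeling the user has done with the zoom we track server-side
  OutputToUser(self,"[P_ZOOM]|"..self.myzoom)
end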

Playing with the camera can also provide some interesting effects: right now, the camera is usually locked to a user's avatar, or locked to a given location in the world. While we've had a MoveCamera() function for a long time, to change that location-based camera position, we never had the flexibility to change the camera behaviour between the two on a per-user basis. (Locking the camera to the user, and hiding the user, allows for interesting effects such as my follow camera experiment, based on a discussion with LunarRaid; he recently requested some new functionality that I would also like, as can be seen by the jerkiness here.)

One concept that can play a big part in RTSes is the "fog of war", where the map is unknown to you until you've been there, and even afterwards, parts of the map that you don't currently see can change - often, RTSes will have these "old" areas greyed out, with the scene as last seen. Of course, "what you see" changes with each player's point-of-view, and this includes the tiles themselves; my world BITN (currently suffering from an art issue) was a testbed for such tricks, where the world was dark except for a few light sources or the special ability of the user to see in the dark, and thus tiles were revealed based on user-specific data.


This last server release has given us some of the functionality that I used to have relating to Place settings, but there are still some calls that we could use...

Objects

Along with the selective viewing of tiles in BITN was also the idea of selective views of objects. In fantasy RPGs, you might have spells such as Invisibility, Illusion or Polymorph, which change your outward appearance. These on their own are easy enough to implement, by changing the player's sprite. But such games might also have the idea of different "vision" types (perhaps granted by other spells, perhaps innate to the player's character's race), such as See Invisibility, See Illusion, or True Sight (which might see through all of these tricks).

Changing the player sprite is a good solution for Invisibility or Illusion if everyone is affected (although it could even be argued that the player of the invisible or illusionary character might want to see their true form), but as soon as we have different players who should see different things, the sprite-change method isn't suitable. When we were able to hand-craft MetaMarkup, we could send different [O_SPRITE] tags to different users, depending on what they should see. This is exactly what BITN did, where I had all of the above spells and visions; if someone had cast Illusion on their usual rogue form, they would appear as a fighter to the commonfolk, but anyone who had either See Illusion or True Sight would see the character as the rogue he or she really was.

So seeing an object differently can be useful. How about location? In the fog-of-war idea, the greyed-out "old information" might show that there were some enemy units there, but they have moved on since you last looked; on your screen, those objects should still be there, but the enemy player sees them where they truly are. Right now, you can't do this - as soon as you move an object, it moves -- there are no selective [O_HERE] tags based on whether or not you should have accurate knowledge of an object's location.

(I should point out that we do have a SetUserVisibility() function, which lets you set how far from a user objects can be seen - when objects leave this radius, either because they or the user moves, [O_GONE] tags are sent, even though some other user might still see them. It's a limited version of per-user visibility, which does solve some problems, but it only allows "see it or don't", not "see it here or see it there". There's also the gmVisible setting on templates, which sets whether objects of this type can be seen by only administrators of a world, or by everyone -- again, it has its uses (my camera marker module uses this functionality), but it isn't a gameplay tool, just a game design tool.)

Look and location are just two examples of object settings that you might want to differ per user; you can browse through the MetaMarkup page, look at each tag, and perhaps come up with all sorts of interesting game mechanics that could be implemented if only you could hand-craft how these tags were sent to different users. (Just looking at the page now, I thought of "x-ray specs", where you could have another player's avatar's clothing vanish, but only if you have the x-ray specs -- oo la la!)

The Solution

Of course, the easiest "solution" would be to just give back OutputToUser(). From what I can gather, the reason it was removed was to prevent malicious use; the last example MetaMarkup tag I showed above, [INV_GOLD], represents something "meta" beyond the gameworld you're in, something at the Metaplace level instead of the world level. Perhaps forging these, making people think that they got gold that they shouldn't, is the issue? Regardless of the reason, we've lost it.

So far, we've been getting new API calls to replace some of the most-often-cited functionality of OutputToUser(). The per-user zoom is definitely a good one; we also recently got AddEffectForUser() added to the mix. And I can hope that, as long as I keep pestering/asking for the other uses, we'll see the API calls appear.

A different solution, though, would be to still allow hand-crafting of tags, but perhaps only certain ones -- allow something like

OutputToUser(user,tag,params)

where I would call

OutputToUser(self,"O_HERE","10013|0:308|5|4|0|0|Crwth|0:1|0| |0")

and the function can decide if "O_HERE" is one I'm allowed to hand-craft. This would prevent me from faking INV_ tags, if that's the concern. Without knowing the full range of concerns regarding the old OutputToUser(), I don't know if this new one would be feasible. It would, however, be a one-stop solution for all of the other useful features that were lost and are now pending API additions. Even if the goal is to have API functions for every imaginable per-user need, something like this might be a nice temporary fix?
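Internally, such a function might amount to little more than a whitelist check. A sketch (pure supposition on my part; SendRawTag() is an invented stand-in for whatever the server actually does):

allowed={O_HERE=true,O_SPRITE=true,P_ZOOM=true,P_VIEW=true}
function OutputToUser(user,tag,params)
  if not allowed[tag] then return end -- quietly refuse INV_ and other "meta" tags
  SendRawTag(user,"["..tag.."]|"..params)
end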

Monday, June 15, 2009

Happy accidents

One of the interesting things about working with Metaplace is that, with all of the different API calls and graphic effects, you can do all sorts of things -- some planned, some not!

For instance, the module that I just published today provides something like a "circle of transparency", which, in Ultima Online, would make objects that blocked your view of yourself transparent, so you could still see things near you on the ground, or things attacking you, or just see yourself for whatever reason. This is a client-side effect, where the client just clears out a circle of user-specified radius based on the objects it renders. Of course, I don't have access to the Metaplace client, nor does it have support for this, so I had to write a workaround, as I usually do.

An actual circle was out of the question (or is it? Hrm...), so instead, I went for a method of making any object whose sprite overlapped with the user's become invisible. This requires a lot of the screenspace/worldspace calculations that I've talked about. My original plan was to just make the sprite invisible to the user by changing the sprite, or through some sort of effect. As I was working on it, testing the algorithm for determining sprite overlap, I thought, "I'll just have it show a glow effect on the objects that it wants to turn transparent". This would be a temporary effect, just to help me visually see what the code wanted to hide. The effect didn't seem to be appearing on the objects it should, so I started messing around with it, changing values and effect types to see if the glow was perhaps broken. Once I figured out the issue (and I unfortunately cannot remember what it was), I had the effect set to the "bevel" effect, and had the object hiding itself.
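(As an aside, the overlap test at the heart of this is nothing exotic -- it's just axis-aligned rectangle intersection. A sketch, assuming each sprite's screenspace bounding box has already been computed with the worldspace/screenspace math I've talked about:

function rects_overlap(a,b)
  -- a and b each carry left/right/top/bottom edges, in screen pixels
  return a.left<b.right and b.left<a.right and a.top<b.bottom and b.top<a.bottom
end

The fiddly part is producing those boxes, not comparing them.)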

Well, let me tell you, this effect was even better than I had planned. I had *planned* to just have the object vanish! But with the bevel effect, you can see through it (since I turned on the "hideObject" setting) but still get a sense of what it is you're seeing through, because the edges still have the bevel effect applied. Fantastic! If you want to see the effect, go take a stroll around my rotateplace world, stepping near the trees.

This lucky stumble onto a solution was a bit surprising, since I had previously made a demo world where you can play with all of the effects (I'll talk about them in a future post). But even after having all of that hands-on experience with bevelling and effects in general, I never thought of using it as a transparency system that included the "cloaking border", as fellow tester Dalian described it.

This is what made me think of writing this post: there are all sorts of things that you can do in Metaplace, whether through modules people have written or single lines of code, that can be put to uses no one likely thought of. Once in a while, I've looked at the latest release notes, read about the latest functionality that was introduced, and said, "hey, not only can it do this, but you could use it to do THAT, too." But generally, I read about new functionality and think either "ahh, they finally added that feature we've been asking for", knowing what the original request was for, or, "hrm, they added this feature out of the blue; I wonder what the Metaplace content folks needed it for," and come up with a reasonable idea. But who knows what kind of interesting uses are still undiscovered, because we read about these features and just recognize the "intended" use?

And that brings up the other side: having a need for some feature, effect or functionality, and not having it... what do you do? Well, I kinda pride myself as the King of Workarounds, and I can usually find a way to do pretty much anything needed in Metaplace, even if there's nothing close to an obvious way to do it. The workaround may be ugly, may be laggy, may be considered illegal in thirteen states, but I can usually find one. And I think that, too, speaks to the beauty, the flexibility of the Metaplace platform: even if you can't do something, you can still do it.

Edit: I just realized that this post probably had me sounding like a huge Metaplace fanboy -- I didn't say anything bad! How's this: the best tool we had for workarounds, the OutputToUser() function, was taken from us. So there!

Friday, June 12, 2009

Metascript

The scripting language for Metaplace, Metascript, is strongly based on Lua, so much so that one might think that it WAS Lua. If you don't know Lua before diving into Metaplace, that's fine, but if you've got some Lua under your belt, you might hit some hurdles.

This post isn't about learning Metascript, though, or about the differences between it and other programming languages, or how Lua/Metascript sucks compared to your favorite language. Just because you come from a background where arrays are indexed from zero doesn't mean that indexing from one is wrong. And if you want to argue the point, then I'll point out that Lua doesn't have arrays anyway.

Syntactic sugar

One of the things that will strike Lua programmers is the extra set of definitions available in Metascript:

  • Define Properties()

  • Define Commands()

  • Trigger foo()

  • Command bar()

  • WebTrigger baz()


These all introduce "special" functions in Metascript, and aren't standard Lua. What post-alpha users might not know, though, is that these are just colourful candy coatings for some mundane-looking counterparts:

  • function def_properties()

  • function def_commands()

  • function trg_foo()

  • function cmd_bar()

  • function trg_http_baz()


In fact, way back when, we also included the "trg_" prefix in the SendTo() calls. As far as I know, these old methods still work; I assume this because

  1. I still have old worlds that use it (though I haven't visited them in a while)
  2. We were told that they wouldn't stop working


Kinda takes some of the mystery away, doesn't it? I can see why the change was made: it helps to emphasize the special nature of these functions, distinguishing them from others defined with "function XXX()"; and it hides the unfortunate fact that variable-name prefixes are used to denote semantic meaning. Don't consider them variables? In Lua,

function foo_bar()
...
end

is equivalent to

foo_bar=function()
...
end

Hrm. I'm now curious whether I could write

def_properties=function() foo_bar=0 end

in Metascript and have it work. I'm going to guess not, which I'll talk about shortly.

So, should you use function trg_foo() instead of Trigger foo()? One reason you might is to do some offline programming, so that you have code that compiles under a standard Lua interpreter but can still be tested (by implementing your own backend library that knows how to handle Triggers and Commands). I've used this in the past to do some rapid development and to avoid some editor idiosyncrasies.
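The shim doesn't need to be fancy. Here's a minimal sketch of the idea, assuming you stick to the trg_ naming; I pass the object explicitly rather than recreating Metascript's implicit self, and the real server obviously does far more:

-- offline stand-ins, enough to exercise trg_ functions under stock Lua
function Debug(...) print(...) end
function SendTo(obj,name,...)
  local handler=_G["trg_"..name]
  if handler then handler(obj,...) end
end

function trg_foo(self,x)
  Debug("foo fired with",x)
end

SendTo({},"foo",42) -- prints: foo fired with 42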

Define Properties() and Define Commands() (or function def_properties()...)

I've made no secret of my dislike for these "functions". Define Properties() is where you define your script properties (where they might collide with others on the object), include scripts (similar to Lua's dofile()), expose properties and export functions.

Define Commands() is where you define your MakeInput()s and your MakeCommand()s. (There used to be a separate function def_inputs() -- I wonder if that still works.)


These functions are handled "specially": they're actually executed at compile time, which can reveal bugs earlier than runtime would, such as attempts to access functions that aren't available there (which is almost all of them). That means no Debug(), no AlertToPlace() - not even pairs() or ipairs().

They're actually code, though; you can have for loops in here, if there's anything worth looping over, and conditionals, if there's anything to compare. But generally, these functions act as definition blocks, and might better be implemented as such, extending Metascript further from standard Lua. Then we could have some alternate notation for defining properties, such as

float value;

instead of relying on

value=0.0

to work. Which it won't, at least not in the normal Lua sense. This is because, behind the scenes, any local variables defined in this block are specially processed into member variables on a C++ object on the server (all supposition -- I'm not actually privy to the source code). Being C++, types need to be known, and once defined, they're permanent. This means that, unlike Lua, Metascript won't let you redefine the type of a property once it has been set (and this is one reason why I advocate table properties). Not only that, but it means that certain Lua types - ones that aren't available in C++ - aren't allowed as properties, such as booleans and functions. I would expect that supporting table properties required a bit of work, with the Lua form of the table being converted into something that C++ understands and can hold, so why not a Lua function? Why not the lowly boolean? Also, this desire to define property types using Lua notation, instead of just creating a custom definition block, leads to ugliness such as

child="_object_"

to define a property as an object. Ouch.
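To make the rules concrete, here's roughly what the block will and won't accept, as I understand it:

Define Properties()
  hits=10 -- a number property; its type is now locked for good
  title="peasant" -- a string property
  stats={} -- a table property (my preference, as mentioned)
  child="_object_" -- the ugly notation for an object property
  -- brave=true -- not allowed: booleans can't be properties
  -- act=function() end -- not allowed: neither can functions
end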

As for Define Commands(), I'll just re-iterate from my previous post that I don't understand why MakeInput() and MakeCommand() cannot be used outside of Define Commands(). Sure, some compile-time checking is nice, but you're not saving me from all sorts of other scripting bugs, so why this? Also, these definitions check whether a Command specified in MakeCommand() is actually defined. Why? What's so bad about a Command being sent that doesn't have a handler? Triggers allow this (and allow multiple handlers), yet Commands must have exactly one.

And because I just can't let it go: why are these two functions run in such a tight sandbox? Fine, it might make no sense to call SendTo() while defining properties, or perhaps not even Debug() while defining an input, but no pairs()/ipairs()? Am I really the first person to write "code" in these blocks, instead of just a handful of definitions and pre-structured function calls?

Scope

In Lua, everything is a global variable unless you define it as "local". In Metascript, too, your definitions have a global scope within a script -- this probably bites me in the ass once a week, since I tend to write a lot of recursive functions. The "within a script" part is important, though. When first looking at Metascript, and Metaplace, and the idea that multiple scripts are attached to an object, you might assume that these are all loaded into a shared environment as far as the object is concerned. However, this isn't true, and rightfully so.

Remember how

Trigger foo()

is the same as

function trg_foo()

is the same as

trg_foo=function() ...

? Well, this would be problematic if all scripts shared the same scope, because it would mean that an object couldn't have definitions for a Trigger function more than once, as the latter ones would overwrite the former, and having multiple definitions for the same Trigger is a key, powerful part of Metaplace. This scoping is unfortunate in one way, however: it means that if we use IncludeScript() to import a set of functions, we have to do it for every script that needs them, instead of just having it imported once for the object. This makes it awkward to have a library that's used throughout a set of scripts.

Also, the scope affects the idea of "self". Commands and Triggers, usually the largest portion of a script, have an inherent sense of self. But functions do not. Why? Well, they actually used to, I believe before we had IncludeScript(), when there was no question of the context in which a function was being run. Not being privy to the way the Lua sandbox is run, I'm not exactly sure why "self" can't still be defined in the environment of a function call, but it is no longer supported - you must now pass it in explicitly.
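In practice, that means library functions take the object as a parameter (a sketch; heal() is just an invented example):

function heal(obj,amount)
  obj.hits=obj.hits+amount -- no implicit self in here anymore
end

Trigger gotpotion(amount)
  heal(self,amount) -- self exists in Triggers and Commands, so pass it along
end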

Userdata

Stock Lua also has a userdata type -- it's basically a C++ object -- so this doesn't make Metascript different in that regard. However, because every object in Metaplace is represented by a userdata object, how they interact in script is important.

Knowing the structure of the userdata is usually required, because there's no reflection or introspection by default; you can access self.foo if you know it's there, but if you don't, you have no way to ask (not exactly true, see below). Why is this important? If I don't know that self.foo is there, should I really be using it?

Well, yes, sometimes there are cases where iterating over everything is a good idea. One case is the set of stylesheet API functions that Metaplace provides. These let us look at the stylesheet (basically, the static portion of a world), to peruse the templates, places, sprites, scripts and modules. To do this without any prior knowledge, we need to be able to iterate over them all, much the same way that pairs() and ipairs() allow us to iterate through a table. In some cases, we have a special ._all_ property on the userdata, which returns a table which can indeed be iterated over with ipairs(). But why not all the time?

For instance, if I want to browse the Places in my world, I can access a userdata object with stylesheet.places, and if I know the name of one, I can index into this userdata, such as stylesheet.places["0:1"]. Alternately, I can use stylesheet.places._all_ and loop through them all, finding the one I want. And once I have a specific Place, I can get another userdata from it with someplace.tiles. Not knowing anything about how many tiles there might be, I don't know what to ask for specifically, so I try someplace.tiles._all_, and get back ... nothing. It turns out that instead of a nice table to iterate through, I have to basically guess the indexes of the tiles, with something like

t=0
while someplace.tiles[t] do
  ...
  t=t+1
end
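Compare that guesswork to a userdata that does offer ._all_, where the loop is as tidy as you'd hope:

for _,place in ipairs(stylesheet.places._all_) do
  Debug(place)
end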

And worse, there are other objects, such as the Place itself, that neither have a way to iterate (with ._all_) nor to enumerate (with a 0->n loop), but that you just have to guess/know the properties of, hoping that the wiki documentation is up-to-date. Again, this isn't something specific to Metascript -- Lua userdata can have this problem too -- but it would be nice if we had the ._all_ table available on all accessible userdata objects. Alternately, all userdata objects could act as tables for purposes of using pairs() or ipairs(), if they really wanted to.

Metatables

Metatables (the "meta" being unrelated to Metaplace) of Lua are a very powerful feature, one that, in my opinion, turns Lua from a simple toy language into a powerful full-featured one. Metatables allow us to redefine our environment, creating a sandbox where a limited set of functions might be available. Metatables allow tables to take on new functionality, such as addition of two tables, or different handling of accessing values that don't exist. And metatables ... aren't available in Metascript.

My only guess is that the current sandboxing provided to the Metaplace scripts makes further access to metatables ... hard? Unwise? Dangerous? Confusing? I'm not sure, but it's a real shame that we don't have them. Granted, they're an advanced feature, and it's quite likely that very few Metaplace users will notice their absence, but heavier coders like myself certainly do, and working around them can be difficult, awkward, or near impossible.

Case in point: I'm writing a vector/matrix library for Metaplace, to help implement a new physics engine that I'm writing. Such a library must exist for Lua somewhere, right? Sure, there are some out there, but most (all?) wisely take advantage of metatables to overload the built-in operators, since it makes for much nicer and cleaner usage of such a library if I can say

newvector=v1+v2

instead of

newvector=v1.sum(v2)

which I have to do now. Even something as simple as making the vectors printable:

Debug(newvector)

is a lot nicer than

Debug(newvector.tostring())

Small things? Yes, but these only touch on what metatables can allow. Frankly, I'd like to stop there because I don't want to realize what other powerful things I cannot do in Metascript because of their absence.
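For anyone who hasn't met them, here's the stock-Lua version of that vector, to show what a couple of metamethods buy (this runs in plain Lua; none of it works in Metascript):

mt={}
mt.__add=function(a,b) return setmetatable({x=a.x+b.x,y=a.y+b.y},mt) end
mt.__tostring=function(v) return "("..v.x..","..v.y..")" end
function vector(x,y) return setmetatable({x=x,y=y},mt) end

v1=vector(1,2)
v2=vector(3,4)
print(v1+v2) -- (4,6)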

Verdict?

All that being said, Metascript isn't a bad language. It's Lua at its heart, with a few surgeries to make it tick a little differently, some of which might have been required, others merely elective. This customized version of Lua hasn't itself prevented me from doing anything in Metaplace, apart from quickly porting in pure-Lua code from the internet; any walls I've hit have been with the Metaplace platform itself, and not the scripting language.

Monday, June 8, 2009

Namespaces

Metaplace's biggest strength has to be the Marketplace, where the community can contribute their own modules to everyone else, either free or for a cost of Metacoins, the current virtual currency being tested during Open Beta. This biggest strength, however, is going to become a problem if something isn't done to deal with the ever-crowding realm of namespaces.

In case you're not aware, a "namespace" is all of the available possibilities for naming something in a certain context. The names of the Metaplace worlds make up a namespace -- no two people can have the same "cleanname" world name (so, for instance, you cannot make a world called "MPCentralLive", or "UO", or any of the tens of thousands of other world names already taken.) On the other hand, the display names are probably not strictly a namespace; I've not tried, but I do wonder if I can give my world the same display name as someone else's.

The world names aren't TOO big of a deal. Sure, there might eventually be issues with copyright, or "prior art" in the case of names: if Metaplace is going to take over the world, then I'm sure worlds named "McDonald's", "Microsoft" and the like are going to be fought over, in the same way that domain names (mcdonalds.com, microsoft.com) were when the internet started growing. But the more pressing namespaces are on the scripting side, where, as more and more content becomes available, the likelihood of problems increases.

Properties

Metaplace objects are all about their properties. Objects have a set of default ones, and different ones can be available if an object is a Physical object, or a Place, or a World. All of these built-in properties tend to be set upon creation of the object (such as "id"), or as objects operate inside the Metaplace environment ("x", "y", "z"). These properties are accessed by having a reference to the object and then using dot- or bracket-notation:


self.id
self["spriteId"]
GetObjectById(10003).vx
etc.


Attributes, which can be thought of as properties that are configurable for a template (from which objects are created), are also accessed with this same notation. This means, then, that you cannot add an attribute called "id", or "name", or "speed" to your templates. But you could have your RPG game where all of the objects need to have a damage value, so after checking the Properties pages on the wiki, you can have that:


self.damage
monster[4].damage


Scripts attached to an object can also add properties. A module might define these in one of its scripts to allow passing of values between all of its component scripts. It might also use them to persist values, such as an RPG module that provides ability scores like "strength" and "dexterity". These are added in the Define Properties() block, typically found at the top of a script.


Define Properties()
  strength=10
  intelligence=10
  dexterity=10
  constitution=10
  wisdom=10
  charisma=10
  PersistProperty("strength") -- etc.
end



and then used elsewhere like any other property:

self.strength=self.strength+1


We're okay -- unless our RPG game has ability scores for "speed" or "lifetime" or a "type" feature.


As you can see, we've already got concerns. If you don't read the wiki property pages daily, you might, say, want to write your own physics library and try to make script properties called "physics" or "speed" -- like I did. Or you might want to write your own containment library, and feel that objects should each keep track of their "container" and what they "contain" -- like I did.

Now, I believe they've helped the issue a bit by having the compiler generate errors if you try to define script properties that already exist as template properties, and perhaps even as attributes? I can't remember exactly -- though you'd think that after making this mistake twice, and wasting countless minutes, perhaps hours, debugging it, I would. But while the script compiler can save you from attempting to use built-in property names, it can do nothing to prevent you from using properties that someone else is also using.


And this fact is actually taken advantage of, in a way I don't much care for. The Behavior Tool, in its latest form, recommends that any behaviour should have, in its script (and that's really all behaviours are -- scripts), a handful of properties declaring that it is indeed a behaviour: a name, a description, and a few other future properties. But... wouldn't this be a problem once you have more than one behaviour?

It would, if these properties were used for anything other than identification. It's possible to pull out the values of these from the original script, even if these values get lost on the actual object due to being overwritten by other properties of the same name. In fact, these values are retrievable without the script actually being attached to an object, quite different from accessing properties via the dot- or bracket-notation.

I don't care for this "trick" of getting values about the module. I think other methods should be used, such as labels, instead of this series of faux properties. I suppose it works, but I think it only fosters misuse of properties, especially of those that are meant to take on per-object values.

Commands and Triggers

Properties aren't the only namespace to worry about - as was emphasized today, in fact. Newcomer tester Karkacabra was hitting a strange error, which quickly revealed itself to be a namespace collision -- with one of my own modules, no less. He had defined a Command called "close", which collided with the one that I had used in my languagewindows module. Admittedly (especially since I'm writing this blog post), I should have used a better name, such as "languagewindow_close". And in my defense, Karkacabra should have also. *:^) In fact, we should ALL be doing so; the content from the Metaplace team is actually pretty good at this, prefixing Commands and Triggers with something meaningful and hopefully less likely to collide.

Triggers are actually more of a problem for collision, because technically they DON'T collide -- it is perfectly valid for multiple scripts to define the same Trigger name, and they will each get called, in their attachment order. This is intended, and a very useful feature -- but ONLY if it's intended. If I write a "use" Trigger on one of my objects, I had better hope that I intended it for use with the Smart Object system in Metaplace, or else I'm going to see some odd behaviour -- not the wrong behaviour, as you're likely to see with colliding Command names, but additional behaviour, as all of the like-named Triggers get fired.

Workarounds

So, we know that if we start using verbose prefixes on our Command and Trigger names, we're going to help reduce the collisions in their namespaces; the namespace is rather large, what with 50-60 or so possible characters per position in the name, and the extra length from a prefix just means more possibilities. But what about the properties?

Of course, the prefix system works just fine there, too, and I believe the Content Team uses it there. But I prefer a different method, which works, in my opinion, better.

Instead of defining each of the various or numerous properties individually, I define a single table as a property, with a suitably namespace-safe name, and then insert all of my properties within (there's a sketch after the lists below). Here are some of the benefits:


  • automatically grouped together by name, using foobar.propname instead of foobar_propname
  • requires only one PersistRuntimeProperty() call to keep all of the properties persistent (which also means none are accidentally forgotten)
  • the property types do not need to be specified in the Define Properties() block
  • the table method allows easy deletion of properties, instead of setting to a "nonce" value
  • all properties for a module are easily iterated through
  • properties can be booleans, or functions
  • two modules, if they both try to define the same table property, won't fatally collide unless those tables predefine the properties (which is only necessary for default values)


The only drawbacks that come to mind are:


  • reading the script's Define Properties() doesn't tell you the properties used
  • reliance on the PersistRuntimeProperty() of a table, which has had a history of problems
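Here's the sketch I promised above, using my physics library as the example (the names are illustrative, and exactly where the persistence call belongs depends on your setup):

Define Properties()
  crwth_physics={} -- one namespace-safe table instead of a pile of loose properties
  PersistRuntimeProperty("crwth_physics") -- the single persistence call
end

self.crwth_physics.speed=3
self.crwth_physics.paused=false -- booleans are fine inside a table
self.crwth_physics.speed=nil -- and deleting a property is just assigning nil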



I've been lightly pushing people in Metaplace to use the table property approach, only because I think it's nicer. What I will start pushing harder for, however, is the namespace consideration -- and I'll be the first to admit that I'm quite the culprit. In my defense, a lot of my code started off as "for me", so I would be aware of my own namespace usage, but of course as soon as I decide to publish it, or I start using others' modules, I need to be responsible.

Thursday, June 4, 2009

User inputs

I had mentioned previously that players have two inputs to their computer, and thus to Metaplace -- the keyboard and the mouse. Fellow beta tester KStarfire pointed out that I missed the joystick/gamepad. Does Flash support a joystick? I'm not sure, but I'll concede the point to him.

Metaplace uses a single method, the MakeInput() function, to define all of the inputs that a world understands. These are parsed at compile time to ensure they're valid, and passed to the client as a bunch of tags telling it which inputs it should bother to send to the server. Here's its definition:

MakeInput(input description, input code, input event, input modifier, command string)

Where input description is something like "press I to open the inventory", the input code is "i", the input event is "down", the input modifier is "control" and the command string is "inventory".

Keyboard

Whether you're going old-school RPG with arrow keys or WASD movement, or just need a few keys to support inventory and status windows, the keyboard is a must. MakeInput() lets us define an event - that is, send a Command - upon a specific key press (up or down) with a modifier (shift, alt, none) for every need we have. Some of the keys have bigger "codes" than the letter they represent, such as "left" for the left-arrow key. This means we can do one-step-at-a-time walking, by just listening for "down" events for our arrow keys, or we can do walk-until-I-let-go walking, where we start walking on the "down" event and stop on the "up". This sounds like everything we could want, but there are some limitations.
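Before the gripes, here's roughly what a basic pair of definitions looks like for hold-to-walk movement (a sketch; the walkleft/stopwalkleft Commands are whatever your movement script handles):

MakeInput("walk left","left","down","none","walkleft")
MakeInput("stop walking left","left","up","none","stopwalkleft")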

There are a bunch of keys that aren't supported. I won't spell them out here, but there's a bug filed (by me) in the Metaplace forums for certain punctuation keys that you just aren't able to listen for. This is problematic for me in my Ultima Online world, where I want to emulate the speech system of UO, where there's no chat box in which you click to start typing. This means that my current implementation (which you can try out) lets you type letters, numbers and a space, but none of your punctuation comes through, which makes you type like a ... well, like most of the people on the internet.

My UO world also points out another issue: to get this working, I had to make 26 lines of MakeInput() to handle each letter; another 26 for when I press shift (to get a capital version); 10 more for the digits; space, enter... that's 64 right there. If the punctuation was working, I'd have even more. And what if I wanted to catch every keypress possible? UO allowed you to map keypresses to macros, such as "control-e" for the allnames macro (which happens to work in my UO world, if you want to try it.) This means four modifiers (none, shift, control, alt), two events (down and up), and ... 100+ keys? That's 800 MakeInput() lines! In my UO world, I only have half of them (I wasn't interested in key-up events). Rest assured that I wrote a program to write those all out for me. The problem that this points out: there's no way to say "any" for the "input code", to say "for any keypress, send this command" or "for any shift-keyrelease, send this command".

Another problem that arises is that a given code/event/modifier combination can only have one possible Command that it will send -- you can't have "i" both bring up your inventory and cast the Invisibility spell. If you define a MakeInput() more than once for the same combination, it used to be the case that the client used a random one. Now it looks like the last-defined one -- the one in the script loaded last -- is the one that wins.

For a user creating their world completely from scratch, this shouldn't be a problem - they know what they want each key to do, and aren't likely to re-define the same combination again for another purpose (though I'll come back to this). The problem is more likely to appear when the Marketplace becomes involved, when a world-builder buys off-the-shelf modules that define MakeInput()s. As more and more content becomes available, the chances of these modules colliding are going to grow.

This has already occurred. KStarfire had a world where the spacebar was the "throw" command; after he had created that functionality, the avatar module -- probably the most ubiquitous module in Metaplace -- added a jump action to the avatars. And what key did they choose for that? That's right, the spacebar.

KStarfire, at that point, had two choices: either edit the avatar module, change the MakeInput() line where the jump was defined, and have to do this every time the module updated; or change his own code. But what if the "throw" command was one that he had purchased, instead of written himself? Another solution would be to allow the keys for every module to be defined by the user or world-builder, to have the MakeInput() commands pull their values from a user or template property. Unfortunately, the MakeInput() command doesn't allow this (future blog post).

So is there a solution? I think so. I think that a module needs to be developed that defines every single combination in MakeInput() and sends a single Command from the client to the server when it happens. This Command would then fire a Trigger on the user object, and at that point, any interested parties could listen for the Trigger and, based on configuration, decide if that matters to them. So the avatar system could allow the user, or world admin, to say that "control-spacebar is jump" and "spacebar is throw", and each module would handle things the way it should. Additionally, modules could, if they wished, share the same keypress.

I believe in this solution so much, in fact, that I've implemented it. Also, I was able to work around the limited environment of the Define Commands() block to come up with this little gem to save myself from typing an enormous list of MakeInput()s:


keys={"numpad0","numpad1","numpad2","numpad3","numpad4","numpad5","numpad6","numpad7","numpad8","numpad9",
"f1","f2","f3","f4","f5","f6","f7","f8","f9","f10","f11","f12",
"backspace","tab","return","pause","capslock","escape","space","pageup","pagedown",
"end","home","left","up","right","down","print","printscrn","insert","delete","help","numlock","scroll",
'-','=','\\','.','/','0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f','g','h','i',
'j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z',
"shift","control","alt","lshift","rshift","lcontrol","rcontrol"}
states={"down","up"}
modifiers={"none","shift","control","alt"}

for x=1,#keys do
key=keys[x]
for y=1,#states do
state=states[y]
for z=1,#modifiers do
modifier=modifiers[z]
MakeInput('keypress '..modifier..'-'..key..' '..state,key,state,modifier,(state=='up' and 'un' or '')..'key '..modifier..' '..key)
end
end
end


This generates every possible MakeInput() that Metaplace supports right now (minus all of the missing punctuation). It fires a Trigger key(modifier,code) for key down, and unkey(modifier,code) for key up. Now I can have code such as


Trigger key(modifier,code)
  if code==self.jumpkey and modifier==self.jumpmodifier then
    SendTo(self,"jump")
  end
end


or even have a way to encode the modifier and code into a single value so there aren't two values to store and compare, and have a comparison function:


Trigger key(modifier,code)
  if self.jumpkey==keypair(modifier,code) then
    SendTo(self,"jump")
  end
end
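keypair() itself can be as dumb as string concatenation, since all we need is a unique, comparable value (the helper is mine, not a built-in):

function keypair(modifier,code)
  return modifier.."-"..code -- e.g. "control-i"
end

self.jumpkey=keypair("none","space")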


I really think that this approach is going to be inevitable, because module collisions are going to happen more and more, and the need for customizable keypresses -- whether to prevent collisions, to allow flexible worldbuilding, or to support key macros -- is going to be in demand. And, as I hinted at before, we might want to change the meaning of a key halfway through gameplay; for instance, I might want "i" to open the inventory in a non-combat mode, but when I'm in combat mode, "i" might cast the Invisibility spell. This could be one Trigger that handles the keypress, that understands both modes, or two separate Triggers (in different modules) that each handle a specific case, unaware (and uncaring) that the other exists.

I have one last thing that I'd like to see support for -- held-down keys generating repeated key-presses, much as we're used to seeing when we hold down a key in our word processor. Some worlds might need this functionality (such as my UO speech), but others probably won't; once I add this as an option, I'll publish the above code as a small module for universal key handling.

One last thing about the keyboard: even with the Flash client in focus, some browsers out there rudely capture keypresses instead of passing them on to Metaplace. Specifically, the "control-c" in UO is meant to refresh your mouse cursor (there's a bug there I have yet to file), but in Internet Explorer 8, this brings up a menu or something. I've seen this on some of the other browsers, with certain keypresses, whether it toggles bookmarks or who-knows-what. This is an unfortunate issue that world- and module-builders will have to keep in mind -- the browser compatibility issues reach further than the web page!

Mouse

The mouse is a must in today's computing, giving us near-instant pointing and selection. Very few worlds are likely to get by without its use, in large part due to the graphical nature of most worlds (exceptions being plain worlds such as Plain Old Chat).

Unlike the keyboard, there are only a few different input codes available for the mouse:


  • mouse-terrain -- clicking on the ground (on a tile)
  • mouse-object -- clicking on an object


"click" and "double-click" are the only two events supported. Embarrassingly, I've never tried the "double-click" event, since the only one of my worlds that needs it (UO) was created before this event was added, and I ended up writing my own (in a similar manner to how I'll write the repeating key code). All of the modifiers are supported (none, shift, control, alt).

There are lots of things missing here. There's no way to distinguish where the mouse-click went down from where it was released (I believe the location of release is what is sent to the server). This is one distinction that I think could add some extensibility to a world's interface, and it wouldn't increase the traffic at all.

Having the mouse location, either while holding down the mouse button or not, would of course be valuable. This would let us do dragging-and-dropping and mouse-over/hover handling (future blog post) very easily, and would let the mouse cursor itself act as an agent in the game, as the avatar itself. We're told that the traffic between the client and server would be too large to allow this feature. I can certainly see that there *could* be a lot of traffic, depending on how often you sent the mouse data. But I propose that this is a setting we should be able to send to the client,

SetMouseLocationUpdateTime()

as well as provide a way to enable and disable the flow of this data, so worlds that need it can request it:

StartMouseLocationUpdates()
StopMouseLocationUpdates()

And of course, the client could have a hardcoded maximum of tags being sent in.


And last but not least, there's the fact that all mouse events currently represent the left mouse button only. No right-click (nor middle-click, nor side-click?). This is a Flash problem, as I mentioned in the post on movement, but a problem nonetheless. As I had said, Ultima Online used right-mouse-hold (or right-mouse-double-click) for movement, and everyone associates right-clicking with a context menu. Other clients (a future blog post) could implement this support, of course, but I think it unlikely that we're going to see support in the Flash client. And, of course, even if other clients COULD support it, there's no way for the world to state that it's interested, because it's not an option in MakeInput().

So for now, workarounds need to be made. Luckily (perhaps oddly), double-left-click on terrain had no meaning in Ultima Online, so I've used that as my walk-to command (I use single-left-click as a "look" command.) As for workarounds for other missing things, I use ctrl-left-click as pickup/drop (instead of drag-and-drop) -- though I have a little hack for this, too, which I'll mention ... in a future post. Shift-left-click is, I believe, used to bring up the Behavior Tool, which acts in place of a right-click for a context menu, and the current avatar behaviour seems to go with single-left-click for bringing up a context menu on other avatars (for meeping, sending friend requests, offering gameplay, etc.)

I can also see a need for a mouse equivalent to the "every key" module I mentioned above, letting world owners map mouse actions to the meanings they desire: if I want single-click to be "look" and double-click to be "use", I should be able to set that, instead of being forced to use whatever the module-writer decided.


Overall, I'm a bit disappointed with the mouse support, because it's really the primary interface for many. The modifiers can help us for now, but I really hope that, once all of the bigger pieces of Metaplace get done, the mouse support can be revisited.

Joystick?

Is this so far-fetched? Not really; I have no idea if Flash can support joysticks or gamepads, but custom clients sure can -- and we would still hit the problem of having no support for them in the MakeInput() call. And this brings up something that has stuck in my craw for a bit: the special nature of MakeInput().

I had planned on talking about this in a future post, but I really don't understand why MakeInput() must be in the Define Commands() block. Or at least, why it must ONLY be in that block. I can see a desire to precompile this, to ensure that the definition is done correctly. Or can I? Every other API call checks this at runtime, not at compile time, and lets the world owner know in the Debug or Log window, and lets the user know by just not working. Why can't MakeInput() do the same thing? Why can't I call MakeInput() later in my script, after asking the user to define some keys, or after the user's preferences have loaded? And why can't we call DeleteInput()? There's a tag ([W_DELINPUT]) that is sent when a script is unloaded, which removes a defined input, so removing inputs after the world is running, and a user is connected, is certainly possible. Why can't we add inputs after we're up and running?

Maybe there's a good reason, maybe it goes against the design of the system, or maybe this was just oversight. Some day I hope to know! Until then, though, I hope my module will be a viable workaround, or that the collision of inputs doesn't become too big of a problem, too soon.

Wednesday, June 3, 2009

Worldspace versus Screenspace

In Metaplace, you have two canvasses on which to work: the "worldspace", where all of your objects sit, and the "screenspace", which is where the UI elements sit. There are a few key differences to note about them: worldspace is seen by everyone, where screenspace may or may not be shared; worldspace is affected by zoom, where screenspace is not; and screenspace is always on top of worldspace.

These three differences can be utilized to make all sorts of interesting effects that aren't immediately obvious to world builders. The fact that screenspace can be on a per-user basis means that it can help implement worlds where not everyone has the same knowledge -- fog-of-war and different kinds of "vision" (infravision, see-invisibility) can be supported through interesting uses of screenspace instead of worldspace. Basic menus and dialog boxes inherently "take advantage" of the fact that the UI doesn't zoom while the worldspace does, although the opposite can also be true -- zoom way out in a world (that supports it) with nametags on the avatar: the nametag soon dwarfs the puny little figure.

There might also be times when you want to have an effect between two objects: my Masses and Springs world needed to draw a line between two objects, but a line is in screenspace while the objects are in worldspace (if you go see the world, it currently suffers from a bug, which I've just reported -- it works okay if you click down-and-right of the face. Also, it only really works if you get out of full-screen mode, for reasons mentioned below).

So how do you go about matching up screenspace to worldspace?


There are four methods to attach UI (our screenspace objects) in Metaplace, two that are seen by everyone, and two that are per-user. UiAttachWorld() attaches UI to the viewport itself, treating the top-left corner as the origin, and it is seen by everyone. UiAttachUser() is the per-user equivalent. UiAttachObject() attaches UI to a specific worldspace object, based on the origin of that object (this is important later), and it is seen by anyone who has that object on-screen. UiAttachUserObject() is the per-user equivalent.

Each of these functions has its use: UiAttachWorld() is good for a high score board, or a population monitor; UiAttachUser() is good for a HUD with user-specific values; UiAttachObject() is good for nametags; and UiAttachUserObject() is good for context menus. For the latter two, the UI "follows" the object as it moves (or as you move away), which is what you likely want, and this is all nicely left up to the client; if the object moves off-screen, you don't see the UI, and that's probably a good design. Any UI attached to an object will hopefully be an appropriate size, such as a context menu that is reasonably sized to fit menu choices, or a nametag that is readable but not covering up half of the screen.

But is the nametag an appropriate size? What about when you zoom way out, like I mentioned before? And in the case of UiAttachWorld() and UiAttachUser(), the screenspace objects don't follow a specific object, but sit on-screen until removed, taking up real estate -- but how much?

The scaling and positioning of screenspace objects to worldspace objects is a two-part problem: zoom and screen size. The zoom can be read with GetPlace().zoom, but that only retrieves the "set" value of zoom, and cannot take into account any mouse-wheeling done by the user. This means that any screenspace-to-worldspace mapping requires the world to enable zoom lock. There's no way to set zoom from script (anymore - we used to be able to fake it with OutputToUser()), so we can't even force a zoom level on users if they change it, or have different zooms for different cases. Screen size is even worse: there's no function to request a user's screen size (which could vary like the zoom could), and there's no support for the undocumented [P_VIEWPORT] tag to force a client to a specific size.

Why is this such a problem? To do any sort of mapping between screenspace and worldspace, whether it's position or sizing, we need one or both of these values. Sizing, of course, only requires knowing zoom, so provided we enable zoom lock, we could ensure that our nametags are scaled appropriately with our avatars by using GetPlace().zoom to resize them. But since we have no way of changing zoom, I suppose this is really a one-time calculation anyway. Not too useful. But for positioning, and especially for screenspace items that might span between two worldspace objects, both position and size matter.

To map position between screenspace and worldspace, we need to know at least one point that maps between the two. Metaplace has the idea of a "camera", which is where the center of the viewport is focused; for most worlds, this is usually centered on the player's avatar, but Metaplace also allows the camera to be focused on a specific point in the world. The important thing to note is that this value -- either the player's position in the world or the camera's focal point -- is a worldspace coordinate that we know, and it maps to the center of the viewport. The fact that this is the center is important because we don't know how tall or wide our viewport is, which means that we still have no way of determining where in the worldspace the top-left of our viewport maps to. This means that we must always attach the screenspace items based on the center... but we don't know the center of the viewport, either!

So what do we do? If the camera is focused on the player, then we just use them as our center, using UiAttachObject() or UiAttachUserObject(); and if the camera is focused on a specific worldspace coordinate, we have to figure out how far away a known object is from that worldspace location (the player, or perhaps a more stationary object), convert this distance to screenspace (which we still haven't figured out how to do), and then attach to that object.

Sounds easy, right? Let me make it worse: the view type will also affect your calculations! Metaplace currently supports a bunch of views: top-down and side-view (which thankfully use the same scaling); isomorphic, stepped and sloped isomorphic; and rotated (or "UO") view. Also, as we hinted at earlier, the origin of the object to which we're attaching can matter, if our screenspace effect is related to the object itself.

Let's try to make sense of all this. First, let's think about scale. A zoom of 1.0 in Metaplace means that a tile is 64 pixels wide, regardless of which view we're in. Hurray for small blessings! This means that for every 64 pixels on the x-axis that we move something in screenspace, it will shift the equivalent of one tile over in worldspace, at zoom 1.0. A zoom of 2.0 makes everything twice as big, which means that one tile is now 128 pixels wide, and so our general formula is

screenspacewidth = (number_of_tiles_wide)*64*zoom

or

number_of_tiles_wide = screenspacewidth / (64*zoom)

The height of a tile depends on which view we're in: top-down and side-view use square tiles, so the size is also 64 pixels high, at zoom 1.0; rotated view's tiles are diamonds, but are proportional, so they, too, are 64-pixels high; and the isomorphic views are nicely at an angle where the tiles are half the height of their width, so 32 pixels high, at zoom 1.0. Given this, we now know how to scale screenspace elements to their worldspace counterparts; we know that if we want a UI window, button, etc. to be two tiles wide and two tiles high, we could write a function like this:

function tiles_to_pixels(user, w, h)
    -- tile dimensions, in pixels, at zoom 1.0
    local tilewidth = 64
    local tileheight = 64

    local view = user.place.view
    if view == 2 or view == 3 or view == 4 then -- the isomorphic views
        tileheight = 32                         -- iso tiles are half as tall
    end

    local zoom = user.place.zoom

    -- scale the requested size (in tiles) up to pixels
    return w * zoom * tilewidth, h * zoom * tileheight
end

and might then use this function like so:

local w, h = tiles_to_pixels(self, 2, 2)  -- note: the user must be passed in
local win = UiRect(0, "blank window", w/2, h/2, w, h)
UiAttachUserObject(self, self, win)

to draw a big square on top of the player. Big deal? It will be the same relative size, no matter what you set the Place's zoom level to. That's the big deal.


But what about position? This is all fine and good if we want scaled screenspace items that are a given distance from the player, but what if we don't know that distance all the time, because the player can move about?

Let's say that the player is at (10,10) in our world, but we want a screenspace item to appear at (4,6), because, perhaps, they have a treasure map marking X, and the player's Treasure Spotting skill is high enough that a glowing X should appear in the distance. One way would be to just drop a glowing X object into the world and be done with it, but other players would then be able to see it, even if they don't have the Treasure Spotting skill. Alternately, we could drop an invisible object in that location, and then attach a glowing X to it using UiAttachUserObject(), so only the Treasure Spotting player sees it; this would work for most cases, but some people will point out that the object information is still being sent to everyone else's client, and thus they can still know that the object is there, even without a glowing X. If that's not enough of a concern for you, let's say instead that we want a glowing line drawn from the player to the location of the treasure -- there's no UI function to attach a line from one object to another, so we're back to going through all of this nonsense anyway.

It sounds easy, that we can take our current position (10,10), and the desired position (4,6), find the difference (-6,-4), turn those into pixels (-384,-256) or (-384,-128) depending on view (at zoom 1.0), and voila, we have our offsets from our player's position in screenspace. And that's true -- if we're in top-down or side-view.

But this is not the case for any of the other views:

[diagram: rotated (UO) view]

[diagram: iso view]


See how, in going from (10,10) to (9,10), while it sounds like we're moving one tile "left", we're actually moving left and up on the screen. This is the "rotated" part of the view: the X and Y axes of the worldspace don't align with the X/Y axes of screenspace. This means even more figuring, to determine how far each tile in worldspace slides along these diagonals in screenspace.

Let's look at the rotated view first. I think it's pretty clear from the diagram that as we move along the X axis (from (9,10) to (10,10)), we're moving half-a-tile to the right, and half a tile down, or (32,32) pixels in screenspace (at zoom 1.0) for every increase along the X axis in worldspace. And, as we move along the Y axis (from (10,9) to (10,10)), we move half-a-tile to the left, and half-a-tile down, or (-32,32) pixels in screenspace for every increase along the Y axis in worldspace.

This means that our previous attempt, where we determined that our offset (our difference) was (-6,-4), can use these offsets to figure out where we're going: since the difference was -6 on the X axis, we can multiply that by the (32,32) we figured out and get (-192,-192); the difference on the Y axis was -4, multiplied by our (-32,32) gives us (128,-128). Add these two together, and we get (-64,-320) instead of the (-384,-256) we originally (naively) came up with. If you're still not convinced, try it again going from our (10,10) to (6,6) -- looking at the diagrams above, you should realize that if we step back on both the X and Y at the same time ((10,10) to (9,9)) the actual screen movement is straight upwards. Your result should have zero for the X, and a negative number for Y.

And what about the isomorphic view? Well, it's still the same formula of "half-a-tile right" and "half-a-tile down", but remember the height of the tile, in screenspace, is the only thing that changes from the rotated view to the isomorphic view, so if we figure things out based on "tiles left" and "tiles down", we can convert that using our function above.
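To pull the per-view math together, here's a sketch of a single conversion function for worldspace deltas. So as not to guess at the numeric codes for the top-down, side and rotated views (the tiles_to_pixels() function above only identified the isomorphic ones), this hypothetical helper takes the view as a string:

function tiles_delta_to_pixels(dtx, dty, view, zoom)
    if view == "topdown" or view == "side" then
        -- axes align with the screen: one tile is 64x64 pixels at zoom 1.0
        return dtx * 64 * zoom, dty * 64 * zoom
    end
    -- rotated and isomorphic views: each +1 step along worldspace X moves
    -- half-a-tile right and down; each +1 along Y, half-a-tile left and down
    local halfw = 32 * zoom                                 -- half the 64-pixel width
    local halfh = ((view == "rotated") and 32 or 16) * zoom -- iso tiles are half as tall
    return (dtx - dty) * halfw, (dtx + dty) * halfh
end

As a sanity check, tiles_delta_to_pixels(-6, -4, "rotated", 1.0) returns (-64, -320), matching the worked example above.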


The calculations above let us figure out the offset of the glowing X from the player, in the player's screenspace. But the player is likely moving; if we didn't take that into account, the glowing X would stay the same distance away, "moving" along the ground as the player did. Using this approach, we'd have to re-adjust the X (using UiPosition()) as the player moved, hooking into path_begin() and/or path_end(), or on a tick-based timer.
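As a sketch of that re-adjusting, assuming we've stashed the treasure location and the UI handle on the user: something like the following could be called from whatever handles the path_end() Trigger. I'm guessing at UiPosition()'s argument order, and at objects exposing .x/.y fields, so verify both before using this.

function update_treasure_marker(user)
    -- worldspace delta from the player to the treasure, in tiles
    local dtx = user.treasure_x - user.x
    local dty = user.treasure_y - user.y
    -- convert to a screenspace offset with the helper from earlier
    local px, py = tiles_delta_to_pixels(dtx, dty, "rotated", user.place.zoom)
    UiPosition(user, user.treasure_marker, px, py)
end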

Alternately, we could attach it to a stationary object; the X-marks-the-spot code might find a tree, rock or seashell near the (4,6) location, and use that as the attach point. The drawback is that the code would have to do a search for an object, instead of just lazily using the player, but the savings would be in not having to constantly reposition the X. As long as the code can determine that an object isn't likely to move, this might be worthwhile.

So where's the final code? Err, well, I had planned to post the two functions that I use often -- worldSpaceToScreenSpace() and screenSpaceToWorldSpace() -- but realized as I was writing this post that I don't handle all views (the "rotated" view had been removed when I wrote them, so I don't support it), nor do I handle the camera not being attached to the user (the code makes a bunch of assumptions in this regard). Once I add this extra support, I'll post the code to the Marketplace.

I didn't talk about how to handle that -- the camera being fixed on a location instead of the player. It's really just one more offset to be calculated, though: if we already know how to make a screenspace object appear at a certain worldspace location when the player is at the center, then as long as we can compute the player's offset from the camera location, we can still figure out the correct overall offset. I realize as I type this that diagrams might be helpful, but trust me when I tell you that drawing those two pictures above taxed my artistic skill.

The last thing I glossed over is the origin of an object. Metaplace allows the origin to be defined anywhere on the sprite that represents the object -- three "fixed" modes (top-left, center and bottom-center), as well as a free-form "percentage" system. While this doesn't matter if you're just trying to find another worldspace location -- the player is at (10,10), regardless of how the sprite is drawn -- there are times where you might want to know the exact screenspace coordinates at which an object's sprite is drawn. This can depend on the scale of the sprite, and on the zoom as well. One example might be drawing a box around a sprite: knowing the height and width isn't enough if you don't know where to start drawing. I have a function for this, too, which returns the top-left, center and bottom-middle screenspace coordinates of any object, and it's part of the same screenspace/worldspace library.

I'm surprised how often my world ideas require mapping values between the two spaces; my UO world needed it for dropping items in-hand, and that's where I first wrote it. But things like the nametags should scale in size and location, regardless of zoom, as should speech bubbles. And attaching UI effects to objects (damage numbers, highlighting or selector graphics) could also take advantage of this to good effect.

Tuesday, June 2, 2009

Blog content -> Metaplace content

My first few posts have been mainly about Metaplace from a scripting point-of-view, and I can't promise that many of the future ones won't be along the same theme.

That being said, I hope the non-programmers reading the blog don't stop reading. Even if you're sure you'll never try your hand at scripting, reading about what's possible in Metaplace can help you design your worlds, and help you find folks in the Metaplace community willing to implement some of the ideas you come up with, perhaps in exchange for some design ideas, some writing/dialogue, or some art, depending on where your talents lie.

Also, if anyone finds modules or worlds that demonstrate any of the ideas that I put forth, and I fail to mention them in the blog itself, please let me know, either in a comment or in Metaplace mail, and I'll try to keep such information up-to-date for future readers. While this might seem like the Metaplace Marketplace's job, that does require authors to properly tag their creations, and if Metaplace becomes as successful as I expect, the Marketplace is going to become very full indeed, perhaps causing some worthwhile modules to be missed.

Of course, I'd love to be the one implementing all of the ideas that I come up with here, but family life comes first, which means that Metaplace, unfortunately, doesn't get the time that it used to. Still, I do have my ever-growing list of ideas and projects, and SOME day I'll get through it... perhaps when my grandchildren start helping.

Movement

I would say that, apart from "seeing", movement in a game or virtual world is the most important feature. While I can come up with a few games (Boggle) or worlds (chat room) that don't require the idea of player movement, they're the exception and not the rule. Given this, you would expect that movement would be a solid, ironed-out subsystem of virtual worlds in general, and Metaplace specifically.

Unfortunately, as recent discussion on the forums has shown, that's not quite the case. Sure, your starting world in Metaplace has a nice click-to-move pathfinding script attached, but that doesn't provide a solution for everyone.

Inputs

Not counting voice control or some telepathic USB device I don't know about, computers have two methods of user input: the keyboard and the mouse. Metaplace gives us access to both, for the most part, which means we have a choice on how we get instructions to our game world, and thus have a choice on how one might move around in our world.


The mouse

The mouse is probably the most common input method, since at a glance we can see our world, where we want to go, and can deftly move our mouse cursor there and simply click. Getting this click event is easy enough to do in Metaplace. But what about other mouse-based movement? In Ultima Online (you'll eventually get used to me using it as a reference) you did not "walk to" with the mouse, but rather "walked toward", by using the right-mouse-button on the screen and holding it down; the distance that the mouse cursor was from the player also let you decide between walking and running.

Unfortunately, this isn't possible in Metaplace, for two reasons: we don't have access to a right-click event (this brings up a Flash menu in the client; there are apparently hacks out there to allow right-clicking in Flash, but they aren't used for the Metaplace client); and we don't have access to mouse-up versus mouse-down events (and mouse-location), just mouse-clicked. This isn't a Flash limitation, from what I understand, but more a consideration of the bandwidth involved with sending the mouse location so-many times per second.

Still, we've got the click event, which lets us get our intentions across pretty well; we're also able to modify that with keys, such as ctrl-click to run there, or shift-click to sneak there.

The keyboard

The keyboard typically has two forms of movement: a "step-based" method and a "continuous-walk" method. There are also games where you control a cursor with the keyboard, but I suggest that that's only used when a mouse isn't available; and I suppose that you could have "direct" movement to waypoints using letters or numbers, but I'm going to happily ignore that idea for now.

In the step-based movement, players will press their arrow keys or WASD keys once for each "step" they wish to take; this would likely be in a tile-based game, where a discrete step is perhaps something visible in the world, and it represents the smallest unit of movement in the world. The continuous-walk movement instead listens for key-down events and key-up events, so holding down the arrow key will keep you walking in that direction until you release it (or until the game decides you can go no further, due to movement "points" running out, obstacles, or consumption by monsters).

The continuous-walk method is really just a shorthand way of doing the step-based movement, allowing the player to hold down the key to represent repetitive, consistently-spaced key presses. This means that continuous-walk also has the idea of a "step", but it's more likely that these steps are smaller: it's no big deal to quickly walk across a 64-"step" tile by holding down an arrow key, but you certainly wouldn't want to have to press the arrow 64 times for the same effect! The hold-and-release of a key allows the player to stop whenever they want, but there might also be the need to support the larger steps anyway, where releasing the arrow key still has the player continue to move until they reach the next full tile. It all comes down to the needs of the game world.
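The bookkeeping for the continuous-walk method might look something like this sketch; on_key_down() and on_key_up() are hypothetical names, to be wired up to however your world actually receives key events:

local held = {}                     -- per-user: which direction key is down

function on_key_down(user, dir)     -- dir is something like "north" or "east"
    held[user] = dir
    step(user, dir)                 -- take the first step right away
end

function on_key_up(user, dir)
    if held[user] == dir then
        held[user] = nil            -- step() will stop re-queuing this user
    end
end

function step(user, dir)
    -- move one small step in dir; when the step completes (in path_end(),
    -- a timer, etc.), call step(user, held[user]) again if held[user] is set
end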

Movement

Once we know where we want to go (or at least, which direction we should start heading), how do we actually move? We could instantly appear there, we could slide across the screen in a direct line, or we could follow some sort of path around obstacles. Each of these methods has their place, depending on the world or game. Metaplace provides four ways to get you around.

MoveObject

The MoveObject() function is an instant teleport. No delay, no motion, just here one instant, and there the next. This might be used by simple board games, moving pieces around, though you may want the movement to be animated (if only for your opponent(s) to better see what your move was). It could be used in a simple RPG, stepping from tile to tile, but again, it might look better to have them slide along instead. Teleportation! There's one that surely needs to use MoveObject. The nice thing about MoveObject is that there's no issue of collision detection, because we're not moving through anything; however, if there's something already at that location, MoveObject won't stop you from piling objects on top of each other, so if that's a problem, it needs to be handled in the code. MoveObject doesn't allow you to move to a blocking tile, however. And there's no issue about timing or animation, because it's an instantaneous event.
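Since MoveObject() won't prevent pile-ups for us, here's a sketch of one way to guard against them, tracking occupancy in a plain Lua table. The MoveObject(obj, x, y) signature and the .x/.y fields are assumptions on my part:

local occupied = {}                       -- keyed by "x,y" strings

function try_teleport(obj, x, y)
    local key = x .. "," .. y
    if occupied[key] then
        return false                      -- something is already standing there
    end
    occupied[obj.x .. "," .. obj.y] = nil -- vacate the old tile
    occupied[key] = obj
    MoveObject(obj, x, y)
    return true
end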

SlideObject

SlideObject gives that nice bit of animation to the game or world, allowing you to move your objects from here to there in a specified amount of time. Of course, you might still want to animate the sprite to make it look like it's walking, rolling or otherwise ambulating, but the movement there is nicely handled for you.

There are a few things to note: as far as the server is concerned, the object is instantaneously teleported. The client is told about the sliding, and does the job of moving the sprite along the right path at the right speed. However, the object's location along that path is always accessible, correctly, from the server. So what, then, does it mean that "as far as the server is concerned, the object is instantaneously teleported"? I believe this means that, for any subsequent requests by other objects, that location is considered occupied, such as if another object wanted to MoveObject there. I've admittedly not tested this out, yet, but that's the only meaning I can imagine.

Also, SlideObject will allow you to slide onto tiles that are blocking. This can be a good or bad thing. In the new Rocking the Metaverse world, the bouncers get you up to the VIP section by sliding you there, past all of the blocking tiles on the stairs. On the other hand, a movement system using SlideObject would have to look for blocking tiles manually to prevent the player from wandering over them. Also, SlideObject will allow the physics system (below) to trigger hit() and hit_by() events, if a collision takes place during the slide. If the blocking-tile behaviour of SlideObject is acceptable to you, this is a good way to go.
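One other SlideObject() consideration: since you specify a time rather than a speed, slides of different lengths will move at different apparent speeds unless you compute the time from the distance. A sketch, assuming a SlideObject(obj, x, y, milliseconds) signature and .x/.y fields:

function slide_at_speed(obj, x, y, tiles_per_second)
    local dx, dy = x - obj.x, y - obj.y
    local distance = math.sqrt(dx * dx + dy * dy)   -- in tiles
    SlideObject(obj, x, y, (distance / tiles_per_second) * 1000)
end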

Physics

The Metaplace engine has a built-in physics system. Instead of specifying a specific location to move or slide to, we can define a facing (direction) and a speed, and let our player keep going until we change these values. Physical objects are affected by quite a few things: they will fire the hit() and hit_by() Triggers when they hit objects, allowing you to decide what should happen; blocking tiles will fire a bounce_tile() Trigger, and also reverse your direction for you; and the world edges fire a bounce_border() Trigger, again automatically applying the "bounce" off of them. There's also a facing() Trigger fired whenever the direction changes; this is useful for rotating your sprite to face the new direction.

This automatic bounce effect can be a problem, though, if that's not the behaviour you want. It works well for a space maze game, like Uberspace, or an Arkanoid/Breakout game, but for other uses we might wish that movement in the direction of the blocking object was dampened down to zero, just stopping movement in that direction. Of course, that's what you can do with the bounce_tile() and bounce_border() Triggers, but I would almost expect the two methods of handling it to be swapped -- the bounce to be coded, and the stopping to be standard. Or perhaps a flag?

Alas, these aren't the only concerns with the physics system. Objects tend to "drift", where the client's representation isn't quite the same as the server's, which can lead to inexplicable collisions (the server sees the collision, but the player's view in the client doesn't match or make sense) and to sudden lurching movements (when the player changes the direction or speed, and the server suddenly updates the client with some out-of-sync location information). Additionally, detecting collisions with blocking tiles or the corners of the world boundary doesn't quite work, allowing objects to escape into the unknown.

Still, if you have good control over your physical objects, and use the physics system for short, immediate movements, it can provide more flexibility than SlideObject. The fine-tune control over direction, speed and even acceleration might be easier to do with physics than it is using SlideObject, where you might have to convert angles and slide times or perform multiple slides to get a nice effect.

Pathfinding

The pathfinding system is what you get with most of the worlds that you create in Metaplace, generally tied to the mouse click. This system uses an algorithm called A* (a-star) to try to find the best path from where you are now to where you want to go, avoiding blocking tiles and tiles with blocking objects on them.

Whether or not a path is found, a Trigger is fired, either path_begin() or path_not_found(), so you can either start animating your walk, or give a loud BZZZ if the player cannot get somewhere. Upon reaching the end, a path_end() Trigger is also fired, to allow the walk animation to be turned off, or to allow some NPC to pathfind to its next location (perhaps because it's walking a circuit, patrolling).

The pathfinding system uses the physics system, which means that on the way, you can receive hit() and hit_by() Triggers, as well as get facing() calls. In theory, you shouldn't receive any bounce_border() or bounce_tile() Triggers, since the path has walked around those for you. Finally, you can also specify a "mode", which tells the pathfinding system that it can or cannot walk across diagonals; this is useful if you want to avoid the appearance of a collision when there isn't really one (a future post) -- you can see some good diagrams on the PathToLocation() wiki page.

The best solution?

From the four, it sounds like pathfinding is the best way to go for most uses, especially if you need to avoid blocking tiles, if you care about collisions, and if you want the player to be able to single-click their way around. It can also be tied to a key-press movement system, where, with each press, the system attempts to pathfind one tile left/right/up/down. A key-holding continuous system is a little trickier; based on the speed at which the pathfinding gets the player from tile to tile, you could have your movement code call the pathfinding system again, from within path_end(), if the key is still being held down. This would mean that the walk animation should not be stopped, if it exists, which might cause some collisions between multiple modules trying to handle the path_end() Trigger. Attention would have to be paid to the smoothness of the motion, to see if multiple pathfind calls, chained together like this, would appear as a jerky, stop-and-start motion.
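A sketch of that chaining, called from whatever handles the path_end() Trigger. Here held_direction() is a hypothetical lookup (like the held table from the keyboard section), stop_walk_animation() is a hypothetical helper, and PathToLocation()'s argument list is my assumption -- its wiki page has the real signature and the diagonal "mode" flag:

function on_path_end(user)
    local dir = held_direction(user)   -- nil once the key has been released
    if dir then
        -- leave the walk animation running and queue the next one-tile path
        PathToLocation(user, user.x + dir.dx, user.y + dir.dy)
    else
        stop_walk_animation(user)
    end
end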

Pathfinding, because of its adherence to blocking objects, isn't useful in worlds where you can ignore these features, such as when you can fly over terrain. As mentioned before, the bouncer in the Rocking the Metaverse world has to SlideObject you into the VIP room, because pathfinding will NOT get you there (as you can see for yourself if you click up top). And, if you're looking for the instant-move effect, pathfinding might be able to fake it with a high enough speed, perhaps... I haven't tried!

I think the marketplace needs a handful of movement systems available, so those who know what they want can piece it together. Whether it's mouse-clicks, key-presses or key-holds, and moving, sliding or physics, there's no reason we can't mix and match these together, or even have a single module that allows the world administrator to select their world's manner of movement. Unfortunately, over the span of the alpha and beta testing, there have been a variety of keyboard and mouse movement systems, using move, slide, physics and pathfinding, each at one point being the "standard" movement script provided with your new world. It would be a shame if the click/pathfinding system used by the avatars, quite obviously the most popular method now, becomes the only way people know how to interact with their worlds.

Monday, June 1, 2009

Screen effects

While animating a sprite is surely the best way to get a visual effect, there are other methods to change the look of your world which don't require nearly as much artistic skill.

Metaplace supplies a growing number of these: tinting has been available for over a year, allowing each sprite to change its colour to great effect; decals provide a way to have a ground-based effect from a sprite, such as a shadow; the general UI system allows a mask overtop of any region of the screen, which, with creative use of alpha, can generate some interesting effects; the sprite light system provides a similar effect to the UI system, but with a simpler method of attaching the effect to an object, and also supports rotating along with the object; and the effects system provides a handful of other interesting changes. And last week, the screen effects functions were added.


This new set of visual effects has three functions: UiScreenEffectColor(), UiScreenEffectMatrix() and UiScreenEffectClear(). I'll abbreviate these as USEColor/USEMatrix/USEClear to save wear and tear on my keyboard.

These effects, as you can guess, cover the whole screen, regardless of size, unlike any of the other effects mentioned above, which were specific to a sprite or a region of the screen. The USEClear function, obviously, removes an existing screen effect; it should be noted that, because it does not take any sort of "handle" or "reference", you can only have one such effect in place at any one time. This is different from most of the other effect methods mentioned above, and will be discussed later.

I should mention that, until the introduction of these various effects systems, I knew NOTHING about colour theory. Playing with these various systems has made me learn a little bit, so I should be able to convey some ideas here, but for all of you that know more than I do, be sure to chime in!

Of the other two functions, the USEColor function is the simpler of the two. The definition of the function is

UiScreenEffectColor(user, time, r, g, b, method)

The "r", "g" and "b" values are, essentially, scales that are going to be applied to the red, green and blue portions of every pixel on the user's screen. The scales, perhaps a bit confusingly, are in the range of 0 to 255; while these values make sense if we're setting absolute colours, they don't make as much sense as scaling values. However, as long as we think of zero as "none" and 255 as "full", we can think of the values as everything in between, such as 127 as "half". In fact, you can just divide these values by 255 to get a percentage, if that helps (and in fact this is exactly how it's explained in the wiki, and how it's done in the USEMatrix function). The method parameter currently only accepts "multiply". You can get an "additive" effect with some of the other effect systems, but not as a full-screen effect with USEColor.

So what do these scales allow us to do? Well, if we set them all to 0, we're basically saying "set red to 0% of its original value, green to 0% of its original value, and blue to 0% of its original value" which is going to turn our RGB to 0, and give us a completely black screen:

UiScreenEffectColor(self,1000,0,0,0,"multiply")

You can try this out by going to my crwth_ui test world, pressing the backtick (`) key (or, if on a non-US keyboard, try the single-quote key (')) to get a command console, and typing

usec 1000 0 0 0

You should see the scene fade to black in one second (the 1000 value is for time-to-fade). Note the little sprite light beaming from your head -- I'll talk about that in another post. You can return it to normal by typing

useclear

You can get a nice dusk effect by using values of 127:

UiScreenEffectColor(self,1000,127,127,127,"multiply") (or "usec 1000 127 127 127")

Also, I kinda lied above, when I said that the scale range was 0-255; you can actually go higher than that, to scale the original values even higher than they were (which just makes the 0-255 range even more confusing):

UiScreenEffectColor(self,1000,500,500,500,"multiply") (or "usec 1000 500 500 500")

Other interesting effects include removing all of the red from your view:

UiScreenEffectColor(self,1000,0,255,255,"multiply") (or "usec 1000 0 255 255")

or getting the red to stand out a little more than usual:

UiScreenEffectColor(self,1000,400,200,200,"multiply") (or "usec 1000 400 200 200")

or you can just see how much blue there was in your world:

UiScreenEffectColor(self,1000,0,0,255,"multiply") (or "usec 1000 0 0 255")

Remember that between each of these, you don't need to USEClear the effect, because you can only have one going at a time. You might want to clear it if you're trying this out in crwth_ui, though, to "reset" the view to the original, to get a better impression on the change each is making.

These effects aren't too bad; they allow for simple brightening and darkening of our world, and also allow for a little bit of colour shifting. But the real power comes from the last function in the set.


USEMatrix is defined as the following:

UiScreenEffectMatrix(user, time, matrix)

where the matrix is a table of 20 values. These 20 values are used in this way:

redResult=(m[0] * srcR) + (m[1] * srcG) + (m[2] * srcB) + (m[3] * srcA) + m[4]
greenResult=(m[5] * srcR) + (m[6] * srcG) + (m[7] * srcB) + (m[8] * srcA) + m[9]
blueResult=(m[10] * srcR) + (m[11] * srcG) + (m[12] * srcB) + (m[13] * srcA) + m[14]
alphaResult=(m[15] * srcR) + (m[16] * srcG) + (m[17] * srcB) + (m[18] * srcA) + m[19]

This sure looks like a mess, with a lot of multiplication, a lot of addition, values from blue being added to the green, and alpha to the red... what's going on?

If you learned (and remember) any matrix mathematics in high school or post-secondary school, then this might look familiar. If not, don't worry about it: you don't need to understand the math behind HOW this is applied, just what each number is going to contribute. For those that do remember matrices, and are curious, it might help if I write it out this way (I have no doubt the proportional font is going to make a mess of this):


| m0 m5 m10 m15 |
| m1 m6 m11 m16 |
| r g b a 1 | * | m2 m7 m12 m17 | = | r' g' b' a' |
| m3 m8 m13 m18 |
| m4 m9 m14 m19 |


The row on the left is a 1x5 matrix of our pixel's value (the "1" at the bottom is there for a reason, I promise), and the block in the middle is our matrix of values. The final row will be the new value of our pixel.

Why is this so complicated? First of all, using a matrix means that we can allow any of our original values -- our original red, green, blue and even alpha values -- to have an effect on the result. With USEColor, all we could do was modify each of the colours only with respect to itself (and not alpha at all), which meant that if we increased our red's scale, our image was going to get brighter overall; we could try to guess how much to reduce our green and blue by to keep the total brightness the same, but we'd be guessing, because each pixel is different. Using this matrix means that, if we choose, our new red value can be affected by how much green and blue we also have in this pixel, not only the red.

The second reason to use a matrix is because it can make it easier to apply multiple effects. The screen effect system only allows one effect in place, so if we want to increase the red but decrease the green, we'd have to figure out a way to "add" these two effects together into a single call. This is easy enough using USEColor, because each of the scaling values is separate from the other. But when we extend our abilities with USEMatrix, it's not so simple. Luckily, matrix multiplication of two different effects can let us do them both at once (with a challenge or two, if you know your matrix math). Even if we could have more than one effect going at once, we'd likely want to multiply our multiple effects into a single matrix, because those formulas above, with sixteen multiplies and sixteen additions, are going to be done to every single pixel on your screen; that can approach two million pixels at full screen, on a nice monitor, and if every one of those pixels has to go through all those multiplications and additions every time the screen is redrawn (20, 30, 40 times a second?), that's a lot of processing no matter how fast your computer is, how efficient the client's renderer is, or if your video card is doing the work.

The values in the matrix are numbers more in line with the idea of scaling, compared to those used in USEColor, but because of the additive nature of the formulas above, it's best to think of them more as a "ratio" of contribution to the final value. What does this mean? Well, look above at the formulas, and find the mention of m[0]:

redResult=(m[0] * srcR) + (m[1] * srcG) + (m[2] * srcB) + (m[3] * srcA) + m[4]

This shows that the "redResult", the red portion of each pixel, is affected by m[0] through m[4], and each affects a different "source colour"; m[0] is multiplied by the pixel's red value, so m[0] can be thought of as "the amount that the original red affects the resultant red", and m[2] as "the amount that the original blue affects the resultant red."
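If it helps, here are the four formulas written out as a plain Lua function. Note that Lua tables are conventionally 1-indexed, so m[1] below is the wiki's m[0]:

function apply_matrix(m, r, g, b, a)
    -- each result channel mixes all four source channels, plus a constant
    local r2 = m[1]*r  + m[2]*g  + m[3]*b  + m[4]*a  + m[5]
    local g2 = m[6]*r  + m[7]*g  + m[8]*b  + m[9]*a  + m[10]
    local b2 = m[11]*r + m[12]*g + m[13]*b + m[14]*a + m[15]
    local a2 = m[16]*r + m[17]*g + m[18]*b + m[19]*a + m[20]
    return r2, g2, b2, a2
end

Feeding it a half-brightness matrix (0.5 in the m[1], m[7] and m[13] spots, 1 in the m[19] spot, zeroes elsewhere) and a pixel of (200, 100, 50, 255) returns (100, 50, 25, 255): every colour halved, alpha untouched.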

Specifically, look at m[0], m[6] and m[12]. These three are the amount that red, green and blue affect red, green and blue in the result. Sound familiar? These are the positions in the matrix that simulate the effect of the r,g,b values in USEColor. In fact, if we scale the USEColor values down from the 0-255 range to a 0.0-1.0 range (by dividing them by 255, as mentioned before), and turn all of the other matrix values to zero (except for the alpha one, which we'll discuss shortly), we discover that we can make a USEMatrix do the same thing as a USEColor, such that

UiScreenEffectColor(user, time, r, g, b, method)

is equivalent to

UiScreenEffectMatrix(user, time, {
r/255 , 0 , 0 , 0 , 0 ,
0 , g/255 , 0 , 0 , 0 ,
0 , 0 , b/255 , 0 , 0 ,
0 , 0 , 0 , 1 , 0
})

and that

UiScreenEffectColor(self,1000,127,127,127,"multiply")

is the same as

UiScreenEffectMatrix(self,1000,{0.5,0,0,0,0,0,0.5,0,0,0,0,0,0.5,0,0,0,0,0,1,0})

or

UiScreenEffectMatrix(self,1000,{
0.5, 0, 0, 0, 0,
0, 0.5, 0, 0, 0,
0, 0, 0.5, 0, 0,
0, 0, 0, 1, 0})

for legibility. (Yes, 127/255 isn't the same as 0.5 ...)

You can try this in the crwth_ui world with the following:

usem 1000 0.5 0 0 0 0 0 0.5 0 0 0 0 0 0.5 0 0 0 0 0 1 0

or

usem 1000 0.5,0,0,0,0,0,0.5,0,0,0,0,0,0.5,0,0,0,0,0,1,0

or

usem 1000 {0.5,0,0,0,0,0,0.5,0,0,0,0,0,0.5,0,0,0,0,0,1,0}

Each of our USEColor examples can be turned into a USEMatrix call in the same way, by only considering m[0], m[6] and m[12], and leaving the other colours' contributions to the result as zero. This was what we noted above as a limitation of USEColor.

So why would we want the other colours to contribute? Why would I want the amount of green in my original pixel to affect how much red I have in my new one?

One reason was hinted at above, with USEColor: the fact that the total brightness of the image is going to be changed, if we just start increasing or decreasing each of the colours. However, if we can "know" the brightness of each of the pixels, somehow represented in our matrix, then we can maintain that brightness even while changing the colours! But how do we do that?

Leave it to the colour scientists to tell us. That's right, the colour scientists. They've gone and figured out the amount that each of the red, green and blue colours contribute to the overall brightness of a pixel. If it were completely even, then we could determine the brightness by just adding up the values of red, green and blue, and could then turn every one of the pixels to a grey that matches that brightness. Because our total "brightness" can be anywhere from 0 to 765 (red/green/blue all zero, to red/green/blue all 255), we would divide the total by 3 to get back to our 0-255 range. We can do all of this with our matrix -- remember that it does multiplication and addition for us, so if we just do the dividing-by-three to all values first (by multiplying by 1/3, or 0.333), and then add them together, we can get a greyscale image:

UiScreenEffectMatrix(self,1000, {0.333,0.333,0.333,0,0,0.333,0.333,0.333,0,0,0.333,0.333,0.333,0,0,0,0,0,1,0} ) (or " usem 1000 {0.333,0.333,0.333,0,0,0.333,0.333,0.333,0,0,0.333,0.333,0.333,0,0,0,0,0,1,0} ")

Not bad, right? But the colour scientists claim that red, green and blue don't actually contribute evenly to the brightness; try this:

UiScreenEffectMatrix(self,1000, {0.3086,0.6094,0.0820,0,0, 0.3086,0.6094,0.0820,0,0, 0.3086,0.6094,0.0820,0,0, 0,0,0,1,0} ) (or " usem 1000 {0.3086,0.6094,0.0820,0,0, 0.3086,0.6094,0.0820,0,0, 0.3086,0.6094,0.0820,0,0, 0,0,0,1,0} ")

These values -- 0.3086, 0.6094, 0.0820 -- are apparently the ratio of brightness for each of red, green and blue; you'll notice that they add up nicely to 1.0000, the total brightness. (I got these numbers here, where they mention another set of numbers used for NTSC, but that depends on the gamma of the image, which is beyond the scope of this post (and beyond my knowledge).)

Having this matrix is useful, because it allows us to get a screen that has the same brightness as before, but with no colour, leaving it open to colouring in any way we choose. Of course, colouring it afterwards isn't possible, since we can't apply a second filter, so we'd need to combine the two effects into a single matrix, which requires some matrix math. I'm currently finishing up a Lua matrix math module, which I'll post so that people can blend various screen effects together.
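Until that module is ready, here's the core of what it has to do -- a minimal sketch of combining two effects, using the same 1-indexed, row-major layout as the apply_matrix() sketch earlier. Each 4x5 matrix is treated as a 5x5 one with an implied bottom row of {0,0,0,0,1}; multiplying them yields a single matrix that applies "first", then "second":

function combine(first, second)
    -- read entry (row, col) of a 20-entry matrix, supplying the implied fifth row
    local function get(m, row, col)
        if row == 5 then return (col == 5) and 1 or 0 end
        return m[(row - 1) * 5 + col]
    end
    local out = {}
    for row = 1, 4 do
        for col = 1, 5 do
            local sum = 0
            for k = 1, 5 do
                sum = sum + get(second, row, k) * get(first, k, col)
            end
            out[(row - 1) * 5 + col] = sum
        end
    end
    return out                      -- a 20-entry table, ready for USEMatrix
end

Combining the greyscale matrix with the alpha-halving one below this way reproduces the hand-combined "ghost" matrix that follows.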


So what about that alpha stuff? This is the transparency of the pixels, going from zero for fully-transparent, to 255 for fully opaque. For instance, you could use

UiScreenEffectMatrix(self,1000, {1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0.5,0} ) (or " usem 1000 {1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0.5,0} ")

to make the whole world half as opaque as it used to be; in the case of the crwth_ui world, you should now see the background "metaplace" logo through the cacti and such. The matrix we used said to leave red, green and blue alone, but to change the alpha of every pixel to half of its former value. If you combine that with the grey-scaling matrix (easy to do by hand, without complex matrix math),

UiScreenEffectMatrix(self,1000, {0.3086,0.6094,0.0820,0,0, 0.3086,0.6094,0.0820,0,0, 0.3086,0.6094,0.0820,0,0, 0,0,0,0.5,0} ) (or " usem 1000 {0.3086,0.6094,0.0820,0,0, 0.3086,0.6094,0.0820,0,0, 0.3086,0.6094,0.0820,0,0, 0,0,0,0.5,0} ")

you get what might be called a ghost state of the world; the background coming fully through might be a useful effect for a world builder.

The original matrix formulas above had things like m[3], which set "how much the pixel's new red is affected by the original pixel's alpha". I'm really not sure if there's a good use case for this, but perhaps the more creative types can prove me wrong; the only reason that that exists is to "fill in" the matrix.

And what about the last set of values, m[4],m[9],m[14],m[19]? These don't take any of the previous values from the pixel, but just simply add in a value. Again, I can't really think of a good example for this one, but I welcome anyone who can!


I hope to add something to the crwth_effects world to demo these new features in a nicer interface, but for now, anyone interested can try out the crwth_ui world and the "usec" and "usem" commands. As for my plans for screen effects, I want to make a module that provides sun(s) and moon(s) on user-settable revolutions; they could each cast their own colours, would vary their light based on angle, and could introduce colours (such as a red sunrise or sunset) based on angle as well. Two moons would cast more than one colour, and the sun (or two!) might drown out any effect the moons have. All of this will have to be compacted into a single matrix, of course, which is why I'm writing the matrix library. To see something along these lines, check out Brooke's (chooseareality) model_testing world, where she has a simple day and night cycle with the USEColor function (and her really cool fog!)