Monday, December 21, 2009

RIP Metaplace.com

This is definitely a sad day for me. One of the most interesting things I've ever been involved in is Metaplace, and it has been announced today that metaplace.com, the User-Generated Content portion of their business, will be closing as of January 1st.

Anyone reading this blog will likely know of Metaplace, for that's its whole purpose; I might have been critical of some things in Metaplace, but for the last two years it has been one of my biggest distractions. It is (was) a platform about which I often thought, "what else can I do with this?", or, when a new feature came out, "what can I create with this new feature?", or, if I had any idea for a game/virtual environment, my first thought was "how can I implement this in Metaplace?"

From a technological point-of-view, I think Metaplace did it right; I've said to others that it's very much the way that I would have done it, had I written my own engine. I hope the code is preserved for a future project, because while it wasn't "done", it was "right so far".

I realize, as I chat on MSN with KStarfire, a friend I've made through Metaplace, that this announcement has left a larger hole than I realized 20 minutes ago, because so much of my free programming has been around Metaplace, whether in it proper or in external tools. One of the tools that I started working on, as a way to learn Google's Go language, was a tool to download all of the assets of a world. Now it looks like I have two weeks to get it going, to salvage the work I've put into Metaplace, if only for nostalgia.


I sat down tonight to continue work on the bitmap font module, something that I recently picked up again, and was making really good headway with.

I also planned on whipping up a small module for a user on the forums for togglable MP3 music playing, something that would have taken me only an hour or so to write.

Over the last week, as my two-and-a-half year-old daughter keeps asking to "sit the internet" to play her Disney computer games, I've been thinking about how to make some toddler games in Metaplace to let them learn their colours, shapes, numbers, letters; their keyboard and mouse skills.

I was thinking, as I was out and about today, about Ultima Online (as I often do), and how I really need to get back to my Metaplace implementation of it, perhaps tonight after I work on the bitmap fonts and sound player...

And now, after reading the announcement, I think about the four Metaplace clients I've started, now all dead ends. About all of the worlds that I created (I think I hit 102), many of which were mere placeholders for ideas that I had come up with but never got off the ground. I think of the rest of them, most of which got a start with coding, seen only by me, and the handful that might be considered worth visiting: mostly demo worlds of the latest Metaplace feature, and one, Sniper, being the only "done" world, which has had a bizarre resurgence of interest lately, getting Favorited once or twice a week. Please, go give it (another) try in the next two weeks, before it's gone.


I think about the people I've met through Metaplace, whether online acquaintances I know only in Metaplace worlds or those with whom I've got a non-Metaplace relationship, through Twitter or blogs or email. And then there are those that I've met in person through Metaplace - fellow testers Scopique, Chooseareality and Rboehme, and the employees at the time -- some who had since left, some who are leaving as of today, and some who will stay with what Metaplace-the-company will become. I met these people because Metaplace flew me down to meet them, something that said a lot about the personality of the company and the people within it, that they would do this for an unknown alpha-tester and some-time pain in the ass. I got to meet Raph Koster. I'm a fanboi, I admit it.

I wore my Areae shirt two days ago.

I use my Metaplace coffee mug every day at work.

And no matter what Metaplace does in the future, it has made its mark on me; on my programming skills, on my programming ideas, on my programming direction. It has stimulated the game designer and game developer in me, from the sideline dreamer to a nascent latecomer.

Thank you, Metaplace.

Tuesday, December 1, 2009

Client writing

One of the earliest projects I started when I got into the Metaplace alpha, apart from a handful of demo worlds to try out the API, was a client. Why write a client when the Flash one was available? To see if I could, of course. For me, it's not so much the "doing" as the "can I do" when it comes to programming.

If you're a programmer, you should definitely give the Metaplace wiki a peek; it has a lot of information about how a client connects and all of the Metatags that come down the pipe whenever something happens in a Metaplace world. Even if you have no interest in writing your own Metaplace client, seeing how the communication between the client and server happens can lend some insight into what kind of information is available to the client, and when, and thus can help you design scripts more efficiently for yourself or for modules you're looking to publish.

Lua



My first attempt at a client was in Lua. Because I was learning (and loving) Lua at the time, mainly to learn how to script in Metaplace, I was attempting to apply it to all programming problems that arose -- I find this a very good way to learn a language, even if the language is not necessarily suited to the problem domain. This is a perfect example of that, because while some of the data structures of Lua lend themselves well to writing a Metaplace client, the basics such as encryption and secure HTTP are not supported without adding external libraries, and Lua comes with no standard graphics library. This was where the project stalled -- there were a handful of choices out there, C libraries that could be linked into Lua to provide the visual part of the client (which, admittedly, is really the point). But these were large libraries that I had no interest in learning, especially in a manner that requires calling them awkwardly from within Lua.

In the end, my Lua client was just a text-based client, showing me the tags as they came in (and eventually you develop the ability to "read" them), with a simple command-line interface for querying the state of a world, such as the objects within, the users within, and, perhaps most usefully, for calling some built-in scripts that I had written; the rudimentary beginnings of a Metaplace bot. Those who were around in the alpha days might remember the statue in the earliest Central getting dropped on people's heads, or being tomatoed mercilessly by my auTOMATOn script. The only other use for the Lua client -- one that I've actually used quite a bit -- is as a way of introducing many commands to a world at once. This allowed me to generate the instructions for placing tiles and objects offline, into a file, and then upload the whole series of commands in one fell swoop, mimicking the effect that would otherwise require me to painstakingly do so through the editor tools.

Java/Android



My next attempt was to make a client for Google's Android platform. Not that I had (or have) an Android device, but it sure sounded like an interesting way to learn how to develop for that platform AND still do Metaplace stuff. Because Android is essentially a Java platform, I would be in familiar territory, having coded lots of Java over the years, and my initial trial with the networking in Android (using the emulator) showed that the most worrisome part of writing a Metaplace client on Android -- the actual connectivity -- wasn't going to be a problem.

The approach I took, then, was to write a Java client, and then worry about the Android-specifics after the fact. This may not be the most efficient way to write an Android client, but remember, I really just wanted to write "a client", and if I ended up with a Java one first, and later an Android one, that was fine with me. Shortly after I started the project, Akidan, who had been fiddling with Tachevert's short foray into client-writing in .NET, joined the Java project, and he ended up getting it further along than the Lua client got. Unfortunately, Akidan left the Metaplace scene, and while I'm not usually a group-programming kind of guy, it took the steam out of the effort; not having him around as a motivator to continue work made the project stall. This client got so far as to display some rudimentary UI and the world tiles, and in the end could function as a very simple chat client, something that was only possible in the Lua client if you hand-crafted your conversation into Metaplace commands.

It's too bad that the project stopped -- there are a few Metaplace developers with Android devices that I feel I let down by not coming up with something useable for them. Though, unlike the Lua project, this one isn't "dead" so much as "abandoned", and could still be resurrected, if time permits. However...

Javascript



HTML5, and CSS3, and all of these new Web technologies that are coming out have finally given me reason to give this "web programming" a try. I've never had a need to do it before, and although I've read the occasional Javascript book in the past, I had never had a need to use it, so never really "knew" Javascript. As HTML5 and CSS3 are being developed and standardized, however, I'm trying to stay on top of it, figuring that now is as good a time as any to learn them. And what better way to learn than to write something big -- like a Metaplace client!

I have to say that, by far, this was the easiest client to get going, at least to the state that the other two reached. Because (at the time I wrote it) the WebSocket part of HTML5 wasn't supported in any browser, it relies on a Java applet to do the networking, but otherwise, in no time at all, it caught up to, and surpassed, the functionality of the Lua client. Perhaps two or three days? It's a good testament to either Javascript or the Metaplace client model. Or perhaps to the fact that it was my third client attempt.

The best part about this client is that you can actually try it! I will point out that the username/password become part of the URL, so they do get stored in my server logs, but I assure you, I have no interest in them. I don't blame you if that prevents you from giving it a try, but you could also just make another Metaplace account if you really want to see. Now, it's not like it's finished or anything, so you might want to stick to simple worlds, such as the ClientTestWorld, and I think it doesn't work on Safari or Opera (and definitely not IE!), so Firefox and Chrome are your best bets. It draws the background, tiles, and objects too, even animating them! And simple keyboard support also works, so you can walk around in the world a little.

This project only paused because this semester has become way too busy (thus my lack of anything Metaplace at all, and lack of posts here). Family, work, teaching and classes don't leave much time for anything, even fun Javascript-based Metaplace clients. This project is certainly not dead, just on hold, because it's still a great way for me to learn Javascript and some other HTML5 tags. The best part of this one, though, is that there's nothing stopping you from using View Source and just continuing on with it, if you so choose. Hey, if you do, let me know how far you get!

Go



Of course, no matter how busy I am, I find it hard to ignore when Google comes out with something new and interesting, such as their new Go language. Being an old C hacker and a compiler writer in another life, something like this really got my attention, and after a few little projects to warm up with the language (mainly ports of Lua projects that I had done to learn THAT language), I started thinking about what I could do for a large project in Go, to really test it out. You can see where this is going...

And that's where I am today; my current Metaplace time, which is still very slight for another week or so this semester, is taken up with starting my fourth Metaplace client. Go is very young and still under development, which makes things a little awkward (the first day I used the Vector library, they changed it on me). Things such as encryption and secure HTTP are missing from it, as they were in Lua, as is anything but the most basic graphics support, but I feel they have a better chance of appearing, and soon. Even so, connecting to external libraries is a little cleaner with Go than it was with Lua, and in just a few days, I've put together the start of client #4. Not as fast as the Javascript one, mind you -- I did have years of reading Javascript behind me -- but it's getting there, slowly but surely.

Until the next language comes along, I guess.

Monday, August 31, 2009

Patterns in programming

Just managed to sneak in ONE post for August...

It's not that I haven't been playing with Metaplace lately (though not nearly as often as I'd like), but a lot of the stuff I've been doing hasn't been groundbreaking or, really, that interesting. Not in its current stage, anyway.

Ever since I added sounds to my Ultima Online world, I've been drawn back to getting that large project going again, happily ignoring the fact that custom avatars are the bane of my Metaplace existence.

While the footstep sounds were interesting, the most memorable sounds in UO were the spells, so I decided to focus on that subsystem next. As is typical of my programming history, I'm much better at the behind-the-scenes coding than the user interface portion, and since UO spells (more specifically, the spellbooks) can require a bit of UI that I shudder at creating (and I won't even bring up UiXml()... well, except there), I went to work on the back-end portion of spellcasting.


One of the most common things I do in Metaplace, whether it's designing a new module, a new system, or just testing out new functionality, is to write a command-line interface to the code I'm developing. This is useful for testing things that would otherwise require buttons and sliders and textboxes, but don't have them yet, and for quickly trying different values in different situations. This is something I do so often, in fact, that I've got a routine for when I first create a script that sets me up for development. For instance, when I decided I was going to work on spellcasting in UO (specifically, Magery, which is actually a skill...) I created a script called "magery", and right off the bat, wrote the following:


Define Properties()
    magery={}
end

Define Commands()
    MakeCommand("magery", "magery interface", "cmd:sentence")
end

Command magery(cmd)
    local params=string.gmatch(cmd,"%S+")
    local subcommand=params()

end


This sets me up for a few things: it gives me some local storage for anything I create during testing, such as window IDs, sound IDs, state variables, etc. all within the self.magery table. It also gives me a quick way to pop up the command-line and start sending commands to my code, taking advantage of the handy "sentence" type in Metaplace. In case you've not seen it, it lets you type, say


magery cast gate travel


And it'll take everything after the command name and pass it as a single string, spaces and all. This allows for parameters with spaces (such as "gate travel" above), and for sub-commands that have varying or variable numbers of parameters.
The little bit of code at the start of the magery Command helps me tokenize the subcommand and the parameters.
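To see what that tokenizer actually produces, here's the same pattern in plain Lua, no Metaplace API needed, fed the sample command above (the contents of cmd after the command name is stripped):

```lua
-- the gmatch iterator from the magery Command: each call pulls off the next word
local params = string.gmatch("cast gate travel", "%S+")

local subcommand = params()   -- "cast"
local arg1 = params()         -- "gate"
local arg2 = params()         -- "travel"
```

Note that the tokenizer splits "gate travel" into two words; keeping multi-word parameters together is up to the subcommand's own handling.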


After throwing together a little bit of magic in my UO world, I decided that I should really add in skill support (because, as I mentioned, Magery is technically a skill that you use), so I made a script called "skills" and started it with


Define Properties()
    skill={}
end

Define Commands()
    MakeCommand("skill", "skill interface", "cmd:sentence")
end

Command skill(cmd)
    local params=string.gmatch(cmd,"%S+")
    local subcommand=params()
    local skillname=params()

end


Almost the same as before. Then I could quickly add


if subcommand=="use" then
    SendTo(self,"use_"..skillname,0,params(),params(),params(),params(),params())
elseif subcommand=="set" then
    self.skill[skillname]=params()
else
    AlertToUser(self,2000,"Unknown skill command: "..subcommand)
end


Now, admittedly, I kind of lose some of the nicety of the sentence type by coding my skillnames without spaces (AnimalTaming instead of Animal Taming). And the ugly bit with the repeated calls to params(), to pull off each of the extra parameters (if they exist), works because the function returned by string.gmatch() will just continue to return nils once it's done, and sending extra nils through to the Trigger is harmless (provided the Trigger didn't expect anything there, of course). I could spend the time and write code that says, "if the skillname is Animal Taming, then there's just one extra parameter, the target, so I'll only pass one extra params(); but if it's Provocation, then there are two...", but this lets me change the rule in their own individual handler functions, making for very rapid prototyping.
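That nil behaviour is easy to verify in plain Lua: once the iterator from string.gmatch() runs out of words, every further call just returns nil, so "too many" params() calls cost nothing:

```lua
-- once the words run out, the gmatch iterator keeps returning nil,
-- so the extra params() calls in the SendTo() above are harmless
local params = string.gmatch("use AnimalTaming", "%S+")
local subcommand = params()   -- "use"
local skillname  = params()   -- "AnimalTaming"

-- five more calls, as in the "use" branch: all nil, no error raised
local a, b, c, d, e = params(), params(), params(), params(), params()
```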


The other day I got distracted from UO development by a conversation with LunarRaid, which made me want to try out implementing drag-and-drop functionality using the UiEvent() system. Replace the "magery" with "dndui" and you have my starting block of code, ready to change settings, pop up windows, or whatever else I might want to change while testing. The nice thing about this setup is that many UI elements call Commands when pressed or used, and thus the same single interface can be used by them. Also, I tend to have all of the conditional code in the Command just fire Triggers to self so other code I write can easily duplicate, with a SendTo(), the functionality that I've been testing from the command-line.

For just testing concepts, or to avoid fiddling with buttons, this is a great way to just get coding. If I didn't have a nice way of quickly prototyping the code I come up with, I'd probably still be fiddling with a spellbook interface and have nothing but a few sprites to show for it.

Sunday, July 26, 2009

Sounds

After a year-and-a-half of taking part in Metaplace during alpha and beta, I've finally added sounds to one of my worlds (I don't count the crwth_effects world, as the sounds are only played when the "player" tries them out, and not as part of the world itself.) I'd say the biggest reason is that I rarely finish a world, and something like sound is a finishing touch.

I'm not really sure what made me do it, since the world I've added them to, my Ultima Online workspace world, is nowhere near done (being a lab world, it'll never be done as such). For some reason this past week, I got to thinking about the music of UO, or the sounds of UO... I'm not sure, but I decided to figure out how to pull all of the UO sounds from the datafiles. Once I had those, I browsed through them, playing them to bring back memories (which is silly, since my UO accounts only just expired) and to see the large portion of them that never appeared in the game. Some of the sounds include the player's footsteps on a variety of terrain, and I was thus compelled to add them into the UO world.

The funny thing is that right now, my UO world doesn't have the UO avatars for the players, but the Metaplace ones instead -- the only non-UO thing in the world gets the only UO sounds! For each terrain type (grass, pavement, sand, etc.) there are two sound files, allowing you to alternate the foot falls per-foot; they could have taken the approach of a sound file that played a two-step sound, but it wouldn't match up if you took a single step with your avatar.

This poses a challenge right off the bat, since there's no way in Metaplace to say "play these two sounds in a loop"; all sounds, whether to an individual or the whole Place, or whether always-on or based on distance, are on their own. Because we want to play a different sound right after one finishes -- to give the clip-clop walking effect -- we have one of two tools available to us: we can either ask to be informed when a sound ends (and then act to start up the next one), or we can try to time playing the next sound based on the length of the current one.

Trigger sound_done

As part of the PlaySound*() series of API functions (but not exhaustively documented), you may provide an object pointer as a callback object, telling the system that once this sound finishes, this callback object will have a "sound_done" Trigger fire at it. This lets you play the next sound at the right time -- in theory. Of course, being an online world, you have the latency of the internet to contend with, which means that once the sound finishes, your client tells the server (after X milliseconds), the server acts upon this information by firing the sound_done trigger at the specified object, the object decides to play a new sound, and the client is told, again after X milliseconds. All told, these milliseconds can add up, between the latency and the script logic to determine that another sound should be played. This delay is probably fine for playing background music (which I also added to the UO world), because a half-second delay between soundtracks is fine. A half-second between footsteps, when you're stepping every quarter-second, is not.

Here are the basics of this method:

Trigger sound_done()
    local movesounds={
        {"feetpvmta","0:19"},
        {"feetpvmtb","0:18"}
    }

    self.move_number=self.move_number+1
    if self.move_number>#movesounds then
        self.move_number=1
    end
    local moverec=movesounds[self.move_number]
    self.move_sound=PlaySoundTo(self,moverec[2],255,0,self)
end


SendTo()

While we can't completely eliminate the latency problem, we can at least reduce it by not waiting to react to a message from the client, stating that the current sound has finished playing. Instead, after starting the current sound, we'll fire a delayed Trigger to our sound-playing code to just start playing when it makes sense to do so. This means that we need to know how long our sounds are to know how long to wait (for playing a series of background tracks), or we have to know that a given delay will be plenty of time for each of the sounds to play and finish (such as short footstep sounds).

Trigger movesound()
    local movesounds={
        {"feetpvmta","0:19"},
        {"feetpvmtb","0:18"}
    }

    self.move_number=self.move_number+1
    if self.move_number>#movesounds then
        self.move_number=1
    end
    local moverec=movesounds[self.move_number]
    self.move_sound=PlaySoundTo(self,moverec[2],255,0)
    self.soundtrigger=SendTo(self,"movesound",250)
end

I have to admit, I was prepared to trial-and-error the delay value to see what looked good -- in theory, it should be based on the framerate of your animation and the number of frames between each foot landing on the ground -- but I really lucked out with my first attempt at 250ms. Go take a look at the world and see if you agree.

That's right, I've gone with this second approach, for the reasons specified -- I've reduced the effect of latency on my sound loop by not relying on the client to report when the sounds finish, and I'm able to do this because the clip-clop of walking is very regular. And what about my background music? Well, in the end I used the delayed-Trigger method there, too:

Trigger playmusic()
    local music={
        {"britain1","http://pages.cpsc.ucalgary.ca/~crwth/metaplace/uo/music/Britain1.mp3",39},
        {"Britainpos","http://pages.cpsc.ucalgary.ca/~crwth/metaplace/uo/music/Britainpos.mp3",53},
        {"Stones1","http://pages.cpsc.ucalgary.ca/~crwth/metaplace/uo/music/Stones1.mp3",135},
        {"Walking","http://pages.cpsc.ucalgary.ca/~crwth/metaplace/uo/music/Walking.mp3",343},
        {"Medieval","http://pages.cpsc.ucalgary.ca/~crwth/metaplace/uo/music/Medieval.mp3",150},
        {"Festival","http://pages.cpsc.ucalgary.ca/~crwth/metaplace/uo/music/Festival.mp3",128},
    }
    self.music_number=self.music_number+1
    if self.music_number>#music then
        self.music_number=1
    end
    local musicrec=music[self.music_number]
    self.background_music=PlaySoundRefTo(self,musicrec[2],100,0,self)
    SendTo(self,"playmusic",(musicrec[3]+1)*1000)
end


Note the list of tracks also contains the length, needed because they're of variable length, unlike the footsteps. Why use this method instead of the sound_done() Trigger? I think the main reason is because, when trying to handle both the music and the sound effects, I had code that looked like this:

Trigger sound_done(userid,handleid)
    if handleid==self.background_music then
        SendTo(self,"playmusic",1000)
    elseif handleid==self.move_sound then
        SendTo(self,"movesound",0)
    end
end

I can't imagine all the conditionals when I've got dozens of sound effects, such as spell effects and combat sounds. Sure, I could separate them into different scripts each with their own sound_done() Trigger, but the conditional code would still be the same. One way I see this being a bit more useful is being able to supply not only the callback object, but the callback Trigger as well.
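In the meantime, one way to tame those conditionals is a dispatch table keyed on the sound handle. This is only a sketch: the callbacks table is my own invention, and the local SendTo here is a stand-in stub for the real API call, just so the idea can run on its own:

```lua
-- sketch: map each sound handle to the trigger it should fire,
-- replacing the if/elseif chain in sound_done()
local callbacks = {}   -- sound handle -> trigger name
local fired = {}

local function SendTo(obj, trigger, delay)   -- stub standing in for the real SendTo()
    fired[#fired+1] = trigger
end

local function sound_done(userid, handleid)
    local trigger = callbacks[handleid]
    if trigger then SendTo(nil, trigger, 0) end
end

-- register handles as the PlaySound*() calls return them
-- (101 and 102 are made-up handle values for illustration)
callbacks[102] = "movesound"
callbacks[101] = "playmusic"

sound_done(1, 102)   -- fires "movesound" with no per-sound conditionals
sound_done(1, 101)   -- fires "playmusic"
```

With dozens of sound effects, adding a new one becomes a single table entry instead of another elseif branch.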

Dynamic footsteps

The other thing I decided to do, because of the variety of footstep sounds available, was to change the sound based on the terrain the player walked upon.

In general, having different events based on the tiletype that a player stands on isn't "easy", because tiles themselves don't support Triggers or events, so you can't just attach a script to a grass tile that sets the player's footstep sound to "feetgrssa" and "feetgrssb". For many cases of tile-based effects, you'd have to have some sort of tick-based Trigger, checking if the tile has changed and then changing the effect in question.

The handy thing about the footsteps firing every 250ms, however, is that this provides its own "tick", and thus every time we're to start a new sound, we can decide if the sound should change.

Trigger movesound()
    if self.moving==0 then return end

    local tile=GetTileAt(self.x,self.y)
    local tilename=stylesheet.places["0:"..GetPlace().placeId].tiles[tile].name
    local movesounds={
        stone={
            {"feetpvmta","0:19"},
            {"feetpvmtb","0:18"}
        },
        grass={
            {"feetgrssa","0:21"},
            {"feetgrssb","0:20"}
        },
        dirt={
            {"feetgrvla","0:25"},
            {"feetgrvlb","0:22"}
        },
        sand={
            {"feetsanda","0:23"},
            {"feetsandb","0:24"}
        },
        water={
            {"feet15a","0:27"},
            {"feet15b","0:28"},
            {"feet15c","0:29"},
            {"feet15d","0:26"},
        }
    }
    local tilesounds=movesounds[tilename]
    if tilesounds then
        self.move_number=self.move_number+1
        if self.move_number>#tilesounds then
            self.move_number=1
        end
        local moverec=tilesounds[self.move_number]
        self.move_sound=PlaySoundTo(self,moverec[2],255,0,self)
    end
    self.soundtrigger=SendTo(self,"movesound",250)
end

This assumes that your tiles are nicely named, as mine are (they're computationally-generated from the Ultima Online datafiles, which conveniently provide names), though this is all in the hands of the world builder anyway. The above is the exact function I use for my footsteps.

Of course, this solution wouldn't work for sounds that have a long "tick" -- Ultima Online played different background music depending on your region, so you'd get some spookier music in the jungle or a dungeon, compared to a forest or city. You wouldn't want to wait three minutes for your current cycle of "city" themed music to end before realizing that you're in a dungeon and should be a little more on edge.

When to walk

The only thing missing is knowing when to start and stop the walking sound loop at all. The last snippet gave a hint on how the stopping is done -- using a property called self.moving. It's used to stop the sound-playing loop, above, but how is it set? By patching into the path_begin()/path_not_found()/path_end() series of Triggers that are part of the pathfinding system.

Trigger path_begin()
    StopSound(self,self.move_sound)
    if self.soundtrigger~=0 then CancelSend(self,self.soundtrigger) end
    self.moving=1
    SendTo(self,"movesound",0)
end

Trigger path_end()
    self.moving=0
    StopSound(self,self.move_sound)
end

Trigger path_not_found()
    self.moving=0
end

The extra code in the path_begin() Trigger is necessary because of the delayed-Trigger method being used; if someone was walking and then clicked elsewhere, they'd start a new pathfinding session but wouldn't have stopped the last delayed-Trigger, causing two (or three, or four) sound loops to play until the player finally stopped moving. This was an interesting effect, but definitely not desired!

Some readers might be thinking that such a problem could be avoided if I had just used SendToRepeat() instead of a recursive delayed Trigger. Then I could just let the SendToRepeat() continue as it would, but I could really do the same thing with the recursive Trigger. The reason I didn't use the SendToRepeat() is because of the possibility of a variable delay for the sounds; while the footsteps are a regular 250ms apart, I might get sounds that aren't so regular, and would thus want to be able to vary the delay, much as done in the music loop. For this use, I could go either way, but I like to code generally, even if Dorian and Sean would rather I didn't.
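The difference is easy to see in miniature: with the recursive form, the next delay can be computed from the record just played, which a fixed SendToRepeat() interval can't do. A plain-Lua sketch (track lengths borrowed from the music table above, names and padding as in the playmusic() loop):

```lua
-- each "tick" computes its own delay from the track just played,
-- mirroring the (length + 1) * 1000 padding in playmusic()
local tracks = {
    {"britain1", 39},     -- name, length in seconds
    {"Britainpos", 53},
}
local n = 0
local function next_delay()
    n = n % #tracks + 1   -- advance and wrap, like self.music_number
    return (tracks[n][2] + 1) * 1000
end
```

A SendToRepeat() would have to be set to one fixed interval up front; this form re-decides the wait on every pass.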


I'm happy with how the sound turned out, considering it was really a whim. It makes me want to get more sounds into the world, even without the mechanics that really require them (the sounds of combat, the sounds of crafting, monsters roaring...). I also realized, as I was writing this post, that the footstep sounds are solely for the ears of the walker -- this is a bit silly, since we have PlaySoundRefRadius() to emanate sound from an object and to get the feature of distance affecting the volume built in. Perhaps I'll go take a look at that now.

Friday, July 10, 2009

UiXML

A year ago this week, the Metaplace folks flew four alpha testers, myself included, down to San Diego to meet the team and get a little insight into what was coming. It was a fantastic experience, very much appreciated, not only to meet the team (and the other three testers -- Chooseareality, Rboehme, and Scopique) but to see where they work (though they've moved since then) and how they worked (in which we got to participate). We also got a sneak peek at a new feature that was released shortly after that, which was the UiXml() system.

The idea of this system is that instead of calling lots of UI API functions, you can create an XML segment, in a string, that will be passed to a single UiXml() function and will generate all of your UI elements for you. So, instead of typing

local outerrect=UiRect(0, "outer rect", 10, 10, 220, 140)
UiColor(outerrect, 125, 2, 2, 0.9)
local innerrect=UiRect(outerrect, "inner rect", 1, 1, 218, 138)
UiColor(innerrect, 50, 130, 130, 0.8)
local label=UiLabel(innerrect, "name", 1, 1, "label text")
UiColor(label, 255, 234, 0, 1)
UiAttachUser(self,outerrect)

you can type

local rectxml=[[
<rect name="outer rect" x="10" y="10" width="220" height="140" red="125" green="2" blue="2" alpha="0.9">
    <rect name="inner rect" x="1" y="1" width="218" height="138" red="50" green="130" blue="130" alpha="0.8">
        <label name="name" x="1" y="1" text="label text" red="255" green="234" blue="0" alpha="1"/>
    </rect>
</rect>
]]
UiAttachUser(self,UiXml(rectxml))

What does this gain you? Quite a few things, actually. As you can see, you can set the colour of a UI element at the same time as defining it, instead of as a separate command. Not really a biggie. But the above rectxml string doesn't have to be created all in one go, and THAT makes things powerful.

For instance, you could have a loop that builds up the string, based on ... well, based on whatever: the number of entries in a table, whether a checkbox is or isn't checked, or the user's name. But, to be honest, you could do the same thing with if-, while- or for-statements and the standard Ui*() functions.
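As a tiny illustration of the loop-building approach -- the item list and y-offsets here are made up; only the <rect>/<label> element names come from the example above:

```lua
-- sketch: assembling a UiXml string from a table in a loop
local items = { "sword", "shield", "potion" }   -- hypothetical inventory
local parts = { '<rect name="inventory" x="10" y="10" width="220" height="140">' }
for i, item in ipairs(items) do
    parts[#parts+1] = string.format(
        '<label name="item%d" x="1" y="%d" text="%s"/>', i, i * 20, item)
end
parts[#parts+1] = '</rect>'
local rectxml = table.concat(parts, "\n")
-- UiAttachUser(self, UiXml(rectxml))   -- the in-world call, omitted here
```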

You could have objects which themselves know how to render the pertinent UI elements, and each might just have a property, "myui", which you can read at any time to insert into a block of the XML. But, of course, you could have a function or Trigger on an object which calls the related Ui*() functions.

Okay, you could call an external web service, which would supply the UI XML to render whateveritis that that web service wants you to render -- instead of having to fetch some text or JSON data or XML and then process it in your world, wouldn't it be nice if that service knew how to "talk UiXml()", and could give you a window, with buttons, and sliders, and textboxes, all laying out the data? Yes, even THIS could be done with Ui*() functions, but we're seeing some usefulness...

XML is processable; with an appropriate XML/XSLT library, or hell, even some interesting use of string.gsub(), I can change all of my UiRect()s to UiOval()s; decrease all of my red colours by 10; or widen all of my elements by 10%. Yes, some of these could be variables that get modified, and others could be done with if-chains or lookup tables, but I still say processing a string is easier...
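A sketch of that kind of post-processing with string.gsub(), using the tag and attribute names from the earlier examples:

```lua
-- Sketch: transform a UI XML string before handing it to UiXml().
local xml = [[<rect name="outer rect" x="10" y="10" width="220" height="140" red="125" green="2" blue="2" alpha="0.9"/>]]

-- Turn every rect into an oval...
xml = xml:gsub("<rect", "<oval"):gsub("</rect>", "</oval>")

-- ...and decrease every red value by 10, clamping at zero.
xml = xml:gsub('red="(%d+)"', function(n)
    return string.format('red="%d"', math.max(0, tonumber(n) - 10))
end)

UiXml(xml)
```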


Still not convinced?

UiXml() supports "layouts": these are basically containers for arranging UI elements (similar to Java's layout managers, if you know those). From the wiki:

<layout style="grid" grid_x="2" grid_y="1" padding_x="1" padding_y="2">
    <image name="itemimage" width="32" height="32" spriteId="0:2"></image>
    <rect width="68">
        <layout style="grid" grid_x="1" grid_y="2" padding_x="0" padding_y="0">
            <label name="itemname" text="Name" red="0" green="0" blue="0" ></label>
            <label name="itemqty" text="Qty" red="0" green="0" blue="0"></label>
        </layout>
    </rect>
</layout>

See the grid_x and grid_y? These set up the organization of the elements as they are added. Layouts can even nest, so inside one of the grid cells above is another layout. Oh, and there's also a UiLayout() function (I bet I'm the only one who has ever used it).

Fine. How about "components"? Again, from the wiki:

component = [[
<ui xmlns:mp='http://www.metaplace.com/schema/ui'>
    <define_component name="data_field">
        <rect name="back" width="100" height="20" red="131" green="131" blue="131" expand="true">
            <layout style="grid" grid_x="2" grid_y="1" padding_x="1" padding_y="1">
                <label name="data_name" text="name:" width="100" />
                <text_field name="data_value" command=" " text="value" red="0" green="0" blue="0"/>
            </layout>
        </rect>
    </define_component>
</ui>
]]

-- Add this XML to the UiXml stack
UiXml(component)


Using A Component:
myUI = [[
<ui xmlns:mp='http://www.metaplace.com/schema/ui'>
    <window name="item_detail" x="10" y="10" width="400" height="140" style="20">
        <layout style="grid" grid_x="2" grid_y="2" padding_x="1" padding_y="5">
            <component type="data_field" name="df1" height="20" width="200"/>
            <component type="data_field" name="df2" height="20" width="200"/>
            <component type="data_field" name="df3" height="20" width="200"/>
            <component type="data_field" name="df4" height="20" width="200"/>
        </layout>
    </window>
</ui>
]]
UiXml(myUI)

Basically, components are "templates" for UI layout, allowing you to group multiple UI items into a single conceptual block and then reuse them as often as you like. This example provides a rectangle with a label and a text field as a nice convenient unit that can be placed anywhere with a single line. Note how the height and width can be supplied afterwards, leaving some variability in the component's configuration.

In fact, you could go even further with the <override/> tag:

<component type="data_field" height="20">
    <override target="data_name" text="Strength:"/>
    <override target="data_value" text="100"/>
</component>

Fantastic! Customized label/field pairs, at your fingertips. Sure, I could write a function that provided this functionality -- one that took any number of parameters, used them to build a specific component, and returned a handle... but which do you think is easier to use?

Okay, if you're still not convinced about the power of UiXml() over regular UI function calls, Data Binding will win you over. Data binding allows you to specify, instead of a constant, an alternate source for a value. So instead of specifying a static value of "100" for an element's colour,

<rect name="red rect" x="0" y="0" width="100" height="100" red="100" blue="0" green="0"/>

we can reference a constant from our script:

<rect name="red rect" x="0" y="0" width="100" height="100" red="{RED_VALUE}" blue="0" green="0"/>

Or even something more advanced:

<rect name="red rect" x="0" y="0" width="100" height="100" red="{stylesheet.places["0:0"].red_value}" blue="0" green="0"/>

Anything you can imagine that you can code could be put into that { } definition. (This has the unfortunate distinction, however, of being one of the few ways to hide malicious code -- a future blog post.) I hope you can get an idea of how powerful that is. Sure, whatever code you put in there could also be run separately and then passed to a Ui*() function as a parameter, but this string of code itself can be generated dynamically -- which is why it can also be dangerous.

There are also other types of data binding. Use "% %" for values passed in during the UiXml() call, such as

xml = [[
<ui xmlns:mp='http://www.metaplace.com/schema/ui' scriptId='0:8'>
    <component type="data" height="20">
        <override target="data_name" text="Strength:"/>
        <override target="data_value" text="%foo%"/>
    </component>
</ui>
]]

values = {foo = 'a value here'}
winId = UiXml(xml, parentWindowId, values)

And even better, use "# #" to pull data from object properties:

xml = [[
<ui xmlns:mp='http://www.metaplace.com/schema/ui'>
    <component type="data" height="20">
        <override target="data_name" text="Health:"/>
        <override target="data_value" text="#health#"/>
    </component>
    <component type="data" height="20">
        <override target="data_name" text="Mana:"/>
        <override target="data_value" text="#mana#"/>
    </component>
</ui>
]]

winId = UiXml(xml)
UiBindObject(winId, self)

The only thing that would make the above better is if the #values# automatically updated any time the properties changed; as it stands, you have to call UiBindObject() every so often to have the changes reflected.


So, have I convinced you that UiXml() is THE way to do UI in Metaplace? I hope not. That's right, I hope not, because... they just removed it.


Apparently I was the only person using it, and instead of letting me go on doing so, it got removed. I agree, it had some problems -- some of the later UI elements, such as sliders, weren't supported, and you couldn't set the text on certain items from within the XML -- but the parts that did work, worked well. Even more stinging is the fact that a new system, UI Styles, was introduced as a replacement for UI XML. It's not the same thing at all (though it's an interesting system in its own right, one that I may blog about).

So why did I blog about it at all? Well, I have a Google Doc with a long list of future blog posts, and this was on it from before they stole my baby from me. And, to be honest, before it was taken away I had already started working on a replacement version which DOES support sliders and text initialization (but which can't do the cool things such as the data binding), so those who might want this functionality can have it once I publish my version. And, perhaps, just perhaps, all of the bitching I do about this (and other things) will get others to bitch (about this, and other things), and my beloved UiXml() might be returned to me.

Hey, stop laughing. I can dream.

Wednesday, July 8, 2009

Embedding

A week or so ago, Metaplace released the ability to embed your Metaplace world into pretty much any webpage. As long as your page can support the IFRAME tag, you're probably set; there has been mention of various modules for Wordpress and other blogs, and I now regularly sit in chat from an embedded version of the PlainOldChat world inside an iGoogle gadget. You can also see the PlainOldChat world embedded at the bottom of this page (I'm too lazy to figure out how to change the blog's template to make it fit up near the top).

The main purpose of this embed, I suppose, is to allow people to share their worlds in a different environment: instead of having people "go" to Metaplace to see your world, they can find it right at your blog, or your company's website, or on your guild's page. This is a nice way to get people with common interests together in a "live" setting, giving a virtual environment that's a little more interactive than your flat webpage or forum.

Not long after the embed was introduced, there was talk about using embeds as banner ads. I'm surprised we haven't seen this yet, actually. Of course, it would be nice to strip the little bit of non-world stuff from the embed, like the Help/Logout links, so that JUST the world is shown. Also, there's no automatic anonymous or guest user support as yet, so only people who choose to create an account, or to log in, will be subjected to the advertising.

Adding support for anonymous or guest access would also allow for mini-games to be added to a web page; nothing as elaborate as a full virtual world, but just a casual game of slot machines, or a shoot-em-up, or sudoku, or a little RPG.

But the one thing I think will make embedding fun, cool, and powerful is that the surrounding webpage can communicate with the embedded world, by using Metaplace's design of every world being its own web server. In this way, the encompassing website, using AJAX or something similar, could display information about the world or about the user, right in the page instead of in the embedded view. You could have your health bar outside of the play area, the high score list, the help commands, your character's inventory...

Why not just put these things inside the Metaplace world, you ask? Because they take up screenspace. But don't they still take up screenspace, just in the webpage portion, you ask? Yes, but: one of the great things about the embed is that you can finally force a size for your world view. Back in alpha, we started off with a 640x480 view and could expand to fullscreen, but now it's the other way around, and there's no way to make someone leave the fullscreen view. Some world and game ideas can rely on the player only seeing a certain amount of the world, and while we do have a little bit of distance culling in Metaplace, it's done as a radial distance instead of a square, and it doesn't affect tiles. Being able to restrict the size of the world view means that every player is on even footing, and no one benefits from having a 30" widescreen monitor over those of us with just a 20".

Displaying information from the world is only the beginning of what can be done with the embeds and the web triggers, but I'll save other ideas for another post. For now, I'll enjoy being able to chat in Metaplace alongside my Twitter feed alongside my email, all in one browser tab, and will save the interesting embed tricks for another day (one in which I've brushed up on some JavaScript).

Monday, June 29, 2009

OutputToUser()

Back from vacation, and it looks like our last server release has lots of goodies in it. One of the best ones, in my opinion, is "Added support for per-user camera control from script." It's the "per-user" part that makes me happy. I have mentioned before the loss of the OutputToUser() function, which was the ultimate way to do per-user effects, so anything that adds them back is welcome indeed.

MetaMarkup

The Metaplace servers -- specifically, a Metaplace world -- communicate with a user's client by way of MetaMarkup tags, also occasionally called "game markup language". These are plain-text messages such as these samples pulled from my time in the PlainOldChat world:

[O_HERE]|10013|0:308|5|4|0|0|Crwth|0:1|0| |0
[P_ZOOM]|1.000000
[W_SPRITE]|24080:135|0.375|0.140625|255|255|255|http://assets.metaplace.com/worlds/0/24/24079/assets/images/dwnbtnpress.png|dwnbtnpresspng_|2|0|.|4|0|0
[S_CONFIRM]|Sprite data fetched successfully.
[UI_CAPABILITY]|2165|drag
[INV_GOLD]|241|13829|fc239cc9bf4ef7e148aae958c258bbb4|25|284503

The first part, in [brackets], defines the type of the tag, and the rest of the data, separated by |pipes|, makes up the parameters of the tag. If you're curious, you can visit any Metaplace world in which you're an administrator, go to Advanced|Debug and click on the "log" tab; perhaps uncheck the "commands" box as well. Everything that the client needs to know -- appearance/disappearance of objects, movement of objects, UI popping in and out -- comes through here.

As world-builders, we have control over most of what comes through by using the API in scripts attached to objects, or by just building our world with the tools. Thus, calling CreateObject() will make an [O_HERE] tag appear (to most people -- see below), and anything UI-based will give you one or more [UI_...] type tags.

OutputToUser() let you "hand-craft" these tags, such as

OutputToUser(self, "[P_ZOOM]|" .. self.myzoom)

or

for _, user in ipairs(GetUsersInPlace()) do
    if user.cansee then
        OutputToUser(user, "[O_HERE]|" .. getHereParams(self))
    end
end

Why is this useful? Right now, the Metaplace API and system is set up mainly to support the idea of a shared-world view. If you go to Metaplace Central, everyone gets to see the same tiles on the ground, the same stationary objects, and the same look for everyone they encounter. But what if this isn't what you want? Two types of games that easily come to mind, where players should have different views, are RPGs (Role-Playing Games) and RTSes (Real-Time Strategy games). Both of these can have requirements that certain players have different or extra knowledge about the game world. With OutputToUser(), you could, with some work, code this up any way you wanted. Now, you're at the mercy of what the API permits.

UI

So what DOES the API permit, on a per-user basis? Well from the beginning, UI has always been able to be done per-user, which I've mentioned previously. This makes sense, as support for things such as pop-up dialogs is important pretty much anywhere, and just because I'm being asked "are you sure?" doesn't mean that everyone else should also see that message. So from day one (at least, my day one in beta) we've had per-user UI, so there was no need to use OutputToUser() for it. Right?

Perhaps, but I can actually think of reasons you might want to hand-craft the [UI_...] tags: sometimes it's easier to have premade strings with drop-in values (using string formatting), or to send a variable number of UI commands based on computations and iteration through tables, instead of a bunch of conditional code. Fellow tester LunarRaid, as I recall, was doing something fancy with OutputToUser() and changing the art of UI elements.

Place Settings

There are a bunch of MetaMarkup tags that tell the user's client about the Place they're in, such as the location of the camera, the zoom level, and the type of View (top-down, isometric, etc.). Some of these, such as the View, probably make sense as shared, universal settings for all users (though I did hand-craft [P_VIEW] tags in a world to allow visitors to change the View -- for themselves alone -- to see how certain code behaved in different Views). Others, though, might have a legitimate need to differ between users.

I mentioned in a previous post that certain calculations depended on knowing the zoom level of a Place, and thus it was strongly suggested that the zoom be locked, preventing the user from scrolling with the mouse wheel. But if you could set the zoom level per user, you could not only override any mouse-wheel zooming by forcing the zoom every second, half-second or quarter-second, but you could also provide a little zoom bar with which the player can legitimately change their zoom level in a way that zoom-dependent code will know about (zooming with the mouse wheel is all client-side, so nothing gets sent to the server, and thus scripts can't know that it has happened).

Playing with the camera can also provide some interesting effects: right now, the camera is usually locked to a user's avatar, or locked to a given location in the world. While we've had a MoveCamera() function for a long time, to change that location-based camera position, we never had the flexibility to change the camera behaviour between the two on a per-user basis. (Locking the camera to the user, and hiding the user, allows for interesting effects such as my follow camera experiment, based on a discussion with LunarRaid; he recently requested some new functionality that I would also like, as can be seen by the jerkiness here.)

One concept that can play a big part in RTSes is the "fog of war", where the map is unknown to you until you've been there, and even afterwards, parts of the map that you don't currently see can change - often, RTSes would have these "old" areas greyed out, with the scene as last seen. Of course, "what you see" changes from each player's point-of-view, and this includes the tiles themselves; my world BITN (currently suffering from an art issue) was a testbed for such tricks, where the world was dark except for a few lightsources or the special ability of the user to see in the dark, and thus tiles were revealed based on user-specific data.


This last server release has given us some of the functionality that I used to have relating to Place settings, but there are still some calls that we could use...

Objects

Along with the selective viewing of tiles in BITN was also the idea of selective views of objects. In fantasy RPGs, you might have spells such as Invisibility, Illusion or Polymorph, which change your outward appearance. These on their own are easy enough to implement, by changing the player's sprite. But such games might also have the idea of different "vision" types (perhaps granted by other spells, perhaps innate to the player's character's race), such as See Invisibility, See Illusion, or True Sight (which might see through all of these tricks).

Changing the player sprite is a good solution for Invisibility or Illusion if everyone is affected (although it could even be argued that the player of the invisible or illusionary character might want to see their true form), but as soon as we have different players who should see different things, the sprite-change method isn't suitable. When we were able to handcraft MetaMarkup, we could send different [O_SPRITE] tags to different users, depending on what they should see. This is exactly what BITN did, where I had all of the above spells and visions; if someone had cast illusion on their usual rogue form, they would appear as a fighter to the commonfolk, but anyone who had either See Illusion or True Sight would see the character as the rogue he or she really was.

So seeing an object differently can be useful. How about location? In the fog-of-war idea, the greyed-out "old information" might show that there were some enemy units there, but they have moved on since you last looked; on your screen, those objects should still be there, but the enemy player sees them where they truly are. Right now, you can't do this -- as soon as you move an object, it moves for everyone; there are no selective [O_HERE] tags based on whether or not you should have accurate knowledge of an object's location.

(I should point out that we do have a SetUserVisibility() function, which lets you set how far from a user objects can be seen -- when objects leave this radius, either because they or the user moves, [O_GONE] tags are sent, even though some other user might still see them. It's a limited version of per-user visibility, which does solve some problems, but it only allows "see it or don't", not "see it here or see it there". There's also the gmVisible setting on templates, which sets whether objects of this type can be seen by only administrators of a world or by everyone -- again, it has its uses (my camera marker module uses this functionality), but it isn't a gameplay tool, just a game design tool.)

Look and location are just two examples of object settings that you might want to have different per-user; you can browse through the MetaMarkup page and look at each tag, perhaps coming up with all sorts of interesting game mechanics that could be implemented if only you could hand-craft how these tags were being sent to different users (just looking at the page now, I thought of having "x-ray specs" where you could have another player's avatar's clothing vanish, but only if you have the x-ray specs -- oo la la!)

The Solution

Of course, the easiest "solution" would be to just give back OutputToUser(). From what I can gather, the reason it was removed was to prevent malicious use; the last example MetaMarkup tag I showed above, [INV_GOLD], represents something "meta" from the gameworld you're in, something at the Metaplace level instead of the world level. Perhaps forging these, making people think that they got gold that they shouldn't, is the issue? Regardless of the reason, we've lost it.

So far, we've been getting new API calls to replace some of the most-often-cited functionality of OutputToUser(). The per-user zoom is definitely a good one; we also recently got AddEffectForUser() added to the mix. And I can hope that, as long as I keep pestering/asking for the others, we'll see the API calls appear.

A different solution, though, would be to still allow hand-crafting of tags, but perhaps only certain ones -- allow something like

OutputToUser(user,tag,params)

where I would call

OutputToUser(self,"O_HERE","10013|0:308|5|4|0|0|Crwth|0:1|0| |0")

and the function can decide if "O_HERE" is one I'm allowed to hand-craft. This would prevent me from faking INV_ tags, if that's the concern. Without knowing the full range of concerns regarding the old OutputToUser(), I don't know if this new one would be feasible. It would, however, be a one-stop solution to all of the other useful features lost and now pending API additions. Even if the goal is to have API functions for all of the imaginable per-user needs, something like this might be a nice temporary fix?