Friday, November 11, 2016

Recommendation Engines and the 2016 Presidential Election

This is a topic that's been on my mind for a while, but it's way too long for a Facebook post. Bear with me as I tend to get long-winded, but it's important to understand the digital world we live in.

Most people do not recognize how much of our daily lives is run by algorithms. Sure, obvious things like Google Maps directions make sense, but do you know that most of your cultural and social experience is now driven by algorithms?

The specific algorithm I'm speaking of is the recommendation engine. The goal of this algorithm is to provide a person with a dataset customized to that person's taste. Let's look at an example.

Have you ever used Pandora or Spotify or any other internet music provider? These are the clearest examples of recommendation engines.

When you use Pandora, you're asked to create a new channel by searching for a song or artist. You then have "random" songs play after that, and you're allowed to Thumbs Up or Thumbs Down a given song. The more preferences you provide, the better it is at providing songs you enjoy.

But how does that work? Out of millions of songs, how does Pandora know that if you like Queen you might also like AC/DC?

The Basic Algorithm


For those not familiar with computer science, the term algorithm simply refers to a strategy to solve a problem. The basic tenet of computer science is that any problem can be quantified into some kind of logic-and-math strategy. From displaying the pretty graphics in video games to transmitting Tweets to your followers, it's all just math and strategy.

For Pandora, it works something like this:

You create a new station by naming an artist or song. We'll go with Queen because that's my go-to Pandora station. The system uses that as a starting point.

For simplicity, let's say that all music can be defined by two properties: how rocky it is and how melodic it is. In the case of Pandora, I'm sure they have tons of properties, but we'll simplify it to these two.

We can then score every artist on these two properties on a scale from -10 to 10. Let's give Queen a rock level of 7 and a melodic level of 5. This can be graphed!


You can also graph a ton of other artists:


Two notes: I didn't think too hard about the classification and my musical knowledge ends around 2001.

You can probably see where this is going. Based purely on the measurements of rock and melody, if you like Queen, you're likely to enjoy Red Hot Chili Peppers, Bon Jovi, and Green Day, and you're probably not going to like Snoop Dogg or Celine Dion.
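A minimal sketch of that idea in Python, using made-up scores and plain Euclidean distance (a real engine has far more properties and far fancier math):

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Hypothetical (rock, melody) scores on the -10..10 scale from the graph.
artists = {
    "Queen": (7, 5),
    "Green Day": (6, 4),
    "Bon Jovi": (5, 6),
    "Red Hot Chili Peppers": (8, 3),
    "AC/DC": (9, 1),
    "Snoop Dogg": (-4, -2),
    "Celine Dion": (-6, 8),
}

def recommend(seed, k=3):
    """Return the k artists closest to the seed artist on the graph."""
    origin = artists[seed]
    others = [a for a in artists if a != seed]
    return sorted(others, key=lambda a: dist(origin, artists[a]))[:k]

print(recommend("Queen"))
```

With these made-up numbers, seeding with Queen recommends Green Day, Bon Jovi, and Red Hot Chili Peppers, while Snoop Dogg and Celine Dion never make the cut.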

Recommendation Drift


An interesting thing can happen, though, when you introduce the idea of Thumbs Up and Thumbs Down: your recommendations can drift.

In a recommendation engine, each person has a thumbprint. It's basically your current location on that graph above. Because you seeded the station with Queen, you start at the same location as Queen.


Because it can't play Queen songs all day, the algorithm selects a song by a nearby artist. Let's say it selects Basket Case by Green Day. It's a cool song, so you give it a Thumbs Up. That pushes your location toward Green Day.


The algorithm then selects another song. Perhaps it reaches a bit wider and selects You Shook Me All Night Long by AC/DC. It's a fine song, but you're not in the mood for it, so you give it a Thumbs Down. That pushes you away from AC/DC.


Finally, you're now in range of Sublime and the algorithm selects Santeria. You don't give it a Thumbs Up or a Thumbs Down, but you listen to the whole song. That's considered a soft like, so your thumbprint nudges ever so slightly toward Sublime.

This pattern continues, testing different artists until it finds your stable spot. Eventually, you'll settle in that nice sweet spot somewhere between What I Got and Don't Stop Me Now. Pandora now knows your musical taste, and it continues to feed you those songs forever.
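As a toy sketch of that drift (the coordinates and step sizes here are entirely made up):

```python
# Your "thumbprint" starts at the seed artist and gets nudged by feedback.
queen     = (7.0, 5.0)
green_day = (6.0, 4.0)
acdc      = (9.0, 1.0)
sublime   = (5.0, 3.0)

def nudge(thumbprint, artist, strength):
    """Move the thumbprint toward (positive strength) or away from
    (negative strength) an artist by a fraction of the gap between them."""
    x, y = thumbprint
    ax, ay = artist
    return (x + strength * (ax - x), y + strength * (ay - y))

spot = queen                          # seeded with Queen
spot = nudge(spot, green_day, 0.3)    # Thumbs Up: pull toward Green Day
spot = nudge(spot, acdc, -0.3)        # Thumbs Down: push away from AC/DC
spot = nudge(spot, sublime, 0.05)     # listened through: soft like
print(spot)
```

Each interaction moves the dot a little; after enough songs, it settles into that stable sweet spot.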

We measured only two properties here for ease of graphing, but this goes out to dozens of properties. This is all exposed to you as a user. On Pandora's website, you can choose "Why Was This Track Selected", and you'll see something like this:


These are the next few songs that played:






Apparently my thumbprint is solidly near Major Key Tonality and Melodic Songwriting. That sounds about right.

All this is powered by that little Thumbs Up button.

Social, Cultural, and News Recommendations


What does all this have to do with the 2016 Election? This same algorithm is running in your entire digital life.

Have you ever noticed that you see posts by the same friends frequently, but you don't always see posts from other people? Facebook's recommendation engine is determining which posts you see.

Recognize this guy?


Yup, it's a Thumbs Up. This same button is determining your social and cultural tastes in the exact same way Pandora determines your musical tastes.

Facebook then uses that data to determine which posts to deliver to you. What if we took Facebook posts and graphed them like we did above, but on the scale of Liberal/Conservative and Dramatic/Boring? It'd look something like this:


Note: I didn't think a lot about the location of these dots. I'm not editorializing.

Where do you think your thumbprint dot lands? Facebook will deliver stories to you that it believes you'd prefer to read.

And it's not based entirely on the Like button. Any time you interact with a post -- comment on it, share it, heck, probably even stop to read it -- that registers as an interaction. Each interaction modifies your thumbprint so it can deliver more finely-tuned posts.

This isn't just Facebook -- this is everything. Google News, for example, uses a recommendation engine to give you news. Here's what mine looks like:


Google knows me really well.

This is true for the posts you see on Reddit, the items you see for sale on Amazon, and the ads you see on YouTube.

In essence, everything you see and experience is customized for you. Put another way, nobody else sees or experiences the same thing you do on the internet. Your social and cultural reality is distorted in such a way that it reinforces your personal tastes.

You Won't Believe What I'm About to Say


In general, this is fairly innocuous if not outright desired. What's wrong with hearing your favorite music or getting D&D news delivered to you?

The trouble is that most of these free services we enjoy online have to make their money somehow. In the case of news sites, that money is purely from advertisement, and that advertisement is paid by the impression. The more people who visit the site, the more money they make. That is not a bad business model in itself, but the profit margin is razor thin and competition fierce.

This pressure has led these companies down a path of rapid evolution. Using a combination of user analytics  (i.e. tracking what you do) and psychology, news sites have developed the ultimate weapon: clickbait.

Clickbait is a headline that sounds so outrageous or interesting that it breaks your brain and you have to click on it. Here are some examples:


They're almost always misleading and usually pair missing information with some teasing statement like "You'll never believe #3!". What is number three!? I have to know!

I won't get into the why or the history of clickbait, but suffice it to say that it works. This wasn't some accident, either. These headlines have been intentionally optimized via the aforementioned user analytics and psychological research to get you to click.

Unfortunately, outrageous headlines aren't reserved for celebrity gossip and top ten lists. Legitimate news topics are subject to this as well.

Look at this post from the Gawker network:


You want to click that, don't you? It's an outrageous premise and evokes an emotion, plus the picture strikes a chord. I'll admit that when this showed on my feed, I totally fell victim and clicked it. It was a trite article, as they always are, but it still left an impression on me.

However, the fact is that I clicked it. That gave Gawker their $0.001 and left a mark on my digital thumbprint that I like articles like that. This incentivizes Gawker to make more of these articles and causes the recommendation engines in our lives to deliver more of these articles and fewer traditional news articles. Websites are optimizing toward what we're willing to click on, and recommendation engines are blindly believing that's our preference.

This is the problem. There is a feedback effect here that has been running rampant for the last decade and has caused our political discourse to fall apart. You're literally being delivered only the content that the recommendation engine believes you want to see, and you're giving it Thumbs Up actions based on headlines purposefully engineered to get you to click them. 

Remember when I mentioned recommendation drift earlier? When you combine recommendation engines with clickbait headlines and outrageous content, the content you see becomes more outrageous. The articles you see will literally drift more partisan and more outrageous over time. You clicked it, so the algorithm believes you like it!

In other words, if you wanted Hillary to win, the news you saw was customized based on your preference for Hillary to win, and you were mostly fed the most outrageous form of that content. Same if your preference was Trump.

Think Critically


So how do we fight this? It's simple: think critically.

Stop giving in to your own indulgences and automatically believing the news that matches your own worldview. Do you think Hillary is a criminal? When you see an article saying Hillary's going to jail due to the latest email leak, skip the editorial and go straight to the leaked email itself. Hint: Most of those emails were fairly innocuous.

When you see an image on Twitter of the KKK openly celebrating Trump's victory, confirm that the image is actually what it claims to be before reacting. Hint: That image neither included KKK members nor was it taken after the election.

Check sources. Think critically. Find fault in arguments you agree with. Use Snopes.com. Seek a variety of sources. Ignore question headlines. Ignore clickbait headlines. Every time you visit DailyKos, end it with a visit to Drudge. Better yet, avoid both of those sites. 

Most importantly, be relentlessly suspicious.

It's not easy to be relentlessly suspicious of what you agree with, but this is the world we live in. Recommendation engines determine the social and cultural information that we see in our digital lives. If you're not relentlessly suspicious, you'll fall into the same echo chamber so much of the nation fell into during this last election.

Yes, that echo chamber is comfortable, but it's a false comfort.


Saturday, August 8, 2015

Player Engagement Through Participation

I've been out of town the last week or so and haven't had time to open Unreal, thus the lack of posts. I'll get back to it tomorrow, but I wanted to opine about an article I read last night.

The article is on Polygon and written by Paul Kilduff-Taylor, co-founder of indie studio Mode 7. It's a great read (I read it twice), and I suggest you read it before continuing.

The piece I want to comment on is toward the end where he states that games suffer from conflicting ideologies:

We’re told that games should be narratively profound but also that nobody reads text. We’re told that "philosophically there’s little difference between developing for eSports and all players" and also shown empirically that even basic multiplayer features are irrelevant to a game’s ability to grow a community. We were told endlessly that sports games do badly on Steam; a sports game is currently Number 1 on Steam. Everyone has an opinion about what games should be or shouldn’t be.

This struck me as both a dev and a player because of the truth to it.

Building a game is hard. There's a lot of mud and a lot of unknowns, and like writing a novel or song, you don't know exactly how it'll turn out until it's done.  Also like books and music, consumers double as expert critics.

Civil engineers have it easy (not really, I'm just jealous). They get to know exactly what a bridge looks like before they begin building it. Every cable, every bolt, every beam is specified on paper, rigorously tested, and committed to long before the shovel hits the ground. They know what a bridge is supposed to be, and they don't change their mind on which direction the bridge should span halfway through building it.  Most importantly, drivers do not create subreddits to discuss their opinion of the bridge, and there's no expectation the engineers should modify the bridge post-completion based on driver feedback.

But games are a creative field, and as such we should never strive to build games the way we build infrastructure. You can't make something creative via formula, and I truly believe engagement with the community is critical to building a game for that community.

That first sentence in the quote makes me laugh because it's so true. Read any forum post about a game's story and they'll complain about it being shallow, predictable, or cheesy. Taken at face value, there's a huge craving for deeply storied games.

However, if you watch someone play a game with a lot of story, you see them skip cutscenes and ignore text. Players get annoyed at exposition (rightfully so), and they question if games that are just about a storied experience actually qualify as games at all.  Anyone who's DM'd a game of D&D knows that player agency is above all, but that comes at the expense of a cohesive story since more player agency necessarily means less creative control. (I can talk about story all day long, but I'll save that topic for another discussion.)

Another paradox is the question of single player vs. multiplayer.  Companies invest a ton of development time adding multiplayer support because of the belief that multiplayer extends a game's life and garners a community. Yet a game like Civ V has an entire online ecosystem surrounding its single player experience and largely ignoring its multiplayer.  Does a giant sprawling RPG need a multiplayer PvP mode?  Does a competitive shooter need a campaign mode?  I'd argue no in both cases, but then players feel cheated as if they're buying an incomplete game when these pieces are missing.

These are fundamental questions, but even at the more detailed level it can be hard to know what to focus on.  Tactics games need a solid core game loop, but they also fall apart without a progression system.  Should you build social systems for your game to help with stickiness or should you rely on existing social media so you can keep working on the game?  You have a vision for your game's economy, but players expect something different due to genre standards.

This is made even harder in the world of Early Access and Kickstarter where you're publicly building your game in front of the audience. Everybody on Reddit has an opinion about what's wrong with what you're building, even when it's incomplete. It can be both dizzying and paralyzing to have a thousand arguments pulling you in every direction. We're living in a world of a global design-by-committee, and it's a major shift in the last five years of how we create games.

It may sound like I'm being negative, but I actually think it's a super exciting time to be in this industry.  Player-developer engagement is at an all-time high with things like reddit and live streaming and user-generated content and eSports and revenue sharing and Twitch.  It's true that for many games now, players create more game content than developers do by orders of magnitude.

As laid out in Reality is Broken (another must-read for game devs), humans have an innate desire to connect with others while creating things.  It is this emotion we're tapping into when people show off their Minecraft servers, stream speedruns, or engage in design debates over their favorite game.

Maybe it's not so bad having the drivers build the bridge with the engineers?  Different drivers would participate in different ways, they'd key the engineers in on features that matter to them, help deliver materials so the engineers focus on other tasks, and the crowdsourced effort could help discover a flaw in the bridge they couldn't have found otherwise.  At the end of the whole project, those drivers also feel a sense of engagement and ownership they wouldn't have had otherwise.  

And for this reason and many others, this is why I love making games.  It's exciting and crazy and unknown and creative and fun.

Monday, July 27, 2015

Bit of a Block

Reality hit me in the face today as I tried to implement the system described below: I learned you can't create objects in Blueprints. Whaaa?

Only Actors can be created. Several people said to just make Actors for everything, so they're suggesting I make an Actor for every command issued and for each spell.

That sounds terrible. An actor is an object in the scene. A command is not and should not be an actor. What is the position of a command? How do you render a command? No, I refuse to do that.

Even if I wanted, I still couldn't structure it the way I wanted even with Actors. I'm not going to place an instance of a Fireball in every level just so I can attach it to my character.

All is not lost, though. We have the power of code by our side. The Command and Ability system will just need to be primarily driven in code. I can call into a native function that creates objects, and I can even return references to those objects. I just need to manage lifetime myself, but I'm used to that. I was hoping to do it all in Blueprints, but mostly as a self-imposed challenge. I'll gladly use code if it means the game is better structured.

I'm still not clear how you attach an instance of a Blueprint to an Actor. Let's say I have a Blueprint of a Fireball. It is the script for a Fireball, the visuals, etc. I would like some of my characters to have a Fireball as one of its abilities. What mechanism do I use to point to the Blueprint? Perhaps I make the Fireball an Actor Component and add it that way? I'm pretty sure you can add Actor Components at runtime, which will be important to support gaining abilities via level up.

But then again, you can't even instantiate objects so I probably shouldn't assume anything.

Moving Code Between Blueprints is Hard

The first step I'm taking in my plan outlined below is to move a bunch of my blueprint code out of the level blueprint and into a new PlayerController blueprint class I just made that derives from my custom native one.  A few notes on why this was harder than it should have been:

  • I had to update my DefaultPlayerController class in the Game Mode.  To do this, I look up the blueprint by path.  That path ended up being "/Game/Blueprints/TacticsGamePlayerControllerBlueprint". Apparently "/Game" maps to the root of my project, but I'm not sure how I was supposed to know that.
  • Despite the UI suggesting you can do it, copying and pasting large chunks of Blueprint code is not a good idea.  Nodes that don't have meaning in the destination (i.e. local variables or functions that don't exist there) are simply removed.  This makes it very difficult to know which nodes you need to fix up -- I'd rather they were imported and disabled.
  • Some part of the native code change I made broke the Get node to my "Targeting State" property. It refused to treat it as an Enum in any of the nodes where I used it, calling it a Byte instead.  Deleting them and re-adding them fixed it.  Strikes me as some kind of bug.
  • I'm not sure if this is the case, but I suspect you can only have references to level objects from within the level blueprint.  That's reasonable. I had relied on that a bit, so I'll need to restructure  it such that the level blueprint passes those references into the player controller.  For now I just disabled the 1-3 hotkeys to select characters.
With those obstacles out of the way, I'm now right back to where I started... but with cleaner code! :)

Sunday, July 26, 2015

Some Notes on Game Structure

I'm at a point now where I've got a few nice building blocks, and I can start turning them into more of a system.  I'd like to now move toward building commands, abilities, movement, etc.  I'll need a proper system to manage these pieces, and you'll see why thinking about them holistically is both necessary and hard.

For this post, I'm just going to think out loud, stream-of-consciousness style, to gather some thoughts on how I'm going to structure the command system from here.  Keep in mind my Unreal experience is quite low, so this might be a terrible way to do things.  But alas, we must move forward somehow, and we learn best when we get burned!

First, let's start with some requirements.

  • From the user's point of view, it'll be a standard tactics game style of UI.  Select a character, select an ability, go into targeting mode, select a target, confirm the target, execute the ability.  
  • Different abilities may have slightly different targeting rules and steps.
  • It'd be nice if the system can support an action history because I like seeing that in my tactics games.  
  • The same system should work for players and AI.  
  • I'd like it to be very easy to construct new abilities from simple building blocks.  
  • Finally, I want it to follow good engineering principles: solid code reuse, low coupling, etc.

There are different approaches you can take when designing a system like this.  I frequently use a bottom-up approach: I imagine what it would look like to build a few different things using the system that doesn't exist yet, then I look at the pieces I now require and move up a step.  I find that this approach makes it easier to understand real-world use cases.  A top-down approach has a tendency to miss real-world cases, leaving you in the awkward position of either telling the user they can't do what they want or hacking the system to make it work.

So let's see how we would build a Fireball ability.  This ability can target a location and explode, dealing instant damage to everything in range and applying a Burning debuff to them.  Let's write some pseudocode!

function Fireball::Execute
    damage = CalculateDamage()
    characters = target.GetCharactersNearTargetPoint(explosionRange)
    for each character in characters
        ApplyDamage(character, damage)
        AddEffect(character, Burning)

function Fireball::CalculateDamage()
    return baseDamage + scalingDamage * myCharacterLevel

function Target::GetCharactersNearTargetPoint(range)
    return SearchCharacters(targetPoint, range)

Okay, so I've highlighted some pieces worth talking about.

Target: The ability needs to know where to apply its effect.  A target object seems like a nice way to encapsulate that concept.

MyCharacterLevel: The ability also needs to know some information about who is using the ability.

ExplosionRange: To reduce coupling, we pass in any relevant ability information to the targeting object so it doesn't need to know anything about the ability. 

Note: my first draft had me passing the ability into the target, but that caused a two-way dependency, which is generally icky.  By pushing the data into the target rather than having the target pull the data out of the ability, that makes all flow unidirectional. Conceptually I like this, too -- an ability needs to have a target in order for it to execute, but a target can stand alone and doesn't need an ability to exist.
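That unidirectional flow can be sketched in plain Python (a toy, not Unreal code; all names here are illustrative, and positions are one-dimensional for brevity):

```python
class Target:
    """Stands alone: it receives the data it needs up front and never
    holds a reference back to the ability that created it."""
    def __init__(self, target_point, explosion_range):
        self.target_point = target_point
        self.explosion_range = explosion_range

    def get_characters_near_target_point(self, all_characters):
        return [c for c in all_characters
                if abs(c["pos"] - self.target_point) <= self.explosion_range]

class FireballAbility:
    explosion_range = 3

    def create_target(self, point):
        # The ability pushes its relevant data INTO the target at
        # construction time, keeping the dependency one-way.
        return Target(point, self.explosion_range)

fireball = FireballAbility()
target = fireball.create_target(point=10)
characters = [{"name": "goblin", "pos": 12}, {"name": "orc", "pos": 20}]
print(target.get_characters_near_target_point(characters))
```

Only the goblin (2 units from the target point, within the range of 3) is hit; the orc is safely out of range.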

This suggests a basic class structure already: A character has an ability. An ability has a target (hold onto this thought).  

As of right now, an ability also has a reference to its character. This goes against what I just said about explosionRange, but for some reason this coupling bothers me a bit less.  Probably because conceptually an ability naturally feels like it only exists as a result of a character and also because the Unreal Component structure is such that the Character will already be accessible to any Abilities Component inside it.  I'm going to move on for now, and we can revise this later if I feel gross about it once I understand the rest of the system more.

So now that we know what an ability is, how do we execute it?

The user has a selected character that we keep track of.  There's also some UI that maps each ability, so when I press a button or hotkey, I activate that ability (which is itself a multi-step process).  We'll call this request to use an ability a Command.  Not only does Command work conceptually, but we're literally using the Command design pattern as well. 

Interestingly enough, this article on the Command pattern showed up on my Twitter feed this morning, so I even have a relevant link for those not familiar with the pattern: http://gameprogrammingpatterns.com/command.html.  I know, it's unfashionable to talk about design patterns these days, but I still stand by them when used appropriately, especially when it comes to easy communication through a shared language.  But I digress...

I like the idea that a Command brings together an Ability and a Target and tells them to combine (execute).  This tells me that we'd create a target object, run a state machine until it's valid and locked in, then pass that Target into the Ability when telling it to execute.  That thought I told you to hold onto above: an Ability only has a Target when it's executing (i.e. we pass it in as a parameter).

Let's bring it back to Unreal now.  When the user hits that ability hotkey, we'll need to construct a Command.  The PlayerController can probably handle that construction.  It'll then pass in the ability.

Once the ability is passed in, the command will ask the ability to construct a target object.  We go factory style on the target because, as we'll discover later, the exact subclass of Target depends on the ability.

I'm going to apologize right now for my inconsistent capitalization and just stop trying to pick a style. It makes sense in my head, okay?

Once we have the command, the PlayerController would hold onto it as the current active command. It would also bind some events on that command so it can respond as the state machine updates.  It would also immediately ask the command to begin targeting.

The command then forwards the targeting request to the targeting object, along with any per-command information it needs (such as caster location).  That causes the targeting object to show and update as the user's mouse is moved around. The ability had already fed in information like whether or not it requires line of sight, the types of characters that are valid, etc. The targeting object uses all that information to determine whether or not the target is valid, which it uses when updating the UI.

Once the user clicks, the targeting object fires an event that the target has been acquired.  The command was listening for this event and will advance the state machine to the lock in phase.  Some UI will get updated and show a confirmation frame.  If confirmed, that will then cause the command to execute, which calls the Execute function on its ability, passing in the target.

For external systems, such as UI frames, I imagine the PlayerController sends an event any time the active command changes.  Those systems can then register directly on the active command so they can respond to changes in lock in and targeting state.
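Collapsed into plain Python, the whole flow might look something like this (a toy sketch with made-up names and events, not Unreal code):

```python
class StubAbility:
    """Stands in for a real ability; just records that it executed."""
    def __init__(self):
        self.executed_on = None
    def create_target(self):
        return "target"              # a real ability returns a Target object
    def execute(self, target):
        self.executed_on = target

class Command:
    def __init__(self, ability):
        self.ability = ability
        self.target = ability.create_target()  # factory style
        self.state = "Idle"
        self.listeners = []          # external systems (UI frames, etc.)

    def _fire(self, event):
        for listener in self.listeners:
            listener(event)

    def begin_targeting(self):
        self.state = "Targeting"
        self._fire("targeting_started")

    def on_target_acquired(self):    # raised when the user clicks
        self.state = "LockIn"
        self._fire("awaiting_confirmation")

    def confirm(self):
        self.state = "Executing"
        self.ability.execute(self.target)
        self._fire("executed")

# Walk the happy path: construct, target, acquire, confirm.
events = []
ability = StubAbility()
command = Command(ability)
command.listeners.append(events.append)
command.begin_targeting()
command.on_target_acquired()
command.confirm()
print(events)
```

The listener list is the stand-in for the events the PlayerController and UI frames would bind to.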

Finally, let's talk about the actual class hierarchies.  I know this is getting long, but I need these notes to help gather my own thoughts. :)

We have a Command, and I'm going to look slightly ahead and have a base class for Command and a subclass for CommandCharacterAbility.  This way I can use commands for other things if necessary.  For example, what if Save was a command?  Or how about that most useful of debugging tools -- cheat commands?

Command
    void Activate()
    void Execute()

CommandCharacterAbility extends Command
    void BeginTargeting()
    Ability* ability
    Target* target
   
As we said before, characters have a set of abilities.  They're not references here because characters own the abilities.

Character
   Ability[] abilities

Next the ability itself.  Abilities can all be executed.  They can also create targets.  In this way, AbilityFireball would construct a target of type TargetPointAOE while AbilityHeal would create a TargetCharacter. The Command and other classes could then operate on that target without caring what kind of target it is.  Yay polymorphism.

Right now I have the abilities deriving straight from Ability.  If I discover there are common ability archetypes, we could add another layer there.

Finally, the IsTargetValid function is worthy of special mention.  Imagine we disallow casting Heal on targets at full health.  I'd still like the targeting system to consider that a valid target, but I don't want you to be able to Lock In the ability.  If we simply prevented you from targeting at all, there'd be no opportunity to tell you why you can't cast the spell.  Also, that kind of complex validation where we look at character status doesn't feel right inside the Target class.  I'd rather custom validation like that occur inside the ability.  So there are two styles of validation: whether you can select the target and whether you can choose the selected target.

Ability
    void Execute(Target*)
    Target* CreateTarget()
    bool IsTargetValid(Target*)

AbilityFireball extends Ability
    override void Execute(Target*)
    override Target* CreateTarget() // Type: TargetPointAOE

AbilityMove extends Ability
    override void Execute(Target*)
    override IsTargetValid(Target*)
    override Target* CreateTarget() // Type: TargetPoint

AbilityHeal extends Ability
    override void Execute(Target*)
    override IsTargetValid(Target*)
    override Target* CreateTarget() // Type: TargetCharacter
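As a toy illustration of the two validation styles (select vs. lock in) using the Heal example, in plain Python with illustrative names:

```python
class TargetCharacter:
    def __init__(self, character):
        self.character = character

    def is_valid(self):
        # Selection-level check: any living character is selectable,
        # so the UI can still hover them and explain any restriction.
        return self.character["hp"] > 0

class AbilityHeal:
    def is_target_valid(self, target):
        # Lock-in-level check: can't heal a full-health character.
        c = target.character
        return target.is_valid() and c["hp"] < c["max_hp"]

heal = AbilityHeal()
wounded = TargetCharacter({"hp": 4, "max_hp": 10})
healthy = TargetCharacter({"hp": 10, "max_hp": 10})
print(heal.is_target_valid(wounded), heal.is_target_valid(healthy))
```

The full-health character is still selectable (Target.is_valid is true), but the ability refuses the lock-in, which gives the UI its chance to say why.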

Finally, we have the target.  The base target has a bunch of common properties, such as the origin position, range, etc.  Each subtype would draw a different kind of targeting UI and have different accessors for getting the data you'd need out of it.  For example, TargetCharacter has a GetTargetCharacter() function for obvious reasons, but that function doesn't make sense for TargetPointAOE.  The abilities know what their TargetClass is, so they'll be equipped to downcast during execution.

Target
    bool IsValid()
    void LockIn()
    Vector originPosition
    float range
    bool requiresLOS
    bool friendly

TargetPoint extends Target
    override bool IsValid()
    position GetTargetPoint()

TargetPointAOE extends TargetPoint   
    override bool IsValid()
    Character*[] GetCharacters()
    float aoeRange

TargetCharacter extends Target
    override bool IsValid()
    Character* GetTargetCharacter()
    void SetAllowedTypes(bitmask)

I still have some questions on this architecture.  For example, while targeting a Move, it would be nice to draw a path on the ground.  Who is responsible for that?  The Target object doesn't have enough information on that because different Characters may move differently.  I'll have to think about that some more.

I'm also a bit afraid at the number of parameters that'll go into the target classes.  Maybe it should just have a reference to the ability?  RequiresLOS, Friendly, and Range all seem like properties of an Ability, and duplicating them all down into the target just for some self-imposed encapsulation may cause more problems than it's worth.

So this was a massive post and far larger than I intended, but I think it highlights the amount of consideration needed when designing something as core as a command/ability/targeting system in your game.  Sure, it'd be super easy to just start writing code or blueprints, but decisions I make early will have a massive impact on what is easy and what is hard in the future.  A little forethought now can help make sure I don't design myself into a corner.

I'll also say that I lied a bit at the top of this post -- it wasn't stream of consciousness.  As I thought about it and discovered different considerations, I would go back and make changes.

This process of making a design, asking questions about it, revising, and poking holes in it is very normal.  In fact, writing up this blog post (i.e. explaining it) was very helpful in that process.  The act of explanation, whether on a blog or with a teammate, is probably your #1 tool when tech designing.  It's amazing how many errors you find when you go through the exercise of putting words to it.  I spent most of the day on this post, and I feel like I made great progress on the game despite never opening Unreal.

That said, the threat of overengineering is real, especially on a personal learning project where the scope will be limited.  Perhaps it's time to start building so I can find those holes using real world problems?

I'll be sure to compare various stages of implementation with these notes to see how close we got.

Saturday, July 25, 2015

Why We're Bad at Estimates

So it took me an hour and a half to write the block of code in the previous post. I think it says a lot about software development and why we're so bad at estimation, so let's treat it as a case study.

I knew I wanted a function to convert a world position to a tile position. For the implementation, look below, but essentially: take a vector, round X and Y down to the closest multiple of the tile size, then add half the tile size. Keep Z.
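Outside of Unreal, the formula itself really is tiny. A plain C++ sketch (with a simple stand-in Vector struct rather than Unreal's FVector):

```cpp
#include <cassert>
#include <cmath>

struct Vector { float X, Y, Z; };

// Snap a world position to the center of its containing tile:
// round X and Y down to the nearest multiple of tileSize, then
// add half a tile to land on the center. Z passes through untouched.
Vector WorldToTileCenter(const Vector& world, float tileSize)
{
    Vector out;
    out.X = std::floor(world.X / tileSize) * tileSize + tileSize * 0.5f;
    out.Y = std::floor(world.Y / tileSize) * tileSize + tileSize * 0.5f;
    out.Z = world.Z;
    return out;
}
```

Note that std::floor (rather than integer truncation) is what makes negative coordinates snap correctly: a point at X = -30 with 100-unit tiles lands in the tile centered at -50, not at +50.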

If someone asked how long it would take to write that function, I'd estimate a few minutes. I know the formula already, and I've written functions like it before many times. I would also guess I could do it in a handful of lines of code and the risk of getting it wrong is very low.

Let's take a look at why I'd be so wrong while being completely correct in what I just stated.

The main problem with the estimate is that there's no context to the request, and it turns out context is everything. In this case the context is Unreal, a system I'm just barely starting to learn.

My first problem was I didn't know where to put the code. I knew I wanted a Blueprint function (scope creep!), and it would just be a utility function.

I made a blank class and wrote the implementation in a static function. I then added the macro to make it a Blueprint, and got a compile error.

My class needs to derive from UObject if I want it in a Blueprint. Totally reasonable and makes sense. I have a handful of classes already, so I copied over the UCLASS macro and the body-generation one. No dice. I fiddled a bit and compared it to the others before I gave up and used Unreal to make a new class, knowing it would set everything up correctly.

I didn't know what to derive from, though. UObject wasn't in the global list, but I found Blueprint Function Library. That one sounded right. So: new class created, old one deleted, code moved in. It works, and I have it in a Blueprint now.

Pull it up in the Blueprint, but it requires a target (the instance I'm calling the function on). I really wanted this to be a simple, stateless function and not a manager class, and I know in regular C++ I would make this function static (or not even put it in a class, but I know a class is required here).

Here's where I went wrong: I had also seen that in Blueprints they call static functions pure. I Googled that and found the BlueprintPure specifier. No good. StackOverflow said I had to make the function const. They were wrong. After a few combinations, I went back to my gut, and static was correct.

It all worked, and it did what I wanted. But good programmers relentlessly refactor, right? I moved the tile size from a literal to a constant. Easy. I then thought it'd be convenient if Blueprints had access to the tile size (more feature creep!). It turns out you can't make static member variables Blueprint properties, so I had to make it a function that returns the value... which led to a rabbit hole where I got curious about how Unreal treats inline. Finally, 90 minutes later, I was done.
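Stripped of the Unreal macros, the end state is the classic constant-behind-an-accessor pattern. The class name and tile size below are my illustrative stand-ins, not the actual code:

```cpp
#include <cassert>

// A static member variable can't be exposed to Blueprints as a property,
// so the constant hides behind a static getter function instead.
// (In Unreal, the getter is what gets marked up as a pure Blueprint function.)
class GridUtil
{
public:
    static float GetTileSize() { return TileSize; }

private:
    static constexpr float TileSize = 100.0f; // hypothetical value
};
```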

You can see with all the problems above, plus some tools failure (Unreal crashes a lot) and compile time, why my initial estimate was off by an order of magnitude. This is more typical than atypical for software development, and scales with the size and complexity of the project and team.

So what can we take away from this? First, understand context. What is the project you're working in like? How experienced are your programmers with it? How similar is the task to other completed tasks?

Next, what is the entire request? It wasn't just "make that function"; it was "make that function accessible to Blueprints as a pure function, along with an accessor for the tile size." That alone should have been a red flag, since I didn't really know what a pure Blueprint function was.

Finally, never underestimate testing, research, tools failure, compile times, curiosity distractions, and other sinks. Remember, a typical estimate is highly optimistic for lots of legitimate, human reasons, so we need to actively fight against that.

As a wise programmer once taught me: the problem isn't that we need more accurate estimates, it's that we need larger estimates.

Friday, July 24, 2015

Sometimes Code is Easier Than Blueprints

I've said a few times that I'm really enjoying Blueprints in Unreal.  The visual feedback and structure have a nice flow to them.  However, I just came across a function I wrote that has no business being in a Blueprint compared to code:


Compare that to the C++ version: