Several development videos can be found on my Project Universe Playlist on YouTube.
After having recovered some work I lost on Friday, I opted to put the programming aside for the weekend. By Sunday, however, I was thinking about what I could do that wasn’t programming. While I’m an OK developer, I absolutely suck at the art that I’m going to eventually need for this project. I have a few options, though.

I could scour the Unity Asset Store and buy stuff. I have done that to a degree already, with Space for Unity and a few packages for particle effects, but the downside is that what’s available to me is available to everyone, and that runs the risk of making the project look like it came from a starter kit.

I could hire an artist to make stuff for me. When it comes to hobbyist game development, there’s no shortage of developers, but there’s a black hole where artists should be. It’s easier for artists to make stuff and sell it through the Asset Store, and with so few bodies on call, I’m sure they can pick and choose their work and charge what they need. I don’t know what that would cost, but considering I have a budget of $0, I don’t think I could find anyone in my price range.
I’m not a terrible artist, mind you, but doodling stuff as a pastime doesn’t come anywhere near the skills needed to create 3D art for a video game. But nothing ventured, nothing gained, so I downloaded Blender again (because free) and signed up for the subscription to BlenderCookie.com again (sadly, I had a sub before, but didn’t even use it). I went through their Intro to Blender tutorial (also free), and then engaged a two-part tutorial on making a fuzzy burlap elephant. At the hands of a professional, it wasn’t all that difficult. However, that’s a specific case; nothing is difficult when your hand is being held throughout the proceedings. The real challenges are legion:
Everything is daunting when you approach it with little to no knowledge of it. There’s so much to learn, including the most basic baby steps (i.e. the tedious parts), even before you can think about making anything interesting or useful. Meanwhile, there’s a world of professional output out there that serves both as an inspiration and a trap: work hard and some day you’ll be able to render like the pros. Try jumping the gun to render like that, though, and you’ll get tangled up in frustration, miss a lot of the foundation needed to work independently of tutorials, and end up quitting the entire process.
The last step, then, is to stick with it. I keep up with my development skills because it’s also my day job (not game development, sadly), but why the heck would I need to render a fuzzy elephant on command? It has to be something that I end up doing just for the sake of doing, which I can totally do…but like anything else in adult life, I have to make time to do it.
Filed under “From Left Field”:
UnityVS is a Visual Studio extension that allows Unity developers to live-debug Unity scripts from within VS. Right now, developers can attach Monodevelop to the Unity.exe process, but Monodevelop lags behind VS in features and stability.
I’m very excited about this, as I prefer to use VS when working with Unity. I’ve had to jump back and forth between VS and Monodevelop for general development and debugging, and have then had to grump my way through the weird and sometimes unresponsive breakpoint system in Monodevelop.
UnityVS is going to be released for free from Microsoft once they get the branding and licensing agreements re-worked.
I really like Playmaker for Unity. It can shave off a lot of development time, but I’ve recently found that it can’t reach everywhere.
Due to a confusing mishap, I’ve gone back in time to a previous code base in which I’ve scrapped my scenes except for the initial main menu screen (new game, load game, load base data). That means I’ve had to re-create the initial post-menu scene from scratch, including player motion and the station trigger system.
I wanted to replicate my earlier design, but because of previous refactoring, I found that Playmaker was having a hard time getting references to scripts on disabled objects when the Playmaker FSM was placed on an object in a prefab. I wanted to make the docking trigger a prefab so I could just drop it in, but it wouldn’t see the UIManager script I needed in order to display the docking prompt (the prompt object starts out disabled).
So I decided to put Playmaker aside for the moment and get back to scripting. The triggering isn’t difficult. There are events for Enter, Exit, and Stay (OnTriggerEnter, OnTriggerExit, and OnTriggerStay), and I need all three: Enter to display the prompt, Exit to hide it, and Stay to listen for the “F” key to dock. I put this script on the sphere trigger, set the collider to Is Trigger, hide the mesh, and it’s all done.
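In rough form, the script looks something like this. It’s only a sketch: UIManager and its method names here are stand-ins for whatever the real prompt-handling code ends up being, and I’m assuming the player object carries the “Player” tag.

```csharp
using UnityEngine;

// Sketch of the docking trigger: sits on the sphere collider
// (marked Is Trigger) around the station.
public class DockingTrigger : MonoBehaviour
{
    // Assigned in the Inspector; ShowDockingPrompt/HideDockingPrompt/DockPlayer
    // are placeholder method names, not the actual UIManager API.
    public UIManager uiManager;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            uiManager.ShowDockingPrompt();
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            uiManager.HideDockingPrompt();
    }

    void OnTriggerStay(Collider other)
    {
        // While the player sits inside the sphere, listen for the dock key.
        if (other.CompareTag("Player") && Input.GetKeyDown(KeyCode.F))
            uiManager.DockPlayer();
    }
}
```

One gotcha worth remembering: trigger events only fire if at least one of the two objects involved has a Rigidbody, so the player ship needs one.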
I think this is going to end up being more powerful, because now I can make special cases for NPCs that will be entering the trigger sphere. Should an NPC path take them into a sphere, they are considered to be docking as well. They should “vanish” to show that they’ve docked, and should “reappear” when it’s time for them to leave. I should be able to handle that with the trigger script, whereas with Playmaker that functionality would be more obfuscated and difficult to keep track of.
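The NPC case could bolt onto the same trigger with another tag check. Again, just a sketch: the “NPC” tag and the vanish/reappear mechanism are assumptions about how I’d wire it up.

```csharp
using UnityEngine;

// Hypothetical NPC handling on the same docking trigger: any ship
// tagged "NPC" that wanders into the sphere is treated as docking.
public class NpcDockingTrigger : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("NPC"))
        {
            // "Vanish" the NPC to show it has docked. Calling
            // SetActive(true) later (when its schedule says it should
            // depart) makes it "reappear" outside the station.
            other.gameObject.SetActive(false);
        }
    }
}
```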