Tales from the Ebony Fortress

The state of GUI libraries for game development

Posted on Jan.18, 2013, under programming

I find myself complaining a lot about the GUI systems that are designed for game use, rather than for typical desktop apps. For some reason or other, they all seem to fall short in one or more ways. Part of this is perhaps a quest for optimal performance, but I suspect a lot of it is also that game developers traditionally don’t have much familiarity with other types of application and don’t always appreciate the best way to approach complicated user interfaces.

What I look for in a graphical user interface system is the following:

Separation of logic and presentation

You don’t want these mixed together for reasons that are covered extensively elsewhere, but one good reason is so that an artist can change the layout without requiring any code changes. Unity’s immediate mode GUI completely breaks this idea, because the layout is decided by the flow of the code. If you have 10 buttons in order and you want to swap the order of buttons 2 and 3, that’s a code change. Want to arrange some elements horizontally instead of vertically? That’s a code change too. Unity does try hard to give artists independent control over the GUI theme and styles, but the interface there is so clunky (and buggy) as to make it a joke. Really, you want to be able to specify layout either in code or from data, you want the theme as data too, and you want a narrow interface (eg. IDs and callbacks) between the layout and the logic. Web development’s trio of HTML, Javascript, and CSS is actually close to the ideal here. (Mozilla’s XUL is even closer.)
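
To make that last point concrete, here’s a minimal Python sketch of the kind of narrow interface I mean – layout as data, logic that only knows widget IDs and callbacks. The names here are hypothetical, not any particular library’s API:

    # Layout lives in data: an artist can reorder, regroup, or restyle these
    # entries without touching the logic below.
    LAYOUT = {
        "type": "vertical",
        "children": [
            {"type": "label",  "id": "title",    "text": "Select dialog to show"},
            {"type": "button", "id": "ok_btn",   "text": "OK"},
            {"type": "button", "id": "quit_btn", "text": "Quit"},
        ],
    }

    # Logic lives in code and only knows about widget IDs, not positions or order.
    CALLBACKS = {
        "ok_btn":   lambda: print("OK pressed"),
        "quit_btn": lambda: print("Quit pressed"),
    }

    def on_click(widget_id):
        # Called by the (imaginary) GUI library whenever a widget is clicked.
        handler = CALLBACKS.get(widget_id)
        if handler:
            handler()

    on_click("ok_btn")  # swapping buttons 2 and 3 in LAYOUT never touches this code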

Must be easy to read information from it.

It should go without saying that you’d want to be able to query the state of the interface, eg. to get the current content of an edit box before sending the text to the app when the user presses Enter. Strangely, this is one of the cases that the aforementioned UnityGUI makes awkward. You can’t query an edit box because it doesn’t exist in a queryable form. Instead you have to cache the return value of the call that displays the edit box so that when you get the KeyDown event for the Enter key, you know what was in that box.  It reminds me of some of the more awkward moments of MFC where you would create a control variable which would hold the value of the control – but at least Microsoft provided an editor UI to hook them up and to place the control onscreen. Unity, which has editor UI for pretty much every single other aspect of their engine, expects you to do this part all by hand. Microsoft also at least had the courtesy to offer an alternative route via looking up the control with GetDlgItem – not possible in Unity’s UI.
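
To illustrate (in Python, since that’s the language of the other example below), this is roughly the caching dance an immediate-mode API forces on you. The GUI calls are hypothetical stand-ins, not UnityGUI or any real library:

    def im_text_field(current_text):
        # Stand-in for an immediate-mode edit box: draws the widget this frame
        # and returns whatever text it now contains.
        return current_text

    def key_pressed(key):
        # Stand-in for the per-frame key query.
        return False

    chat_text = ""  # lives outside the GUI code purely so we can read it back later

    def draw_gui():
        global chat_text
        # The edit box has no queryable identity; its contents only exist as the
        # return value of the call that draws it, so we cache them every frame...
        chat_text = im_text_field(chat_text)
        # ...and rely on that cached copy when the Enter key finally arrives.
        if key_pressed("enter"):
            print("submitting:", chat_text)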

Must be easy to write information to it.

Some libraries pride themselves on how easy it is to create a dialog window. Take a look at this example from kytten, a GUI system for Python’s pyglet games library:

    dialog = kytten.Dialog(
        kytten.TitleFrame("Kytten Demo",
            kytten.VerticalLayout([
                kytten.Label("Select dialog to show"),
                kytten.Menu(options=["Document", "Form", "Scrollable"],
                            on_select=on_select),
            ]),
        ),
        window=window, batch=batch, group=fg_group,
        anchor=kytten.ANCHOR_TOP_LEFT,
        theme=theme)

Great! Now… how would I change that label later? When kytten creates a new control it may be just a wrapper for several sub-controls, maybe nested, so you have no easy way of getting a reference to the Label even by trying to traverse the hierarchy. The result is that you can’t really change the content of the label if you construct it in this way. Instead you can construct important controls first and pass them in later, but that gets incredibly unwieldy.
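
Here is roughly what that construct-first approach looks like with the same example. One big caveat: I’m assuming the Label exposes some way to update its text after the fact – the set_text call below is an assumption on my part, not something I’ve verified in kytten’s API; if it doesn’t exist, you’re reduced to tearing down and rebuilding part of the dialog.

    status_label = kytten.Label("Select dialog to show")  # keep our own reference

    dialog = kytten.Dialog(
        kytten.TitleFrame("Kytten Demo",
            kytten.VerticalLayout([
                status_label,   # passed in, rather than constructed inline
                kytten.Menu(options=["Document", "Form", "Scrollable"],
                            on_select=on_select),
            ]),
        ),
        window=window, batch=batch, group=fg_group,
        anchor=kytten.ANCHOR_TOP_LEFT,
        theme=theme)

    # Later, because we kept the reference around:
    status_label.set_text("Form selected")  # assumed API - see caveat above

Multiply that by every label and button you might ever want to update and you can see how unwieldy it gets.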

If you just want to show some buttons on the screen then it’s ok to have a ‘write-only’ GUI. But most systems require something more dynamic than that.

Doesn’t force low level issues and optimisations on you.

Unity’s built-in UI is widely regarded as awful for anything other than rudimentary use, so naturally many third-party systems have appeared as alternatives. One favourite is NGUI. By all accounts it is very capable. Yet to use it you need to get familiar with texture atlases, nine-sliced sprites, tiled sprites, bitmap fonts, and so on. These should be implementation details hidden from you unless you explicitly need to access them. Worrying about slicing the texture should be outside the app and in the data somewhere. Creating a texture atlas should be done automatically by the software unless you have a strong desire to override it, and the same goes for generating a bitmap from a font. Tiling vs. stretching should be an option on an Image. Another Unity UI alternative, EZGUI, seems to have the same issue, which suggests that these are problems exacerbated by Unity somehow.

Decent selection of controls.

Pretty much every GUI system will offer you a button, a text label, editable text, and an image. Great – but it’s not long before you start needing more than that: drop-down lists, grids, tables, and the like.

Pretty much every desktop UI offers all these controls. Pretty much every game UI I’ve tried recently lacks one or more of them. I lost count of the number of times on my last project I was asked by a designer to use a drop-down list and our UI didn’t provide it. And trying to hack grids and tables in via horizontal and vertical boxes is a recipe for pain.

And no, Unity, creating new controls is not the same as grouping a few existing controls together.

 

Conclusion

Strangely, the various GUIs available for C++ developers seem quite capable: CEGUI, SFGUI, GWEN, and others. For some reason the more laborious task of making a decent GUI in C++ has been done, while those working in normally more productive languages struggle on with half-baked solutions. Why is this? Why do developers in other languages accept that UI has to be a chore? And what can we do about it?


The Haxe Programming Language

Posted on Dec.22, 2012, under programming

This is just a quick post regarding my experiences using the Haxe programming language over the last couple of days to make a simple 2D game.

On the surface Haxe looks like a great tool, being a language that allows you to write the code once and then deploy to a number of platforms, including Flash, HTML5, native code (via C++), and so on. These days a small developer would be taking a bit of a risk to write for just one platform, so if this works well, it could be a great asset. The syntax is the familiar C++/Java/C# style, so really it’s not so much a new language as a new set of libraries, a few slightly different keywords, and the tool chain to match. The learning curve is therefore pretty shallow for most programmers with experience of a C++-style language.

However, where it starts to fall down is the area of general maturity. String handling is minimal, with no sprintf function or format method for creating non-trivial text output. The type casting syntax is ugly and throws a pretty vague exception when it fails. In fact all the error messages are fairly curt and unhelpful. IDE support is currently poor, with FlashDevelop struggling to handle stepping through the code. Installation is primitive, expecting to put its tools in a directory off the root of your drive so that its command line library installation tool can write directly to that directory. (This is an awful decision for anybody who wants to make standalone per-project environments, or has a multi-user system, or has security restrictions on where they can install to, etc.) The standard libraries are very sparse, and although you have access to platform-specific libraries, that’s not much use if you want portable code, and the wrappers aren’t fully documented yet anyway.

And that touches on the biggest problem: the platform independence isn’t quite there yet. String encoding differs depending on the back-end, which is bound to cause problems for internationalisation. Code that compiles on one platform may not compile on another, because there is platform-specific code that doesn’t get tested until you try a build. Code that compiles on 2 platforms may raise an error on one and not the other when executing code that didn’t appear to be platform-specific. (My game works perfectly in the Flash build but crashes with an ‘Invalid Cast’ pop-up on the native Windows build.) Code that runs on 2 platforms may act differently on each – my rendered text (using HaxePunk) appears differently on the Windows build than on the Flash build. There are so many different parts between your code and the final build for each platform that it’s not surprising that one part or other often malfunctions.

You can tell that a massive amount of work has been done to get the same Haxe code creating different output code for each platform and wiring it up to the various different capabilities. But some of this wiring and rewiring isn’t fully working yet, and the danger is that you’ll discover a problem quite some way into the project and end up with few options for fixing it. As such I can’t recommend Haxe as a “write once, deploy anywhere” portable language. With some platforms exhibiting runtime errors that others do not, you get the worst of both dynamically and statically typed languages: you have to exhaustively test every line of your code, but you’re paying a syntactic price to please the compiler as well. It would seem wiser to pick whichever of the two routes you prefer and enjoy the benefits as well as the disadvantages.

What about as a language to deploy to a single platform – perhaps it could shine there, at least? Sadly not – it offers little in the way of libraries or syntactic advances over Actionscript or Javascript, so I can’t recommend it for that use either. I personally prefer Python if I want a language that focuses on productivity, or C# if I want one that gives me fairly rigorous compile-time checking. Both of those languages offer better libraries and syntax, not to mention the bonus of having a larger community. (The last point is a poor one to judge a language’s quality on, but ultimately if a language is already tricky to use properly then a small community is going to make it harder to resolve any problems you encounter.) If Haxe could have been like Boo, trying to combine the strengths of Python with those of a .NET language, it might have been a more compelling option. But the Java/Actionscript style syntax is really a lowest common denominator – easy for most programmers to pick up, but offering little over whatever language they came from.

I hope that Haxe continues to improve because I can see it being useful to some people. However based on my experiences with it, I would say the best route to platform independence is still to write code that targets a particular virtual machine,  whether that be Java, Python, .NET, or Flash.


Totally Dishonored, Slightly Disappointed

Posted on Dec.01, 2012, under design

So, I just finished Dishonored, and I must confess to feeling a little disappointed. So much so that I’m driven to write about it. (Some minor spoilers below.)

But, I should probably start off by saying there is a lot that is great about this game. The characters are interesting, the setting is novel, the level and world design gives you a lot of freedom, and the art direction is unique (even if a few textures are awful). The running and jumping mechanics work flawlessly – very important in a game like this – and the interface is responsive.  In fact, it’s only all those positive aspects that make the problems visible. By setting the quality bar so high, it’s hard not to notice the areas where it’s not been reached. So what follows is mostly negative not because the game is mostly negative but because the positives are well covered elsewhere.

Hide or Seek

[Screenshot caption: Yeah, you’re doing a good job at hiding from me too.]

Arguably the biggest problem Dishonored has is also one of its biggest attractions, namely the ability to choose whether to play it stealthily, violently, or some mixture of the two. A similar style exists in games like Deus Ex: Human Revolution or the Splinter Cell series and works well when executed effectively. For both play styles to make sense they need to be broadly in balance, so that you feel it’s an active choice you make for yourself (or for your character) rather than a passive choice that you make by virtue of one route being significantly harder than the other. Unfortunately the two sides never feel quite in balance in Dishonored. My first run-in with the problem came mere seconds into the first proper mission, in one of the hardest areas in the entire game for me: three opponents in one room which has to be traversed. Opting for complete stealth, I tried many ways to get through the room undetected without attacking anyone, and failed about ten times in succession. Perhaps I was mistaken to assume such a manoeuvre was practical, so instead I moved to a compromise approach of non-lethally attacking people in a certain order, and after several failed attempts there I managed to pull that off. (It helps if you realise you can move while choking people, incidentally.) But what I learned with each failure was that once I was discovered and forced into violence, it became much easier, and foes tended to die easily. Perhaps this was a deliberate commentary of some sort – but to me, it immediately felt like a stealth game where I was being punished for playing stealthily.

Luckily, things didn’t stay that bad, and I don’t think there was ever another time I had to contend with three enemies in such an enclosed area with so few resources available. But I never felt like the stealth worked well. Rather than giving you tools that directly measure your own visibility (as in Thief or Splinter Cell), which you can learn from while navigating the world, the feedback comes from visual indicators on the characters who see you, plus an audio cue. I can understand that having a magical way of knowing how visible you are is not something every game wants, but it’s been replaced with another equally non-diegetic indicator, one which only shows up when you are failing. Because of this, it seems like it’s neither providing sufficient feedback to master the mechanic nor attempting to be a realistic in-game behaviour, and hence ends up being a frustrating exercise in trial-and-error. And when you are spotted, enemies converge upon you very quickly, making escape difficult. This makes the violent playthroughs somewhat more challenging at the expense of making stealth feel very binary – you’re either undetected or you’re surrounded. My experience ended up a lot like Chris Kohler’s, where most attempts at stealth fail and either you reload or you give up on playing it that way, because standing your ground and killing your foes is so much easier. I’m not a good player by any means – I completed Dishonored on a stealthy run at the Hard difficulty level in a leisurely 40 hours whereas most will manage it in half to a quarter of that time – but few games made me feel like a bad player as much as this one did, and considering I pretty much live off stealth/action hybrids, that’s unfortunate for the likes of me.

[Screenshot caption: I think this is as many bodies as you can get before they start disappearing. Certainly when I came back to this room shortly afterwards, one of them had gone.]

There are a few other problems with the implementation of stealth and non-lethal methods:

  • Aggressive characters get a variety of different explosives and weapons, while pacifist characters pretty much just have one. On top of that there aren’t many tools for distraction or manipulating the environment in any way. This pretty much reduces a non-lethal character’s decision making process in most cases to picking one of your two methods of knocking someone out, or going around them. (Some suggestions for extra non-lethal features could include limited invisibility, more extinguishable light sources, a ‘whistle’ command as in Splinter Cell, noise-making crossbow bolts, stun or knockout grenades, smoke grenades…)
  • Related to the last point, the economy was out of balance for stealthy characters. You’re likely to end up with thousands of unspent coins and ten or more unused runes because most of the weapons, two of the powers, and the majority of the enhancements are irrelevant if you’re not killing people.
  • Enemies can spot hidden bodies – but the engine arbitrarily removes bodies from the map! When I first spotted this, I assumed it was a bug. A few minutes searching the internet and I discover that it’s an optimisation. But you can’t just optimise out bits of gameplay. Hiding the bodies is a traditional part of stealth games, and Dishonored theoretically makes this more interesting by varying your opponent’s patrol routes, apparently to cover areas that you may have ‘removed’ someone from. On at least one occasion, I panicked because I saw a guard walking towards where I’d left one of his incapacitated colleagues earlier, and rushed to attempt to intercept him before he made the discovery – only to find out that the body was no longer there to be discovered. That was really immersion breaking for me.
  • Shadows – do they hide you? Do they not? I’ve read the note that pops up during loading screens but that’s not the same as seeing the system work in practice. At close range, they certainly don’t, and at long range, it seems to be down to luck as to whether you’re spotted or not. And if you’re stood above someone then they’ll probably never spot you at all, whether you’re in light or shade. I don’t feel this mechanic was usable in a practical way.

One more related aspect bothers me, namely the concept of there being ‘high chaos’ and ‘low chaos’ playthroughs, broadly corresponding to the lethal and non-lethal approaches respectively. While it is suggested that your choice makes a significant difference during play – eg. “fewer rats and weepers”, as the loading screens say – it doesn’t feel like something tangible you can actually observe without having a second playthrough to compare against. It seems like some of the content was designed to accommodate significant differences between a high chaos and a low chaos playthrough, but diluting content in this way is risky. In particular the last mission was spectacularly anticlimactic, with little to do for the last half and several bits of the building completely unused, perhaps playing a more valuable part in the high chaos version. But the result is that the whole final mission can be almost accidentally completed in under 5 minutes. By all reports it’s a totally different experience when you play it the ‘other’ way, but when fewer than half the players are likely to finish it once, never mind twice, it seems a dubious decision to make a second playthrough better at the expense of the first.

One more comment on the gameplay and the choices available to me. On at least two occasions I was seemingly expected to make my way past a certain obstacle, only for there to be a completely unguarded route around it. On the first occasion a helpful tutorial window even showed the routes for me, and the second occasion later in the game made it seem like a concession to making it easier to get into an area. These choices felt a bit patronising, as if aimed at someone more accustomed to playing linear shooters who might consider themselves a genius for finding an alternative route, and who needed a prod to even consider that there may be some alternatives. Still, there was a good degree of freedom in the levels which was welcome.

Tell Me A Story

Dishonored’s world has a truly believable juxtaposition of the beautiful and the ugly, the environment playing a vital part in conveying the game’s atmosphere. It’s only when it comes to the actions of the characters within that world that the quality dips somewhat.

The other major problem I had was with the narrative. The setting had so much potential, with a cast of interesting characters, so the story takes a prominent role and I expected a satisfying resolution at the end of the story with loose ends tied up. But Dishonored left many aspects unexplained, the result being a situation reminiscent of Ridley Scott’s Prometheus, the audience being left with more questions than answers. Surely these gaps must have been noticed, so why did they make it to the final product? A charitable explanation might be that the team started out intending to create a huge ‘Order vs Chaos’ mythos, much like Thief‘s Hammerite/Pagan dichotomy, but ended up having to significantly strip back this aspect when short on time. A more cynical view might be that these issues were deliberately left open and were intended to be dealt with in a sequel – but while that may sound great to the bean-counters, it’s not a great way to treat your fans.

Issues with the plot include:

  • Your character gets supernatural abilities. Nothing else in the rest of the game really justifies or explains this. In pure gameplay terms they work well – but the narrative never makes a compelling case either for their existence in general or for you to have them.
  • The person who bestows the supernatural abilities upon you also seems to have no relation to the rest of the game’s events and only a tenuous connection to its setting. Maybe the intent was to pay homage to Deus Ex by adding a literal deus ex machina situation to the game! If so, well played, but it’s a bit too ‘meta’ for the typical player. The name of the character raises interesting parallels with your own character’s status, and I thought this would form part of the plot later on – but either this is just a coincidence, or it simply never got elaborated upon.
  • At more than one point you face another faction of characters with abilities similar to yours. The lack of explanation for their motivation could be glossed over if taken at face value. The lack of explanation for their powers cannot be dismissed so easily.
  • You get given a mystical object that yields valuable information about characters and environments. This is a lovely touch and quite a unique item for a game – yet again, largely unexplained. (It also lets you work out the solitary plot twist some way ahead of time, incidentally.) Robert Yang has his own theories on this device but I don’t feel the implication is as strong as he suggests, especially given the missed opportunity to make that association at the point where you acquire it. And even if you do accept that the game effectively tells you what it is, you never get told why it is.
  • It’s hinted that your character has a certain type of relationship with one of the other characters. This is referred to a couple of times but doesn’t seem to get expanded upon either.
  • You can end up in a duel with someone with virtually no warning beforehand and little in the way of explanation afterwards. It also makes little sense that the person who sends you to a duel would risk your life over such a matter at that important stage.
  • There’s a recurring character who appears to exist for no reason other than to add a bit of flavour. This would be great if not for the nagging feeling that the time spent on that character could have been spent on some of the above issues.

None of these issues are a problem during play because you assume that all may be revealed at the end. But when you reach the end and little has been explained, it leaves you feeling dissatisfied and lacking a real sense of resolution. And I think that’s a real problem with a game that places such weight on narrative, through cut-scenes, audio logs, expository dialogue with other characters, and so on. It’s an interesting example of how gameplay that works perfectly well without narrative changes when you add it, colouring your perception of the whole game for better or worse, becoming the lens through which you observe the game system.

Revenge Solves Everything™

As usual, a sequel could address these issues. Dishonored sold deservedly well, and Bethesda are already throwing around distasteful business terms such as ‘a new franchise‘ to imply that sequels are coming. Perhaps when played as a series the narrative will make a lot more sense, addressing the loose ends and doing a better job of giving players a satisfactory resolution. And it wouldn’t be too hard to beef up the stealth aspect of the game too, by throwing in extra powers and perhaps paying a little more attention to sound and lighting factors and how players can manipulate them. I just hope that the fact that Dishonored succeeded despite the problems above doesn’t mean no attempt will be made to address them.


Video games vs books, or, how not to run a retail operation

Posted on Sep.22, 2012, under Uncategorized

As many will know, earlier this year the UK video game retailer GAME filed for administration, which to the uninitiated basically means it ran out of money. This is after a year that should have been great for such a retailer, given that we saw the release of Modern Warfare 3, Skyrim, LA Noire, Deus Ex: Human Revolution, Uncharted 3, Saints Row: The Third, and all the usual franchises such as FIFA, The Sims, Tiger Woods, Madden, and PES getting updates. Despite all that, there was a 13% drop in revenue in the UK. This would be bad news for most sectors, but the UK only really has one specialist video game retailer, namely GAME, so you’d think a 13% drop would be something they could absorb – yet this was not the case.

Compare this with Waterstones, a UK book retailer. Yes, books, those old wooden things that must be out of date by now. Waterstones, like GAME, also has a privileged position of having no direct competitors of a similar size or reach, being the only national chain that specialises in that area. Both companies are under threat from the likes of Amazon, but book sellers arguably more so, due to Amazon’s historical focus on book delivery and vested interest in promoting digital readers like their Kindle. Yet Waterstones is turning a modest profit, despite also suffering a drop in revenues and sales. How can the new entertainment industry learn some lessons from the old?

Variety

Book shops stock a wide variety of titles, new and old. Barnes and Noble say they stock up to 200,000 titles in a store. I don’t have any hard figures but I’d be surprised if the branches of GAME I’ve visited recently have more than 2,000. Video game stores have always kept a smaller selection, focusing on the hits, perhaps worried about filling shelves with stock they can’t shift. But this means there is less incentive to go into the stores, as you’re rarely going to find even a classic from 3 or 4 years ago, never mind true retro classics from a decade or more ago. Nor can you find slightly off-beat titles that might appeal to small but dedicated market niches. Shunting games off the shelves so quickly not only makes it more likely that people won’t find what they’re looking for, but also decimates the shop’s chances of cross-selling and down-selling.

Instead there is a conservative focus on the best-sellers, presumably because they are guaranteed to sell quickly, but this takes GAME into direct competition with giant general purpose retailers like Tesco who can afford much more aggressive discounting. This seems like a foolish strategy.

GAME have also annoyed publishers by supporting the used game market. While it’s reasonable for gamers to want to be able to trade in their games, and therefore reasonable for a retailer to want to make use of this, that market could easily have been undercut to a large degree by continuing to ship a wide variety of older games at budget prices, where the publisher does still see a return, and gamers get a wider choice – everybody would be happier.

Organisation

Book stores go to painstaking efforts to file books by genre. This way, you can find what you want even if you don’t know its name or author, or if perhaps you don’t know exactly what you’re interested in.

Games stores do not do this. They file by platform, which is necessary but not sufficient. Then if you’re lucky, they order games alphabetically, which means you have to know the game you want. That makes it harder for casual browsers to find something of interest. (And besides, people who know exactly which game they want are more likely to just do a web search and get it cheaper online.)

We’re not lacking a way to classify game genres. Steam has one (Action, Adventure, Strategy, RPG, Indie, MMO, Casual, Simulation, Racing, Sports), Wikipedia has one (Action, Action-Adventure, Adventure, Roleplaying, Simulation, Strategy, Other), as does Metacritic (Action, Adventure, Fighting Games, First-Person Shooters, Flight/Flying, Party, Platformer, Puzzle, Racing, Real-Time Strategy, Role-Playing, Simulation, Sports, Strategy, Third-Person Shooter, Turn-Based Strategy, Wargames, Wrestling). Of course there’s no consensus here, and we geeks in the video games world love to argue about how to classify things. But it doesn’t and shouldn’t matter, any more than it matters whether Twilight is “Teen Romance” or “Dark Fantasy” or “Young Adult Horror”; what matters is that similar things get filed together. As was noted in David Allen’s “Getting Things Done” book, filing systems are not so much about ensuring everything is in the right place, because that is always a hopeless dream – it’s more about vastly reducing the number of places where something might be. If I wanted to find a game with gameplay like Realms of the Haunting, it might have been filed under ‘shooter’, or ‘adventure’, or ‘horror’ – but that’s still far better than me having to start at ‘A’ and examine every single box until I get to ‘Z’.

It also becomes much harder to familiarise customers with new brands when each brand exists in a vacuum. To revive a previous example, if I wrote a novel about teenagers falling in love with werewolves, the book stores would shelve it right next to Twilight, and any Twilight fan looking for more reading material in that store would see the books together and instantly have a good idea that my novel might be to their liking, even before reading the blurb on the back, and even if they didn’t see the big sign saying “Teen Supernatural Romance” above the shelf. Games stores have spectacularly failed to address this aspect of cross-selling. I suspect this has helped to perpetuate the hit-driven nature of the industry, making it harder for new games and properties to get attention because non-hardcore gamers have no idea what the gameplay is like. This is trivial to address, but for some reason still hasn’t happened.

It’s interesting to note that when games spawn ‘clones’, the mainstream industry tends to react negatively. But when books spawn clones, book stores spot the trend and open new dedicated shelves for them, because genre fiction is the backbone of the book world. Do games need to rehabilitate the humble act of evolutionary improvement? For an industry that spends so much on content, arguably far too much, we seem rather dismissive of anybody that presents all-new content if they recycled the gameplay in some way. Gameplay is king, certainly – but it appears that we’re maybe expecting an unrealistic level of novelty from it.

Misunderstanding the way people buy games

GAME have long supported pre-orders of games, and in recent years they’ve also had midnight openings for big releases like Modern Warfare 3. They see the signs that many of their customers know exactly what they want, often months before the product even exists.

So, why do they sprinkle so many shops around in the same town, as if expecting people to spot a shop front from the corner of their eye and drop in to buy games on impulse? Video games are not snacks or drinks that you might pick up randomly while wandering round doing your shopping. Nor do the different shops offer any different stock to make it worthwhile having different stores. The only thing that differentiates some of them is that the Gamestation line of stores are aimed at a slightly more hardcore audience than the safe-but-sterile GAME stores – but even this distinction will come to an end with the imminent rebranding of all Gamestations as GAME stores. The end result is that most towns and cities will have 2 or more identical shops selling identical products to the same market.

This has been a problem for GAME since it was called Electronics Boutique – at one point they had 4 identical shops here in Nottingham, and I think that was alongside 1 or 2 GAME shops before the latter chain was bought out (with that brand persisting to today, obviously). Up until going into administration in March, they had 2 GAME outlets and 1 Gamestation here. The only other retailers with this many stores in the city centre are convenience stores, bakeries, coffee shops, and McDonalds.

GAME should be consolidating into fewer, larger stores. This would cut their costs and probably allow them to broaden their range too. Really they should have spent money on doing this instead of buying Gamestation back in 2007, but it’s a bit late for that now.

Failing to exploit bricks and mortar benefits

People often go into shops to browse. The lack of organisation mentioned above means that browsing through video games is a slow, fruitless affair. And the lack of variety means that you’re unlikely to find much of interest anyway. A retailer needs to exploit the fact that someone has chosen to walk into their shop by showing them lots of interesting things that they could buy. Instead you get an alphabetically sorted rack of games, and often a fabricated “top 10” chart with 4 copies of the same game standing side-by-side.

If you go to a book store, you can pick up a book and read it right there without paying a thing. They provide chairs and tables for you to do this. If you like the book, often you buy it. They were doing ‘free to play’ long before games developers caught on to the idea. So why do games shops make it difficult to actually play the games before I buy? My local HMV, a music and video store that has a small range of games, actually has more playable consoles in-store than the nearby GAME, which appears to just have 1. It’s absurd.

And as much as I hate being harassed by shop assistants, you can imagine that with the complete dearth of tools available in such shops to discover new games that might be of interest to you, it might help to have people on the shop floor who can help you find something. But this doesn’t happen either. Lots of the staff are knowledgeable gamers but you’re unlikely to actually talk to one until you’ve already made the decision to buy. This must mean many lost sales.

Summary

Many are saying that high street game retail is dying – but it looks to me like a self-inflicted wound. There doesn’t seem to be a lack of people who are willing to buy products from a shop when they could buy online for slightly cheaper – but you have to make it easy and worthwhile for them. Which GAME has not come remotely close to doing.


Steam ‘Greenlight’ and the $100 fee controversy

Posted on Sep.05, 2012, under Uncategorized

Continuing on the Valve theme…

Today’s controversial game development news is that Valve have decided to charge $100 to developers who wish to submit games for approval via their Greenlight system. This is not to cover costs, but to ‘cut down the noise in the system’, ie. people submitting other people’s games, unfinished games, or things that aren’t games at all – the problems that blighted Greenlight‘s release a few days ago. The fee itself goes to charity, not Valve.

This fee has led to a lot of complaints, and a lot of counter-argument against the complaints. The main objection is that $100 is a lot of money for independent developers. This is laughed off by a lot of people – it’s just a few hours’ pay, right? And this is certainly the case in North America and Western Europe. But when you look elsewhere, the situation is different – in Poland, the average disposable income is not even 20% of what it is in the USA, so it feels more like a $500 fee to them. So non-trivial monetary barriers like this risk deterring people from poorer countries from submitting, despite their games being as good as the competition. It’s also a lot of money for students, those out of work, or those working minimum wage jobs to pay the bills while developing games in the evenings. The Rampant Coyote mentions that “a lot of indies out there are practically charity cases themselves, living on Top Ramen and working from their cramped studio apartment.” Of course it is certainly possible to overcome adversities like these and raise $100 to spend on the chance of getting your game onto Steam, but this is still going to be a pretty high threshold for many people. Surely we want to encourage them, not discourage them.

Some argue that you shouldn’t expect to enter business without making an investment, and that you can’t expect the world to hand you things for free so that you get to do what you want. These statements are certainly true, but they also miss (or ignore) the point. If every game was purely made with a market in mind the world of gaming would be significantly diminished. Valve certainly aren’t a charity and nobody expects them to subsidise unpopular games, but unlike a normal publisher they are paying nothing up front and hosting the bits and bytes of a game that hardly anybody buys costs them next to nothing anyway – so why not select based on quality, rather than ability to invest? And nobody is expecting to be given things for free, or below cost price – just to not have an arbitrary fee imposed that doesn’t even buy them a service or product, as is the case with any fee that is immediately taken and given to charity like this. By way of analogy, when not programming games or writing silly blog posts, I’m a musician – and although I have no problem with spending money on buying a guitar for my performances (because that instrument costs money to make, and the guitar maker deserves compensation for his or her efforts), I would take issue with being asked to pay to submit my music to record labels, because it costs them virtually nothing to listen and if they like what they hear, they’ll split the proceeds with me anyway.

Comparisons with PS3 or Xbox360 developer costs are also somewhat flawed. When you enter into a development relationship with the likes of Sony or Microsoft you’re usually paying for an expensive and bespoke piece of hardware, access to internal libraries and tools, as well as getting a degree of documentation and support with all that. Besides, their costs are arguably deliberately high to keep smaller companies out – is this the future we want for PC gaming? Sony and Microsoft have their reasons for operating that way but it’s essential to keep an alternative for the kind of games that don’t suit massive console development budgets.

Still, surely any game that is worth publishing can raise $100 from somewhere, right? Maybe just $1 from each of 100 fans? But this brings us right back to why Greenlight even exists in the first place. Steam has the majority of the PC digital distribution market sewn up – between 50% and 80% of it, depending on who you ask – so developers are desperate to get on it. Lots of gamers are buying all their games from Steam and nowhere else, so if you’re not on it, it’s very hard to reach people, as most of them are simply not looking beyond Valve’s store.

But worse still, even the gamers who do look outside the Steam ecosystem will often demand developers put their games on Steam, often not realising (or caring) that the developer can’t just choose to put it up there.  As Sophie Houlden says, ‘ “I won’t get it if it’s not on steam” is such a common attitude I could spit and it’d hit someone who thinks that.’ There are good reasons why players prefer Steam, but that’s no consolation to the developers. They know that not being on Steam is a massive hindrance to their chance of survival, possibly being a larger factor than the relative quality of their game. Without being able to get on Steam, hardly anybody knows you exist, and that situation is getting worse every day. And if nobody knows you exist, reaching 100 people to get them to pay you $1 each is non-trivial. This is also why simply choosing to use another distributor like Desura is not the whole answer – the real problem is getting your name out to people, not sending data across the wire.

Some angry developers have suggested Valve should not have brought in Greenlight at all, and instead of having the voting system, they should be paying a team to properly check all submitted indie games. That swings the pendulum an unreasonable amount to the other side though – Valve shouldn’t have to incur all the cost and risk of getting good games onto the platform and weeding out the bad ones. We can hope they have good intentions, but again, Valve are not a charity. Ideally all sides would come to a compromise where Valve get to make a good profit from making as many good games available to people as possible, without any side having to take an unnecessary financial risk in the process.

So, what could Valve have done differently?

  1. Deposits, not payments. Here in the UK you need to pay a deposit if you want to stand for election as a Member of Parliament. This keeps the number of candidates down to mostly those who have a serious agenda. (Mostly.) But the important thing is that you get your money back if you do reasonably well, which reduces the monetary risk while still deterring time-wasters. Steam could implement a system like this, returning the deposit to any game that reaches a certain threshold of votes (presumably proportional to other games, or total votes cast over a period of time, etc). It would still be something of a barrier to the poorer developer, but it would be easier for them to get a loan from others in this case if there is a significant chance of them getting the cash back.
  2. Better organisation. There’s no excuse in 2012 to just have a handful of alphabetical lists sorted by crude genre categories, as is currently the case on Greenlight. With everything lumped in together like that, of course noise will get in the way of the signal. Where is the “If you voted for This, then you might also like: This, This, and This” system to help you find similar games to the ones you like? Why is there no segregation of games that others are voting positively from those which others clearly think are not ready for publication? Why not have tags for certain aesthetics, such as 2D, 3D, pixel art, cartoon, etc? Why not show me games that are tagged similarly to the Steam games I already own, ie. in line with my interests? (There’s a toy sketch of that kind of tag matching after this list.) You don’t need to curate a collection aggressively if it’s easy for users to find what they want anyway. I don’t care that 99.99999% of the things on Amazon, Reddit, StackOverflow, or eBay are of no interest to me, because they do a good job of keeping that out of the way. (Incidentally, Valve made all the same mistakes with the Steam Workshop.)
  3. Improved moderation tools. YouTube is full of notoriously awful comments. Reddit is better, but not perfect. StackOverflow is almost all high quality, by comparison. All these sites are based around user-supplied content and user comments, yet some work better than others.  I won’t go into this too deeply but it appears that if you want to increase the signal to noise ratio, then it’s not sufficient to vote on something, but the layout of the site and the positioning of the articles and comments needs to change in response to that voting. When put that way, it’s almost too obvious to point out, but it’s still something Greenlight missed. Promote the highly upvoted ones. Demote or even hide the downvoted ones, unless I ask to show them. But at least let the crowdsourcing actually mean something.
  4. A more reasonable fee amount. If the figure had been $10 rather than $100 then probably nobody would care. Pretty much any payment would deter most of the time wasters, but smaller payments would have less of an adverse effect on developers with less cash to spare. A small non-refundable admin fee could be combined with the refundable deposit and few could argue with the fairness of that.
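
The tag-matching idea in point 2 is not exotic; here’s a toy Python sketch of it, using entirely made-up data and a simple Jaccard similarity – nothing here reflects how Steam actually works:

    def tag_similarity(a, b):
        # Jaccard similarity between two tag sets: shared tags / total tags.
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a or b) else 0.0

    # Tags on games this user already owns (hypothetical).
    owned = {
        "Owned platformer": {"2d", "pixel art", "platformer"},
        "Owned wargame":    {"strategy", "turn-based"},
    }

    # Tags on Greenlight submissions (also hypothetical).
    submissions = {
        "Submission X": {"2d", "platformer", "puzzle"},
        "Submission Y": {"racing", "3d"},
    }

    for name, tags in submissions.items():
        score = max(tag_similarity(tags, owned_tags) for owned_tags in owned.values())
        print(name, round(score, 2))   # surface the closest matches to this user first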

I think that with all that in mind, Greenlight could have avoided most of this negative publicity. But even if Greenlight does improve in future, independent developers and gamers who want a thriving ecosystem of diverse games will need to carefully consider the way the PC market is going and whether we’re just trading the old evils of retail for an equally bad set of problems.


Thoughts on Valve’s plans

Posted on Aug.10, 2012, under Uncategorized

Valve have made a profound impact on the PC gaming market since they launched Steam. They have roughly half the PC digital game distribution market, they offer more games for under £4 than my local branch of GAME offers at any price, and they have up to 4 million users online at peak times. But while most still think of them as primarily a game distribution system, bits of news have trickled out over the last year which suggest that they are looking beyond that.

First up was the ‘Steam Box‘, a hypothetical console designed by Valve, to compete with Apple TV and the gaming consoles. Valve were quick to deny this at the time, however.

Now note the recent statement from Valve’s Gabe Newell that “Windows 8 is a catastrophe for everyone in the PC space”, and more significantly the part where he goes on to say, “I think we’ll lose some of the top-tier PC/OEMs, who will exit the market. I think margins will be destroyed for a bunch of people. If that’s true, then it will be good to have alternatives to hedge against that eventuality.” (Emphasis mine.) This is why Valve are taking a keen interest in Linux, porting Left 4 Dead 2 to Ubuntu (with significant success, it seems).

But there’s more: Valve recently talked about branching out into non-game software, which goes beyond merely protecting their game sales from the risks of a Windows 8 walled garden. Obviously they already have the means of distribution and bricks-and-mortar retail is suffering so this makes sense. But I think there’s more to it than this.

I suspect that Valve will do some variation on the following:

  • Use the knowledge gained during the port of L4D2 to get their Source engine – and with it their back catalogue – running on Linux. Gaming on Linux has been mostly held back by a chicken-and-egg problem: because there’s no retail market for Linux games, publishers won’t spend money on Linux ports of Windows games, meaning there are few games available on the platform, which inhibits adoption of the platform by gamers. Since Valve is the combined developer, publisher, and distributor of these games, they can jump-start this process with a stable of highly-regarded games.
  • Take the abstraction layers they develop during the port to create a framework or library to aid porting from Windows to Linux (and possibly MacOS). This will probably be offered free to developers to facilitate them offering their games across multiple platforms. This will let them quickly fill the Linux Steam store with games without needing to do much work themselves. The success of the Humble Indie Bundles in recent years shows that there are people who will buy games for Linux, and that the numbers are not too dissimilar from MacOS users (but with almost 20% higher revenue per user), so I don’t think it’s a coincidence that Valve have integrated the bundles with Steam.
  • Intensify work on the Steam Box, with the idea being to produce an Ubuntu-powered system where Steam is the primary software delivery method. By this time they should have many games that exist in a Linux compatible form. The open nature of Ubuntu and Linux’s focus on the typical Intel architecture means manufacturing such a device would be cheap and easy compared to current or next-generation consoles with custom operating systems and architectures.
  • Aim to position themselves as clear leaders in the home entertainment market by having the Steam Box be the primary open platform for games and other non-business apps. They will be able to offer a much wider set of game titles than the competition, a better social network, support for mods, etc. And the recent expansion into software other than games will help position it as a useful all-rounder device. Sticking to modular PC-style hardware and open development standards like OpenGL will also allow them to ship significantly upgraded versions of the Box every few years in a way that consoles with their proprietary APIs cannot, while benefiting from free updates coming from the Ubuntu ecosystem.

If done properly, it could certainly shake up the gaming industry over the next few years, disrupting things much like Google did with Android, possibly following a similar model of allowing multiple hardware manufacturers. There’s nothing amazingly revolutionary here, but when you’ve already tied up the software side you just need decent hardware to close the loop. It may sound a bit far-fetched to consider a Valve console, but some of us will remember a time when the idea of Microsoft launching one seemed crazy too. Valve look like being in a similar position now.


In defence of Metacritic scores

Posted on Jul.28, 2012, under Uncategorized

Back in March it was reported that Obsidian Entertainment missed out on a bonus payment from their publisher Bethesda when their game Fallout: New Vegas narrowly failed to score 85 points on the Metacritic scoring system, which aggregates game ratings from a variety of sources to form an overall score. There was some discussion over whether that sort of criterion was a fair one to base developer bonuses on, especially given the news that Obsidian were having to make lay-offs when this news became public. The implication was that if they’d got the bonus, those job losses may not have happened.

The debate was renewed this week when it emerged that Irrational Games, makers of classics like Bioshock and System Shock 2, had included a requirement on a recent job advert that applicants should have a credit on a game with a Metacritic score of 85 or above. After the initial flurry of criticism online, the requirement has been removed from the ad. Yet it would be hard to imagine that this consideration will have been forgotten entirely, since it was considered important enough to add in the first place.

Gamasutra asked industry writers for their thoughts, and they tend to converge on a common position of being critical of the practice:

  • “Some really smart and talented folks have contributed to games that weren’t outright critical darlings.”
  • “Holding their individual work to a group standard, and a nebulous one at that, is beyond the pale.”
  • “[...] it’s even worse when you’re pinning that badge to an individual whose contribution to a bad game could have been amazing, or to a great game could have been insignificant.”
  • “your Metacritic score is really just an arbitrary number derived from the press, and it doesn’t take much to ruin your chances of receiving a “good” score.”
  • “who would want to work for a company that believes this to be an acceptable requirement for hiring?”

The complaints seem to revolve around a few key issues – that organisations (whether publishers in the case of bonuses, or developers in the case of hiring) shouldn’t be judging whole people and entire products based on these numerical scales,  that the Metacritic scores themselves are arbitrary and don’t measure anything useful, and that good people worked well on games that weren’t critically acclaimed (and vice versa).

The first issue is odd, because the job world is already heavily numbers-based. Even the amended job spec with the Metacritic requirement taken out asks for 6 or more years as a designer in the industry, 4 or more years of management experience, and 3 or more games worked on for the project duration. Requirements like these are common for top-tier jobs – for entry level positions it’s common to see requirements like “1-2 years proficiency in C++”, “6+ months console development experience”, “Bachelors Degree”, etc. Of course there will be people without one or more of these criteria who are better than some of the candidates with all the criteria, but we know that the criteria are still a useful guide. The number of false negatives you will suffer by ruling people out wrongly is almost certainly compensated for by the time saved in filtering out inappropriate applicants.

As for the relationship between a publisher and a developer, a publisher will often tie bonus payments to sales figures, and payments during the development period may be dependent on the quality of milestone builds or on the dev team meeting fixed deadlines. Tying bonuses to sales is an important part of managing the risk a publisher is exposed to when funding a project, and is essentially equivalent to paying less up front but adding royalties, except with a steeper threshold. This lets them minimise the fixed cost while also being able to reward successful developers with money that would generally only ever come from profits. Without the ability to do this, publishers would have to take fewer risks – if that is even possible these days! – and fewer games would get funded. So in the big bad world of high budget game development, these metrics are a necessary evil. If you want a publisher to throw millions at you to make a game,  you’re crossing over from art to commerce, and the people who fund you deserve to get some assurances back that you are trying to make the best product with their money. And as a developer you’d usually prefer that the quality of that product was based on things you can more directly influence, such as how much the reviewers enjoy it, than on things you get little control over, such as how many units it sells. Metacritic scores are a step up from sales figures here.

The second issue is about the Metacritic score itself. What does it measure – fun, quality, predicted sales? Does it even make sense to assign a score to a game, which is surely going to be experienced subjectively? And does the aggregate value make any more sense than any given individual one? One answer to all these questions, which is actually quite simple but will not satisfy purists, is to abandon the idea that the score measures anything other than critical opinion. And whereas critical opinion itself does not equal fun, or quality, or predicted sales, it does actually correlate highly with all of those variables when the population is viewed as a whole. Metacritic scores correlate positively with the user scores (with a Pearson coefficient of 0.47 in one test I did of 50 randomly selected games) which implies the critics are at least in touch with public opinion, and that what they like is probably what the market will like too; at least one laboratory study supports this, as does empirical data from EEDAR presented at this year’s Game Developers Conference. Of course, everybody can find discrepancies, whether between critical opinion and public opinion on one game (eg. Mass Effect 3 getting 89 on Metacritic while the user score averages 4.2 out of 10), or in the relative rankings, or in finding games that scored highly but sold poorly, but on the whole the ordering is far more right than wrong and the scores are meaningful. The value itself may be imprecise but it doesn’t mean that you discard the entire measurement, just adjust your expectations.
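
For reference, the Pearson coefficient mentioned above is simple to compute; here’s a small Python sketch with made-up scores (not the actual 50-game sample I used):

    from math import sqrt

    def pearson(xs, ys):
        # Standard Pearson correlation: covariance divided by the product
        # of the two standard deviations.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    metacritic_scores = [85, 72, 90, 64, 78]        # critic aggregate, 0-100
    user_scores       = [8.1, 6.5, 7.9, 5.2, 7.0]   # user average, 0-10
    print(round(pearson(metacritic_scores, user_scores), 2))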

Also in the Gamasutra article, Kris Graft suggests, “maybe their HR departments should just cut out the middleman and recruit a couple dozen video game reviewers who will play job applicants’ games, score them independently, then average out the results. Isn’t that essentially what’s going on here?” Not exactly. But is it really absurd to check a designer’s abilities by getting experienced players to actually play the candidate’s games and rate them? Surely not – in fact, surely that is going to be one of the better tests if we care about a game’s actual experience. And most Metacritic scores of reasonably well-known games are formed by aggregating more reviews than two dozen (although not all, admittedly), which makes the scores more valid than an in-house test of that size would be, as the more samples you get, the closer you approximate the ‘actual’ value. And using publicly available scores (rather than it being privately done for HR departments) means more transparency and a level playing field. The Metacritic score will almost always be a better judge of critical acclaim than any in-house test. Many developers, in games and elsewhere, will have experienced the lottery of in-house testing, which often rejects someone who then goes on to pass a test somewhere that, on paper, would be an equal or better quality employer. A standard focus for comparison on public data would seem to be an improvement on this situation.
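
That sampling point is easy to demonstrate with a quick synthetic example: the more noisy individual reviews you average, the less the aggregate wobbles. The ‘true’ score and reviewer noise below are invented purely for illustration:

    import random

    random.seed(1)
    TRUE_SCORE = 80   # the hypothetical 'actual' quality of the game
    NOISE = 10        # how far an individual reviewer might stray from it

    def aggregate(n_reviews):
        # Average of n noisy individual review scores.
        return sum(random.gauss(TRUE_SCORE, NOISE) for _ in range(n_reviews)) / n_reviews

    for n in (2, 10, 40):
        samples = [round(aggregate(n), 1) for _ in range(5)]
        print(f"{n:>2} reviews per aggregate: {samples}")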

The last major objection was that a game’s score isn’t necessarily a good measure of an individual developer’s ability. This is the hardest one to argue with because there is a lot of truth in the fact that great developers sometimes end up on poor games, and perhaps vice versa. But this is where industry experience tends to even things out – the best developers will, over their careers, generally gravitate towards the higher quality companies and make higher quality games, giving themselves a good chance of getting such a credit. (Note that the controversial advert only asked for “one game with a Metacritic score of 85 or higher” – not an average of 85 over your career.) So rather than being read as “are you good enough that you inspire whoever you work with to create 85+ scoring games”, it should be read as “are you good enough that you were previously hired by a company that has made 85+ scoring games”, or perhaps even “do you have experience of working in the kind of environment and with the kind of people that make 85+ scoring games”. These are more useful ways of viewing the requirement, and while there is still a lot of scope for false negatives and for rejecting some good developers – as with any measurement made prior to employment – you can be reasonably sure that someone who meets this bar has the kind of experience a top developer would need.

So, whereas it’s understandable that people don’t like their art being distilled down to subjective ratings and strict thresholds, nor the idea of jobs being lost or companies closing due to a metric that is potentially at the mercy of a few rogue journalists, it’s hard to argue that judging developers by Metacritic scores is inherently bad. Gamers, developers, and publishers all want better games, and that means finding ways of deciding what ‘better’ means. Metacritic scores may be far from ideal in that regard, but right now they’re probably the best we have.


A quick thought about software usability

by on Jul.31, 2011, under Uncategorized

I wanted to install an update to some of my music software today, on Windows XP. Here’s the story.

I found out about the update via email, because there is no single place to find out about updated software. Obviously Windows isn’t a closed ecosystem so a single update point might be an unreasonable thing to ask, but it’s a shame there is no obvious central place that notifies you of updates: just an RSS stream with app name, version number, and a download link would be enough, and one program could read that once every startup. Instead, all your software asks to run at startup just so that it can check for its own updates, slowing your boot times and sometimes leaving stuff taking up memory. Not great.
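
Something like the following would be enough – the feed format and URL here are hypothetical, but the point is how little machinery a central update check would actually need: one fetch at startup and a version comparison.

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://example.com/app-updates.xml"   # hypothetical central feed
    INSTALLED = {"SampleLibrary": "1.0.4"}            # versions we currently have

    def check_for_updates():
        with urllib.request.urlopen(FEED_URL) as response:
            tree = ET.parse(response)
        for item in tree.iter("item"):
            name = item.findtext("name")
            version = item.findtext("version")
            link = item.findtext("link")
            if name in INSTALLED and version != INSTALLED[name]:
                print("%s %s is available: %s" % (name, version, link))

    check_for_updates()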

The update was not for the app itself but for the samples, and the sample pack update had a version number of 1.0.5: did I need it? I don’t know. The software doesn’t tell me what version the samples are. It’s not in the Help -> About menu, because that’s just the software itself. It’s not noted in the Start Menu either. Nor in the Add/Remove Programs dialog. So I just have to install it and hope for the best. Why is there no standard way of finding version numbers, in 2011? Even if different vendors have different ideas of what a version number should mean, there should still be a single place you go to find them.

I read the readme file that came alongside the installer. It tells me I need to navigate to the right directory if I’ve moved the samples. I might have done, so I bear that in mind. I run the installer, and at no point does it offer me the option of navigating to the right directory. Instead, it finds the correct location of the samples and patches them. Except, it doesn’t really. It seems to have done everything 1 directory level up from where it should have, so now I have to work out whether I can just copy them over by hand. Grr. Did I mention that the installer proudly said 1.0.4 in the title bar, not 1.0.5?

I go to their website to search the forum to see if anybody has (a) any idea how to find the version of what is installed (not because I need that any more, but because I’m intrigued as to whether it’s even possible), and (b) whether everybody has a problem with the directory structure when upgrading, or if it’s just me. I get asked for a username and password. I don’t know my username and password. I hate passwords (as you can see in my previous post on the issue) but this is especially bad because I need to use one account and password to authenticate my software and another account (and potentially another password) to use the software’s forum. I click the ‘forgotten password’ option, and have a reset link sent to my email account. I click the link, and it emails a new random password to my email account in plain text. I spend 15 seconds getting angry that they’ll send me a forum password via plain text email but won’t store anything server-side that lets me actually retrieve my original password, 15 more seconds amused that the random password they generated for me was ‘dddd5ddd’, and plough on.

I get into the forum, and enter a search term with 2 keywords. I am given a list of all results that match either of the 2 keywords. I feel like we’re back in the days of Altavista vs. Yahoo when search engines thought it was better to give you more results rather than better results. Then, Google demonstrated that when people typed “A B”, they really wanted A and B, not A or B – has the rest of the world not noticed yet? I suppose it was only 11 years ago, after all. Anyway, none of the hundreds of results seemed at all relevant to my problem, so I gave up.
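
The distinction is trivial to express in code, which makes it all the more baffling that the forum got it wrong. A toy illustration, with made-up posts:

    posts = [
        "installer chooses the wrong directory when patching samples",
        "wrong version number shown in the installer title bar",
        "how do I change the default install directory",
        "forum password reset emailed in plain text",
    ]

    keywords = ["wrong", "directory"]

    or_results = [p for p in posts if any(k in p for k in keywords)]
    and_results = [p for p in posts if all(k in p for k in keywords)]

    print("OR: ", or_results)    # three hits, most of them irrelevant to my problem
    print("AND:", and_results)   # just the one post containing both words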

Now this was an extreme case, but it’s rare for me to not encounter at least one issue similar to one of the above during any installation or upgrade. As software developers, we should do better than this. Right?


Initial thoughts on Google Plus

by on Jul.06, 2011, under Uncategorized

I appear to be somewhat unusual among my professional colleagues in being a big Facebook user; I think this is mostly because most of my friends are younger than me, typically in the 18 to 25 age range, a demographic who never really knew an internet before social networking, and who rely on Facebook for most of their online life – it’s like Outlook for young people. By comparison, older and more technically advanced people seem to find many aspects of Facebook annoying and for various reasons either want to ditch it entirely or migrate to a service that they feel will suit them better. In particular, a lot of online bloggers and journalists seem to have dismissed or ignored Facebook for whatever reason but are drawn to the various Google offerings.

With this in mind, the recent launch of Google Plus has been illuminating, both in terms of the attitude towards it from each demographic, and in showing how Google may not entirely understand this space that they’re arguably attempting to enter for the 4th time (after Buzz, Wave, and Orkut).

Firstly, one of the things I’ve noticed on Google Plus at this early stage is how many people are singing the praises of features that they believe to be exclusive to ‘G+’, but which already existed on Facebook. For example, the key ‘Circles’ feature already exists on Facebook in the form of Friend Lists and has done for years. (To set one up, click Friends > Manage Friends > Create A List. Once you’ve done this, you’re prompted to add each new friend into the various lists, just like G+ prompts you to put people into Circles.) Yet it is being talked about as if it were a revolutionary advance in privacy – see, for example, this article at the Huffington Post, which claims that Facebook’s lists don’t address the problem ‘in a meaningful way’, when in fact the features are almost equivalent. This honestly baffles me, and I can’t help wondering if a certain audience is biased against Facebook and in favour of Google. MG Siegler at Techcrunch says that Circles are “the most visually appealing and simple way to create groups”, and maybe that’s true, but Google will need to do better than something that can be hacked up for Facebook in one night to compete at this game. Besides, the same writer reported that nobody wants to make lists – so making it easier is still probably just catering to a tiny minority, the minority who cared enough to go to the effort of doing it on Facebook anyway.

G+ promises more features for the future, such as Huddles (group messaging of some sort – although Buzz wasn’t too great on that score), Hangouts (group webcam stuff, which I don’t think will take off), etc – but little of this is in place yet. What about the stuff it doesn’t do?

Currently profile customisation is limited, so unless you already know someone, there’s little to suggest you might want to befriend them. This is pretty poor if you want to extend your social graph. This was a massive part of MySpace, a somewhat diminished part of Facebook, is pretty poor on Twitter, and is currently looking like being almost extinguished on Google Plus. Personally I think this gradual decline is a big problem, but that’s a post for another day.

There’s not yet any G+ equivalent to Facebook Chat, which is used very extensively (1 billion messages per day 2 years ago, and I see no reason why that would have dropped since then). As I mentioned above, my older and more techy friends are quite different and prefer just to use email, but the fact is that email is dying off among new users and being replaced by the various FB messaging systems, FB Chat being a key part of that. If G+ doesn’t cover that base, it won’t reach those people, the ones who are not interested in having to keep a separate window open for email, and for whom the idea of needing a standalone email client is as bizarre and anachronistic as needing a Gopher client.

There also appears to be no direct private messaging. I’ve had various people suggest workarounds for this – limiting a stream post’s privacy to just 2 people (which it would seem will scroll off your news stream and disappear along with everything else, and won’t be immediately obvious as a direct and personal message), using Google Chat (which most users don’t have enabled), sending an email (which currently appears to be impossible since the default seems to be to disable receiving email within G+, until you reconfigure your account – and quite frankly I don’t want to have to leave the site to respond to messages), etc.

(Oh, and since I mentioned that a message can scroll off the bottom of the stream, I should probably point out that you currently can’t search most of this content. That’s right, the undisputed kings of search have released a social network with a worse search capability than Facebook. What’s that all about? When a company fails to exploit its key advantage you have to suspect genius, idiocy, or complacency. I’m guessing it’s a 50/50 split of the last 2.)

Here’s a massive one – G+ doesn’t have events. This might seem trifling to a lot of people who don’t have a very active social life with their online friends but for the Facebook generation FB Events are key. If you’re not on the FB event invite, you typically miss the event entirely. Only a tiny minority of us on Facebook are supplementing it with iCal or Google Calendar or whatever – most FB users just let Facebook tell them what is happening and when. And this is what I meant by saying Facebook is like Outlook for young people – it’s got all your messaging, your contact lists, and your agenda right there. The tools it offers compared to Office or various standalone tools are anaemic by comparison but that isn’t a problem for most users. Possibly the weirdest thing here is that Google already have a Calendar app, but it’s not integrated into G+ at all – why? This seems to me like a massive oversight.

Finally, there are currently no business or brand pages on G+. I can see why they might not want to have it flooded with cybersquatters from day one, but on the other hand being able to opt in to stuff that interests you is a big draw for Facebook users. G+ makes it easy to follow celebrities and celebrity bloggers but that’s not really the same. And it’s actually hard for such celebrities to moderate the comments on their public posts, as a post from Tom Anderson points out. (Yes, the MySpace Tom Anderson.) They do have a plan for future 3rd party development – you can sign up here – but we won’t know how well this works out for a while yet.

And yet despite all this, on my Google+ stream I see a lot of people feeling hopeful that this is the site they’re looking for. This seems to come from a mixture of 3 types of people:

  1. the people who didn’t realise that Facebook already did everything they wanted (eg. supposed tech experts who couldn’t or wouldn’t perform the 3 extra clicks to set up a Facebook friends list)
  2. the people who hate Facebook on principle, because of their feelings towards their business practices or the attitude of Mark Zuckerberg (currently the most followed member of Google+, ironically enough. Tom Anderson was #16 at the time of typing this.) Many of these people are having fun filling their G+ feeds with animated gifs of various representations of Facebook being defeated by Google. It reminds me a bit of the Sony vs Microsoft vs Nintendo wars in that a lot of this seems to be as much about tribalism as it is about actual features. But G+ suits the needs of these people, which is fair enough.
  3. the people who find Facebook annoying for technical reasons. This latter group seems to want extra privacy, less ‘noise’ (ie. fewer friends, no strangers, the ability to mute comment threads), no distracting chat (“we have email for that!”), and so on. In my opinion – and I admit this is making an arbitrary distinction to back up my own point – these people are not really looking for ‘social networking’, but for something smaller. It’s the equivalent of preferring to have friends round your house to going out to bars and clubs to meet people. Facebook performs both roles, and thus suits the younger demographic who wants both, but the older demographic only wants the quieter option, usually with their existing social circles that they have little interest in expanding, and finds the general social networking aspect annoying, intrusive, or both. For these people, Google Plus is potentially the perfect solution – assuming enough of their friends will migrate, or maintain accounts in both places.

But as it stands, Google Plus isn’t in a position to supplant Facebook, as the feature set is not comparable and the missing features are actually quite vital to Facebook’s offering. With the convenient asymmetrical following system but not much else, it’s more like a superpowered Twitter. In fact, it’s hard to see how Twitter could have survived against Google+ if they had been released at the same time, and even now G+ may well sap people away from Twitter one by one anyway as people tire of 140 character limits in an increasingly post-SMS world. But Twitter is not the arch-rival to Google in the advert-selling and web-owning space that Facebook is, meaning this comes across as Google picking entirely the wrong battle.

So, what will happen? I anticipate Google rolling out fixes for most of my complaints – but the buzz (pun not intended) will have died down by then. That would leave a potentially compelling platform for the techy people who stay on G+ and no real reason for anybody else to leave FB, especially when network effects are considered. It’ll take either some sort of catastrophe on Facebook or some sort of amazing new feature on Google Plus to bring about a mass migration on a scale large enough to overcome that inertia, and until that happens, there’s a risk that the early adopters on G+ will be forced to come back to where the people are.

But if nothing else, at least the existence of a decent competitor should keep Facebook on its toes, and prevent it from radically mistreating users, now that they have something that should eventually shape up to be a real alternative. The next year will be interesting, at least.


Passwords and authentication

by on Jun.19, 2011, under Uncategorized

With the recent security problems leading to many passwords being exposed (eg. Hackers claim they stole a million Sony passwords, or the Gawker hack) there have been several security analysts – some professional, some self-appointed – doing analysis of the stolen data to pass judgement on what the general public chooses to use as account passwords. Examples include “LulzSec E-mail Hack Proves We’re Lousy at Picking Passwords” (PCWorld.com), “Statistics of 62K Passwords” (codelord.net), and “A brief Sony password analysis” (TroyHunt.com).

They all conclude that users are employing poor security practices and should be creating better and more diverse passwords. Yet this ignores the fact that not one of these accounts, as far as we know, was compromised by having a poor password. They were compromised by companies having poor data security. The guessability of your password, or whether it appears in the dictionary, becomes completely irrelevant when it gets posted up on a website. And before that point, it’s only vulnerable if it’s a common word – Troy Hunt’s analysis of the leaked login data shows that the top 25 most common passwords are used by only 2.5% of accounts. In other words, an attacker has to try at least 25 passwords against an account just to get a 1 in 40 chance of getting in – and few sites I know of will let you continue to attempt to log in that many times. You can take the reverse approach, of taking one or two of the most common passwords and trying a bunch of usernames, but if usernames aren’t exposed to the public then this is non-trivial, and again a site can limit the number of login attempts from one address to make these attacks much less practical. In practical terms the only people at risk with their bad passwords are the 5% or 6% who choose things like ‘123456’ or ‘password’ – the rest, who might just choose a dictionary word, have nothing to fear if the site is coded with any attention to security. Programmers like to remind each other that “obscurity is not security”, so why do many of the same programmers seem to think that more obscurity from users is the most important part of securing their accounts? Surely that is a double standard, best addressed by developers rather than by users, by ensuring repeated login attempts will not succeed.
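
For what it’s worth, that developer-side fix is not complicated. A minimal sketch of throttling repeated login attempts might look like the following – the limits and names are illustrative, and a real system would persist the counts and expire them sensibly, but even this is enough to stop someone cycling through the 25 most common passwords against every account.

    import time

    MAX_ATTEMPTS = 5
    LOCKOUT_SECONDS = 15 * 60

    failed_logins = {}   # (username, ip_address) -> (count, time of first failure)

    def allow_login_attempt(username, ip_address):
        key = (username, ip_address)
        count, first_failure = failed_logins.get(key, (0, time.time()))
        if time.time() - first_failure > LOCKOUT_SECONDS:
            failed_logins.pop(key, None)   # old failures have expired
            return True
        return count < MAX_ATTEMPTS

    def record_failed_login(username, ip_address):
        key = (username, ip_address)
        count, first_failure = failed_logins.get(key, (0, time.time()))
        failed_logins[key] = (count + 1, first_failure)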

What about the more reasonable issue of people sharing one password across multiple sites? Again the data suggests that over 90% of people who had accounts with both Sony and Gawker used the same password for both. Sounds bad, in isolation. But consider the alternative. A password has to be remembered to be used. Either you remember it yourself, or a tool has to remember it on your behalf. In the former case, given how many different online accounts a person might have (I have about 30 that I use regularly, and about another 100 used occasionally), this is simply not practical. Adding so-called ‘best practices’ such as ‘using a mix of uppercase letters, numbers, and special characters’ into the mix starts to make passwords even harder to remember, since case-sensitivity is a particularly geeky perversion and most people don’t think of numbers and punctuation as part of a ‘word’. If they forget a password as a result, they have to ask for it to be reset, usually because the website is unable to send the password back to them and unwilling to store a hint on their behalf. And if they find themselves doing this often, they’ll just switch to a simpler password that they won’t forget. The upshot is that a normal human is going to need to reuse passwords if they’re memorising them by hand.

This, of course, is why so many experts suggest the use of a password manager. Something like LastPass – except, oops, it seems they’re not completely secure either. Imagine if you lost access to pretty much all of your online accounts because of a problem like this. Without wanting to criticise LastPass – their security issue is trivial and was handled much better than Gawker’s or Sony’s – it seems little short of madness to me for anybody to want to entrust all the keys to their digital kingdom to a 3rd party somewhere on the internet, because not only is that a single point of failure for your own authentication, it’s also an incredibly attractive target for hackers, who know the potential payoff of getting in would be great.

What about an offline password manager? KeePass is one highly regarded option in this area, and their website says: “you should use different passwords for each account. Because if you use only one password everywhere and someone gets this password you have a problem… A serious problem.” True enough. It goes on to say, “You can put all your passwords in one database, which is locked with one master key or a key file. So you only have to remember one single master password or select the key file to unlock the whole database.” So… just one password for everything? Isn’t that still “a serious problem”? To be fair, there is an added layer of security here in that at least this stage is being carried out on your own computer rather than across the internet, but now you’ve lost the benefit of being able to authenticate wherever you go. The tech-savvy can perform a variety of contortions using some permutation of USB keys and portable apps, encrypted password files in a DropBox or cloud storage, apps on their smartphone, etc., but this is all no good for the average user.

Of course, I’m glossing over another point, and that is that most users already have a single point of failure for their authentication – their email account. I can have completely random 15-character-long passwords that are unique across all my accounts, but all that is for nothing if someone gets access to my email account. They can then go to pretty much any site and choose the ‘reset my password’ option and pick a new one to gain instant access. Perhaps the moral of the story there then is just to make sure that your email password is unique, complex, and unguessable. Then pick as many different passwords for the rest of your accounts as you can remember.

The lax attitude towards security of typical users isn’t the problem – passwords are the problem. In real life we try to limit the number of different types of authentication a person needs to be able to produce, and we try to make them simple to use. But computer security experts push us to create more and more types of authentication, and ask us to make them complicated, which is impractical.

Similarly, passwords themselves wouldn’t be anywhere near as big an issue if they weren’t getting leaked onto the internet in their tens of thousands due to poor security on the part of the companies. If companies do need to store your identity data then they need to do it properly. Expecting every programmer to be fully competent in security issues is a pipe-dream, but at the very least companies should be compelled, perhaps by local law, to take reasonable steps to protect the important stuff. No hacker should be able to get away with anything better than a salted hash that is impractical to crack.
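
Storing a salted hash is not exotic, either. A minimal sketch using nothing but the Python standard library might look like this – a dedicated scheme such as bcrypt or scrypt would be a better choice in practice, and the iteration count here is just a placeholder:

    import hashlib
    import hmac
    import os

    ITERATIONS = 100000   # deliberately slow, to make brute-forcing expensive

    def hash_password(password):
        salt = os.urandom(16)   # a fresh random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest     # store both; the password itself is never stored

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))   # True
    print(verify_password("123456", salt, digest))                         # False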

So what’s the alternative? One would be to make more use of authentication systems like OAuth or OpenID, which allow you to use a small number of authentication methods across a large number of sites, a bit like how a national identity card or a driver’s licence can be presented to many different service providers to prove your identity. This allows users to employ a few strong passwords and still have many accounts. It also means there are no confidential authentication details on most of the sites you use, so they have no passwords to leak no matter how poor their security.

Of course, the downside is that the auth providers again become a single point of failure – how do you keep someone out of your Google/Twitter/Facebook when they themselves are just protected by a password? Perhaps the answer there is to make auth providers require more than a password – a date of birth, a secret question, maybe even biometrics. Some laptops come with fingerprint readers now – is that going to get more practical? Maybe some other hardware-based approach like the World of Warcraft authenticator would be a start, as it seems to create one-time hashes that would be hard to spoof without possessing the device. Whichever methods were chosen, it would arguably be worth taking a significant hit to convenience for this initial login to your authentication provider if you know you don’t have to perform it for every site you log in to.
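
Those one-time codes are less magical than they look. Here is a minimal sketch of the idea – a shared secret plus the current time run through an HMAC, roughly along the lines of RFC 4226/6238, with a placeholder secret. The point is that without the secret held inside the device, an attacker can’t predict the next code.

    import hashlib
    import hmac
    import struct
    import time

    SECRET = b"secret-provisioned-to-the-device"   # placeholder shared secret

    def one_time_code(secret, interval=30, digits=6):
        counter = int(time.time()) // interval      # changes every 30 seconds
        message = struct.pack(">Q", counter)        # 8-byte big-endian counter
        mac = hmac.new(secret, message, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                     # dynamic truncation
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % (10 ** digits)).zfill(digits)

    print(one_time_code(SECRET))   # the server computes the same value and compares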

On the whole though I can’t claim to have a good answer here because I’m sure a competent security professional can point out flaws in the previous two paragraphs. And who is to say that Google or any other major authentication provider will do a better job than LastPass at keeping your data hidden? But I will stick by my assertion that exhorting users to use a variety of complex passwords is – except in the case of their email account – a pointless and counterproductive measure.

