28 July 2014
At this month's Londroid Meetup at Skills Matter, developers had the chance to learn more about how to create successful apps for Google Glass. It was an opportunity to see the devices and, more importantly, to hear from Google representatives about what the company has learned about making Glass apps through trial and error. Senior developer advocates Hoi Lam and Timothy Jordan gave talks, sharing their favourite apps and best practices for Google Glass.
Here are the key takeaways I noted:
- Know your technologies. There are three ways to develop Glass apps. You will be able to use the upcoming Android Wear notifications, which will push notifications from your Android phone to your Google Glass. If you already have a web service, you can use the Google Mirror API to deliver its content into Google Glass with minimal changes. Or you can use the Google GDK (Glass Development Kit), which adds Glass-specific features to Android, including voice triggers and gestures. The Mirror API can get your app implemented more quickly, but content is delivered from the cloud, so it's best used where a short delay in getting information will feel natural to the user, such as in a search application. The GDK is the right choice when your app needs to work offline, respond in real time, or integrate more deeply with the hardware.
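To give a flavour of the Mirror API route, here's a minimal sketch of inserting a simple text card into a user's timeline. It's in Python rather than Glassware's native Java, purely for brevity; the endpoint and the `text`/`speakableText` fields come from the Mirror API's timeline resource, while the function names are mine and obtaining the OAuth access token is assumed to be handled elsewhere.

```python
# Sketch: push a static text card to a Glass timeline via the Mirror API.
# Assumes you already hold a valid OAuth 2.0 access token with the
# glass.timeline scope -- acquiring one is out of scope here.
import json
import urllib.request

MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_item(text, speakable=None):
    """Build the JSON payload for a simple text card."""
    item = {"text": text}
    if speakable:
        # Spoken when the user chooses "Read aloud" on the card.
        item["speakableText"] = speakable
    return item

def insert_timeline_item(access_token, item):
    """POST the card to the user's timeline (requires network access)."""
    req = urllib.request.Request(
        MIRROR_TIMELINE_URL,
        data=json.dumps(item).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build an example payload -- no network call is made here:
card = build_timeline_item("Hello from the Mirror API")
```

The point of the design is that your web service does all the work server-side and Google handles delivery to the device, which is why the Mirror API is quick to adopt but unsuitable for offline or real-time use.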
- Design around user intent. Although I'm talking about applications here, Glassware, as Google calls it, isn't like a phone or tablet app. Users won't expect to open an app and then do something. Instead, you should design around their intent and cut the time between the user expressing that intent and you delivering a result. For example, a tablet user might open a mapping app and then search for directions. A Google Glass user would just ask for directions.
- Design for Glass. App developers are experienced at porting from "one rectangle to another", but Google Glass isn't just a smaller rectangle. It's so different from existing mobile platforms that you need to rethink the experience you provide. Jordan suggested thinking about what you've always wanted to do with your services that you can now do with Google Glass, and then focusing on that one feature. Word Lens, for example, translates signs through the camera, leaving your hands free for maps and baggage. Augmedix enables doctors to look up medical information without turning away from the patient, helping them build stronger rapport and better observe the patient's condition.
- Don't get in the way. Google's vision for Glass is that it should not interrupt the user and take them out of the moment. Lam had a great example of how he's been using Google Glass to photograph his daughter while playing with her. He can look at her all the time and use both hands to guide her on the climbing frame, but can capture that moment using Google Glass. Google's aim is that the device should enhance living in the moment, rather than pulling people out of it. As a result, users should be able to ignore your Glassware without any penalty, and it shouldn't require them to take action.
- Be relevant. Google Glass isn't designed to provide all the information, all the time, just what's most relevant now. Think about how you can use contexts like the time of day and the location to provide relevant information to the user when they need it. For example, it would be great if a shopping list app presented a reminder just as you're passing the shop, Jordan suggested. Because of the need to focus on what's relevant now, you can't just port your mobile apps across and expect to end up with something usable.
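Jordan's shop reminder is easy to sketch: the Glassware only surfaces its card when the user's location falls within some radius of the shop. Here's a minimal, hypothetical proximity check (the function names, coordinates and 150 m radius are my own illustration, not from the talk):

```python
# Sketch: show a contextual reminder only when the user is near the shop.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def should_show_reminder(user_pos, shop_pos, radius_m=150):
    """True when the user is within radius_m metres of the shop."""
    return haversine_m(*user_pos, *shop_pos) <= radius_m
```

In a real app the same gate could combine other context, such as time of day or whether the list still has unchecked items, so the card appears only when it's actually useful.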
- Avoid the unexpected. One of Google's guidelines is that you shouldn't surprise the user in a way they don't want. This probably applies to every platform, but it's especially important for Glass, because the nature of the device means that unpleasant surprises would feel particularly intrusive. Nobody wants to see cabbage adverts in their glasses at 3.30am. Jordan presented CNN as a good example of managing expectations. When the software is configured, it makes it clear to the user how many alerts they will see and when they're likely to see them.
- Build for users. Jordan said that Google had tried building some applications that didn't work, and advised against building something just because it seems cool and takes advantage of the technology. Instead, focus on a problem that people have and help them solve it. Again, this might seem like advice that applies to every platform, but it's still a good filter to put your idea through before developing it. One idea that came up was a facial recognition app that would tell you who somebody was when you met them again, in case you'd forgotten their name from a previous encounter. That seems cool, but it would get in the way of the real social interaction, and in practice I think it would be hard to use without the other person knowing it was being used on them, which would be a particularly graceless way to socialise. (Facial recognition apps are currently not allowed on Google Glass in any case.)
- Be wary of head gestures. They can be a cool way to control Google Glass, but head movements lack fine motor control, so they're a poor fit for anything that requires precision, including scrolling.
- Make it glanceable. Jordan presented two designs for a running app, both showing the steps taken and the percentage of the run completed. One design had a photographic backdrop of running shoes. When the room was polled on which design they preferred, it was evenly split. Eye-tracking studies, though, had shown that the screen with the shoes took longer to take in: the eye is distracted by irrelevant information and graphics, and when you're running, you can't absorb information as quickly. Keep the design simple, and think about what the user really needs. During the run, they need coaching support, not a complete data set.
- Expect design delay. Plan to spend time iterating on the user experience. Until you've tried it, you won't know what works effectively, so allocate plenty of time to experiment with this new interface.
It's early days for Google Glass, and there was a lot of excitement in the room just seeing the devices and trying them out. There's clearly a lot of potential for innovative new applications, particularly for platform-based services, which will be better able to build a business model on a device that is not conducive to advertising, and that does not yet support monetisation. What are your thoughts on Google Glass?