Living in a DSL world

December 18th, 2014

I don’t live way out in the boonies somewhere; I’m 5 minutes from a Starbucks.  I have LTE at home.  But the best level of Internet service available where I live is DSL: 6 Mbps down, 800 Kbps up.  This means I can upload at about 80 KB/second.
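
To put numbers on that, here’s the back-of-the-envelope arithmetic. The ~20% overhead figure is my estimate of protocol loss, and the photo and backup sizes are assumptions, not measurements:

```python
# Rough upload-time arithmetic for an 800 Kbps upstream link.
LINK_KBPS = 800            # advertised upstream, kilobits per second
OVERHEAD = 0.20            # assumed protocol/framing overhead (estimate)

effective_kb_per_s = LINK_KBPS / 8 * (1 - OVERHEAD)   # ~80 KB/s
print(f"Effective upload rate: {effective_kb_per_s:.0f} KB/s")

photo_mb = 3.0             # assumed size of a typical iPhone photo
seconds = photo_mb * 1024 / effective_kb_per_s
print(f"One {photo_mb} MB photo: ~{seconds:.0f} seconds")

backup_gb = 5.0            # assumed size of a modest device backup
hours = backup_gb * 1024 * 1024 / effective_kb_per_s / 3600
print(f"A {backup_gb} GB backup: ~{hours:.1f} hours")
```

Even a small backup monopolizes the uplink for most of a day, which is why the rest of this post happens.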

For all you kids with your DOCSIS or VDSL or FTTH or whatever you’ve got that gets you upstream connectivity of more than 2 Mbps: you don’t know what it’s like living with slower internet. So it’s easy to write stories about how ChromeOS is the future or how online backup is the way to go.

I don’t back up my devices to iCloud, because when I do, whenever anyone plugs an iPhone or iPad into a charger, the rest of the home network becomes unusable for hours. Saturating the upstream adds huge latency to every network request, so even something normally fast, like loading a web page, takes many seconds, because every one of the requests needed to fetch the page and its pieces pays that latency penalty.

Loading a single site right now can involve over 200 network requests. It takes a few seconds to load without a backup running; with a backup going, it takes 15 seconds before the site shows any data. This makes web surfing unbearable.
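
Here’s a toy queueing model of the effect. Every number in it is an assumption chosen for illustration, not a measurement, but it shows how a full send buffer poisons every round trip:

```python
# Toy model of why a saturated upstream makes page loads crawl.
UP_RATE_KBS = 80          # effective upstream, KB/s
BUFFER_KB = 64            # assumed modem send-buffer depth
BASE_RTT = 0.05           # idle round-trip time, seconds
REQUESTS = 200            # requests needed to load the page
CONCURRENCY = 6           # assumed parallel connections in the browser

# Every outbound packet (even a tiny HTTP GET) waits behind the
# full send buffer, so each round trip inherits the queueing delay.
queue_delay = BUFFER_KB / UP_RATE_KBS          # 0.8 s
rtt = BASE_RTT + queue_delay

rounds = REQUESTS / CONCURRENCY                # serialized batches
print(f"RTT while saturated: {rtt:.2f} s")
print(f"Page load lower bound: {rounds * rtt:.0f} s")
```

With these made-up numbers the model predicts tens of seconds per page, which is the right order of magnitude for what I see.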

Reading Fraser Speirs talking about the Post-Mobile Era is frustrating, because while I would like to believe that the future is more cloud and less state, I just can’t see it for anyone without a better network connection than mine.

Here’s what it’s like with this level of DSL.

I can join a Skype call no problem (QoS on the router ensures that the Skype traffic gets priority), but trying to share my screen and talk at the same time doesn’t work well. Too much latency. And sometimes, if the folks upstairs are watching Netflix or otherwise using the network, the Skype call will get choppy. I literally unplug the rest of the house from the network when I need a stable Skype call.

Streaming a video game session from the Wii U or PS4 to something like Twitch or Ustream?  Nope, can’t do that.

iCloud backups are off, as I mentioned. I put up with the network hassles of uploading photos to iCloud because the benefits of having them there are worth it, but it does mean whenever I get home after taking some photos, the network sucks for a while.  It takes about a minute per photo to upload.

At one point I had TestFlight automatically uploading symbols for builds to their server, and something was killing my network; it took me a while to figure out that TestFlight was the cause.

Streaming video is hit and miss. I can usually watch a video from iTunes if I let it buffer a while before I start watching, but it really depends on what else is going on on the network. If one of my kids is doing something that uses bandwidth, then video won’t play without interruption.

The level of DSL service I have is not uncommon. It’s considered “high speed”, and as far as I can tell, isn’t going to be upgraded any time soon. There are government programs to bring faster internet to rural areas, but that only applies to people still on dial-up. For me, this is likely as good as it gets unless I’m willing to move.

Apple’s UI API Trend

November 27th, 2014

Since iOS and UIKit, Apple has produced three products with three new UI toolkits:  Apple TV, CarPlay, and WatchKit.

In all three instances, the architecture they’ve chosen is one where the UI is essentially a runtime.

There’s a reason that all Apple TV apps look the same: the “app” just provides data for the UI to present. NSHipster’s class dump of the BackRow classes doesn’t suggest that this is a framework Apple intended to expose to applications; rather, I think those are the classes the runtime uses, and the various applications essentially talk to the UI server through a pipe.

That’s sort of how the CarPlay SDK works.  You provide the information to present (via MPPlayableContentDataSource), but CarPlay chooses how to present it.

WatchKit is a bit more flexible, in that the data you’re delivering to the watch over the pipe also includes a storyboard that the runtime uses to “play” your app, but there’s still a pretty solid separation between the application and the view. 

On iOS, the system tells you when it’s time to draw and provides a surface to draw on and a rich set of drawing APIs. This makes literally anything possible. None of Apple TV, CarPlay, or WatchKit provides this.
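
Here’s a hypothetical sketch of that split. The names and structure are mine, not Apple’s, but it captures the idea: the app only answers questions about its data, while a system-owned presenter makes every rendering decision.

```python
# Hypothetical sketch of the "UI as runtime" split: the app is a pure
# data source; the system-owned presenter decides how things look.
class MenuDataSource:
    """The 'app' side: supplies content, knows nothing about pixels."""
    def __init__(self, items):
        self._items = items

    def number_of_items(self):
        return len(self._items)

    def item_at(self, index):
        return self._items[index]


class SystemPresenter:
    """The system side: one fixed presentation for every app."""
    def render(self, source):
        # The presenter, not the app, picks layout, fonts, and chrome.
        lines = [f"> {source.item_at(i)}"
                 for i in range(source.number_of_items())]
        return "\n".join(lines)


app = MenuDataSource(["Albums", "Artists", "Playlists"])
print(SystemPresenter().render(app))
```

Swap in a different data source and the output changes; swap in a different presenter and every app’s look changes at once. That’s why all Apple TV apps look the same.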

I’m curious to see whether native WatchKit apps will use the same architecture but run locally on the watch (still with no direct access to rendering), or whether Apple will provide an actual drawing API.

Apple Pay vs CurrentC

October 26th, 2014

Apple timed the introduction of Apple Pay exactly right for the US market.  Retailers are just in the midst of a switch to NFC payments, and are making changes to their payment terminals. It’s hard to convince retailers it’s worth making a change, but they’ve been convinced that they need to change something by the uptick in fraud, and Apple’s token-based payment system is a great system for reducing fraud.

But Apple isn’t the only company that can see the opportunity for change. CurrentC, a new system being developed by a consortium of retailers, is a different thing altogether: it routes around the credit card companies and withdraws money directly from your bank account.  Think of it as Apple Pay for your debit card.

It can’t be as integrated into iOS as Apple Pay, so it’s a much more cumbersome process. And it doesn’t have the user protections that Apple Pay has built in, such as using payment tokens instead of sharing your account details with the merchant.  But to the merchants, that latter one is a feature, not a bug.

But really, it’s about the cut that the credit card companies take. About 2%, I believe, is what’s at stake here for the CurrentC merchants. And think of the scale here:  That’s 2% of all retail transactions. That’s how much money is going to the credit card companies today, and that’s the amount that these merchants have set their sights on.

There are so many issues here. I want to look at just a couple.

My Visa card provides quite a few services for their fee. Extended warranties, for example. Travel insurance. Rental car insurance. Credit card companies have been offering these sorts of deals for a while, because they have an incentive to attract customers and this is one of the ways they compete with each other.

Now think about the same situation with CurrentC. Rather than using the 2% to compete for your card business, they can use the 2% to compete with other retailers. CVS can offer coupons, points, and other loyalty programs, to attract you to CVS. And I can absolutely see this being attractive to consumers. While I personally hate the whole loyalty card movement, I know plenty of people who collect their points at various stores and it does affect where they shop.

One concern I have is that the 2% that Visa is taking is used to pay for the services that Visa offers. Fraud protection, for example. You can easily contact Visa and have a charge reversed, and Visa will take the money back from the merchant, who must then prove that you owe it.  A system that’s run by the merchants will shift the balance of power.

And I see some parallels here between companies like Netflix, that are trying to route around cable companies, and CurrentC, trying to route around the credit cards. In both these cases, however, the power lies with the eventual service provider, and that eventual service provider is the same company that they’re trying to route around!

Netflix would be happy if you were to cancel “cable” and switch to Netflix. But how is that Netflix service getting into your house?  Through wires that probably belong to your cable company. If the cable business goes away, you can bet the price of internet service will increase to cover the loss of profit.

And I expect the banks will do the same to CurrentC. You’re taking all our Visa business away? Fine, the fees for doing your direct withdrawals are going up. 

The consumer loses out on every side. They’re getting fewer of the purchase services from their credit card company, they’re losing their personal information to the retailers, and they have less opportunity for recourse when things go bad.

I hope this doesn’t turn into a war between Apple Pay and CurrentC, where retailers pick one or the other, but it feels like that’s the direction it’s going. And we’re stuck right in the middle.

Apple Handling of Non-Reproducible Bugs

October 4th, 2014

The most frustrating thing about developing in Apple’s ecosystem today, for me at least, is bugs that are difficult to reproduce.

I have two separate issues right now where customers write to me because they’re having a problem, and I can’t reproduce that problem. In both of these scenarios, asking the customer to reboot their phone fixes the problem.  I’ve seen other companies do the same.

This should never happen. An app should never be able to get the system into a state where OS-level functionality (in my case, iCloud document sync and email) stops working in such a way that the system needs to be rebooted to fix it. That’s an OS bug.

I’ve attempted to report these bugs through Apple’s Radar process, but it always seems to stop dead at the fact that I can’t give them a reproducible scenario. Occasionally they’ll ask for logs, and then the bug gets closed as a duplicate.

A lot of the problems we’re seeing with iOS 8 are not easily reproducible, and I wonder if this isn’t a sign of a bigger problem with the bug reporting system and its handling of problems that are difficult to reproduce.

The iMessage problem that plagued so many people for years is a perfect example. I haven’t seen it happen in Yosemite yet, but for at least two major OS releases, there’s been a problem where some users find message delivery unreliable. It wasn’t just me; it’s not hard to find people talking about iMessage delivery reliability issues.

How did this bug survive for so long? I don’t know Apple’s internal processes, but it seems like these difficult-to-reproduce problems fall through the cracks, and persist for far longer than they should.

It’s often not clear who should own these bugs. If iCloud sync stops working, whose bug is it?  There are probably half a dozen subsystems involved, and coordinating reproducing the bug and fixing it is no easy task. And it’s probably a task with no explicit owner: the bug belongs to whoever it’s assigned to at the time, but once they get around to investigating it and figure out that it seems to be a bug somewhere else, it gets reassigned and the process starts over.

I’ve been thinking about how I’d solve this problem, and my proposal is that once an issue reaches a certain level of notoriety, it should be assigned to a person whose job it is to own that bug. Someone who is outside the various teams involved, and can follow the bug wherever it leads. This person would be the owner of maybe 5 bugs at a time, and that’s their full-time job: to contact customers who are having the problem, arrange for instrumented builds to capture information about when the problem happens, whatever it takes.

Apple is suffering a pretty severe reliability hit right now with iOS 8 and all the problems that are plaguing people. I’m sure the teams are busy enough just fixing the issues they can reproduce, but that’s what makes these other issues last so long. There’s always a bigger fire to put out than a bug that’s affecting a tiny percentage of users, but at Apple’s scale, that tiny percentage of users is still a lot of people.


UITableViewCell, auto layout, and accessoryType

October 4th, 2014

I just burned a few hours on this:

Auto layout for UITableView cells is awesome. It’s so much easier than what we had before, which was measuring cells before they existed.

With auto layout, as long as you specify the height of the cell’s contents in relation to its superview (by having constraints connecting vertically from the top edge, through all the subviews, to the bottom edge), UITableView figures out the height by solving the constraints.

The catch is that if you add an accessory, like UITableViewCellAccessoryDisclosureIndicator, this breaks. It seems to work okay in Interface Builder, but at runtime, a UILabel that is supposed to wrap text won’t wrap it correctly, and the cell height comes out wrong.

It’s easy enough to reproduce, in my case at least.  Create a new project, add a table view, add a cell whose content is a label, constrain it to the edges of the cell, and then at runtime, set enough text that it needs to wrap.  With no accessory view, the label wraps fine. Add the accessory, and it stops wrapping.
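
Here’s a toy model of what I think is going on; the widths and metrics below are made up, and this is my guess at the mechanism, not UIKit internals. A wrapping label’s height depends on the width it’s measured against, and the accessory narrows that width:

```python
import math

# Toy model: a wrapping label's height depends on the width it's given,
# and an accessory view narrows that width. All numbers are illustrative
# assumptions, not UIKit internals.
CELL_WIDTH = 320
ACCESSORY_WIDTH = 33      # assumed width taken by the disclosure indicator
CHAR_WIDTH = 8            # assumed average glyph width
LINE_HEIGHT = 21
PADDING = 2 * 8           # assumed leading + trailing margins

def label_height(text, available_width):
    chars_per_line = (available_width - PADDING) // CHAR_WIDTH
    lines = math.ceil(len(text) / chars_per_line)
    return lines * LINE_HEIGHT

text = "x" * 100  # enough text to need wrapping

# Height solved against the full width vs. the accessory-narrowed width:
assumed = label_height(text, CELL_WIDTH)
actual = label_height(text, CELL_WIDTH - ACCESSORY_WIDTH)
print(assumed, actual)  # prints: 63 84
```

If the height is solved against the full width but the label is laid out in the narrowed width, the last line gets clipped, which matches what I see.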

Radar logged.

Swift Inexperience

September 25th, 2014

My two cents on David Owens’ take on Swift Experiences.

In the section on Modern Syntax, David makes some points that could be valid in some contexts.

The example that’s showing the modern syntax for sorting an array of strings *is* fraught with ambiguities and complexities.  But this is a shortcut syntax, intended to be used in cases where those ambiguities and complexities aren’t a problem.  If the definition of the types isn’t obvious, then I think you shouldn’t use this syntax.  It’s a decision left up to the programmer.

Generics are an awesome feature for building collections, and a terrible idea for almost anything else. I like typed collections. I like that when I pull an item out of a collection, the compiler and IDE know what it is.

Operator overloading is great when the result is clearly what you would expect it to be (meaning, for example, that adding two items yields a third item that is what you’d expect to get when you add those two items), and a terrible idea otherwise.

Some of the items in his “A Lot More to Say” section are not design choices, but implementation problems that have no doubt arisen because the language shipped a year too early.  The slow, buggy compiler and the poor debugger, for example.  Those will get fixed.

I’m still on the fence about reflection; the ability to have objects opt in to reflection reduces some of my concern, but I do miss the ability to reflect on anything. On the other hand, I’ve seen that feature misused so many times that I’m not sure taking it out of the toolbox is such a bad thing.  The ability to change an object’s behaviour at runtime is both cool and a nightmare to debug when it’s some library doing it in a way that affects your project.

Our usage of the Objective-C language today is based on a lot of convention that has developed over a long time. You absolutely can write strange, obtuse, terrible Objective-C code, but we don’t, because we know better.  With Swift, we don’t have that experience yet.

Perils of Embedded Software

August 7th, 2014

Hardware manufacturers shouldn’t be responsible for software. The problem is crystal clear.

Your new TV has a computer in it, running some embedded OS and some software that came with it. If it’s a smart TV, it probably knows how to connect to the Internet and stream Netflix.

And that’s great, until it stops working. Maybe it’s a Netflix API change. Or some security problem surfaces. Or customers find bugs.

Almost every hardware company treats the software in its devices the way it treats the hardware:  build it, ship it, and then forget about it. If there’s a problem, your recourse is a refund or a repair. But they’re not going to make your older product better.

This is on my mind because I’m buying a car. The car I’m buying is a 2010 Ford Flex, and it comes with Ford’s SYNC system.

SYNC is a small custom computer running an embedded version of Windows, which provides communications and entertainment features. 

The version of SYNC in my 2010 vehicle is no longer being updated.

In practical terms, this means that the next iPhone may not work with my car, and that will never get fixed. Ford’s solution is that I should buy a new car.

Dumb devices are the answer. The car shouldn’t have a built-in computer (beyond what’s needed for basic car functionality); it should have a display that connects to an external, replaceable computer, like an iPhone: CarPlay, or Google’s equivalent. And a TV shouldn’t have an embedded OS; smart TV features should come from something outboard, like an Apple TV.

Why Apple?  Because they’re honestly the only tech company that seems committed to keeping older hardware up to date. Apple’s not perfect, but I can’t think of anyone doing a better job.

I would have included Microsoft in that list, because of their history with Windows, but they’re the ones responsible for the Ford system, and their record of obsoleting mobile phones has been pretty bad.

What other companies are there that care about keeping old hardware up to date?

Making Enough to Afford Marketing

July 29th, 2014

There’s been some chatter going around (spawned by Brent Simmons) about whether it’s possible to run a business solely on the profits of selling an iOS app in the App Store. Here’s my two cents on why the low price of apps is hurting our ability to promote them.

Fall Day Software’s two main apps, Resume Designer and MealPlan, are both doing well, and they’re only available on iOS. 

There’s a good chance you’ve never heard of them, unless you’ve gone looking for an app in one of these categories and run across my products. I don’t advertise them, because I can’t afford to.

In most product businesses, scaling is based on marketing. You figure out a price point and a marketing plan where spending $1 on marketing results in >$1 in sales, and then you turn up the marketing tap until your market is saturated.

Think about the products you buy. How many of them are from companies that you’ve never seen any advertising for? 

In the iOS app world, a focal point of app marketing is the initial press push, because it’s free. We all know how the sales curve for an app promoted this way looks:  an initial spike in sales, and then a drop-off to a trickle.

There’s social, which works great for apps that are naturally social or involve sharing. This doesn’t work well for productivity apps. I want to make it easier for MealPlan users to share meal plans, but my app’s marketing shouldn’t depend on it.

There’s App Store search, which honestly is how I get a lot of my sales. But that’s fickle, and almost completely out of our control.

There’s word of mouth (and I appreciate every one of you). But that’s also hard to scale.

The proven way to scale a product business is through advertising, and we don’t make enough money off apps to afford it. The cost per customer acquired is too high. 
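
A quick sketch with hypothetical numbers shows why the math doesn’t work. None of these figures are mine; the price, cost per click, and conversion rate are all assumptions for illustration:

```python
# Hypothetical unit economics for a paid iOS app; every number here
# is an assumption for illustration, not a real figure.
app_price = 2.99
apple_cut = 0.30
net_per_sale = app_price * (1 - apple_cut)        # ~$2.09 to the developer

cost_per_click = 0.50      # assumed ad cost per click
conversion_rate = 0.05     # assumed click-to-purchase rate
cac = cost_per_click / conversion_rate            # cost to acquire a customer

print(f"Net per sale: ${net_per_sale:.2f}")
print(f"Cost to acquire a customer: ${cac:.2f}")
print(f"Profit per acquired customer: ${net_per_sale - cac:.2f}")
```

With assumptions anywhere in this neighborhood, every advertised sale loses money, so turning up the marketing tap just loses money faster.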

What can we do?

Honestly, I don’t think there’s much that we aren’t already doing. Unless everyone in a given category agrees to raise their price by enough to cover marketing costs, you’ll find it difficult for your $9.99 app to compete with the $0.99 or freemium alternatives. That’s just too much of a price premium, and customers have been acclimated to the lower prices.

Adding a Mac counterpart and pricing it at a premium seems like the best way to go. Mac users are used to the higher prices, and Apple is making it very easy for your Mac app to be a great adjunct to your iOS app. 

Hybrid Handoff

June 24th, 2014

Will Apple ever make a hybrid tablet / desktop computer?

Traditional mouse and keyboard user interfaces don’t work well on touchscreen devices, and apps designed for touch don’t work well with a traditional user interface. They are fundamentally different, so much so that Apple decided, back when first designing the iPhone, to build a completely new UI framework and paradigm for touch-based apps.

A lot of the “can you do real work on the iPad” debate boils down to input. Touch is better for some things, but for anything that involves creating or manipulating a lot of text, a physical keyboard and mouse is better. There are other issues, like screen real estate and the extra widget density afforded by not having to have such large targets for each button, but in my opinion, mouse and keyboard vs a touchscreen is the main differentiator.

It’s not as simple as using a keyboard to type text into an iPad app, because text editing involves a lot more than just typing. Selection, cursor movement, document navigation, these are all things that you expect to be able to do with the keyboard. Many users learn keyboard shortcuts for common operations, and can work almost entirely using the keyboard.

Apple has both traditional and touch-oriented versions of all their major applications. They also have a way for these applications to seamlessly share documents, through iCloud.

And with Handoff, in Yosemite and iOS 8, they now have a way for a user to switch between a touch-based app and a traditional app without losing their place.

The iOS simulator is a complete execution environment for iOS apps that runs on top of Mac OS X. Developers use it to develop their apps, so every iOS app can run on a Mac, in the simulator environment. Convert the simulator into a real runtime for iOS apps on the Mac, and you’ve got the start of a workable hybrid.

Imagine a Mac that had two displays, with one of them being a touchscreen. Run the iOS environment on the touchscreen, and run the traditional Mac OS interface on the other screen. Put one of these touchscreens as the top lid on a MacBook Air style device, and the other as the inside display.

You could use iOS on one side, and Mac OS on the other side. But the really interesting thing is how you could switch between these two environments just by opening or closing the lid of the computer. Start working on a document in Pages for iOS, open the lid, and the document appears in the desktop version of Pages.

One of the missing pieces is security. iOS devices are locked down, to an extent that they could never lock down a Mac. But what if…

There have been rumours of an A7 or A8 based Mac. So let’s say they build that, but for this new Mac, only sandboxed applications are allowed. No user-installable device drivers, no acquiring apps outside the app store. It works just like iOS. Now you’ve got a Mac that’s as locked-down as iOS. It doesn’t have to replace the Mac line; it would be a new category of device.

I’m not sure Apple would ever go there, but I find it interesting how close they are.

Second Tier Upstream

June 23rd, 2014

I get the “Age of Context”. I get the desire to share the stuff you’re doing, the stuff you’re creating, even if it’s fairly mundane.

I have a PS4, and I like that Sony thought sharing was so important that they put a Share button on every controller. It’s easy to take the last few minutes of gameplay and turn it into a video, and then post it online for anyone to watch. This is cool.

You can even live-stream your game.

I use online backup for my Mac, via BackBlaze. It’s great.

My iOS devices automatically upload photos I take to iCloud.  They can even automatically make backups whenever I plug the devices in.

I had to turn this off, as much as I love this feature, because my upstream bandwidth is terrible.

I have DSL at home. It’s the best option available to me, and my upstream bandwidth is 800 Kbps, which works out to about 80 KB/second when uploading.

So if I take a photo on my iPhone, it takes about a minute and a half for it to upload. If I take a bunch of pictures, there goes my network connection for the next hour or so, because while my network connection is saturated with uploads, the latency on downloads goes way up and web browsing becomes nearly impossible.

I have BackBlaze set to only do backups when I’m sleeping.

And we just don’t use the sharing or streaming features of the PS4, as much as my son would like to.

It sucks. And, because I already have “high-speed internet”, it’s not going to get better for me any time soon.

This is nothing new, of course, but I was reminded of it when I looked at DropCam, a connected camera service that uploads video to the cloud so you can watch it from anywhere. I love the idea: a camera that you can just place somewhere, then connect to from your phone or a web browser to see what’s going on or receive motion alerts. But I can’t have a device in my house constantly uploading video. It just wouldn’t work.
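
Some rough numbers show why; the stream bitrates here are assumptions, since I don’t know what DropCam actually uses:

```python
# Assumed camera stream bitrates vs. my 800 Kbps uplink.
UPLINK_KBPS = 800

for label, kbps in [("low-quality stream", 300), ("720p stream", 1000)]:
    share = kbps / UPLINK_KBPS
    print(f"{label}: {kbps} Kbps = {share:.0%} of the uplink")
```

Even a low-quality stream eats a big chunk of the uplink around the clock, and anything decent exceeds it outright.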

And DropCam doesn’t have any option for a local server. It’s the cloud or nothing.

Companies shouldn’t forget that a lot of people don’t have a fast internet connection, and for those that do, sometimes it’s only fast in one direction.  I looked at Manything, as another great-looking camera option, but I can’t use it either for the same reason.

Google clearly wants the “hub” for your home automation to be their servers, and for me, at least, that’s not going to work.