Musings on Mobile Software
Thoughts on disrupting the TV industry

At least since Steve Jobs hinted that Apple were interested in the TV space, there has been lots of speculation on what sort of product they would launch. Since long before then, lots of companies have been attempting to use the Internet to disrupt TV distribution.

A lot of people would like to see the TV distribution model unbundled in a similar way to iTunes’ impact on music. We want to pay for the individual channels, or ideally even just the shows, that we want to watch. Instead we have to pay for large bundles of channels if we want access to the best content. The problem with trying to disrupt this with a new distribution mechanism is that it’s the people who produce and own the rights to the content who insist on selling it in bundles. Replacing the satellite or cable middlemen does not solve the problem. There’s also the issue of the infrastructure investment required to deliver major live events over the Internet, whereas the existing cable and satellite broadcast infrastructure is well suited to this.

So, what could Apple do to revolutionise the TV experience? Making an actual TV set doesn’t make a lot of sense to me. Industry margins are tiny. The bulk of the cost is in the panel, which they’re very unlikely to want to make. Then there isn’t an ideal TV size - it really depends on room size and viewing distance - which creates a need for a fairly wide range of models, not something Apple is particularly keen on. If the content still comes through a satellite or cable box then the viewing experience would be almost entirely outside of Apple’s control.

Another option would be to downsize from the current Apple TV to make something more like the Chromecast: a cheap, dumb wireless connector that allows your iPhone/iPad to push content to your TV. This works for a lot of use cases and is definitely a credible product, but it’s really more of an accessory - it doesn’t seem like the future of something Tim Cook refers to as a “beloved hobby”.

What does that leave? Perhaps a major upgrade to the set-top box? Maybe creating a games console, music and TV system in one. Replace all the boxes in your living room at once. The game controller APIs in iOS 7 made me suspicious - external game controller for an iPad? Very sub-optimal experience, even connected to a TV via AirPlay. The 64-bit processor in the iPhone 5S is another possible hint - most of the performance improvements in the A7 have nothing to do with it being 64-bit. The memory requirements of a games console, or being able to map an HD movie file into the address space would both benefit from the jump beyond 32 bits. Breaking into the standard set-top box replacement cycle is tough but we are somewhat overdue a new games console upgrade (or maybe downgrade, since games for the Xbox 360 and PS3 generation have become too expensive to make and buy - leading to gigantic sales for a handful of titles and weak catalog beyond that).
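The 64-bit point above can be made concrete: memory-mapping a file gives cheap random access to any byte of it, but the whole mapping has to fit in the process address space, which a multi-gigabyte HD movie cannot do on a 32-bit CPU (roughly 4GB total). A minimal Python sketch of the technique, using a small sparse temp file as a stand-in for the movie:

```python
import mmap
import os
import tempfile

SIZE = 64 * 1024 * 1024  # stand-in for a multi-gigabyte movie file

# Create a sparse file of the right size without writing 64MB of data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(SIZE)
    path = f.name

# Map the whole file into the address space; on a 64-bit CPU this
# works for arbitrarily large files, on 32-bit it fails past ~4GB.
with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as m:
    m[SIZE - 1:SIZE] = b"\x01"   # random access at any offset, no seeking
    last_byte = m[SIZE - 1]

os.remove(path)
```

The mapping lets the OS page the file in on demand, so a player can jump to any point in a film without buffering the whole thing - exactly the kind of workload that benefits from a larger address space.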

Where would that leave the premium TV content? Maybe you can connect a cable or satellite feed and access your pay-TV subscription on this box? Sounds crazy - the set-top box space is highly fragmented, right? Broadcasters do try to differentiate with their box technology, but the designs are mostly constrained by the requirement to keep costs low enough for total (or near total) subsidy, whilst also planning for a replacement cycle of around 7 years. The legacy installed base of boxes at any point makes it extremely difficult to innovate with new services at scale. The primary fragmentation in set-top boxes is the conditional access (CA) technology. Almost all cable uses a QAM tuner and a decent fraction of the world’s satellite uses DVB-S2. On the CA side the content encryption is standardised; it’s just the encryption of the keys that is proprietary. It’s possible (and indeed not uncommon on some systems) to use multiple key encryption systems simultaneously. On a permanently Internet-connected device it’s also theoretically possible to send the keys synchronously via a separate channel.
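The “multiple key encryption systems simultaneously” point is essentially the simulcrypt arrangement: the stream is scrambled once with a single control word, and only that control word is wrapped separately under each CA vendor’s proprietary key. A toy Python sketch of the idea - the XOR “cipher” and vendor names are purely illustrative (real broadcasts use standardised scrambling like DVB-CSA, with each vendor’s key wrapping being the proprietary part):

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only; stands in for real scrambling.
    return bytes(a ^ b for a, b in zip(data, key))

payload = b"transport stream"            # 16 bytes of "video"
control_word = secrets.token_bytes(16)   # one CW scrambles it for everyone
scrambled = xor(payload, control_word)

# Each CA vendor wraps the SAME control word under its own proprietary key,
# so multiple CA systems coexist on one broadcast.
vendor_keys = {"ca_a": secrets.token_bytes(16), "ca_b": secrets.token_bytes(16)}
entitlements = {name: xor(control_word, key) for name, key in vendor_keys.items()}

# A set-top box holding either vendor's key recovers the control word
# and descrambles the identical stream.
cw = xor(entitlements["ca_b"], vendor_keys["ca_b"])
assert xor(scrambled, cw) == payload
```

This is why the fragmentation is less fatal than it first appears: a new box only needs to support another vendor’s key wrapping, not a whole parallel content encryption scheme.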

I believe Apple could convince broadcasters they can secure the keys without a separate physical card and they might just be able to use the same model they did in mobile, rolling out with one exclusive deal in each country initially. This would create far more differentiation for the chosen broadcasters than they’ve been able to build for themselves. Apple may even be able to get some hardware subsidy payments through broadcaster channels, although I’d expect the devices to be available without subsidy or pay-TV subscription too. Given the nature of the broadcast technology, satellite seems like the obvious first choice for widest availability and least fragmentation of markets. That said, CableCARD in the U.S. means the CA technology is already defragmented there. This type of product would not enable Apple to unbundle TV (at least in the short term) but they could revolutionise the user experience, particularly from the perspective of interaction with smartphones and tablets.

It’s not an iPhone-sized opportunity but the original Nintendo Wii, for example, sold 100 million units and the PS2 155 million. That’s a meaningful market size for Apple with a potentially wider appeal for improving the TV experience and the additional opportunity for their 30% store cut on top. Getting a large enough installed base of boxes connected to the TV by playing with, rather than against, the incumbents is possibly the best way to disrupt the market. With a (semi-)open content platform addressing Apple’s typical demographic there would be a real opportunity for TV content producers to thrive outside the existing system.

Nokia Sells to Microsoft - Why and what next?

The Nokia board has finally decided to end Stephen Elop’s failed Windows Phone experiment and salvage what they can of the business. This looks like a deal that was driven primarily by Nokia’s board:

1) Microsoft confirms that there was a 2014 recommitment date for their Nokia Windows Phone partnership (slide 4 of their Strategic Rationale presentation). I suspect this was Feb 11th 2014 and essentially a break clause for Nokia, which they signalled they intended to use.

2) Nokia has sold Microsoft the parts of their business that almost no-one else would have wanted. Possibly some Chinese OEMs would have liked to buy the global sales and distribution networks - to get those Microsoft had to take the feature phone business (which it almost certainly didn’t want) too. The costs of ramping down the feature phone business and laying off a lot of staff in European countries with worker-friendly employment law shouldn’t be underestimated.

3) Nokia has kept valuable pieces that Microsoft must have wanted - they only licensed their patents to Microsoft, not sold them, and they’ve also kept and licensed HERE, which Microsoft must see as a vital service in their “devices and services” strategy. Slide 14 of the above linked presentation is totally unconvincing - why would Microsoft want the best alternative to Google’s maps to be widely available rather than controlling it as a strategic asset?

4) Nokia has kept their brand, which although badly damaged still has significant value, despite agreeing not to use it on new devices until 2016 or license it to others for 30 months. Microsoft just gets the Lumia brand - the emphasis on which, in hindsight, suggests that this kind of sale was probably the fallback plan all along.

Microsoft will still be using the Nokia brand on feature phones for long enough that it should still have value for a sale or re-launch when the restrictions on its use expire. Would Nokia try to make a comeback with Android, or even bring the Jolla team back in with the (Android compatible) Sailfish OS? It seems unlikely but not impossible. With their hardware design, manufacturing, logistics, supply chain and distribution gone they’d be outsourcing production and rebuilding distribution from scratch. It would make more sense to license the brand, IP and mapping services to a Chinese OEM attempting to expand outside of China. The other thing against a comeback is that they’ve indicated they’ll return excess cash to shareholders after the deal is complete and they’ve settled on the new strategy.

What about Microsoft? They’ll have to keep funding Windows Phone for some time to reach their stated operating break-even level of 50m units/year. Nokia was only able to grow market share significantly at the low end and even then they had to heavily discount the devices. At the same time the feature phone business is rapidly shrinking and difficult to keep profitable against a vast array of Chinese competitors. It seems unlikely that Microsoft could transition Nokia feature phone customers to non-Nokia branded Windows Phone devices when Nokia was only marginally successful in doing so with the brand intact.

I tend to agree with Ben Thompson that Microsoft should have admitted defeat in the mobile OS wars and looked to preserve and grow their high-value software and services by embracing iOS and Android. Android commoditised the mobile OS that Microsoft wanted to sell; it completely killed the business model. If they’re determined to carry on in that battle then buying Nokia was the only option - Nokia dropping out as a Windows Phone OEM would have killed the platform instantly. There’s only room for one free ad-supported ecosystem - network effects ensure that - and Android got there first; it will be impossible to displace them. As such, Microsoft has to adopt the Apple model and become a premium ecosystem, making sufficient profit on device sales to cover the cost of maintaining the platform.

Of course Apple has cornered the real premium market at the high-end, so unless they slip up badly, Microsoft is left with the scraps. Their ambitions are significantly scaled back, 15% market share by 2018 with an average of $40/unit margin. If Apple does announce a significantly lower cost device (sub $300) next week then those ambitions will still look incredibly optimistic. If Apple stays further up the pricing scale then maybe there is room for a 50-100m unit/year OEM building low cost but high quality devices and possibly over time they can encourage them to trade up to more premium devices within the same ecosystem. Right now though, Windows Phone doesn’t offer a compelling alternative to Android at anything like a $40/unit margin. The next CEO of Microsoft has a mighty task on his (or her) hands.

On Surveillance and Privacy

TL;DR - Don’t let terrorism scare us into giving up our privacy. Widespread surveillance is too much power - governments consistently prove they cannot be trusted with it. Probably even more than joining campaigns for legal reform, the most effective thing you can do is embrace encryption for your personal comms and support design-led, open source and decentralised efforts to put those tools into more people’s hands.

The rise of the surveillance state

Surveillance is a necessary evil to protect us from the worst elements of society. I have no doubt that it is necessary - I’m certain that organisations like GCHQ and the NSA do excellent work to prevent all sorts of horrible crimes that I’m glad I never even hear about. I’m also certain that it is evil, quite simply because knowledge is power and power tends to corrupt, while absolute power corrupts absolutely. This is a fact of human nature that has been observed for a long time. Necessary evils should be minimised. For a long time technology and economics have kept surveillance at relatively low levels - it was simply not feasible to monitor or analyse even a small fraction of human communication and activity. In recent history the giant shift towards online communication, combined with terrorist activity that has increased the public/political appetite for surveillance (and thus funding for security organisations), has created a very different situation. Essentially everyone’s online activity is passively monitored and can be actively queried and tracked with minimal oversight or authorisation of the process.

Lack of safeguards and oversight

Most people who care seem to be outraged that the surveillance is happening in the first place (an almost inevitable consequence of the technology / economics / politics - the function of security services is to spy) when they should be much more upset by the lack of oversight. Systems like this should not be built for the convenience and speed of security analysts - they should be locked down such that a transparent legal process is implemented with the technology. Access to a specific individual’s data should only be available via authorisation tokens that in turn can only be generated by a separate organisation responsible for approving the surveillance. Instead the US has secret approvals through secret courts and the UK has more or less unrestricted approval.
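The authorisation-token idea above can be sketched as a signed warrant: the separate approving organisation signs the target and an expiry, and the data store verifies that signature before releasing anything, so an analyst cannot self-authorise. A hedged Python illustration - the names, key handling and claim fields are all invented for the sketch (a real system would use asymmetric signatures, audited key storage and logged access):

```python
import hashlib
import hmac
import json
import time

# Key held only by the separate approving body (illustrative stand-in).
APPROVER_KEY = b"approver-secret"

def issue_warrant(target: str, valid_seconds: int) -> dict:
    # Only the approving organisation can produce this signature.
    claims = {"target": target, "expires": int(time.time()) + valid_seconds}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(APPROVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def access_allowed(warrant: dict, target: str) -> bool:
    # The data store checks signature, scope and expiry before releasing data.
    payload = json.dumps(warrant["claims"], sort_keys=True).encode()
    expected = hmac.new(APPROVER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, warrant["sig"])
            and warrant["claims"]["target"] == target
            and warrant["claims"]["expires"] > time.time())

w = issue_warrant("suspect@example.com", 3600)
assert access_allowed(w, "suspect@example.com")
assert not access_allowed(w, "someone-else@example.com")  # scope enforced
```

The point is architectural rather than cryptographic: if the approval step is baked into the technology, oversight happens by construction instead of by policy.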

Who will guard the guardians

To some extent security services have to self-regulate due to the nature of their work but legalising mass surveillance should be an obvious step too far, not least because we don’t get to vote the leadership of these organisations out of their jobs, nor hear about what they do (unless someone leaks it) - they are at extreme risk of being corrupted by too much power. At the same time, their temporary bosses in government are inherently untrustworthy - anyone who seeks a position of power is more likely to be corrupted by it than average. We can’t trust our politicians to keep the promises they make when we elect them. Even at lower levels of government we’ve seen plenty of evidence of abuse of powers that were intended for fighting terrorism to pursue petty personal agendas. With the detention of David Miranda at Heathrow under UK anti-terror laws, the UK government demonstrates yet again that it cannot be trusted not to abuse the powers it has been granted, at exactly the time when it wants to show it can be trusted with powers of widespread surveillance.

The genie is out of the bottle

Mass surveillance tools exist all over the world, not just the ones we know about, and neither the governments nor the security services are likely to give them up. Even if political pressure can reverse some of the legislation that legitimises this activity, I seriously doubt all of the tools will be locked down or destroyed. It would be relatively simple for the major internet companies to prevent most of this surveillance by building end-to-end encryption, so they can’t actually read any of our data themselves. However, since most of them rely on reading our data to support their targeted advertising business models that’s not going to happen any time soon.

A solution?

The only guaranteed way to restore privacy to something more like historical levels is for people to embrace encryption for their day-to-day communication. Clearly not everyone is going to do this but right now only the extremely dedicated and technically knowledgeable can. If we can get to a place where anyone who cares about privacy can have encryption by default without jumping through hoops then it should become much harder for anyone to track the communications of bothersome people. If using encryption is so complex that only those who really need privacy do it then that simple fact immediately makes them a target for deeper investigation. The encryption technology itself may be excellent but there are weaker links in the security chain if the number of targets is small.
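Encryption by default ultimately means the endpoints agree keys that no intermediary ever sees. A textbook Diffie-Hellman exchange illustrates the principle: the relaying service only ever observes the public values, yet both ends derive the same secret. The parameters below are toy-sized for readability - real systems use 2048-bit or larger groups and authenticated variants:

```python
import secrets

p, g = 23, 5  # textbook-sized group parameters, NOT for real use

a = secrets.randbelow(p - 2) + 1   # Alice's private key, never transmitted
b = secrets.randbelow(p - 2) + 1   # Bob's private key, never transmitted

A = pow(g, a, p)                   # public values: these are the only
B = pow(g, b, p)                   # things a relaying service ever sees

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # same secret derived at both ends
```

This is the property that makes encryption-by-default powerful: even a fully cooperative (or compromised) service provider has nothing useful to hand over.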

Open source, decentralised & crowdfunded

To try to ensure that software systems aren’t compromised with surveillance back doors, they should be open source. To prevent infiltration of one system impacting a significant number of users they should be decentralised (100,000 people running comms services for 10 friends each is a much tougher surveillance target than a service for 1 million people). From a business perspective it’s quite difficult to monetize a decentralised service built around open source software, so creating these projects through crowdfunding seems like an obvious option. Hopefully there’s enough interest out there to sustain and maintain them through subscription donations or at least crowdfunding of new features.

Usability and design are key

Most open source software is created by engineers scratching their own itches. Occasionally organisations form around popular bits of open source software and try to make them prettier and easier to use, but they rarely reach the levels of commercial alternatives. I don’t believe this is through lack of effort but simply lack of focus on the user experience. What we need are design-led open source solutions which focus on the user experience from the outset.

I’m following a couple of projects at the moment which are promising with regard to the criteria above: Mailpile - which has achieved its initial funding target just over halfway through its Indiegogo campaign as I write this - and Codename Prometheus - which is not as far advanced, but the founder clearly has the right skills and ideas. I hope to see many more projects like these.

But what about the terrorists?

Terrorism is all about instilling fear and causing people to do things out of fear that are disproportionate to the threat. The attacks on 9/11 killed just under 3000 people, the 7/7 suicide bombings in London killed 52 people (and injured about 700 more). Yes, they were horrific and I have nothing but sympathy for anyone with a connection to the victims. However, in the US there are >10,000 gun homicides every year - we don’t see anywhere near enough people willing to give up their right to bear arms to prevent them. Even more relevant, in the US there are >30,000 motor vehicle deaths (about 2000 in the UK) every year yet I can imagine the uproar that mandatory GPS vehicle tracking to help prevent some of those deaths would cause. If we give up our rights to privacy through fear of the terrorists then they have already started to win. It might seem insignificant at first but over time it will damage our societies and our democracies.

Are Mobile Web Apps Really Doomed?

Drew Crawford recently wrote an excellent post on “Why mobile web apps are slow”, which essentially concludes that mobile web apps are doomed to be “slow” for the next 5-10 years.

I really love to see people bringing facts to this debate, and I started out writing firmware for mobile devices in C/C++, so I have a natural bias against the giant resource bloat a web app brings on a mobile device.  However, there’s no denying (or at least I don’t see anyone credibly doing so) that building web apps is more productive and enables you to target lots of platforms with a common codebase.  Whatever your personal views on cross-platform development and developer productivity vs. quality of the end result, there are lots of situations where budgets dictate that the most beautiful and smooth experience possible for the end user is not the top priority.  As such, it’s really important to check the facts on a well reasoned argument with such a strong conclusion.  This one looks like it may actually contain the seeds of its own counter-argument, since a central point seems to be “ARM sucks and will continue to do so, while the mobile ecosystem is unlikely to switch to Intel”, to which I say wrong and ummm, yeah you’re probably right on the second part.

As Drew pointed out, in 2010 we had Google Docs launch realtime collaboration for the desktop web.  Taking that as a (possibly unfairly tough) benchmark for “a sufficiently complex web app”, his graph shows that the browser in the iPhone 4S may be around 3-4x slower than it needs to be to run it (well, that’s slightly conflated with Google Wave to make the point a bit stronger).  The strongest argument in the article suggests that we’re not going to see any significant software-driven improvements to JavaScript performance anytime soon.  I agree it’s very unlikely.  So, we’re left looking to the hardware.  Drew seems to have an extremely negative view on ARM here that I don’t share:

"But a factor of 5 is okay on x86, because x86 is ten times faster than ARM just to start with.  You have a lot of headroom. The solution is obviously just to make ARM 10x faster, so it is competitive with x86, and then we can get desktop JS performance without doing any work!"

Clearly mobile browser performance isn’t going to reach desktop browser performance any time soon, it’s a moving target, but what about 2010 desktop performance?

"Whether or not this works out kind of hinges on your faith in Moore’s Law in the face of trying to power a chip on a 3-ounce battery.  I am not a hardware engineer, but I once worked for a major semiconductor company, and the people there tell me that these days performance is mostly a function of your process (e.g., the thing they measure in “nanometers”).   The iPhone 5′s impressive performance is due in no small part to a process shrink from 45nm to 32nm — a reduction of about a third.  But to do it again, Apple would have to shrink to a 22nm process.  

Just for reference, Intel’s Bay Trail–the x86 Atom version of 22nm–doesn’t currently exist.  And Intel had to invent a whole new kind of transistor since the ordinary kind doesn’t work at 22nm scale.  Think they’ll license it to ARM?  Think again. There are only a handful of 22nm fabs that people are even seriously thinking about building in the world, and most of them are controlled by Intel.”

Ummm, TSMC - the foundry to which Apple is supposedly moving its chip production from Samsung - has recently taped out the new ARM Cortex-A57 processor at 16nm. OK, it’s likely to be 2+ years before that’s really viable for the mass market, but they’re expecting to be shipping volumes at 20nm next year and they have a path down to at least 14nm.

"In fact, ARM seems on track to do a 28nm process shrink in the next year or so (watch the A7), and meanwhile Intel is on track to do 22nm and maybe even 20nm just a little further out.  On purely a hardware level, it seems much more likely to me that an x86 chip with x86-class performance will be put in a smartphone long before an ARM chip with x86-class performance can be shrunk."

Intel is expecting to ship 22nm mobile chips at the end of this year, but 28nm Cortex-A7 chips are already in the market (from low-cost ARM licensees like MediaTek).  Intel definitely leads in this area, but not by anything like the margin implied.

The iPhone 5 (using an Apple-designed Cortex-A15 derivative) was already twice as fast as the 4S (which was around 3-4x too slow), and we’re quite likely to see an iPhone/iPad using an Apple derivative of the Cortex-A57 in a year or so that’s around 4 times the speed of the iPhone 4S - easily fast enough to run Google Docs realtime collaboration from 2010.  Of course that’s only high-end hardware.  There’s likely to be a sweet spot for mid-to-low end hardware around the iPhone 4S level of performance for several years yet, because the phenomenal number of devices sold (both iOS and Android) around this performance level has created a truly gigantic app catalog, and its developers are unlikely to want to push on performance requirements and leave those devices behind - this could easily self-perpetuate for a few more years.

Drew concludes his hardware section with:

"I’m afraid my knowledge of the hardware side runs out here. What I can tell you is this: if you want to believe that ARM will close the gap with x86 in the next 5 years, the first step is to find somebody who works on ARM or x86 (e.g., the sort of person who would actually know) to agree with you. I have consulted many such qualified engineers for this article, and they have all declined to take the position on record. This suggests to me that the position is not any good."

Now this is the wrong question and creates a bit of a straw man.  Nobody expects ARM to close the performance gap with desktop x86 anytime soon because they’re not trying to.  ARM is already ahead of x86 within a mobile power envelope - Intel might jump slightly ahead on performance somewhere near the ARM power envelope this year but it’s not clear which architecture will get the best power/performance/cost trade-offs for mobile computing in the next 5-10 years.  A more relevant question would be whether ARM will close the performance gap to, say, 2010 desktop browsers and the answer to that looks much more positive.

Then there’s the real question - how fast is fast enough for any given application?  For lots of apps, devices are probably already more than fast enough for their computational requirements (if you’re stuck in a JIT-less iOS WebView then maybe not).  However, a lot of the processing power for a mobile app goes on fancy graphical stuff in the UI.  The browser is pretty bad at this - you don’t want to be doing it with a general purpose CPU in JavaScript, manipulating the DOM.  Here web technologies provide solutions like WebGL and hardware-accelerated HTML5 Canvas or CSS transitions, but currently you’re still looking at a world of fragmentation in the implementations.

To me, the fate of mobile web app performance looks much more political than technical in the next 5 years.  Apple currently has no major incentive to resolve the JIT-less WebView scenario nor race to adopt technologies that would help get around it, like WebGL to offload the graphics fully to the GPU.  Google may push ahead with web performance for Android but unless they can start to match the engagement and spending levels of iOS users so that developers start jumping ship in large numbers then it’s not going to force Apple to respond.  If the technologies aren’t cross-platform then you’ve lost the main advantage of using them in the first place.

I suspect that mobile web app performance will be acceptable for most use cases on commodity hardware in significantly less than 10 years, particularly because the cheaper stuff will probably be running the most performant browser implementations.  This doesn’t necessarily mean the web will have “won”.  When the web “won” on desktop computers most of the web advocates were too busy celebrating to notice computing was already shifting to mobile.  By the time mobile computing is sufficiently powerful and commoditised that efficiency is no longer important and the majority of new “mobile” (probably responsive rather than exclusively mobile) apps are built with web technologies, the hardware battlefield is likely to have moved on again.  Here, it’s interesting to note the power optimisation ARM have gone for with the Cortex-A53 - similar performance to today’s high-end smartphones but 25% of the power consumption.  I don’t believe that’s because there’s a strong requirement from licensees to return to the days of week-long battery life for smartphones but because new form factors (e.g. wearables) will require smaller, lighter batteries.

This is all built into the system and amplified by the app store model - as long as there are large and wealthy companies pushing hardware technology forward to create relatively high margin devices, there will be lots of opportunities for native developers to create the best possible mass market experiences for those devices.  At the same time, as long as there are businesses that want to deliver software and services across a range of devices as cost-effectively as possible, there will be lots of opportunities for cross-platform developers, web included.  As long as software keeps “eating the world” there’s more than enough opportunity for everyone that wants to learn to code.

If you’re trying to deliver the best possible experience on mobile devices and you’re doing it exclusively with web technologies today, you’ll experience a lot of pain and/or failure.  If you set ideology aside and match the technology with the project and budget then there are lots of ways to win.  Mobile web apps are a long way from doomed, although mobile web app only platforms/devices might be.

The Real Reason Apple Probably is Building a Cheaper iPhone This Year

There’s a rather simple argument on AllThingsD suggesting Apple should build a cheaper iPhone this year because there’s huge demand for it and it’s the only thing that will allow them to grow EPS significantly. It might be true, it might not - who knows what other products Apple has in the pipeline. That said, the smartphone market is pretty unique in scale and growth at the moment, and most of the other product segments discussed in connection with Apple (e.g. TVs, watches) wouldn’t make a significant difference to Apple’s revenues even if they did spectacularly well. What’s wrong with this argument is that it was equally true for the last couple of years and Apple hasn’t felt the need to shift strategy. They don’t plan products around the desires of Wall Street.

However, I do think this is the year for Apple to introduce a lower cost iPhone. Just in case anyone reading this doesn’t already know (which seems unlikely), the iPhone 4 is NOT a low cost iPhone and it is NOT free. Unlocked, without a contract, the iPhone 4 costs $450, which is still a very high-end price for what’s starting to look like incredibly dated hardware. Apple have been able to get away with this strategy because their OS on dated hardware still performed better than Android on modern hardware and the popular form factor had not changed much. A few significant changes to the competitive landscape in the last year mean that is no longer true. Google launched Android Jelly Bean, finally making the UI smooth, and the first ARM Cortex-A7 based smartphone chips started shipping. Add to this a very strong trend towards larger screen sizes and, in the not too distant future, the older iPhones start looking like ridiculously overpriced toys next to the competition.

Right now in China you can buy (retail) something that looks a lot like a Samsung Galaxy SIII with processing capabilities greater than an iPhone 4S for $150. Running Jelly Bean (which most of these devices do) the UI performance is smooth and fluid. It’s not going to be long before the larger Chinese OEMs start shipping devices like this to the rest of the world at sub-$200. Previously it was only possible to get to the pre-paid price points in advanced markets by compromising significantly on processor performance and/or screen resolution.

Now very few new apps will push the boundaries of performance on the latest devices because they want to maintain compatibility with the huge installed base of older hardware. Since the new Cortex-A7 can provide better performance at a lower power consumption (and thus also a cheaper battery, or more battery life) than the top-of-the-range Cortex-A9 of less than a year ago, plus is 100% compatible from a software perspective, the mid-to-low end devices will be able to run almost all of the apps out there well. This adds up to the top-end devices over-serving most people’s needs.

At the very top end of the market where the smartphone is as much status symbol as practical gadget, this makes no difference, there will always be people willing to pay for the latest and greatest toys. In the markets where operator subsidy models dominate, the cost of the device is not as significant and the super-premium pricing is still possible. For everyone else, the premium charged for the Apple brand, design and access to their superior apps ecosystem is going to be looking increasingly poor value.

If this sounds like bad news for Apple, it isn’t particularly. Aside from Android’s UI performance improvements, the changes in technology (lower cost larger displays, cheaper processors capable of running all of their software on smaller batteries) are available to them equally. They can spend $30-50 more on components than their competitors and make a superior quality product, yet still successfully charge a premium - maybe a $250-300 device with 8-16GB of storage - and make a very decent margin. This device would cannibalise some of the high-end product sales, but mostly those they’d lose to competition anyway. The main device it would have wiped out sales for is the iPod Touch, but that had modest sales and Apple have already conveniently moved that product up-market (including removing the bottom-end storage options) to target it at those wanting a dedicated music player with large storage and some extra bells and whistles - almost anyone previously interested in it as a small WiFi-connected computer would now go for the iPad Mini.

I’d expect some of Apple’s services or features to be unavailable on the lower cost device as they are/were on the older iPhone and iPad models. In case they need further differentiation between the mid-range and high-end models in future it’s worth noting that unlike Google with Android (who already give almost all their software away), Apple have a good sized library of top quality applications that they can bundle free with their latest premium device if they so choose. Those would make the difference between a “free with contract” mid-range iPhone and a “$200 with slightly more expensive contract” high-end iPhone look very small indeed.

I expect Apple can see the technology shifting the competitive landscape better than I can, so such a device has long been planned and is on the way. When it arrives the majority of commentators will say it’s too expensive, and they’ll be wrong. The lower cost iPhone will grow Apple’s market share without destroying their margins. It’ll still be a premium product, just in a slightly different market segment. If they’d just made a cheaper iPhone last year it would have needed essentially the same components in a cheaper case with a cheaper screen (back to pre-retina resolution), and probably a significantly lower margin to hit the right sort of price point - that’s not Apple’s way. Now they can meaningfully differentiate the hardware while preserving software compatibility and still deliver a great product.

Improving App Economics (or why App.net is important)

If you’re already aware of App.net and the back story then feel free to skip down to “Better Business Models”.

Mobile apps are awesome. Social networking is powerful and pervasive. We are spending many hours on these things every month (every week, or even every day in some cases). However, both have serious economic problems - problems that require different business models.

But What About All That Money?

Economic problems? Apple have paid out $5.5B to app developers and the Facebook IPO was over $100B - where’s the problem? The distribution of that $5.5B across about 650k apps in the App Store is far from optimal for encouraging innovation. Of course there’s no way it should be equal but the dynamics of the app store skew the distribution too much towards a relatively tiny minority of apps. Those same dynamics also encourage the development of certain types of app at the expense of others. Facebook’s valuation has fallen significantly since the IPO and will probably continue to do so as it struggles to monetize its gigantic user base without intruding on their socialising so much that they leave.
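To see why the distribution matters, a naive even split makes the point. A minimal sketch using the figures from the paragraph above - the per-app average is purely illustrative, since real payouts are wildly skewed:

```python
# Back-of-the-envelope: Apple's headline payout spread evenly across the catalogue
total_paid_to_devs = 5.5e9   # $5.5B paid out to developers (figure from the post)
num_apps = 650_000           # approximate App Store size (figure from the post)

avg_lifetime_payout = total_paid_to_devs / num_apps
print(f"Naive average lifetime payout per app: ${avg_lifetime_payout:,.0f}")
```

Even a perfectly even split would give each app only around $8,500 over the store’s entire lifetime - nowhere near enough to fund sustained development - and in practice a tiny minority of apps captures the bulk of it.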

The App Problem

From the beginning Apple’s App Store suffered from a "race to the bottom", since the visibility of a top 25 listing, if you get there, more than compensates for the reduced price. Smart developers cautioned their peers against this, but a combination of fear and greed led to $0.99 becoming established as the expected price point for almost everything[1]. Aside from a handful of genuinely exceptional offerings and those that Apple favours with regular features, apps tend to spike in sales shortly after launch (and, if they’re lucky, after updates or getting featured) then drop down to a level that doesn’t sustain continuous development. Developers have adapted to this market, and a lot of apps get made that are designed around the minimal price point and short sales longevity. Low price expectations create low value apps and the cycle continues. Cheap apps don’t really hurt Apple much unless the developers move on to greener pastures, so their incentive to improve things is fairly weak.

Android on the other hand has had less of a race to the bottom in paid apps but rampant piracy and poor market curation have led to an abundance of free apps and developers trying to monetize with advertising. The latter of course works well for Google so they’ve been fairly slow to do anything about it and thus far the attempts seem fairly half-hearted.

The Connected App Corollary

Those who follow technology will have read about the amazing future that awaits us with smart devices connected to powerful cloud services. However, the vast majority of apps have fairly limited online services components. If it doesn’t make much economic sense to build a high value app (since you’ll struggle to sell at a high price) then it’s even less likely you’ll invest in building a back-end service to go with it. At $0.99 you’re not going to cover hosting costs and product updates for a user for very long. There are excellent services like Parse and StackMob that can remove most of the up-front cost of building the back-end but if you scale beyond the free price bracket you need to find a way to make more money to cover hosting, or you’ll end up with a Ponzi scheme variant where new purchases start covering the hosting bills for the existing users[2]. For example at 100 API calls/user/day, the $199/month Parse plan will cover 5k * $0.99 users for less than 18 months with no development costs recouped at all - if you can keep the number of API calls down by an order of magnitude or two then it might just work, otherwise the pricing needs to go up, or you need to charge a subscription. With a freemium or ad-funded app, you have the significant risk that the upgrade or ad income doesn’t even cover the hosting costs.
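The arithmetic in that example is worth spelling out. A minimal sketch, assuming Apple’s 30% store cut (my assumption - the post doesn’t state it, but the "less than 18 months" figure only works out if you include it):

```python
# Rough model: one-off $0.99 revenue vs recurring PaaS hosting costs
users = 5_000
price = 0.99
store_cut = 0.30                # assumed 30% platform fee
hosting_per_month = 199.0       # the Parse plan quoted in the post
calls_per_user_per_day = 100

monthly_api_calls = users * calls_per_user_per_day * 30
net_revenue = users * price * (1 - store_cut)    # one-off income, never recurs
months_covered = net_revenue / hosting_per_month

print(f"{monthly_api_calls:,} API calls/month")
print(f"${net_revenue:,.0f} net covers {months_covered:.1f} months of hosting")
```

So the one-off income runs out in under a year and a half with nothing recouped for development. Cutting API calls per user by an order of magnitude or two stretches this, but only recurring revenue actually fixes it.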

For this reason it’s mostly VC-funded startups and giant corporations that are building the interesting connected services. They have the cash to go for gigantic scale and often a runway to figure out how to make money after they have users (although I personally find that particular approach to company building a little crazy). It’s worth noting that even a fantastic service like Evernote was hours away from failure due to running out of cash before they’d got people hooked enough to start paying - saved only by a last minute investment.

For independent developers and bootstrapping startups, the connectivity of an app is often more about sharing to social networks, or pulling data from them to help create a more personal experience - this has no hosting cost. Apple recognised these issues and did something useful about them for game developers with Game Center (Google is rumoured to be copying this for Android). The connectivity features were so useful that many developers started using them in non-game apps. These were initially approved but later removed in a crackdown to prevent confusion for Game Center users. The next great stride forward for iOS 6 is Facebook integration - which should help app visibility, though it does little more for online services…

The Social Network Problem

Here’s the first of my two points on social network economics and how they impact app economics. The major social networks are currently trying to figure out how to use all the data they have about people to advertise to them whilst they socialise online. The trouble is, people fundamentally don’t want advertising when they’re communicating. As former Facebook employee Jeff Hammerbacher said:

The best minds of my generation are thinking about how to make people click ads. That sucks.

The latest Facebook ruse is Sponsored Stories, where the fact that someone you vaguely know has at some point liked some brand or other is used to insert an ad for that brand in your feed. I’ve already personally explained to a few people complaining about distant acquaintances that this is not the fault of the person listed posting spam, but in fact a new type of ad. So it’s working, in that people can’t initially tell they’re ads, but:

a) they don’t like them

b) when they find out it’s a disguised ad they are very unhappy with Facebook

I’m not at all convinced that even with all those brilliant minds working on it, anyone will figure out how to make this work in a way users will accept. As such I’m also not fabulously convinced that App Store/Facebook integration is going to massively improve things for app developers either - it’ll help a little with discovery I’m sure but it’s not a game changer.

Twitter has a similar problem to Facebook. Since they started out as an open platform with no business model and have only recently decided on advertising, there’s all sorts of pain going on in the transition right now. They are cutting off access for some 3rd party apps (e.g. Instagram), simply stealing the ideas of others (e.g. StockTwits), and cutting off user access at the request of advertisers (and that’s just in the last week or so).

Beware of Free

Facebook was the platform you built within; Twitter was the platform you could build on. That’s not the case anymore with Twitter - anyone doing so is on very dangerous ground. Twitter now does things that make sense for their advertisers, not users or developers. That’s point two: if you want to build a deeply social experience into your app, where do you go?

Better Business Models

We need better business models for connected apps and social networks. Ideally users would pay a subscription for connected services that have ongoing hosting and maintenance costs. Recently Dalton Caldwell has made, in his words, "an audacious proposal" for a better way to fund social networks - users paying for the service! Actually his proposal is smarter and the argument more subtle than that - it’s worth reading all his recent blog posts about App.net. The most interesting suggestion for app developers is the idea of revenue sharing - the service would share its revenue with developers based on the proportion of usage by each client app. Developers can then give apps away for free or sell at low prices and still be financially incentivised to build things that provide lasting value, and to continue to support and improve their products.

This model can work for apps that are direct clients for the network, share content to it or build services around the content. I think that with a little tweaking, the proposal could also work for many types of app that simply need a way for users to interact with others in the network, not necessarily through short textual messages. Instead of building apps within Facebook, or on Twitter in the ways they deem appropriate, developers could build apps on top of App.net. Then, rather than paying to host a service, developers get paid for driving interaction and increasing the value of the network to users. The more useful apps available on the service, the more likely users are to subscribe. This creates more revenue for App.net, which they can spend further improving the core service (since they don’t have to pay hordes of engineers to figure out how to monetize the thing).
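The proposal describes the revenue-sharing principle, not a formula, so here is a purely illustrative sketch of "share proportional to usage by each client app" - the pool size, app names and usage numbers are all hypothetical:

```python
def share_revenue(dev_pool: float, usage_by_app: dict) -> dict:
    """Split a developer revenue pool in proportion to each app's usage."""
    total_usage = sum(usage_by_app.values())
    return {app: dev_pool * n / total_usage for app, n in usage_by_app.items()}

# e.g. $10,000/month earmarked for developers, split by monthly sessions per app
payouts = share_revenue(10_000, {"ClientA": 60_000, "ClientB": 30_000, "SharerC": 10_000})
print(payouts)  # ClientA earns 60% of the pool, and so on
```

The key design point is that the incentive flips: more usage means more income, so developers are paid to keep supporting and improving their apps rather than to maximise one-off sales.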

I believe users are much more likely to pay a subscription for a socially connected service that supports a whole range of useful apps than to pay for lots of separate services. The centralised service has economies of scale and can charge a sufficient multiple of the hosting and maintenance costs to leave plenty for revenue sharing. Of course there needs to be some scale and revenue to share.

Chicken And Egg

This is a fantastic ideal but it’ll only work if there are enough paying users to give all the developers a worthwhile share. Thanks to crowdfunding and an inspiring post from Paul Graham[3], Dalton has an answer: they’ll build the service if they can reach a critical mass of around 10,000 users. Is that enough? Many critics suggest[4] that the only way to get the scale and network effects to create real value is through a free (ad-supported) service. Personally I think it’s true that a freemium route will be required to reach the mass market (my personal suggestion - free users are either read only[5], or very restricted on post count) - lots of people are going to want to try before they buy. However, right now we’re not talking about the mass market, we’re talking about early adopters. As a developer, would you build an app for an audience of that size, all of whom have shown themselves to be both willing to pay for digital services and interested in trying out new ones? It might well be worth it to get in early and become established in a growing market. Even if your risk appetite doesn’t stretch to that, wouldn’t you like there to be the possibility of targeting an audience of that type an order of magnitude or four greater? As an early adopter user, wouldn’t you be interested in interacting with more like-minded sorts and getting to try more innovative new services?

This is where crowdfunding is one of the most brilliant innovations around. Previously, a developer who wanted to make such a vision reality could start a company or spend countless hours working to help create it in open source; users could only wish for it. With crowdfunding you can vote with your wallet on a pretty small scale, and if not enough people agree with you then it costs you absolutely nothing to have done so. We can’t rely on advertising to fund all digital services - if we want great software and services that are improved and supported then people are going to have to get used to paying for them. Maybe the timing is not right yet and the world isn’t ready, but if this is a future you’d like to see then it’s got to be worth backing the project to see if the idea can fly.

[1] This is not to suggest that $0.99 is the correct price or that you cannot run a successful software business on the app store charging higher prices.

[2] Using a PaaS provider instead of hosting your own back-end also introduces the risk that your chosen provider goes bust or gets bought and shuts down the product. I’d suggest the latter is much more likely than the former with the current funding of those companies and the talent shortages where they are based.

[3] FWIW I think it’s also possible that App.net with a tweak of the sort I suggest could be used as the basis for #2 on Paul’s list - replace email.

[4] The other group of critics suggests this should be open standards based and not run by a single company. I say we’re not there yet. The web advances so slowly because it tries to build standards and uniform implementations first. We’re in a highly innovative and experimental phase, standardisation can come later.

[5] About 40% of Twitter accounts never post anything - not sure how many of those are spambots who just follow people and have a URL in their profile?

The Smartphone Market Fallacy

The more things change the more they stay the same. Technology changes much faster than people’s habits and behaviours.

Everyone in the industry knows that the term smartphone has largely lost its meaning, stretched to the point where we use it to refer to wildly different products. In many cases the figures analysts report as smartphone market shares and volumes by manufacturer and OS are largely useless, skewed by historical classifications and by the variety of products using the same platform.

Many market observers have said that the iPhone created a new product category, some called it a “superphone” while others, noting that these devices are not primarily used as phones, have suggested we need a new name. None of this is new, within the industry we’ve been talking about mobile computers, or Mobile Internet/Information Devices (MIDs) for as long as I can remember. While the iPhone created the category as a mass market product it certainly didn’t invent it. This is not much more than a historical curiosity now, but Symbian OS used to have multiple different UI layers. While Nokia’s S60 was designed for phones that also had some “smart” features, UIQ was for mobile computers (or PDAs as they were usually called back then) that also included a phone. Personally I think the Sony Ericsson P800 was the first product in the category that the iPhone succeeded in bringing to the mass market.*

Why do I bring up this ancient history? Because although there were not any analytics or statistics available at the time, it was reasonably well known amongst 3rd party Symbian app developers that UIQ users were many times more profitable than S60 users. Despite the massive numbers advantage S60 had, some developers made more money on UIQ. Many causes for this were debated - one valid issue was that most S60 phones were sold without any kind of mobile data plan, however, the most critical thing as I saw it was that almost all UIQ buyers actually wanted a mobile computer whereas most S60 buyers just wanted a high end phone.

How is this relevant today? Blatantly generalising and ignoring the vocal minority platform fan power users:

- Symbian (now only the S60 branch) is still really just for slightly fancy phones; maybe with a bit of browsing, gaming and social networking app use, but feature phones do that too.

- RIM’s remaining customers are either forced users through work (often with another primary device) or mostly interested in mobile email or BBM.

- Samsung’s bada has a limited and painful application environment where development will not take off. I class it as essentially a very pretty feature phone with a bigger screen.

We should ignore all of these platforms if we’re really interested in mobile computer use. While there are a minority of iPhone users who just bought one as a status symbol, the vast majority actually wanted a mobile computer.

The interesting platform in this regard is Android. Being flexible and open it is in use across a wide range of hardware and price points.  Since it started at the high end it’s easy to ignore quite how far down the range it has been pushed. It is being sold in both the “mobile computer” market and the “slightly fancy phone” market and helping to transition people from one to the other. You can get pre-paid Android devices and also post-paid with no data plan.

I see plenty of articles looking at stats like “on average Android users download half as many applications as iPhone users” or the ever popular browser page view based market share stats which try to explain them with the average quality of applications, or the usability of the browsers on each platform. The truth is much more likely that high end Android users download more apps than iPhone users on average (due to the level of free apps available) and do a similar amount of browsing. Further down the device price range there are plenty of users that really don’t care about apps or browsing on their devices, they just wanted a new phone.  That is, mobile computing is growing rapidly, just nowhere near as quickly as the Android shipments figures might suggest.

With mid-range Android devices there are users who wanted a more capable mobile computer and simply couldn’t afford it (lots in less affluent countries, see map here, but also younger consumers and those with lower incomes) - these users may download lots of free apps but are much less likely to buy them or pay for other services. They are also less interesting to marketers as they have lower disposable incomes. Within the Android mid-range you additionally have a lot of users who are simply getting the best free upgrade that came with their network contract or a fashionable new phone. Many of this user group will try downloading a few apps to find out what all the fuss is about but mainly stick to their existing usage behaviour.

At the bottom of the Android range, if people can’t afford a better phone than this, they can’t afford much mobile data use either and have very little disposable income for other apps and services. Either that or they just wanted a cheap phone that doesn’t look too embarrassingly basic.

How many Android users fall into each category? Since the Android vendors and Google don’t release much relevant data for this it’s very hard to tell.  Samsung accounts for about half of the Android sales according to most estimates and we know they’re selling truckloads of cheap devices like the Galaxy Y (~1.75m/month for that one device) along with the flagship Galaxy S range. Since the other major Android OEMs are struggling to make profits they can’t be selling huge quantities of their high end offerings.

Purely looking at installed bases vs browser usage and app download shares it’s tempting to say that significantly more than half of all Android devices are not really being used as fully fledged mobile computers, regardless of capability. It’s probably not a big stretch to say that Android is roughly on a par with iPhone globally (probably still not quite caught up) for this sort of user/usage.

Now if the analysts would just forget about classifying smart devices by platform and report by price tier we’d have a much better proxy for the really interesting numbers - how quickly is mobile computing actually catching on?  Here’s an interesting snippet from an infamous memo**:

In 2008, Apple’s market share in the $300+ price range was 25 percent; by 2010 it escalated to 61 percent.

It’s interesting to note that 61% given that Apple’s ASP was closer to $600 and Android had already passed iPhone in shipment volume while Symbian was still shipping meaningful numbers above that price point.  I’d love to know what the numbers are now. I’m sure there are plenty of people trying to track this sort of data for financial/investment purposes - if you know of any that report on it, please let me know about them. I assume no-one is giving it away for free but it’s worth asking!

A minor positive for Microsoft in all of this is that if you ignore the legacy platforms and take into account that almost all Windows Phone buyers are actually looking for a mobile computer then their market share is not quite so terrible. Then again, if you add in the mobile computing use of iPads and some of the iPod Touch market (plenty of teens in the mid-range Android equivalent segment who have a 2-box solution) to the iPhone figures then the iOS ecosystem is still looking miles ahead of everyone - doing similar for the other players does very little to their numbers.

Now this might sound a bit like mobile computing elitism. I guess it is… but at the same time it matters if you’re looking to deliver content and services to mobile computing platforms. Obviously if you’re aiming at a local market you want to check your local figures and adjust for real world usage as best you can. If we had breakdowns like this for every country it’d be a lot easier to estimate. For the global market, don’t automatically assume that Android is the best way to get reach, despite the giant market share lead. It’s all very well people having devices with the capability to access your app/service, but they’ve got to want to use them in that way too.

*  Nokia also had an aborted attempt with the Series 90 UI and the 7710 (which later morphed into Maemo and the internet tablets but somewhat bizarrely with no cellular modem).

** I promise not to mention that memo again on this blog.

Nokia’s Downward Spiral

A few weeks after the fateful Feb ‘11 announcements I wrote about “What Happened to Nokia”. It got picked up by some fairly big tech news sites and tens of thousands of people read (or at least visited - it was a pretty long post) what I had to say.

Now that the Windows Phone strategy is not working out as well as planned and yet more cuts have killed off Nokia’s next generation platform for the low end (codenamed Meltemi, think bada competitor but much better), I’m writing some final thoughts on the topic and then moving on.  I say final because I really don’t expect Nokia to be very relevant for a whole lot longer, certainly not returning to their former glory.  The data collected by Asymco is pretty compelling in this instance.

Tomi Ahonen has recently posted an incredibly long public assault on Stephen Elop’s management. I think he goes too far in blaming Elop for everything - Nokia was in pretty bad shape before he arrived. However, Elop has made a bad situation much worse and I think the conclusion is essentially correct: Elop and most of the board need sacking for almost completely destroying the company.

The “Burning Platforms”

So, my take on the famous “burning platforms” memo and the February 11th announcements, with another year or so of hindsight, is fairly simple. First, in Q1 2011 the whole industry, apart from Apple, saw a major drop in demand, or at least in demand growth (see the graph here). Elop, in his relatively new role as Nokia CEO, knew they needed to shift smartphone platforms to regain competitiveness. He had access to Nokia’s forward sales pipeline for the quarter but not the rest of the industry’s, and he panicked, assuming the drop in demand for Nokia products against a strong industry growth trend was due to their inferior product range. He did a deal to get enough cash from Microsoft to keep the business running while they transitioned, and made some fairly hefty concessions to get it (for example, he’d have to be as mad as Tomi makes out not to sell the N9 in advanced markets in competition with the Lumia 800, unless the Microsoft deal explicitly forbade it). This would explain both the total lack of detail evident when the deal was announced and the very poor bargain Nokia appears to have achieved, given that Microsoft had already tried and failed partnerships with the other major OEMs.

And Strategy Announcements

Clearly, from the various sales projection charts shown on February 11th, the Nokia board had agreed a strategy that involved phasing out Symbian over several years as it gradually drifted down the value chain, while Windows Phone ramped up, was optimised for lower price points and localised for more countries. Telling the world that this is what they expected wasn’t the most brilliant business strategy - simply committing to improve and support Symbian (as they were doing anyway) and making the updates available to every current buyer would have made much more sense; the phasing out would have happened naturally. The plan most likely included continuing to build out the developer ecosystem around Qt so that, as Symbian was phased out, it could transition seamlessly over to Meltemi with lots of local content for the developing/emerging markets where that platform would primarily compete. Meltemi would not have run on cheaper hardware than Symbian, but being Linux-based it would have increased their agility at the low end - much shorter time to market (due to a simpler development environment and pre-existing hardware adaptation layer) - and allowed them to ditch the immense cost of maintaining Symbian/S60 as well as S40, which was shifting down into a market where Shanzhai manufacturers were making it hard to turn a profit.

Original Plan Not Bad

The plan was not actually a bad one. The disconnect in development environment between mid-to-low end and high end devices might have been an issue, but with Symbian/Meltemi not selling at the high end, appealing to local developers to create solutions for the markets where the devices actually sold would probably create a more valuable long tail than courting the 1000s of gold-and-glory-hunting startups in Silicon Valley solving first world problems - they could pay the porting costs for the big name apps that didn’t care enough about reach to do it anyway.

So, what went wrong? Management and comms. When Elop started he’ll have wanted to do what most turnaround CEOs do: get as much future bad news out of the way early, so that everything bad is attributed to the previous management and any recovery is all his doing. How to go about that? The same way most arse-covering CEOs do - hire some management consultants to assess the situation and tell him what to do. Probably some US-based ones with their slightly warped view of the “smartphone market”. I put that in quotes because it’s a pretty meaningless term now - I’ll have to explain that in a follow up post. For these purposes it’s enough to say that Symbian was not really competing in the same “smartphone market” as the iPhone and high end Android devices. As most Symbian developers could have told you, outside a relatively small core of early adopters who bought Symbian devices for their advanced feature set, the bulk of Symbian devices sold were not used as smartphones in the iPhone sense. Much as Nokia wanted it not to be true, the iPhone (at least by the 3GS) really launched a whole new product category for which they did not yet have a competitive response. Your competitors are not who you think they are, they’re who your customers think they are*.

Viewed in the market for iPhones and flagship Android devices, Symbian was horribly uncompetitive.  Nokia clearly had a serious problem and could not sell Symbian devices in that market with the great margins they once enjoyed.  However, viewed against the competitors where the vast majority of Symbian devices were selling at the time - mid-low end Android devices, bada, teen-focussed BlackBerry and high end feature phones - Symbian was still fairly competitive.  With a decent UI overhaul (as it eventually got with Belle) it would have been extremely competitive in that space, particularly since every device came with free SatNav, complete with offline support and turn-by-turn voice guidance in countless countries (things Android is still catching up with).  Nokia’s profits would continue to take a beating without a credible high end offering as that’s where the bulk of the industry profits are, but they could have continued to run a profitable business with what they had and given themselves a few years to rebuild a credible offering at the high end.

Comms Catastrophe

So, a new partnership with Microsoft to rebuild at the high end and make the most of existing assets, unless… the CEO decides to tell the entire industry that the Symbian line of products is no good and will be killed off, before any replacements are in sight. This simultaneously does massive damage to the operator and retail channels, early adopter consumer interest and the developer community - all of which Nokia badly needed. Did Stephen Elop really think that an internal written memo of that nature would not leak? I doubt it, but if he did, he’s an idiot and should be fired. Being generous and assuming there were a lot of safeguards and the memo really should never have leaked, or there were well-intentioned reasons for leaking it - what about its content? With its slightly warped view of Nokia’s true situation at the time (which was pretty bad, but not jump-in-the-ocean-or-we’re-all-going-to-die bad), the only possible reason for such a message to the staff is to motivate them to change.

Psychologically Incompetent

The organisational change psychology involved here is to make the staff feel they’ve reached a crisis point and HAVE to change NOW.  This is flawed and outdated thinking in change psychology.  Indeed, shortly before Elop started in his new turnaround CEO role, a New York Times No.1 bestseller was released, “Switch - How to change things when change is hard”.  Wired called it “A Fantastic Book” and I think it must have come up on Mr. Elop’s radar - shame he didn’t pick up a copy.  Allow me to quote a few relevant sections here (from pages 119-123):

Speaking of the perceived need for crisis, let’s talk about the “burning platform,” a familiar phrase from the organizational change literature.  It refers to a horrific accident that happened in 1988 on the Piper Alpha oil platform in the North Sea.

Skipping the details of the accident…

Out of this human tragedy has emerged a rather ridiculous business cliche.  When executives talk about the need for a “burning platform,” they mean, basically, that they need a way to scare their employees into changing.  To create a burning platform is to paint such a gloomy picture of the current state of things that employees can’t help but jump into the fiery sea. (And by “jump into the fiery sea,” what we mean is that they change their organizational practices.  Which suggests that this use of “burning platform” might well be the dictionary definition of hyperbole.)

So this is an exaggeration designed to evoke strongly motivating negative emotions.  It’s slightly disappointing that dear Mr. Elop (or more likely his management consultants) couldn’t come up with something more imaginative than the canonical analogy to fit the situation, but much more disappointing that he was eliciting entirely the wrong emotions:

There’s no question that negative emotions are motivating … But what, exactly, are these emotions motivating?

(buy the book if you want the psychological explanation)

Bottom line: If you need a quick and specific action, then negative emotions might help. But most of the time when change is needed, it’s not a stone-in-the-shoe situation. The quest to reduce greenhouse gases is not a stone-in-the-shoe situation, and neither is Target’s mission to become the “upscale retailer,” or someone’s desire to improve his or her marriage.

And neither is Nokia’s need to respond to the competitive threat posed by Apple and Google/Samsung…

These situations require creativity and flexibility and ingenuity.  And, unfortunately, a burning platform won’t get you that.

So what is the answer, in a nutshell:

To solve more ambiguous problems, we need to encourage open minds, creativity and hope.

Great Way To Kill Productivity

For the thousands of staff reading that memo who would be continuing to work on Symbian, where execution of the new UI was absolutely critical to the company’s continued income in the short term, how does that do anything but anger and demotivate?  It seems the MeeGo leadership cleverly managed to turn the anger into determination to show what they could do and prove the CEO wrong.  They executed a(nother) new UI from scratch in 6 months and beat the Lumia 800 to market**.  However, a high end smartphone declared a one-off before launch and not sold in the most developed markets was always doomed - no apps and a lack of affluent customers (even so, it’s sold similar numbers to the Lumia devices despite a microscopic fraction of the marketing budget and near identical industrial design).

With the Symbian/S60 engineers demoralised and then transferred across to Accenture, Symbian UI updates were inevitably delayed further, and of course the networks had little incentive to approve updates quickly, since they were for the most part no longer stocking many Symbian devices anyway.  That said, the update situation there is still significantly better than Android’s, where despite newer versions having been available for ages, most users are still running Gingerbread from back in 2010 and new devices are still sold running that version.

If Only The Windows Plan Was Working

All of this might have been swept under the carpet IF the Windows Phone devices had taken off as hoped.  Microsoft attempted to buy market share with an unprecedented marketing assault and some hefty sales incentives to channel partners.  This appears to have failed.  On the developer side it’s also becoming very clear that you can’t really buy a developer ecosystem either.  Once you start paying people to build apps for your platform, word soon gets out.  Even popular apps that might have been thinking about porting anyway will now wait until someone comes and offers them some money.  Vast numbers of Microsoft desktop developers, not wanting to get left behind by the mobile revolution, jumped on the Windows Phone bandwagon to avoid learning new languages and technologies, or because they’ve just been drinking the Microsoft kool-aid for too long.  Unfortunately, the real mobile entrepreneurs with the great ideas are market driven and looking for cash (iOS) or reach (Android, iOS or both).  For a late to market offer like WP7, they can wait to see if it ever gets decent sales.  Small startups will even turn down free porting cash for platforms they don’t see generating a near term return, as it’s small change in their big plans and a major distraction that costs management time.

Now, one potential advantage that Windows Phone could have had was games (and games are the most important app category for consumers) - there’s the Xbox link-up and potentially simpler porting for all those top titles from the console.  Unfortunately WP7.x doesn’t allow native code (because Microsoft was about to replace the underlying OS fundamentally) and thus it’s very expensive/impractical for many of those games to port over.  WP8 will fix this BUT… the current phones don’t get an upgrade***.  This is going to hurt the Lumia range further in the retail channels.  Most consumers may not have heard about this, but no-one in the sales channel wants their customers coming back in 6-9 months saying, “how come my Windows Phone doesn’t run all the latest games Microsoft is showing off on the TV?”  That leaves Nokia with what now looks like a very last roll of the dice on WP8 success, and already massive damage to their brand.

Now There Are No Alternatives

To add insult to injury, with the developer interest in Symbian/MeeGo mostly killed off by the memo and subsequent announcements, Nokia had to fund a lot of application development.  That’s something they now can’t afford to do again with Meltemi, so that had to die too, to preserve enough cash to keep the Windows Phone effort running.  This leaves Nokia with no credible story at the low end, so revenues there will continue to decline.  It is a logical choice: 10% of the high end market is potentially worth more than 30% of the low end.
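That trade-off is easy to sketch with a quick back-of-envelope calculation.  All the figures below (segment sizes, selling prices, margins) are hypothetical placeholders of mine for illustration, not Nokia’s actuals:

```python
# Hypothetical figures for illustration only -- not Nokia's actual numbers.
market_units = 100_000_000  # assumed annual unit volume in each segment

# (market share, average selling price in $, gross margin)
high_end_share, high_end_asp, high_end_margin = 0.10, 450.0, 0.25
low_end_share, low_end_asp, low_end_margin = 0.30, 80.0, 0.10

# Gross profit = units captured x selling price x margin
high_end_profit = market_units * high_end_share * high_end_asp * high_end_margin
low_end_profit = market_units * low_end_share * low_end_asp * low_end_margin

print(f"10% of high end: ${high_end_profit / 1e9:.2f}bn gross profit")
print(f"30% of low end:  ${low_end_profit / 1e9:.2f}bn gross profit")
```

Even with three times the share at the low end, the high end dominates gross profit once a plausible gap in prices and margins is assumed.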

How far the mighty have fallen.  It looks most likely we’re heading towards the situation I speculated on in my previous post:

If Nokia is left with a sub-profitable smartphone business and Microsoft is doing well out of it they can either subsidise further, or buy Nokia and strip out the other parts of the business.

Except, now I’d replace “and Microsoft is doing well out of it” with “and Microsoft is out of other options”.  There’s also less and less of the rest of Nokia to strip out with each new announcement, yet still little reason for Microsoft to buy them rather than subsidise - unless, in a “no-one wants to use Windows Phone anymore” situation, the patents, manufacturing and distribution become worth more than building out their own.

Is It All Elop’s Fault?

Did Stephen Elop cause all of this?  No - Nokia were already in a lot of trouble.  Did he make it much worse?  Yes.  The aftershocks of February 11th are still playing out.  Some have pointed out that Nokia was already losing market share rapidly before February 11th.  That’s true, but they were also still growing unit volumes - only losing market share because they were growing much slower than the market.  After February 11th their unit sales were in free-fall.  That’s the difference between having time to make a turnaround while falling behind competitors and heading towards bankruptcy very fast.  However, having previously tried and failed to rebuild for a couple of years, there’s far from any guarantee another couple of years would have helped.

Is he the worst CEO ever?  I seriously doubt it.  However, by deciding to take the easier way out - throwing everything away and counting on Microsoft to solve the software side rather than fixing a broken software development organisation - and then making a catastrophic comms error, he has given himself a really good chance of going down in history as one of the most value-destructive CEOs ever.  Either that, or he really is a Microsoft trojan and a brilliant strategist: one small comms “misfire” and lots of sincere-looking effort to make things work turns the world’s largest device maker into a captive Microsoft OEM for the rest of its history.  However, in cases of potential conspiracy, 99 times out of 100 the truth is far more cock-up than conspiracy.

* I read wise words to that effect somewhere recently and can’t find them again to attribute the source.  If I stole your thought, please comment and I’ll link to the original.

** And some of this same team have formed a new company, Jolla, to continue making MeeGo-based smartphones.  I think there’s a market for geek/hacker phones that’s big enough to support a small company but their ambitions are bigger so I hope they’re going to support Android apps - if not I can’t see it working.

*** So Nokia moves from a platform (Symbian) with a new UI layer (Qt) that enables a transition to a real competitive OS solution (Linux/MeeGo) keeping some level of app compatibility, to a platform (Windows CE) with a new UI layer (Metro/.NET) that enables a transition to a real competitive OS solution (Windows 8)… just a year behind on that plan… oh, the irony.

Android and all its apps subject to the GPL, or copyleft doomed?

There’s a bit of “dull” IP law analysis that was published recently and didn’t get wide enough coverage.

http://www.brownrudnick.com/nr/pdf/alerts/Brown%20Rudnick%20Advisory%20The%20Bionic%20Library-Did%20Google%20Work%20Around%20The%20GPL.pdf

Basically, Google used some automated scripts to “clean” the Linux kernel headers in order to use them in its C library, Bionic.  In doing so, they claim that Bionic is not subject to the GPL.  This is rubbish and IMHO wouldn’t stand up in court for 5 minutes.  The author of the above paper (who is an IP lawyer, unlike me) clearly agrees.
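To make the mechanism concrete, here’s a toy sketch of what such mechanical “cleaning” amounts to.  This is my own illustration, not Google’s actual tooling - Bionic’s real scripts handle far more cases - and of course the legal question is precisely whether any amount of mechanical stripping removes the copyright:

```python
import re


def clean_header(source: str) -> str:
    """Mechanically strip comments from a C header, in the spirit of the
    automated 'cleaning' described above.  A toy illustration only."""
    # Remove /* ... */ block comments (non-greedy, spanning lines).
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.DOTALL)
    # Remove // line comments.
    source = re.sub(r"//[^\n]*", "", source)
    # Drop the blank lines left behind.
    lines = [ln.rstrip() for ln in source.splitlines()]
    return "\n".join(ln for ln in lines if ln.strip())


header = """/* SPDX-License-Identifier: GPL-2.0 */
#define PAGE_SIZE 4096  // bytes per page
struct task_struct;  /* opaque to userspace */
"""
print(clean_header(header))
```

The output keeps the bare constants and declarations while the licence header and commentary vanish - which is exactly why the argument is so contentious: the “expressive” text is gone, but everything that remains was still copied from the kernel sources.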

If Bionic is actually subject to the GPL, then so is the rest of Android and all of the applications that people have written linking to it.  This doesn’t mean everyone has to release their source code immediately.  First, as far as I’m aware, none of the kernel copyright holders has complained about the license infringement (and given that Android is spreading the Linux kernel at its fastest ever rate, possibly they won’t).  Second, if someone did complain, developers could instead stop distributing their applications until Google resolves the infringement.

In standard Linux, glibc is special in that the Linux kernel developers allow it to be licensed under the LGPL while using the kernel headers.  As such there’s a precedent here which makes my headline a little dramatic.  Quite possibly the kernel community would allow Bionic to keep its non-copyleft license as an exceptional case and the situation would be resolved.

Were Google actually to win a legal case with this technique and argument, it would set a nasty precedent in the opposite direction.  It opens the door for anyone to use Google’s scripts to automatically “clean” the headers of any copyleft project whose license they don’t feel like complying with, and link in their own proprietary code.

The reality of all of this is far less serious than my title suggests.  However it is a case of Google using free software as they see fit, without considering the wishes of the community that built it.  The kernel community should use this to force Google to play nicely - for example only permitting Google’s choice of license for Bionic on unforked versions of the kernel.  It’s not surprising that Google are willing to disregard the rights and wishes of open source developers when they’re equally happy to ignore those of large corporate entities like Oracle.

The only interpretation I can make of this, other than Google doing the famous “evil” they aren’t supposed to, is that they’re actually challenging the (fundamentally flawed) use of copyright law for software.  Their laughable defence against Oracle (“you wanted all of Java open source when you didn’t own it, now you’ve changed your mind” - true, but irrelevant to the legal position) looks like an invitation to discuss a settlement, but perhaps their lawyers have something different up their sleeves?  Right now it looks like Google believes they’re powerful enough to do as they please.

What Happened to Nokia?

On February 11th, one of the giants of the mobile world gave up control of their own fate and put it into the hands of Microsoft, the desktop software behemoth that has never managed to get a solid foothold in mobile.  How did that happen?  Did it really have to happen?  What will be the results of this bizarre marriage?

The well known history

Nokia has a global lead in smartphone volumes with Symbian.  They created the first ever smartphone with Symbian and had 100% market share - it was bound to decline! They’re just on the brink of being passed by Android with something like a third of the market each (and no, it didn’t happen in Q4 last year).

Nokia have struggled to refresh the UI on their Symbian devices and bring new innovations to market in the last few years.  They’ve been creating the Maemo/MeeGo platform as an alternative to let them move faster at the high end.  Unfortunately, they haven’t been fast enough, they’ve been losing market share at an alarming rate and they’ve ditched their plans in favour of Windows Phone… or have they?

What they actually say they’re going to do

Well, what they’ve said in the strategy announcement and subsequent developer events is that they’re going to finish refreshing the Symbian UI and finish the first proper release of MeeGo and get a device out, then switch to Windows Phone.  So Elop either believes they can’t execute on their plans over the next year successfully, or there’s something a little deeper going on.  Probably a bit of both.

The little known history

Nokia were riding high in smartphones before the iPhone, but they’d already realised their internal development was fragmented by incompatible frameworks across platforms and had started consolidating.  First they dropped Series 80 and 90, consolidating the Series 80 form factor into the S60 UI (in the E90).  Next they tried to figure out how to do cross-platform software development in order to increase their R&D efficiency.  Then the iPhone struck, and finger-operated touch UIs were THE thing to have.  Having just consolidated UI frameworks, and trying to go further, they weren’t likely to build a new one or resurrect the old, so they added touch support to S60.  This was predictably (easy to say with hindsight, but I did also say it at the time, honest) a big mess.  There was also the issue of fingers being useless for input in some of Nokia’s big markets, and resistive touch technology being cheaper than capacitive.  Easy decision to go with resistive, no?

Trolltech was acquired to solve the cross-platform problem, but it was the wrong technology for the new UX.  When the painful process of porting Qt to Symbian was completed, emulating the existing S60 5th Edition UI components, they still didn’t have a competitive UI toolkit.  At this point I’m sure there remained some important folks in Nokia who didn’t believe 5th Edition was that far behind the iPhone.  The trolls of course, being exceedingly smart and great engineers, realised they had the basis for animated mobile UIs but needed to build a better framework - one that allowed designers to design and engineers to code.  The wonder of QML was being created by a small crack team within Nokia.

Unfortunately not everyone was on board with the ‘trolls as Nokia saviours’ plan.  There’s also not much you can do to accelerate the kind of work involved in creating a new framework.  While the Trolltech acquisition was ongoing, a team of Nokians had created an alternative native Symbian animated UI framework, and another team built the Maemo 5 UI with GTK+.  The Symbian version, named Hitchcock/Alf, had horrible APIs to code for and an evil back end architecture.  As such it was doomed to failure.  There are, however, a few applications re-written with it in Symbian^3.  The Maemo 5 version was known very early in its development to be a dead-end too, to be replaced with a Qt-based UI.

However, the directive from the top was that Qt is the future, so the Symbian devices folks that had failed to build a decent new Symbian UI framework started re-building the S60 framework (Avkon) with Qt - an open source project known as Orbit and then renamed UIEMO.  From the outside it seems there was no proper requirement spec for this new framework, and the engineers were incompetent.  Now, by that I don’t mean that they were incompetent engineers - far from it.  They were incompetent at designing APIs for 3rd party developers (a very specialist engineering skill) and they were incompetent at designing UIs (which most engineers are, myself included).  Unfortunately they were doing both, as evidenced by the code and by the comment of one Nokia designer at a Symbian Foundation meeting who was publicly cornered into revealing that the S^4 UI design patterns had been reverse engineered from the code.

The Maemo/MeeGo folks also built their own new UI framework with Qt, libdui.  I didn’t look at it in detail, but a strong MeeGo supporter reviewed both and concluded that the Symbian version was more mature.  A Nokia insider whose opinion I respect also told me that libdui was a complete mess.  The higher-level problem is that both teams had built the wrong thing.  They built frameworks for their own platforms on top of a cross-platform framework.  Both tried somewhat to make their frameworks cross-platform and presumably replace one another as THE framework.  Both got canned - technically libdui lives on in MeeGo, but it was deprecated before launch.

There’s a common misconception that Nokia wasted a lot of time opening the source to Symbian while Apple and Android were running away with the market.  This is simply nonsense.  The IP checks and configuration management changes would have taken at most a couple of weeks on average for every developer in the Symbian development organisation (some will have done a lot more while others weren’t involved).  On the other hand, between these two phases of dead-end UI framework re-invention and the apps built on top of them, there must have been a couple of thousand man-years of development wasted.  This was a horrific management failure: first in not breaking the technology strategy down even a couple of levels to the point where everyone was on the same page, and second in not recognising the problem and fixing it much, much sooner in the development process.

The slowest “quick fix” ever?

Symbian^3 was intended to be a quick fix to remove the worst niggles in the UI while a replacement UI was in development.  As such, the decisions to include a new comms framework, a new graphics architecture and re-written UIs for several applications using a new but dead-end UI framework were “interesting”.  I don’t know how much was originally planned and how much was added as it became clear S^4 was late, but with a fragile UI layer this was bound to cause delays.  It undoubtedly hurt Symbian device sales heavily in the first 3 quarters of 2010 - particularly at the high end, where Nokia simply didn’t have a competitive device and had to price cut aggressively to prevent even heavier market share declines than they saw.  The plus side is that S^3 is in pretty good shape for further Qt-based updates to improve the UI.

A glimmer of hope

When Anssi Vanjoki took charge of device creation at Nokia last summer, it seemed that he managed to put the ship back on course.  The badly built frameworks got canned and the focus moved to QML to rebuild the UI.  The device roadmap was slashed to simplify platform development, reduce costs and enable the much needed promise to provide continuous firmware updates to S^3 devices and beyond.  Unfortunately Vanjoki’s reforms came just before Stephen Elop was given the CEO’s role, and Vanjoki walked out in protest (FWIW, I don’t think he would have made a great CEO for Nokia, but he was a big loss).

The three options!?

Elop started out looking at the company’s position (rapidly worsening market share and profitability) and options, apparently with the help of some external consultants.  Nokia’s current strategy was slightly complex because they were trying to solve some tough problems.  They needed a new platform, but couldn’t just start from scratch without risking alienating their existing customers and developers along the way.  They needed a UI that was fresh and improved but still somehow familiar.  They were also looking to move smartphones way down market into the sub-$100 segment.  At the same time they needed their software to be easily portable across hardware platforms, to give them more flexibility and shorter product cycles.  The market needed an alternative to Google & Apple - Nokia’s biggest customers (i.e. the world’s largest network operators) were meeting to discuss what to do about it.

Oddly, to solve all of this Elop’s strategy options were narrowed down to three:

1) Keep going with the current plan (although it was clear some changes were needed to the software development organisation!)

2) Android (surely never a real option unless heavily forked - networks would rather buy a mix of devices from smaller OEMs than have a combination of Nokia & Google as a supplier!)

3) Microsoft’s Windows Phone (doesn’t solve the familiarity problem, doesn’t solve the portability problem, doesn’t currently scale down to the low end, alienates the current developer base - Ovi Store is 2nd or 3rd on most metrics).

From a technical perspective this seems an extremely shallow selection of potential options on a 1-2 year timeline.  On a purely superficial level (i.e. without this kind of analysis) it also looks like an easy choice to drop the in-house platforms.  With Google as most dangerous competitor and an ex-Microsoft exec at the helm, the Windows Phone option seems pretty obvious.

High stakes poker

It might seem from the outside as if Nokia has struggled for a long time to refresh their UI and made little progress.  The majority of commentators, including many who should know better, often equate the UI with the OS.  It’s fairly natural to conclude that another 6-12 months won’t help and the OS has to go.  Updating the UI on Symbian, where much of the look and feel was actually constrained by the S60 framework, was indeed very difficult.  With QML in place, building a better UI is mostly a case of hiring some good designers (or just getting out of the way of the ones you’ve already got).  With a new UI in place, executed well, the majority of the complaints about Symbian would vanish.  That said, the name has taken such a beating in the press that a re-brand would have been in order.

In the high stakes poker game that is the smartphone market, you could say Nokia had a hand that was promising, they just needed one more good card and a potential winner was theirs.  You might be tempted to say that they’ve instead thrown out the whole hand on a big gamble - stretching the metaphor you might say they’ve taken the hand of the rich player across the table who’s been losing heavily all night instead.  As I noted above, that’s not quite right - they’re playing the hand they’ve got for this round and joining forces with the rich loser across the table for future hands.  This is a strategic move looking further ahead at the other players that have recently come to the table and the piles of chips they’ve got sitting next to them.

How big a gamble is it?  Technically there’s not a lot wrong with Windows Phone.  Microsoft solutions have been kept out of the market because of distrust from the OEMs and networks.  From a developer perspective it’s a step backwards.  The Microsoft tools are excellent but, to quote someone else’s exceptionally well chosen words:

They basically need access to everything that the underlying OS has to offer plus interfaces to the applications that ship with the phone.

Windows Phone 7 is worse in this respect than Nokia’s current platforms, and even than the current state of Qt Mobility.  Changing the policy on unmanaged code could help, and there are bound to be additional APIs in the version that Nokia eventually ships.  The browser could be a sticking point - Microsoft’s new mobile browser UI is decent, but nobody in the mobile web world really wants to have to code for IE when all the other platforms are shipping something based on WebKit.  Operationally it’s an enormous risk.  Not many of Nokia’s existing staff will be easily motivated to work on Windows Phone devices.  Time to market is critical to avoid bleeding much more market share after announcing end-of-life for Symbian (I’ve already heard of phone shop staff telling people they should buy Android instead of Nokia because the Symbian platform is being killed off).  Flawless execution is essential to reverse market perception - something both companies have a fairly poor record on in recent history.

So, why jump?

This is something I’ve been trying to figure out since the announcement.  Also the related question - why announce it now, bringing the Osborne effect into play and demotivating the very people who will have to keep the money rolling in until the Windows Phone devices are ready?  A possible management psychology is explained well by asymco - burning the boats.  Google and Apple are very rich - Google’s revenue stream is not currently dependent on their mobile efforts, and Apple is camping in the premium space where Nokia have historically made a big chunk of their profits (which they’d really like right now to fight Google).  Apple will not be easy to displace - the biggest hope there is a shift in fashion which they fail to respond to because they think they know best.  Google can clone and give away any services Nokia hope to monetise to regain profits, until they’ve bled them dry.  Microsoft has a big pot of cash, revenue streams from the desktop and server world that will be around for several years yet, and strong motivation to avoid irrelevance.

However, Microsoft seems to gain a lot more from this deal than Nokia, plus Nokia is taking the lion’s share of the risk. Now we don’t know all the details of the deal yet and maybe Nokia feels fairly compensated for this risk with many billions in marketing support and cash to developers to build out the app offering?  There is also the fact that Nokia has a large US institutional shareholder group.  Most of those investors probably have a larger holding of Microsoft than they do of Nokia.  Although risky, it’s very possible that the deal is value adding for the pair, even if the bulk of that value shifts to Microsoft longer term as Nokia have given up control of the platform (and the software and services is where the bulk of the value is, not the hardware).  This kind of thinking has caused some to speculate on a future acquisition.  Personally I don’t see why Microsoft would need to - they’ve got what they want for now.  If it doesn’t work they’ve made a relatively small investment on top of what they’re doing anyway, plus buying Nokia would probably put other Windows Phone licensees off.  If it succeeds and Nokia is left with a marginally profitable smartphone business then Microsoft wins!  They only buy Nokia out if they try to shift to another platform to make more money.  If Nokia is left with a sub-profitable smartphone business and Microsoft is doing well out of it they can either subsidise further, or buy Nokia and strip out the other parts of the business.  Microsoft is risking little, they’ve already tried putting their weight behind all the other major OEMs at some point and got nowhere - once bitten, twice shy.

What I don’t buy is the argument that Apple and Google were innovating too fast for Nokia to keep up.  Apple put a lot of existing innovations together with a few tweaks of their own in a very well thought out package, with their usual excellent attention to detail (and characteristic control-freakery).  Since then they’ve added almost nothing that wasn’t just playing catch up to things they left out in the first dash to market.  Google’s first version of Android was frankly poor - since then they’ve done a great job of copying all the bits they were missing from elsewhere very, very fast indeed.  This is a lot easier than bringing new innovations to market!  If anyone strongly disagrees I’d love to have some genuine innovations from Google and Apple in the updates to their original releases pointed out in the comments.

The other possible reasoning is in Stephen Elop’s key phrase “sustainable differentiation” - with Nokia’s former open innovation strategy it was never clear how they’d maintain differentiation from the Chinese OEMs.  The plan was with additional services but, maps aside, they were failing very badly with efforts in that area.

What to expect next

Nokia has to try to migrate customers and developers.  Their Symbian UI refresh efforts should start to move their existing customers towards a Windows Phone look and feel.  They should probably add a Silverlight runtime to as many of their devices as possible via firmware updates too, and of course include it in new devices.  Maybe there’s something wrong with the Silverlight runtime for Symbian, but so far the message is still strongly one of Qt for Symbian.  Possibly this signals a future for Qt in the expanded budget for Series 40 within the mobile phones unit.  In that sense, Series 40 could become a lot like Samsung’s Bada: a feature phone platform with a sandboxed native application environment on top - a kind of feature phone/smartphone hybrid.

The other thing to watch out for (possible but not probable) is opportunistic poaching.  There will be a lot of Nokia employees who were very upset by the change in direction.  Many good people will leave, and competitors have already been taking the opportunity to tempt some away.  On a larger scale, someone could set up an office in Oslo, make a decent offer and tempt enough of the Trolls away to fork, or at least gain significant control of, Qt.  There’s already a semi-official endorsement of the Qt-Android port on the Qt Labs site - I doubt that would have happened with the old strategy still in place.  Google could pull in a lot of Nokia’s developers by adding Qt to the Android NDK, or RIM could have a credible native environment on top of QNX for their next generation platform.

In the slightly longer term I’d also be looking out for a strategic response by the networks - the main platforms going forward will give them very little control compared to what they’ve been used to.

Conclusion?

I still don’t know what to make of it.  I feel for the people that have been negatively impacted by this move.  I really wish that Nokia still had the sisu to go it alone - they’re clearly not the great Finnish company I once worked for.  The mobile market is a big and interesting space.  I find myself agreeing once again with asymco - there’s plenty of room for another disruptive challenger.  At the moment the most likely candidates are RIM or HP’s evolution of WebOS. Both are big and capable but that kind of scale isn’t essential either - there’s plenty of money to be made for people with the right vision and execution.