
Monday, November 28, 2011

Controlling telephony IN and supplementary features by app

This is a very quick post, before I dash to the airport. I'll be in Singapore for the rest of this week at Telco 2.0's New Digital Economics event, speaking on both Mobile Broadband, and also on the telco-world implications of HTML5. More to follow on that topic another time.

I was out with a friend last night, and he bemoaned the fact that he can't easily adjust the number of rings his phone does before it diverts to voicemail. (He's using an Android handset on a major operator). He said he'd been able to do something via some obscure code like *7529# , but it had taken him ages to find it - and he's a serious geek as well.

It struck me that this is the sort of thing that should be done via an app, linked to the voicemail server. (I don't know if Apple's visual voicemail allows this, or 3rd-party consumer options like ON VoiceFeed, but this is just about the normal operator-provided vmail).

More broadly, there's a ton of supplementary services and other legacy IN stuff around in the telephony space that never really gets properly thought about, except as a pain in terms of ensuring backward-compatibility. Why? Why isn't there a decent operator-provided app for voicemail, 3-way calling or whatever other features they've got? Who cares if it only works on some people's phones - and who cares whether it emulates *# codes or hooks directly into the server?
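As a trivial illustration, here's a minimal sketch (in Python, purely for readability) of what such an app could do behind the scenes: build the standard GSM "divert on no reply" supplementary-service string, with whatever ring-time the user picks. The voicemail number below is made up, and operator support for the timer parameter varies.

```python
# Hypothetical helper for a "configure my voicemail" app: constructs the
# standard GSM MMI string for call-forwarding-on-no-reply (**61*), which
# takes a no-answer timer of 5-30 seconds in 5-second steps. The app (or a
# keen user) would then dial the resulting string.

def no_reply_divert_code(voicemail_number: str, ring_seconds: int) -> str:
    if ring_seconds not in range(5, 35, 5):
        raise ValueError("GSM only allows 5-30 seconds, in 5-second steps")
    return f"**61*{voicemail_number}**{ring_seconds}#"

print(no_reply_divert_code("+447700900123", 30))  # e.g. **61*+447700900123**30#
```

Hardly rocket science - which is rather the point.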

It's this type of thing that Martin Geddes and I mean when we say that the basic telephony service hasn't evolved. You don't need to go to HD Voice or even application-embedded voice to make a difference - nobody seems to have sat back and thought "how can we make telephony 1.0 work better, given the tools we've got at our disposal?". Yes, I'm sure there are security issues - but sort them!

Anyone got an answer to why there's no easy "configure my voicemail" app? Or even a web interface through the operator self-care portal?

[or maybe there are examples, but I'm not aware of them. In which case, an alternative question is "why don't you tell anyone about this?"]

Saturday, November 26, 2011

The top 10 assumptions the telecoms industry makes about communications services


This is a slightly-edited repeat of a post I made last year. I think it is more relevant than ever, especially as the telecoms industry sleepwalks into another generation of mistakes, notably RCSe.

Disruptive Analysis' tagline is "don't assume". It is worth quickly stepping back to understand some of the basic premises and unspoken assumptions of the telecom industry “establishment” around personal communications. These are themes and concepts burnt into the mind of the "telecoms old guard", especially standards bodies, telecoms academics and traditional suppliers; they are seen as unwritten laws.

But are they really self-evident and unquestionable? Maybe not. It needs to be remembered that the telecom industry has grown up around the constraints and artificial boundaries of 100-year old technology (numbering, for example, or the linking of a conversation's length to its value). Many of those unnatural constraints no longer apply in an Internet or IP world - it is possible to more accurately replicate society's interactions - and to extend them way beyond normal human modes of communication.

For any telecoms company wanting a continuing role for the next 100 years of the industry, it is worth going back to first principles. It is critical that everything previously taken for granted is reassessed in the light of real human behaviour - because we now have the tools to make communications systems work the way that people do, rather than forcing users to conform to technology's weird limitations.

For instance, we currently see the weird phenomenon of companies pushing so-called “HD” (high-definition) voice, as if it’s an amazing evolution. Actually, they mean “normal voice”, as it comes out of our mouths when we speak. We’ve only been using low-def voice because the older networks and applications weren’t capable of living up to our everyday real-life communications experience. Shockingly, this has taken decades to make a “big jump”, rather than evolving gradually and gracefully as technology improved.

So, in no particular order, these are the assumptions Disruptive Analysis believes are unwarranted.
  1. A “subscription” is assumed to be the most natural way to engage with, or pay for, communications services.
  2. The most basic quantum of human communication is assumed to be “a session”.
  3. It is entirely rational to expect people to want a single presentation layer or interface, for all their various modes of communication (ie “unified” communications or messaging).
  4. Communications capabilities are best offered as “services” rather than being owned outright, or as features of another product.
  5. A phonebook is assumed to be the best metaphor for aggregating all of a person’s contacts and affiliations.
  6. A phone call (person A sets up a 2-way voice channel with person B for X minutes) is an accurate representation of human conversation & interaction, and not just a 100-year old best effort.
  7. People always want to tell the truth (presence, name, context) to others that wish to communicate with them.
  8. People are genuinely loyal to communications service providers, rather than merely grudgingly tolerant.
  9. Ubiquity is always more important than exclusivity or specialisation.
  10. The quality of a communications function or service is mostly determined by the technical characteristics of the network.
These are the types of issue discussed in much more depth in my research reports, private advisory engagements, and in the Future of Voice workshops run by myself and Martin Geddes.

If you'd like a more detailed explanation of these assumptions - and what they mean for next-generation communications business models - please get in touch. I'm at information AT disruptive-analysis DOT com

European Court of Justice ruling on ISP filtering of P2P has wider implications for DPI & policy

There has already been quite a lot of online discussion of this week's ruling by the European Court of Justice (ECJ) on whether a Belgian ISP can be forced to filter out P2P traffic on its network. (A quick news article is here, and more detailed analysis here or here)

From my point of view, the most interesting thing about this judgement seems to be that the court has taken a very dim view of the possibility of "false positives" from the DPI or whatever other system might be used to monitor the traffic. (It also has implications for the privacy aspects of data monitoring by telcos - it was that, rather than more general "neutrality" concerns, that led to the Dutch Net Neutrality law)

The term "false positive" comes (I think) from the healthcare and pharmaceutical industry, where a "false positive" is a wrong diagnosis, such as telling someone that tests show they've got a disease, when actually they don't. The opposite (false negative) is arguably even worse - that means telling them that the tests show they're clear of the disease, when actually they DO have it all along. False positives/negatives also crop up regularly in discussions of security technology (eg fingerprint recognition or lie-detectors).

In other words, DPI tends to work by blindly testing for a specific "class" or other group of traffic flows, using "signatures" to help it detect what's happening. This works OK, up to a point, and we see various ISPs blocking or throttling P2P traffic. It can also distinguish between particular web or IP addresses and various other parameters. But the issue with the ECJ judgement is that this process can't distinguish between "bad" P2P (illegal content piracy) and "good" P2P (legitimate use for distributing free content or other purposes).

A DPI system should be able to spot BitTorrent traffic, but likely wouldn't know if the content being transported was an illicit copy of New Order's True Faith, or a recording of a really terrible karaoke version with a plinky-plonky backing track you'd done yourself and released into the public domain. (To be fair, if your singing is as bad as mine, blocking its transmission is probably a service to humanity, but that's not the point here).
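To make the limitation concrete, here's a toy sketch of the kind of signature matching involved (the sample payload is invented, and real DPI products are vastly more sophisticated). The blind spot is the same though: the signature identifies the protocol, not the legality of what it carries.

```python
# Toy signature-based classifier: the BitTorrent handshake begins with the
# byte 0x13 followed by "BitTorrent protocol", so spotting the protocol is
# easy. Nothing in that match says whether the payload is pirated or not.

BT_SIGNATURE = b"\x13BitTorrent protocol"

def looks_like_bittorrent(first_packet: bytes) -> bool:
    return first_packet.startswith(BT_SIGNATURE)

print(looks_like_bittorrent(BT_SIGNATURE + b"\x00" * 8))  # True
# True Faith, or my public-domain karaoke version? The classifier can't say.
```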

If the network is actually congested, the operator can probably claim it is reasonable to block or slow down all P2P traffic. And there is possibly a legal argument about the ISP working to limit piracy, if costs are low enough for implementation (again, a separate discussion). But if the network is not congested, it's definitely not reasonable to slow down the "false-positive" legal P2P data.

The interesting thing for me is how this could apply to use cases beyond P2P - for example application-based prioritisation. What is the legal (and/or consumer protection) stance when the DPI or PCRF makes a mistake? Let's say for some reason, I use a video codec, player and streaming server to transmit non-video material - animation perhaps, or some form of machine-readable codes. If the DPI leaps to the conclusion that it's "video" and prioritises/degrades/"optimises"/charges extra for it, is that a false positive and therefore illegal?

According to Wikipedia "Video is the technology of electronically capturing, recording, processing, storing, transmitting, and reconstructing a sequence of still images representing scenes in motion". There are definitely applications of video-type technology for non-video content.

There are plenty of other false-positive and false-negative risks stemming from mashups, encryption, application obfuscation or simple poor definition or consumer understanding of what constitutes "Facebook". (This is especially true where a network DPI / optimisation box / TDF works *without* the direct collaboration of the third party).

Now, in my experience, DPI vendors have been very cagey about disclosing their rate of "false positives", simply saying that overall, the situation is improved for the operator. I've heard from some sources that an accuracy of 85-90% is considered typical, but I imagine it varies quite a bit based on definition (eg # of bits vs. # of flows). It's also unclear exactly how you'd measure it from a legal point of view.
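As a back-of-envelope illustration of what an accuracy figure like that implies - using an entirely invented traffic mix - consider the following:

```python
# Rough arithmetic only: assume 1,000,000 flows, 30% of them genuinely P2P,
# and a classifier that is right 85% of the time in both directions.
flows, p2p_share, accuracy = 1_000_000, 0.30, 0.85

false_positives = flows * (1 - p2p_share) * (1 - accuracy)  # legit flows flagged as P2P
false_negatives = flows * p2p_share * (1 - accuracy)        # P2P flows missed

print(round(false_positives), "legitimate flows wrongly throttled or blocked")
print(round(false_negatives), "P2P flows slipping through anyway")
```

That's roughly 105,000 innocent flows interfered with, and 45,000 missed.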

But the ECJ judgement would seem to suggest that's not good enough.

I'm quite glad that most of the vendors' product and marketing people don't work in the pharmaceuticals industry. Wrongly diagnosing 15% of patients would probably not be acceptable.... Even in other technology areas (eg anti-spam software) it would be near-useless. As I've said before about video, I think there are some very interesting use-cases for consent-based policy with involvement of the user and upstream provider, but it seems that the network *acting on its own* cannot be trusted to detect applications with sufficient accuracy, at least under stricter legal supervision from bodies like the ECJ.

Thursday, November 24, 2011

My thoughts on Ofcom's Net Neutrality statement

Ofcom has finally put out its statement on how it views Net Neutrality today. I'm quite impressed - it seems like a sensible compromise between a ton of vested interests and ideologies. If I'm honest, I'm also quite pleased because it has a broadly similar tone to my own submission to the consultation last year and my proposed "code of conduct" on this blog.

Some highlights:

  • The distinction between vanilla best-efforts "Internet Access" and "Managed Services", which can be prioritised
  • "If a service does not provide full access to the Internet, we would not expect it to be marketed as Internet Access"
  • The stipulation that the Internet Access portion should remain neutral, unless absolutely necessary for traffic management - in other words, Internet Access as a product means "The Real Internet" (TM)
  • Sensible discussion of two-sided business models
  • Recognition of the value of the "open Internet"
  • General view that competition seems to work quite well in broadband access, as long as it remains easy enough to switch between providers
  • Ofcom has seemingly booted the idea of a generalised two-sided "traffic tax" out of consideration. (Luckily, the ludicrous telco-sponsored ATKearney report on a "viable future model of the Internet" seems to have been laughed out of court - pure #telcowash that ATK should have been ashamed of)
  • Making the point that any traffic management applied to Internet Access should be non-discriminatory between similar services (eg throttle one source of video, throttle them all). This is similar to the situation the Israeli regulator on my panel at the BBTM conference last week was expounding (ie "fair" rather than "reasonable" traffic management).
  • That traffic management practices need to be made very clear at the point of sale, and on an ongoing basis
  • Reservation of capacity for managed services should not unnecessarily impede the QoS of best-efforts Internet Access. No hard-and-fast rules, but a clear "we're watching you" with an implicit threat of more specific regulation if operators misbehave.
  • Marketing should be based on average speeds rather than theoretical peaks
  • It seems that traffic management is deemed OK for dealing with congestion, but not for dealing with variable costs on uncongested networks ("traffic management may be necessary in order to manage congestion on networks"). That means that various of the video optimisation use cases I discussed in my post this morning would be forbidden.
  • Ofcom has washed its hands of dealing with "public service" content and how that should be dealt with. Basically, that's kicking the problem of the massively successful BBC iPlayer upstairs, for the politicians to decide about.
One thing to note here is that I expect all operators to have to sell "Internet Access" as part of their offers, if not by law then in practical terms. If I buy a phone contract from an 18yo trainee on a Saturday morning and ask "does it come with Internet access?" then I'll have grounds for complaints about mis-selling if it subsequently turns out that I've been sold "processed Internet-like substitute", or "I can't believe it's not Internet". Ofcom's left it a bit vague about selling Internet Access with caveats, which suggests that burying the detail in the fine print won't be acceptable either.

Overall, this strikes me as a good compromise. If operators' and vendors' QoS systems can genuinely improve on what today's best-effort Internet can do, without damaging the utility of that vanilla Internet Access for other customers, then it's entirely reasonable for them to monetise it. Making operators sell the vanilla service in terms of average speeds (ideally average latency as well) means that the minimum QoS floor is defined when the customer buys the service. If I buy an average 2Mbit/s connection, then I don't really care if someone else gets 5Mbit/s priority turbo-boosts or whatever.

That said, as always the Devil will be in the detail. An average 2Mbit/s is pretty useless if it's really spiky. But Ofcom has reserved the right to give harder targets for QoS in future, which will likely be punitive if operators transgress what's reasonable. It seems to suggest that enough capacity to watch Internet video - and therefore, implicitly, VoIP - is likely a reasonable compromise for best-effort access.
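A trivial example, with invented numbers, of why an average on its own can mislead:

```python
# Two connections with the same 2Mbit/s mean throughput - one steady, one
# spiky. The headline "average speed" is identical; the experience isn't.
from statistics import mean

steady = [1.9, 2.0, 2.1, 2.0, 2.0, 2.0, 1.9, 2.1]   # Mbit/s samples
spiky = [0.2, 6.0, 0.1, 5.5, 0.3, 3.5, 0.2, 0.2]

for name, samples in (("steady", steady), ("spiky", spiky)):
    print(f"{name}: mean {mean(samples):.1f} Mbit/s, worst sample {min(samples):.1f}")
```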

This fits with my general view that operators are not going to be able to "monetise" stuff that works perfectly well now, just by putting a tollgate in the way. They'll have to add value and make it better - implicitly meaning that network QoS has to fight its corner for content companies and developers, against other options like better software/codecs/UI. In other words, if operators want a revenue-share from Google or Facebook, they're going to have to help them earn MORE money than they're already doing, not try and take a share of their existing business.

I suspect that the disparaging term "OTT" might finally disappear when T-Mobile's account director sits down with his YouTube counterpart to agree a new affiliate deal....


[Please sign up for my blog email distribution list - see form on top-right of the web page. If you're reading on RSS, then please click through in the browser!]

Mobile video optimisation - different perspectives

I had a briefing call with one of the mobile video optimisation companies yesterday, discussing the role played by network boxes that can compress, rate-limit and otherwise change a downloaded video stream.

It's an area I've been critical of for a while - especially where the network 'transparently' alters video content, without the explicit consent of either video publisher or end-user. I recognise that's a useful short-term fix for congested networks as it can shave 20-30% off of data throughput, but I also think that it's not a viable model in the long term.

Basically, publishers don't like the idea that some device in the data path "messes with our content", and in many cases users and regulators won't like it either. Some of the vendors suggest that reducing video "stalling" by optimisation actually improves the quality of user experience (QoE), but there are also other approaches to that which are more "consensual".

It was interesting to read the other day that Verizon's Video arm (formerly V-Cast) is working with its network teams to pre-define properly "optimal" formats for video depending on network connection, device type (ie screen & processor) and even user dataplan. In my mind, that's totally acceptable, as the content publisher/aggregator is working hand-in-hand with the network team to create a balance between QoE, network integrity and "artistic integrity". Collectively, they've thought about the trade-offs for both network and user in terms of quality, cost and performance. Presumably they also looked at the role of CDNs, adaptive bitrate streaming, on-device caching and assorted other clever mobile-video tech. According to a related article, they can't use WiFi offload because their content rights agreement is for 3G/4G only - another good illustration of the complexities here. All in all, it sounds like a great example of "holistic" traffic management, working with the various constraints imposed. I assume they have a roadmap for further evolution as it all matures.
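As a purely illustrative sketch of that kind of pre-agreed approach (the profiles and thresholds below are invented, not Verizon's), the decision is taken jointly, ahead of delivery, rather than by a box rewriting the stream in-flight:

```python
# Hypothetical profile selection agreed between publisher and network:
# pick an encoding up-front from device, radio bearer and remaining quota.
PROFILES = {"low": 400, "medium": 900, "high": 2500}   # target kbit/s

def pick_profile(screen_height_px: int, bearer: str, quota_left_mb: int) -> str:
    if bearer == "edge" or quota_left_mb < 100:
        return "low"
    if screen_height_px < 720 or bearer == "3g":
        return "medium"
    return "high"                                      # e.g. LTE plus a big screen

profile = pick_profile(screen_height_px=800, bearer="3g", quota_left_mb=1500)
print(profile, PROFILES[profile], "kbit/s")            # -> medium 900 kbit/s
```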

But there's a million miles between that philosophy, and the type of non-consensual optimisation that's increasingly common for normal Internet video traffic destined for mobile devices, where the network unilaterally decides to alter the content. We're seeing solutions getting a bit smarter here - for example, only acting if the cell is actually busy, but there are various other use cases where it's more directly about the operator's costs.

The vendor I spoke to yesterday mentioned scenarios like shared networks (where each operator pays a share of costs based on traffic volumes), or where the backhaul is obtained from a third-party fixed operator at variable cost. Another scenario I've heard (usually more for caching / CDNs) is around international transit in parts of the world with expensive connectivity. But another use case was where the network is uncongested but high-quality video was being downloaded at high speed. In that instance, "we can take 20% of the data stream off the top, and they won't notice".

Here is where we differ. In my view, if I (as a customer) pay for Internet Access, then I expect that the bits and bytes that come out of the server are the same bits and bytes that arrive on my device. The server owner thinks the same. Yes, in certain circumstances I'll accept a trade-off if it improves my QoE, as long as I am told what's happening and opt in: imagine an icon appearing, indicating "optimiser on", or a switchable user-controlled optimiser like Onavo's for reducing roaming traffic. But I'm not prepared to accept a lower quality just because the operator has been stupid enough to agree contracts for a shared network, with terms that don't take account of the types of retail broadband service it's selling to its customers.
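One hedged sketch of how "the same bits and bytes" could actually be verified - assuming the publisher exposes a checksum of the original file, which most don't today:

```python
# If an in-path "optimiser" has re-encoded the content, the hash of what
# arrives won't match the hash the server published. Content is invented.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

as_served = b"pretend this is the video file as the publisher served it"
as_received = b"pretend this is the same file after in-network 'optimisation'"

print("untouched:", sha256(as_served) == sha256(as_served))     # True
print("optimised:", sha256(as_served) == sha256(as_received))   # False
```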

"Oh, I'm sorry Mr Bubley, we know you paid £1000 for the flight, but we've downgraded you to economy class because it's a codeshare flight, and we have to pay our partner airline £500 extra per-seat for business class passengers". I'm sorry, but that's your problem and not mine.

And as for the notion that the Internet connection can arbitrarily decide that "I won't notice" if it chops up & reformats my data? Hello? How do you know that? Maybe I've encoded something into it steganographically in the background? Maybe I've PAID for content of a specific resolution, and I have a contract with the publisher? Maybe I'm willing to take the trade-off of throttled, buffered throughput, because it's going to a background app for viewing later? If I've paid for my 2GB per month, I'll use it the way I like, thank you very much. If the network's busy, fine - tell me (and/or the publisher) and I'll use WiFi, or download it later, or *I* will make the decision to drop the quality if I want instant gratification.

I've got a lot of sympathy for operators that have genuine peaks and troughs of demand, or that are constrained in terms of supply by spectrum shortages or difficulties obtaining cell sites. But I've got less sympathy for companies that sell a product ("2GB of mobile Internet access at up to 10Mbit/s") and then realise they can't deliver it, because it doesn't match up to their network's capabilities or cost structure.


But it struck me that the reason telecom operators (especially mobile ones) think this is OK is that degrading quality is at the core of their business proposition. For the last 100 years or so, we've had to deal with the fact that our natural, wideband human speech has been squished into 3kHz pipes for transport across the network. It's "optimised" (ie downgraded) at the microphone on your handset. Of course, now we have service providers trying to monetise HD voice - and expecting you to pay a premium just for transporting what your vocal cords have been putting out all along.

I'd love to know if there's internal transfer-pricing going on at Verizon for the video service. Is VZW actually "monetising" its optimisation capabilities and charging the Video department for the privilege? Or is it being done for mutual benefit, without money changing hands? I suspect the latter, which is the model I see most (but maybe not all) consensual video optimisation working out.

Oh, and if I ever go for a drink with this particular optimisation company exec, I'm going to have a word with the barman. I'll tell him to pour 80% of his pint of beer as normal, but top up the other 20% with coloured water. I'm sure he won't notice.

Tuesday, November 22, 2011

Another reason why application-based charging for mobile data won't work

I was at the Broadband Traffic Management conference in London last week, one of the largest events in the calendar for 3G data networks and policy/traffic management and charging solutions. I spoke to a wide range of vendors and operators, and moderated an afternoon stream about dealing with mobile video.

I came away from the event with a number of my beliefs about policy, WiFi offload, video optimisation and operator "politics" strengthened, and a number of new learnings and perspectives that I'll be sharing either on this blog, or in a report in early 2012. This particular post covers a couple of things about "service-based" or "application-based" charging and policy.

(As an aside: I'm going to be boycotting the BBTM event in 2012, for numerous reasons, not least of which was the ridiculous decision to host it in a place with no decent cellular coverage and £20 / day WiFi. I know from organising my own events that organisers have a lot of negotiating power with venues about the "delegate package". If the venue refuses because it has 3rd-party run WiFi with an inflexible contract [this venue used Swisscom] then go somewhere else. It's inexcusable).

I've said on numerous occasions (eg here, here and here) before that I don't believe that operators can (in general) successfully design mobile data or broadband services around application-specific policies and pricing. Despite continued hype from the industry and standards bodies, the network cannot, and never will be able to, accurately detect and classify traffic, applications or "services" on its own. With explicit cooperation from third parties, or sophisticated on-device client software hooked into the policy engine, there's a bit more of a chance.

But I continue to hear rhetoric from the network-centric side of the policy domain about creating "a Facebook data plan", or "charging extra for video", or "zero-rating YouTube". I'm a serious skeptic of this model, believing instead that policy will be more about location, time, speed, user, device, congestion and other variables, but not an attempt to decode packets/streams etc. in an effort to guess what the user is doing. However, lots of DPI and PCRF vendors have spent a lot of money on custom silicon and software to crunch through "traffic" and have promoted standards like 3GPP's new "Traffic Detection Function", and are now determined to justify the hype.

Much of the story fits with the usual attitude of punishing (or "monetising") the so-called OTT providers of applications and content, by enabling the network to act as a selective tollgate. On a road, it's easy to differentiate charges based on the number of wheels or axles a vehicle has, as you can count them. Not so true of mobile data - some of the reasons that I'm skeptical include mashups, encryption, obfuscation, offload, Web 2.0, M&A between service providers and so on. (And obviously, national or international laws on Net Neutrality, privacy, copyright, consumer protection and probably other bits of legislation).

But during the BBTM, I hit on a neat way to encapsulate the problem: timing.

Applications change on a month-by-month or week-by-week basis. The Facebook app on my iPhone looks different to me (and the network) when it gets upgraded via the AppStore. I talked to a network architect last night about the cat-and-mouse game he plays with Skype and its introduction of new protocols. Not only that, but different versions of the app, on different devices, on different OS's, all act differently. And according to someone I met at BBTM, different countries' versions of different phones might interact differently with the network too. And the OS might get updated every few months as well.

Operators can't work on timescales of weeks/months when it comes to policy and charging. The business processes can't flex enough, and neither can customer-facing T's and C's. How do you define "Facebook"? Does it include YouTube videos or web pages shared by friends viewed *inside the app*? What about plug-ins? Who knows what they're going to launch next week? What if they shift CDN providers so the source of data changes? 

Unless you've got a really strong relationship with Facebook and hear about all upcoming changes under NDA, you'll only find out after it happens. And then how long will it take you to change your data plans, and/or change the terms of ones currently in force? What's the customer service impact when users realise they're charged extra for data they thought was zero-rated?
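A minimal sketch of why a hostname-keyed "Facebook plan" rots so quickly - all hostnames below are illustrative:

```python
# The billing rule is a static list; the app's traffic sources change with
# every release or CDN migration, without the operator being told.
ZERO_RATED = {"m.facebook.com", "graph.facebook.com", "photos-a.cdn.example.net"}

def is_zero_rated(hostname: str) -> bool:
    return hostname in ZERO_RATED

print(is_zero_rated("m.facebook.com"))           # True - today
print(is_zero_rated("scontent.newcdn.example"))  # False - next week's photo host, billed
```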

And if you think that's bad, wait a year or two.

As we move towards HTML5 apps, I'd expect them to become more personalised. My Facebook app and your Facebook app might be completely different, just as my PC Facebook web page is. Maybe I've got encryption turned on, or maybe Mark Zuckerberg sets up the web server to put video ads on my wall, but not yours. Maybe I'm one of 5 million people out of 800 million who are testing a subtly different version of the app? Or one that has a Netflix integration? Websites do that all the time - they can compare responsiveness or stickiness and test alternative designs on the real audience. And because it's all web-based, or widget-based, much of that configuration may be done on the server, on the fly.

How are you going to set up a dataplan & DPI that copes with the inherent differences between dean.m.facebook.com and personX.m.facebook.com? Especially when it changes on a day-by-day or session-by-session basis?

Yes, there will still be fairly static, "big chunks" of data that will remain understandable and predictable. If I download a 2GB movie, it's going to be fairly similar today and tomorrow. Although if it's delivered with adaptive bitrate streaming, then maybe the network will find it harder to drive policy.

EDIT - another "gotcha" for application-based pricing is: How do you know that apps don't talk to each other? Maybe Facebook has a deal with Netflix to dump 8GB of movie files into my phone's memory (via the branded Facebook app & servers), which the Netflix app then accesses locally on the device? This is likely to evolve more in the future - think about your PC, and the way the applications can pass data to each other.

One last thing from the BBTM conference: we heard from several speakers (and I heard several private comments) that the big pain is still signalling load of various types, not application data "tonnage". I've yet to hear a DPI vendor talk convincingly about charging per-app or per-service based on signalling, especially as much of the problem is "lower down" the network and outside the visibility of boxes at the "back" of the network.

Yes, we'll continue to see experiments like the Belgian zero-rating one I mentioned recently. But I expect them to crumble under the realities of what applications - defined in the user's eyes, not the network's - really are, and how fast they are evolving.

UNSUBTLE SALES PITCH: if you want a deeper understanding of how application changes will impact network policy, or the fit of traffic management with WiFi offload, CDNs, optimisation, devices and user behaviour, get in touch to arrange a private workshop or in-depth advisory project with Dean Bubley of Disruptive Analysis . Email information AT disruptive-analysis DOT com

Monday, November 14, 2011

Operator fear & control over WiFi tethering is only one example of the connection-sharing threat

I've just read a really interesting piece by fellow analyst Ian Fogg which highlights how operator customisation & policy can be pushed down to smartphones, even where those devices are bought "unlocked" (or "vanilla") by the end-user.

In a nutshell, the mere act of inserting a SIM into a device like an iPhone can lead to some configuration options being locked-down to the end user - specifically, data connection APNs (the named "virtual" access points on 3G/4G networks such as iphone.operator.com) - or the ability to use tethering.

Tethering has been pretty controversial for several years now - the ability to turn a phone into a WiFi hotspot, to allow multiple devices to connect through one network access subscription. Some operators charge extra for such services, while others allow it for free on certain plans. It's becoming more widely used - I see quite a few Android and a few iPhone SSIDs when I'm in public locations.

There has been a significant push-back from users, who tend to view this as a right ("I've bought 1GB of data, why should someone else determine how I use it?"), but the leading device and OS vendors seem to have bowed to operator demands and helped block unauthorised use. Google has even limited the availability of "unofficial" tethering apps on Android. Various policy tools and DPI approaches to spotting tethering are also available - for example looking for tell-tale IE or Firefox PC browser traffic going through a connection that's otherwise obviously from a phone.
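As a rough illustration of that heuristic (the User-Agent strings are just examples, and real systems combine many more signals):

```python
# Naive tethering detection: look for desktop-browser User-Agent strings on a
# connection provisioned as a handset. Easily fooled in both directions,
# which is rather the point.
DESKTOP_MARKERS = ("Windows NT", "Macintosh", "X11; Linux")

def looks_tethered(user_agent: str) -> bool:
    return any(marker in user_agent for marker in DESKTOP_MARKERS)

print(looks_tethered("Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 Firefox/8.0"))  # True
print(looks_tethered("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) Safari"))     # False
```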

I also take the view that this is one battle that (for now) the handset vendors are fairly happy to allow the operators to win. The availability of tethering doesn't really make Apple much extra profit, but potentially limits the willingness of telcos to subsidise iPhones. Various Android device makers also sell 3G/4G USB modems or MiFi-style personal hotspots, so they'd also prefer you to buy a second device rather than use your phone as a tether. While desired by consumers, tethering is also a battery-drain, so OEMs see it as something they'd rather not encourage huge use of.

There are a couple of important related issues here though.

Firstly, operators would like to use the same control channels (which probably include things like OMA's Device Management standards) to apply to WiFi use more generally - in particular, which WiFi access points the user can log on to. This will fail though - both users and device vendors place "WiFi neutrality" much higher on the utility and importance scale than tethering, and I see attempts to lock-down or force WiFi choice as backfiring massively. This is why I have grave doubts about much of the current hype around Hotspot 2.0, ANDSF and assorted other standards aiming to give MNOs greater control over WiFi.

More generally, tethering is just one use case of a wider phenomenon I first identified a couple of years ago, called "connection sharing". This is the concept of smartphones working together to bond multiple users' data pipes, either to fill in coverage holes collaboratively, or to "multiplex" data connections together for faster connectivity.

Imagine sitting at a table with one person using an iPhone on Vodafone, another with an HTC Android on Orange, and a third with a Windows Nokia on 3UK. If they could discover each other and bolt together their connections, the three users would get much better service acting collectively. But.... the operators' data connections would be both cannibalised and commoditised. It would be impossible to enforce user-specific policy, or use the SIM for alternative applications such as Identity Management services, as the networks wouldn't know which of the three people was generating which IP packets. Not only that, but this would essentially lead to an offloading of the weakest network's traffic onto the strongest.
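A crude sketch of the multiplexing idea - invented link speeds, and ignoring all the hard parts (reassembly, ordering, security, billing):

```python
# Split one download into byte ranges, in proportion to each user's current
# link speed, so three mediocre connections behave like one decent one.
links = {"vodafone_iphone": 1.5, "orange_htc": 3.0, "3uk_nokia": 4.5}   # Mbit/s
file_size = 90_000_000   # bytes

total_speed = sum(links.values())
offset = 0
for name, speed in links.items():
    share = int(file_size * speed / total_speed)
    print(f"{name}: fetch bytes {offset}-{offset + share - 1}")
    offset += share
```

None of the three networks has any idea whose traffic it is really carrying.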

We'd also possibly see secondary markets evolve in selling "unused inventory" of data connectivity - people with good dataplans could try to sell spare capacity to other people. You can imagine an app working out that the user still has 300MB left two days before the end of the quota/billing-cycle, and trying to resell it to nearby users "second-hand". More securely, I've previously suggested the notion of "social tethering", where perhaps you could allocate a certain volume of your data allowance to be shared with your known Facebook or LinkedIn contacts if you're in the same room.

Overall, connection-sharing has the ability to change (or even destroy) multiple operator business models and services. Various operator services tied to SIMs would be completely undermined - most of the IMS/RCS story depends on keeping the link between network/SIM and the application, as do some of the NFC implementations. In the long run, the break between SIM and identity is inevitable in my view (breaking the "Tyranny of the SIM card"), but the operators' attempts to clamp down on tethering may delay it a little longer.

That said, I can see other workarounds emerging - especially WiFi Direct, which is an official WiFi Alliance standard intended to make the old and little-used peer-to-peer WiFi mode work a lot better. Rumours tell me that the telcos are not big fans of this, so it will be interesting to see if it makes it onto smartphones, and how exactly it is implemented.

I've also got a couple of other disruptive next-gen tethering options in mind as well, but I'll keep those to myself for now, or just for those consulting clients that employ me to assist them (either as poacher or gamekeeper).

Overall, I expect the current initiatives to reduce the impact of user-driven tethering by a certain amount. But in the medium term, I expect those controls to crumble - perhaps broken by the OEMs themselves. I continue to believe there's a good chance that Apple, Google or another player will suddenly push a really disruptive WiFi play of their own, and is happy to keep its "tethering powder" dry until that point.

Friday, November 11, 2011

The smoking gun - I think O2 UK has FALLING mobile data usage

Following on from my earlier post about whether mobile data usage is flattening off, I've done a quick bit of forensic analysis and modelling about Telefonica O2 UK's mobile data traffic statistics:

From Q1 2011 report: "Data traffic from mobile broadband accesses however continued to grow with total volume increasing 45% year-on-year. Despite the removal of further heavy users to ensure the best network experience to all customers, both data usage per customer and mobile broadband penetration continued growing"

From Q2 2011 report: "Data traffic from mobile broadband accesses also continued to grow with total volume increasing 31% year-on-year in the first half."

From Q3 2011 report: "Data traffic also continued to grow with total volume increasing 22% year-on-year in the first nine months. Growth in the third quarter was lower at 7.9% year-on-year following the removal of heavy data users, with all consumer data contract base sequentially growing usage."

The model is slightly complex because we don't know exact QoQ growths for 2010, and we have to reverse them out of the 6/9 month YoY stats and make estimates.

But whichever way I set the spreadsheet up, it looks like O2 UK has recorded falling absolute levels of traffic in Q3 of 2011, and perhaps Q2 as well.

EDIT - one of the commenters has found a set of numbers that doesn't have a fall, but has three quarters essentially static at +/- 1%. It might be a one-off readjustment, but it's surprisingly "elegant" if that narrow possibility holds true.

We know that Q1 2011 = 45% more than Q1 2010.

So let's put Q1 2010 = 100 and Q1 2011 = 145. Interpolating, we could have Q1/Q2/Q3/Q4 of 2010 as perhaps 100/115/125/135, or maybe 100/120/130/140, or similar. But then we can determine Q2 and Q3 2011 from the other growth rates reported.

So for example, we get:


Q1 2010 100
Q2 2010 115
Q3 2010 125
Q4 2010 135
Q1 2011 145
Q2 2011 137
Q3 2011 135


or


Q1 2010 100
Q2 2010 120
Q3 2010 130
Q4 2010 140
Q1 2011 145
Q2 2011 143
Q3 2011 140


Unless you put really silly numbers into the model (eg Q2 2010 = 250, up 2.5x QoQ) I can't see how you get anything but an absolute decline in recent months.
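For anyone who wants to reproduce the spreadsheet, here's a minimal sketch of the model. The 2010 quarterly split is my assumption; the growth rates are taken from the Telefonica statements quoted above.

```python
# Traffic indices with Q1 2010 = 100. Q1 2011 comes from the +45% YoY figure,
# Q2 2011 is backed out of the +31% H1 figure, and Q3 2011 uses the +7.9% YoY
# figure reported for the third quarter.
def implied_2011_quarters(q2010):
    q1_10, q2_10, q3_10, _ = q2010
    q1_11 = 1.45 * q1_10
    q2_11 = 1.31 * (q1_10 + q2_10) - q1_11
    q3_11 = 1.079 * q3_10
    return q1_11, q2_11, q3_11

for scenario in ([100, 115, 125, 135], [100, 120, 130, 140]):
    print(scenario, "->", [round(x) for x in implied_2011_quarters(scenario)])
# Both assumed 2010 profiles give Q3 2011 below Q1 2011 - an absolute decline.
```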

Now this is just one operator in one country. And it's possible that O2 (the original iPhone exclusive operator in the UK) has seen a lot of churn to Vodafone, EE and 3UK (which still has full flatrate dataplans and has been reporting strong data growth). And O2 has repeatedly stated that it has "removed heavy data users" and has apparently also blocked adult content by default, as well as encouraging WiFi. So this could be an isolated example, or a one-time blip before growth resumes.

O2 Germany's stats reported in the same financial statements do not demonstrate an absolute fall, but also suggest a slowing rate of growth. I'm estimating that it's down to maybe 8-10% QoQ growth, based on statements such as "Mobile data traffic also increased significantly (+51% year-on-year in the first nine months; +47% in the third quarter)"

I'm going to be looking very closely for other data points, and more importantly trends over time in growth rates - any other contributions or comments most welcome.

ARE YOU A MOBILE OPERATOR OR VENDOR WONDERING WHAT IT MEANS IF YOUR MOBILE DATA USAGE FORECASTS ARE WRONG? WHAT IF THE "DATA TSUNAMI" IS OVER? BOOK A WORKSHOP OR ADVISORY PROJECT WITH DISRUPTIVE ANALYSIS - email information AT disruptive-analysis DOT com

Has mobile data growth flattened off? Are caps & tiers working too well?

From Telefonica's Q3 report on O2UK:
"Data traffic also continued to grow with total volume increasing 22% year-on-year in the first nine months. Growth in the third quarter was lower at 7.9% year-on-year following the removal of heavy data users, with all consumer data contract base sequentially growing usage."  [Hat-tip to @twhemeier for pointing me to this]

Sign up for Disruptive Wireless & Disruptive Analysis updates via email! (Top right of page)

From Cisco's February 2011 Mobile Data Forecast from its widely-used VNI (Visual Networking Index) research team -  "Global mobile data traffic will increase 26-fold between 2010 and 2015. Mobile data traffic will grow at a compound annual growth rate (CAGR) of 92 percent from 2010 to 2015, reaching 6.3 exabytes per month by 2015."

From Ericsson's new report on mobile data traffic, press release headline "Ericsson predicts Mobile Data Traffic to grow 10-fold by 2016", on Page 12: "Data traffic grew by 100 percent between Q2 2010 and Q2 2011. The comparatively smaller quarterly growth of 8 percent between Q1 and Q2 2011 is likely to be related to seasonal variations in traffic levels"


Seasonal variations. Yes, obviously. Because personally, I always use my smartphone and 3G modem less in April than February.


I've already been saying this on Twitter for a few days, but with next week's Broadband Traffic Management conference coming up in London, I thought it was worth putting a stake in the ground via the blog as well. 


I think mobile data growth is slowing - much faster than expected. The exponential has been tamed.

Now it's possible that the "tsunami" has just receded from the shore, and is going to come in again harder still with the next wave (eg tablet-driven video). But at the very least, the smooth curves drawn in millions of infamously-wrong "scissor diagrams" are wrong. Maybe we've got a 2-stage S-curve instead....


... or maybe we've got a fast, 1-stage S-curve and then steady growth, much as you get with most technology service adoptions. I think there's a good chance we've just gone past the inflection point.


I'll keep an eye on this in later posts, but a few more bits of food for thought upfront today.


Firstly, it seems that caps and tiers are working - perhaps too well. People DO change their behaviour, operators CAN limit data growth and drive more revenue. They have closed the scissors. Maybe we don't need fancy, app/content-based tariffs, and operators can offer "happy pipe" data services, and save on opex by firing their anti-Net Neutrality lobbyists and consultants?


Secondly, there has been a huge effort by network equipment vendors to lower the "production cost" of mobile data. The mythical "cost per bit" has been chased by LTE and HSPA+, but more importantly operators have shared their networks, optimised their cell-site locations, looked at distributed architectures for GGSNs, offloaded traffic, sorted out their backhaul opex, and got vendors focusing on small cells and better space/power-efficiency. And we've also recognised that traffic growth to date has (a) been filling up empty capacity, and (b) been filling it up mostly in a few cells only.


Now let's look forward. Most of the forecasts I've seen assume both increasing penetration of smartphones / tablets / mobile-enabled laptops AND a continued growth in use per month. That's not the way things work - usually the first users are early adopters, and get more enthusiastic quickly, for the first year or two. So usage grows quicker as there is both accelerating adoption and better understanding / exploitation of their capabilities (and also, better devices/networks). And then after that, you get the late majority and laggards, who are often more "casual" users - and who look for lower price-point devices and connectivity services. A $100 low-end Android user in Africa, on a $15/month plan (or $7/month prepay) is not going to be consuming the same amount of data as a $60/month user of an iPhone 4S in Stockholm or San Francisco.


Then we've got increasing use of WiFi, and smarter, more tactical use of data allowances by end users. We've also got various content and application companies who've become smarter about their use of data in mobile - not universally, no, but for sure Apple and YouTube and Facebook all recognise that they shoulder some responsibility for helping users keep down their bills. It doesn't help Apple sell more phones if its customers have to spend more money on data overage charges, rather than upgrading to a new device. So we've had innovations like HLS Adaptive Bitrate Streaming, and Google working with operators like Orange to create more network-friendly apps and content.


Of course, this is all rather inconvenient for those network equipment vendors who are pitching complex policy-management and charging solutions, based around DPI boxes trying to work out traffic/app types and concoct complex, processing-intense approaches to billing for them. Never mind that such methods have severe deficiencies, because apps/content can evolve on the basis of days/weeks - much faster than telco processes can change billing business rules and implement them to fit.


Now we'll certainly see more data growth globally - clearly there are a lot of "un-smartphoned" users in India, China, Russia, and Africa and Latin America. There's a fair amount of growth left too in Europe and N America and developed Asia, especially with 2nd and 3rd smart devices (eg tablets), as well as M2M and so on.

And there's also a chance that we're seeing a one-time blip, as flatrate plans are ended, and the most egregious abusers of bandwidth are reined in. Perhaps we'll be back to "business as usual" with natural organic growth in 6 months' time.


But I contend that mobile data growth is *much* more manageable than we expected. The forecasts look too high to me, especially like-for-like in developed markets. Unfortunately, Cisco's VNI stats have scared the industry *so much* that companies have taken action to avoid them coming true. They've become a self-denying prophecy - something that, as a forecaster myself, I've got a lot of sympathy for. The problem with creating methodologically-rigorous predictions is that you can never factor in other people's responses to those predictions. If they're independent (eg locked away in a vault and nobody sees them until afterwards), you can gloat about your accuracy retrospectively. With some of my own less-well-propagated research I can do just that.

Unfortunately Cisco VNI (and now, Ericsson) has demonstrated what every scientist knows from quantum mechanics - the act of measuring something can actually change it.


I'm not saying that mobile data growth has completely flattened off - clearly, there is still growth. But I think it's now *manageable growth* rather than an explosion, and so some of the knee-jerk approaches to it (such as non-consensual video optimisation, or forced non-neutral WiFi offload) are no longer needed.


Edit: a full analysis of the O2 traffic numbers is here



Sign up for Disruptive Wireless & Disruptive Analysis updates via email! (Top right of page)

ARE YOU A MOBILE OPERATOR OR VENDOR WONDERING WHAT IT MEANS IF YOUR MOBILE DATA USAGE FORECASTS ARE WRONG? WHAT IF THE "DATA TSUNAMI" IS OVER? IS THERE A RISK OF OVERCAPACITY? BOOK A WORKSHOP OR ADVISORY PROJECT WITH DISRUPTIVE ANALYSIS - 
email information AT disruptive-analysis DOT com





Tuesday, November 01, 2011

Belgian MNO tries app-specific zero rating - bad idea in my view

Mobistar in Belgium has started offering a plan which gives users zero-rated Facebook, Twitter and Netlog (a local social network), as well as free access to its own mobistar.be domain. (These sites are actually fair-use capped at 1GB data per month).

[Hat-tip for this to Samira Zafar who mentioned it on one of LinkedIn's discussion forums on Net Neutrality]

Details are at http://www.mobistar.be/fr/offre/mobile/cartes-rechargeables/tempotribe

I think that this will backfire spectacularly on the company. Their terms & conditions (run through Google Translate) say:

Unlimited surfing on Facebook, Twitter and Netlog is only valid in Belgium and corresponds to a volume of 1 GB maximum.

Free surfing is only valid for surfing on the official mobile sites of Facebook, Netlog and Twitter, via:

• Mobistar's mobile portal (m.mobistar.be)
• The official URLs, such as http://m.facebook.com , http://m.netlog.be , http://mobile.twitter.com
• The official applications of these sites

The unofficial applications are not included in this option.

Links, pictures and content pointing to URLs external to Facebook, Twitter and Netlog will be charged at the normal Mobile Mail & Surf session rate.

So in other words, if you use the Facebook app, and a friend links to a web page or video which renders *inside the app* (ie with the blue banner still across the top), then you get charged extra. For me, and I expect many other users, that's part of the experience of using Facebook. Not only that, but in Facebook (certainly in the iPhone app) you don't always know if someone is sharing something on-site (eg a group page or note or picture) or off-site.
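A minimal sketch of the classification problem from the user's side - the official hostnames come from the T&Cs above, the YouTube example is mine:

```python
# The tariff zero-rates the "official" sites, but content shared *inside*
# those apps often lives somewhere else entirely.
OFFICIAL = {"m.facebook.com", "m.netlog.be", "mobile.twitter.com", "m.mobistar.be"}

def charge_for(request_host: str) -> str:
    return "zero-rated" if request_host in OFFICIAL else "billed at normal rate"

print(charge_for("m.facebook.com"))    # the news feed itself
print(charge_for("www.youtube.com"))   # a friend's video, rendered in the same app
```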



Furthermore, Facebook is now allowing its own-hosted videos to play inside the app - so where does that fit? And what happens when Facebook updates the app with new features (let's say, proxying bits of the web through itself)? Especially if it's only on some OS's, or some updated versions? Or if FB decides to route bits of its own content (pictures, let's say) through a new CDN with a different set of URLs? There are so many potential "gotchas" here it's amazing.

Then there's this official / unofficial app business. I seem to recall that Twitter has acquired half the "unofficial" apps like TweetDeck, so presumably they are now official? Or not? Who knows? Then there's the chance that twitter.com gets fired up in the browser rather than an app (perhaps because there's a link in an email) and it defaults to the PC version rather than m.twitter.


I also have my doubts that the Belgian representatives of Facebook and Twitter (if they're actively involved at all) are privy to new and upcoming features, or changes in the way the apps work - let's say, adding HTTPS encryption, for example. The list of hiccups and chances for both false positives and false negatives is huge.

Never mind Net Neutrality (which is a separate & legitimate concern here as well), this is an absolute minefield for confusion, poor customer service & complaints.


Could be an interesting case-study though - let's watch the outcome. 

[Speaking of case-studies, this post is another one. I'm not linking it to Twitter, but only driving it via LinkedIn and native / email-subscribers of this blog. The main objective is to divert the flow of people away from Twitter followers to more direct engagement with me via email or LinkedIn. Ultimately, I want to delete my Twitter account, and this is part of the migration process].