Speaking Engagements & Private Workshops - Get Dean Bubley to present or chair your event

Need an experienced, provocative & influential telecoms keynote speaker, moderator/chair or workshop facilitator?
To discuss Dean Bubley's appearance at a specific event, contact information AT disruptive-analysis DOT com

Saturday, June 17, 2017

Does Amazon's purchase of Whole Foods redefine "Edge Computing"?

Yesterday's announcement that Amazon is acquiring retailer Whole Foods has meant I've adapted this piece from an earlier draft version. I'm expecting a few people to read this who aren't normally involved in technology, telecoms or cloud sectors - and who are likely to be unfamiliar with some of the terminology. Welcome!

In a nutshell: I don't think Amazon is purely interested in disrupting the food-retail business, or creating distribution/pickup centres for online shopping. I think Amazon is also interested in local, mini data-centres for its huge cloud business. This means it may be able to disrupt other telecoms/IT businesses, and steer some "edge computing" technology standards and paths, if it plays this well. There was already a lot going on in this space - hence my originally-intended post - but this deal changes the dynamic even more.

There are a number of reasons to put data-centres (physical locations with servers) close to the "edge" of the network, near to end-users connecting with fixed broadband or mobile devices. 

Top of the list is latency, or delay: how long it takes an app or website to respond. This is partly driven by the speed of light (300,000km/s), as well as the efficiency and design of the electronics. Physically-closer data centres can mean low latency, and faster applications: critical for "realtime" uses such as IoT controls, or gaming, VR and many other areas. Low latency is a big part of the pitch for new types of network anyway (eg 5G mobile), but it also implies that speed-of-light delays must be minimised, by putting processing/storage closer to the user.
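As a rough back-of-envelope sketch (my own numbers, using the slower ~200,000km/s speed of light in optical fibre rather than the vacuum figure):

```python
# Back-of-envelope propagation delay: distance / signal speed.
# Light travels ~300,000 km/s in vacuum, but only roughly 2/3 of that in fibre.
SPEED_IN_FIBRE_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """One round trip (there and back) in milliseconds, propagation only -
    ignoring all switching, queuing and server processing delays."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_S * 1000

for d in (10, 500, 5000):   # edge site, national data-centre, intercontinental
    print(f"{d:>5} km: {round_trip_ms(d):.2f} ms")
```

Even before any processing happens, a server 5,000km away costs ~50ms per round trip: already too slow for many "realtime" targets.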

Other reasons to have edge computing include data sovereignty laws (there's a growing set of rules around cross-border data flows) and avoiding the need to ship huge volumes of data all the way to the cloud, process it remotely, and then ship it back again. Avoiding possibly-clogged core and transport networks in the middle is cheaper as well as faster. 

Network caching of big chunks of content, such as frequently-watched videos, is another reason - this has been done for years, but pushing it deeper into the network may be important as usage grows.

Edge-computing may turn out to be particularly important for things like machine-learning, and other forms of AI such as image-recognition or speech analysis, or large-scale use of sensors. That could mean sound from talking to Siri or Amazon Alexa, industrial IoT installations watching for problems with machinery, cameras from self-driving vehicles and so on. It may be that only 1% of data collected is interesting - so processing the bulk locally, to sift out the important nuggets or create alarms, could be better than shipping never-ending terabytes back to a central point.
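To make that 1% point concrete, here's a deliberately-simplified sketch of edge-side filtering - hypothetical thresholds and sensor readings of my own invention, not any real product:

```python
# Illustrative edge-filtering sketch: keep only readings that deviate enough
# from an expected baseline, and ship only those upstream to the cloud.

def edge_filter(readings, baseline, threshold=0.05):
    """Return only the 'interesting' readings: relative deviation above threshold."""
    return [r for r in readings if abs(r - baseline) / baseline > threshold]

# A stream of (hypothetical) machinery vibration readings; baseline is nominal.
readings = [100.1, 99.8, 100.0, 113.5, 100.2, 87.0]
alerts = edge_filter(readings, baseline=100.0)
print(alerts)   # only the outliers travel over the network
```

Six readings collected locally, two shipped onward: the same principle, scaled up to terabytes, is the economic argument for processing at the edge.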

There are lots of angles to edge computing, and lots of emerging standards and visions. I've recently been looking at this area a lot, and I think some participants don't realise how many possible locations for "the edge" there are. It's very badly-defined. It also needs to be seen in the context of an ongoing battle for control of "the cloud" between big Internet players like Amazon and Google and Microsoft, versus the network providers - and perhaps also enterprise/IoT specialists like GE, Cisco and IBM.

The critical point here is that the "edge" can be thought of as being "in the network" (at aggregation points or some sort of fixed/radio node), or at a user's premises, or even in the device itself. It might even turn out to be in a specific chip in the device, or we may find that devices are chained together, with a local gateway acting as a hub for "capillary" connections to individual sensors, bluetooth headsets, smart-home gadgets and so on. In theory, compute and storage could exist at any of these points (or all of them, with dynamic control).

The telecoms industry is mostly focused on MEC - originally Mobile Edge Computing, now redefined with the M for Multi-Access. In theory, MEC is the concept of distributing compute functions deeper into the telecoms network - perhaps colocated with cell-sites, local cell aggregation points, fixed-network central offices, or even with small/pico-cells inside buildings, or by the side of the road. Some in the industry position it as a way for telcos (especially mobile operators) to create a distributed cloud in order to compete with Amazon - while others are more pragmatic and just see it as a way to extend new virtualised bits of the network control itself, outside the data-centre. There are versions of MEC that couple tightly with major NFV initiatives, and others that are more independent.

The original vision of MEC - a server at every base station - now seems to be fading as the economics favour larger aggregation centres. A possible exception, however, is for in-building deployments hosting special, location-specific applications and functions - perhaps IoT control in a factory, local media-streaming for a stadium and so on. In those cases, it's not clear whether the network operator would be a classical telco, or perhaps a new "neutral host" or enterprise-owned player. This was a theme I looked at in my recent Private & Enterprise Cellular workshop (link; more on that in another post).

There are also various other approaches to edge computing - a concept called Fog is being pitched as a multi-tiered approach to distributing compute resources, and Cloudlets are another. It's very dynamic and multi-faceted, and what might work well for bulky content distribution might be inappropriate for controlling swarms of drones, or putting better security controls at the edge. Some of the network internals for 5G may themselves need to be put at the edge (perhaps identity functions, or congestion-management), and there is a desire to use that as an opportunity to host applications or data for other parties as well, as a new revenue stream.

Meanwhile, the nature of IT and web applications themselves is changing, with use of "serverless" computing architectures, and a shift to doing lots of processing at the edge for machine-learning and other tasks, including the rise of GPU processors. I recently went to see a presentation by ARM, which was talking about doing more processing on low-end IoT devices in silicon, without needing the network much at all. That's right out at the "real" edge.

[It's worth noting that ARM is owned by Japanese telco/InternetCo Softbank, which has also taken a stake in GPU vendor Nvidia and has just bought scary-walking-robot company Boston Dynamics from Google. It's perhaps the only telco to understand "edge" fully]

So... where does Amazon, and especially Whole Foods, fit into this?

At the moment, Amazon Web Services has around 40-50 main data centres, split into regions and "availability zones" (see this link). It's also got servers (mostly for content-delivery, a CDN) in various other companies' data-centres, notably telcos. Its huge scale has meant that most other providers of "hyperscale" cloud struggle to compete, beyond the very largest IT players. The telcos had high hopes for cloud computing a few years ago, but have now mostly shifted away from head-on competition with AWS.

Instead, the telecom industry is looking at MEC (and also CORD, central office rearchitected as a data-centre) as possible differentiators. By having localised compute resources at the network-edge, it sees an opportunity for monetising tomorrow's distributed, low-latency applications - as well as distributing its own (now virtualised) internal functions.

In theory, MEC could either allow telecom operators to create "distributed Amazon" rivals for a wide IT/Internet audience, or host AWS and its peers' servers locally, for a fee. In fact, the Amazon-on-MEC concept got a boost recently with Amazon's announcement of its Greengrass distributed IoT edge architecture (see link). I've spoken to some MEC players - vendors and operators - who are quite excited by this.

But now, Amazon has possibly just thrown a spanner in the works, at least in terms of the "MEC at network aggregation points for general cloud apps" story. With Whole Foods, it now has a footprint of 450-odd locations, principally in the US but also in London and elsewhere. Typically these are in city centres - and being supermarkets, they likely have good electricity supply (and even cold rooms) that could be used for servers, rather than just kale and houmous. It's not obvious why developers would prefer to negotiate with multiple telcos' MEC units - and suffer probably less-than-easy purchasing mechanisms compared to Amazon's web portal.

At the moment, Amazon has made no announcement about this. This is speculation on my part. In my view the pieces fit together quite nicely, but I have absolutely no inside track on this.

That's not to say that the acquisition isn't also - even mainly - about food retail, local distribution, maybe even drone-depots. But it does mean that network operators may have much less leverage on AWS in terms of access to large-scale, city-by-city locations for hosting data in future MEC deployment. To be fair, this doesn't impact the MEC story further out, at individual premises or by the side of the street, but there is (a) plenty of time before edge-computing proves those concepts, and (b) other opportunities for Amazon to get to those types of locations.

EDIT: One other thing to consider here is how they go from a local data-centre to the network. It may need local break-out in the network, which telcos often avoid doing. Or it could be that Amazon builds its own local wireless networks, eg using LoRaWAN for smart cities, or even gets CBRS licences for private localised cellular networks.

Just as a final note, I'll leave a quick "I told you so!" note here. OK, I got the target wrong and didn't expect an outright acquisition, but an Amazon+Retailer combination was something I thought about exactly a year before it happened.

Notes: If you'd like to get in touch with me about this topic, please contact Information AT disruptive-analysis dot com. I'll also be running another enterprise cellular workshop later in the year - drop me a message and I'll send details when they're available. I'm also writing a briefing report on MEC in my capacity as associate director of STL Partners' Future of the Network programme [link].

Monday, June 12, 2017

Data-over-Sound: An interesting approach to connectivity

I cover three main areas in my research & advisory work:
  • Communications networks & services - mobile network evolution, IoT connectivity, telco business models and policy, and so on
  • Communications applications & technologies - voice, video, messaging, UC etc
  • Communications futurism - the intersection of comms. with other domains such as AI, IoT, blockchain, VR/AR and so forth 
All are evolving at speed, sometimes linked and sometimes in orthogonal - or even opposite - directions. Sometimes the intersections of these various threads yield some surprising combinations and innovations, which are interesting to explore.

I've just written and published a white paper for a client (Chirp.io), on one such intersection - the use of audio signals for short-range communications, or Data-over-Sound. It can be downloaded here (link). The easiest way to think about it is as an alternative to NFC or QR-codes for certain applications - but usable by any device with a microphone/speaker, and with less need for physical proximity or cumbersome pairing like Bluetooth. It's applicable to both normal phones and PCs, and also a variety of IoT devices.

(As always when I write documents like this, I have a stringent set of rules about my editorial independence. Given my normal "spikiness" when I write, in practice it means I need to have broadly-aligned opinions in advance. I've turned down writing papers when I've known the client wouldn't like the views & conclusions in the final report).

The emerging Data-over-Sound sector is currently quite fragmented, and has a mix of new platform players and point-solutions, integrated into customised vertical applications. It's being used for mobile payments in India, device-pairing for UC meeting-room whiteboard applications, and even between robots. Other use-cases exist in retail, advertising/marketing, ticketing and other domains. It can use both audible and inaudible frequency ranges.
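For a flavour of how data-over-sound can work in principle - this is my own toy illustration, not Chirp.io's actual protocol, and the frequencies are invented - imagine mapping 4-bit symbols onto a ladder of near-ultrasonic tones:

```python
# Toy data-over-sound mapping: each 4-bit symbol becomes one tone in a band,
# so a message is transmitted as a sequence of tones. (Hypothetical scheme.)

BASE_HZ = 17_500   # near-ultrasonic starting frequency (assumed for illustration)
STEP_HZ = 50       # spacing between adjacent tones (assumed for illustration)

def encode(data: bytes) -> list:
    """Split each byte into two 4-bit symbols; map each symbol to a frequency."""
    tones = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):
            tones.append(BASE_HZ + nibble * STEP_HZ)
    return tones

def decode(tones: list) -> bytes:
    """Invert the mapping: recover symbols from frequencies, pair into bytes."""
    nibbles = [(t - BASE_HZ) // STEP_HZ for t in tones]
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

assert decode(encode(b"hi")) == b"hi"
```

Real systems add error-correction, synchronisation and robustness against room acoustics - but the core idea of "frequencies as symbols" is this simple.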

In some ways it's similar to technologies like WebRTC, in that it's a capability rather than a product/service in its own right. It still needs some expertise to integrate into an application - and indeed, enough people with "vision" (OK, OK, hearing & inner voice...) to recognise the possible use-cases. Ideally, it could benefit from more standards, better interoperability, the emergence of extra tools and platforms - and also some ethical standards around things like privacy & security, especially where ultrasound is used covertly.

I don't think Data-over-Sound is going to revolutionise the entire world of connectivity - the same way I'm always skeptical when people claim blockchain is "a new Internet". But I think it should be an important addition to device-to-device communications (I've never viewed NFC positively), and should yield a range of beneficial applications as awareness grows, and applications/tools mature. (And hey, who doesn't like technologies that let your phone speak R2D2 - video link)

The download link, again, is here. The paper gives some background to the technology and use-cases, as well as discussing the emerging structure of the sector.

Friday, May 19, 2017

Blockchain and the Telecoms Industry: Thoughts from TMForum Live

I’ve just returned from TMForum’s annual conference in Nice. Blockchain / distributed-ledger technologies (and even more so AI, which I’ll cover in another post) figured quite highly.

(I'm expecting this post to be read by some non-telecom people, so a bit of background is likely to be useful here)
TMForum Live is an event traditionally aimed at the IT-facing parts of the telecoms industry. This is usually called BSS and OSS in the vernacular – business and operations support systems, such as billing, ordering, customer service, network & fault management etc. TMF was originally the “TeleManagement Forum”. The event talks about top-level industry themes (5G is a hot topic, as is IoT) but couches them in terms of “monetisation” and “operationalisation”. It’s necessary back-office stuff, but sometimes a bit dry.

So for outsiders – such as blockchain specialists - looking at the telecom industry, the BSS/OSS sphere is a pretty impenetrable forest of acronyms, legacy software, IT frameworks and solutions to deal with telcos’ sprawling operational and customer-facing needs. It also showcases “catalysts” – joint R&D projects run by consortia of companies, highlighting future possibilities – which are a bit more accessible, with dozens of workgroups exhibiting demos and results of their work.

In recent years, two major trends have led to the event’s character changing significantly:
  • A blurring of the boundaries between IT systems and the telcos’ networks, as virtualisation (NFV – network function virtualisation & SDN – software defined networking) takes hold
  • An increased focus on IT systems to support new customer-facing services, or adjacent areas that telcos hope to find new roles in servicing, such as IoT platforms, content, banking and smart cities. (Yes, the dreaded word “digital” makes frequent appearances)
More mundanely, the event has looked at ways to enhance the bread-and-butter costs and effectiveness of BSS and OSS solutions. Terms such as “customer experience management” and “service assurance” are everywhere, with user-centric improvements to mobile self-care apps, contact centre automation tools, chatbots, better ways to monitor network coverage and so on.

This year, quite a few conference sessions and exhibiting vendors mentioned Blockchain. It definitely wasn’t as high-profile as AI and machine-learning, but it provoked a lot of curiosity. A year ago, few attendees would have heard of it, much less thought it relevant to telecoms. Now, there is an internal working group, a panel session linking Blockchain & IoT, at least one Catalyst project, and a significant number of TMForum’s members who are taking an interest. I spoke at a smaller event TMForum ran in Portugal a few months ago, outlining my thoughts about applications, and had a significant amount of interest.

The main use-cases being discussed for telecoms blockchain included:
  • Device identity & authentication, especially in IoT. There was a Catalyst exhibited (link) which used a Microsoft blockchain to create unique identities for medical sensors (wearable patches), via an Ericsson IoT platform, and also involving AT&T and others. This was also used for data time-stamping and asset management.
  • Smart contracts, both as a possible new "Contract-as-a-service" play for enterprise-facing telcos, but also as a way to offer and manage SLAs (service level agreements) for CSPs' own network services.
  • Mobile banking and micropayments, including for IoT-type use cases such as smart electricity grids. Again, blockchains might be used by telcos to either build complete "vertical" services for end-user, or as Enabler-as-a-Service wholesale/API plays for domain specialists.
I also had private discussions with vendors in Nice that covered a lot of other possible use-cases, including ones around NFV monetisation, fraud prevention, wholesale reconciliation and data-integrity protection. Another one that I've talked about before is use of distributed databases for new shared-spectrum usage and localised private radio networks - and that was independently mentioned by a speaker at another recent conference, the Wireless Broadband Alliance's congress in London.
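As a concrete (and heavily-simplified) illustration of the data-integrity idea mentioned above, here's a minimal hash-chain where each record commits to its predecessor - a sketch of the underlying principle, not any vendor's product:

```python
# Minimal hash-chain sketch of "data-integrity protection": each record stores
# the hash of the previous one, so tampering anywhere breaks the whole chain.
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers both its payload and its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash and linkage; any mismatch means tampering."""
    prev = "0" * 64
    for rec in chain:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"meter": "A1", "kwh": 3.2})   # hypothetical smart-grid readings
add_record(chain, {"meter": "A1", "kwh": 3.4})
assert verify(chain)
chain[0]["payload"]["kwh"] = 99                  # tamper with an old reading...
assert not verify(chain)                         # ...and verification fails
```

A real distributed ledger adds consensus, replication across parties and so on - but this is the "point problem" of integrity-protected records at its core.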

All of these areas, and others, will be discussed at the Telecoms Blockchain & AI workshop I'm running on May 31st in London. There are still some spaces available - you can sign up here (link) or email me at information AT disruptive-analysis dot com.

My general sense is that development of blockchain applications in telecoms is taking a rather different evolution path to AI. There are some big “framework” plays around telecoms AI, including massive shared “data lakes” relating to customer data, network status and other variables. These can help drive more-reliable operations, better planning and happier customers who are prepared to spend more. Conversely, interest in blockchain and distributed ledgers is (for now) much more dispersed. Individual projects and functions are looking at these as solutions for “point problems” – cheaper registries and databases, ways to secure identity, whether smart contracts could help create enforceable SLAs and so forth.

As such, it’s harder to see telcos developing a centralised, coherent “blockchain strategy” – it’s probably going to be used tactically in very isolated niches, for the next 1-2 years at least. There will be a lot of pilots and prototypes – and each domain will also have a wide range of alternative options to consider. We might see more strategic use in IoT in future, as that seems to be a focus of quite a lot of work. This fragmentation of effort also means that multiple vendors, integrators and blockchain platforms (private, but also potentially public blockchains) are likely to be relevant. As yet, there is no real centralisation of effort for telecom blockchains in the same way there is for banking and healthcare. That may be a next step, beyond the TMForum's own working group.

I'm interested in others' views about this - and it's something that the May 31st public workshop (the first I'm running) should shed further light on. (Workshop details here).

Thursday, May 11, 2017

Spectrum-Sharing: Europe & Asia need something like CBRS

The more I look at enterprise mobile, especially its focus on verticals and IoT, the more I'm convinced there needs to be a change in industry structure, regulation and network ownership/operation. And that means new spectrum policy, as well.

In particular, private licensed-band wireless networks will be essential - that is, networks (using cellular, WiFi, LPWAN or other technology) that can be directly managed by organisations that are not traditional MNOs (mobile network operators), to provide high-QoS, reliable wireless connections. I'm thinking large companies running their own networks, industrial network specialists, local cooperatives, perhaps new government-sector initiatives, and various other aggregators, outsourcers and intermediaries. These will mostly be in-building / on-campus, but some may need to be genuinely wide-area, or even national, as well.

This is in addition to enterprise-centric initiatives in the MVNO/E space, vertical activities by fixed telcos and MNOs, unlicensed-band WiFi and LPWAN deployments and so on.

 There are three main models for licensing radio spectrum today:
  • Exclusive licenses: Dedicated access to certain bands is very common today, for example for mobile networks, fixed microwave links, broadcasters, satellite access and many government-sector uses, such as military radios and radar. Particular organisations have rights to solo access to particular frequencies, in a given country/region, subject to complying with various rules on power and so forth.
  • Unlicensed (also license-exempt): Beyond some basic rules on power and antenna siting, some bands are essentially "open to all". The 2.4GHz and 5GHz bands used by WiFi, Bluetooth and many other technologies are prime examples, as well as bands used for consumer walkie-talkies and various medical and automotive applications.
  • Shared spectrum: This covers various models for allowing multiple users for certain frequencies. It could involve temporary usage (eg for event broadcast), bands that haven't been "cleared" fully and still have incumbent users that newcomers need to "work around". It might be spectrum assigned in geographic chunks, or at low power levels and mandating "polite" protocols so that multiple users can co-exist. We've seen TV "white spaces" where under-used bands are opened up to others, and so forth.
The latter approach of sharing is becoming much more important - despite continued clamour for exclusive licenses, especially from the mobile industry. Given that the demand for spectrum is rising from all sides - mobile, WiFi, utilities, broadcast, satellite, Internet and many others - and each has a different demand profile (global / national / regional and subscription / private / amenity etc), a one-size-fits-all model cannot work, given limited spectrum resources. More spectrum-sharing will be essential.

More models are now emerging for sharing spectrum bands. Depending on the details, these open up opportunities for a greater number of stakeholders. The US' innovative CBRS model (see link) for 3.5GHz is worth examining, and perhaps replicating elsewhere, especially in Europe. It is much more sophisticated - but more complex to implement - than the Licensed Shared Access (LSA) approach that Europe has leaned towards historically. In Disruptive Analysis' view this extra complexity is worthwhile, as it allows a much broader group of stakeholders to access spectrum, fostering greater innovation.
The important differentiator for CBRS is that there are three tiers of users:
  • Incumbents, primarily the military, which gets the top level of access rights for radar and other uses in the band
  • Licensed access providers which can get dedicated slices in specific geographic areas. These are "protected" but subject to pre-emption by the top tier. They will also generate revenue for the government in terms of license fees - although awards will be for shorter periods than normal bands (3 years is being discussed).
  • General access - basically this is like unlicensed access, but it has to work around the other tiers, if they are present.
To make all this work, the CBRS system needs databases of who is using what spectrum and where, and sensors to detect any changes in the top tier's usage. (The military, as the incumbent, isn't keen on spending money to actually tell the system what it's doing - so detection needs to be securely automated).
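The tiering logic itself is simple to sketch. This is my own toy simplification of what a SAS-style database decision might look like - not the actual FCC/WInnForum rules, which also handle geography, power levels and interference modelling:

```python
# Toy sketch of CBRS-style tiered access: incumbents pre-empt priority
# licensees, who in turn pre-empt general-access users on a given channel.

TIERS = {"incumbent": 0, "priority": 1, "general": 2}   # lower rank = higher priority

def may_transmit(requester_tier, active_users):
    """Grant access only if no strictly-higher-priority user is active."""
    rank = TIERS[requester_tier]
    return all(TIERS[u] >= rank for u in active_users)

assert may_transmit("general", [])                  # empty channel: fine
assert not may_transmit("general", ["priority"])    # must yield to a PAL holder
assert not may_transmit("priority", ["incumbent"])  # radar appears: vacate
assert may_transmit("priority", ["general"])        # GAA users must step aside
```

The hard engineering is in detecting the incumbent reliably and coordinating databases - but the three-tier pre-emption hierarchy is the essence of the model.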

When all this is up and running, there will be many potential user groups for shared spectrum such as this, using either the priority licenses, or general access tiers:
  • Incumbent mobile operators needing more capacity in specific areas
  • MVNOs wanting to "offload" some traffic from their host MNO networks, onto their own infrastructure, without the expense of full national coverage. This could work either alongside, or as an alternative to, WiFi-based offload or WiFi-primary models.
  • Enterprises wanting to deploy private cellular networks indoors or over large campuses (eg across an airport or oil-refinery for IoT usage)
  • Potentially, large-scale WiFi deployments in new bands, less subject to interference than mainstream unlicensed bands - although this would require devices/chipsets supporting new frequencies that are currently outside the proper WiFi standards.
  • Various "neutral host" wholesale LTE models, for example run by city authorities for metropolitan users, or cloud-providers for enterprise - or as a way to provide better indoor coverage for existing incumbent "outdoor" operators, without their needing individual infrastructure in each building. This could allow the pooling of back-end / administrative functions and costs across multiple local LTE networks in shared bands. Imagine an Amazon AWS approach to buying cellular capacity, on-demand.
  • Various approaches to roaming or "un-roaming" providers - for example, a theme-park operator or hotel owner could offer its foreign guests "free LTE" while on-site.
  • Potential new classes of cellular operator, such as an Industrial Mobile Operator (imagine GE or ABB integrating cellular access into machinery & plant equipment), various IoT platform providers, and integration opportunities with Internet, healthcare, transport and other systems.

This approach may not work for enterprise wireless users requiring national (or very broad-area) coverage, such as utility companies or transport providers. There are separate arguments for utility and rail companies getting slices of dedicated spectrum, or some other model of national sharing.

Importantly, CBRS means that LTE-U variants like MuLTEfire can be used to create private cellular networks. Coupled with cheap, virtualised (& probably cloud-based) core networks, this means that mobile networks are much more accessible to new entrants. The scale economies of national licenses will no longer apply to lock out alternative providers.

In other words, we will see consolidation of national MNOs, but fragmentation of localised MNOs (or PNOs, as some are calling private-network operators). 

While some MNOs and their industry bodies may be concerned at more competition, privately many of them acknowledge that a lot of the use-cases above cannot realistically be offered by today's industry. 

Even large MNOs can probably only pick 2 or 3 verticals to really get deep expertise in - maybe smart cities, or rail, or utilities, say. But they cannot get enough expertise to effectively build customised, small networks in all the possible contexts - car factories, ports, hospitals, mines, hotels, shopping malls, airports, public safety agencies, universities, oil refineries, power stations and so on. Each will have its own requirements, its own industry standards to observe, its own systems to integrate with, its own insurance/liability issues and so on. They need wireless for all sorts of reasons from robots to visitors - but today's MNOs will not be able to satisfy all those needs, especially indoors.

For many governments' visions of future factories, cities and public services, good quality wireless will be essential. But it will need to be provided by many new types of providers, with business models we can only guess at.

While CBRS is still at an early stage, and will be tricky to implement, we need something similar to it - with multiple tiers including a "permissionless" one - in Europe and the rest of the world. Enterprise and private cellular networks (and other licensed-band options for WiFi and LPWAN) are critical - and policymakers and regulators need to acknowledge and support this.

If you are interested in discussing this topic further, I will be running a workshop day on private cellular on May 30th in Central London, in a joint effort with Caroline Gabriel of Rethink Research. Details and booking are here: (link) or else email information AT disruptive-analysis DOT com.

Wednesday, April 12, 2017

New: Workshops on Enterprise Cellular & AI/Blockchain in Telecoms, May 30-31

I'm delighted to announce a new collaboration:

Rethink Research & Disruptive Analysis announce joint workshops on Enterprise Cellular Networks, and AI/Blockchain in Telecoms, London May 30th-31st

At the end of May, two of the leading independent thinkers in telecoms research will jointly be running small-group interactive workshops in London, addressing two of the hottest topics in telecoms technology and business models:

  • 30th May: Private Cellular Networks for Enterprise, IoT and Vertical Markets
  • 31st May: Use-cases and Evolution Paths for AI, Machine Learning and Blockchain Technologies in the Telecoms Sector
Each day will have a maximum of 30 attendees to ensure a high level of discussion and interaction. We expect a diverse mix of service providers, vendors, regulators and other interested parties such as enterprises, investors and developers. 

The sessions will combine presentations, networking opportunities, and small-group interactive discussion. Rethink Research’s Caroline Gabriel, and Disruptive Analysis’ Dean Bubley, will be the leaders and facilitators. Both are well-known industry figures, with many years of broad communications industry analysis – and outspoken views – between them.

The two events will run as separate standalone sessions, but there will be common themes and approach across both, to benefit organisations with an interest in both topics.

Enterprise & Private Cellular Networks, May 30th 

The first day will cover the rising need for businesses of many kinds to control their own, well-managed, wireless connectivity solutions. The growing use of mobile devices and the emergence of the Industrial IoT means that high-quality – often mission-critical – networks are required for new systems and applications.  

These can span both on-premise coverage (eg in a factory, office or hospital) and the wide-area (eg for smart cities or future rail networks). It is unclear that traditional mobile operators can or will be able to satisfy all the requirements for enterprise coverage – or assume legal liability for failures. Some enterprises will want to have full control for reasons of security, or industry-specific needs.

Among the topics to be discussed are:

  • Key market drivers: IoT, automation, mobile workers, industry-specific operational and regulatory issues, diffusion of wireless expertise outside of traditional telecoms providers
  • Evolution of key enabling technologies such as 5G, network-slicing, SDN, small cells and enterprise-grade IMS cores
  • Regulatory/policy issues: spectrum allocation, competition, roaming, repeaters, national infrastructure strategies and broader “Industry 4.0” economic goals
  • The shifting roles of MVNOs, MVNEs, neutral hosts and future “slice operators”
  • Spectrum-sharing approaches, including unlicensed, light-licensing and CBRS-type models. Also: can WiFi run in licensed bands?
  • Numbering and identity: eSIM, multi-IMSI, liberalised MNC codes
  • Commercial impacts, new business model opportunities & threats to incumbents
  • Vendor dynamics: Existing network equipment vendors, enterprise solution providers, vertical wireless players, managed services companies, new industrial & Internet players (eg GE, Google), implications for BSS/OSS, impact of open-source
(I've covered several of these themes in previous posts and presentations. If you want more detail about some of my thinking, see links here and here. I'll include links to Caroline's thoughts on this in subsequent posts. We will be going into a lot more depth in the workshop itself).

AI & Blockchain in Telecoms, May 31st 

The second day will consider the specific impact on the telecoms sector of two of the hottest new “buzzword” technologies in software: Artificial Intelligence (and its siblings like machine-learning) and Blockchain / Distributed Ledgers. Both have already received more than their fair share of hype: but what are the realistic use-cases and timelines for adoption? What problems do they solve, and what new opportunities do they create? Are they just re-branding exercises for “big data” and “distributed databases” respectively, when applied to telcos?

(I've been covering these areas as part of my "TelcoFuturism" research, including presenting on Blockchain at a recent TMForum event (link) and at Nexterday North last November, plus thinking about various AI intersections with telecom trends such as 5G (link). Caroline has done a large amount of work on AI / Machine Learning).

This day will benefit attendees from the telecoms industry looking at new developments, as well as those from the AI/blockchain mainstream interested in specific applications in the telco sector. It will include some basic “101” introductions, so that delegates from both sides can be sure they’re speaking each other’s language and can decode the jargon.

Among the topics to be discussed are:

  • Understanding and categorising the types of AI (machine/deep learning, image recognition, natural language etc)
  • Introduction to blockchain concepts and the complexities of “trust”
  • Review of telecoms industry structure, key trends and important components of network/IT systems
  • Where will AI have the largest impacts for telcos? Improving customer insight & experience? Improved network operations & planning? New end-user facing services such as chatbots or contextually-aware communications? B2B, B2C, or B2B2C platforms?
  • Mapping the possible use-cases for blockchains in telecoms, and current trials / status of projects – from micro-transactions, to roaming settlement & fraud prevention, data-integrity protection, or smart contracts for NFV systems
  • Impact of 5G & IoT on both AI and blockchain
  • Risks and challenges: regulatory, privacy, new competitors?
  • Vendor and supplier ecosystems and dynamics: new entrants vs. adoption by established providers
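As a flavour of the “101” introductions the blockchain day will include: the core idea of tamper-evidence and “trust” can be sketched as a toy hash-chain in a few lines of Python. This is purely illustrative (not any real distributed-ledger implementation, and the roaming-settlement field names are hypothetical):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Append a block that commits to the hash of its predecessor."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    """Re-derive every link; tampering with any earlier block breaks it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"event": "roaming-settlement", "amount": 12.5})
append_block(chain, {"event": "roaming-settlement", "amount": 7.0})
assert verify(chain)

chain[0]["data"]["amount"] = 999  # tamper with a past record...
assert not verify(chain)          # ...and verification fails
```

Real ledgers add consensus, signatures and distribution on top, but this is the kernel of the “data-integrity protection” use-case listed above.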

Reserve your place today 

Both workshops will take place at the Westbury Hotel in Mayfair, central London [link]. They will run from 9am-5pm, with plenty of time for networking and interactive discussion. Come prepared to think and talk, as well as listen – these are “lean-forward” days. Coffee and lunch are included.

Fees for attending one day: £795 / US$995 / €930 + UK VAT of 20%
Fees for attending both days: £1395 / US$1750 / €1650 + UK VAT of 20%

Reserve Now: Select Your Choice of Workshop Days

Payment can be made by credit card or PayPal, or by invoice / bank transfer: please email me at information AT disruptive-analysis DOT com for a payment-request by email, or with purchase-order details. Please also contact me for any more information.

Monday, April 10, 2017

Sources of value in voice: Asking the right questions

In the last few weeks I've been doing a lot of work on voice communications (and messaging / video / context):

  • I attended Enterprise Connect in Orlando discussing collaboration, UCaaS, cPaaS, WebRTC and related themes
  • I spoke at a private workshop, for a Tier-1 operator group's communications-service internal experts team
  • I've helped advise a client on strategy around the new European eCall in-vehicle emergency-call standard
  • I've been writing a report on VoLTE adoption and impact, for my Future of the Network research stream published by STL Partners / Telco 2.0 (Subscribe! Link here)
A common, over-arching theme is starting to form for me: the future sources of value in voice are all about SPs and vendors asking the right questions when they design new services and solutions.

Historically, most value in voice communications has come from telephony (Sidenote: voice is 1000 applications/functions. Phone calls are merely one of these). And in particular, the revenue has stemmed from answering the following:

  • Who is calling?
  • Where are they?
  • Who is being called?
  • Where are they?
  • How long did they speak for?
  • Plus (sometimes):
    • When did they call?
    • What networks were they on?
    • Was the call high-quality? (drops, glitches etc)
    • Is it an emergency?
This covers most permutations for ordinary phone calls: on-net/off-net, roaming, international and long-distance, fixed-to-mobile and so forth.
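Those historic questions map almost directly onto the fields of a classic call detail record (CDR). A hypothetical sketch in Python (the field names and the toy rating rule are my own illustration, not any operator's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CallDetailRecord:
    caller: str           # Who is calling?
    caller_location: str  # Where are they? (country code)
    callee: str           # Who is being called?
    callee_location: str  # Where are they?
    started_at: datetime  # When did they call?
    duration: timedelta   # How long did they speak for?
    network: str          # What networks were they on?
    quality_mos: float    # Was the call high-quality?
    is_emergency: bool    # Is it an emergency?

def is_billable_roaming(cdr: CallDetailRecord, home_country: str) -> bool:
    """Toy rating rule: roaming charges apply when the caller is abroad
    (emergency calls are never billed)."""
    return not cdr.is_emergency and cdr.caller_location != home_country
```

Essentially the whole legacy billing edifice is built on rules of this shape: a handful of fields, rated per minute.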

Clearly, the answers to these questions are worth a lot of money: many billions of dollars. But equally clearly, they don't seem to be enough to protect the industry from competition and substitution from other voice-comms providers, or alternative ways of conducting conversations and transactions. As a result, voice telephony services are (mostly) being bundled as flat-rate offers into data-led bundles for consumers, or perhaps per-month/per-seat fees for unified comms (or SIP trunks) for business. 

In other words, current voice revenues are being delivered based on answering fewer questions than in the past. Unsurprisingly, this is not helping to defend the voice business.

The current "mainstream" telecoms industry seems to be focused only on adding a few more questions to the voice roster:

  • Is it VoIP / VoLTE / VoWiFi? (Answer = sometimes, but "so what" for the customer?)
  • Can we use it to drag through RCS? (Answer = No)
  • How can we reduce the costs of implementation? (Answer = maybe NFV/cloud)
  • Are there special versions for emergencies? (Answer = yes, eg MCPTT and eCall)
  • Is there a role for CSPs in business UCaaS? (Answer = yes, but it's hard to differentiate against Microsoft, Cisco, RingCentral, Vonage and 100 others)
  • What do we do about Amazon Echo? (Answer = "Errrrmmmm... chatbots?")
Given the huge expense and complexity involved in implementing IMS for VoLTE, many mobile operators have very little "bandwidth" left to think about genuine voice innovation, especially given wider emphasis on NFV. What limited resources are left may get squandered on RCS or "video-calling". 

Fixed and cable operators are in a slightly better position - they have long had hybrid business models partnering with PBX/UC vendors for businesses and can monetise various solutions, especially where they bundle with enterprise connectivity. For fixed home telephony, most operators have long viewed basic calls as a commodity, and are either protected by regulators via line-rental and emergency-call requirements, or can outsource provision to third parties.

In my view, there are many other questions that can be asked and answered - and that is where the value lies for the future of voice communications. None are easy to achieve, but then they wouldn't be valuable if they were:
  • Why is the call occurring? (To buy something, ask a question, catch up with a friend, arrange a meeting or 100 other underlying purposes)
  • Where is the call being made and received (physically)? For instance: indoors, in a noisy bar, on a beach with crashing waves, in a car, in a location with eavesdroppers?
  • Is the communication embedded in an app, website or business process? 
  • Is the call part of an ongoing (multi-occasion) conversation or relationship?
  • Is a "call" the right format, with interruptive ringing and no pre-announcement? Is a push-to-talk, one-way, "whisper mode", broadcast, team or other form more appropriate?
  • Are both/all parties human, or is a machine involved as well?
  • What device(s) are being used? (eg headset, car, wearable, TV, Echo, whiteboard?)
  • Who gets to record the call, and own/delete/transcribe the recording?
  • Are the call records secure, and can they be tampered with?
  • What's the most effective style of the call? (Business-like, genial, brusque, get-to-the-point-quickly etc)
  • What languages and accents are being spoken? Can these be adjusted for better understanding? What about background noise - is that helpful or hindering?
  • Can the call add/drop other parties? Are these pre-arranged, or can they be suggested by the system in context?
  • Are the participants displaying emotion? (Happiness, anger, eagerness, impatience, boredom etc). How can this be measured, and if necessary, managed?
  • Is there a role for ultrasound and/or data-over-sound signalling before or during the call?
  • How can the call be better scheduled / postponed / rescheduled?
  • Is a normal phone number the best "identifier"? What about a different number, or a social / enterprise / gaming / secure identity?
  • Are there multiple networks involved/available for connection, or just one? What happens when there are multiple choices of access or transit providers? What happens where the last 10m is over WiFi or Bluetooth beyond the SP's visibility?
  • Is encryption needed? Whose?
  • What solutions are needed to meet the needs of specific vertical-markets or other user groups? (Banking, healthcare, hospitality, gaming etc)
  • What are the desired/undesired psychological effects of the communications event? How can the user interface and experience be improved?
  • Did the call meet the underlying objectives of all parties? How could a similar call be improved the next time?
  • How do we track, monetise and bill any of this?
In my view it is these - and many other - questions that determine the real value of voice communications. Codec choice and network QoS are certainly useful, as is (sometimes) interoperability. Network coverage is clearly paramount for mobile communications. But these should not be put on a pedestal, above all the other ways in which value can be derived from something seemingly simple - people speaking to each other.
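One way to see the gap: answering the richer questions above implies a far wider "call context" record than a traditional CDR. A hypothetical sketch (every field name here is purely illustrative, not a proposed standard):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CallContext:
    purpose: Optional[str] = None      # Why is the call occurring?
    environment: Optional[str] = None  # Noisy bar, car, beach, office...
    embedded_in: Optional[str] = None  # App, website or business process?
    thread_id: Optional[str] = None    # Part of an ongoing conversation?
    modality: str = "full-duplex"      # Or push-to-talk, whisper, broadcast
    machine_party: bool = False        # Is a bot/IVR one of the parties?
    devices: list = field(default_factory=list)  # Headset, car, Echo...
    language: Optional[str] = None     # What languages/accents are spoken?
    sentiment: Optional[str] = None    # Detected emotion, if any
    recording_owner: Optional[str] = None  # Who owns the recording?
    encrypted: bool = False            # Is encryption in use? Whose?

def answered_questions(ctx: CallContext) -> int:
    """Crude proxy for contextual richness: count the fields that have
    a meaningful (non-empty) answer."""
    return sum(1 for v in vars(ctx).values() if v not in (None, [], False))
```

A legacy CDR would score near zero on this record; the argument here is that the commercial value migrates to whoever can populate (and act on) these extra fields.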

I'm seeing various answers to some of these questions - for example, contact-centre solutions seem to be most advanced on some of the emotional analysis, language-detection and other aspects. There are some interesting human-driven psychology considerations being built into new codec designs like EVS (eg uncomfortable silences between words). MVNOs and cPaaS players are doing cool things to "program" telephony for different applications and devices. The notion of "hypervoice" was a good start, but hasn't had the traction it deserved (link). Machine-learning is being applied to help answer some of these questions - most obviously with Alexa/Siri/Assistant voice products, but also behind the scenes in some UC and contact-centre applications.

But we still lack any consistent recognition that voice is "more than calls". 99% of effort still seems to go on "person A calls person B for X minutes". Very little is being done around intention and purpose - ask a CSP "Why do people make phone calls?" and most can't give a list of the top-10 uses for a "minute". Most people still use "voice" and "telephony" synonymously - a sure-fire indicator they don't understand the depth of possibility here. And we still get hung up on replacing voice with video (they have a Venn overlap, but most uses are still voice-centric or video-centric).

Until both the telco and traditional enterprise solutions marketplaces expand their views of voice (and entrench that vision among employees, vendors and partners), we should continue to expect Internet- and IoT-based innovators to accelerate past the humble, 140yr-old phone call. Start asking the right questions, and look for ways to provide answers.