Saturday, June 17, 2017

Does Amazon's purchase of Whole Foods redefine "Edge Computing"?

Yesterday's announcement that Amazon is acquiring the retailer Whole Foods means I've adapted this piece from an earlier draft. I'm expecting a few people to read this who aren't normally involved in the technology, telecoms or cloud sectors - and who are likely to be unfamiliar with some of the terminology. Welcome!

In a nutshell: I don't think Amazon is purely interested in disrupting the food-retail business, or creating distribution/pickup centres for online shopping. I think Amazon is also interested in local, mini data-centres for its huge cloud business. This means it may be able to disrupt other telecoms/IT businesses, and steer some "edge computing" technology standards and paths, if it plays this well. There was already a lot going on in this space - hence my originally-intended post - but this deal changes the dynamic even more.

There are a number of reasons to put data-centres (physical locations with servers) close to the "edge" of the network, near the end-users connecting with fixed broadband or mobile devices.

Top of the list is latency, or delay: how long it takes an app or website to respond. This is partly driven by the speed of light (300,000 km/s in a vacuum, and slower in fibre), as well as the efficiency and design of the electronics. Physically-closer data centres can mean lower latency, and faster applications: critical for "realtime" uses such as IoT controls, gaming, VR and many other areas. Low latency is a big part of the pitch for new types of network anyway (eg 5G mobile), but it also implies that speed-of-light delays must be minimised, by putting processing/storage closer to the user.
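
To put rough numbers on that, here's a back-of-envelope sketch (my own illustration, assuming the common rule of thumb that signals in fibre travel at roughly two-thirds of c):

```python
# Back-of-envelope check on why distance matters: light in fibre
# travels at roughly 200,000 km/s (about 2/3 of c in a vacuum), so
# distance alone sets a hard floor on round-trip latency, before any
# switching or processing delay is even counted.

SPEED_IN_FIBRE_KM_S = 200_000  # approximate; c in vacuum is ~300,000 km/s

def min_round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds for a given one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_S * 1_000

for km in (10, 100, 1_000, 5_000):
    print(f"{km:>5} km away -> at least {min_round_trip_ms(km):.1f} ms round-trip")
```

A data-centre 1,000 km away can never respond in less than about 10ms, however good the software - which is why millisecond-class latency targets push compute towards the edge.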

Other reasons to have edge computing include data sovereignty laws (there's a growing set of rules around cross-border data flows) and avoiding the need to ship huge volumes of data all the way to the cloud, process it remotely, and then ship it back again. Avoiding possibly-clogged core and transport networks in the middle is cheaper as well as faster. 

Network caching of big chunks of content, such as frequently-watched videos, is another reason - this has been done for years, but pushing it deeper into the network may be important as usage grows.

Edge computing may turn out to be particularly important for things like machine-learning, and other forms of AI such as image-recognition or speech analysis, or large-scale use of sensors. That could mean audio from talking to Siri or Amazon Alexa, industrial IoT installations watching for problems with machinery, camera feeds from self-driving vehicles and so on. It may be that only 1% of the data collected is interesting - so processing the bulk locally, to sift out the important nuggets or create alarms, could be better than shipping never-ending terabytes back to a central point.
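
As a toy illustration of that sift-locally pattern (the function names and the threshold below are invented for this sketch, not taken from any real deployment):

```python
# Toy edge-filtering loop: inspect raw sensor samples locally and only
# forward the anomalous ~1% upstream, instead of streaming everything
# to a central cloud. Names and threshold are purely illustrative.
import random

ANOMALY_THRESHOLD = 0.99  # illustrative: flags roughly 1% of readings

def read_sensor() -> float:
    """Stand-in for a real local sensor read; returns a value in [0, 1]."""
    return random.random()

def send_alert(sample: float) -> None:
    """Stand-in for an upload to the central cloud."""
    print(f"alert: anomalous reading {sample:.3f}")

for _ in range(100_000):
    sample = read_sensor()
    if sample > ANOMALY_THRESHOLD:   # ~1% of samples cross the threshold,
        send_alert(sample)           # so the backhaul carries 1% of the raw volume
```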

There are lots of angles to edge computing, and lots of emerging standards and visions. I've recently been looking at this area a lot, and I think some participants don't realise how many possible locations for "the edge" there are. It's very badly-defined. It also needs to be seen in the context of an ongoing battle for control of "the cloud" between big Internet players like Amazon and Google and Microsoft, versus the network providers - and perhaps also enterprise/IoT specialists like GE, Cisco and IBM.

The critical point here is that the "edge" can be thought of as being "in the network" (at aggregation points, or some sort of fixed/radio node), at a user's premises, or even in the device itself. It might even turn out to be in a specific chip in the device, or we may find that devices are chained together, with a local gateway acting as a hub for "capillary" connections to individual sensors, Bluetooth headsets, smart-home gadgets and so on. In theory, compute and storage could exist at any of these points (or all of them, with dynamic control).


The telecoms industry is mostly focused on MEC - originally Mobile Edge Computing, now redefined with the M for Multi-Access. In theory, MEC is the concept of distributing compute functions deeper into the telecoms network - perhaps colocated with cell-sites, local cell aggregation points, fixed-network central offices, or even with small/pico-cells inside buildings, or by the side of the road. Some in the industry position it as a way for telcos (especially mobile operators) to create a distributed cloud in order to compete with Amazon - while others are more pragmatic and just see it as a way to extend new virtualised bits of the network control itself, outside the data-centre. There are versions of MEC that couple tightly with major NFV initiatives, and others that are more independent.

The original vision of MEC - a server at every base station - now seems to be fading, as the economics favour larger aggregation centres. A possible exception, however, is for in-building deployments hosting special, location-specific applications and functions - perhaps IoT control in a factory, local media-streaming for a stadium and so on. In those cases, it's not clear whether the network operator would be a classical telco, or perhaps a new "neutral host" or enterprise-owned player. This was a theme I looked at in my recent Private & Enterprise Cellular workshop (link; more on that in another post).

There are also various other approaches to edge computing: Fog is being pitched as a multi-tiered approach to distributing compute resources, and Cloudlets are another concept. It's very dynamic and multi-faceted, and what might work well for bulky content distribution might be inappropriate for controlling swarms of drones, or for putting better security controls at the edge. Some of the network internals for 5G may themselves need to be put at the edge (perhaps identity functions, or congestion-management), and there is a desire to use that as an opportunity to host applications or data for other parties as well, as a new revenue stream.

Meanwhile, the nature of IT and web applications themselves is changing, with use of "serverless" computing architectures, and a shift to doing lots of processing at the edge for machine-learning and other tasks, including the rise of GPU processors. I recently went to see a presentation by ARM, which was talking about doing more processing on low-end IoT devices in silicon, without needing the network much at all. That's right out at the "real" edge.

[It's worth noting that ARM is owned by the Japanese telco/InternetCo SoftBank, which has also taken a stake in GPU vendor Nvidia, and has just bought scary-walking-robot company Boston Dynamics from Google. It's perhaps the only telco to understand "edge" fully.]

So... where does Amazon, and especially Whole Foods, fit into this?

At the moment, Amazon Web Services has around 40-50 main data centres, split into regions and "availability zones" (see this link). It's also got servers (mostly for content-delivery, a CDN) in various other companies' data-centres, notably telcos. Its huge scale has meant that most other providers of "hyperscale" cloud struggle to compete, beyond the very largest IT players. The telcos had high hopes for cloud computing a few years ago, but have now mostly shifted away from head-on competition with AWS.

Instead, the telecoms industry is looking at MEC (and also CORD - Central Office Re-architected as a Datacentre) as possible differentiators. By having localised compute resources at the network edge, it sees an opportunity for monetising tomorrow's distributed, low-latency applications - as well as distributing its own (now virtualised) internal functions.

In theory, MEC could either allow telecom operators to create "distributed Amazon" rivals for a wide IT/Internet audience, or host AWS and its peers' servers locally, for a fee. In fact, the Amazon-on-MEC concept got a boost recently with Amazon's announcement of its Greengrass distributed IoT edge architecture (see link). I've spoken to some MEC players - vendors and operators - who are quite excited by this.
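
To give a flavour of that, a minimal Greengrass-style function might look something like the sketch below. The topic name and threshold are my own inventions, and this is only an outline of the published pattern - check AWS's documentation for the real SDK details:

```python
# Sketch of a Lambda-style function deployed to a Greengrass core
# device: it reacts to local sensor messages and publishes alerts
# locally, with no round trip to a distant AWS region. The topic
# name and threshold here are invented for illustration.
import json
import greengrasssdk

client = greengrasssdk.client('iot-data')

def function_handler(event, context):
    reading = event.get('temperature')   # e.g. a local MQTT message
    if reading is not None and reading > 80:   # illustrative threshold
        client.publish(
            topic='factory/line1/alerts',
            payload=json.dumps({'alert': 'overheat', 'value': reading}),
        )
```

The point is architectural: the function runs and responds on a local box, with only the occasional alert (or summary) heading back to a distant AWS region.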

But now, Amazon has possibly just thrown a spanner in the works, at least in terms of the "MEC at network aggregation points for general cloud apps" story. With Whole Foods, it now has a footprint of 450-odd locations, principally in the US but also in London and elsewhere. Typically these are in city centres - and being supermarkets, they likely have good electricity supply (and even cold rooms) that could be used for servers, rather than just kale and houmous. It's not obvious why developers would prefer to negotiate with multiple telcos' MEC units - and suffer what are likely to be less-than-easy purchasing mechanisms - compared with Amazon's web portal.

At the moment, Amazon has made no announcement about this. This is speculation on my part. In my view the pieces fit together quite nicely, but I have absolutely no inside track on this.

That's not to say that the acquisition isn't also - even mainly - about food retail, local distribution, maybe even drone-depots. But it does mean that network operators may have much less leverage over AWS in terms of access to large-scale, city-by-city locations for hosting data in future MEC deployments. To be fair, this doesn't impact the MEC story further out, at individual premises or by the side of the street, but there is (a) plenty of time before edge computing proves those concepts, and (b) other opportunities for Amazon to get to those types of locations.

EDIT: One other thing to consider here is how traffic gets from a local data-centre into the network. It may need local breakout in the operator's network, something telcos have often avoided. Or it could be that Amazon builds its own local wireless networks, eg using LoRaWAN for smart cities, or even gets CBRS licences for private localised cellular networks.

Just as a final note, I'll leave a quick "I told you so!" here. OK, I got the target wrong and didn't expect an outright acquisition, but an Amazon+retailer combination was something I thought about exactly a year before it happened.

Notes: If you'd like to get in touch with me about this topic, please contact Information AT disruptive-analysis dot com. I'll also be running another enterprise cellular workshop later in the year - drop me a message and I'll send details when they're available. I'm also writing a briefing report on MEC in my capacity as associate director of STL Partners' Future of the Network programme [link].

Monday, June 12, 2017

Data-over-Sound: An interesting approach to connectivity

I cover three main areas in my research & advisory work:
  • Communications networks & services - mobile network evolution, IoT connectivity, telco business models and policy, and so on
  • Communications applications & technologies - voice, video, messaging, UC etc
  • Communications futurism - the intersection of comms. with other domains such as AI, IoT, blockchain, VR/AR and so forth 
All are evolving at speed, sometimes linked and sometimes in orthogonal - or even opposite - directions. Sometimes the intersections of these various threads yield some surprising combinations and innovations, which are interesting to explore.

I've just written and published a white paper for a client (Chirp.io), on one such intersection - the use of audio signals for short-range communications, or Data-over-Sound. It can be downloaded here (link). The easiest way to think about it is as an alternative to NFC or QR-codes for certain applications - but usable by any device with a microphone/speaker, and with less need for physical proximity, or the cumbersome pairing of Bluetooth. It's applicable both to normal phones and PCs, and to a variety of IoT devices.
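
To make the underlying idea concrete, here's a naive sketch of the principle - mapping bits onto a pair of audio tones. This is a crude FSK scheme of my own, emphatically not Chirp's actual protocol:

```python
# Naive data-over-sound encoder: map each bit to one of two tones
# (crude FSK - NOT Chirp's actual protocol) to build a waveform any
# speaker can play and any microphone can capture.
import numpy as np

SAMPLE_RATE = 44_100            # standard audio sample rate, Hz
BIT_DURATION = 0.05             # 50 ms per bit -> a slow 20 bit/s
FREQ_0, FREQ_1 = 1_000, 2_000   # audible tones; "inaudible" schemes sit near 18-20 kHz

def encode(bits: str) -> np.ndarray:
    """Return a mono waveform encoding the given bit string."""
    t = np.linspace(0, BIT_DURATION, int(SAMPLE_RATE * BIT_DURATION), endpoint=False)
    tones = [np.sin(2 * np.pi * (FREQ_1 if b == '1' else FREQ_0) * t) for b in bits]
    return np.concatenate(tones)

waveform = encode('10110001')   # write to a WAV file, or play through any speaker
```

Decoding is the mirror image: slice the recording into bit-length windows and check which tone dominates each one (eg via an FFT). Real products layer synchronisation, error-correction and robustness to room acoustics on top of this.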

(As always when I write documents like this, I have a stringent set of rules about my editorial independence. Given my normal "spikiness" when I write, in practice it means I need to have broadly-aligned opinions in advance. I've turned down writing papers when I've known the client wouldn't like the views & conclusions in the final report).

The emerging Data-over-Sound sector is currently quite fragmented, and has a mix of new platform players and point-solutions, integrated into customised vertical applications. It's being used for mobile payments in India, device-pairing for UC meeting-room whiteboard applications, and even between robots. Other use-cases exist in retail, advertising/marketing, ticketing and other domains. It can use both audible and inaudible frequency ranges.

In some ways it's similar to technologies like WebRTC, in that it's a capability rather than a product/service in its own right. It still needs some expertise to integrate into an application - and indeed, enough people with "vision" (OK, OK, hearing & inner voice...) to recognise the possible use-cases. Ideally, it would benefit from more standards, better interoperability and the emergence of extra tools and platforms - and also some ethical standards around things like privacy & security, especially where ultrasound is used covertly.

I don't think Data-over-Sound is going to revolutionise the entire world of connectivity - in the same way I'm always skeptical when people claim blockchain is "a new Internet". But I think it should be an important addition to device-to-device communications (I've never viewed NFC positively), and should yield a range of beneficial applications as awareness grows and applications/tools mature. (And hey, who doesn't like technologies that let your phone speak R2D2? - video link)

The download link, again, is here. The paper gives some background to the technology and use-cases, as well as discussing the emerging structure of the sector.