Memories from PIMRC 2015

The conference experience began with the Workshop on M2M Communications: Challenges, Solutions and Applications, driven by the EIT DIGITAL collaborative project EXAM: Energy-Efficient XHAUL and M2M. We delivered a dynamic afternoon together with partners from Nokia, Ericsson and Aalto University; presentations covered M2M topics spanning data aggregation, coverage analysis and cognitive M2M communications, while a second session focused on LTE-based M2M communications. It was interesting to see how the discussions around M2M communication are beginning to lose their sense of naivety: every presentation focused on specific solutions for specific scenarios and conditions, a sign of the increasing maturity of the area at both the academic and commercial levels.

On the first conference day, the panel on “5G: Opportunities and Challenges in Air Interface, Media Access and Resource Allocation” opened with a loud and clear statement from the moderator: “Have you tried voice over WiFi? It sucks! You actually get what you paid for.” Of course, such a statement prepared a comfortable arena for a cellular-inclined panel, which remarked that WiFi is in many cases regarded as something provided by municipalities, since the technology is not actually suitable for roaming agreements and subscriptions…

Moving on to the second conference day, the opening keynote was about disruptive technologies: a descriptive talk highlighting the importance of IoT in 5G, the relevance of WiFi for indoor scenarios, and the unsuitability of WSN compared to emerging proprietary technologies like SigFox, Cycleo, On-Ramp, and Neul. Interestingly, the presentation included the need to match disruptive technologies with novel business thinking, where there might be an industry shift with vendors dealing directly with industrial customers and then procuring the best operator to provide the connectivity services. This would come hand in hand with a recent trend in the industry to engage with industrial customers, talking to stakeholders to figure out what they need from the next communication systems.

I would like to close with the final remark given at the panel on European Activities on 5G, Looking at Vertical Markets, when the moderator asked a very simple question: what will NOT be in 5G? Two clear answers were given by the panelists: 1) cognitive radio, because it is too difficult to ensure QoS, and 2) the 1 ms delay target, considered to be way beyond 5G. I find it quite difficult to disagree with Prof. Hamid Aghvami on the latter.


The mobile industry faced some serious setbacks at WRC-15

In a recent post, Ricardo Tavares summarizes the main takeaways from WRC-15. His most surprising observation is that the mobile industry faced some serious setbacks at WRC-15, particularly when it came to securing spectrum in the lower UHF band (470–694 MHz) for 5G. In short, the broadcasters and the satellite industry turned out to be much stronger opponents than the mobile industry had anticipated. Read more at: http://techpolis.com/a-mobile-strategists-takeaway-from-wrc-15/


Trends in Mobile Payments

As the first prize winner of the Doctoral Workshop poster competition, I got the opportunity to attend a conference of my choice. I chose an industry event, the International Conference on Payment Services and Solutions, organized by InPayCo in Paris, France, on 4-5 November 2015.

A range of topics was discussed during the event:

  1. Representatives of consultancy companies presented an overview of general trends in mobile payments in the global market.
  2. Different companies presented a range of mobile payment solutions and services, sharing their experience of mobile payments developed in different countries:
    (i) A number of services developed by French start-up companies were presented (the peer-to-peer money transfer service Lydia, the mobile wallet Fivory, and the online-purchase solutions CashWay and SlimPay).
    (ii) Solutions from other countries included M-Changa (a fundraising platform from Kenya); a MasterCard representative presented a range of initiatives in developed countries, a Vodafone representative presented the penetration of M-Pesa in Romania and Armenia, and a WorldRemit representative highlighted the situation and issues related to remittances to developing countries.
    (iii) A round-table discussion focused on services for unbanked people and developing countries.
  3. Another important stream of discussion was related to the regulation of mobile payments and legislation in the EU. Several presentations and round-table discussions addressed this topic in terms of security, privacy, and regulatory change.
  4. Crypto-currencies (specifically Bitcoin) were a hot topic of discussion. Bitcoin’s specifics (its focus on P2P payments) and its reputation slow down its penetration among banks. However, the secure mechanism behind Bitcoin, the open-source blockchain, is gaining popularity in its own right: mobile payment providers and even banks use blockchain to improve the security of their services (a minimal sketch of the hash-chaining idea follows this list).
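To make the hash-chaining idea concrete, here is a minimal toy sketch (my own illustration, not any provider's implementation) of how each record in a blockchain embeds the hash of its predecessor, so that tampering with any earlier record is detectable:

```python
import hashlib, json, time

def make_block(data, prev_hash):
    """Build a minimal block whose hash covers its payload and the
    hash of the previous block; this linking is what makes tampering
    with any earlier block detectable."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
b1 = make_block({"from": "alice", "to": "bob", "amount": 5}, genesis["hash"])
b2 = make_block({"from": "bob", "to": "carol", "amount": 2}, b1["hash"])

# Changing b1's payload would change b1's hash and break the link in b2.
assert b2["prev"] == b1["hash"]
print("chain intact:", b2["prev"] == b1["hash"])
```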

To summarize, representatives of the mobile payment industry believe that payments in the future will be connected to social media and become more engaging and fun. In some services the service experience will play the main role, with less focus on the payment process; that is, the payment will become more hidden (Airbnb and Uber are examples of this today).

Payments in the future will be digitalized, meaning large savings for all market actors: merchants, banks, and governments.

Banks will need to be more connected in order to provide real-time transactions.


Connected everything: Un-engineered and Un-managed system of systems


I was an invited speaker at USRR 2015, the 3rd International Workshop on Understanding the Inter-play between Sustainability, Resilience, and Robustness in Networks, in Munich. It was co-located with RNDM, the International Workshop on Reliable Network Design and Modeling. It is a single-track workshop gathering prominent scientists and researchers in the area from around the world, sponsored by the IEEE Communications Society. Since it is a single-track event, there is room for deep, fruitful discussions between researchers during the sessions. My talk was on “Interplay between Energy Efficiency and Survivable Network Planning with Shared Backup Resources”. I have written about this interesting trade-off before, so this time I will not go into the topic again.

This year there were two keynote speakers. The first was Bjarne E. Helvik from the Norwegian University of Science and Technology, who delivered a very inspiring talk on “Dependability of non-engineered and unmanaged system of systems”. I suspect that reading these lines you are almost getting lost. What is a system of systems? Why unmanaged? Think of today’s ICT infrastructure and future trends such as IoT: connected devices, furniture, vehicles, robots, everything… If we look at this new internet of things, it is a system of systems: smart grid, smart cities, smart homes, etc., continuously growing, highly dynamic and heterogeneous. This extremely complex system is composed of sub-systems that can be seen as autonomous systems, and there is no central entity to manage or design the system as a whole. Instead, each autonomous system is designed and engineered locally, tailored to its own requirements.

“… the overall system is a result of commercial agreements rather than the outcome of an engineering process. Furthermore, there is no entity that has an overview of the overall system and may efficiently manage failure scenarios involving multiple autonomous systems.”

The second keynote was given by Roland Wessäly from Atesio, a company specializing in optimization for telecommunications. He presented results from the European FP7 project DISCUS (DIStributed Core for unlimited bandwidth supply for all Users and Services). I was involved in DISCUS as the coordinator of the KTH part three years ago, before handing the project over to my colleagues in KTH ONLAB, so it was very interesting for me to hear the results from Roland. The talk was about optimization and planning of the end-to-end DISCUS architecture, focusing on a UK scenario providing very high network availability for end-user services.

Speaking of RNDM, I should also share the good news that next year RNDM will be organized in Sweden.


Virgin Media – Chesham pilots UK’s first Smart WiFi Pavement with Virgin Media


“Not only is this the first time we’ve built metropolitan WiFi directly from our street cabinets, it is also the UK’s first deployment of a WiFi-connected pavement. It is literally public WiFi under your feet. We want to build more networks like this across the UK and encourage more forward-thinking councils just like Chesham to get in touch.”

Source: Virgin Media – Chesham pilots UK’s first Smart WiFi Pavement with Virgin Media


LTE-U: U for unwanted or underappreciated?

Confused about LTE-U, LAA? And now MuLTEfire?

Don’t worry – you are certainly not the only one. Even 3GPP struggled for a while before finally reaching consensus on this terminology. Simply put, LTE-U is the general term for a modified version of LTE to be deployed in the unlicensed bands (the 5 GHz ISM band in particular), while License Assisted Access (LAA) is one specific way to deploy LTE-U.

Ericsson introduced the concept of LAA. It lets operators use their licensed spectrum to carry control signaling and the unlicensed band only for downlink data transmission, which allows the operator to retain a certain level of control over the quality of service (QoS). The catch is that you still need a piece of licensed spectrum to employ LAA.

Then what about MuLTEfire? It is Qualcomm’s brand name for its idea of standalone LTE-U. Standalone LTE-U is another way to deploy LTE in the unlicensed band, where both control and uplink/downlink data are transmitted over the unlicensed band without any requirement for licensed spectrum. Anybody could deploy it as a standalone system, just like WiFi, hence the name. Unlike LAA, MuLTEfire is not a standardized 3GPP term.

LTE-U vs WiFi

The original idea of LTE-U is fairly straightforward. Operators need more offloading capacity, especially indoors. So 3GPP engineers asked themselves: why let WiFi have all the fun when we could also deploy LTE in the unlicensed band? After all, the claim goes, WiFi is inefficient in utilizing the spectrum, and LTE could provide much higher capacity if given access to the unlicensed band.

Technically, LTE, being a centralized system, is indeed much more spectrally efficient than the distributed, best-effort WiFi system. However, regulations in many regions require the LTE protocol to be modified considerably to adhere to certain etiquette rules, such as ‘listen-before-talk’, a measure that ensures everyone still has fair access to the unlicensed band. Therefore LTE-U would not be as efficient as standard LTE, and its performance is only moderately higher than WiFi’s. For standalone LTE-U, without the benefit of a dedicated control channel, the throughput advantage would be even less significant.
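For readers unfamiliar with listen-before-talk, here is a minimal sketch of the idea in Python. The energy-detection threshold, backoff bound and the randomized channel sensing are illustrative assumptions of mine, not the actual ETSI/3GPP channel-access parameters:

```python
import random

# Illustrative parameters of my own choosing -- not the exact
# ETSI/3GPP values for LAA channel access.
ED_THRESHOLD_DBM = -72   # energy-detection threshold: below = idle
MAX_BACKOFF_SLOTS = 15   # contention-window upper bound

def sense_channel_dbm():
    """Stand-in for a real energy measurement on the unlicensed channel."""
    return random.uniform(-95, -50)

def listen_before_talk(transmit):
    """Transmit only after the channel has been sensed idle.

    Does an initial clear-channel assessment; if the channel is busy,
    draws a random backoff and counts it down one idle slot at a time,
    freezing the counter whenever the channel is sensed busy again.
    """
    if sense_channel_dbm() < ED_THRESHOLD_DBM:
        transmit()                  # idle: transmit immediately
        return
    backoff = random.randint(1, MAX_BACKOFF_SLOTS)
    while backoff > 0:
        if sense_channel_dbm() < ED_THRESHOLD_DBM:
            backoff -= 1            # idle slot observed: count down
        # busy slot: counter freezes, keep sensing
    transmit()

listen_before_talk(lambda: print("burst sent after LBT"))
```

The random backoff is what keeps access fair: two nodes that both find the channel busy are unlikely to draw the same counter, so they rarely collide when it frees up.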

Where is 3GPP heading?

LTE-U was discussed as early as the preparations for Release 11. Technical feasibility studies followed during Release 12, and the focus has gradually shifted towards regulatory requirements and detailed protocol implementation. The LAA version of LTE-U is currently being standardized in Release 13, which is planned to come out by the end of this year.

Despite this progress, pushed mainly by the vendors, the standardization of LTE-U has been far from a smooth journey. One of the major reasons is the lack of interest, and sometimes the open hostility, from many operators. One would assume a technology coming out of 3GPP would have the operators’ best interests at heart, so why are they not buying it?

Why is LTE-U not popular among operators?

I see two main reasons behind their opposition: licensed spectrum and WiFi hotspots.

It may seem counter-intuitive that the operators are against deploying LTE in unlicensed bands while at the same time asking for more capacity and spectrum. But in fact what they want is not just any spectrum, but specifically licensed spectrum – an exclusive strategic asset that has always been the main entry barrier to the mobile service industry. The idea that anyone could deploy standalone LTE-U without licensed spectrum and offer competitive mobile services is horrifying for these operators. It would substantially dilute the strategic value of the licensed spectrum that they have spent so much to get hold of.

Some operators consider License Assisted Access a good compromise that allows them to enjoy the advantages of LTE technology in the unlicensed band while keeping the entry barrier intact. However, a few ardent antagonists of LTE-U worry that the standardization of LAA would put 3GPP on a dangerous slippery slope towards the eventual inclusion of standalone LTE-U.

The latter group of operators shares a common trait: heavy investment in WiFi hotspots compared to the operators who support LAA. Why should the dominant players endorse a new technology that would give the underdogs a potential edge in quality of service? Concerns about coexistence with WiFi have also been raised as a main argument against LTE-U: because LTE is more resilient to interference, it might overwhelm its WiFi neighbors with more aggressive transmissions. The vendors are trying to demonstrate that LTE-U can be a good neighbor to existing WiFi deployments, which is the main study item in Release 13.

Where do the vendors stand?

Both Ericsson and Huawei mention LAA only, since the 3GPP community has agreed that LAA is the politically correct term. They keep a good distance from standalone LTE-U to avoid touching the operators’ sensitive nerves.

Qualcomm, on the other hand, is persistently pushing its standalone LTE-U idea. Apparently frustrated by the tough resistance within 3GPP, it has set up a so-called ‘LTE-U Forum’ outside 3GPP to gain wider publicity. So far Verizon has been the only major operator on board. In June, Qualcomm also unilaterally announced its own version of standalone LTE-U, the so-called MuLTEfire.

Being the leading LTE chipset manufacturer, Qualcomm stands in an advantageous position for pushing its favorite technology in the handset market. But I am not sure whether or how MuLTEfire could be deployed independently outside the 3GPP framework, considering that its core is still 3GPP’s LTE standard.

Who wants standalone LTE-U or MuLTEfire?

If the operators are not interested, Qualcomm has made it perfectly clear that standalone LTE-U can also be adopted by broadband ISPs, enterprises, venue owners, and indeed anyone else lacking licensed spectrum. Sounds like a logical and attractive offer, right?

Curiously enough, Google, which I had assumed to be a potential customer of MuLTEfire, responded negatively. It is alarmed by the development of LTE-U, which might convert the unlicensed 5 GHz band into a de-facto licensed band and drive out other unlicensed users. CableLabs, representing the cable operators, also voiced its suspicion of LTE-U, fearing its deployment might damage their WiFi services. The rationale behind these concerns becomes clearer: when the existing solution is cheap and sufficiently good, why pay more for a slightly better but as-yet unproven alternative?

In conclusion: from a purely technical point of view, LTE-U seems a natural step forward to extend the scope of LTE, but in business reality it is a tough sell. It is a classic technology push rather than a product driven by market demand. LAA may gain some traction after it becomes standardized later this year, but it still has to face the property ownership issue that frustrated many femtocell deployment attempts. The prospects for standalone LTE-U have not brightened since Qualcomm’s announcement of MuLTEfire. Maybe MuLTEfire will become the cornerstone of Qualcomm’s next killer product, or it might turn out to be another ‘MediaFLO’. The jury is still out…

/Lei

Lei is a Consultant at Northstream


How much spectrum data is needed for efficient spectrum sharing?

IEEE DySPAN 2015 is under way in Stockholm, and I attended (actually co-organized) an NSF-sponsored workshop on “Future Directions in Spectrum Management Research”. The actual organization of the workshop became, admittedly, somewhat “ad hoc”, but thanks to the strong and somewhat provocative keynotes, the on-the-mark invited presentations and a very active audience, the content was very interesting and the discussions stayed lively to the very end.

A big part of the discussion revolved around the geolocation database systems, coined Spectrum Access Systems (SAS), that will be used to authorize shared access – now being applied to the 3.5 GHz band in the US. Preston Marshall from Google started things off by describing the concepts and discussing the accuracy of today’s 3D maps of our environment, which allow detailed radio propagation modelling. Keeping track of tens of thousands of receivers that need to be protected is not considered a computationally heavy task these days. An interesting question that I raised in the concluding discussion is to what extent it is useful to augment the maps with large bulks of actual measurement data. Several other speakers described measurement campaigns to collect such spectrum-analyzer data. One way to approach the issue is to ask what the limits of such measurements are: what if all these measurements were perfect, and the 3D maps were perfect – to what accuracy could they predict the interference caused to primary receivers by a hypothetical device? Anant Sahai posed the question of whether there is a “Nyquist sampling rate” for 3D spectrum maps, i.e. a density beyond which we could perfectly reconstruct the signals. I am somewhat doubtful. Further, even with such “perfect prediction”, if it is not known how my antenna is oriented, whether the window is open, whether I am standing behind or in front of a temporary object (e.g. a bus) in the street, etc., how many dBs of margin do I have to include in my SAS decision to “guarantee” primary protection? 10 dB, 20 dB? A large margin significantly reduces our capability to effectively reuse the spectrum.
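A back-of-the-envelope sketch shows why the margin matters so much. Under a simple log-distance path-loss model (my own toy numbers, not anything presented at the workshop), every extra 10 dB of safety margin multiplies the excluded area around a primary receiver by roughly 10^(2/n), where n is the path-loss exponent:

```python
# Toy model of how a protection margin inflates a SAS exclusion zone.
# All parameters are illustrative assumptions, not regulatory values.

TX_POWER_DBM = 30.0    # secondary transmitter EIRP
PL_EXPONENT = 3.5      # log-distance path-loss exponent n
PL_AT_1M_DB = 40.0     # path loss at the 1 m reference distance
PROTECT_DBM = -100.0   # max tolerable interference at the primary receiver

def exclusion_radius_m(margin_db):
    """Distance at which the predicted interference, inflated by the
    safety margin, falls to the protection level. Solves
    TX - PL_1m - 10*n*log10(d) + margin = PROTECT for d."""
    allowed_pl_db = TX_POWER_DBM - PROTECT_DBM - PL_AT_1M_DB + margin_db
    return 10 ** (allowed_pl_db / (10 * PL_EXPONENT))

r0 = exclusion_radius_m(0)
for margin in (0, 10, 20):
    r = exclusion_radius_m(margin)
    print(f"margin {margin:2d} dB: exclusion radius {r/1000:5.2f} km, "
          f"excluded area x{(r / r0) ** 2:5.2f}")
```

With these numbers a 20 dB margin excludes roughly fourteen times the area that the nominal prediction would, which is exactly the reuse penalty described above.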

Another way to answer the above question is to ask which components are actually the bottleneck in the interference prediction calculation. Is it the (primary) receiver characteristics, the propagation modelling (including the local, temporary conditions in the example above), knowledge about interference sources, receiver locations, etc.? This gives us a hint about where collecting more, and more accurate, data would help, and whether some parts are just fundamentally out of reach. Since a chain is no stronger than its weakest link, it would hardly be worth the effort of significantly improving one of the factors when we are in fact limited by something completely different. So do we need more spectrum-analyzer data, or is the bottleneck somewhere else?
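To illustrate the weakest-link point numerically, consider a toy uncertainty budget (the standard deviations below are made up for illustration, not measured values). If the independent error sources combine root-sum-square in dB, shrinking a minor term barely moves the total, while shrinking the dominant term does:

```python
import math

# Hypothetical error sources in an interference prediction, given as
# standard deviations in dB; independent errors combine root-sum-square.
sources = {
    "propagation model":  8.0,
    "3D map / clutter":   4.0,
    "device location":    2.0,
    "RX characteristics": 1.0,
}

def total_sigma_db(s):
    return math.sqrt(sum(v * v for v in s.values()))

print(f"baseline:           {total_sigma_db(sources):.2f} dB")

sources["RX characteristics"] = 0.5   # halve the smallest source...
print(f"better RX data:     {total_sigma_db(sources):.2f} dB")  # barely moves

sources["propagation model"] = 4.0    # ...versus halving the dominant one
print(f"better propagation: {total_sigma_db(sources):.2f} dB")
```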


The “Tactile Internet” – a contradiction in terms?

I visited our “sister center”, 5G-Lab Germany, at their Academic Interaction Day yesterday. An interesting crowd of distinguished researchers discussed the key design challenges and architectural concepts for 5G. There was wide consensus about the three design dimensions, and that the “more bandwidth” (xMBB) and “50 billion devices” (M-MTC) challenges can probably be managed in a more evolutionary manner. The key here is not which air interface will be used, but that the system architecture will also in the future be based on “best-effort” IP, i.e. the “internet” solution. This architecture, with its transparency due to the “end-to-end principle”, keeps the network completely separated from the applications, so almost any current or future application is able to use the network. This is key to the immense success of the “internet”. Fifteen years ago everyone wondered what would be the “killer application” that alone would pay for the infrastructure. No one asks that question today, as IP connectivity itself is the “killer app” – enabling millions of applications today, and the millions of future applications that are yet to be invented.

What the “best-effort” internet solution cannot deliver is performance guarantees – low delay and high reliability. The way this has traditionally been managed in IP networks is by “overprovisioning” resources – higher and higher data rates. This is, however, an expensive way to achieve QoS in wireless networks, in particular if the reliability figures contain many “9s” and the delay creeps down into the ms range. Reaching the low ms range required for tactile/haptic interaction is also limited by the speed of light (300 km/ms). Control loops that include cloud servers hundreds or thousands of km away will thus not allow for such extremely low latencies. These issues are very much in the focus of 5G-Lab Germany.
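The arithmetic is easy to check with a short sketch (using the in-vacuum figure of 300 km/ms from above; in fiber the signal travels at roughly two-thirds of that, making the constraint even tighter):

```python
# Pure propagation delay versus a 1 ms tactile-interaction budget.
# 300 km/ms is the in-vacuum speed of light; fiber is ~200 km/ms.
SPEED_KM_PER_MS = 300.0
BUDGET_MS = 1.0

def round_trip_ms(distance_km):
    """Round-trip propagation delay to a server, ignoring all
    processing, queueing and air-interface delays."""
    return 2 * distance_km / SPEED_KM_PER_MS

for d_km in (10, 100, 1000):
    rtt = round_trip_ms(d_km)
    verdict = "fits" if rtt <= BUDGET_MS else "blows the budget"
    print(f"server {d_km:4d} km away: {rtt:5.2f} ms round trip -> {verdict}")
```

Even in vacuum, a server more than 150 km away consumes the whole 1 ms budget on propagation alone, before a single bit is processed.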

The solutions proposed include “edge clouds”: computational resources close to the base stations. If we want to control a mobile unit, say a driverless car, the (virtual) “edge cloud” would have to follow the mobile device and be “handed off” as the unit moves too far from its original location. This sounds like a great technical solution, but it has one significant drawback: it requires services inside the network, which violates the “end-to-end” principle. The “tactile internet” is a very compelling term, but it is likely a contradiction in terms. Yes, there will be low-latency connectivity, but it is hardly the best-effort, end-to-end “internet” that will provide it. More “intelligence inside” adds value to networking, which of course is something the network operators would love, but deploying edge clouds together with a new 5G air interface specially designed for the C-MTC case will require significant investments. Will this turn the clock back 15 years – will we again ask what the killer application is? Alternatively, is the class of possible future low-latency applications enough to trigger these investments?

The alternative solution – using the best-effort network – is the semi-autonomous car. This car would handle all fast interactions (e.g. collision avoidance) locally, which corresponds to distributing the “edge cloud” to the mobile units themselves, whereas the remote control over the network would use longer time constants (e.g. route guidance).


Dynamic Spectrum – a solution for Africa’s spectrum problems?

Last week I was an invited speaker at Africon 2015. My keynote talk was on the role of Dynamic/Flexible Spectrum and “Cognitive Radio” in the deployment of rural internet access using wireless technology. This sparked some interesting discussions in the panel I also participated in.

First of all, it is quite clear from e.g. the EU FP7 QUASAR project that dynamic spectrum access is not a technique that can magically create “new” spectrum out of thin air. Finding “unused” spectrum somewhere may definitely be possible, but it is likely to be in a different place in every country, or even in every specific location. Is there a supplier that can find sufficient economies of scale and produce low-cost radios if the available bands are different in every country? Dynamic spectrum can, however, be used to give new technologies fast access to new spectrum on a limited scale, which is good for fostering competition. Secondly, there is no real shortage of spectrum for internet access in rural areas, as the traffic demand per unit area is low. There is already plenty of spectrum allocated for mobile data services, with large-volume, affordable terminal products (e.g. LTE), that could easily solve these problems. Here, however, a) most countries/regulators have been too lenient towards their cellular operators, as they mostly do not require rural coverage, and b) where the cellular operator actually has the coverage, the price charged is too high for most of the rural population to make the service attractive. The problem is therefore hardly a spectrum issue; it is the usual techno-economic rural-area problem: how much do we want to pay to maintain the same service for a handful of users in the villages as for tens of thousands of users in the city? It applies literally to any physical infrastructure, from fiber and wireless systems to electric power and sewage systems.


ITS Europe: Issues of mobile payment adoption

On 27-28 June, I attended the 7th PhD Seminar organized by the International Telecommunications Society (ITS) Europe in San Lorenzo de El Escorial, Spain. PhD Seminar participants also had the opportunity to attend the 26th European Regional Conference of the ITS, and I used this opportunity to attend a number of presentations.

The presentation most closely related to my research area (i.e. mobile payments) was given by Harry Bouwman, who presented a paper titled “Mobile payments: a multi-perspective, multi-method research project”, written in cooperation with six co-authors. The researchers have looked into the mobile payment adoption issues that different stakeholders face (consumers, merchants, MNOs, banks, and service providers). The major conclusions are:

  • There is limited usage of mobile payments by consumers (only 1% of Dutch consumers use the service on a regular basis);
  • Attempts by banks and MNOs to develop common mobile payment platforms have failed;
  • Mobile payments are rarely seen as an alternative to existing payment systems;
  • Safety and a critical mass of consumers are important factors for merchant adoption.

In terms of service development:

  • User experience and security are important for customers;
  • The business case and viability of the mobile payment solution are important for the service provider.

These results are similar to those from our own research on mobile payments in Sweden. The authors see the future of mobile payments in multisided platforms uniting all involved stakeholders; this would allow services to reach a critical mass of users and become attractive to merchants and other parties.

During the PhD seminar I presented a paper, “The effect of innovative service introduction on existing business networks”, which analyzes the change that the introduction of mobile payments brought to the existing business network in the Swedish retail industry. The major value of the seminar lay in the comments and feedback from the assigned discussant, and the subsequent discussion of the paper by all seminar participants.
