A few words on ITS Europe and the MNO vs OTT battle

The 26th European Regional Conference of the International Telecommunications Society was held beginning June 24th in San Lorenzo del Escorial, Spain. I had the opportunity to attend and present our paper titled "Internet of Things: Redefinition of business models for the next generation of telecom services".

During the three days of the conference we had the opportunity to take part in discussions on a number of topics related to the development of the telecommunications industry in Europe, such as Net Neutrality, the Digital Divide, 5G, the Internet of Things, business modelling and others.

In my session I had the pleasure of listening to Miguel Vidal, from T-Mobile, present "Is competition just for losers? The economics of the internet value chain revisited": a very interesting description of the economics behind the internet, the potential creation of monopolies and how the internet giants are eroding the business results of telcos. In this context, the different regulatory treatment of internet companies and telcos was also discussed. Something that, in my humble opinion, has more to do with the level of maturity of the two industries than with other reasons.

Finally, the need for common and strong regulation of the digital market in Europe was introduced. In my opinion, it would be very interesting to also hear from the other actors involved in this process and see which points they share and which ones they do not. In any case, the presentation was a very complete introduction to the economics of the internet era and the implications for the different actors involved.

Our paper "Internet of Things: Redefinition of business models for the next generation of telecom services" provides an assessment of existing M2M business models and arrangements in selected industry segments. It studies the drivers and barriers for adoption of M2M and IoT in transforming operations in those segments, and focuses on identifying recurring patterns in the transformation process. The change in value propositions, as well as how value networks transform, is studied and analysed. The studied segments and services are: public transportation services, automotive and vehicle-related services, smart energy services, and health care and home care services.

Posted in Business, Conferences, Internet of Things | Leave a comment

Net Neutrality and QoE – Some notes from the ITS Conference

The ITS Conference in San Lorenzo del Escorial offered an excellent opportunity to integrate and discuss different visions of the development of the telecommunications market. Interesting ideas came out when the Net Neutrality issue was on the floor. In this post I will try to summarize some of the main points under discussion:

Quality of service (QoS) parameters and mechanisms are important to enable network operators to design, build and manage their networks, but they are not directly visible to end-users. Crucial for end-users, however, is the quality that they personally experience when they use a service. QoS involves tracking jitter, latency and other measurable parameters; if the QoS score is not good enough, operators can identify the problem and fix it. With QoE the solution is less straightforward. QoE is a subjective measure of how the viewer judges the content delivered by the network, which means the same type of content might be evaluated differently depending on the user's profile and expectations. Meeting user expectations would therefore require from the content provider and the network operator a deeper understanding of user interests, awareness of the content traversing the network, and new ways to manage and prioritize the traffic.

In both cases, one of the key tools to provide awareness and activate traffic management policies is deep packet inspection (DPI). However, a challenge arises when data encryption makes it difficult to analyse the information in the network. A combination of sensors in the field, machine intelligence and big data analytics will produce a tremendous amount of data that will help to improve the analysis of problems, aid in the planning of system upgrades and even support sales efforts, generating a positive effect on users' QoE. However, it remains to be seen how deeply this can be leveraged by operators. The uncertainty about Net Neutrality is causing some operators to move a bit more slowly on QoE in general, and on DPI and sensing technologies in particular.
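QoS metrics are objective measurements while QoE is perceptual, but operators often bridge the two with simple scoring models. As a purely illustrative sketch (the weights and thresholds below are invented for this post, not taken from any standard such as the ITU-T E-model), a measured latency/jitter/loss triple could be mapped to a rough MOS-like score on a 1-5 scale:

```python
# Toy QoS-to-QoE mapping. All weights and caps are made up for
# illustration; a real model would be calibrated against user studies.

def qoe_score(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    score = 5.0
    score -= min(latency_ms / 100.0, 2.0)  # high latency hurts interactive apps
    score -= min(jitter_ms / 20.0, 1.0)    # jitter degrades real-time media
    score -= min(loss_pct * 0.5, 1.5)      # packet loss causes visible artifacts
    return max(score, 1.0)                 # MOS floor

print(qoe_score(latency_ms=40, jitter_ms=5, loss_pct=0.5))    # lightly loaded network
print(qoe_score(latency_ms=300, jitter_ms=60, loss_pct=4.0))  # congested network
```

The point of even such a crude model is that the same measured network state can be weighted differently per user profile or application, which is exactly where operator awareness of the traffic becomes valuable.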

Traffic management is a collection of technologies and policies that lead to different types of traffic being treated differently, which in principle goes beyond the best-effort principle that underpins the original Internet idea. Without traffic management, different data packets are treated more or less equally; with it, under congested conditions some data have a greater chance of being delivered than others. Traffic management can be implemented in different ways, which include:

  • guaranteeing delivery of data or reserving bandwidth for that data;
  • prioritizing certain types of data in the event of queuing;
  • de-prioritizing certain types of data;
  • restricting certain types of data or the bandwidth allocated;
  • blocking certain types of data.

Such discrimination between data types would probably affect users' QoE; in the extreme, some applications would not be able to function. Of course, congestion could also cause applications to fail, but the distinguishing feature of traffic management is that it involves purposeful discrimination. On the one hand, traffic management could guarantee or prioritize data for sensitive applications and reduce congestion to manageable levels, allowing fair use for all users and increasing their satisfaction. On the other hand, traffic management can restrict or block certain applications and let other people's traffic take priority, which can have a negative impact on the user's perception.
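The prioritization behaviours listed above can be sketched with a strict-priority queue: under congestion, packets tagged with a higher-priority class always leave first, whereas a plain best-effort FIFO would serve them in arrival order. This is a toy illustration only; the class names and priority numbers are invented:

```python
# Minimal strict-priority scheduler sketch. Lower priority number means
# more important traffic (e.g. 0 = voice, 1 = web, 2 = bulk transfer).
import heapq
from itertools import count

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker preserves FIFO order within a class

    def enqueue(self, packet: str, priority: int) -> None:
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue("bulk-download", priority=2)
sched.enqueue("voip-frame", priority=0)
sched.enqueue("web-page", priority=1)
print(sched.dequeue())  # the VoIP frame leaves first despite arriving later
```

Blocking or throttling a class would correspond to dropping or rate-limiting packets of a given priority before they ever reach the queue, which is where the fairness debate begins.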

Beyond that, full transparency would involve providing data that describe the effects of policies over time, and therefore the resulting quality of experience for users. This implies the need for diagnostic tools to help users understand whether, and in what way, traffic management is affecting them.

Finally, we could see that Net Neutrality may influence QoE in two ways: how deeply operators are allowed to examine the packets flowing through their networks in order to use the extracted information to feed mechanisms that improve users' QoE, and the transparency of the prioritization policies implemented to fulfill users' expectations and requirements.

Posted in Uncategorized | Leave a comment

D2D communications: an effective means for frequency reuse >> 1

Device-to-Device (D2D) communication has been gaining popularity among researchers in both academia and industry. It has the potential to offload traffic from cellular networks, improve energy efficiency and extend coverage. D2D enables mobile devices in proximity to establish local links for data communications, opening up novel wireless applications for proximity services and public safety communications. Our recent research shows that D2D is also a powerful tool to boost cellular capacity, from the frequency reuse factor of 1 in today's LTE-A networks to an effective reuse of, for example, 10 or even higher. In addition, the performance sacrifice of existing cellular links needed to enable such a boost is almost completely negligible!

This exciting result is what I shared at INFOCOM 2015 in Hong Kong, a top-ranked conference on networking, at the end of April. Our research developed a standard-compatible solution to maximize network frequency use by enabling the highest number of simultaneous D2D communications in a multi-cell environment while reusing the same resources as cellular users. To mitigate the impact on existing cellular users, a careful design of interference coordination is needed. This design effectively controls interference between the two layers of a D2D-enabled cellular network, i.e. the cellular layer and the D2D layer, while ensuring the QoS of all devices. Curious about how good it is? Take a look at the following curves: the network spectrum efficiency is boosted by more than 10 times! Surely, the improvement depends on user density, but it shows a promising way forward. If you're interested, please refer to our INFOCOM paper "Scalable Interference Coordination for Device-to-Device Communications" for more details.
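For readers curious about the flavour of such interference coordination, here is a deliberately simplified sketch, not the algorithm from our paper: candidate D2D links are admitted greedily onto the shared resource as long as every already-active link still meets a target SINR. All powers, the noise figure and the threshold are invented for illustration:

```python
# Toy greedy D2D link admission under an SINR constraint. Illustrative
# only; the paper's actual scheme is a scalable multi-cell coordination
# design, not this greedy loop.

NOISE_MW = 1e-9  # receiver noise power in mW, made up for the example

def sinr(i, active, signal, interf):
    """Linear SINR at receiver i given the set of active transmitters."""
    total_interf = sum(interf[i][j] for j in active if j != i)
    return signal[i] / (NOISE_MW + total_interf)

def greedy_admission(signal, interf, sinr_target):
    admitted = []
    for i in range(len(signal)):
        trial = admitted + [i]
        # admit link i only if every active link still meets the target
        if all(sinr(j, trial, signal, interf) >= sinr_target for j in trial):
            admitted = trial
    return admitted

# three candidate D2D pairs; interf[i][j] = power at receiver i from tx j (mW)
signal = [1e-6, 2e-6, 1.5e-6]
interf = [[0, 1e-8, 5e-7],
          [1e-8, 0, 2e-8],
          [5e-7, 2e-8, 0]]
print(greedy_admission(signal, interf, sinr_target=10.0))
```

In this toy instance the third link is rejected because its transmitter would push the first link below the SINR target, which captures the basic trade-off between reuse and protection.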

[Figure: network spectrum efficiency of the D2D-enabled network vs. user density]

Posted in Devices and Gadgets, Green radio, Internet of Things, M2M, Spectrum, Systems, Uncategorized | Leave a comment

Does the Internet-of-Things dwell in Extremistan?

Extremistan is not where IS terrorists rule: it is the fictitious country introduced in N.N. Taleb's renowned book "The Black Swan: The Impact of the Highly Improbable". It is sometimes contrasted with Taleb's other imaginary country, Mediocristan, to describe the impact of globalization and the internet revolution. In Mediocristan live the traditional providers of individual services: small-town shop owners and craftsmen, i.e. people that have only a very local market for their goods and are basically paid by the hour. A physician may have a very high hourly rate, but there are only 24 hours per day for him to charge to his customers. A hundred years ago, almost everyone lived in Mediocristan, even the local opera singer or soccer player. People went to the local opera stage and watched the local soccer game. The "market" in Mediocristan was limited to the local population, so the prospects of making large amounts of money were slim, but on the other hand there was little competition and many in the local community could make a decent living. Now globalization and the internet have taken (some of) us to Extremistan. The opera singer, the novelist, the movie star and the soccer player have a global audience here: the potential for making money is vast. However, as the competition has also become tremendous, very few will make "all" the money, whereas the rest can't make a living. Who wants to pay to listen to the mediocre local opera singer when "the three tenors" are on TV? Who wants to pay to see the local soccer team play the team from the other suburb when there is a Champions League game on?

The web browser and smartphone and tablet "apps" clearly meet all the requirements to reside in Extremistan. The technical platforms are highly standardized. The apps or web-based services mostly provide generic infotainment that addresses many users with the same needs (users are even willing to adapt their lifestyle to the "app"); they use English and are designed to run everywhere in the world, on all platforms, networks etc. If needed, they can be enhanced by throwing more communication bandwidth, cloud-based storage and/or computational capacity at them. In short, apps and web services are highly scalable, can be used everywhere, and most of them have tremendous growth opportunities. At almost zero (marginal) cost they can be provided in an instant and have the potential to reach billions of people. No doubt this has made some people incredibly rich, and has lowered the cost of many services dramatically. On the other hand, many more have lost their jobs in local bookstores, travel agencies and photo print shops: jobs that were perfectly OK in Mediocristan but no longer exist in Extremistan.

So what about the "Internet of Things"? Is there an "exponential" growth in numbers and benefit similar to the infotainment business; in other words, can the IoT concept exist in Extremistan? Or is "Internet of Things" a contradiction in terms? Are we, as Gartner predicts, just at the top of the hype cycle, heading down into disillusionment? Remember, we have been here before, not so long ago (around 2000). So what is fundamentally different today compared to the era of the last hype? Yes, Moore's law has made a few cycles since, but is that all?

A key observation is that "things" live in and interact with the physical world. On the "Planet of the Apps", services run in highly standardized environments in cyberspace: they are by painstaking design removed from the physical world to make them scalable and capable of running everywhere, on any platform. The "things" are part of cyber-physical systems; they interact with an environment that is different almost everywhere. They work only "here", and it's not obvious how we can leverage Moore's law to improve their performance by increasing bandwidth, storage or computational power. "IoT apps" still have to be tailored to specific environments and systems in a "craftsman" fashion, work that will not be directly applicable anywhere else.

Example – the "smart" home: Although it's obvious how to engineer (at least in principle), say, a home climate control system with sensors and actuators, you will not be able to buy the solution in an app store with a few clicks. As your current home is rather "dumb", you still have to install sensors and actuators in the proper places. This is likely to be done by some consultant (system integrator), someone who will adapt and tune the system to your house. Then you have to interface with the heating and electrical systems in your house. These may look different in different countries, and for safety reasons the installation has to be done by another specialist, the electrician. Who will make the most money in this business? It's likely to be the consultant and the electrician, hardly some global player like Apple or Google, or even the internet operator. Right: the consultant and the electrician have to physically come to your place and they are paid by the hour, i.e. they both live in Mediocristan. At least for now, they cannot be replaced by some cloud service or some call center in Asia. This has a significant impact on the cost of making your old home "smart", to the extent that very few will see any economic upside, e.g. in saving energy. What about future homes? In the past, homes were built by architects that actually took pride in making every new home different. In the future, new homes might well be produced in a more standardized way, with standardized interfaces and behavior that will allow for generic control provided as a global service. This transition has little to do with Moore's law and other drivers from cyberspace. As most buildings are built to last more than 50 years, it will take many decades until a significant share of our homes meet these requirements.

You can take similar examples from transportation systems and health care. IoT technology can certainly provide large benefits in these areas. However, in addition to the differences in physical environment from instance to instance, we also have to deal with the inertia of organizations. All this boils down to most of the time and money being spent on adapting systems and organizations, not on sensors, communication or cloud-based services. This adaptation unfortunately does not come at the speed of light or at (almost) zero cost. Nevertheless, there are still large efficiency savings that will drive organizations to introduce IoT technology. This is serious business; however, again, the bulk of the money is not likely to be made by global internet players but by "local" system integrators from Mediocristan that know the environment and the business of the customer.

So do all IoT applications live in Mediocristan? The ones discussed above were not scalable due to the diverse environments they operate in. Are there applications that operate in a standardized physical environment and could scale? The environment that comes to my mind is the human body. Our bodies are very similar and have the same functions around the globe, so interfacing with "things" should work everywhere in the world. Gadgets attached to our body for personal health and fitness are indeed taking off big time, both due to our Western-world obsession with health and due to their scalability: these gadgets work everywhere, are easily connected to smartphone platforms and benefit from cloud-based services. They are only a (standardized) Bluetooth or WiFi connection away from cyberspace. In professional healthcare, however, there may still be organizational barriers, as some decisions still have to be taken by physicians from Mediocristan.

Other examples that look promising include the car industry. Although there is a lot of activity in "connected cars", standardized platforms that allow global scaling are conceivable but still far away. Is the car industry really interested in going down this risky path, or do they aim at containing and controlling the technology only for their own purposes? There is a striking resemblance to the mobile phone industry before the iPhone and the App Store. "Yes, mobile services, but on our terms" are some famous last words.

In summary, my take on the “IoT revolution” is:

  • Yes, it will happen, but much slower than we expect/hope. The key economic incentives are in making big systems more efficient. In big systems the upside is usually big, so we can afford to build them in Mediocristan. The "internet industry" plays a "sidekick" role, as they do not know anything about the actual applications.
  • Yes, smart homes, controlling your heating, sensors everywhere, connecting your flowerpot and refrigerator: it can all be done, and you can already buy all the gadgets, but actually doing something sensible with them today requires craftsman's skills. The "internet scalability" is not there, and it will be tough to reach an attractive price point for consumers. Unless you are a geek, there is a dire need for a "killer app" to pay for it all.
  • Closest to Extremistan are personal "things" on your body… and possibly your car, if the car industry is really interested.
Posted in Business, Internet of Things, M2M, Systems | 3 Comments

Who wants LTE-U?

Quite some effort is now being put into moving LTE into unlicensed spectrum. One reason is of course the large chunks of spectrum where the big competitor in the short-range indoor and hot-spot market, WiFi, is now ruling alone. The other is that the mobile industry has a solution… now where can they apply it? Several studies are being presented to show how "polite" LTE-U is to existing WiFi ("you won't feel a thing"). You can probably debate this quite a lot, but the two concepts are fundamentally different: LTE (like any cellular system) is designed to withstand a lot of interference operating in a "reuse-1" environment, whereas WiFi is designed around carrier sensing that creates clear channels, promoting short transmissions at very high data rates. The debate over whether the sharing is "fair" aside, the reason I do not believe LTE-U will make it is not that the technology is bad (on the contrary, LTE-U could probably provide more capacity!), but simple, common-sense business reasons.

LTE-U and all the network-assisted spectrum sharing is a brilliant idea from the public operator perspective. With LTE-U the operators want to claim part of the unlicensed spectrum for their technology, thereby attempting to limit the competition from WiFi. They aim at introducing their business model into the indoor short-range setting where WiFi is already dominant. The current operator business model is based on "owning" the spectrum and "owning" the customer through the SIM card, which works outdoors because the customer has no choice.

Indoors, it's a completely different ballgame: the operator's spectrum exclusivity is gone, and what really matters is getting access to the physical space where the network is to be deployed. The facility manager is in the driver's seat, not the operator. Mostly, the building owner has already made the largest investment, i.e. the wired infrastructure, and he pays the electricity bill. Now why in the world would I let one of the operators deploy their access points in my building for the benefit of only their customers? How would my guests and customers in my building get access? I can hardly let three or four operators deploy their networks in my building. The few places where there may be a reason are those with very, very high capacity demands. But would you include a new air interface in your smartphone or tablet for those few cases?

The other reason is that WiFi-enabled equipment already dominates the short-range scene. The incentives for vendors of computers, tablets and other equipment to include LTE-U are small, and for facility owners even negative: why do I need additional complexity, operator control and a SIM card for a system that basically does the same thing as WiFi? Even on the mere suspicion that LTE-U may limit the capacity of the large bulk of WiFi devices, facility owners will resist deployment.

I think the main problem is that the public operator model is not suited for indoor deployment, regardless of technology. Shared infrastructure is the natural solution indoors, and as long as the operators resist this concept, non-operator-deployed WiFi will increase its share of this market: not because WiFi is technically superior, but because in WiFi, infrastructure sharing has never been an issue. So, in the meantime, we will see more professionally deployed and managed WiFi networks, which for most applications provide ample capacity at a much, much lower price point.

Posted in Business, Spectrum, Systems | 3 Comments

ICCS Conference 2015: Designing the city of the future

Last week I had the opportunity to attend the "International Conference on City Sciences: New architectures, infrastructures and services for future cities", which took place on June 4th and 5th at the College of Architecture and Urban Planning of Tongji University in Shanghai. The conference was jointly organized by Tongji University and the Polytechnic University of Madrid, aiming to foster multi-disciplinary research on city sciences. In this context, city sciences refers to a broad number of disciplines, like urban planning, architecture, transport and ICT, that aim to improve life and increase sustainability in the cities of the future.

The common context for the conference presenters was the increasing urbanization happening all over the globe, with critical implications for developing countries like China or India, where urbanization is happening extremely fast, producing megacities like Shanghai. In this sense, the challenge of fast urbanization is to improve quality of life within the cities, as it is already too late to create a holistic plan for the city. Professor Wu Jiang, from Tongji University, explained how the growth of the city has outpaced urban planning in Chinese cities during the last 30 years, making it impossible to coordinate planning with existing urban evolution. Along the same lines, Professor Iñaki Ábalos introduced the concept of ecological urbanism, highlighting the importance of zero emissions in future cities.

The conference was divided into a number of tracks, including transportation, energy, urban planning, ecological urbanism and smart cities. Within the smart cities track I had the opportunity to present two co-authored papers: "Quality of Experience (QoE)-based service differentiation in the smart cities context: Business Analysis" and "Horizontalization of Internet of Things services for Smart Cities – Use cases study".

The paper "Quality of Experience (QoE)-based service differentiation in the smart cities context: Business Analysis" presents initial research on how QoE-based service provision will impact the development of smart city services. To that end, the authors aim to identify which smart city services may be impacted, and in which way, by QoE-based service provision. Aligned with the same objective, a business analysis for QoE-based service provision is introduced, with the intention of identifying recurring patterns that could be used in the different smart city services considered.

In the paper "Horizontalization of Internet of Things services for Smart Cities – Use cases study", we introduce a number of use cases from selected industries: transportation, automotive, energy and healthcare. In the second part of the paper, the value network and value proposition for these use cases are analyzed, with the ultimate goal of obtaining insights into the potential horizontalization and integration of smart city services through ICT. The analysis focuses on the relations between the actors, the offering, the value proposition and the value network.

Posted in Conferences, Internet of Things | Leave a comment

5G = 14+

That was the ending slide from Pedro Riao (Ericsson, Germany) during the ICC 2015 panel on 5G Challenges and Opportunities. The message makes sense: 5G will serve 14+ different industries.

We are mid-conference, but this year's mantras are already clear: extreme diversity in requirements, network slicing, integration of vertical industries and the Internet of Everything.

In this sense, during the panel on 5G Architecture, Simon Saunders (Real Wireless Limited) and Mischa Dohler (King's College London) made remarks on the need for a change in the way we handle technology, as we saw earlier during the 2015 Johannesberg Summit. In short, we need to listen to the future users (the industries) in order to understand their wishes, that is, their communication requirements, and involve them in the development of future solutions.

Yes, the vertical industries are the next frontier, and the benefits of bringing them into the 5G community are unprecedented. But if we listen to the big industries and focus development on fulfilling their wishes, what (or who) is going to guarantee that we also fulfill the requirements of those users that are becoming less lucrative? Who will be interested in users bringing marginal profits when industrial communications are so appealing?

Maybe these changes will take decades, but it is interesting to see how the majority of the presentations and discussions are dominated by industrial communications.

Posted in Conferences | Leave a comment

The 22nd International Conference on Telecommunications (ICT 2015): Beyond 5G and Post-Moore's Law

From April 27th to 29th, 2015, I attended the 22nd International Conference on Telecommunications (ICT 2015) in Sydney (https://www.engineersaustralia.org.au/ict2015-conference), where I presented a paper titled "Impact of the Flexible Spectrum Aggregation Schemes on the Cost of Future Mobile Networks". The general theme of ICT 2015 this year (in my view) was the emerging technological innovations towards future mobile systems (5G and beyond). In addition to the discussion of the 5G network architecture, Dr. Philipp Zhang, Chief Scientist, gave a presentation about the required revolution roadmap at the circuit-design level.

In his presentation, Dr. Zhang stated that "the era of exponential gains in microelectronics is coming to an end", which is why there is an urgent need for new materials and circuit architecture designs to overcome the physical scaling limits of the silicon transistor. The primary contender for the post-silicon computation paradigm is molecular electronics, a nano-scale alternative to the CMOS transistor. Moreover, considering the foreseen requirements of future mobile networks (i.e. 5G and beyond), new breakthroughs in the radio-frequency (RF) front-end are required from a terminal-design perspective. In this regard, the next generation of microelectronics technologies is expected to deal with challenges such as lowering power consumption in data converters and RF, supporting a huge number of antennas (massive MIMO), and improving the poor efficiency of power amplifiers (PAs) to support wide-band operation up to 100 GHz.

Posted in Uncategorized | Leave a comment

TVWS, the Un-abandoned Child

Last week I attended my first international (IEEE) conference, the 81st IEEE VTC in Glasgow, Scotland. As expected, the conference was fully tagged with 5G technologies: mmWave, massive MIMO, etc. However, an exhibition by NICT (Japan) on TV white space (TVWS) and a geolocation database prototype drew my special attention. The prototype included an IEEE 802.11af system and an LTE system operating in TVWS, as well as a white-space database.

My master's thesis, "Potential capacity of Wi-Fi systems in TVWS", was based on the geolocation database method, by which a device can obtain its transmit power limit and the available frequencies. When I saw a real geolocation database prototype, I was surprised by its accuracy and real-time performance.
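Conceptually, the geolocation database method works like a lookup service: a white-space device reports its coordinates and receives the locally vacant TV channels together with the maximum EIRP permitted on each. The sketch below is a hypothetical illustration; the grid, channel numbers and power limits are invented, and real databases (e.g. under Ofcom or FCC rules) use far more detailed propagation and protection calculations:

```python
# Toy white-space geolocation database lookup. All entries are invented
# for illustration; a real database is built from broadcaster protection
# contours, not a hand-written table.

# hypothetical database: (channel, max_eirp_dbm) entries per grid cell
WS_DATABASE = {
    (55.86, -4.25): [(21, 36.0), (24, 30.0), (27, 20.0)],  # Glasgow-ish cell
    (35.68, 139.69): [(13, 20.0)],                          # Tokyo-ish cell
}

def query_available_channels(lat: float, lon: float, min_eirp_dbm: float = 0.0):
    """Return channels usable at (lat, lon) with at least min_eirp_dbm allowed."""
    cell = (round(lat, 2), round(lon, 2))  # crude grid lookup
    return [(ch, p) for ch, p in WS_DATABASE.get(cell, []) if p >= min_eirp_dbm]

# a device near the conference venue asking for channels allowing >= 25 dBm
print(query_available_channels(55.86, -4.25, min_eirp_dbm=25.0))
```

The interesting engineering is all hidden behind the table: keeping it fresh as incumbents change, and answering queries in real time, which is exactly what impressed me in the NICT demo.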

[Photos: geolocation database demo screens]

On the left is the geolocation database for the city of Glasgow. The pin marks the VTC conference centre, shown with the frequencies available at that location and the frequencies supported by the hardware. On the right, the database for Japan is displayed.

According to their information, the IEEE 802.11af system can achieve 2.2 Mbps downlink throughput over 3.7 km with an EIRP of 36 dBm, tested between two campuses of KCL. LTE FDD and TDD can achieve maximum downlink throughputs of 45.4 Mbps and 19.5 Mbps, respectively (no channel aggregation for TDD).

I asked the exhibitor about the user-device requirements for receiving TVWS signals. He then showed me a cellphone made in Japan (photo below). According to him, cellphones produced in Japan are equipped with TV antennas so that users can watch TV on the phone. In this case, the cellphone can utilize TVWS for communication very easily.

[Photo: Japanese cellphone with built-in TV antenna]

Ironically, although all the prototypes are produced by this Japanese institute and the geolocation database for Japan is already set up, the Japanese regulator hasn't allowed TVWS for commercial communication, only for some broadcasting use.

As most research moves to higher frequencies to obtain wide bandwidth for 5G, the role of TVWS has become somewhat awkward. The utilization of TVWS is quite limited by location and regulation, so it is destined not to be welcomed for large-scale commercial use. But its favorable propagation properties and low price (as long as use is authorized on a secondary basis) are still attractive. I wonder who will adopt this "child" in the end.

Posted in Conferences, Spectrum | Tagged , , | Leave a comment

All eyes on latency?

Last week I attended the 81st IEEE VTC conference in Glasgow, Scotland, to present my co-authored paper "On the Feasibility of Blind Dynamic TDD in Ultra-Dense Wireless Networks". As always, key trends and enabling technologies for the 5G ecosystem were on full display in many of the presentations, with, not surprisingly, xMBB driven to its extreme by UHD 4K/8K video and 3D online gaming at the top of the list.

Listening to the speakers in the panels and plenaries, though, there seemed to be a growing emphasis on latency rather than ever more capacity this time; this is especially important for delay-intolerant applications and mission-critical services with smaller payloads, as well as for maintaining a certain quality of experience. In one of the workshops, the French operator Orange listed capacity in a less impressive 7th place on their network-side priority list, with lower power consumption, cost efficiency (surprise, surprise) and greater flexibility for future evolutions instead claiming the top three spots.

During the panel session on "What is driving 5G?", Prof. Rahim Tafazolli, director of 5GIC and the Institute of Communication Systems at the University of Surrey, echoed this sentiment by saying that minimizing end-to-end delay had not been researched enough and demanded more attention going forward. A few proposals were, however, made by attendees, including more flexible frame lengths and shorter TTIs in the air interface, as well as more efficient tunneling protocols in the higher layers.

Also on the panel was Paul Crane, head of Practice, Research and Innovation at British Telecom. In his short introductory talk, he included a slide from a GSMA report showcasing different use cases with respect to latency and data-rate requirements. The idea was to show that existing LTE/LTE-A technology already seems to provide good enough speeds for many of the use cases envisioned for 5G. The additional cost will instead come from lowering the end-to-end delay, which he predicted may have a not-so-negligible impact on the business model of BT and others when thinking about things like the tactile Internet.

Source: GSMA Intelligence (https://gsmaintelligence.com/research/?file=141208-5g.pdf&download)

In the case of automated driving, which also appears high up in the chart, the circumstances are a bit different. Latency is, for obvious reasons, a critical requirement, but not necessarily the toughest. As noted in one of the plenaries by Dr. Luca Delgrossi, director of Autonomous Driving U.S. at Mercedes-Benz, an even larger issue is the need for a dedicated short-range broadcast channel, since point-to-point V2V communication eventually becomes far too burdensome for the network to handle as the number of connected cars increases. Who will end up facilitating (paying for) these spectral resources is at this point less clear.

Posted in Conferences, Systems | Tagged | Leave a comment