How much spectrum data is needed for efficient spectrum sharing?

IEEE DySPAN 2015 is under way in Stockholm, and I attended (actually co-organized) an NSF-sponsored workshop on “Future Directions in Spectrum Management Research”. The actual organization of the workshop became, admittedly, somewhat “ad hoc”, but thanks to the strong and somewhat provocative keynotes, the on-the-mark invited presentations and a very active audience, the workshop content was very interesting and the discussions lively all the way to the very end.

A big part of the discussion revolved around the geolocation database systems, coined Spectrum Access Systems (SAS), that will be used to authorize shared access – now being applied to the 3.5 GHz band in the US. Preston Marshall from Google started things off by describing the concepts and by discussing the accuracy of today's 3D maps of our environment, which allow detailed radio propagation modelling. Keeping track of tens of thousands of receivers that need to be protected is not considered a computationally difficult or heavy task these days.

An interesting question that I raised in the concluding discussions is to what extent it is useful to augment the maps with large bulks of actual measurement data. Several other speakers described measurement campaigns to collect such spectrum-analyzer data. One way to approach the issue is to ask what the limits of such measurements are – if all these measurements were perfect, if the 3D maps were perfect, to what accuracy could they predict the interference to primary receivers from a hypothetical device? Anant Sahai posed the question whether there is a “Nyquist sampling rate” for 3D spectrum maps, i.e. whether, if we sample more densely than this, we could perfectly reconstruct the signals. I am somewhat doubtful. Further, even with such “perfect prediction”, if it is not known what the orientation of my antenna is, whether the window is open, whether I stand behind or in front of a temporary object (e.g. a bus) in the street, etc., how many dB of margin do I have to include in my SAS decision to “guarantee” primary protection? 10 dB? 20 dB? A large margin significantly reduces our capability to effectively reuse the spectrum.
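To make the cost of such margins a bit more concrete, here is a minimal sketch of my own (not something presented at the workshop) of how an added safety margin inflates the exclusion zone around a primary receiver. It assumes a simple log-distance path-loss model with an exponent of 3.5, and all powers, thresholds and distances are made-up illustrative numbers:

```python
# Rough illustration: how an uncertainty margin grows the protection distance
# around a primary receiver, using a simple log-distance path-loss model.
# All parameter values below are assumptions for illustration only.

import math

def protection_distance(tx_power_dbm, protection_threshold_dbm, margin_db,
                        pl_ref_db=40.0, ref_dist_m=1.0, pl_exponent=3.5):
    """Distance at which the predicted interference, plus the margin,
    falls below the protection threshold."""
    # Path loss required so that interference + margin stays below threshold
    required_pl_db = tx_power_dbm - protection_threshold_dbm + margin_db
    # Invert PL(d) = pl_ref_db + 10 * n * log10(d / ref_dist_m)
    return ref_dist_m * 10 ** ((required_pl_db - pl_ref_db) / (10 * pl_exponent))

for margin in (0, 10, 20):
    d = protection_distance(tx_power_dbm=23, protection_threshold_dbm=-100,
                            margin_db=margin)
    print(f"margin {margin:2d} dB -> exclusion radius ~{d:4.0f} m, "
          f"blocked area ~{math.pi * (d / 1000) ** 2:.2f} km^2")
```

With these (assumed) numbers, each extra 10 dB of margin multiplies the blocked area by roughly a factor of four – which is exactly why the size of the margin, not the database machinery itself, may end up deciding how much reuse we actually get.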

Another way to answer the above question is to ask what the actual bottleneck components in the interference prediction calculation are. Is it the (primary) receiver characteristics, the propagation modelling including the local, temporary propagation conditions mentioned above, the knowledge about interference sources and receiver locations, etc.? This gives us a hint about where collecting more and more accurate data would pay off, and whether some parts are just fundamentally out of reach. As a chain is no stronger than its weakest link, it would hardly be worth the effort of significantly improving one of the factors when we are in fact limited by something completely different. So do we need more spectrum-analyzer data, or is the bottleneck somewhere else?
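The “weakest link” point can be made quantitative with a toy error budget. The sketch below is my own illustration, not from the workshop: it assumes the individual uncertainty contributions are roughly independent and Gaussian on a dB scale (so their standard deviations add in quadrature), and the numbers assigned to each component are invented for the example:

```python
# Toy error budget for an interference prediction. Assumption: contributions
# are independent and Gaussian in dB, so 1-sigma values add in quadrature.
# The numbers are illustrative guesses, not measured values.

import math

budget_db = {                         # assumed 1-sigma uncertainties, in dB
    "propagation model":        8.0,  # terrain/clutter model error
    "local effects":            6.0,  # antenna orientation, open window, bus...
    "receiver characteristics": 2.0,
    "transmitter location":     1.0,
}

def total_sigma(budget):
    return math.sqrt(sum(s ** 2 for s in budget.values()))

print(f"total uncertainty: {total_sigma(budget_db):.1f} dB")

# Halving a small contributor changes almost nothing; only the dominant one matters.
for name in budget_db:
    improved = dict(budget_db, **{name: budget_db[name] / 2})
    print(f"halve {name:25s} -> {total_sigma(improved):.1f} dB")
```

In this made-up budget, halving the receiver or location uncertainty leaves the total essentially unchanged, while halving the propagation-model error cuts it substantially – which is the argument for finding the real bottleneck before investing in yet more measurement data.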

About Jens Zander

Jens Zander is Professor in Radio Communication Systems at the Royal Institute of Technology, Stockholm, Sweden. He has been among the few in Sweden's Ny Teknik magazine's annual list of influential people in ICT to have been given the epithet “Mobile Guru”. He is one of the leading researchers in mobile communication and is the Scientific Director of the industry/academia collaboration center Wireless@KTH. His research group focuses on three main areas – the efficient and scalable use of the radio frequency spectrum, economic aspects of mobile systems and applications, and energy efficiency in future wireless infrastructures.
