IEEE Globecom’12 - What’s a small cell and what are the research problems there?

I attended IEEE Globecom 2012 in Anaheim last week. Overall, I found growing interest in M2M and cloud services, at least on the industry side. Interest in small cells and energy efficiency had declined somewhat compared to previous years, but they were still among the major research areas.

In this blog post, I would particularly like to express my personal opinion on small cells. It is mainly based on my impressions from participating in one industry forum and one tutorial: “Small Cell and Heterogeneous Network (HetNet) Deployment”, headed by several small cell experts (Jie Zhang, University of Sheffield; David López-Pérez, Bell Labs; Guillaume de la Roche, Mindspeed; Hui Song, Ranplan), and “An Introduction to Small Cell Wireless Networks”, led by Mehdi Bennis, University of Oulu, and Walid Saad, University of Miami.

Low-power + licensed band = small cell?

Both the tutorial and the industry forum seemed to focus mainly on conventional femtocells under the new brand name ‘small cell’, which embraces more generic low-power nodes, i.e., femto/pico/micro/metro, etc. However, a clear definition of small cells is still missing, just as it was when the femtocell concept was introduced a couple of years ago.

Some seem to have in mind more or less indoor solutions, both for homes and enterprises, where cheap or almost free backhaul is available. Others (mostly from industries related to mobile operators) consider them to be outdoor, lamp-post type microcells deployed by operators. Regardless of the deployment scenario (by whom and where), the term generally seems to refer to low-power nodes with physically small, book-sized base stations. One explicit definition comes from the Small Cell Forum and was also used by Dr. David López-Pérez during the forum (http://www.smallcellforum.org/aboutsmallcells-small-cells-what-is-a-small-cell):

“Small cells are low-power wireless access points that operate in licensed spectrum, are operator-managed and feature edge-based intelligence.”

Here, I found two keywords: low-power and licensed spectrum. Regarding low-power, I interpret it as NOT traditional macrocellular output power. I admit the transmission power of small cells is drastically different from that of traditional BIG macrocellular BSs. However, it was a little vague to me how to differentiate all the different types of small cells (e.g., femto/pico/micro/metro). For instance, BS equipment with the same maximum output power capability will have different actual transmission power depending on the deployment environment. At the same time, the same equipment with fixed transmission power but different deployment densities can ideally lead to almost the same per-cell capacity in interference-limited situations.
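To make that interference-limited argument concrete, here is a minimal sketch of a 1-D regular deployment with a pure power-law path loss. All parameter values (transmit power, path-loss exponent, noise level) are assumptions I picked for illustration, not figures from the tutorial:

    import numpy as np

    def cell_edge_sinr(isd_m, tx_power_w=1.0, alpha=3.5, noise_w=1e-13, n_tiers=20):
        """Cell-edge SINR in a toy 1-D regular deployment with a pure
        power-law path loss d**(-alpha). Purely illustrative."""
        d = isd_m / 2.0                                # cell-edge user distance
        signal = tx_power_w * d ** (-alpha)
        k = np.arange(1, n_tiers + 1)                  # interfering BSs at +/- k*isd
        interferer_dist = np.concatenate([k * isd_m - d, k * isd_m + d])
        interference = np.sum(tx_power_w * interferer_dist ** (-alpha))
        return signal / (interference + noise_w)

    for isd in (500.0, 50.0):  # sparse deployment vs. a 10x denser one, same Tx power
        print(f"ISD = {isd:5.0f} m -> cell-edge SINR = {10 * np.log10(cell_edge_sinr(isd)):.2f} dB")

With the noise floor this low, both deployments are interference-limited and the printed SINRs come out essentially identical, which is the scale-invariance I had in mind; with a non-negligible noise floor the denser deployment would come out ahead.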

Another strange thing comes from the licensed-spectrum constraint in the definition. According to it, if I use an HeNB (the terminology for a small cell in an LTE context) in unlicensed spectrum, it may not be considered a small cell, although its transmission power is in the range of typical low-power nodes (e.g., 20 dBm to 30 dBm). I guess they may not want to include Wi-Fi technologies in their scope by having such a constraint. Conversely, if we simply operated Wi-Fi in the 2.6 GHz licensed cellular band with marginal modifications, it would seem to count as a small cell.

Do we need distributed RRM for small cell deployment?

One of the challenges in small cell deployment is the sheer number of nodes and the resulting optimization complexity. I fully agree with this. However, regarding the solutions and research approaches, there are several things that I could not clearly understand. Significant effort seems to be put into developing distributed algorithms that aim at close-to-centralized performance with little or no information exchange among small cell nodes, but generally with substantial iterative processes. This is basically the original meaning of the self-organizing network concept from biology: learning based on local interactions without any explicit information exchange. However, I am a little doubtful about why we need such complex algorithms when almost free backhaul is available to exchange information among small cells. Distributed RRM algorithms are needed only when centralized algorithms are simply not feasible, e.g., when no central entity is available for scalability or network-flexibility reasons, or when there is no backhaul at all. For coordination among infrastructure-less nodes, e.g., ad-hoc nodes or end-user terminals (uplink), distributed algorithms are essential, although they would be challenging in practice. The only motivation for distributed algorithms that I can find now is faster adaptation than information exchange over a shared IP backhaul, which typically has a latency on the order of 10~100 ms in the worst case. However, that level of fast adaptation requires the distributed algorithm to converge much faster than this in order to have any benefit over a centralized one. Otherwise, simple myopic-decision algorithms with poor performance may be needed, e.g., CSMA.
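To make the timing trade-off explicit, here is a trivial back-of-envelope sketch. Every number in it (iterations to converge, per-iteration period, solver time) is an assumption I picked for illustration; only the 100 ms worst-case backhaul latency comes from the discussion above:

    # Back-of-envelope comparison (illustrative assumptions only):
    # - distributed RRM: converges after n_iter iterations, one per radio frame
    # - centralized RRM: one report + one command over the shared IP backhaul, plus solver time
    frame_ms = 10.0          # assumed per-iteration period (e.g., one LTE radio frame)
    n_iter = 50              # assumed iterations until a distributed scheme converges
    backhaul_rtt_ms = 100.0  # worst-case shared-backhaul round trip mentioned above
    solver_ms = 20.0         # assumed central optimization time

    distributed_ms = n_iter * frame_ms
    centralized_ms = backhaul_rtt_ms + solver_ms
    print(f"distributed convergence ~ {distributed_ms:.0f} ms")
    print(f"centralized decision    ~ {centralized_ms:.0f} ms")
    # With these numbers the centralized loop wins; the distributed scheme only pays off
    # if it converges in fewer than (backhaul_rtt_ms + solver_ms) / frame_ms iterations.

Under these made-up numbers, the distributed scheme would need to converge within roughly a dozen iterations to beat a single centralized round trip.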

In my opinion, slow or long-term (minute-level to hour-level) RRM for small cell deployments should be done in a centralized manner regardless of how many nodes there are, since it can always perform at least as well as a distributed approach. The only practical issue here is where the central coordination functionality should be located: on a physically separate node, or on one of the nodes acting as a master. Nevertheless, the majority of research seems to forget that free backhaul is there, at least for indoor small cells (even outdoor small cell systems always have backhaul for data forwarding). Short-term or fast RRM algorithms are always desirable, but I think finding such algorithms is extremely challenging.

As far as centralized algorithms are concerned, the main issues are more related to the optimization algorithms themselves, which may not really be a problem for the wireless domain. From a wireless research perspective, it is still not so clear how much performance can ideally be improved for different levels of RRM (let us call it the coordination level). Interpreting the performance benefit in terms of deployment would be interesting as well. Typically, coordination or more efficient RRM has been studied to show the performance improvement for a given BS deployment. However, X% more bits from coordination does not naturally translate into X% fewer BSs. Thus, how much BS density can be saved by more efficient coordination, regardless of the specific algorithm, is a non-trivial question. There do not seem to be many studies in this regard, while the majority still concentrates on algorithm development. Did I miss some literature?
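As a quick illustration of why the bits-to-BSs mapping is not one-to-one, here is a toy numerical sketch. The capacity model, the 20% coordination gain, and every constant in it are assumptions made up purely for illustration:

    import numpy as np

    # Toy mapping from a coordination gain to a BS-density saving. Assumed model:
    # area capacity = density * per-cell capacity, where per-cell capacity degrades
    # mildly as the network densifies (an arbitrary illustrative penalty term).
    def per_cell_capacity(density, gain=1.0):
        return gain * 4.0 / (1.0 + 0.1 * density)      # bits/s/Hz per cell

    def area_capacity(density, gain=1.0):
        return density * per_cell_capacity(density, gain)

    densities = np.linspace(0.1, 50.0, 10_000)         # BSs per km^2 (arbitrary scale)
    target = area_capacity(10.0)                       # demand met by 10 BS/km^2 without coordination

    for gain in (1.0, 1.2):                            # 1.2 = assumed 20% per-cell gain from coordination
        needed = densities[np.argmax(area_capacity(densities, gain) >= target)]
        print(f"gain = {gain:.1f} -> density needed ~ {needed:.2f} BS/km^2")
    # Under this toy model a 20% capacity gain saves roughly 29% of the BSs, not 20%:
    # the mapping depends on how per-cell capacity scales with density.

Whether the gain buys more or less than its nominal percentage depends entirely on the assumed density scaling, which is exactly the kind of question I would like to see studied.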
