At a “5G event” like the Johannesberg Summit, the above question may sound like blasphemy, but over the two days it was one of the key discussion items. Ralf Irmer’s (Vodafone) presentation illustrates some of the key issues:
- The wishlist is long
- The list is not that different from the 4G and 3G wishlist
One of my colleagues even dared to suggest that he could today give a 5G talk with 1997-vintage 3G slides and hardly anyone would notice the difference. I wouldn’t go quite that far, but my main take-aways from these two days are that
- it is not obvious that all the scenarios and requirements can actually be met with one new “5G” system. Since we do not know today which the “killer applications” will be, we risk ending up, as with 3G, with an immensely complex standard overloaded with features (the term “feature porn” was actually used), most of which are never used. Of the innumerable “bearer services” in 3G, the only two that made it to widespread use were voice and best-effort data. In 4G, only best-effort data remained.
- 5G is largely about fixing what we didn’t manage to fix in 3G and 4G (i.e. evolution). The key requirements that stand out as truly different (revolution) are the Machine-Type Communication (MTC) modes: low data rates but large volumes of devices, (sometimes) requiring low latency, low power and very high reliability.
The “Elephant in the Room”, the one question that kept hanging in the air, was why we can’t partition the problem and provide separate solutions: for the Person-Type-Communication problems (the “Data Tsunami” issues) we go with evolved 4G and High Efficiency WLAN (HEW), whereas for MTC we design a new system. We already have 3–4 co-existing legacy systems (GSM, EDGE, 3G, HSPA, LTE etc.) in both towers and terminals, so what’s the big fuss with one more? 3GPP has done this trick before, so if we want to call the combination “5G”, so be it!
I was struck by a few marked differences of perspective between the industry reps (cellular service operators and device/system vendors) and the net-heads (mostly the academics and IT types).
Industry seemed gung ho about licensed outdoor millimeter-wave deployments, but some net-heads were skeptical about both the outdoor scenario and whether 3GPP was the right venue for standardization, given that IEEE seemed to be making good progress.
The net-heads tried to broaden the focus, with one panelist suggesting that 5G should be designed as a generic distributed computing platform, i.e. the community should take a holistic view of communications, computation and storage. The cellular industry was, not surprisingly, focused on comms and especially Radio Access Technologies. (They do love animal acronyms: RATs, COWs, COLTs, CROWs…)
Industry seemed to see machine-to-machine as a defining scenario for 5G, but the non-cellular types weren’t convinced; one argued persuasively in an off-the-record panel that cellular ignored M2M in 2G, 3G and 4G and that by the time 5G rolls around it would be too late since millions of devices with decades-long service lives will have been deployed.
On the computing platform issue – to some extent I can understand the “RAN-Heads”, i.e. the current cellular industry. They have a hard time identifying their role in the “distributed computing platform” future. I guess, in their position, I would see it (at best) as only a marginal improvement to be the one providing this generic platform rather than being relegated to the role of lowly bit-pipe provider. It could actually be worse, since, unlike physical connections, computation and storage resources are not localized, which increases the competition even more. The money seems to be made elsewhere – in the end-user service and content domain.