For a graduation project, we need a topic that is academically rigorous, demonstrates our embedded systems programming skills, and solves a realistic telecom problem.
Design and Evaluation of an Energy-Aware MAC Scheduler for IoT Traffic
The General Concept:
Traditional LTE MAC schedulers (like Max C/I or Proportional Fair) are designed for broadband traffic, aiming to maximize throughput or balance fairness among mobile phone users. However, in an IoT ecosystem, devices often run on batteries and transmit small, infrequent packets. An energy-aware scheduler modifies the resource allocation strategy to minimize the time an IoT device needs to stay awake monitoring the Physical Downlink Control Channel (PDCCH).
The Problem
It is crucial to understand that standard LTE schedulers (like Proportional Fair or Max C/I) were designed for mobile broadband - their primary goals are maximizing cell throughput and ensuring fairness for users downloading large files or streaming video. They assume the User Equipment (UE) has a large, rechargeable battery.
When we apply these standard schedulers to IoT devices (which transmit tiny amounts of data and must run on a coin-cell battery for up to 10 years), several massive inefficiencies occur at the MAC and Physical layer interface.
The “Blind Decoding” Energy Drain
The most significant battery drain for an idle or lightly active IoT device is listening to the eNodeB. The eNodeB uses the Physical Downlink Control Channel (PDCCH) to send Downlink Control Information (DCI). The DCI tells the UE exactly which Resource Blocks (RBs) contain its data in the current 1ms Transmission Time Interval (TTI).
The problem is that the eNodeB does not tell the UE where on the PDCCH its specific DCI is located. To find its DCI, the UE must perform Blind Decoding. It searches through the Control Channel Elements (CCEs) across multiple Aggregation Levels (AL 1, 2, 4, and 8). A UE may have to perform up to 44 separate decoding attempts (checking the CRC against its unique RNTI) every single millisecond just to figure out whether the eNodeB is talking to it.
If a standard scheduler has no data for an IoT device, or delays the data because the channel is busy, the IoT device wakes up, executes up to 44 computationally heavy decoding attempts, finds nothing, and goes back to sleep. The energy-aware scheduler would aim to synchronize data delivery with the UE’s wake-ups so that the UE only listens when a DCI addressed to it is actually present.
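The scale of this drain can be made concrete with a back-of-the-envelope cost model. The sketch below (in C++, matching the custom C/C++ simulation we plan) uses a hypothetical per-attempt energy constant `kEnergyPerDecodeUj`; only the 44-attempts-per-TTI figure comes from the search-space budget described above.

```cpp
#include <cassert>

// Per-subframe blind-decoding budget: up to 44 PDCCH decode attempts
// across the common and UE-specific search spaces (AL 1/2/4/8).
constexpr int kBlindDecodesPerTti = 44;

// Hypothetical per-attempt energy cost in microjoules (illustrative only;
// a real value would come from power measurements on the target modem).
constexpr double kEnergyPerDecodeUj = 0.5;

// Energy a UE burns monitoring the PDCCH for `awakeTtis` subframes in
// which no DCI is actually addressed to it.
double wastedBlindDecodeEnergyUj(int awakeTtis) {
    return awakeTtis * kBlindDecodesPerTti * kEnergyPerDecodeUj;
}
```

With these placeholder numbers, a UE that stays awake for 100 TTIs with no downlink data wastes 100 × 44 × 0.5 = 2200 µJ on fruitless decoding; the constants are stand-ins, but the linear structure of the model is the point.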
The “Tail Energy” and DRX Misalignment
To save battery, LTE utilizes Discontinuous Reception (DRX). DRX allows the UE’s radio to enter a deep sleep state and wake up only during a brief, negotiated “On-Duration.”
However, DRX relies on an Inactivity Timer. When a UE successfully receives a packet, this timer starts. As long as the timer is running, the UE must stay fully awake, monitoring the PDCCH, just in case more data is coming.
- The Scheduler Problem: A traditional Proportional Fair scheduler might send a 100-byte IoT payload in small chunks: 20 bytes in Subframe 1, nothing in Subframe 2, 40 bytes in Subframe 3, and so on. Every time a chunk arrives, the Inactivity Timer resets, keeping the IoT device’s radio fully powered on for much longer than necessary. This wasted awake time is called “Tail Energy.”
- The Solution: An energy-aware scheduler actively monitors the MAC buffers and the DRX state of the UEs. Instead of dripping the data over several subframes, it forcefully bundles the data together, allocating a larger block of RBs in a single TTI. This flushes the buffer instantly, allowing the Inactivity Timer to expire rapidly so the device can return to deep sleep.
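The tail-energy effect can be quantified with a simple model: each chunk delivery restarts the Inactivity Timer, and the radio stays powered until the timer expires after the last chunk. A minimal C++ sketch (the function name and TTI-based units are our own):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// TTIs the radio stays fully powered, assuming the first chunk arrives at
// subframe 0, each delivery restarts the Inactivity Timer, and the UE
// sleeps once the timer expires after the final chunk.
int awakeTimeTtis(const std::vector<int>& deliverySubframes,
                  int inactivityTimerTtis) {
    if (deliverySubframes.empty()) return 0;
    int lastDelivery = *std::max_element(deliverySubframes.begin(),
                                         deliverySubframes.end());
    return lastDelivery + inactivityTimerTtis;
}
```

Dripping a payload over subframes {0, 2, 4} with a 10 ms Inactivity Timer keeps the radio on for 14 TTIs, while bundling everything into subframe 0 cuts that to 10; the gap widens quickly with longer timers and more fragments.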
CQI vs. Battery Trade-off (Optional)
Standard schedulers obsess over the Channel Quality Indicator (CQI). If a UE reports a bad CQI, a Max C/I scheduler will ignore that UE and serve someone else with a better radio link to maximize total cell throughput.
- The Scheduler Problem: IoT devices are often static and located in terrible radio environments (like deep inside a basement). Their CQI is consistently poor. A standard scheduler will continuously delay their transmission, waiting for the channel to “improve.” But the channel will never improve, and the IoT device is bleeding battery life staying awake waiting to be scheduled.
- The Solution: An energy-aware algorithm introduces a “Time-to-Sleep” or “Energy-Urgency” weight into the scheduling metric. Even if the CQI is terrible, if the IoT device’s battery is draining because it has been awake too long, the scheduler must override the CQI logic, force an allocation using a robust Modulation and Coding Scheme (like QPSK), and let the device transmit so it can go to sleep.
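One concrete way to encode this trade-off is to add an awake-time term to the Proportional Fair metric. The weight name and the linear urgency model below are our assumptions, not a standardized formula:

```cpp
#include <cassert>

// Proportional Fair metric augmented with an energy-urgency term: the
// longer a UE has been awake, the larger its weight, until it overrides
// even a very poor CQI-derived rate.
double energyAwareMetric(double instRateMbps, double avgRateMbps,
                         int awakeTtis, double urgencyGain) {
    double pf = instRateMbps / avgRateMbps;   // classic PF term
    double urgency = urgencyGain * awakeTtis; // grows while the UE is awake
    return pf + urgency;
}
```

With `urgencyGain = 0.01`, a basement sensor stuck at a fifth of its average rate but awake for 200 TTIs outranks a fresh broadband UE at twice its average rate, so the sensor finally gets a robust QPSK allocation and can go back to sleep.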
PDCCH Blocking and CCE Starvation (The mMTC Problem) (Optional)
IoT deployments usually involve massive Machine-Type Communications (mMTC)—meaning thousands of sensors in a single cell. Because these devices use DRX, they often wake up at the exact same time (at the start of a DRX cycle).
The PDCCH has a hard limit on how many DCI messages it can transmit in a single subframe because there is a finite number of Control Channel Elements (CCEs).
- The Scheduler Problem: If 50 IoT devices wake up in Subframe N, the MAC scheduler might only have enough CCEs to schedule 10 of them. The standard scheduler schedules those 10 and leaves the other 40 devices waiting. Those 40 devices just wasted battery performing blind decoding for nothing, and they now have to stay awake into Subframe N+1 to try again. This is known as PDCCH Blocking.
- The Solution: Our algorithm would need to implement “CCE prediction” or dynamic DRX offset calculation. The scheduler would recognize that a collision of awake UEs is about to happen and proactively manage the CCE allocations, perhaps utilizing cross-subframe scheduling, to ensure no device wakes up unless there is guaranteed PDCCH space for it.
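A first step toward CCE-aware scheduling is simply to respect the CCE budget when building the grant list, and to treat every UE that does not fit as a candidate for a shifted DRX offset rather than letting it blind-decode for nothing. A greedy sketch, with hypothetical types of our own:

```cpp
#include <cassert>
#include <vector>

struct UeRequest {
    int rnti;       // identifies the UE
    int ccesNeeded; // CCEs its DCI requires at the chosen aggregation level
};

// Greedy pass over UEs already sorted by scheduling metric: grant until
// the CCE budget runs out. Blocked UEs would receive a new DRX offset
// (or a cross-subframe grant) instead of waking up again next subframe.
std::vector<int> allocateCces(const std::vector<UeRequest>& sortedByMetric,
                              int cceBudget) {
    std::vector<int> grantedRntis;
    for (const auto& ue : sortedByMetric) {
        if (ue.ccesNeeded <= cceBudget) {
            cceBudget -= ue.ccesNeeded;
            grantedRntis.push_back(ue.rnti);
        }
    }
    return grantedRntis;
}
```

The full algorithm would pair this with the DRX offset calculation, so that the blocked UEs are reprogrammed to wake in a subframe where CCE space is predicted to be free.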
Evaluation
We can simulate this using a network simulator (like NS-3) or write a custom C/C++ simulation. We would measure the trade-off between latency (how long a packet waits in the eNodeB queue) and the estimated battery life saved on the embedded IoT node.
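Whichever simulator we choose, the evaluation boils down to two counters per run: queueing delay at the eNodeB and awake-time energy at the UE. A minimal C++ harness for that trade-off, using an assumed flat awake-power model:

```cpp
#include <cassert>
#include <vector>

struct Packet {
    int arrivalTti; // when the packet entered the eNodeB MAC queue
    int serviceTti; // when the scheduler finally delivered it
};

struct Stats {
    double meanLatencyTtis; // average queueing delay
    double ueEnergyUj;      // UE energy spent awake waiting for delivery
};

// Assumes the UE is awake from a packet's arrival through its service TTI,
// burning a flat awakeCostUjPerTti each subframe (a simplification).
Stats evaluate(const std::vector<Packet>& pkts, double awakeCostUjPerTti) {
    double latencySum = 0.0, energy = 0.0;
    for (const auto& p : pkts) {
        latencySum += p.serviceTti - p.arrivalTti;
        energy += (p.serviceTti - p.arrivalTti + 1) * awakeCostUjPerTti;
    }
    double mean = pkts.empty() ? 0.0 : latencySum / pkts.size();
    return {mean, energy};
}
```

Running the same arrival trace through a Proportional Fair baseline and through the energy-aware scheduler, then comparing the two Stats objects, yields exactly the latency-versus-battery curve the evaluation needs.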
AI Integration
Machine Learning allows the eNodeB to become predictive, anticipating traffic and network congestion before it drains the User Equipment (UE) battery.
Traffic Pattern Prediction for Dynamic DRX and Blind Decoding Avoidance
The Specific Problem to Solve: IoT devices often generate predictable traffic, but standard MAC schedulers and Discontinuous Reception (DRX) timers are static. If an IoT sensor sends a temperature update every 10 minutes, a standard scheduler might still let the device wake up periodically according to a rigid DRX cycle, forcing it to perform blind decoding on the Physical Downlink Control Channel (PDCCH) when there is absolutely no data waiting for it.
The Machine Learning Application: We can deploy a time-series forecasting model (such as a Long Short-Term Memory (LSTM) network, a Gated Recurrent Unit (GRU), or even a lighter Random Forest regressor) at the eNodeB to predict exactly when a specific IoT device will have downlink data arriving at the MAC buffer.
In-Depth Execution & Architecture:
- Feature Engineering: The eNodeB MAC layer continuously collects per-UE history, such as packet inter-arrival times, payload sizes, and the time of day of each arrival.
- The Prediction Output: The model outputs a continuous value representing the estimated time (in milliseconds or subframes) until the next packet arrives for a specific Radio Network Temporary Identifier (RNTI).
- MAC Layer Integration: Once the ML model predicts that the next packet for UE #1 will not arrive for another 5000 milliseconds, the eNodeB actively manipulates the UE’s sleep cycle. Instead of waiting for standard timers to expire, the MAC scheduler generates a DRX Command MAC Control Element (MAC CE) and sends it to the UE. This forcefully commands the UE to go into deep sleep and perfectly aligns its next “On-Duration” wake-up time with the exact millisecond the predicted data will arrive.
- The Result: Blind decoding is drastically reduced. The UE wakes up only when the eNodeB is confident it has data to transmit, significantly extending battery life.
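The decision at the end of this pipeline is small: given a predicted idle gap, is it worth sending a DRX Command MAC CE, and when should the next On-Duration land? A sketch of that final step (the thresholds and names are our assumptions; the ML model itself simply supplies `predictedNextArrivalTti`):

```cpp
#include <cassert>

struct DrxDecision {
    bool sendDrxCommand;   // emit a DRX Command MAC CE this subframe?
    int nextOnDurationTti; // when the UE should next monitor the PDCCH
};

// Command sleep only if the predicted gap beats a minimum worthwhile
// sleep plus a wake-up margin; schedule the On-Duration just ahead of
// the predicted arrival so the first blind decode succeeds.
DrxDecision planSleep(int nowTti, int predictedNextArrivalTti,
                      int wakeupMarginTtis, int minSleepTtis) {
    int gap = predictedNextArrivalTti - nowTti;
    if (gap > minSleepTtis + wakeupMarginTtis) {
        return {true, predictedNextArrivalTti - wakeupMarginTtis};
    }
    return {false, nowTti}; // gap too short: keep monitoring normally
}
```

For the 5000 ms example above, the scheduler would issue the DRX Command immediately and place the next On-Duration a couple of subframes before the predicted arrival, absorbing prediction error with the wake-up margin.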