Base station scheduling - Incremental solution - 1


Architecture

The diagram below shows one possible scheduler architecture.

The Data Structures hold the dynamic information about the current allocation requests. On receiving an allocation request, the Allocator puts it into the Data Structures. On receiving a deallocation request, the Deallocator removes the corresponding allocation request from the Data Structures. The Scheduler Engine checks the Data Structures every Δt and then, based on its scheduling logic, allocates the upcoming bucket from the resource queue to the appropriate user.
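
To make the division of responsibilities concrete, here is a rough Python sketch of how these four components could be wired together. All class and method names (DataStructures, Allocator, Deallocator, SchedulerEngine, run_once, and so on) are illustrative assumptions, not part of the original design.

```python
import threading
import time

class DataStructures:
    """Holds the dynamic, per-bucket allocation requests."""
    def __init__(self):
        self._requests = {}                     # user_id -> internal request
        self._lock = threading.Lock()
    def add(self, user_id, request):
        with self._lock:
            self._requests[user_id] = request
    def remove(self, user_id):
        with self._lock:
            self._requests.pop(user_id, None)
    def snapshot(self):
        with self._lock:
            return dict(self._requests)

class Allocator:
    """Converts a service-level request into internal per-bucket request(s)."""
    def __init__(self, data_structures):
        self.ds = data_structures
    def allocate(self, user_id, service_request):
        self.ds.add(user_id, service_request)   # conversion logic comes later

class Deallocator:
    """Removes a user's allocation requests on a deallocation request."""
    def __init__(self, data_structures):
        self.ds = data_structures
    def deallocate(self, user_id):
        self.ds.remove(user_id)

class SchedulerEngine:
    """Wakes up every Δt and decides who gets the upcoming bucket."""
    def __init__(self, data_structures, delta_t):
        self.ds, self.delta_t, self.bucket = data_structures, delta_t, 0
    def run_once(self):
        requests = self.ds.snapshot()
        # Scheduling logic (priority queues, criteria) is developed below.
        self.bucket += 1
        return requests
    def run(self):
        while True:                             # one scheduling decision per Δt
            self.run_once()
            time.sleep(self.delta_t)
```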

Data Structures

We will begin our analysis with the Data Structures, as they are the central point. Since there are three classes of service (CR, VR, and HPVR), we can have three queues, one for each class. It is easy to see that these queues themselves can be prioritised: HPVR - priority 1, CR - priority 2, VR - priority 3.

As you can see in the above diagram, we have a prioritised list of classes. Each class queue contains a list of allocation requests. An allocation request contains two parameters: "Allocation Criterion" and "User ID". Note that this allocation request is not the same as the one given to the Allocator. The Allocator logic converts the original allocation request, which is made in terms of certain service parameters (much like QoS parameters), into one or more appropriate internal per-bucket allocation requests. So the allocation requests maintained by the Data Structures are internal (per-bucket) allocation requests. This point will become clearer as we proceed.
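
As a minimal sketch (in Python, which is assumed throughout these examples), the prioritised class queues and the internal per-bucket allocation requests could look like the following; representing the Allocation Criterion as a plain string here is purely illustrative.

```python
from collections import deque

# One queue per class of service, kept in priority order: HPVR, CR, VR.
class_queues = {
    "HPVR": deque(),   # priority 1
    "CR":   deque(),   # priority 2
    "VR":   deque(),   # priority 3
}

def add_internal_request(service_class, user_id, allocation_criterion):
    """An internal (per-bucket) allocation request holds just two parameters:
    the Allocation Criterion and the User ID."""
    class_queues[service_class].append(
        {"user_id": user_id, "criterion": allocation_criterion}
    )

# Example: the Allocator would translate a service-level request such as
# "CR bearer with a certain data rate" into an internal per-bucket request.
add_internal_request("CR", "User1", "every 7th bucket")
add_internal_request("VR", "User2", "any 4 buckets")
print(class_queues)
```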

Scheduler Engine

So the Scheduler Engine has to allocate resources from the resource queue based on the per-bucket allocation requests held by the Data Structures.

Every Δt, the Scheduler Engine traverses the allocation request queues, starting from the highest priority service queue (i.e. in the order HPVR, CR, VR). At each allocation request, it evaluates the "Allocation Criterion" for the upcoming bucket (resource). If the criterion is satisfied, the Scheduler Engine allocates the upcoming bucket to that particular User ID. If the criterion is not satisfied, the Scheduler Engine moves on to the next allocation request. If all allocation requests have been visited without a match, the Scheduler Engine does not allocate the bucket to anyone.
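
A minimal sketch of this per-Δt traversal is shown below. Representing each internal request as a dict with a "user_id" and a callable "criterion" (plus an optional "on_granted" hook to update the criterion) is an assumption made for illustration, not part of the original design.

```python
def schedule_next_bucket(bucket, queues):
    """Visit the queues in priority order (HPVR, CR, VR) and allocate
    `bucket` to the first request whose Allocation Criterion is satisfied;
    return None if no request matches (bucket stays unallocated)."""
    for queue in queues:
        for request in queue:
            if request["criterion"](bucket):
                if "on_granted" in request:
                    request["on_granted"](bucket)   # e.g. fix "starting from nth bucket"
                return request["user_id"]
    return None

# Example: a CR request wanting every 7th bucket and a VR request wanting any bucket.
cr_queue = [{"user_id": "User1", "criterion": lambda n: n % 7 == 0}]
vr_queue = [{"user_id": "User2", "criterion": lambda n: True}]
print(schedule_next_bucket(0, [cr_queue, vr_queue]))   # -> User1
print(schedule_next_bucket(1, [cr_queue, vr_queue]))   # -> User2
```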

As can be seen, the Scheduler Engine has only Δt to traverse the allocation request queues. One way to make sure the Scheduler Engine does not run out of time would be to have it evaluate the "Allocation Criterion" of each queue in parallel (perhaps one thread/process per service queue).
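
One possible way to realise this parallelism (assumed here: one worker thread per service queue via Python's concurrent.futures) is sketched below; note that the final pick still respects the HPVR > CR > VR priority order.

```python
from concurrent.futures import ThreadPoolExecutor

def first_match(queue, bucket):
    """Return the first request in this queue whose criterion is satisfied."""
    for request in queue:
        if request["criterion"](bucket):
            return request
    return None

def schedule_in_parallel(bucket, queues):
    # queues is ordered by priority: [HPVR, CR, VR]
    with ThreadPoolExecutor(max_workers=len(queues)) as pool:
        results = list(pool.map(lambda q: first_match(q, bucket), queues))
    # The highest-priority queue that produced a match wins the bucket.
    for match in results:
        if match is not None:
            return match["user_id"]
    return None

# Example with trivial criteria (callables taking the bucket index):
hpvr = []
cr   = [{"user_id": "User1", "criterion": lambda n: n % 7 == 0}]
vr   = [{"user_id": "User2", "criterion": lambda n: True}]
print(schedule_in_parallel(0, [hpvr, cr, vr]))   # -> User1
print(schedule_in_parallel(1, [hpvr, cr, vr]))   # -> User2
```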

Let us take an example to see clearly how this works.

Say there are currently two users requesting allocations: one requesting { a bearer of class CR with a certain data rate } and the other requesting { a bearer of class VR to transfer a certain amount of data }. Say the Allocator converts these allocation requests into two internal per-bucket allocation requests: { User1 requires a bucket every 7th Δt (i.e. every 7th bucket) } and { User2 requires any 4 buckets }. The Allocator puts User1's allocation request in the CR queue and User2's allocation request in the VR queue.

When the Scheduler Engine wakes up (Δt after its last wake-up), say for the nth bucket, it first checks the CR queue and finds User1 with the allocation criterion "every 7th bucket". As the allocation criterion is satisfied, the Scheduler Engine allocates the nth bucket to User1 and updates its allocation criterion to "every 7th bucket starting from the nth bucket".

When the Scheduler Engine wakes up for the (n+1)th bucket, it checks the CR queue and User1's allocation request. As the (n+1)th bucket does not satisfy that allocation criterion and the CR queue is exhausted, the Scheduler Engine moves to the VR queue and finds User2's request. The Scheduler Engine allocates the (n+1)th bucket to User2 and updates its allocation criterion to "requires any 3 buckets".

So here is how the resource allocation queue would look, where (Rk) indicates that k buckets of the request remain after the allocation and (E) indicates that the request is exhausted (a small simulation sketch follows the table):

Bucket #    User
n           User1
n+1         User2 (R3)
n+2         User2 (R2)
n+3         User2 (R1)
n+4         User2 (E)
n+5         -
n+6         -
n+7         User1
n+8         -
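
The following self-contained sketch simulates this two-user example and reproduces the allocation pattern in the table above. The criterion classes (EveryNth, AnyK) and their method names are assumptions made for illustration.

```python
class EveryNth:
    """CR-style criterion: one bucket every `period` buckets, anchored at the
    first bucket actually granted ("every 7th bucket starting from nth")."""
    def __init__(self, user, period):
        self.user, self.period, self.anchor = user, period, None
    def wants(self, n):
        return self.anchor is None or (n - self.anchor) % self.period == 0
    def granted(self, n):
        if self.anchor is None:
            self.anchor = n
    def exhausted(self):
        return False                      # CR requests stay until deallocated

class AnyK:
    """VR-style criterion: any `remaining` buckets."""
    def __init__(self, user, remaining):
        self.user, self.remaining = user, remaining
    def wants(self, n):
        return self.remaining > 0
    def granted(self, n):
        self.remaining -= 1
    def exhausted(self):
        return self.remaining == 0

def simulate(queues, num_buckets):
    """Priority-ordered traversal (as described above), one bucket per Δt."""
    allocations = []
    for n in range(num_buckets):
        user = "-"
        for queue in queues:                       # e.g. [CR queue, VR queue]
            satisfied = next((r for r in queue if r.wants(n)), None)
            if satisfied:
                satisfied.granted(n)
                user = satisfied.user
                if satisfied.exhausted():
                    queue.remove(satisfied)
                break
        allocations.append(user)
    return allocations

cr_queue = [EveryNth("User1", 7)]
vr_queue = [AnyK("User2", 4)]
print(simulate([cr_queue, vr_queue], 9))
# -> ['User1', 'User2', 'User2', 'User2', 'User2', '-', '-', 'User1', '-']
```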

Would this algorithm work fine for all scenarios? The answer is "No". Let's add another user, User3, to the above example and see whether the Scheduler Engine still behaves well. Say User3 requests a VR bearer to transfer a certain amount of data. Say the Allocator converts this request into the internal per-bucket allocation request { User3 requires any 8 buckets } and puts it into the VR queue.

The Scheduler Engine would give the nth bucket to User1 (User1 being in the CR queue). The (n+1)th bucket goes to User2, and that is also fine. The problem is which user the (n+2)th bucket should go to: User2 or User3. Both are in the same queue, but User2, being ahead of User3 in the queue, will get the allocation. Though the service requirements would eventually be met for all users, this is not fair to User3.

The above Scheduler Engine logic can be made "fair" by converting the VR allocation request queue from linear to circular, with the (allocation request) pointer advancing to the next allocation request after every allocation. With this enhancement, the allocation queue would look like this (a small code sketch of the circular queue follows the table):

Bucket #    User
n           User1
n+1         User2 (R3)
n+2         User3 (R7)
n+3         User2 (R2)
n+4         User3 (R6)
n+5         User2 (R1)
n+6         User3 (R5)
n+7         User1
n+8         User2 (E)
n+9         User3 (R4)
n+10        User3 (R3)
n+11        User3 (R2)
n+12        User3 (R1)
n+13        User3 (E)
n+14        User1
n+15        -
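
A minimal sketch of this circular (round-robin) VR queue is shown below; the pointer is modelled by rotating a deque after every allocation so that the next allocation starts from the next request. The class and field names are assumptions for illustration.

```python
from collections import deque

class CircularQueue:
    """VR allocation request queue with an advancing (circular) pointer."""
    def __init__(self):
        self.requests = deque()
    def add(self, user_id, buckets_needed):
        self.requests.append({"user_id": user_id, "remaining": buckets_needed})
    def pick(self):
        """Allocate one bucket to the request under the pointer (if any),
        then advance the pointer so the next allocation goes to the next request."""
        for _ in range(len(self.requests)):
            request = self.requests[0]
            self.requests.rotate(-1)              # advance the pointer
            if request["remaining"] > 0:
                request["remaining"] -= 1
                if request["remaining"] == 0:     # request exhausted (E)
                    self.requests.remove(request)
                return request["user_id"]
        return None

vr = CircularQueue()
vr.add("User2", 4)
vr.add("User3", 8)
# Give the VR queue six consecutive buckets: the two users now take turns.
print([vr.pick() for _ in range(6)])
# -> ['User2', 'User3', 'User2', 'User3', 'User2', 'User3']
```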

Note that we have taken an example with a combination of CR and VR, without HPVR. Adding HPVR would cause a problem for CR, as HPVR, having higher priority than CR, may simply eat up CR's share!

Let us pursue this problem in the next article.

Copyright © Samir Amberkar 2010-11
