Network-first Separate Training with Raw Dataset Sharing: A Promising Training Approach for AI/ML-driven CSI Feedback

25 October 2023


AI/ML for CSI feedback investigates an AI encoder at the user equipment (UE) that compresses and quantizes the channel state information (CSI) into a codeword of bits before sending it over the air to the network (NW), where an AI decoder reconstructs the CSI. Conventionally, the encoder and decoder are trained jointly in the same training session. Joint training not only forces disclosure of proprietary model information but also exposes the system to adversarial attacks, and it limits the applicability of two-sided models in multi-user or multi-base-station deployments. Separate training of the model entities has therefore gained support. It comes in two flavours: UE-first and NW-first separate training. Initial studies in Rel-18 AI/ML for Air Interface show that the UE-first case outperforms the NW-first case. However, the UE-first scheme restricts the NW's flexibility to support cell-, site-, and scenario-specific model configurations. This paper proposes an enhancement on top of conventional NW-first separate training, providing gains particularly at low quantizer resolutions. It is also validated that the modified NW-first and UE-first separate training schemes perform similarly, and close to joint training, which serves as the performance upper bound.
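To make the NW-first workflow concrete, the sketch below is a deliberately simplified, hypothetical illustration (not the paper's actual method): a linear model via PCA stands in for the jointly trained NW-side autoencoder, synthetic low-rank vectors stand in for CSI samples, and quantization is omitted. The NW first trains its encoder and decoder together, then shares only a raw dataset of (CSI, codeword) pairs; the UE fits its own encoder to that dataset without ever seeing the NW's encoder weights. All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "CSI" samples: low-rank vectors standing in for channel eigenvectors.
n_samples, csi_dim, code_dim = 2000, 32, 8
H = rng.standard_normal((n_samples, code_dim)) @ \
    rng.standard_normal((code_dim, csi_dim))
H += 0.01 * rng.standard_normal(H.shape)        # small measurement noise

# --- Step 1 (NW side): train encoder and decoder jointly.
# PCA is a stand-in for the trained autoencoder: the top principal
# directions act as the encoder, their transpose as the decoder.
_, _, Vt = np.linalg.svd(H, full_matrices=False)
W_enc_nw = Vt[:code_dim].T                      # NW's proprietary encoder
W_dec_nw = Vt[:code_dim]                        # NW's decoder
codewords = H @ W_enc_nw                        # latent codewords

# --- Step 2: NW shares a raw dataset of (CSI, codeword) pairs.
shared_csi, shared_codewords = H, codewords

# --- Step 3 (UE side): UE fits its own encoder to reproduce the shared
# codewords (least-squares fit here), never seeing the NW encoder itself.
W_enc_ue, *_ = np.linalg.lstsq(shared_csi, shared_codewords, rcond=None)

# CSI encoded by the UE and decoded by the NW should reconstruct well.
H_hat = (H @ W_enc_ue) @ W_dec_nw
nmse = np.mean((H - H_hat) ** 2) / np.mean(H ** 2)
```

Because the shared dataset pins down the codeword mapping, the UE-trained encoder is interoperable with the NW decoder even though the two sides never exchanged model weights; in the paper's setting, the interesting regime is how well this survives coarse codeword quantization.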