DECICE Project’s Potential for Federated Learning Use Cases

January 23, 2024

Federated Learning (FL) in the DECICE project is a decentralized machine learning approach in which a shared model is trained across edge devices. Each device processes its data locally for model training and sends only model updates to a central server, which reduces bandwidth usage and preserves privacy by keeping raw data on the device. The trade-off is that efficient training depends on sufficient compute power at each edge device.
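To make the mechanism concrete, here is a minimal sketch of federated averaging in a simulated setting. It is illustrative only, not DECICE's implementation: the function local_train, the toy linear model, and the three simulated clients are assumptions chosen to show the pattern of local training followed by server-side averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: gradient descent on squared error.
    Only the resulting weights leave the device, never (X, y)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulate three edge devices, each holding a private dataset drawn
# from the same underlying model (hypothetical toy data).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

# Federated averaging: the server broadcasts the global weights,
# each client trains locally, and the server averages the updates.
global_w = np.zeros(2)
for _ in range(10):
    updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)

print(global_w)  # approaches true_w, yet no client shared its raw data
```

The point of the sketch is the communication pattern: per round, each device uploads one weight vector instead of its full dataset, which is where the bandwidth and privacy benefits come from.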

In the context of DECICE, FL holds promise for applications such as next-word prediction on mobile keyboards, where a language model is trained across many phones without the typed text ever leaving the device. The project addresses open challenges in synchronous federated learning, focusing on participant selection, resource optimization, asynchronous training, and incentive mechanisms for heterogeneous environments; a sketch of per-round participant selection follows below. It also recognizes the central trade-off: the privacy benefit of local training comes at the cost of requiring sufficient compute power on each device.
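As an illustration of the participant-selection challenge, the sketch below samples a fraction of eligible devices before each synchronous round. The eligibility criteria (battery level, online status), the function select_participants, and its parameters are hypothetical examples, not criteria specified by the DECICE project.

```python
import random

def select_participants(devices, fraction=0.3, seed=None):
    """Return a random subset of eligible devices for one training round.
    Eligibility here is a stand-in for real resource checks."""
    eligible = [d for d in devices if d["battery"] > 0.2 and d["online"]]
    k = max(1, int(fraction * len(eligible)))
    rng = random.Random(seed)
    return rng.sample(eligible, min(k, len(eligible)))

# Hypothetical fleet of 20 edge devices with varying availability.
random.seed(1)
devices = [
    {"id": i, "battery": random.random(), "online": random.random() > 0.1}
    for i in range(20)
]
print([d["id"] for d in select_participants(devices, seed=42)])
```

In a synchronous round, the slowest selected device sets the pace, which is why filtering on resource signals before sampling matters; asynchronous schemes relax this constraint at the cost of staler updates.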

While FL provides privacy advantages by avoiding raw data sharing, DECICE acknowledges the trust dynamics between the master node and the edge nodes. The project also highlights the benefit of reduced bandwidth usage, particularly in scenarios where streaming raw data to a central server is impractical, such as continuous camera feeds.
