Research
May 22, 2024

Federated Learning at Scale: Lessons from 10 000 Edge Nodes

Background

Federated learning (FL) enables model training across distributed data sources without centralising sensitive data. In collaboration with a healthcare consortium, BF-Q Labs deployed FL across 10 000 edge devices spanning 47 hospitals in 3 countries.

Challenges Encountered

Communication Bottlenecks

Naive FL aggregation incurs O(N) communication per round, since every one of the N nodes ships its full update to the central server. We implemented our QuantumAggregation protocol, which uses quantum annealing to identify an optimal sparse gradient-sharing topology, reducing communication cost by 73 %.
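The annealing step that selects the topology is specific to our protocol and not reproduced here, but once a sparse topology is fixed, aggregation itself is straightforward. The sketch below is illustrative only: each node averages gradients with its assigned neighbours instead of communicating all-to-all, so per-round traffic scales with the (small) neighbour count rather than with N. The `sparse_aggregate` helper and the toy topology are hypothetical, not part of the released toolkit.

```python
import numpy as np

def sparse_aggregate(gradients, topology):
    """Average each node's gradient with those of its topology neighbours.

    gradients: dict node_id -> np.ndarray (the node's local gradient)
    topology:  dict node_id -> list of neighbour node_ids (sparse graph)
    """
    aggregated = {}
    for node, grad in gradients.items():
        neighbours = topology.get(node, [])
        stack = [grad] + [gradients[n] for n in neighbours]
        aggregated[node] = np.mean(stack, axis=0)
    return aggregated

# Toy example: 4 nodes, each sharing with one neighbour instead of
# all-to-all -- the communication saving grows with the node count.
grads = {i: np.full(3, float(i)) for i in range(4)}
topo = {0: [1], 1: [0], 2: [3], 3: [2]}
agg = sparse_aggregate(grads, topo)
```

With a bounded neighbour count, each node exchanges a constant number of messages per round regardless of how many nodes join the federation.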

Non-IID Data Distribution

Hospital datasets are inherently not independent and identically distributed (non-IID): patient populations, equipment, and protocols differ from site to site. Our personalised FL approach uses a meta-learning initialisation that adapts to each local distribution within 5 fine-tuning steps.
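The adaptation phase can be sketched as a few local SGD steps starting from the shared meta-initialisation. The example below uses a linear model with squared-error loss purely for illustration; the `finetune` helper and the zero initialisation standing in for the meta-learned weights are assumptions, not the production code.

```python
import numpy as np

def finetune(theta, X, y, steps=5, lr=0.1):
    """Adapt a shared meta-initialisation `theta` to a hospital's local
    data with a handful of SGD steps (linear model, squared error)."""
    theta = theta.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ theta - y) / len(y)
        theta -= lr * grad
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))          # one site's local features
y = X @ np.array([1.0, -2.0])         # local labels
theta0 = np.zeros(2)                  # stands in for the meta-learned init
theta5 = finetune(theta0, X, y, steps=5)
```

A good meta-initialisation makes those 5 steps sufficient: the point of the meta-learning phase is to place `theta0` where every site's optimum is only a few gradient steps away.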

Privacy Guarantees

We applied differential privacy with ε = 0.5, δ = 10⁻⁵ — satisfying HIPAA requirements — with less than 2 % accuracy degradation vs. the centralised baseline.
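Mechanically, these guarantees come from the standard Gaussian-mechanism recipe: clip each client update to a fixed L2 norm, then add calibrated Gaussian noise before aggregation. The sketch below shows that recipe only; the `privatize` helper and its parameter values are illustrative, and the actual (ε, δ) accounting that yields ε = 0.5, δ = 10⁻⁵ is done separately (e.g. with a moments accountant).

```python
import numpy as np

def privatize(update, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip a client update to L2 norm `clip_norm`, then add Gaussian
    noise with std `noise_mult * clip_norm` per coordinate -- the
    clipping bounds each client's sensitivity, and the noise level
    determines the (eps, delta) guarantee via privacy accounting."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise

# A large raw update is first bounded to clip_norm, then perturbed.
noisy = privatize(np.full(4, 10.0), clip_norm=1.0, noise_mult=1.1)
```

Tighter clipping and more noise strengthen privacy (smaller ε) at the cost of accuracy, which is the trade-off behind the sub-2 % degradation reported above.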

Results

  • Global model accuracy: 94.2 % (vs. 95.1 % centralised)
  • Training time: 6× faster than synchronous FL baselines
  • Privacy: DP (ε = 0.5, δ = 10⁻⁵) certified
  • Communication savings: 73 % reduction

Open Source

The QuantumAggregation protocol is available in our open-source FL toolkit at github.com/bf-q/quantumagg.

Interested in this research area?

Explore partnership and collaboration opportunities with BF-Q Labs.

Get in Touch