Federated Learning: Google’s Take

Aishwarya Srinivasan
3 min read · Dec 6, 2020


It is intriguing yet spooky to see how much data the tech giants like Google, Facebook, and Apple, to name a few, have on you. I just recently saw my Google Maps timeline data, and as much as I enjoyed reminiscing about the memories, there was still a part of me getting chills about how wrongly this data could be used.

Here is a snap of my Google Maps statistics:

Yep, 2019 was a year filled with travel for me.

On all our devices and applications, we do have the option of not sharing our data. But that would mean less personalization, and I wouldn’t get customized recommendations. Yet once one starts to share data, there is a risk to privacy.

In recent times, a lot of technology companies have gotten into trouble because of data privacy violations. Hence, most companies are working hard to find a means of making the best use of machine learning capabilities while ensuring data privacy for users. One very prominent line of research is around “Federated Learning.”

By Wikipedia’s definition, Federated Learning (also known as collaborative learning) is a machine learning technique that trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging those samples. This approach stands in contrast to traditional centralized machine learning techniques, where all the local datasets are uploaded to one server, as well as to more classical decentralized approaches, which often assume that local data samples are identically distributed.

This blog will focus on the work Google has been doing in the Federated Learning space. In 2017, Google AI Research published a paper, “Federated Learning: Collaborative Machine Learning without Centralized Training Data”. The paper describes the methodology of Federated Learning: the current machine learning model is downloaded onto your device, the model is improved by training on your data on your phone, and the changes to the locally trained model are summarized as an update. Only this update to the machine learning model is sent back to the cloud, using encrypted communication. The updates coming in from many devices are then collated, and the final model is produced. The research states that “All the training data remains on your device, and no individual updates are stored in the cloud”.
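To make the train-locally, send-only-updates idea concrete, here is a minimal simulation of that loop. This is my own illustrative sketch, not Google’s actual implementation: the function names are hypothetical, the “devices” are simulated in-process, and the aggregation is a simple average of weight deltas (in the spirit of federated averaging), with no encryption shown.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """Train a tiny linear model on one device's private data and
    return ONLY the weight delta -- the raw data never leaves here."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = X @ w
        grad = X.T @ (preds - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w - global_weights  # this delta is all that gets "uploaded"

def federated_round(global_weights, devices):
    """Collate the per-device updates by averaging, then apply them."""
    updates = [local_update(global_weights, X, y) for X, y in devices]
    return global_weights + np.mean(updates, axis=0)

# Simulate three devices, each holding its own private dataset
# drawn from the same underlying model (true_w).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w))

# Run several rounds of download -> local training -> aggregation.
w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, devices)
```

After enough rounds, the collated model `w` approaches `true_w`, even though the server code only ever saw weight deltas, never any device’s `(X, y)` data.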

Apart from the multiple research papers published by Google AI, I came across this quirky yet informative Federated Learning online comic strip by Google AI.

Here is a sneak peek (my favorite section of the strip):

See the full comic strip HERE.

Stay tuned to read more about research and efforts around Federated Learning.

Take care, keep safe, (and turn off all unnecessary permissions on your phone).

