Machine learning today evokes heavy computational machinery: GPUs and TPUs are what make modern models feasible. But continuing down this path means all your data must be shipped to the cloud, unless you plan on carrying the cloud in your pocket. Not a good idea, is it? Equally troubling is the environmental cost of the swarms of computers doing your ML in large datacenters. Big does not always mean better.
Now consider a smartphone, with a tiny processor perhaps an eighth the size of a TPU and thousands of times smaller than those football-field-sized datacenters. With everyone concerned about data privacy these days, the question is whether you can bring the code closer to the data, rather than the data closer to the code. There are no free lunches, of course, but the hope is that if numerous devices are equipped for edge ML, the drawback of each device's low accuracy diminishes. Think of a noisy protest of thousands of people: any individual opinion may carry little weight with the government, yet the collective cause is certainly effective. In the same way, edge devices can collaborate to do what no single device could.
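The protest analogy has a simple quantitative version: if many weak, independent predictors vote, the majority is far more accurate than any one of them. Here is a toy simulation of that intuition (not any particular TinyML system); the function name, parameters, and the assumption that the devices' errors are independent are all illustrative.

```python
import random

def majority_vote_accuracy(n_devices: int, p_correct: float,
                           n_trials: int = 2000, seed: int = 0) -> float:
    """Estimate how often a majority vote of weak devices is right.

    Each "device" is a weak classifier that answers correctly with
    probability p_correct, independently of the others (a strong,
    illustrative assumption). The ensemble's answer is correct when
    more than half the devices vote correctly.
    """
    rng = random.Random(seed)
    correct_trials = 0
    for _ in range(n_trials):
        correct_votes = sum(1 for _ in range(n_devices)
                            if rng.random() < p_correct)
        if correct_votes > n_devices / 2:
            correct_trials += 1
    return correct_trials / n_trials

# One device that is right only 60% of the time stays near 0.6,
# but a few hundred such devices voting together are almost always right.
print(majority_vote_accuracy(1, 0.6))
print(majority_vote_accuracy(201, 0.6))
```

With independent votes this is the classic jury/ensemble effect: a single 60%-accurate device is barely better than a coin flip, while a couple of hundred of them voting together push the collective accuracy well above 90%.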