#AI #SmallAI #SelfTrain
One of the arguments in favour of surveillance capitalism is the great usefulness of cloud-based ML predictions.
After all, who can deny the usefulness of photo apps that automatically recognize faces, recognize your speech, or help you make sense of the deluge of information in a social feed?
The argument usually goes like this: these features require large neural networks, which in turn need a lot of computational power to train, and a lot of memory and disk storage to load and save the resulting models.
You can't do any of that on small battery-powered devices. Therefore your phone *HAS* to send your data to #BigTech servers if you want those features; otherwise you simply don't get them.
Except that... What if this whole argument is bollocks?
#POET (Private Optimal Energy Training) shows that you can run both the training and the predictions locally, without compromising on either precision or performance.
After all, the really expensive part of training is back-propagation. POET attacks that cost on two fronts: it quantizes the layers (so large real-valued tensor multiplications shrink into smaller integer tensor multiplications, without sacrificing too much precision), and it selectively caches the layers that are most likely to be needed again, so they don't have to be recomputed - while stopping short of caching everything, which would be prohibitive in terms of storage.
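To make those two tricks concrete, here is a minimal numpy sketch of both ideas as I understand them: an int8 matmul standing in for a float one, and a forward pass that keeps only every k-th activation and recomputes the rest on demand. The function names and the toy network are mine, not POET's actual code - treat this as an illustration, not the paper's implementation.

```python
import numpy as np

# --- 1. Quantized matmul: replace a float multiply with an int8 one --------

def quantize(x, num_bits=8):
    """Symmetric quantization: map floats to int8 plus a per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1                     # 127 for int8
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def quantized_matmul(a, b):
    """Multiply two float matrices using integer arithmetic, then rescale."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)    # int32 accumulator avoids overflow
    return acc * (sa * sb)

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 128)).astype(np.float32)
b = rng.standard_normal((128, 32)).astype(np.float32)
exact, approx = a @ b, quantized_matmul(a, b)
print("max relative error:",
      np.abs(exact - approx).max() / np.abs(exact).max())   # small, but not zero

# --- 2. Selective caching: keep every k-th activation, recompute the rest --

layers = [lambda x, w=rng.standard_normal((32, 32)): np.tanh(x @ w)
          for _ in range(8)]

def forward(x, keep_every=4):
    """Store only every `keep_every`-th activation as a checkpoint."""
    checkpoints = {0: x}
    for i, f in enumerate(layers):
        x = f(x)
        if (i + 1) % keep_every == 0:
            checkpoints[i + 1] = x
    return x, checkpoints

def activation_at(i, checkpoints):
    """Recompute an uncached activation from the nearest earlier checkpoint."""
    j = max(k for k in checkpoints if k <= i)
    x = checkpoints[j]
    for f in layers[j:i]:
        x = f(x)
    return x

x0 = rng.standard_normal((1, 32))
out, ckpts = forward(x0)
# Back-propagation would now ask for e.g. activation 5, which was never stored:
a5 = activation_at(5, ckpts)
print("cached activations:", sorted(ckpts), "| recomputed #5:", a5.shape)
```

The quantized multiply typically lands within a percent or so of the float result, and the checkpointed forward pass stores only a quarter of the activations at the price of some recomputation - exactly the compute-for-memory trade that makes on-device training plausible.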
The arguments in the paper sound very convincing to me. The code is publicly available on GitHub. I haven't had time to test it myself yet, but I will quite soon - and try to finally build an alternative voice assistant that runs entirely on my phone.
https://proceedings.mlr.press/v162/patil22b.html