Training a single large AI language model can emit as much carbon dioxide as five cars over their entire lifetimes. While the AI community is increasingly aware of the sustainability issues raised by artificial intelligence and large-scale model training, there is still no consensus on how best to solve this problem.
This webinar explores:
- The reasons behind AI energy-efficiency issues
- Three approaches that try to solve this problem
  - Quantum computing
  - Hardware acceleration
  - Reverse engineering the brain
- Semantic Folding, an approach based on reverse engineering the brain that is already deployed in enterprise environments