Tiny AI is a set of principles that leverages the latest advancements in data, hardware, and software to reduce the overall economic and ecological costs of artificial intelligence (AI). While AI has the potential to change the world, or at least add $13 trillion to the global economy, according to the McKinsey Global Institute, it still faces many technological issues that prevent wider adoption.

AI technology carries a hefty environmental cost: it requires substantial computing resources and, when run in the cloud, a lot of bandwidth. According to data from the International Energy Agency, data centers account for between 2.5% and 3.7% of global greenhouse gas emissions. That exceeds the 2.4% attributed to the commercial aviation industry, a sector long criticized for its environmental impact.

These emissions are conventionally grouped into three scopes:

  • Scope 1: Direct emissions from sources the data center itself controls, such as refrigerants used for cooling, diesel burned in backup generators, and natural gas used for heating and fuel.
  • Scope 2: Emissions from the electricity purchased and consumed to operate the data center.
  • Scope 3: Indirect greenhouse gas emissions from the wider value chain, such as those embodied in the computing equipment itself.

MIT Technology Review underscores the scale of the problem, noting that training a single AI model “can emit more than 626,000 pounds of carbon dioxide,” equivalent to “nearly five times the lifetime emissions of the average American car.”

Even setting aside the problems of physical infrastructure, the IEA estimates that storing 100GB of data in the cloud for a year produces 0.2 tons of CO2. Considering that OpenVault’s Broadband Insights Report from 2024 puts the average internet user’s consumption at 641GB of data per month, the cloud is hardly an environmentally conscious solution.
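To make the scale concrete, here is a quick back-of-the-envelope calculation in Python combining the two figures above. Note the simplifying assumption: it treats a user’s yearly data consumption as if all of it sat in cloud storage, which overstates the storage side but illustrates the order of magnitude.

```python
# Rough annual CO2 estimate for one user's data footprint, using the
# figures cited above (IEA storage estimate + OpenVault usage report).
CO2_TONS_PER_100GB_YEAR = 0.2   # IEA: storing 100GB in the cloud for a year
AVG_MONTHLY_USAGE_GB = 641      # OpenVault Broadband Insights Report, 2024

annual_usage_gb = AVG_MONTHLY_USAGE_GB * 12                  # 7,692 GB/year
annual_co2_tons = (annual_usage_gb / 100) * CO2_TONS_PER_100GB_YEAR

print(f"{annual_usage_gb:,} GB/year -> ~{annual_co2_tons:.1f} tons of CO2")
# Output: 7,692 GB/year -> ~15.4 tons of CO2
```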

Beyond its environmental cost, AI is also financially expensive. For large language models to understand the meaning of a request and generate text that reads like a human response, they must be trained on billions of text documents drawn from across the internet. Training GPT-3, a model that can answer trivia questions and write basic news articles, cost an estimated $12 million.

Additionally, AI training requires a lot of physical infrastructure, because deep learning algorithms need to process an obscene amount of data. The sheer quantity of computing resources required can erode efficiency and, since user data must be shipped off-device for processing, limit privacy for AI applications. Tirias Research has forecast that, on its current course, generative AI data center infrastructure and operating costs will exceed $78 billion by 2028.

While you can trade physical infrastructure for cloud services such as the latest Google Cloud, AWS, and Azure offerings to relieve the demand for local memory and computing power, doing so means relying on far more bandwidth.

With all of this said, you might be wondering why the whole tech world still seems to be moving toward AI. The truth is that Pandora's box has been opened, and there is no closing it. Citing research from IHS Markit, the National Institute of Standards and Technology suggests there will be more than 75 billion IoT devices in use by 2025. As those devices come into circulation, the sheer volume of new data could cause processing demands to explode, unless some of the compute load is offloaded to edge devices that require far less processing power and fewer resources.

How does Tiny AI address these problems?

AI models don’t necessarily have to be hundreds of gigabytes in size or occupy thousands of dollars’ worth of physical server infrastructure. MobileNet models, for example, are built on a streamlined architecture that fits in roughly 20 MB and delivers low-latency inference on mobile and embedded devices.
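For a sense of how compact that is, the sketch below loads a MobileNet model through TensorFlow’s Keras API, which ships the architecture as a built-in application model (the exact parameter count and on-disk size vary by MobileNet version):

```python
import tensorflow as tf

# MobileNetV2 with ImageNet weights has ~3.5 million parameters (roughly
# 14 MB in 32-bit floats), versus hundreds of millions of parameters for
# large server-side models.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
print(f"Parameters: {model.count_params():,}")
```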

Tiny AI models use model compression techniques such as knowledge distillation, network pruning, and quantization, which shrink a model without significantly reducing its accuracy: distillation trains a small “student” network to mimic a large “teacher,” pruning removes parameters that contribute little to the output, and quantization stores weights at lower numerical precision. Running Tiny AI on new hardware, better designed to handle compute-intensive tasks, can also reduce strain on cloud services and ease the security concerns around transferring data to the cloud in the first place.
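To make one of these techniques concrete, here is a minimal sketch of post-training quantization with TensorFlow Lite. The model and output path are illustrative, but the pattern is standard: weights are re-encoded as 8-bit integers, shrinking the model roughly fourfold.

```python
import tensorflow as tf

# Start from a full-precision Keras model (MobileNetV2 as an example).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Post-training dynamic-range quantization: the converter stores weights
# as 8-bit integers instead of 32-bit floats, cutting model size ~4x.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("mobilenet_v2_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```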

Tiny AI has applications across a wide range of industries and, while still in early development, is poised to have a substantial impact in the near future. In healthcare, it can speed up medical testing by granting providers easy access to comprehensive research and letting them use deep learning to analyze test results quickly, make an informed diagnosis, and treat any issues. Tiny AI will also allow for faster, safer reactions from self-driving cars and will improve the image processing functions in cameras.

Fully realized, Tiny AI would enable simple, compact devices, like your smartphone, to run complex algorithms without connecting to the cloud. Developers have already begun to make progress here: Google found a way to run Google Assistant locally on smartphones in May 2019, and Apple runs Siri’s speech recognition offline on iPhones with iOS 13 and beyond.
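Here is a minimal sketch of that local-only pattern, assuming TensorFlow Lite on the device, with an illustrative model file and a dummy array standing in for a locally captured camera frame:

```python
import numpy as np
import tensorflow as tf

# Load a compact model that ships with the app; inference happens entirely
# on-device, so no user data leaves the phone.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_quantized.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input standing in for a preprocessed 224x224 camera frame.
frame = np.random.rand(1, 224, 224, 3).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores.shape)  # (1, 1000) class scores, computed with no network call
```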