Yesterday, the team at Facebook released Pythia, a deep learning framework that supports multitasking for vision-and-language multimodal research. Pythia is built on the open-source PyTorch framework and enables researchers to easily build, reproduce, and benchmark AI models.
It is designed for vision-and-language tasks, such as answering questions about visual data and automatically generating image captions. The framework also incorporates elements of Facebook’s winning entries in recent AI competitions, including the VQA Challenge 2018 and the VizWiz Challenge 2018.
Features of Pythia
- Reference implementations: Pythia includes reference implementations that show how previous state-of-the-art models achieved related benchmark results.
- Performance gauging: These references also help gauge the performance of new models.
- Multitasking: Pythia supports multitasking and distributed training.
- Datasets: It also includes built-in support for various datasets, including VizWiz, VQA, TextVQA, and VisualDialog.
- Customization: Pythia supports custom losses, metrics, learning-rate scheduling, optimizers, and TensorBoard logging to fit researchers’ needs.
- Unopinionated: Pythia is unopinionated about the dataset and model implementations that are built on top of it.
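As an illustration of the kind of configuration-driven setup the features above suggest, a training run in such a framework might be described in a YAML file along these lines. Note that every key and value below is hypothetical, sketched for illustration only, and does not reflect Pythia’s actual configuration schema:

```yaml
# Hypothetical config sketch -- key names are illustrative,
# not Pythia's actual configuration schema.
task: vqa
dataset: textvqa          # one of the built-in datasets
model: pythia
optimizer:
  type: adamax
  lr: 0.001
lr_scheduler:
  type: warmup_cosine
  warmup_iterations: 1000
training:
  batch_size: 128
  max_iterations: 22000
  tensorboard: true       # log metrics for TensorBoard
  distributed: true       # enable distributed training
```

Keeping tasks, datasets, and optimization settings in declarative configs like this is what makes runs easy to reproduce and benchmark against.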
The goal of the team behind Pythia is to accelerate progress on AI models and their results, and to make it easier for the AI community to build on, and benchmark against, successful systems.
The team hopes that Pythia will also help researchers develop adaptive AI that synthesizes multiple kinds of understanding into a more context-based, multimodal whole. The team also plans to continue adding tools, datasets, tasks, and reference models.
To learn more, check out the official Facebook announcement.