From 7dbfea19d29773a6e0165484744935c49d44c305 Mon Sep 17 00:00:00 2001
From: Piotr Nawrot
Date: Thu, 16 Mar 2023 15:24:25 +0100
Subject: [PATCH] Update README.md
---
README.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/README.md b/README.md
index 9df59d2..e25578c 100644
--- a/README.md
+++ b/README.md
@@ -174,6 +174,10 @@ For pre-training we compile our model with PyTorch 2.0 using `model.compile=true
We show that it is possible to successfully pre-train a "Large Language Model" (T5) under a limited budget (1xA100 GPU, ~20 hours) in PyTorch. We make our codebase, configs and training logs publicly available to enhance the accessibility of NLP research. We are keen to hear your suggestions to improve the codebase further.
+## Acknowledgements:
+
+Thanks to [Edoardo Maria Ponti](https://ducdauge.github.io) for his feedback!
+
## References:
- [T5 paper](https://arxiv.org/pdf/1910.10683.pdf)
- [T5 v1.1 paper](https://arxiv.org/pdf/2002.05202.pdf)
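
The hunk context above mentions compiling the model with PyTorch 2.0 via `model.compile=true`. As a rough illustration only, not the repository's actual training code, the sketch below shows how such a config flag might gate `torch.compile`; the `compile_enabled` flag and the placeholder model are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a `model.compile=true` config entry;
# the real codebase reads this from its own config system.
compile_enabled = True

# Placeholder module; the actual project pre-trains a T5 variant.
model = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)

if compile_enabled:
    # torch.compile (PyTorch 2.0+) wraps the module in an optimizing JIT:
    # the first forward pass triggers graph capture and compilation,
    # and later passes reuse the compiled kernels.
    model = torch.compile(model)

x = torch.randn(16, 128, 512)  # (batch, seq_len, d_model)
out = model(x)                 # compiled on first call when enabled
print(out.shape)               # torch.Size([16, 128, 512])
```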