aitech-eks-pub/cw/13_transformery2.ipynb


Information extraction

13. Transformers 2 [exercises]

Jakub Pokrywka (2021)


Attention visualization

!pip install bertviz
from transformers import AutoTokenizer, AutoModel
from bertviz import model_view, head_view

TEXT = "This is a sample input sentence for a transformer model"
MODEL = "distilbert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
# output_attentions=True makes the model return the attention weights
# of every layer alongside the hidden states.
model = AutoModel.from_pretrained(MODEL, output_attentions=True)

inputs = tokenizer.encode(TEXT, return_tensors='pt')
outputs = model(inputs)
attention = outputs[-1]  # tuple of per-layer attention tensors
tokens = tokenizer.convert_ids_to_tokens(inputs[0])
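A quick sanity check (an extra snippet, not in the original notebook) shows how the returned attention is laid out: a tuple with one tensor per layer, each of shape (batch_size, num_heads, seq_len, seq_len).

print(len(attention))      # 6 transformer layers in distilbert-base-uncased
print(attention[0].shape)  # torch.Size([1, 12, len(tokens), len(tokens)]): 12 heads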

SELF-ATTENTION MODELS

head_view(attention, tokens)
[interactive head_view widget: Layer selector]
model_view(attention, tokens)

ENCODER-DECODER MODELS

MODEL = "Helsinki-NLP/opus-mt-en-de"
TEXT_ENCODER = "She sees the small elephant."
TEXT_DECODER = "Sie sieht den kleinen Elefanten."
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL, output_attentions=True)
encoder_input_ids = tokenizer(TEXT_ENCODER, return_tensors="pt", add_special_tokens=True).input_ids
decoder_input_ids = tokenizer(TEXT_DECODER, return_tensors="pt", add_special_tokens=True).input_ids

outputs = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids)

encoder_text = tokenizer.convert_ids_to_tokens(encoder_input_ids[0])
decoder_text = tokenizer.convert_ids_to_tokens(decoder_input_ids[0])
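All three attention fields on the output are tuples with one tensor per layer; a quick shape check (an optional addition, not in the original notebook) makes the layout explicit:

# encoder_attentions[i]: (batch, heads, src_len, src_len)
# decoder_attentions[i]: (batch, heads, tgt_len, tgt_len)
# cross_attentions[i]:   (batch, heads, tgt_len, src_len)
print(outputs.cross_attentions[0].shape)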
head_view(
    encoder_attention=outputs.encoder_attentions,
    decoder_attention=outputs.decoder_attentions,
    cross_attention=outputs.cross_attentions,
    encoder_tokens=encoder_text,
    decoder_tokens=decoder_text,
)
[interactive head_view widget: Layer selector and Attention type (Encoder / Decoder / Cross)]
model_view(
    encoder_attention=outputs.encoder_attentions,
    decoder_attention=outputs.decoder_attentions,
    cross_attention=outputs.cross_attentions,
    encoder_tokens=encoder_text,
    decoder_tokens=decoder_text,
)
[interactive model_view widget: Attention type (Encoder / Decoder / Cross)]

Exercise (10 minutes)

Using the en-fr model, translate any sentence from English into French and inspect the attention weights for that translation; a sketch of one possible solution follows.
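One possible solution sketch, assuming the Helsinki-NLP/opus-mt-en-fr checkpoint (the en-fr counterpart of the en-de model used above); here the model generates the French translation itself, and a second forward pass collects the attention weights.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from bertviz import head_view

MODEL = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL, output_attentions=True)

TEXT = "She sees the small elephant."
encoder_input_ids = tokenizer(TEXT, return_tensors="pt").input_ids

# Let the model translate the sentence on its own.
decoder_input_ids = model.generate(encoder_input_ids)
print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))

# Re-run a forward pass over the generated sequence to collect attentions.
outputs = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids)

head_view(
    encoder_attention=outputs.encoder_attentions,
    decoder_attention=outputs.decoder_attentions,
    cross_attention=outputs.cross_attentions,
    encoder_tokens=tokenizer.convert_ids_to_tokens(encoder_input_ids[0]),
    decoder_tokens=tokenizer.convert_ids_to_tokens(decoder_input_ids[0]),
)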

EXAMPLE: GPT-3
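The original section carries no code here. Since GPT-3 itself is reachable only through OpenAI's paid API, a minimal stand-in sketch with the openly available GPT-2 (my assumption of the intended demo) illustrates the same decoder-only text generation:

from transformers import pipeline

# GPT-2 stands in for GPT-3, which cannot be downloaded and run locally.
generator = pipeline("text-generation", model="gpt2")
print(generator("The transformer architecture", max_length=30)[0]["generated_text"])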

HOMEWORK - POLEVAL