aitech-eks-pub/wyk/03_Tfidf.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Wyszukiwarka - szybka i sensowna"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Roboczy przykład\n",
"\n",
"Zakładamy, że mamy pewną kolekcję dokumentów $D = {d_1, \\ldots, d_N}$. ($N$ - liczba dokumentów w kolekcji)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Ala ma kota."
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"{-# LANGUAGE OverloadedStrings #-}\n",
"\n",
"import Data.Text hiding(map, filter, zip)\n",
"import Prelude hiding(words, take)\n",
"\n",
"collectionD :: [Text]\n",
"collectionD = [\"Ala ma kota.\", \"Podobno jest kot w butach.\", \"Ty chyba masz kota!\", \"But chyba zgubiłem.\", \"Kot ma kota.\"]\n",
"\n",
"-- Operator (!!) zwraca element listy o podanym indeksie\n",
"-- (Przy większych listach będzie nieefektywne, ale nie będziemy komplikować)\n",
"Prelude.head collectionD"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Wydobycie tekstu\n",
"\n",
"Przykładowe narzędzia:\n",
"\n",
"* pdftotext\n",
"* antiword\n",
"* Tesseract OCR\n",
"* Apache Tika - uniwersalne narzędzie do wydobywania tekstu z różnych formatów\n",
"\n",
"## Normalizacja tekstu\n",
"\n",
"Cokolwiek robimy z tekstem, najpierw musimy go _znormalizować_."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tokenizacja\n",
"\n",
"Po pierwsze musimy podzielić tekst na _tokeny_, czyli wyrazapodobne jednostki.\n",
"Może po prostu podzielić po spacjach?"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Ala"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ma"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kota."
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenizeStupidly :: Text -> [Text]\n",
"-- words to funkcja z Data.Text, która dzieli po spacjach\n",
"tokenizeStupidly = words\n",
"\n",
"tokenizeStupidly $ Prelude.head collectionD"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A, trzeba _chociaż_ odsunąć znaki interpunkcyjne. Najprościej użyć wyrażenia regularnego. Warto użyć [unikodowych własności](https://en.wikipedia.org/wiki/Unicode_character_property) znaków i konstrukcji `\\p{...}`. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"But"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"chyba"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zgubiłem"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"."
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"{-# LANGUAGE QuasiQuotes #-}\n",
"\n",
"import Text.Regex.PCRE.Heavy\n",
"\n",
"tokenize :: Text -> [Text]\n",
"tokenize = map fst . scan [re|C\\+\\+|[\\p{L}0-9]+|\\p{P}|]\n",
"\n",
"tokenize $ collectionD !! 3\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cała kolekcja stokenizowana:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Ala"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ma"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kota"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Podobno"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"jest"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"butach"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Ty"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"chyba"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"masz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kota"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"!"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"But"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"chyba"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zgubiłem"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ma"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kota"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"."
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"map tokenize collectionD"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Problemy z tokenizacją\n",
"\n",
"##### Język angielski"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"I"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"use"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"a"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"data"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"-"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"base"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"I use a data-base\""
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"I"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"use"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"a"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"database"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"I use a database\""
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"I"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"use"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"a"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"data"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"base"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"I use a data base\""
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"I"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"don"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"'"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"t"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"like"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Python"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"'"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"'I don't like Python'\""
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"I"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"can"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"see"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"the"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Johnes"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"'"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"house"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"I can see the Johnes' house\""
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"I"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"do"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"not"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"like"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Python"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"I do not like Python\""
]
},
{
"cell_type": "code",
"execution_count": 13,
2021-03-24 12:10:05 +01:00
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0018"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"555"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"-"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"555"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"-"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"122"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"+0018 555-555-122\""
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0018555555122"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"+0018555555122\""
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Which"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"one"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"is"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"better"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
":"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"C++"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"or"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"C"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"#"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"?"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"Which one is better: C++ or C#?\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Inne języki?"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Rechtsschutzversicherungsgesellschaften"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"wie"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"die"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"HUK"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"-"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Coburg"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"machen"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"es"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"bereits"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"seit"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"geraumer"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Zeit"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"vor"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
":"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"Rechtsschutzversicherungsgesellschaften wie die HUK-Coburg machen es bereits seit geraumer Zeit vor:\""
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"今日波兹南是贸易"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"、"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"工业及教育的中心"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"。"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"波兹南是波兰第五大的城市及第四大的工业中心"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"波兹南亦是大波兰省的行政首府"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"。"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"也舉辦有不少展覽會"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"。"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"是波蘭西部重要的交通中心都市"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"。"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"今日波兹南是贸易、工业及教育的中心。波兹南是波兰第五大的城市及第四大的工业中心,波兹南亦是大波兰省的行政首府。也舉辦有不少展覽會。是波蘭西部重要的交通中心都市。\""
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"l"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"'"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ordinateur"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"tokenize \"l'ordinateur\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Lematyzacja"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"_Lematyzacja_ to sprowadzenie do formy podstawowej (_lematu_), np. \"krześle\" do \"krzesło\", \"zrobimy\" do \"zrobić\" dla języka polskiego, \"chairs\" do \"chair\", \"made\" do \"make\" dla języka angielskiego."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lematyzacja dla języka polskiego jest bardzo trudna, praktycznie nie sposób wykonać ją regułowo, po prostu musimy się postarać o bardzo obszerny _słownik form fleksyjnych_.\n",
"\n",
"Na potrzeby tego wykładu stwórzmy sobie mały słownik form fleksyjnych w postaci tablicy asocjacyjnej (haszującej)."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>/* Styles used for the Hoogle display in the pager */\n",
".hoogle-doc {\n",
"display: block;\n",
"padding-bottom: 1.3em;\n",
"padding-left: 0.4em;\n",
"}\n",
".hoogle-code {\n",
"display: block;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"}\n",
".hoogle-text {\n",
"display: block;\n",
"}\n",
".hoogle-name {\n",
"color: green;\n",
"font-weight: bold;\n",
"}\n",
".hoogle-head {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-sub {\n",
"display: block;\n",
"margin-left: 0.4em;\n",
"}\n",
".hoogle-package {\n",
"font-weight: bold;\n",
"font-style: italic;\n",
"}\n",
".hoogle-module {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-class {\n",
"font-weight: bold;\n",
"}\n",
".get-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"display: block;\n",
"white-space: pre-wrap;\n",
"}\n",
".show-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"margin-left: 1em;\n",
"}\n",
".mono {\n",
"font-family: monospace;\n",
"display: block;\n",
"}\n",
".err-msg {\n",
"color: red;\n",
"font-style: italic;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"display: block;\n",
"}\n",
"#unshowable {\n",
"color: red;\n",
"font-weight: bold;\n",
"}\n",
".err-msg.in.collapse {\n",
"padding-top: 0.7em;\n",
"}\n",
".highlight-code {\n",
"white-space: pre;\n",
"font-family: monospace;\n",
"}\n",
".suggestion-warning { \n",
"font-weight: bold;\n",
"color: rgb(200, 130, 0);\n",
"}\n",
".suggestion-error { \n",
"font-weight: bold;\n",
"color: red;\n",
"}\n",
".suggestion-name {\n",
"font-weight: bold;\n",
"}\n",
"</style><div class=\"suggestion-name\" style=\"clear:both;\">Use head</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">collectionD !! 0</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">head collectionD</div></div>"
],
"text/plain": [
"Line 22: Use head\n",
"Found:\n",
"collectionD !! 0\n",
"Why not:\n",
"head collectionD"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"but"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"butami"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Ala"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mieć"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Wczoraj"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kupiłem"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"."
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import Data.Map as Map hiding(take, map, filter)\n",
"\n",
"mockInflectionDictionary :: Map Text Text\n",
"mockInflectionDictionary = Map.fromList [\n",
" (\"kota\", \"kot\"),\n",
" (\"butach\", \"but\"),\n",
" (\"masz\", \"mieć\"),\n",
" (\"ma\", \"mieć\"),\n",
" (\"buta\", \"but\"),\n",
" (\"zgubiłem\", \"zgubić\")]\n",
"\n",
"lemmatizeWord :: Map Text Text -> Text -> Text\n",
"lemmatizeWord dict w = findWithDefault w w dict\n",
"\n",
"lemmatizeWord mockInflectionDictionary \"butach\"\n",
"-- a tego nie ma w naszym słowniczku, więc zwracamy to samo\n",
"lemmatizeWord mockInflectionDictionary \"butami\"\n",
"\n",
"lemmatize :: Map Text Text -> [Text] -> [Text]\n",
"lemmatize dict = map (lemmatizeWord dict)\n",
"\n",
"lemmatize mockInflectionDictionary $ tokenize $ collectionD !! 0 \n",
"\n",
"lemmatize mockInflectionDictionary $ tokenize \"Wczoraj kupiłem kota.\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Pytanie**: Nawet w naszym słowniczku mamy problemy z niejednoznacznością lematyzacji. Jakie?\n",
"\n",
"Obszerny słownik form fleksyjnych dla języka polskiego: http://zil.ipipan.waw.pl/PoliMorf?action=AttachFile&do=view&target=PoliMorf-0.6.7.tab.gz"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Stemowanie\n",
"\n",
"Stemowanie (rdzeniowanie) obcina wyraz do _rdzenia_ niekoniecznie będącego sensownym wyrazem, np. \"krześle\" może być rdzeniowane do \"krześl\", \"krześ\" albo \"krzes\", \"zrobimy\" do \"zrobi\".\n",
"\n",
"* stemowanie nie jest tak dobrze określone jak lematyzacja (można robić na wiele sposobów)\n",
"* bardziej podatne na metody regułowe (choć dla polskiego i tak trudno)\n",
"* dla angielskiego istnieją znane algorytmy stemowania, np. [algorytm Portera](https://tartarus.org/martin/PorterStemmer/def.txt)\n",
"* zob. też [program Snowball](https://snowballstem.org/) z regułami dla wielu języków\n",
"\n",
"Prosty stemmer \"dla ubogich\" dla języka polskiego to obcinanie do sześciu znaków."
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"zrobim"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"komput"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"butach"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"źdźbła"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"poorMansStemming :: Text -> Text\n",
"poorMansStemming = Data.Text.take 6\n",
"\n",
"poorMansStemming \"zrobimy\"\n",
"poorMansStemming \"komputerami\"\n",
"poorMansStemming \"butach\"\n",
"poorMansStemming \"źdźbłami\"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### _Stop words_\n",
"\n",
"Często wyszukiwarki pomijają krótkie, częste i nieniosące znaczenia słowa - _stop words_ (_słowa przestankowe_)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"False"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"True"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"isStopWord :: Text -> Bool\n",
"isStopWord \"w\" = True\n",
"isStopWord \"jest\" = True\n",
"isStopWord \"że\" = True\n",
"-- przy okazji możemy pozbyć się znaków interpunkcyjnych\n",
"isStopWord w = w ≈ [re|^\\p{P}+$|]\n",
"\n",
"isStopWord \"kot\"\n",
"isStopWord \"!\"\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Ala"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ma"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kota"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"removeStopWords :: [Text] -> [Text]\n",
"removeStopWords = filter (not . isStopWord)\n",
"\n",
"removeStopWords $ tokenize $ Prelude.head collectionD "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Pytanie**: Jakim zapytaniom usuwanie _stop words_ może szkodzić? Podać przykłady dla języka polskiego i angielskiego. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Normalizacja - różności\n",
"\n",
"W skład normalizacji może też wchodzić:\n",
"\n",
"* poprawianie błędów literowych\n",
"* sprowadzanie do małych liter (lower-casing czy raczej case-folding)\n",
"* usuwanie znaków diakrytycznych\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"żdźbło"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"toLower \"ŻDŹBŁO\""
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"źdźbło"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"toCaseFold \"ŹDŹBŁO\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Pytanie:** Kiedy _case-folding_ da inny wynik niż _lower-casing_? Jakie to ma praktyczne znaczenie?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Normalizacja jako całościowy proces\n",
"\n",
"Najważniejsza zasada: dokumenty w naszej kolekcji powinny być normalizowane w dokładnie taki sposób, jak zapytania.\n",
"\n",
"Efektem normalizacji jest zamiana dokumentu na ciąg _termów_ (ang. _terms_), czyli znormalizowanych wyrazów.\n",
"\n",
"Innymi słowy po normalizacji dokument $d_i$ traktujemy jako ciąg termów $t_i^1,\\dots,t_i^{|d_i|}$."
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ala"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mieć"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"podobn"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"but"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ty"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"chyba"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mieć"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"but"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"chyba"
2021-03-24 12:10:05 +01:00
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zgubić"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mieć"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"normalize :: Text -> [Text]\n",
"normalize = map poorMansStemming . removeStopWords . map toLower . lemmatize mockInflectionDictionary . tokenize\n",
"\n",
"map normalize collectionD"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Zbiór wszystkich termów w kolekcji dokumentów nazywamy słownikiem (ang. _vocabulary_), nie mylić ze słownikiem jako strukturą danych w Pythonie (_dictionary_).\n",
"\n",
"$$V = \\bigcup_{i=1}^N \\{t_i^1,\\dots,t_i^{|d_i|}\\}$$\n",
"\n",
"(To zbiór, więc liczymy bez powtórzeń!)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"fromList [\"ala\",\"but\",\"chyba\",\"kot\",\"mie\\263\",\"podobn\",\"ty\",\"zgubi\\263\"]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import Data.Set as Set hiding(map)\n",
"\n",
"getVocabulary :: [Text] -> Set Text \n",
"getVocabulary = Set.unions . map (Set.fromList . normalize) \n",
"\n",
"getVocabulary collectionD"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Jak wyszukiwarka może być szybka?\n",
"\n",
2021-03-31 15:20:26 +02:00
"_Odwrócony indeks_ (ang. _inverted index_) pozwala wyszukiwarce szybko szukać w milionach dokumentów. Odwrócony indeks to prostu... indeks, jaki znamy z książek (mapowanie słów na numery stron/dokumentów).\n",
2021-03-24 12:10:05 +01:00
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>/* Styles used for the Hoogle display in the pager */\n",
".hoogle-doc {\n",
"display: block;\n",
"padding-bottom: 1.3em;\n",
"padding-left: 0.4em;\n",
"}\n",
".hoogle-code {\n",
"display: block;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"}\n",
".hoogle-text {\n",
"display: block;\n",
"}\n",
".hoogle-name {\n",
"color: green;\n",
"font-weight: bold;\n",
"}\n",
".hoogle-head {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-sub {\n",
"display: block;\n",
"margin-left: 0.4em;\n",
"}\n",
".hoogle-package {\n",
"font-weight: bold;\n",
"font-style: italic;\n",
"}\n",
".hoogle-module {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-class {\n",
"font-weight: bold;\n",
"}\n",
".get-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"display: block;\n",
"white-space: pre-wrap;\n",
"}\n",
".show-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"margin-left: 1em;\n",
"}\n",
".mono {\n",
"font-family: monospace;\n",
"display: block;\n",
"}\n",
".err-msg {\n",
"color: red;\n",
"font-style: italic;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"display: block;\n",
"}\n",
"#unshowable {\n",
"color: red;\n",
"font-weight: bold;\n",
"}\n",
".err-msg.in.collapse {\n",
"padding-top: 0.7em;\n",
"}\n",
".highlight-code {\n",
"white-space: pre;\n",
"font-family: monospace;\n",
"}\n",
".suggestion-warning { \n",
"font-weight: bold;\n",
"color: rgb(200, 130, 0);\n",
"}\n",
".suggestion-error { \n",
"font-weight: bold;\n",
"color: red;\n",
"}\n",
".suggestion-name {\n",
"font-weight: bold;\n",
"}\n",
"</style><div class=\"suggestion-name\" style=\"clear:both;\">Use tuple-section</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">\\ t -> (t, ix)</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">(, ix)</div></div>"
],
"text/plain": [
"Line 4: Use tuple-section\n",
"Found:\n",
"\\ t -> (t, ix)\n",
"Why not:\n",
"(, ix)"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"fromList [(\"chyba\",2),(\"kot\",2),(\"mie\\263\",2),(\"ty\",2)]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"collectionDNormalized = map normalize collectionD\n",
"\n",
"documentToPostings :: ([Text], Int) -> Set (Text, Int)\n",
"documentToPostings (d, ix) = Set.fromList $ map (\\t -> (t, ix)) d\n",
"\n",
"documentToPostings (collectionDNormalized !! 2, 2) \n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>/* Styles used for the Hoogle display in the pager */\n",
".hoogle-doc {\n",
"display: block;\n",
"padding-bottom: 1.3em;\n",
"padding-left: 0.4em;\n",
"}\n",
".hoogle-code {\n",
"display: block;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"}\n",
".hoogle-text {\n",
"display: block;\n",
"}\n",
".hoogle-name {\n",
"color: green;\n",
"font-weight: bold;\n",
"}\n",
".hoogle-head {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-sub {\n",
"display: block;\n",
"margin-left: 0.4em;\n",
"}\n",
".hoogle-package {\n",
"font-weight: bold;\n",
"font-style: italic;\n",
"}\n",
".hoogle-module {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-class {\n",
"font-weight: bold;\n",
"}\n",
".get-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"display: block;\n",
"white-space: pre-wrap;\n",
"}\n",
".show-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"margin-left: 1em;\n",
"}\n",
".mono {\n",
"font-family: monospace;\n",
"display: block;\n",
"}\n",
".err-msg {\n",
"color: red;\n",
"font-style: italic;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"display: block;\n",
"}\n",
"#unshowable {\n",
"color: red;\n",
"font-weight: bold;\n",
"}\n",
".err-msg.in.collapse {\n",
"padding-top: 0.7em;\n",
"}\n",
".highlight-code {\n",
"white-space: pre;\n",
"font-family: monospace;\n",
"}\n",
".suggestion-warning { \n",
"font-weight: bold;\n",
"color: rgb(200, 130, 0);\n",
"}\n",
".suggestion-error { \n",
"font-weight: bold;\n",
"color: red;\n",
"}\n",
".suggestion-name {\n",
"font-weight: bold;\n",
"}\n",
"</style><div class=\"suggestion-name\" style=\"clear:both;\">Use zipWith</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">map documentToPostings $ Prelude.zip coll [0 .. ]</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">zipWith (curry documentToPostings) coll [0 .. ]</div></div>"
],
"text/plain": [
"Line 2: Use zipWith\n",
"Found:\n",
"map documentToPostings $ Prelude.zip coll [0 .. ]\n",
"Why not:\n",
"zipWith (curry documentToPostings) coll [0 .. ]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"fromList [(\"ala\",0),(\"but\",1),(\"but\",3),(\"chyba\",2),(\"chyba\",3),(\"kot\",0),(\"kot\",1),(\"kot\",2),(\"kot\",4),(\"mie\\263\",0),(\"mie\\263\",2),(\"mie\\263\",4),(\"podobn\",1),(\"ty\",2),(\"zgubi\\263\",3)]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"collectionToPostings :: [[Text]] -> Set (Text, Int)\n",
"collectionToPostings coll = Set.unions $ map documentToPostings $ Prelude.zip coll [0..]\n",
"\n",
"collectionToPostings collectionDNormalized"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>/* Styles used for the Hoogle display in the pager */\n",
".hoogle-doc {\n",
"display: block;\n",
"padding-bottom: 1.3em;\n",
"padding-left: 0.4em;\n",
"}\n",
".hoogle-code {\n",
"display: block;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"}\n",
".hoogle-text {\n",
"display: block;\n",
"}\n",
".hoogle-name {\n",
"color: green;\n",
"font-weight: bold;\n",
"}\n",
".hoogle-head {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-sub {\n",
"display: block;\n",
"margin-left: 0.4em;\n",
"}\n",
".hoogle-package {\n",
"font-weight: bold;\n",
"font-style: italic;\n",
"}\n",
".hoogle-module {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-class {\n",
"font-weight: bold;\n",
"}\n",
".get-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"display: block;\n",
"white-space: pre-wrap;\n",
"}\n",
".show-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"margin-left: 1em;\n",
"}\n",
".mono {\n",
"font-family: monospace;\n",
"display: block;\n",
"}\n",
".err-msg {\n",
"color: red;\n",
"font-style: italic;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"display: block;\n",
"}\n",
"#unshowable {\n",
"color: red;\n",
"font-weight: bold;\n",
"}\n",
".err-msg.in.collapse {\n",
"padding-top: 0.7em;\n",
"}\n",
".highlight-code {\n",
"white-space: pre;\n",
"font-family: monospace;\n",
"}\n",
".suggestion-warning { \n",
"font-weight: bold;\n",
"color: rgb(200, 130, 0);\n",
"}\n",
".suggestion-error { \n",
"font-weight: bold;\n",
"color: red;\n",
"}\n",
".suggestion-name {\n",
"font-weight: bold;\n",
"}\n",
"</style><div class=\"suggestion-name\" style=\"clear:both;\">Eta reduce</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">updateInvertedIndex (t, ix) invIndex\n",
" = insertWith (++) t [ix] invIndex</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">updateInvertedIndex (t, ix) = insertWith (++) t [ix]</div></div>"
],
"text/plain": [
"Line 2: Eta reduce\n",
"Found:\n",
"updateInvertedIndex (t, ix) invIndex\n",
" = insertWith (++) t [ix] invIndex\n",
"Why not:\n",
"updateInvertedIndex (t, ix) = insertWith (++) t [ix]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"fromList [(\"ala\",[0]),(\"but\",[1,3]),(\"chyba\",[2,3]),(\"kot\",[0,1,2,4]),(\"mie\\263\",[0,2,4]),(\"podobn\",[1]),(\"ty\",[2]),(\"zgubi\\263\",[3])]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[0,1,2,4]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"updateInvertedIndex :: (Text, Int) -> Map Text [Int] -> Map Text [Int]\n",
"updateInvertedIndex (t, ix) invIndex = insertWith (++) t [ix] invIndex\n",
"\n",
"getInvertedIndex :: [[Text]] -> Map Text [Int]\n",
"getInvertedIndex = Prelude.foldr updateInvertedIndex Map.empty . Set.toList . collectionToPostings\n",
"\n",
"ind = getInvertedIndex collectionDNormalized\n",
"ind\n",
"ind ! \"kot\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Relewantność\n",
"\n",
"Potrafimy szybko przeszukiwać znormalizowane dokumenty, ale które dokumenty są ważne (_relewantne_) względem potrzeby informacyjnej użytkownika?\n",
"\n",
"### Zapytania boole'owskie\n",
"\n",
2021-03-31 15:20:26 +02:00
"* `pizzeria Poznań dowóz` to `pizzeria AND Poznań AND dowóz` czy `pizzeria OR Poznań OR dowóz`\n",
2021-03-24 12:10:05 +01:00
"* `(pizzeria OR pizza OR tratoria) AND Poznań AND dowóz\n",
"* `pizzeria AND Poznań AND dowóz AND NOT golonka`\n",
"\n",
"Jak domyślnie interpretować zapytanie?\n",
"\n",
"* jako zapytanie AND -- być może za mało dokumentów\n",
"* rozwiązanie pośrednie?\n",
"* jako zapytanie OR -- być może za dużo dokumentów\n",
"\n",
"Możemy jakieś miary dopasowania dokumentu do zapytania, żeby móc posortować dokumenty...\n",
"\n",
"### Mierzenie dopasowania dokumentu do zapytania\n",
"\n",
"Potrzebujemy jakieś funkcji $\\sigma : Q x D \\rightarrow \\mathbb{R}$. \n"
]
},
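{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal sketch (not from the original lecture; the helper name `andQuery` is ours) of answering an AND query with the inverted index `ind` built above: normalize the query, look up each term's posting list and intersect the lists."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import Data.List (intersect)\n",
"\n",
"-- a sketch: documents that contain *all* of the (normalized) query terms;\n",
"-- an empty query matches every document\n",
"andQuery :: Text -> [Int]\n",
"andQuery q = Prelude.foldr (intersect . postings) allDocs (normalize q)\n",
"    where postings t = findWithDefault [] t ind\n",
"          allDocs = [0 .. Prelude.length collectionDNormalized - 1]\n",
"\n",
"andQuery \"Chyba kota?\""
]
},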
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Musimy jakoś zamienić dokumenty na liczby, tj. dokumenty na wektory liczb, a całą kolekcję na macierz.\n",
"\n",
"Po pierwsze ponumerujmy wszystkie termy ze słownika."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"fromList [(0,\"ala\"),(1,\"but\"),(2,\"chyba\"),(3,\"kot\"),(4,\"mie\\263\"),(5,\"podobn\"),(6,\"ty\"),(7,\"zgubi\\263\")]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"fromList [(\"ala\",0),(\"but\",1),(\"chyba\",2),(\"kot\",3),(\"mie\\263\",4),(\"podobn\",5),(\"ty\",6),(\"zgubi\\263\",7)]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ala"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"2"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"voc = getVocabulary collectionD\n",
"\n",
"vocD :: Map Int Text\n",
"vocD = Map.fromList $ zip [0..] $ Set.toList voc\n",
"\n",
"invvocD :: Map Text Int\n",
"invvocD = Map.fromList $ zip (Set.toList voc) [0..]\n",
"\n",
"vocD\n",
"\n",
"invvocD\n",
"\n",
"vocD ! 0\n",
"invvocD ! \"chyba\"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Napiszmy funkcję, która _wektoryzuje_ znormalizowany dokument.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>/* Styles used for the Hoogle display in the pager */\n",
".hoogle-doc {\n",
"display: block;\n",
"padding-bottom: 1.3em;\n",
"padding-left: 0.4em;\n",
"}\n",
".hoogle-code {\n",
"display: block;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"}\n",
".hoogle-text {\n",
"display: block;\n",
"}\n",
".hoogle-name {\n",
"color: green;\n",
"font-weight: bold;\n",
"}\n",
".hoogle-head {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-sub {\n",
"display: block;\n",
"margin-left: 0.4em;\n",
"}\n",
".hoogle-package {\n",
"font-weight: bold;\n",
"font-style: italic;\n",
"}\n",
".hoogle-module {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-class {\n",
"font-weight: bold;\n",
"}\n",
".get-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"display: block;\n",
"white-space: pre-wrap;\n",
"}\n",
".show-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"margin-left: 1em;\n",
"}\n",
".mono {\n",
"font-family: monospace;\n",
"display: block;\n",
"}\n",
".err-msg {\n",
"color: red;\n",
"font-style: italic;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"display: block;\n",
"}\n",
"#unshowable {\n",
"color: red;\n",
"font-weight: bold;\n",
"}\n",
".err-msg.in.collapse {\n",
"padding-top: 0.7em;\n",
"}\n",
".highlight-code {\n",
"white-space: pre;\n",
"font-family: monospace;\n",
"}\n",
".suggestion-warning { \n",
"font-weight: bold;\n",
"color: rgb(200, 130, 0);\n",
"}\n",
".suggestion-error { \n",
"font-weight: bold;\n",
"color: red;\n",
"}\n",
".suggestion-name {\n",
"font-weight: bold;\n",
"}\n",
"</style><div class=\"suggestion-name\" style=\"clear:both;\">Redundant $</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">map (\\ i -> count (v ! i) doc) $ [0 .. (vecSize - 1)]</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">map (\\ i -> count (v ! i) doc) [0 .. (vecSize - 1)]</div></div><div class=\"suggestion-name\" style=\"clear:both;\">Redundant bracket</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">(collectionDNormalized !! 2)</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">collectionDNormalized !! 2</div></div>"
],
"text/plain": [
"Line 2: Redundant $\n",
"Found:\n",
"map (\\ i -> count (v ! i) doc) $ [0 .. (vecSize - 1)]\n",
"Why not:\n",
"map (\\ i -> count (v ! i) doc) [0 .. (vecSize - 1)]Line 9: Redundant bracket\n",
"Found:\n",
"(collectionDNormalized !! 2)\n",
"Why not:\n",
"collectionDNormalized !! 2"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ty"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"chyba"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mieć"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[0.0,0.0,1.0,1.0,1.0,0.0,1.0,0.0]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"vectorize :: Int -> Map Int Text -> [Text] -> [Double]\n",
"vectorize vecSize v doc = map (\\i -> count (v ! i) doc) $ [0..(vecSize-1)]\n",
" where count t doc \n",
" | t `elem` doc = 1.0\n",
" | otherwise = 0.0\n",
" \n",
"vocSize = Set.size voc\n",
"\n",
"(collectionDNormalized !! 2)\n",
"vectorize vocSize vocD (collectionDNormalized !! 2)\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" ![image](./macierz.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Jak inaczej uwzględnić częstość wyrazów?\n",
"\n",
"<div style=\"display:none\">\n",
" $\n",
" \\newcommand{\\idf}{\\mathop{\\rm idf}\\nolimits}\n",
" \\newcommand{\\tf}{\\mathop{\\rm tf}\\nolimits}\n",
" \\newcommand{\\df}{\\mathop{\\rm df}\\nolimits}\n",
" \\newcommand{\\tfidf}{\\mathop{\\rm tfidf}\\nolimits}\n",
" $\n",
"</div>\n",
"\n",
"* $\\tf_{t,d}$ - term frequency\n",
"\n",
"* $1+\\log(\\tf_{t,d})$ (a sketch of this variant follows after the code below)\n",
"\n",
"* $0.5 + \\frac{0.5 \\times \\tf_{t,d}}{\\max_t(\\tf_{t,d})}$"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>/* Styles used for the Hoogle display in the pager */\n",
".hoogle-doc {\n",
"display: block;\n",
"padding-bottom: 1.3em;\n",
"padding-left: 0.4em;\n",
"}\n",
".hoogle-code {\n",
"display: block;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"}\n",
".hoogle-text {\n",
"display: block;\n",
"}\n",
".hoogle-name {\n",
"color: green;\n",
"font-weight: bold;\n",
"}\n",
".hoogle-head {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-sub {\n",
"display: block;\n",
"margin-left: 0.4em;\n",
"}\n",
".hoogle-package {\n",
"font-weight: bold;\n",
"font-style: italic;\n",
"}\n",
".hoogle-module {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-class {\n",
"font-weight: bold;\n",
"}\n",
".get-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"display: block;\n",
"white-space: pre-wrap;\n",
"}\n",
".show-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"margin-left: 1em;\n",
"}\n",
".mono {\n",
"font-family: monospace;\n",
"display: block;\n",
"}\n",
".err-msg {\n",
"color: red;\n",
"font-style: italic;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"display: block;\n",
"}\n",
"#unshowable {\n",
"color: red;\n",
"font-weight: bold;\n",
"}\n",
".err-msg.in.collapse {\n",
"padding-top: 0.7em;\n",
"}\n",
".highlight-code {\n",
"white-space: pre;\n",
"font-family: monospace;\n",
"}\n",
".suggestion-warning { \n",
"font-weight: bold;\n",
"color: rgb(200, 130, 0);\n",
"}\n",
".suggestion-error { \n",
"font-weight: bold;\n",
"color: red;\n",
"}\n",
".suggestion-name {\n",
"font-weight: bold;\n",
"}\n",
"</style><div class=\"suggestion-name\" style=\"clear:both;\">Redundant $</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">map (\\ i -> count (v ! i) doc) $ [0 .. (vecSize - 1)]</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">map (\\ i -> count (v ! i) doc) [0 .. (vecSize - 1)]</div></div><div class=\"suggestion-name\" style=\"clear:both;\">Redundant bracket</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">(collectionDNormalized !! 4)</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">collectionDNormalized !! 4</div></div>"
],
"text/plain": [
"Line 2: Redundant $\n",
"Found:\n",
"map (\\ i -> count (v ! i) doc) $ [0 .. (vecSize - 1)]\n",
"Why not:\n",
"map (\\ i -> count (v ! i) doc) [0 .. (vecSize - 1)]Line 7: Redundant bracket\n",
"Found:\n",
"(collectionDNormalized !! 4)\n",
"Why not:\n",
"collectionDNormalized !! 4"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mieć"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[0.0,0.0,0.0,1.0,1.0,0.0,0.0,0.0]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[0.0,0.0,0.0,2.0,1.0,0.0,0.0,0.0]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"vectorizeTf :: Int -> Map Int Text -> [Text] -> [Double]\n",
"vectorizeTf vecSize v doc = map (\\i -> count (v ! i) doc) $ [0..(vecSize-1)]\n",
" where count t doc = fromIntegral $ (Prelude.length . Prelude.filter (== t)) doc\n",
"\n",
"vocSize = Set.size voc\n",
"\n",
"(collectionDNormalized !! 4)\n",
"vectorize vocSize vocD (collectionDNormalized !! 4)\n",
"vectorizeTf vocSize vocD (collectionDNormalized !! 4)"
]
},
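{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a side note (our own sketch, not part of the original lecture code): the logarithmic variant listed above, $1+\\log$ of the term frequency, can be implemented in the same style as `vectorizeTf`; the name `vectorizeTfLog` is ours."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"-- a sketch of the sublinear (logarithmic) tf variant: 1 + log tf for tf > 0, and 0 otherwise\n",
"vectorizeTfLog :: Int -> Map Int Text -> [Text] -> [Double]\n",
"vectorizeTfLog vecSize v doc = map (\\i -> scale (count (v ! i))) [0..(vecSize-1)]\n",
"  where count t = fromIntegral $ (Prelude.length . Prelude.filter (== t)) doc\n",
"        scale tf | tf > 0.0 = 1.0 + log tf\n",
"                 | otherwise = 0.0\n",
"\n",
"vectorizeTfLog vocSize vocD (collectionDNormalized !! 4)"
]
},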
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div style=\"display:none\">\n",
" $\n",
" \\newcommand{\\idf}{\\mathop{\\rm idf}\\nolimits}\n",
" \\newcommand{\\tf}{\\mathop{\\rm tf}\\nolimits}\n",
" \\newcommand{\\df}{\\mathop{\\rm df}\\nolimits}\n",
" \\newcommand{\\tfidf}{\\mathop{\\rm tfidf}\\nolimits}\n",
" $\n",
"</div>\n",
"\n",
"### Odwrotna częstość dokumentowa\n",
"\n",
"Czy wszystkie wyrazy są tak samo ważne?\n",
"\n",
"**NIE.** Wyrazy pojawiające się w wielu dokumentach są mniej ważne.\n",
"\n",
"Aby to uwzględnić, przemnażamy frekwencję wyrazu przez _odwrotną\n",
" częstość w dokumentach_ (_inverse document frequency_):\n",
"\n",
"$$\\idf_t = \\log \\frac{N}{\\df_t},$$\n",
"\n",
"gdzie:\n",
"\n",
"* $\\idf_t$ - odwrotna częstość wyrazu $t$ w dokumentach\n",
"\n",
"* $N$ - liczba dokumentów w kolekcji\n",
"\n",
"* $\\df_f$ - w ilu dokumentach wystąpił wyraz $t$?\n",
"\n",
"#### Dlaczego idf?\n",
"\n",
"term $t$ wystąpił...\n",
"\n",
"* w 1 dokumencie, $\\idf_t = \\log N/1 = \\log N$\n",
"* 2 razy w kolekcji, $\\idf_t = \\log N/2$ lub $\\log N$\n",
"* w połowie dokumentów kolekcji, $\\idf_t = \\log N/(N/2) = \\log 2$\n",
"* we wszystkich dokumentach, $\\idf_t = \\log N/N = \\log 1 = 0$\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.22314355131420976"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"idf :: [[Text]] -> Text -> Double\n",
"idf coll t = log (fromIntegral n / fromIntegral df)\n",
" where df = Prelude.length $ Prelude.filter (\\d -> t `elem` d) coll\n",
" n = Prelude.length coll\n",
" \n",
"idf collectionDNormalized \"kot\" "
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.9162907318741551"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"idf collectionDNormalized \"chyba\" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Co z tego wynika?\n",
"\n",
"Zamiast $\\tf_{t,d}$ będziemy w wektorach rozpatrywać wartości:\n",
"\n",
"$$\\tfidf_{t,d} = \\tf_{t,d} \\times \\idf_{t}$$\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mieć"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"kot"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[0.0,0.0,0.0,1.0,1.0,0.0,0.0,0.0]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[0.0,0.0,0.0,2.0,1.0,0.0,0.0,0.0]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"[0.0,0.0,0.0,0.44628710262841953,0.5108256237659907,0.0,0.0,0.0]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"vectorizeTfIdf :: Int -> [[Text]] -> Map Int Text -> [Text] -> [Double]\n",
"vectorizeTfIdf vecSize coll v doc = map (\\i -> count (v ! i) doc * idf coll (v ! i)) [0..(vecSize-1)]\n",
" where count t doc = fromIntegral $ (Prelude.length . Prelude.filter (== t)) doc\n",
"\n",
"vocSize = Set.size voc\n",
"\n",
"collectionDNormalized !! 4\n",
"vectorize vocSize vocD (collectionDNormalized !! 4)\n",
"vectorizeTf vocSize vocD (collectionDNormalized !! 4)\n",
"vectorizeTfIdf vocSize collectionDNormalized vocD (collectionDNormalized !! 4)"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[1.6094379124341003,0.0,0.0,0.22314355131420976,0.5108256237659907,0.0,0.0,0.0],[0.0,0.9162907318741551,0.0,0.22314355131420976,0.0,1.6094379124341003,0.0,0.0],[0.0,0.0,0.9162907318741551,0.22314355131420976,0.5108256237659907,0.0,1.6094379124341003,0.0],[0.0,0.9162907318741551,0.9162907318741551,0.0,0.0,0.0,0.0,1.6094379124341003],[0.0,0.0,0.0,0.44628710262841953,0.5108256237659907,0.0,0.0,0.0]]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"map (vectorizeTfIdf vocSize collectionDNormalized vocD) collectionDNormalized"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Teraz zdefiniujemy _overlap score measure_:\n",
"\n",
"$$\\sigma(q,d) = \\sum_{t \\in q} \\tfidf_{t,d}$$"
]
},
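{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of this measure in code (the helper names `overlapScore` and `scoreCollection` are ours; we reuse `idf`, `normalize` and `collectionDNormalized` defined above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"-- sigma(q, d): the sum of the tf-idf weights of the query terms in the document;\n",
"-- the tf t > 0 guard skips terms absent from the document (and avoids 0 * infinity for terms unseen in the whole collection)\n",
"overlapScore :: Text -> [Text] -> Double\n",
"overlapScore q doc = Prelude.sum [tf t * idf collectionDNormalized t | t <- normalize q, tf t > 0]\n",
"  where tf t = fromIntegral $ (Prelude.length . Prelude.filter (== t)) doc\n",
"\n",
"-- score every document of the toy collection against a query\n",
"scoreCollection :: Text -> [Double]\n",
"scoreCollection q = map (overlapScore q) collectionDNormalized\n",
"\n",
"scoreCollection \"kot w butach\""
]
},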
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Podobieństwo kosinusowe\n",
"\n",
"_Overlap score measure_ nie jest jedyną możliwą metryką, za pomocą której możemy mierzyć dopasowanie dokumentu do zapytania. Możemy również sięgnąć po intuicje geometryczne (skoro mamy do czynienia z wektorami).\n",
"\n",
"**Pytanie**: Ile wymiarów mają wektory, na których operujemy? Jak \"wyglądają\" te wektory? Czy możemy wykonywać na nich standardowe operacje geometryczne czy te, które znamy z geometrii liniowej?\n",
"\n",
"#### Podobieństwo między dokumentami\n",
"\n",
"Zajmijmy się teraz poszukiwaniem miary mierzącej podobieństwo między dokumentami $d_1$ i $d_2$ (czyli poszukujemy sensownej funkcji $\\sigma : D x D \\rightarrow \\mathbb{R}$).\n",
"\n",
"**Uwaga** Pojęcia \"miary\" używamy nieformalnie, nie spełnia ona założeń znanych z teorii miary.\n",
"\n",
"Rozpatrzmy zbiorek tekstów legend miejskich z <git://gonito.net/polish-urban-legends>.\n",
"\n",
"(To autentyczne teksty z Internentu, z językiem potocznym, wulgarnym itd.)\n",
"\n",
"```\n",
" git clone git://gonito.net/polish-urban-legends\n",
" paste polish-urban-legends/dev-0/expected.tsv polish-urban-legends/dev-0/in.tsv > legendy.txt\n",
"``` "
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Opowieść prawdziwa... Olsztyn, akademik, 7 piętro, impreza u Mariusza, jak to na polskiej najebce bywa ktoś rzucił tekstem: \"Mariusz nie zjedziesz na nartach po schodach\". Sprawa ucichla, studencii wrocili do tego co lubia i w sumie umieją najbardziej czyli picia, lecz nad ranem kolo godziny 6.00 ludzia przypomnialo sie ze Mariusz miał zjechać na nartach po schodach. Tu warto wspomnieć że Mariusz był zapalonym narciarzem stąd właśnie w jego pokoju znalezc można bylo narty, bo po ki huj komuś narty w Olsztynie! Tak wracajac do historii nasz bohater odział się w sprzet, podszed do schodow i niestety dał radę zjechać jedynie w połowie, gdyż jak to powiedzial \"no kurwa potknąłem sie\", ale nieustraszoony Mariusz próbowal dalej. Nastepny zjazd byl perfekcyjny, jedno pietro zanim, niestety pomiedzy 6 a 5 pietrem Mariusza natrafil na Pania sprzątaczke, która potrącił i zwiał z miejsca wypadku. Ok godziny 10.00 nastopilo przebudzenie Mariusza, ktory zaraz po obudzeniu uslyszal co narobił, mianowicie o skutkach potracenia, Pani sprzataczka złamala rękę i trafiła do szpitala. Mogły powstać przez to cieżkie konsekwencje, Mariusz mógł wyleciec z akademika jeżeli kierownik dowie sie o calym zajściu. Wiec koledzy poradzili narciażowi, aby kupił kwiaty i bombonierkę i poszedł do szpitala z przeprosinami. Po szybkich zakupach w sasiedniej Biedrące, Mariusz byl przygotowany na konfrontacje z Pania sprzątaczka, ale nie mogło pojść pięknie i gładko. Po wejściu do szpitala nasz bohater skierowal swoje kroki do recepcji pytajac się o ciocię, która miała wypadek w akademiku, recepcjonistka skierowała go do lekarza, gdzie czekał na jego wyjście ok 15 minut, gdy lekarz już wyszedł ten odrazu podleciał do niego, żeby spytać się o stan zdrowia Pani sprzątaczki. Wnet uslyszla od lekarz, niestety Pani teraz jest u psychiatry po twierdzi, że ktoś potracil ja zjeżdzajac na nartach w akademiku. Po uslyszeniu tej wiadomosci Mariusz odwrocił się, wybiegł, kupił piecie i szybko pobiegł do akademika pić dalej! Morał... student potrafi!"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import System.IO\n",
"import Data.List.Split as SP\n",
"\n",
"legendsh <- openFile \"legendy.txt\" ReadMode\n",
"hSetEncoding legendsh utf8\n",
"contents <- hGetContents legendsh\n",
"ls = Prelude.lines contents\n",
"items = map (map pack . SP.splitOn \"\\t\") ls\n",
"Prelude.head items"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"87"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"nbOfLegends = Prelude.length items\n",
"nbOfLegends"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lap"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"be_wy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mo_zu"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"be_wy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mo_zu"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"be_wy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ta_ab"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ta_ab"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ta_ab"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lap"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"tr_su"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mo_zu"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"tr_su"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ta_ab"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lap"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"be_wy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lap"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mo_zu"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"tr_su"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"tr_su"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"be_wy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"tr_su"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mo_zu"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mo_zu"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"na_ak"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lap"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"mo_zu"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ba_hy"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"zw_oz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"tr_su"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ne_dz"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"w_lud"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Ja podejrzewam że o polowaniu nie było mowy, po prostu znalazł martwego szczupaka i skorzystał z okazji! Mnie mocno zdziwiła jego siła żeby taki pół kilogramowy okaz szczupaka przesuwać o parę metrów i to w trzcinach! Szacuneczek. Przypomniala mi sie historia którą kiedys zaslyszalem o wlascicielce pytona, ktory nagle polozyl sie wzdluz jej łóżka. Leżał tak wyciągniety jak struna dłuższy czas jak nieżywy (a był długości łóżka), więc kobitka zadzonila do weterynarza co ma robić. Usłyszała że ma szybko zamknąć się w łazience i poczekać na niego bo pyton ją mierzy jako potencjalną ofiarę (czy mu się zmieści w brzuchu...). Wierzyć, nie wierzyć? Kiedyś nie wierzyłem ale od kilku dni mam wątpliwosci... Pozdrawiam"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"labelsL = map Prelude.head items\n",
"labelsL\n",
"collectionL = map (!!1) items\n",
"items !! 1"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"348"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"collectionLNormalized = map normalize collectionL\n",
"voc' = getVocabulary collectionL\n",
"\n",
"vocLSize = Prelude.length voc'\n",
"\n",
"vocL :: Map Int Text\n",
"vocL = Map.fromList $ zip [0..] $ Set.toList voc'\n",
"\n",
"invvocL :: Map Text Int\n",
"invvocL = Map.fromList $ zip (Set.toList voc') [0..]\n",
"\n",
"vocL ! 0\n",
"invvocL ! \"chyba\"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Wektoryzujemy całą kolekcję:"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.38837067474886433,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.752336051950276,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0647107369924282,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,3.7727609380946383,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.2078115806331018,0.0,0.0,0.0,0.0,0.0,1.247032293786383,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.5947071077466928,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.268683541318364,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.2078115806331018,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.7578579175523736,0.0,0.0,0.0,0.0,0.0,0.3550342544812725,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,3.7727609380946383,3.367295829986474,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0
.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.9395475940384223,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.21437689194643514,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.2878542883066382,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"lVectorized = map (vectorizeTfIdf vocLSize collectionLNormalized vocL) collectionLNormalized\n",
"lVectorized !! 1"
]
},
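  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Na marginesie: każdy z tych wektorów ma tyle współrzędnych, ile wyrazów liczy słownik, co można szybko sprawdzić:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "-- liczba składowych wektora dokumentu powinna być równa rozmiarowi słownika\n",
    "Prelude.length (lVectorized !! 1) == vocLSize"
   ]
  },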
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Szukamy funkcji $sigma$, która da wysoką wartość dla tekstów dotyczących tego samego wątku legendowego (np. $d_1$ i $d_2$ mówią o wężu przymierzającym się do zjedzenia swojej właścicielki) i niską dla tekstów z różnych wątków (np. $d_1$ opowiada o wężu ludojadzie, $d_2$ - bałwanku na hydrancie)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Może po prostu odległość euklidesowa, skoro to punkty w wielowymiarowej przestrzeni?"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>/* Styles used for the Hoogle display in the pager */\n",
".hoogle-doc {\n",
"display: block;\n",
"padding-bottom: 1.3em;\n",
"padding-left: 0.4em;\n",
"}\n",
".hoogle-code {\n",
"display: block;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"}\n",
".hoogle-text {\n",
"display: block;\n",
"}\n",
".hoogle-name {\n",
"color: green;\n",
"font-weight: bold;\n",
"}\n",
".hoogle-head {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-sub {\n",
"display: block;\n",
"margin-left: 0.4em;\n",
"}\n",
".hoogle-package {\n",
"font-weight: bold;\n",
"font-style: italic;\n",
"}\n",
".hoogle-module {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-class {\n",
"font-weight: bold;\n",
"}\n",
".get-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"display: block;\n",
"white-space: pre-wrap;\n",
"}\n",
".show-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"margin-left: 1em;\n",
"}\n",
".mono {\n",
"font-family: monospace;\n",
"display: block;\n",
"}\n",
".err-msg {\n",
"color: red;\n",
"font-style: italic;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"display: block;\n",
"}\n",
"#unshowable {\n",
"color: red;\n",
"font-weight: bold;\n",
"}\n",
".err-msg.in.collapse {\n",
"padding-top: 0.7em;\n",
"}\n",
".highlight-code {\n",
"white-space: pre;\n",
"font-family: monospace;\n",
"}\n",
".suggestion-warning { \n",
"font-weight: bold;\n",
"color: rgb(200, 130, 0);\n",
"}\n",
".suggestion-error { \n",
"font-weight: bold;\n",
"color: red;\n",
"}\n",
".suggestion-name {\n",
"font-weight: bold;\n",
"}\n",
"</style><div class=\"suggestion-name\" style=\"clear:both;\">Eta reduce</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">formatNumber x = printf \"% 7.2f\" x</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">formatNumber = printf \"% 7.2f\"</div></div>"
],
"text/plain": [
"Line 5: Eta reduce\n",
"Found:\n",
"formatNumber x = printf \"% 7.2f\" x\n",
"Why not:\n",
"formatNumber = printf \"% 7.2f\""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
" 0.00 79.93 78.37 76.57 87.95 81.15 82.77 127.50 124.54 76.42 84.19 78.90 90.90"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import Text.Printf\n",
"import Data.List (take)\n",
"\n",
"formatNumber :: Double -> String\n",
"formatNumber x = printf \"% 7.2f\" x\n",
"\n",
"similarTo :: ([Double] -> [Double] -> Double) -> [[Double]] -> Int -> Text\n",
"similarTo simFun vs ix = pack $ Prelude.unwords $ map (formatNumber . ((vs !! ix) `simFun`)) vs\n",
"\n",
"euclDistance :: [Double] -> [Double] -> Double\n",
"euclDistance v1 v2 = sqrt $ sum $ Prelude.zipWith (\\x1 x2 -> (x1 - x2)**2) v1 v2\n",
"\n",
"limit = 13\n",
"labelsLimited = Data.List.take limit labelsL\n",
"limitedL = Data.List.take limit lVectorized\n",
"\n",
"similarTo euclDistance limitedL 0\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>/* Styles used for the Hoogle display in the pager */\n",
".hoogle-doc {\n",
"display: block;\n",
"padding-bottom: 1.3em;\n",
"padding-left: 0.4em;\n",
"}\n",
".hoogle-code {\n",
"display: block;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"}\n",
".hoogle-text {\n",
"display: block;\n",
"}\n",
".hoogle-name {\n",
"color: green;\n",
"font-weight: bold;\n",
"}\n",
".hoogle-head {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-sub {\n",
"display: block;\n",
"margin-left: 0.4em;\n",
"}\n",
".hoogle-package {\n",
"font-weight: bold;\n",
"font-style: italic;\n",
"}\n",
".hoogle-module {\n",
"font-weight: bold;\n",
"}\n",
".hoogle-class {\n",
"font-weight: bold;\n",
"}\n",
".get-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"display: block;\n",
"white-space: pre-wrap;\n",
"}\n",
".show-type {\n",
"color: green;\n",
"font-weight: bold;\n",
"font-family: monospace;\n",
"margin-left: 1em;\n",
"}\n",
".mono {\n",
"font-family: monospace;\n",
"display: block;\n",
"}\n",
".err-msg {\n",
"color: red;\n",
"font-style: italic;\n",
"font-family: monospace;\n",
"white-space: pre;\n",
"display: block;\n",
"}\n",
"#unshowable {\n",
"color: red;\n",
"font-weight: bold;\n",
"}\n",
".err-msg.in.collapse {\n",
"padding-top: 0.7em;\n",
"}\n",
".highlight-code {\n",
"white-space: pre;\n",
"font-family: monospace;\n",
"}\n",
".suggestion-warning { \n",
"font-weight: bold;\n",
"color: rgb(200, 130, 0);\n",
"}\n",
".suggestion-error { \n",
"font-weight: bold;\n",
"color: red;\n",
"}\n",
".suggestion-name {\n",
"font-weight: bold;\n",
"}\n",
"</style><div class=\"suggestion-name\" style=\"clear:both;\">Move brackets to avoid $</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">\"\\n\"\n",
" <>\n",
" (Data.Text.unlines\n",
" $ map (\\ (lab, ix) -> lab <> \" \" <> similarTo simFun vs ix)\n",
" $ zip labels [0 .. (Prelude.length vs - 1)])</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">\"\\n\"\n",
" <>\n",
" Data.Text.unlines\n",
" (map (\\ (lab, ix) -> lab <> \" \" <> similarTo simFun vs ix)\n",
" $ zip labels [0 .. (Prelude.length vs - 1)])</div></div><div class=\"suggestion-name\" style=\"clear:both;\">Use zipWith</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">map (\\ (lab, ix) -> lab <> \" \" <> similarTo simFun vs ix)\n",
" $ zip labels [0 .. (Prelude.length vs - 1)]</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">zipWith\n",
" (curry (\\ (lab, ix) -> lab <> \" \" <> similarTo simFun vs ix))\n",
" labels [0 .. (Prelude.length vs - 1)]</div></div><div class=\"suggestion-name\" style=\"clear:both;\">Move brackets to avoid $</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">\" \"\n",
" <> (Data.Text.unwords $ map (\\ l -> pack $ printf \"% 7s\" l) labels)</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">\" \"\n",
" <> Data.Text.unwords (map (\\ l -> pack $ printf \"% 7s\" l) labels)</div></div><div class=\"suggestion-name\" style=\"clear:both;\">Avoid lambda</div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Found:</div><div class=\"highlight-code\" id=\"haskell\">\\ l -> pack $ printf \"% 7s\" l</div></div><div class=\"suggestion-row\" style=\"float: left;\"><div class=\"suggestion-warning\">Why Not:</div><div class=\"highlight-code\" id=\"haskell\">pack . printf \"% 7s\"</div></div>"
],
"text/plain": [
"Line 2: Move brackets to avoid $\n",
"Found:\n",
"\"\\n\"\n",
" <>\n",
" (Data.Text.unlines\n",
" $ map (\\ (lab, ix) -> lab <> \" \" <> similarTo simFun vs ix)\n",
" $ zip labels [0 .. (Prelude.length vs - 1)])\n",
"Why not:\n",
"\"\\n\"\n",
" <>\n",
" Data.Text.unlines\n",
" (map (\\ (lab, ix) -> lab <> \" \" <> similarTo simFun vs ix)\n",
" $ zip labels [0 .. (Prelude.length vs - 1)])Line 2: Use zipWith\n",
"Found:\n",
"map (\\ (lab, ix) -> lab <> \" \" <> similarTo simFun vs ix)\n",
" $ zip labels [0 .. (Prelude.length vs - 1)]\n",
"Why not:\n",
"zipWith\n",
" (curry (\\ (lab, ix) -> lab <> \" \" <> similarTo simFun vs ix))\n",
" labels [0 .. (Prelude.length vs - 1)]Line 3: Move brackets to avoid $\n",
"Found:\n",
"\" \"\n",
" <> (Data.Text.unwords $ map (\\ l -> pack $ printf \"% 7s\" l) labels)\n",
"Why not:\n",
"\" \"\n",
" <> Data.Text.unwords (map (\\ l -> pack $ printf \"% 7s\" l) labels)Line 3: Avoid lambda\n",
"Found:\n",
"\\ l -> pack $ printf \"% 7s\" l\n",
"Why not:\n",
"pack . printf \"% 7s\""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
" na_ak w_lud ba_hy w_lap ne_dz be_wy zw_oz mo_zu be_wy ba_hy mo_zu be_wy w_lud\n",
"na_ak 0.00 79.93 78.37 76.57 87.95 81.15 82.77 127.50 124.54 76.42 84.19 78.90 90.90\n",
"w_lud 79.93 0.00 38.92 34.35 56.48 44.89 47.21 109.24 104.82 35.33 49.88 39.98 60.20\n",
"ba_hy 78.37 38.92 0.00 30.37 54.23 40.93 43.83 108.15 102.91 27.37 46.95 35.81 58.99\n",
"w_lap 76.57 34.35 30.37 0.00 51.54 37.46 40.86 107.43 103.22 25.22 43.66 32.10 56.53\n",
"ne_dz 87.95 56.48 54.23 51.54 0.00 57.98 60.32 113.66 109.59 50.96 62.17 54.84 70.70\n",
"be_wy 81.15 44.89 40.93 37.46 57.98 0.00 49.55 110.37 100.50 37.77 51.54 37.09 62.92\n",
"zw_oz 82.77 47.21 43.83 40.86 60.32 49.55 0.00 111.11 107.57 41.02 54.07 45.23 64.65\n",
"mo_zu 127.50 109.24 108.15 107.43 113.66 110.37 111.11 0.00 139.57 107.38 109.91 108.20 117.07\n",
"be_wy 124.54 104.82 102.91 103.22 109.59 100.50 107.57 139.57 0.00 102.69 108.32 99.06 113.25\n",
"ba_hy 76.42 35.33 27.37 25.22 50.96 37.77 41.02 107.38 102.69 0.00 43.83 32.08 56.68\n",
"mo_zu 84.19 49.88 46.95 43.66 62.17 51.54 54.07 109.91 108.32 43.83 0.00 47.87 66.40\n",
"be_wy 78.90 39.98 35.81 32.10 54.84 37.09 45.23 108.20 99.06 32.08 47.87 0.00 59.66\n",
"w_lud 90.90 60.20 58.99 56.53 70.70 62.92 64.65 117.07 113.25 56.68 66.40 59.66 0.00"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"paintMatrix :: ([Double] -> [Double] -> Double) -> [Text] -> [[Double]] -> Text\n",
"paintMatrix simFun labels vs = header <> \"\\n\" <> (Data.Text.unlines $ map (\\(lab, ix) -> lab <> \" \" <> similarTo simFun vs ix) $ zip labels [0..(Prelude.length vs - 1)])\n",
" where header = \" \" <> (Data.Text.unwords $ map (\\l -> pack $ printf \"% 7s\" l) labels)\n",
" \n",
"paintMatrix euclDistance labelsLimited limitedL"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Problem: za dużo zależy od długości tekstu.\n",
"\n",
"Rozwiązanie: znormalizować wektor $v$ do wektora jednostkowego.\n",
"\n",
"$$ \\vec{1}(v) = \\frac{v}{|v|} $$\n",
"\n",
"Taki wektor ma długość 1!"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1.0"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
" na_ak w_lud ba_hy w_lap ne_dz be_wy zw_oz mo_zu be_wy ba_hy mo_zu be_wy w_lud\n",
"na_ak 10.00 0.67 0.66 0.66 0.67 0.67 0.67 0.67 0.67 0.67 0.66 0.67 0.67\n",
"w_lud 0.67 10.00 0.67 0.68 0.67 0.66 0.67 0.67 0.68 0.66 0.67 0.67 0.68\n",
"ba_hy 0.66 0.67 10.00 0.66 0.67 0.67 0.67 0.67 0.69 0.74 0.66 0.67 0.66\n",
"w_lap 0.66 0.68 0.66 10.00 0.66 0.66 0.66 0.66 0.67 0.66 0.66 0.66 0.66\n",
"ne_dz 0.67 0.67 0.67 0.66 10.00 0.67 0.67 0.68 0.69 0.68 0.67 0.67 0.68\n",
"be_wy 0.67 0.66 0.67 0.66 0.67 10.00 0.66 0.67 0.74 0.66 0.67 0.76 0.66\n",
"zw_oz 0.67 0.67 0.67 0.66 0.67 0.66 10.00 0.67 0.67 0.66 0.66 0.67 0.67\n",
"mo_zu 0.67 0.67 0.67 0.66 0.68 0.67 0.67 10.00 0.69 0.67 0.69 0.68 0.67\n",
"be_wy 0.67 0.68 0.69 0.67 0.69 0.74 0.67 0.69 10.00 0.68 0.67 0.75 0.67\n",
"ba_hy 0.67 0.66 0.74 0.66 0.68 0.66 0.66 0.67 0.68 10.00 0.66 0.67 0.66\n",
"mo_zu 0.66 0.67 0.66 0.66 0.67 0.67 0.66 0.69 0.67 0.66 10.00 0.67 0.67\n",
"be_wy 0.67 0.67 0.67 0.66 0.67 0.76 0.67 0.68 0.75 0.67 0.67 10.00 0.67\n",
"w_lud 0.67 0.68 0.66 0.66 0.68 0.66 0.67 0.67 0.67 0.66 0.67 0.67 10.00"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"vectorNorm :: [Double] -> Double\n",
"vectorNorm vs = sqrt $ sum $ map (\\x -> x * x) vs\n",
"\n",
"toUnitVector :: [Double] -> [Double]\n",
"toUnitVector vs = map (/ n) vs\n",
" where n = vectorNorm vs\n",
"\n",
"vectorNorm (toUnitVector [3.0, 4.0])\n",
"\n",
"euclDistanceNormalized :: [Double] -> [Double] -> Double\n",
"euclDistanceNormalized v1 v2 = toUnitVector v1 `euclDistance` toUnitVector v2\n",
"\n",
"euclSim v1 v2 = 1 / (d + 0.1)\n",
" where d = euclDistanceNormalized v1 v2\n",
"\n",
"paintMatrix euclSim labelsLimited limitedL"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Podobieństwo kosinusowe\n",
"\n",
"Częściej zamiast odległości euklidesowej stosuje się podobieństwo kosinusowe, czyli kosinus kąta między wektorami.\n",
"\n",
"Wektor dokumentu ($\\vec{V}(d)$) - wektor, którego składowe odpowiadają wyrazom.\n",
"\n",
"$$\\sigma(d_1,d_2) = \\cos\\theta(\\vec{V}(d_1),\\vec{V}(d_2)) = \\frac{\\vec{V}(d_1) \\cdot \\vec{V}(d_2)}{|\\vec{V}(d_1)||\\vec{V}(d_2)|} $$\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Zauważmy, że jest to iloczyn skalarny znormalizowanych wektorów!\n",
"\n",
"$$\\sigma(d_1,d_2) = \\vec{1}(\\vec{V}(d_1)) \\times \\vec{1}(\\vec{V}(d_2)) $$"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"1.0"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"(✕) :: [Double] -> [Double] -> Double\n",
"(✕) v1 v2 = sum $ Prelude.zipWith (*) v1 v2\n",
"\n",
"[2, 1, 0] ✕ [-2, 5, 10]"
]
},
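  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Przy okazji drobna własność wiążąca iloczyn skalarny z normą wektora: $v \\cdot v = |v|^2$. Poniżej szkic sprawdzenia na przykładowym wektorze:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "-- iloczyn skalarny wektora z samym sobą to kwadrat jego normy\n",
    "[3.0, 4.0] ✕ [3.0, 4.0]\n",
    "vectorNorm [3.0, 4.0] ** 2"
   ]
  },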
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
" na_ak w_lud ba_hy w_lap ne_dz be_wy zw_oz mo_zu be_wy ba_hy mo_zu be_wy w_lud\n",
"na_ak 1.00 0.02 0.01 0.01 0.03 0.02 0.02 0.04 0.03 0.02 0.01 0.02 0.03\n",
"w_lud 0.02 1.00 0.02 0.05 0.04 0.01 0.03 0.04 0.06 0.01 0.02 0.03 0.06\n",
"ba_hy 0.01 0.02 1.00 0.01 0.02 0.03 0.03 0.04 0.08 0.22 0.01 0.04 0.01\n",
"w_lap 0.01 0.05 0.01 1.00 0.01 0.01 0.00 0.01 0.02 0.00 0.00 0.00 0.00\n",
"ne_dz 0.03 0.04 0.02 0.01 1.00 0.04 0.03 0.07 0.08 0.06 0.03 0.03 0.05\n",
"be_wy 0.02 0.01 0.03 0.01 0.04 1.00 0.01 0.03 0.21 0.01 0.02 0.25 0.01\n",
"zw_oz 0.02 0.03 0.03 0.00 0.03 0.01 1.00 0.04 0.03 0.00 0.01 0.02 0.02\n",
"mo_zu 0.04 0.04 0.04 0.01 0.07 0.03 0.04 1.00 0.10 0.02 0.09 0.05 0.04\n",
"be_wy 0.03 0.06 0.08 0.02 0.08 0.21 0.03 0.10 1.00 0.05 0.03 0.24 0.04\n",
"ba_hy 0.02 0.01 0.22 0.00 0.06 0.01 0.00 0.02 0.05 1.00 0.01 0.02 0.00\n",
"mo_zu 0.01 0.02 0.01 0.00 0.03 0.02 0.01 0.09 0.03 0.01 1.00 0.01 0.02\n",
"be_wy 0.02 0.03 0.04 0.00 0.03 0.25 0.02 0.05 0.24 0.02 0.01 1.00 0.02\n",
"w_lud 0.03 0.06 0.01 0.00 0.05 0.01 0.02 0.04 0.04 0.00 0.02 0.02 1.00"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"cosineSim v1 v2 = toUnitVector v1 ✕ toUnitVector v2\n",
"\n",
"paintMatrix cosineSim labelsLimited limitedL"
]
},
{
"cell_type": "code",
"execution_count": 140,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"na tylnym siedzeniu w autobusie siedzi matka z 7-8 letnim synkiem. naprzeciwko synka siedzi kobieta (zwrócona twarzą do dzieciaka). synek co chwile wymachuje nogami i kopie kobietę, matka widząc to nie reaguje na to wogóle. wreszcie kobieta zwraca uwagę matce, żeby ta powiedziała coś synowi a matka do niej: nie mogę, bo wychowuję syna bezstresowo!!! ...chłopak, który stał w pobliżu i widział i słyszał całe to zajście wypluł z ust gumę do żucia i przykleił matce na czoło i powiedział: ja też byłem bezstresowo wychowywany... autentyczny przypadek w londyńskim autobusie (a tym co przykleił matce gumę na czoło był chyba nawet młody Polak)"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"collectionL !! 5"
]
},
{
"cell_type": "code",
"execution_count": 141,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Krótko zwięźle i na temat. Zastanawia mnie jak ludzie wychowują dzieci. Co prawda sam nie mam potomstwa i nie zamierzam mieć jak narazie (bo to trochę głupie mieć 17-letniego tatusia), ale niestety mam przyjemność oglądać efekty wychowawcze niektórych par (dzięki znajomym rodziców w różnym wieku). Są trzy najbardziej znane mi modele wychowania. Surowe, bezstresowe (w moim znaczeniu) i \"bezstresowe\" w mowie potocznej. Zaczynam od tego pierwszego. Jak nazwa wskazuje, jest to surowe wychowanie, oparte na karach cielesnych lub torturach umysłowych. Nie uważam tego za dobre wychowanie, bo dziecko jak będzie nieco starsze będzie się bało wszystkiego, bo uzna, ż jak zrobi coś żle to spotka je kara. Więc bicie za różne rzeczy odpada (no chyba, że dzieciak na serio nabroi to oczywiście). Wychowanie bezstresowe z mojego słownika oznacza nienarażanie dziecka na stresy, pocieszanie w trudnych sytuacjach, załatwianie problemów przez rozmowę oraz stały kontakt z dzieckiem. I to chyba najlepsze. Sam zostałem tak wychowany i cieszę się z tego powodu. I oczywiście \"wychowanie bezstresowe\". A tu się normalnie rozpiszę. Po pierwsze geneza. Więc jak dochodzi do takiego wychowania? Odpowiedź. Mamusi i tatusiowi się zachciało bobaska bo to takie malutkie fajniutkie i ooo. Oboje zazdroszczą innym parom bo one mają, a oni nie, więc oni też chcą. No więc rodzi im się bobasek, chuchają dmuchają na niego póki małe. Ale przychodzi ten okres, kiedy dziecko trzeba wychować i kiedy ma się na dzieciaka największy wpływ. I tu się zaczynają schody. Nagle oboje nie mają czasu i mówią \"Wychowamy go/ją/ich (niepotrzebne skreślić) bezstresowo.\" Po drugie. Decyzja o sposobie wychowania podjęta. A więc jak to wygląda? Odpowiedź. Totalna olewka! Mama i tata balują, a dzieciaka zostawiają samemu sobie, albo pod opiekę babci, która również leje na dziecko ciepłym moczem. Dzieciak rośnie i rośnie, nie wie co dobre a co złe. Przypomniała mi się pewna, podobno autentyczna scenka. Chłopak jedzie ze szwagrem autobusem czy tam tramwajem. Na jednym miejscu siedzi starowinka, a na przeciwko niej siedzi lafirynda z brzdącem na kolanach. No i sobie dzieciak macha nóżkami i tu ciach i kopnął staruszkę w nogę. Babcia nic sobie z tego nie zrobiła, a dzieciak nie widząc reakcji zaczął ją już celowo kopać. Staruszka: Może pani powiedzieć coś synkowi żeby mnie nie kopał. Matka: Nie bo ja go wychowuję bezstresowo. Szwagier wyciąga z ust gumę do żucia i przykleja mamusi na czoło mówiąc: Moja mama też mnie wychowała bezstresowo. Ciekaw jestem ile w tym prawdy było, a jeżeli 100% to czy mamusi się odmieniły poglądy. Kto go wie? Po trzecie. Dorosły wychowany bezstresowo. Jaki on jest? Odpowiedź. Zupełnie inny. Myśli, że jest pępkiem świata i że wszystko musi być pod jego dyktando. Pracując w Szwajcarii przy pielęgnacji winogron, syn polskiego kolegi taty zaczął rzucać we mnie winogronami. Miałem ochotę wbić mu nożyczki (którymi podcinałem liście) w oczy. A to byłby ciekawy widok. Dzieciak o białych włosach, skórze i niebieskich oczach stałby sie albinosem (bo z niebieskich oczu stałyby sie czerwone jak u białych szczurów i myszek). Ojciec sie co prawda na niego wydzierał, żeby nie przeszkadzał, ale jak widać dzieciak miał to po prostu w dupie. Więc skoro dziecko nie słucha się nawet rodzica, to jak w szkole posłucha nauczyciela? Jak znajdzie pracę, w której będzie jakiś szef (chyba, że sam sobie będzie szefem)? 
W ten oto sposób jak dowiaduję się o tym, że ktoś wychowuje dzieciaka bezstresowo, ciary przechodzą mi po plecach, a tegoż rodzica mam ochotę palnąć mu w łeb tak żeby się przekręcił (zarówno łeb jak i poglądy). A jak mnie wychowano? Byłem często sam sobie zostawiany. Ale nie oznacza że to byla wspomniana olewka. Jako, że rodzice pracowali, a rodzeństwo chodziło do szkoły, podrzucali mnie do babci. A wieczorami się mną opiekowali. Gadali jak miałem problemy i nie bili bo po
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"collectionL !! 8"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Z powrotem do wyszukiwarek\n",
"\n",
"Możemy potraktować zapytanie jako bardzo krótki dokument, dokonać jego wektoryzacji i policzyć cosinus kąta między zapytaniem a dokumentem."
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ja za to znam przypadek, że koleżanka mieszkala w bloku parę lat temu, pewnego razu wchodzi do łazienki w samej bieliźnie a tam ogromny wąż na podłodze i tak się wystraszyła że wybiegła z wrzaskiem z mieszkania i wyleciała przed blok w samej bieliźnie i uciekła do babci swojej, która mieszkala gdzieś niedaleko. a potem się okazało, że jej sąsiad z dołu hodował sobie węża i tak właśnie swobodnie go \"pasał\" po mieszkaniu i wąż mu spierdzielił przez rurę w łazience :cool :"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Pewna dziewczyna, wieku mi nieznanego, w mieście stołecznym - rozwiodła się. Była sama i samotna, więc zapragnęła kupić sobie zwierzę, aby swą miłą obecnością rozjaśniało jej puste wieczory i takież poranki. Dziewczyna była najwyraźniej ekscentryczką, bo zamiast rozkosznego, miękkiego kociaka z czerwonym kłębuszkiem wełenki lub kudłatego pieska , co sika na parkiet i gryzie skarpetki - kupiła sobie ... węża. Wąż zamieszkał z dziewczyną, i dobrze im było. Gad jadł, spał i rósł, a po pierwszym okresie obojętności ( zwłaszcza ze strony węża ) nawiązała się między nimi nić porozumienia. Przynajmniej dziewczyna odczuwała tę nić wyraźnie, gdyż wąż reagował na jej obecność, a nocą spał zwinięty w kłębek w nogach jej łóżka. Po dwóch latach wspólnego bytowania, nie przerywanych żadnym znaczącym wydarzeniem w ich wzajemnych relacjach, dziewczyna zauważyła, że wąż stał się osowiały. Przestał jeść, chował się po kątach, a nocami, zamiast w nogach łóżka - sypiał wyciągnięty wzdłuż jej boku. Martwiła się o swojego gada i poszła z nim do weterynarza. Weterynarz zbadał go, zapisał leki na poprawę apetytu ( ciekawe, jak się bada węża ? ) i odesłał do domu. Zdrowie śliskiego pacjenta nie poprawiło się, więc troskliwa dziewczyna postanowiła zasięgnąć porady u znawcy gadów i gadzich obyczajów. Znawca wysłuchał opisu niepokojących objawów, i powiedział : - Proszę pani. Ten wąż nie jest chory. On teraz pości. A leży wzdłuż pani nocą, bo sprawdza, czy pani się zmieści. To prawdziwa historia. Opowiedziała nam ją dziś klientka. Leżę na łóżku, pisze tego posta, i patrzę na drzemiącą obok mnie kotkę. Trochę mała jest. Raczej nie ma szans, żebym sie zmieściła, jakby co.."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"Anakonda. Czy to kolejna miejska legenda? Jakiś czas temu koleżanka na jednej z imprez towarzyskich opowiedziała mrożącą krew w żyłach historię o dziewczynie ze swojej pracy, która w Warszawie na dyskotece w Dekadzie poznała chłopaka. Spotykała się z nim na kawę i po drugiej randce doszło do pocałunków. Umówiła się na trzecią randkę, ale zanim do niej doszło wyskoczył jej jakiś pryszcz na twarzy. Poszła do lekarza, a ten... zawiadomił policję, prokuraturę itd. , bo rozpoznał zarażenie... jadem trupim! Rozpoczęto przesłuchanie dziewczyny i po wyjaśnieniach trafiono do chłopaka, z którym się całowała. W jego domu odkryto rozkładające się zwłoki dwóch dziewczyn. Byłam ta historią wstrząśnięta. Następnego dnia opowiedziałam ją w pracy, a koleżanka Justyna przyznała, że już o tym slyszała. To mnie utwierdziło, że historia jest prawdziwa, ale... tylko do wieczora. Coś mi nie dawało spokoju. Uwaga TVN nic? Interwencja Polsatu - nic? Nasz rodzimy Telekurier nic? Zaczęłam sprawdzać w internecie co to jest jad trupi, opryszczka od zakażenia tymże jadem i tak... trafiłam na miejską legendę. Historia wydarzyła się nie tylko w Warszawie, ale i w Olsztynie, Toruniu, Wrocławiu i Krakowie, a być może w ogóle za granicą. Choć prawdopodobne jest, że nie wydarzyła się nigdy. Głośno o niej było na miejskch forach. Za każdym razem ofiara była czyjąś znajomą. Po przeczytaniu kolejnej wersji historii zadzwoniłam do koleżanki, która opowiedziała mi tę historię i sklęłam czym świat stoi. Dlatego kiedy kilka dni temu inna koleżanka opowiedziała kolejną mrożącą krew w żyłach historię - tym razem o anakondzie - rozpoczęłam poszukiwania w internecie czy to nie jest następna miejska legenda. Nic nie znalazłam. Jednak coś mi nie pasuje, choć ta historia może brzmieć wielce prawdopodobnie. Zwłaszcza, gdy ktoś oglądał głupawy film z J. Lo. Zainteresowało mnie to, bo siedząc nad powieścią \"Dzika\" poczytałam trochę o wężach. A o jaką historię mi chodzi? Pewna kobieta (podobno sąsiadka tej mojej koleżanki z pracy, która historię opowiadała) hodowała w domu węża - anakondę. Hodowała ją pięć lat i nie trzymała w terrarium. Anakonda chodziła (pełzała) samopas po domu i co kilka dni dostawała chomika, szczura, mysz lub królika do zjedzenia. Pewnego dnia przestała jeść i zaczęła się dziwnie zachowywać. Każdego ranka po przebudzeniu właścicielka znajdowała ją w swoim łóżku wyprostowaną jak struna. Po dwóch tygodniach takich zachowań ze strony anakondy właścicielka zaniepokojona stanem zdrowia ukochanego węża poszła z nim do lekarza. Ten wysłuchał objawów \"choroby\" i powiedział, że anakonda głodziła się, by zjeść... włascicielkę. Kładzenie się koło niej było mierzeniem ile jeszcze głodzić się trzeba, by właścicielka zmieściła się w pysku no i badaniem od której strony trzeba ją zaatakować. Wężowi chodziło bowiem o to, by smakowity i duży obiad się za bardzo nie bronił. Ja domyśliłam się od razu do czego zmierza ta historia (lektura artykułów o wężach zrobiła swoje), ale dla reszty, którzy słuchali było to szokiem. Mnie szokuje co innego. Po co trzymać węża skoro nie ma z nim człowiek żadnego kontaktu? To nie pies, kot czy inny ssak. To nie ptak. Wąż to wąż! Nie przyjdzie na zawołanie. Jaby ktoś nie wiedział to... Węże są mięsożerne. Połykają ofiary w całości, mimo że często wielokrotnie są one większe od samego węża. Połykanie polega na nasuwaniu się węża na swoją ofiarę. A anakonda... żyje zwykle w wodzie i na drzewach, żywiąc się ssakami (m.in. 
tapiry, dziki, kapibary, jelenie!, gryzonie, niekiedy nawet jaguary), gadami (kajmany), rybami i ptakami, polując zazwyczaj w nocy. Jest w stanie połknąć ofiarę znacznie szerszą od swojego ciała, co jest możliwe dzięki rozciągnięciu szczęk. Trawienie jest bardzo powolne - po posiłku wąż trawi większą ofiarę przez wiele dni, a potem może poś
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import Data.Ord\n",
"import Data.List\n",
"\n",
"legendVectorizer = vectorizeTfIdf vocLSize collectionLNormalized vocL . normalize\n",
"\n",
"\n",
"query vs vzer q = map ((collectionL !!) . snd) $ Data.List.take 3 $ sortBy (\\a b -> fst b `compare` fst a) $ zip (map (`cosineSim` qvec) vs) [0..] \n",
" where qvec = vzer q \n",
"\n",
"query lVectorized legendVectorizer \"wąż przymierza się do zjedzenia właścicielki\"\n",
"\n"
]
},
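  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dla porównania możemy zadać inne zapytanie (fraza poniżej jest tylko przykładem, wyniki zależą od kolekcji):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "query lVectorized legendVectorizer \"bałwan na hydrancie\""
   ]
  },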
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Haskell",
"language": "haskell",
"name": "haskell"
},
"language_info": {
"codemirror_mode": "ihaskell",
"file_extension": ".hs",
"mimetype": "text/x-haskell",
"name": "haskell",
"pygments_lexer": "Haskell",
"version": "8.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}