{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Machine Learning\n",
    "# 9a. Ensemble methods"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    " * **Ensemble methods** use the combined strength of many machine learning models to obtain better performance than could be achieved by any of those models on its own."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
    " * An ensemble method consists of:\n",
    "   * the choice of models,\n",
    "   * the way their outputs are aggregated."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
    " * It is worth applying randomization, i.e. reshuffling the training set before training each model."
   ]
  },
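  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
    "Below is a minimal sketch (not part of the original slides) of such randomization: each model receives its own reshuffled copy of a small, made-up training set. The data and variable names are illustrative assumptions only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "\n",
    "# toy training set (made up for illustration only)\n",
    "X = np.arange(20).reshape(10, 2)\n",
    "y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])\n",
    "\n",
    "n_models = 3\n",
    "for i in range(n_models):\n",
    "    # reshuffle the training set independently for each model\n",
    "    idx = rng.permutation(len(X))\n",
    "    X_i, y_i = X[idx], y[idx]\n",
    "    print(f'model {i}: training order {idx}')"
   ]
  },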
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Averaging probabilities"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
    "#### Example\n",
    "\n",
    "We have 3 models which, for the classes $c = 1, 2, 3, 4, 5$, returned the following probabilities:\n",
    "\n",
    "* $M_1$: [0.10, 0.40, **0.50**, 0.00, 0.00]\n",
    "* $M_2$: [0.10, **0.60**, 0.20, 0.00, 0.10]\n",
    "* $M_3$: [0.10, 0.30, **0.40**, 0.00, 0.20]\n",
    "\n",
    "Which class will be selected according to the average probability for each class?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Average probabilities: [0.10, **0.43**, 0.37, 0.00, 0.10]\n",
    "\n",
    "Class $c = 2$ is selected."
   ]
  },
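  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
    "The same result can be checked with a short NumPy snippet (a minimal sketch, not part of the original slides):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# probabilities returned by the three models for classes c = 1, ..., 5\n",
    "probs = np.array([\n",
    "    [0.10, 0.40, 0.50, 0.00, 0.00],  # M1\n",
    "    [0.10, 0.60, 0.20, 0.00, 0.10],  # M2\n",
    "    [0.10, 0.30, 0.40, 0.00, 0.20],  # M3\n",
    "])\n",
    "\n",
    "# average the class probabilities over the models and pick the maximum\n",
    "mean_probs = probs.mean(axis=0)\n",
    "print('average probabilities:', np.round(mean_probs, 2))\n",
    "print('selected class: c =', mean_probs.argmax() + 1)"
   ]
  },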
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Class voting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
    "#### Example\n",
    "\n",
    "We have 3 models which, for the classes $c = 1, 2, 3, 4, 5$, returned the following probabilities:\n",
    "\n",
    "* $M_1$: [0.10, 0.40, **0.50**, 0.00, 0.00]\n",
    "* $M_2$: [0.10, **0.60**, 0.20, 0.00, 0.10]\n",
    "* $M_3$: [0.10, 0.30, **0.40**, 0.00, 0.20]\n",
    "\n",
    "Which class will be selected by voting?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "Number of votes: [0, 1, **2**, 0, 0]\n",
    "\n",
    "Class $c = 3$ is selected."
   ]
  },
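  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
    "Again, the vote count can be checked with a short NumPy snippet (a minimal sketch, not part of the original slides):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# probabilities returned by the three models for classes c = 1, ..., 5\n",
    "probs = np.array([\n",
    "    [0.10, 0.40, 0.50, 0.00, 0.00],  # M1\n",
    "    [0.10, 0.60, 0.20, 0.00, 0.10],  # M2\n",
    "    [0.10, 0.30, 0.40, 0.00, 0.20],  # M3\n",
    "])\n",
    "\n",
    "# each model votes for its most probable class (0-based indices 2, 1, 2 -> classes 3, 2, 3)\n",
    "votes = probs.argmax(axis=1)\n",
    "counts = np.bincount(votes, minlength=probs.shape[1])\n",
    "print('number of votes per class:', counts)\n",
    "print('selected class: c =', counts.argmax() + 1)"
   ]
  },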
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Other ensemble methods\n",
    "\n",
    " * Bagging\n",
    " * Boosting\n",
    " * Stacking\n",
    " \n",
    "https://towardsdatascience.com/ensemble-methods-bagging-boosting-and-stacking-c9214a10a205"
   ]
  },
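  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "source": [
    "As a quick pointer to the methods listed above, here is a minimal scikit-learn sketch (an illustrative addition, not part of the original slides) that trains a bagging, a boosting and a stacking classifier on a synthetic toy dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "subslide"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, StackingClassifier\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.tree import DecisionTreeClassifier\n",
    "\n",
    "# synthetic toy data, for illustration only\n",
    "X, y = make_classification(n_samples=500, n_features=10, random_state=0)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\n",
    "\n",
    "models = {\n",
    "    'bagging': BaggingClassifier(n_estimators=20, random_state=0),\n",
    "    'boosting': AdaBoostClassifier(n_estimators=20, random_state=0),\n",
    "    'stacking': StackingClassifier(\n",
    "        estimators=[('tree', DecisionTreeClassifier(random_state=0)),\n",
    "                    ('logreg', LogisticRegression(max_iter=1000))],\n",
    "        final_estimator=LogisticRegression(max_iter=1000),\n",
    "    ),\n",
    "}\n",
    "\n",
    "for name, model in models.items():\n",
    "    model.fit(X_train, y_train)\n",
    "    print(name, 'accuracy:', model.score(X_test, y_test))"
   ]
  }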
 ],
 "metadata": {
  "celltoolbar": "Slideshow",
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  },
  "livereveal": {
   "start_slideshow_at": "selected",
   "theme": "white"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}