{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "lnjzW0-TPMvF" }, "source": [ "# __Training a neural network for a toy classification problem__" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Ie13pHt3gyh8" }, "source": [ "

__Summary.__ We train a small neural network on a toy classification problem for points in the plane. Section 1 sets up the problem, the training and test data, and the shape of the network; Section 2 trains the network with the (full) gradient descent method; Section 3 uses the stochastic gradient method with minibatches; Section 4 stabilizes the stochastic method with a decaying learning rate.
\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We start by importing the *toynn_2023* library. It defines three classes *ToyPb*, *nD_data* and *ToyNN*.
\n", "See the ipython file *Introduction_to_the_toynn_2023_toolbox.ipynb* for a description of these classes and associated methods." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from toynn_2023 import *\n", "# also loads the libraries:\n", "# import numpy as np\n", "# from numpy import random as nprd\n", "# from matplotlib import pyplot as plt\n", "# from matplotlib import cm as cm\n", "# from copy import deepcopy as dcp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 1. Framework (problem, data, shape of the neural network) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "top\n", "  \n", "          \n", "          \n", "   \n", "1.\n", "  \n", "          \n", "          \n", "   \n", "2.\n", "  \n", "          \n", "          \n", "   \n", "3.\n", "  \n", "          \n", "          \n", "   \n", "4.\n", "  \n", "          \n", "          \n", "   \n", "bot." ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Woacru-MVEgA" }, "source": [ "We start by defining a classification problem of points in the plane. This problem is described by a _pb_ object of the _ToyPb_ class." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pb = ToyPb(name = \"ring\", bounds = (-1,1))\n", "pb.show_border()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We then build a set of random points of the plan which are \"tagged\" according to the previous problem ($1$ if the point is in the disk, $-1$ otherwise). This data is stored in a _data_ object of the *nD_data* class.
\n", "    \n", "($*$) The number of tagged points is *data.n*.
\n", "    \n", "($*$) The coordinates of these points are stored in the numpy array _data.X_ of size (*data.n*$)\\times2$.
\n", "    \n", "($*$) Tags are stored in the numpy _data.Y_ array of length *data.n*.\n", "\n", "The data from _data_ will be used for training. We can also build a set of data of the same type _test_ for the tests (for instance to check that there is no \"overfitting\"." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "ndata = 1000\n", "data = nD_data(n = ndata, pb = pb)\n", "\n", "ntest = 500\n", "test = nD_data(n = ntest, pb = pb, init_pred='yes')\n", "\n", "test.show_class()\n", "pb.show_border('k--')\n", "plt.legend(loc=1, fontsize=15)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we build an object of type _ToyNN_ which defines a type of neural network, characterized by the parameters _CardNodes_ and the activation function _chi_." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "CardNodes = (2, 4, 6, 4, 1)\n", "nn = ToyNN(card = CardNodes, chi=\"tanh\", grid=(-1,1,41))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 2. The gradient descent method\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "top\n", "          \n", "          \n", "          \n", "   \n", "1.\n", "          \n", "          \n", "          \n", "   \n", "2.\n", "          \n", "          \n", "          \n", "   \n", "3.\n", "          \n", "          \n", "          \n", "   \n", "bot." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We wish to minimize the function:\n", "$$\n", "F(A)=\\dfrac1{n_d}\\sum_{i=0}^{n_d-1}\\ell(h(x_i,A)\\times y_i).\n", "$$\n", "where:
\n", "    \n", "($*$) $A$ contains the coefficients (weights) of a neural network of type *nn*,
\n", "    \n", "($*$) The $x_i$ and $y_i$ are the training data stored in *data.X*[i] and *data.Y*[i],
\n", "    \n", "($*$) $h(x,A)$ is the value returned by the neural network of weights $A$ with the input $x$,
\n", "    \n", "($*$) $\\ell$ is the *pb.loss* error function." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One step of the gradient method consists of:\n", "$$\n", "A\\ \\longleftarrow\\ A - \\alpha\\nabla F(A).\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Initialization__. We define:
\n", "    \n", "($*$) An initial set of coefficients in the form of a randomly constructed coef-list _A_.
\n", "    \n", "($*$) An float _alpha_ corresponding to the learning rate ($\\alpha=0.05$ here).
\n", "    \n", "($*$) A total number of iterations _Niter_.
\n", "    \n", "($*$) A integer _niter_ initialized to 0 which will represent the number of iterations performed.
\n", "    \n", "($*$) An integer _Ndata_ representing the size of the data.
\n", "    \n", "($*$) A integer _niterplot_ indicating the frequency of plots during iterations (one plot every _niterplot_ iterations).
\n", "    \n", "($*$) An empty list *Total_loss* to store the evolution of the total error during the iterations:\n", "$$\n", "A\\ \\longleftarrow\\ A - \\alpha\\nabla F(A).\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Parameters\n", "alpha=.1 #(learning rate)\n", "Niter=500\n", "Ndata=data.n\n", "niterplot=50\n", "\n", "# Initializations\n", "A=nn.create_rand()\n", "niter=0\n", "Erreur =[]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Optimization loop__. We then implement the descent gradient method with constant step (or constant learning rate) _alpha_." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "posplot=0\n", "# Optimization loop for the gradient descent method\n", "for i_ in range(Niter):\n", " niter+=1\n", " # initialization of dA\n", " dA=nn.create_zero()\n", " # computation of -alpha*sum gradient of f_j\n", " for j in range(Ndata): nn.descent(A,data.X[j],data.Y[j],B=dA, pb=pb)\n", " \n", " # update df the coefficients of A\n", " nn.add(A,dA,c=alpha/Ndata, output=False)\n", " \n", " # computation of the error and graphic representations\n", " \n", " if not niter%niterplot:\n", " error = nn.total_loss_and_prediction(A,data,pb=pb)\n", " Erreur.append(error)\n", " if not posplot: plt.figure(figsize=(16,4))\n", " posplot+=1\n", " plt.subplot(1,4,posplot)\n", " data.show_class(pred=True)\n", " nn.show_pred(A)\n", " pb.show_border('k--')\n", " plt.title(f\"iteration {niter}, Total loss : {error:1.3e}.\", fontsize=12)\n", " if posplot==4 : \n", " plt.show()\n", " posplot=0\n", " else:\n", " error = nn.total_loss(A,data,pb=pb)\n", " Erreur.append(error) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that you can restart the optimization loop (without reinitializing) by executing the previous block again. You may repeat the operation until you are satisfied with the performance of the neural network. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use the method *nn.show()* to represent the coefficients of the last computed _A_.\n", "and we represent the evolution of the error along the iterations. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Graphic representation of the evolution of the error along the iterations.\n", "nn.show(A)\n", "\n", "plt.figure(figsize=(16,6))\n", "plt.subplot(121)\n", "plt.plot(np.linspace(1,niter,niter),Erreur)\n", "plt.title(\"Error as a function of the number of iterations\")\n", "\n", "plt.subplot(122)\n", "debut = niter//2\n", "plt.plot(np.linspace(debut+1, niter,niter-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of iterations\")\n", "plt.show()\n", "\n", "plt.figure(figsize=(16,6))\n", "plt.subplot(121)\n", "debut = 3*(niter//4)\n", "plt.plot(np.linspace(debut+1, niter,niter-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of iterations\")\n", "\n", "plt.subplot(122)\n", "debut = 7*(niter//8)\n", "plt.plot(np.linspace(debut+1, niter,niter-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of iterations\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 3. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "# 3. The stochastic gradient method with minibatch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The initialization is the same as in the previous part, except that the counter *niter* is replaced by the counter *nepoch*: we count the number of epochs instead of the number of iterations.\n", "\n", "We introduce the integer *nbatch*, the size of the minibatches.\n", "\n", "The cost of one iteration of the stochastic gradient method with minibatch is approximately (nbatch/Ndata) times the cost of one iteration of the full gradient method. So (for a fair comparison) one epoch contains (Ndata//nbatch) iterations. In the end, the cost of one epoch is approximately the same as the cost of one iteration of the full gradient method." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab_type": "text", "id": "GafO0zXoJ6Cx" }, "outputs": [], "source": [ "# Parameters\n", "alpha=0.1 #(learning rate)\n", "Nepoch=500\n", "Ndata=data.n\n", "nepochplot=20\n", "\n", "nbatch=30\n", "ItersInOneEpoch = Ndata//nbatch\n", "\n", "# Initializations\n", "A=nn.create_rand()\n", "nepoch=0\n", "Erreur =[]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The optimization loop is different. At each iteration, a tuple *J* of nbatch integers is drawn randomly in the set $\\{0,1,\\dots,Ndata-1\\}$ using *J=nprd.randint(Ndata,size=nbatch)*.\n", "\n", "We calculate the new iterate by making a gradient step for the function\n", "$$\n", "F_J(A)=\\frac1{nbatch}\\sum_{i\\in J}\\ell(h(x^i,A)\\times y^i),\n", "$$\n", "where $x^i=$*data.X*[i] and $y^i=$*data.Y*[i]. Since the indices in $J$ are drawn uniformly at random, $F_J$ is a cheap, unbiased estimate of the full loss $F$, so the stochastic steps follow the full gradient on average.
\n", "The new iterate is obtained by\n", "$$\n", "A\\ \\longleftarrow\\ A - \\alpha \\nabla F_i(A).\n", "$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "posplot=0\n", "# Optimization loop for the stochastic gradient method\n", "for i_ in range(Nepoch): # loop on the epochs\n", " nepoch+=1\n", " for j_ in range(ItersInOneEpoch): # in each epoch, loop on the iterations\n", " J = nprd.randint(Ndata, size=nbatch) # draw nbatch random integers \n", " \n", " # computation of the descent direction\n", " dA=nn.create_zero()\n", " for i in J: nn.descent(A, data.X[i], data.Y[i], B=dA, pb=pb)\n", " \n", " # update of the coefficients\n", " nn.add(A,dA,c=alpha/nbatch,output=False)\n", " \n", " # computation of the error and graphic representations\n", " if not nepoch%nepochplot:\n", " error = nn.total_loss_and_prediction(A,data,pb=pb)\n", " Erreur.append(error)\n", " if not posplot: plt.figure(figsize=(16,4))\n", " posplot+=1\n", " plt.subplot(1,4,posplot)\n", " data.show_class(pred=True)\n", " nn.show_pred(A)\n", " pb.show_border('k--')\n", " plt.title(f\"ep: {nepoch}, Loss: {error:1.5e}.\", fontsize=12)\n", " if posplot==4 : \n", " plt.show()\n", " posplot=0\n", " else:\n", " error = nn.total_loss(A,data,pb=pb)\n", " Erreur.append(error)\n", " #print(f\"epoch {nepoch}, Total loss : {error:1.5e}.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Graphic representations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Graphic representation of the evolution of the error along the iterations.\n", "nn.show(A)\n", "\n", "plt.figure(figsize=(16,6))\n", "plt.subplot(121)\n", "plt.plot(np.linspace(1,nepoch,nepoch),Erreur)\n", "plt.title(\"Error as a function of the number of epochs\")\n", "\n", "plt.subplot(122)\n", "debut = nepoch//2\n", "plt.plot(np.linspace(debut+1, nepoch,nepoch-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of epochs\")\n", "plt.show()\n", "\n", "plt.figure(figsize=(16,6))\n", "plt.subplot(121)\n", "debut = 3*(nepoch//4)\n", "plt.plot(np.linspace(debut+1, nepoch,nepoch-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of epochs\")\n", "\n", "plt.subplot(122)\n", "debut = 7*(nepoch//8)\n", "plt.plot(np.linspace(debut+1, nepoch,nepoch-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of epochs\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 4. Stabilization by decaying learning rate " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "top\n", "  \n", "          \n", "          \n", "   \n", "1.\n", "  \n", "          \n", "          \n", "   \n", "2.\n", "  \n", "          \n", "          \n", "   \n", "3.\n", "  \n", "          \n", "          \n", "   \n", "4.\n", "  \n", "          \n", "          \n", "   \n", "bot." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We try to stabilize the convergence of the algorithm by slowly decaying the learning rate, with the formula:\n", "\n", "$$\n", "\\alpha_k\\quad\\leftarrow\\quad\\frac{\\alpha_0}{1+k/K}.\n", "$$\n", "\n", "where $\\alpha_k$ is the learning rate at the $k^{\\text{th}}$ iteration (not the $k^{\\text{th}}$ epoch!). \n", "\n", "The parameter $\\alpha_0$ is the initial learning rate and the parameter $K$ controls the decay of the learning rates.\n", "\n", "Remark that after $mK$ iterations, the learning rate has been devided by $m+1$." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab_type": "text", "id": "GafO0zXoJ6Cx" }, "outputs": [], "source": [ "# Parameters\n", "alpha0=0.1 #(initial learning rate)\n", "Nepoch=500\n", "Ndata=data.n\n", "nepochplot=20\n", "\n", "nbatch=30\n", "\n", "ItersInOneEpoch = Ndata//nbatch\n", "K=100*(ItersInOneEpoch)\n", "\n", "# Initializations\n", "A=nn.create_rand()\n", "nepoch=0\n", "Erreur =[]\n", "k=0" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "posplot=0\n", "# Optimization loop for the stochastic gradient method with decaying learning rate\n", "for i_ in range(Nepoch): # loop on the epochs\n", "    nepoch+=1\n", "    for j_ in range(ItersInOneEpoch): # in each epoch, loop on the iterations\n", "        J = nprd.randint(Ndata, size=nbatch) # draw nbatch random integers\n", "\n", "        # computation of the descent direction\n", "        dA=nn.create_zero()\n", "        for i in J: nn.descent(A, data.X[i], data.Y[i], B=dA, pb=pb)\n", "\n", "        # update of the coefficients\n", "        alphak=alpha0/(1 + k/K)\n", "        nn.add(A,dA,c=alphak/nbatch,output=False)\n", "        k+=1\n", "\n", "    # computation of the error and graphic representations\n", "    if not nepoch%nepochplot:\n", "        error = nn.total_loss_and_prediction(A,data,pb=pb)\n", "        Erreur.append(error)\n", "        if not posplot: plt.figure(figsize=(16,4))\n", "        posplot+=1\n", "        plt.subplot(1,4,posplot)\n", "        data.show_class(pred=True)\n", "        nn.show_pred(A)\n", "        pb.show_border('k--')\n", "        plt.title(f\"ep: {nepoch}, Loss: {error:1.5e}.\", fontsize=12)\n", "        if posplot==4 :\n", "            plt.show()\n", "            posplot=0\n", "    else:\n", "        error = nn.total_loss(A,data,pb=pb)\n", "        Erreur.append(error)\n", "    #print(f\"epoch {nepoch}, Total loss : {error:1.5e}.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Graphic representations:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "## Graphic representation of the evolution of the error along the epochs.\n", "nn.show(A)\n", "\n", "plt.figure(figsize=(16,6))\n", "plt.subplot(121)\n", "plt.plot(np.linspace(1,nepoch,nepoch),Erreur)\n", "plt.title(\"Error as a function of the number of epochs\")\n", "\n", "plt.subplot(122)\n", "debut = nepoch//2\n", "plt.plot(np.linspace(debut+1, nepoch,nepoch-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of epochs\")\n", "plt.show()\n", "\n", "plt.figure(figsize=(16,6))\n", "plt.subplot(121)\n", "debut = 3*(nepoch//4)\n", "plt.plot(np.linspace(debut+1, nepoch,nepoch-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of epochs\")\n", "\n", "plt.subplot(122)\n", "debut = 7*(nepoch//8)\n", "plt.plot(np.linspace(debut+1, nepoch,nepoch-debut),Erreur[debut:])\n", "plt.title(\"Error as a function of the number of epochs\")\n", "plt.show()"
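] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a last illustration, we can look at the predictions of the final network on the independent *test* set built in Section 1. This is only a sketch: it assumes that *nn.total_loss_and_prediction* and *show_class* can be used on *test* exactly as they are used on *data* in the loops above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: predictions of the last computed A on the test set\n", "# (assumes the toolbox methods used on data above also accept the test set)\n", "error_test = nn.total_loss_and_prediction(A, test, pb=pb)\n", "test.show_class(pred=True)\n", "nn.show_pred(A)\n", "pb.show_border('k--')\n", "plt.title(f\"Test loss: {error_test:1.3e}\", fontsize=12)\n", "plt.show()"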
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# End of file " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "colab": { "collapsed_sections": [ "GafO0zXoJ6Cx", "5l_mvC1OJ6Da", "ZzS5-IzwaKn3", "89AjhkJ2aKoB" ], "name": "ToyNN_class.ipynb", "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 1 }