{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Debug CNTK programs\n", "\n", "> \"Help! I just got this recipe from the web, I don't understand what it does, why it fails, and how to modify it for my purposes\". --- Anonymous\n", "\n", "The purpose of this tutorial is to help you understand some of the facilities CNTK provides to make the development of deep learning models easier. Some of the advice here are considered good programming practices in general, but we will still cover them in the context of building models." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "collapsed": true }, "outputs": [], "source": [ "from __future__ import print_function\n", "import cntk as C\n", "import numpy as np\n", "import scipy.sparse as sparse\n", "import sys\n", "import cntk.tests.test_utils\n", "cntk.tests.test_utils.set_device_from_pytest_env() # (only needed for our build system)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Why isn't CNTK using my GPU?\n", "First check the following.\n", "- You have an NVidia GPU\n", "- It is listed when running nvidia-smi\n", "\n", "Then make sure CNTK sees your GPU: `all_devices()` returns all the available devices. If your GPU is not listed here, your installation is somehow broken. If CNTK lists a GPU, make sure no other CNTK process is using it (check nvidia-smi, under ``C:\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe`` on Windows and ``/usr/bin/nvidia-smi`` on Linux). If you have a zombie process using it you can try this \n", "\n", "- on Linux\n", " ```bash\n", " $ fuser -k /var/lock/CNTK_exclusive_lock_for_GPU_0\n", " ```\n", " will kill the process that created `/var/lock/CNTK_exclusive_lock_for_GPU_0`\n", "- on Windows\n", " * Make sure you have [Process Explorer](https://technet.microsoft.com/en-us/sysinternals/processexplorer.aspx)\n", " * Open Process Explorer and under View -> Select Columns... click on the GPU tab and check all the checkboxes\n", " * Now you should be able to sort all processes based on things like \"GPU System Bytes\" or other attributes. You can kill Python processes that are hogging your GPU(s) and this will automatically release the lock on this device.\n", "\n", "Even if some other process is using the GPU you can still use it as well with `try_set_default_device(C.gpu(0))`; the locks are only meant for automatic device selection to not accidentally allocate one GPU to two processes that are going to it heavily. If you know that's not the case, it's better to specify the GPU explicitly with `try_set_default_device` " ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(GPU[0] GeForce GTX TITAN X, CPU)" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "C.all_devices()" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n" ] } ], "source": [ "success=C.try_set_default_device(C.gpu(0))\n", "print(success)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "GPU[0] GeForce GTX TITAN X\n" ] } ], "source": [ "dev=C.use_default_device()\n", "print(dev)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Does this network do what I think it does?\n", "\n", "First, if you are coding something from scratch, start small and try to verify every step. 
Don't write a full network and hope everything will work when you use it. CNTK does some type checking as you construct the graph, but this can be limited, especially when you use placeholders (it's hard to prove that no input shape can match the requirements of the network). In particular, the CNTK layers library makes extensive use of placeholders, so error messages at the point of first use are quite common.\n", "\n", "There are multiple levels of verification you can engage in. The simplest one is to just print the functions you are building.\n", "Consider the following (broken) code: " ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Composite(Dense): Placeholder('x', [???], [???]) -> Output('Block2019_Output_0', [???], [???])\n" ] } ], "source": [ "def create_gru_stack(input_layer):\n", " e = C.layers.Embedding(300)(input_layer)\n", " return C.layers.Fold(C.layers.GRU(64))(e)\n", "\n", "def create_model(question_input, answer_input):\n", " with C.default_options(init=C.glorot_uniform()):\n", " question_stack = create_gru_stack(question_input)\n", " answer_stack = create_gru_stack(answer_input)\n", " combined = C.splice(question_stack, answer_stack)\n", " combined = C.layers.Dropout(0.5)(combined)\n", " combined = C.layers.LayerNormalization()(combined)\n", " combined = C.layers.Dense(64, activation=C.sigmoid)(combined)\n", " combined = C.layers.LayerNormalization()\n", " combined = C.layers.Dense(1, activation=C.softmax)(combined)\n", " return combined\n", "\n", "question_input = C.sequence.input_variable(shape=10, is_sparse=True, name='q_input')\n", "answer_input = C.sequence.input_variable(shape=10, is_sparse=True, name='a_input')\n", "\n", "model = create_model(question_input, answer_input)\n", "print(repr(model))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Digging deeper\n", "This doesn't look right. We have clearly given the function two sequences of vectors of dimensionality 10 each, yet the model has been created with a single Placeholder input of unknown dynamic axes, as indicated by the first `[???]`, and unknown shape, indicated by the second `[???]`. Because of that, the Output is also of unknown shape and dynamic axes. \n", "\n", "How do we find and eliminate the cause of this issue? One possibility is to do a sort of binary search. Clearly the model starts with well-defined inputs, but ends up ignoring them. At which point did this happen? We can try \"prefixes\" of the above model (i.e. including only the first few layers) in a binary search fashion.
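 As a sketch, this probing can even be automated with a small helper that applies the layers one at a time and prints each intermediate signature (the helper below is hypothetical, not part of CNTK):\n", "```python\n", "def debug_prefixes(layers, x):\n", "    f = x\n", "    for i, layer in enumerate(layers):\n", "        f = layer(f)  # apply the next layer\n", "        print('after layer %d: %s' % (i, repr(f)))\n", "    return f\n", "```\n", "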
Doing this search, we pretty soon find the following pair:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Composite(Dense): Input('q_input', [#, *], [10]), Input('a_input', [#, *], [10]) -> Output('Block3826_Output_0', [#], [64])\n", "Composite(LayerNormalization): Placeholder('x', [???], [???]) -> Output('Block5955_Output_0', [???], [???])\n" ] } ], "source": [ "def create_model_working(question_input, answer_input):\n", " with C.default_options(init=C.glorot_uniform()):\n", " question_stack = create_gru_stack(question_input)\n", " answer_stack = create_gru_stack(answer_input)\n", " combined = C.splice(question_stack, answer_stack)\n", " combined = C.layers.Dropout(0.5)(combined)\n", " combined = C.layers.LayerNormalization()(combined)\n", " combined = C.layers.Dense(64, activation=C.sigmoid)(combined)\n", " return combined\n", "\n", "def create_model_broken(question_input, answer_input):\n", " with C.default_options(init=C.glorot_uniform()):\n", " question_stack = create_gru_stack(question_input)\n", " answer_stack = create_gru_stack(answer_input)\n", " combined = C.splice(question_stack, answer_stack)\n", " combined = C.layers.Dropout(0.5)(combined)\n", " combined = C.layers.LayerNormalization()(combined)\n", " combined = C.layers.Dense(64, activation=C.sigmoid)(combined)\n", " combined = C.layers.LayerNormalization()\n", " return combined\n", "\n", "model_working = create_model_working(question_input, answer_input)\n", "print(repr(model_working))\n", "\n", "model_broken = create_model_broken(question_input, answer_input)\n", "print(repr(model_broken))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Aha!\n", "The problem is of course that we did not call \n", "```python\n", "combined = C.layers.LayerNormalization()(combined)\n", "```\n", "but \n", "```python\n", "combined = C.layers.LayerNormalization()\n", "``` \n", "which creates a layer normalization layer with a placeholder as an input.\n", "\n", "This mistake is easy to make because it is tedious to write `result = layer(layer_attributes)(result)` all the time. The layers library that comes with CNTK can eliminate these kinds of bugs." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Composite(Dense): Input('q_input', [#, *], [10]), Input('a_input', [#, *], [10]) -> Output('Block7931_Output_0', [#], [1])\n" ] } ], "source": [ "def create_model_layers(question_input, answer_input):\n", " with C.default_options(init=C.glorot_uniform()):\n", " question_stack = create_gru_stack(question_input)\n", " answer_stack = create_gru_stack(answer_input)\n", " combined = C.splice(question_stack, answer_stack)\n", " return C.layers.Sequential([C.layers.Dropout(0.5),\n", " C.layers.LayerNormalization(),\n", " C.layers.Dense(64, activation=C.sigmoid),\n", " C.layers.LayerNormalization(),\n", " C.layers.Dense(1, activation=C.softmax)])(combined)\n", "\n", "model_layers = create_model_layers(question_input, answer_input)\n", "print(repr(model_layers))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Guideline 1\n", "\n", "> Use the layers library as much as possible\n", "\n", "Similar advice exists for every programming language: the library that comes with CNTK is better tested than your code, and subsequent improvements in the library can automatically benefit your program.\n", "\n", "### Runtime errors\n", "\n", "The network above has more problems.
In particular, when we feed data to it, it will complain. The reason has to do with the meaning of `[#, *]`, which gets printed as part of the signature of `model_layers` above. CNTK uses `#` to mean the batch axis (the mnemonic is that the [number sign](https://en.wikipedia.org/wiki/Number_sign) `#` designates the number of samples in the minibatch). Traditionally, CNTK has used `*` to mean the default sequence axis. When two variables have the same dynamic axes, the sequences bound to them must line up exactly. So when we see that both inputs in the above example have dynamic axes `[#, *]`, it means that they must have the same length. This is clearly not reasonable in this example, where the length of the question and the length of the answer don't need to be the same. To fix this we need to explicitly say that `question` and `answer` can have different lengths. " ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Composite(Dense): Input('q_input', [#, q], [10]), Input('a_input', [#, a], [10]) -> Output('Block10087_Output_0', [#], [1])\n" ] } ], "source": [ "q_axis = C.Axis.new_unique_dynamic_axis('q')\n", "a_axis = C.Axis.new_unique_dynamic_axis('a')\n", "q_input = C.sequence.input_variable(shape=10, is_sparse=True, sequence_axis=q_axis, name='q_input')\n", "a_input = C.sequence.input_variable(shape=10, is_sparse=True, sequence_axis=a_axis, name='a_input')\n", "\n", "model_layers_distinct_axes = create_model_layers(q_input, a_input)\n", "print(repr(model_layers_distinct_axes))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Guideline 2\n", "\n", "> Understand CNTK's types and assumptions.\n", "\n", "The Python API documentation tries to include examples of usage for every basic operation, so it is easy to see what each operation expects and what it produces.\n", "\n", "### Guideline 3\n", "\n", "> When debugging, print each function to verify the types of its inputs and outputs.\n", "\n", "We have been able to catch two bugs so far simply by inspecting the output of print. For big models that you did not write yourself, you might have to do this on each layer, or in a binary search fashion as we did when finding the first bug.\n", "\n", "### Model bugs\n", "\n", "We are not done with the network above. So far we have only used printing of types to guide us, but this is not always enough to debug all issues. We can get more information from a function by plotting the underlying graph. That can be done with `logging.graph.plot`; it requires having [graphviz](http://www.graphviz.org) installed, with the binaries in your PATH environment variable. Inside a notebook we can display the network inline (use the scrollbar on the bottom and/or the right to see the whole network). Notice that none of the parameters are shared between the question and the answer.
A typical solution might want to share the embedding, or both the embedding and the GRU if data is limited.\n" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "network_graph\n", "\n", "\n", "Block10087\n", "\n", "Dense\n", "\n", "\n", "Block10087_Output_0\n", "\n", "[#](1,)\n", "\n", "\n", "Block10087->Block10087_Output_0\n", "\n", "\n", "[#](1,)\n", "\n", "\n", "Parameter9598\n", "\n", "W\n", "(64, 1)\n", "\n", "\n", "Parameter9598->Block10087\n", "\n", "\n", "W\n", "(64, 1)\n", "\n", "\n", "Parameter9599\n", "\n", "b\n", "(1,)\n", "\n", "\n", "Parameter9599->Block10087\n", "\n", "\n", "b\n", "(1,)\n", "\n", "\n", "Block10062\n", "\n", "LayerNormalization\n", "\n", "\n", "Block10062->Block10087\n", "\n", "\n", "Block10062_Output_0\n", "[#](64,)\n", "\n", "\n", "Parameter9549\n", "\n", "scale\n", "()\n", "\n", "\n", "Parameter9549->Block10062\n", "\n", "\n", "scale\n", "()\n", "\n", "\n", "Parameter9550\n", "\n", "bias\n", "()\n", "\n", "\n", "Parameter9550->Block10062\n", "\n", "\n", "bias\n", "()\n", "\n", "\n", "Block10022\n", "\n", "Dense\n", "\n", "\n", "Block10022->Block10062\n", "\n", "\n", "Block10022_Output_0\n", "[#](64,)\n", "\n", "\n", "Parameter9529\n", "\n", "W\n", "(128, 64)\n", "\n", "\n", "Parameter9529->Block10022\n", "\n", "\n", "W\n", "(128, 64)\n", "\n", "\n", "Parameter9530\n", "\n", "b\n", "(64,)\n", "\n", "\n", "Parameter9530->Block10022\n", "\n", "\n", "b\n", "(64,)\n", "\n", "\n", "Block9997\n", "\n", "LayerNormalization\n", "\n", "\n", "Block9997->Block10022\n", "\n", "\n", "Block9997_Output_0\n", "[#](128,)\n", "\n", "\n", "Parameter9480\n", "\n", "scale\n", "()\n", "\n", "\n", "Parameter9480->Block9997\n", "\n", "\n", "scale\n", "()\n", "\n", "\n", "Parameter9481\n", "\n", "bias\n", "()\n", "\n", "\n", "Parameter9481->Block9997\n", "\n", "\n", "bias\n", "()\n", "\n", "\n", "Block9959\n", "\n", "Dropout\n", "\n", "\n", "Block9959->Block9997\n", "\n", "\n", "Block9959_Output_0\n", "[#](128,)\n", "\n", "\n", "Splice9467\n", "\n", "Splice\n", "\n", "\n", "Splice9467->Block9959\n", "\n", "\n", "Splice9467_Output_0\n", "[#](128,)\n", "\n", "\n", "Block8726\n", "\n", "Sequence::Slice\n", "\n", "\n", "Block8726->Splice9467\n", "\n", "\n", "Block8726_Output_0\n", "[#](64,)\n", "\n", "\n", "Block9397\n", "\n", "Sequence::Slice\n", "\n", "\n", "Block9397->Splice9467\n", "\n", "\n", "Block9397_Output_0\n", "[#](64,)\n", "\n", "\n", "Block8682\n", "\n", "GRU\n", "\n", "\n", "Block8682->Block8726\n", "\n", "\n", "Block8682_Output_0\n", "[#,*](64,)\n", "\n", "\n", "PastValue8612\n", "\n", "PastValue\n", "\n", "\n", "Block8682->PastValue8612\n", "\n", "\n", "Block8682_Output_0\n", "[#,*](64,)\n", "\n", "\n", "Parameter8152\n", "\n", "b\n", "(192,)\n", "\n", "\n", "Parameter8152->Block8682\n", "\n", "\n", "b\n", "(192,)\n", "\n", "\n", "Parameter8153\n", "\n", "W\n", "(300, 192)\n", "\n", "\n", "Parameter8153->Block8682\n", "\n", "\n", "W\n", "(300, 192)\n", "\n", "\n", "Parameter8154\n", "\n", "H\n", "(64, 128)\n", "\n", "\n", "Parameter8154->Block8682\n", "\n", "\n", "H\n", "(64, 128)\n", "\n", "\n", "Parameter8155\n", "\n", "H1\n", "(64, 64)\n", "\n", "\n", "Parameter8155->Block8682\n", "\n", "\n", "H1\n", "(64, 64)\n", "\n", "\n", "PastValue8612->Block8682\n", "\n", "\n", "PastValue8612_Output_0\n", "[#,*](64,)\n", "\n", "\n", "Block8143\n", "\n", "Embedding\n", "\n", "\n", "Block8143->Block8682\n", "\n", "\n", "Block8143_Output_0\n", "[#,*](300,)\n", "\n", "\n", "Constant8248\n", "\n", "[ 0.]\n", 
"\n", "\n", "Constant8248->PastValue8612\n", "\n", "\n", "Constant8248\n", "(1,)\n", "\n", "\n", "Parameter8125\n", "\n", "E\n", "(10, 300)\n", "\n", "\n", "Parameter8125->Block8143\n", "\n", "\n", "E\n", "(10, 300)\n", "\n", "\n", "Input8123\n", "\n", "Input\n", "q_input\n", "[#,*](10,)\n", "\n", "\n", "Input8123->Block8143\n", "\n", "\n", "q_input\n", "[#,*](10,)\n", "\n", "\n", "Block9353\n", "\n", "GRU\n", "\n", "\n", "Block9353->Block9397\n", "\n", "\n", "Block9353_Output_0\n", "[#,*](64,)\n", "\n", "\n", "PastValue9283\n", "\n", "PastValue\n", "\n", "\n", "Block9353->PastValue9283\n", "\n", "\n", "Block9353_Output_0\n", "[#,*](64,)\n", "\n", "\n", "Parameter8823\n", "\n", "b\n", "(192,)\n", "\n", "\n", "Parameter8823->Block9353\n", "\n", "\n", "b\n", "(192,)\n", "\n", "\n", "Parameter8824\n", "\n", "W\n", "(300, 192)\n", "\n", "\n", "Parameter8824->Block9353\n", "\n", "\n", "W\n", "(300, 192)\n", "\n", "\n", "Parameter8825\n", "\n", "H\n", "(64, 128)\n", "\n", "\n", "Parameter8825->Block9353\n", "\n", "\n", "H\n", "(64, 128)\n", "\n", "\n", "Parameter8826\n", "\n", "H1\n", "(64, 64)\n", "\n", "\n", "Parameter8826->Block9353\n", "\n", "\n", "H1\n", "(64, 64)\n", "\n", "\n", "PastValue9283->Block9353\n", "\n", "\n", "PastValue9283_Output_0\n", "[#,*](64,)\n", "\n", "\n", "Block8814\n", "\n", "Embedding\n", "\n", "\n", "Block8814->Block9353\n", "\n", "\n", "Block8814_Output_0\n", "[#,*](300,)\n", "\n", "\n", "Constant8919\n", "\n", "[ 0.]\n", "\n", "\n", "Constant8919->PastValue9283\n", "\n", "\n", "Constant8919\n", "(1,)\n", "\n", "\n", "Parameter8796\n", "\n", "E\n", "(10, 300)\n", "\n", "\n", "Parameter8796->Block8814\n", "\n", "\n", "E\n", "(10, 300)\n", "\n", "\n", "Input8124\n", "\n", "Input\n", "a_input\n", "[#,*](10,)\n", "\n", "\n", "Input8124->Block8814\n", "\n", "\n", "a_input\n", "[#,*](10,)\n", "\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from IPython.display import SVG, display\n", "\n", "def display_model(model):\n", " svg = C.logging.graph.plot(model, \"tmp.svg\")\n", " display(SVG(filename=\"tmp.svg\"))\n", "\n", "display_model(model_layers_distinct_axes)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's fix this by sharing the embedding. Sharing the GRU parameters can be done in an even simpler way as shown in the unused function `create_model_shared_all`. In the layers library, passing an input to a layer means sharing parameters with all other inputs that get passed to this layer. If you need a copy of the parameters you need to explicitly make one either via `clone()` or by creating a new layer object. 
" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "network_graph\n", "\n", "\n", "Block12641\n", "\n", "Dense\n", "\n", "\n", "Block12641_Output_0\n", "\n", "[#](1,)\n", "\n", "\n", "Block12641->Block12641_Output_0\n", "\n", "\n", "[#](1,)\n", "\n", "\n", "Parameter12152\n", "\n", "W\n", "(64, 1)\n", "\n", "\n", "Parameter12152->Block12641\n", "\n", "\n", "W\n", "(64, 1)\n", "\n", "\n", "Parameter12153\n", "\n", "b\n", "(1,)\n", "\n", "\n", "Parameter12153->Block12641\n", "\n", "\n", "b\n", "(1,)\n", "\n", "\n", "Block12616\n", "\n", "LayerNormalization\n", "\n", "\n", "Block12616->Block12641\n", "\n", "\n", "Block12616_Output_0\n", "[#](64,)\n", "\n", "\n", "Parameter12103\n", "\n", "scale\n", "()\n", "\n", "\n", "Parameter12103->Block12616\n", "\n", "\n", "scale\n", "()\n", "\n", "\n", "Parameter12104\n", "\n", "bias\n", "()\n", "\n", "\n", "Parameter12104->Block12616\n", "\n", "\n", "bias\n", "()\n", "\n", "\n", "Block12576\n", "\n", "Dense\n", "\n", "\n", "Block12576->Block12616\n", "\n", "\n", "Block12576_Output_0\n", "[#](64,)\n", "\n", "\n", "Parameter12083\n", "\n", "W\n", "(128, 64)\n", "\n", "\n", "Parameter12083->Block12576\n", "\n", "\n", "W\n", "(128, 64)\n", "\n", "\n", "Parameter12084\n", "\n", "b\n", "(64,)\n", "\n", "\n", "Parameter12084->Block12576\n", "\n", "\n", "b\n", "(64,)\n", "\n", "\n", "Block12551\n", "\n", "LayerNormalization\n", "\n", "\n", "Block12551->Block12576\n", "\n", "\n", "Block12551_Output_0\n", "[#](128,)\n", "\n", "\n", "Parameter12034\n", "\n", "scale\n", "()\n", "\n", "\n", "Parameter12034->Block12551\n", "\n", "\n", "scale\n", "()\n", "\n", "\n", "Parameter12035\n", "\n", "bias\n", "()\n", "\n", "\n", "Parameter12035->Block12551\n", "\n", "\n", "bias\n", "()\n", "\n", "\n", "Block12513\n", "\n", "Dropout\n", "\n", "\n", "Block12513->Block12551\n", "\n", "\n", "Block12513_Output_0\n", "[#](128,)\n", "\n", "\n", "Splice12021\n", "\n", "Splice\n", "\n", "\n", "Splice12021->Block12513\n", "\n", "\n", "Splice12021_Output_0\n", "[#](128,)\n", "\n", "\n", "Block11096\n", "\n", "Sequence::Slice\n", "\n", "\n", "Block11096->Splice12021\n", "\n", "\n", "Block11096_Output_0\n", "[#](64,)\n", "\n", "\n", "Block11951\n", "\n", "Sequence::Slice\n", "\n", "\n", "Block11951->Splice12021\n", "\n", "\n", "Block11951_Output_0\n", "[#](64,)\n", "\n", "\n", "Block11052\n", "\n", "GRU\n", "\n", "\n", "Block11052->Block11096\n", "\n", "\n", "Block11052_Output_0\n", "[#,*](64,)\n", "\n", "\n", "PastValue10972\n", "\n", "PastValue\n", "\n", "\n", "Block11052->PastValue10972\n", "\n", "\n", "Block11052_Output_0\n", "[#,*](64,)\n", "\n", "\n", "Parameter10311\n", "\n", "b\n", "(192,)\n", "\n", "\n", "Parameter10311->Block11052\n", "\n", "\n", "b\n", "(192,)\n", "\n", "\n", "Parameter10312\n", "\n", "W\n", "(300, 192)\n", "\n", "\n", "Parameter10312->Block11052\n", "\n", "\n", "W\n", "(300, 192)\n", "\n", "\n", "Parameter10313\n", "\n", "H\n", "(64, 128)\n", "\n", "\n", "Parameter10313->Block11052\n", "\n", "\n", "H\n", "(64, 128)\n", "\n", "\n", "Parameter10314\n", "\n", "H1\n", "(64, 64)\n", "\n", "\n", "Parameter10314->Block11052\n", "\n", "\n", "H1\n", "(64, 64)\n", "\n", "\n", "PastValue10972->Block11052\n", "\n", "\n", "PastValue10972_Output_0\n", "[#,*](64,)\n", "\n", "\n", "Block10981\n", "\n", "Embedding\n", "\n", "\n", "Block10981->Block11052\n", "\n", "\n", "Block10981_Output_0\n", "[#,*](300,)\n", "\n", "\n", "Constant10407\n", "\n", "[ 0.]\n", "\n", "\n", "Constant10407->PastValue10972\n", 
"\n", "\n", "Constant10407\n", "(1,)\n", "\n", "\n", "Parameter10300\n", "\n", "E\n", "(10, 300)\n", "\n", "\n", "Parameter10300->Block10981\n", "\n", "\n", "E\n", "(10, 300)\n", "\n", "\n", "Block11836\n", "\n", "Embedding\n", "\n", "\n", "Parameter10300->Block11836\n", "\n", "\n", "E\n", "(10, 300)\n", "\n", "\n", "Input8123\n", "\n", "Input\n", "q_input\n", "[#,*](10,)\n", "\n", "\n", "Input8123->Block10981\n", "\n", "\n", "q_input\n", "[#,*](10,)\n", "\n", "\n", "Block11907\n", "\n", "GRU\n", "\n", "\n", "Block11907->Block11951\n", "\n", "\n", "Block11907_Output_0\n", "[#,*](64,)\n", "\n", "\n", "PastValue11827\n", "\n", "PastValue\n", "\n", "\n", "Block11907->PastValue11827\n", "\n", "\n", "Block11907_Output_0\n", "[#,*](64,)\n", "\n", "\n", "Parameter11166\n", "\n", "b\n", "(192,)\n", "\n", "\n", "Parameter11166->Block11907\n", "\n", "\n", "b\n", "(192,)\n", "\n", "\n", "Parameter11167\n", "\n", "W\n", "(300, 192)\n", "\n", "\n", "Parameter11167->Block11907\n", "\n", "\n", "W\n", "(300, 192)\n", "\n", "\n", "Parameter11168\n", "\n", "H\n", "(64, 128)\n", "\n", "\n", "Parameter11168->Block11907\n", "\n", "\n", "H\n", "(64, 128)\n", "\n", "\n", "Parameter11169\n", "\n", "H1\n", "(64, 64)\n", "\n", "\n", "Parameter11169->Block11907\n", "\n", "\n", "H1\n", "(64, 64)\n", "\n", "\n", "PastValue11827->Block11907\n", "\n", "\n", "PastValue11827_Output_0\n", "[#,*](64,)\n", "\n", "\n", "Block11836->Block11907\n", "\n", "\n", "Block11836_Output_0\n", "[#,*](300,)\n", "\n", "\n", "Constant11262\n", "\n", "[ 0.]\n", "\n", "\n", "Constant11262->PastValue11827\n", "\n", "\n", "Constant11262\n", "(1,)\n", "\n", "\n", "Input8124\n", "\n", "Input\n", "a_input\n", "[#,*](10,)\n", "\n", "\n", "Input8124->Block11836\n", "\n", "\n", "a_input\n", "[#,*](10,)\n", "\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "def create_model_shared_embedding(question_input, answer_input):\n", " with C.default_options(init=C.glorot_uniform()):\n", " e = C.layers.Embedding(300)\n", " question_stack = C.layers.Sequential([e, C.layers.Fold(C.layers.GRU(64))])(question_input)\n", " answer_stack = C.layers.Sequential([e, C.layers.Fold(C.layers.GRU(64))])(answer_input)\n", " combined = C.splice(question_stack, answer_stack)\n", " model = C.layers.Sequential([C.layers.Dropout(0.5),\n", " C.layers.LayerNormalization(),\n", " C.layers.Dense(64, activation=C.sigmoid),\n", " C.layers.LayerNormalization(),\n", " C.layers.Dense(1, activation=C.softmax)])\n", " return model(combined)\n", "\n", "def create_model_shared_all(question_input, answer_input):\n", " with C.default_options(init=C.glorot_uniform()):\n", " stack = C.layers.Sequential([C.layers.Embedding(300), C.layers.Fold(C.layers.GRU(64))])\n", " question_stack = stack(question_input)\n", " answer_stack = stack(answer_input)\n", " combined = C.splice(question_stack, answer_stack)\n", " model = C.layers.Sequential([cl.Dropout(0.5),\n", " C.layers.LayerNormalization(),\n", " C.layers.Dense(64, activation=C.sigmoid),\n", " C.layers.LayerNormalization(),\n", " C.layers.Dense(1, activation=C.softmax)])\n", " return model(combined)\n", "\n", "model_shared_embedding = create_model_shared_embedding(q_input, a_input)\n", "\n", "display_model(model_shared_embedding)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Guideline 4\n", "\n", "> Verify weight sharing and other structural issues in your network by plotting the underlying graph\n", "\n", "We are much better at processing visual information than by 
following the equations of a big model. With CNTK, only the necessary dimensions need to be specified and everything else can be inferred. However, when we plot a graph we can see the shapes of all inputs, outputs, and parameters at the same time, without having to do the shape inference in our heads. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### More model bugs\n", "\n", "Once all the structural bugs have been eliminated, we can proceed to find bugs related to running data through the network. We can start by feeding random data, but a better choice is to feed the first few minibatches of real data through the network. This can reveal scale issues, i.e. the output or some other intermediate layer taking on values that are too large. \n", "\n", "A common cause of this is that the **learning rate is too high**. This will be observed from the second minibatch onwards and it can cause the learning to diverge. If you see large values in parameters or other outputs, just reduce the learning rate by a factor of 2 and retry until things look stable. \n", "\n", "Another possibility is that the **data contains large values**, which can cause intermediate outputs to become large and even overflow if the network is doing a lot of processing (such as an RNN on a long sequence or a very deep network). The training procedures currently in use actually work better when the input values do not contain outliers and are centered or close to 0 (this is the reason why in many examples with image data the first thing that happens is the subtraction of the average pixel value). If you have large values in the input, you can try dividing the data by the maximum value. If you have non-negative values and you want to mostly preserve the order of magnitude but don't care so much about the exact value, you can transform your inputs with a `log`, i.e. `transformed_features = C.log(1+features)`.\n", "\n", "In our sample code we have a problem that could be detected simply by feeding random data, so we will do just that:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 1.],\n", " [ 1.],\n", " [ 1.],\n", " [ 1.],\n", " [ 1.]], dtype=float32)" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "random_questions = [sparse.rand(i*i+1, 10, density=0.5, format='csr', dtype=np.float32) for i in range(5)]\n", "random_answers = [sparse.rand(i+1, 10, density=0.5, format='csr', dtype=np.float32) for i in range(5)] \n", "\n", "model_shared_embedding.eval({q_input:random_questions, a_input:random_answers})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is very suspicious. We gave 5 **random** \"questions\" (of lengths 1, 2, 5, 10, and 17), and 5 **random** \"answers\" (of lengths 1, 2, 3, 4, and 5) and we got the **same** response. Again, we can perform a binary search through the network to see where the responses become so uniform.
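 One convenient trick during such a search is to evaluate an intermediate node directly; as a sketch (assuming the hypothetical name `'dense64'` had been given to one of the layers when building the model):\n", "```python\n", "inner = model_shared_embedding.find_by_name('dense64')\n", "if inner is not None:\n", "    print(inner.eval({q_input: random_questions, a_input: random_answers}))\n", "```\n", "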
We find that the following network behaves as expected:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ 1.32114685e+00 2.15835363e-01 1.21132720e+00]\n", " [ 1.14431441e+00 -4.78121191e-01 -7.26600170e-01]\n", " [ 9.21114624e-01 -1.12726915e+00 7.72424161e-01]\n", " [ 1.31514442e+00 -4.65529203e-01 4.94043827e-01]\n", " [ 1.53898752e+00 -1.37763005e-03 1.37341189e+00]]\n" ] } ], "source": [ "def create_model_shared_embedding_working(question_input, answer_input):\n", " with C.default_options(init=C.glorot_uniform()):\n", " e = C.layers.Embedding(300)\n", " question_stack = C.layers.Sequential([e, C.layers.Fold(C.layers.GRU(64))])(question_input)\n", " answer_stack = C.layers.Sequential([e, C.layers.Fold(C.layers.GRU(64))])(answer_input)\n", " combined = C.splice(question_stack, answer_stack)\n", " model = C.layers.Sequential([C.layers.Dropout(0.5),\n", " C.layers.LayerNormalization(),\n", " C.layers.Dense(64, activation=C.sigmoid),\n", " C.layers.LayerNormalization()])\n", " return model(combined)\n", "\n", "model_shared_embedding_working = create_model_shared_embedding_working(q_input, a_input)\n", "\n", "working_outputs = model_shared_embedding_working.eval({q_input:random_questions, a_input:random_answers})\n", "print(working_outputs[:,:3])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The only difference, then, is this line:\n", "```python\n", "C.layers.Dense(1, activation=C.softmax)\n", "```\n", "We can play around a little more, e.g. by modifying the activation or the number of outputs, and we find that everything works except for the combination of arguments given above. If we look at the definition of softmax we can see the problem:\n", "$$\n", "\\textrm{softmax}(z) = \\left(\\begin{array}{c} \\frac{\\exp(z_1)}{\\sum_j \\exp(z_j)}\\\\ \\frac{\\exp(z_2)}{\\sum_j \\exp(z_j)}\\\\ \\vdots \\\\ \\frac{\\exp(z_n)}{\\sum_j \\exp(z_j)} \\end{array}\\right)\n", "$$\n", "\n", "and we only have one output! So the softmax will compute the exponential of that output and then **divide it by itself**, giving us 1. One solution here is to have two outputs, one for each class. This is different from how binary classification is typically done, where there is a single output representing the probability of the positive class; that latter approach can be implemented with a sigmoid non-linearity. Therefore either of the following will work:\n", "```python\n", "C.layers.Dense(1, activation=C.sigmoid)\n", "```\n", "or\n", "```python\n", "C.layers.Dense(2, activation=C.softmax)\n", "```\n", "\n", "### Guideline 5\n", "\n", "> Feed some data to your network and look for large values in the output or other suspicious behavior.\n", "\n", "It's also good if you can train for a few minibatches to see whether different outputs in the network exhibit worrisome trends, which could mean that your learning rate is too large.\n", "\n", "\n", "### Tricky errors\n", "\n", "Even after you have tried all of the above, you might still run into problems. One example is a `NaN` (Not-a-Number), which you can get from operations whose meaning is not defined (for example $0 \\times \\infty$ or ${(-0.5)}^{0.5}$). Another case is if you are writing your own layer and it is not behaving as expected. CNTK offers some support for finding such issues. Here's a contrived example that demonstrates how to catch where `NaN`s are generated."
] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ nan]], dtype=float32)" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "C.debugging.set_checked_mode(False)\n", "w = C.input_variable(1)\n", "x = C.input_variable(1)\n", "y = C.layers.Sequential([C.square, C.square, C.square])\n", "z = C.exp(-y(x))*C.exp(y(w))+1\n", "\n", "w0 = np.array([3.0],dtype=np.float32)\n", "x0 = np.array([3.0],dtype=np.float32)\n", "z.eval({w:w0, x:x0})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The code computes $3^8=6561$ and then takes the exponetial of it (which overflows to infinity) and the expoential of it's negative (which underflows to 0). The result above is because $0 \\times \\infty$ is `NaN` according to the floating point standard. If we understand the issue like in this contrived example we can rearrange our computations for example as " ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 2.]], dtype=float32)" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "z_stable = C.exp(-y(x)+y(w))+1\n", "\n", "w0 = np.array([3.0],dtype=np.float32)\n", "x0 = np.array([3.0],dtype=np.float32)\n", "z_stable.eval({w:w0, x:x0})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Typically we don't know what causes the `NaN` to get generated. CNTK provides a \"checked mode\" where `NaN`s can cause an exception. The request for checked_mode needs to be specified before the function is created." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ nan]], dtype=float32)" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "C.debugging.set_checked_mode(True)\n", "z.eval({w:w0, x:x0})" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Error: ElementTimes15622 ElementTimes operation unexpectedly produced NaN values.\n" ] }, { "data": { "image/svg+xml": [ "\n", "\n", "network_graph\n", "\n", "\n", "Plus15626\n", "\n", "+\n", "\n", "\n", "Plus15626_Output_0\n", "\n", "[#](1,)\n", "\n", "\n", "Plus15626->Plus15626_Output_0\n", "\n", "\n", "[#](1,)\n", "\n", "\n", "ElementTimes15622\n", "\n", "*\n", "\n", "\n", "ElementTimes15622->Plus15626\n", "\n", "\n", "ElementTimes15622_Output_0\n", "[#](1,)\n", "\n", "\n", "Constant15625\n", "\n", "1.0\n", "\n", "\n", "Constant15625->Plus15626\n", "\n", "\n", "Constant15625\n", "()\n", "\n", "\n", "Exp15604\n", "\n", "Exp\n", "\n", "\n", "Exp15604->ElementTimes15622\n", "\n", "\n", "Exp15604_Output_0\n", "[#](1,)\n", "\n", "\n", "Exp15619\n", "\n", "Exp\n", "\n", "\n", "Exp15619->ElementTimes15622\n", "\n", "\n", "Exp15619_Output_0\n", "[#](1,)\n", "\n", "\n", "Negate15601\n", "\n", "Negate\n", "\n", "\n", "Negate15601->Exp15604\n", "\n", "\n", "Negate15601_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15595\n", "\n", "*\n", "\n", "\n", "ElementTimes15595->Negate15601\n", "\n", "\n", "ElementTimes15595_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15593\n", "\n", "*\n", "\n", "\n", "ElementTimes15593->ElementTimes15595\n", "\n", "\n", "ElementTimes15593_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15593->ElementTimes15595\n", "\n", "\n", "ElementTimes15593_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15591\n", "\n", "*\n", "\n", 
"\n", "ElementTimes15591->ElementTimes15593\n", "\n", "\n", "ElementTimes15591_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15591->ElementTimes15593\n", "\n", "\n", "ElementTimes15591_Output_0\n", "[#](1,)\n", "\n", "\n", "Input15475\n", "\n", "Input\n", "[#](1,)\n", "\n", "\n", "Input15475->ElementTimes15591\n", "\n", "\n", "Input15475\n", "[#](1,)\n", "\n", "\n", "Input15475->ElementTimes15591\n", "\n", "\n", "Input15475\n", "[#](1,)\n", "\n", "\n", "ElementTimes15613\n", "\n", "*\n", "\n", "\n", "ElementTimes15613->Exp15619\n", "\n", "\n", "ElementTimes15613_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15611\n", "\n", "*\n", "\n", "\n", "ElementTimes15611->ElementTimes15613\n", "\n", "\n", "ElementTimes15611_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15611->ElementTimes15613\n", "\n", "\n", "ElementTimes15611_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15609\n", "\n", "*\n", "\n", "\n", "ElementTimes15609->ElementTimes15611\n", "\n", "\n", "ElementTimes15609_Output_0\n", "[#](1,)\n", "\n", "\n", "ElementTimes15609->ElementTimes15611\n", "\n", "\n", "ElementTimes15609_Output_0\n", "[#](1,)\n", "\n", "\n", "Input15474\n", "\n", "Input\n", "[#](1,)\n", "\n", "\n", "Input15474->ElementTimes15609\n", "\n", "\n", "Input15474\n", "[#](1,)\n", "\n", "\n", "Input15474->ElementTimes15609\n", "\n", "\n", "Input15474\n", "[#](1,)\n", "\n", "\n", "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "C.debugging.set_checked_mode(True)\n", "z_checked = C.exp(-y(x))*C.exp(y(w))+1\n", "try:\n", " z_checked.eval({w:w0, x:x0})\n", "except:\n", " exc_type, exc_value, exc_traceback = sys.exc_info()\n", " error_msg = str(exc_value).split('\\n')[0]\n", " print(\"Error: %s\"%error_msg)\n", "display_model(z_checked)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Searching for the name of the operation in the graph, we (unsurprisingly) find that it is the multiplication of the two exponentials that is causing the issue (The number after ElementTimes matches the output name of that node in the graph).\n", "\n", "### Guideline 6\n", "\n", "> Use `set_checked_mode(True)` to figure out which operation is producing NaNs\n", "\n", "This was a contrived example, and in some cases you can get NaNs after many hours of training. Checked mode is introducing a big performance hit so you will need to rerun from your last valid checkpoint. While that is happening on the background inspect your graph for operations that can cause problems such as exponentials that can produce very large numbers.\n", "\n", "Finally, the debugging module includes two other ways that can help you find problems with your code\n", "\n", "* The `set_computation_network_trace_level()` function takes an integer argument that determines the amount of information that CNTK will produce for each operation in the graph\n", " - 1: outputs the dimensions and some other static information\n", " - 1000: outputs for each minibatch the sum of absolute values of all the elements in the output (this can help catch sign and permutation errors) \n", " - 1000000: outputs for each minibatch all the elements in the output. This is intended to be used as a last resort.\n", "* The `debug_model()` function takes a CNTK network and returns a new network that has debug operations inserted everywhere. Debug operations can let you inspect the values that flow through the network in the forward and backward direction in an interactive way in the console. 
This is hard to appreciate in this tutorial, since if we were to run it here it would freeze the notebook (waiting for user input), but you are welcome to check out the [documentation](https://www.cntk.ai/pythondocs/cntk.debugging.debug.html) on how it works and try it out!\n", "\n", "### Guideline 7\n", "\n", "> Use `debug_model(function)` and `set_computation_network_trace_level(level)` to smoke out any remaining bugs. \n", "\n", "\n", "### Very advanced bugs\n", "\n", "Beyond the debugging module, there are a few more internal APIs that can help with certain classes of bugs. All of these internal APIs are in the `cntk.cntk_py` module, so when we refer to, say, `force_deterministic_algorithms()`, that really means\n", "`cntk_py.force_deterministic_algorithms()`. The following functions can be useful:\n", "- **`force_deterministic_algorithms()`**: Many of the libraries we use offer various algorithms for performing each operation. Typically the fastest algorithms are non-deterministic because the output is a summation (as in the case of matrix products or convolutions) and multiple threads are working on partial sums that have to be added together. Since addition of floating point numbers is not associative, you can get different results from different executions. `force_deterministic_algorithms()` will make all subsequent operations select a slower but deterministic algorithm, if one is available. This is useful when bitwise reproducibility is important.\n", "- **`set_gpumemory_allocation_trace_level(level)`**: Sets the trace level for GPU memory allocations. A value greater than 0 will cause the GPU memory allocator to print information about the allocation, the free and total memory, and a call stack of where this allocation was called from. This can be useful in debugging out-of-memory issues on the GPU.\n", "- **`enable_synchronous_gpukernel_execution()`**: Makes all GPU kernel launches synchronous. This can help with profiling execution times, because the profile of a program with asynchronous execution of GPU kernels can be hard to interpret.\n", "- **`set_fixed_random_seed(value)`**: All CNTK code goes through a single GenerateRandomSeed API, which by default assigns distinct random seeds to each operation that requires randomness (including random initialization, dropout, and random number generation according to a distribution). With this call, all these operations will have the same fixed random seed, which can help debug reproducibility issues after you have refactored your program and some parts of the network are now created in a different order. There is still some legacy code that picks the random seed in other ways, so you can still get non-reproducible results with this option. Furthermore, this option reduces the statistical quality of dropout and other random operations in the network and should be used with care.\n", "- **`disable_forward_values_sharing()`**: CNTK is very aggressive about reusing GPU memory. There are many opportunities both during the forward and the backward pass where a buffer of intermediate results can be reused. Unfortunately, if you write a new operation and do not properly mark which buffers should and should not be reused, you can get very subtle bugs. The backward value sharing is straightforward and you cannot do much to cause CNTK to get it wrong. If you suspect such a bug, you can see whether disabling forward value (buffer) sharing leads to different results.
If so, you need to investigate whether your operation is improperly marking some buffers as possible to share.\n", "\n", "\n", "### Guideline 8\n", "\n", "> Use `cntk_py.set_gpumemory_allocation_trace_level(1)` to find out why you are running out of GPU memory.\n", "\n", "### Guideline 9\n", "\n", "> Use `cntk_py.enable_synchronous_gpukernel_execution()` to make the profiling results easier to understand.\n", "\n", "### Guideline 10\n", "\n", "> Use `cntk_py.force_deterministic_algorithms()` and `cntk_py.set_fixed_random_seed(seed)` to improve reproducibility.\n", "\n", "### Guideline 11\n", "\n", "> Use `cntk_py.disable_forward_values_sharing()` if you suspect a memory sharing issue with CNTK." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true }, "outputs": [], "source": [ "# For testing purposes, ensure that what the guide says can be executed without failures\n", "C.debugging.set_computation_network_trace_level(1)\n", "C.cntk_py.set_gpumemory_allocation_trace_level(1)\n", "C.cntk_py.enable_synchronous_gpukernel_execution()\n", "C.cntk_py.force_deterministic_algorithms() \n", "C.cntk_py.set_fixed_random_seed(98052)\n", "C.cntk_py.disable_forward_values_sharing()\n", "dm = C.debugging.debug_model(model_shared_embedding_working) " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 1 }