{ "cells": [ { "cell_type": "markdown", "id": "46879ddc", "metadata": {}, "source": [ "
\"Open\n", "
\n", "\"GitHub
" ] }, { "cell_type": "markdown", "id": "88f4dd69", "metadata": {}, "source": [ "# Nice MathJax for Matrices\n", "\n", "The purpose of this notebook is to make the plain ASCII form of matrices look more visually appealing. All of this is done with just a few lines of code inside Jupyter notebooks, with the help of [MathJax](https://www.mathjax.org/) and the `IPython.display` methods, in particular `Math` and `Markdown`.\n", "\n", "A great source of information about `MathJax` syntax is the [Basic Tutorial](https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference)." ] }, { "cell_type": "markdown", "id": "ad2f8507", "metadata": {}, "source": [ "## The eye-candy functions" ] }, { "cell_type": "markdown", "id": "31026c0b", "metadata": {}, "source": [ "### Some code hygiene\n", "\n", "Let's make the code more reproducible by capturing the module versions and the environment using [watermark](https://github.com/rasbt/watermark)." ] }, { "cell_type": "code", "execution_count": 1, "id": "9756d04d", "metadata": {}, "outputs": [], "source": [ "!pip -q install watermark\n", "# assuming that you have already installed all the dependencies and libraries\n", "# uncomment below if needed\n", "#!pip -q install tensorflow torch numpy" ] }, { "cell_type": "code", "execution_count": 2, "id": "61108a82", "metadata": {}, "outputs": [], "source": [ "from IPython.display import Math\n", "from IPython.display import Markdown\n", "import tensorflow as tf\n", "import torch\n", "import numpy as np" ] }, { "cell_type": "code", "execution_count": 3, "id": "c54fc993", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Last updated: 2021-08-02T13:07:23.444236+02:00\n", "\n", "Compiler : Clang 12.0.5 (clang-1205.0.22.9)\n", "OS : Darwin\n", "Release : 20.6.0\n", "Machine : x86_64\n", "Processor : i386\n", "CPU cores : 8\n", "Architecture: 64bit\n", "\n", "torch : 1.9.0\n", "numpy : 1.19.5\n", "tensorflow: 2.5.0\n", "\n", "Watermark: 2.2.0\n", "\n" ] } ], "source": [ "%load_ext watermark\n", "%watermark -iv -i -z -u -m -w" ] }, { "cell_type": "markdown", "id": "3fbfcf42", "metadata": {}, "source": [ "### The pair of functions\n", "\n", "The functions below are the heart of this notebook. Together with the imports above, they are the piece that should be reused wherever needed; most probably just the `sm_md` function, as it is the more portable of the two."
] }, { "cell_type": "code", "execution_count": 4, "id": "48602248", "metadata": {}, "outputs": [], "source": [ "## The Math version of matrix formatting\n", "# which may or may not be available; relies on __IPython.display.Math__ working properly\n", "# (not supported on Google's Colab)\n", "def sm_math(mat,mname=\"A\"):\n", "    latex = \"\"\n", "    latex += \"\\\\renewcommand{\\\\vec}[1]{\\\\boldsymbol{#1}}\\n\"\n", "    latex += \"\\\\renewcommand{\\\\matrix}[1]{\\\\boldsymbol{\\\\mathrm{#1}}}\\n\"\n", "    latex += \"\\\\newcommand{\\\\tensor}[1]{\\\\boldsymbol{\\\\mathrm{#1}}}\\n\"\n", "    if mname is not None:\n", "        latex += \"\\\\matrix{\"+mname+\"} = \"\n", "    latex += \"\\\\begin{bmatrix}\"\n", "    tmp_rows = list()\n", "    for rr in range(len(mat)):\n", "        tmp_rows.append(\" & \".join([(f'{item:.2f}').rstrip('0').rstrip('.') for item in mat[rr]]))\n", "    latex += \"\\\\\\\\\".join(tmp_rows)\n", "    latex += \"\\\\end{bmatrix}\"\n", "    return display(Math(latex))\n", "\n", "## A simpler version of the function above\n", "# which works on Google's Colab as well\n", "def sm_md(inmat):\n", "    md = \"\"\n", "    md += \"\\\\begin{bmatrix}\"\n", "    tmp_rows = list()\n", "    mat = inmat\n", "    # Pythonify the TensorFlow\n", "    if isinstance(inmat,tf.Variable):\n", "        mat = inmat.numpy().tolist()\n", "    # Pythonify the Numpy and PyTorch\n", "    if isinstance(inmat,(torch.Tensor,np.ndarray)):\n", "        mat = inmat.tolist()\n", "    for rr in range(len(mat)):\n", "        tmp_rows.append(\" & \".join([(f'{item:.2f}').rstrip('0').rstrip('.') for item in mat[rr]]))\n", "    md += \"\\\\\\\\\".join(tmp_rows)\n", "    md += \"\\\\end{bmatrix}\"\n", "    return display(Markdown(md))" ] },
{ "cell_type": "markdown", "id": "a51fa997", "metadata": {}, "source": [ "## Debug and test\n", "\n", "so let's test them; this one works in a local `jupyter-notebook`" ] },
{ "cell_type": "code", "execution_count": 5, "id": "733e69fe", "metadata": {}, "outputs": [ { "data": { "text/latex": [ "$\\displaystyle \\renewcommand{\\vec}[1]{\\boldsymbol{#1}}\n", "\\renewcommand{\\matrix}[1]{\\boldsymbol{\\mathrm{#1}}}\n", "\\newcommand{\\tensor}[1]{\\boldsymbol{\\mathrm{#1}}}\n", "\\begin{bmatrix}2 & 3 & 4\\\\5 & 6 & 7\\\\20 & 7.98 & 6\\end{bmatrix}$" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sm_math([[2,3,4],[5,6,7],[19.9999999999,7.9849999,6]],mname=None)" ] },
{ "cell_type": "markdown", "id": "c9cf0463", "metadata": {}, "source": [ "and this one works both locally and on Google's Colab platform" ] },
{ "cell_type": "code", "execution_count": 6, "id": "91f1eac1", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "\\begin{bmatrix}2 & 3 & 4\\\\5 & 6 & 7\\\\20 & 7.98 & 6\\end{bmatrix}" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sm_md([[2,3,4],[5,6,7],[19.9999999999,7.9849999,6]])" ] },
{ "cell_type": "markdown", "id": "1a363bd9", "metadata": {}, "source": [ "Create a `TensorFlow` tensor variable from a native Python list of lists ~ a 2D tensor/array.\n", "It prints out the textual representation below, including the type of all array elements" ] },
{ "cell_type": "code", "execution_count": 7, "id": "d2de1c4d", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2021-08-02 13:07:23.532520: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n", "To enable them in other 
operations, rebuild TensorFlow with the appropriate compiler flags.\n" ] }, { "data": { "text/plain": [ "" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "v = tf.Variable([[2,3,4],[5,6,7],[19.9999999999,7.9849999,6]])\n", "v" ] },
{ "cell_type": "markdown", "id": "3c3025ec", "metadata": {}, "source": [ "Let's check the type of the variable and try to convert it back to plain Python" ] },
{ "cell_type": "code", "execution_count": 8, "id": "d4f9bd7a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n" ] }, { "data": { "text/plain": [ "[[2.0, 3.0, 4.0], [5.0, 6.0, 7.0], [20.0, 7.985000133514404, 6.0]]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(isinstance(v,tf.Variable))\n", "v.numpy().tolist()" ] },
{ "cell_type": "markdown", "id": "ff80a265", "metadata": {}, "source": [ "and now display it with the help of MathJax" ] },
{ "cell_type": "code", "execution_count": 9, "id": "16274d39", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "\\begin{bmatrix}2 & 3 & 4\\\\5 & 6 & 7\\\\20 & 7.99 & 6\\end{bmatrix}" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sm_md(v)" ] },
{ "cell_type": "markdown", "id": "f90248c9", "metadata": {}, "source": [ "Repeat for `PyTorch`" ] },
{ "cell_type": "code", "execution_count": 10, "id": "d068eda3", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([[ 2.0000, 3.0000, 4.0000],\n", " [ 5.0000, 6.0000, 7.0000],\n", " [20.0000, 7.9850, 6.0000]])" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "w = torch.tensor([[2,3,4],[5,6,7],[19.9999999999,7.9849999,6]])\n", "w" ] },
{ "cell_type": "markdown", "id": "6b89e079", "metadata": {}, "source": [ "check the type and again try to convert it back to plain Python" ] },
{ "cell_type": "code", "execution_count": 11, "id": "f052e967", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n" ] }, { "data": { "text/plain": [ "[[2.0, 3.0, 4.0], [5.0, 6.0, 7.0], [20.0, 7.985000133514404, 6.0]]" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(isinstance(w,torch.Tensor))\n", "w.tolist()" ] },
{ "cell_type": "markdown", "id": "3456318d", "metadata": {}, "source": [ "and again with MathJax magic" ] },
{ "cell_type": "code", "execution_count": 12, "id": "eaa6a849", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "\\begin{bmatrix}2 & 3 & 4\\\\5 & 6 & 7\\\\20 & 7.99 & 6\\end{bmatrix}" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sm_md(w)" ] },
{ "cell_type": "markdown", "id": "dbb96c64", "metadata": {}, "source": [ "and the last alien on the block, `numpy`" ] },
{ "cell_type": "code", "execution_count": 13, "id": "7cdd6833", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[ 2. , 3. , 4. ],\n", " [ 5. , 6. , 7. ],\n", " [20. , 7.9849999, 6. 
]])" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = np.array([[2,3,4],[5,6,7],[19.9999999999,7.9849999,6]])\n", "x" ] }, { "cell_type": "code", "execution_count": 14, "id": "d9d4591d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n" ] }, { "data": { "text/plain": [ "[[2.0, 3.0, 4.0], [5.0, 6.0, 7.0], [19.9999999999, 7.9849999, 6.0]]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(isinstance(x,np.ndarray))\n", "x.tolist()" ] }, { "cell_type": "markdown", "id": "be269702", "metadata": {}, "source": [ "and last `MathJax` show" ] }, { "cell_type": "code", "execution_count": 15, "id": "31b1cd0e", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "\\begin{bmatrix}2 & 3 & 4\\\\5 & 6 & 7\\\\20 & 7.98 & 6\\end{bmatrix}" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sm_md(x)" ] }, { "cell_type": "markdown", "id": "48cf0e8f", "metadata": {}, "source": [ "## Clean usage for mere mortals" ] }, { "cell_type": "code", "execution_count": 16, "id": "ef0e046f", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "\\begin{bmatrix}2 & 3 & 4\\\\5 & 6 & 7\\\\20 & 7.99 & 6\\end{bmatrix}" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "v = tf.Variable([[2,3,4],[5,6,7],[19.9999999999,7.9849999,6]])\n", "sm_md(v)" ] }, { "cell_type": "code", "execution_count": 17, "id": "c7af4149", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "\\begin{bmatrix}2 & 3 & 4\\\\5 & 6 & 7\\\\20 & 7.99 & 6\\end{bmatrix}" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "w = torch.tensor([[2,3,4],[5,6,7],[19.9999999999,7.9849999,6]])\n", "sm_md(w)" ] }, { "cell_type": "code", "execution_count": 18, "id": "9d8e9bc6", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "\\begin{bmatrix}2 & 3 & 4\\\\5 & 6 & 7\\\\20 & 7.98 & 6\\end{bmatrix}" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "x = np.array([[2,3,4],[5,6,7],[19.9999999999,7.9849999,6]])\n", "sm_md(x)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.6" } }, "nbformat": 4, "nbformat_minor": 5 }