{"question":"How do I set the figure title and axes labels font size? I am creating a figure in Matplotlib like this: \n```py\nfrom matplotlib import pyplot as plt\n\nfig = plt.figure()\nplt.plot(data)\nfig.suptitle('test title')\nplt.xlabel('xlabel')\nplt.ylabel('ylabel')\nfig.savefig('test.jpg')\n```\nI want to specify font sizes for the figure title and the axis labels. I need all three to be different font sizes, so setting a global font size (mpl.rcParams['font.size']=x) is not what I want. How do I set font sizes for the figure title and the axis labels individually?","response":"Functions dealing with text like label, title, etc. accept the same parameters as matplotlib.text.Text. For the font size you can use size\/fontsize: \n```\nfrom matplotlib import pyplot as plt \n\nfig = plt.figure()\nplt.plot(data)\nfig.suptitle('test title', fontsize=20)\nplt.xlabel('xlabel', fontsize=18)\nplt.ylabel('ylabel', fontsize=16)\nfig.savefig('test.jpg')\n```\nFor globally setting title and label sizes, mpl.rcParams contains axes.titlesize and axes.labelsize. (From the page): \n```\naxes.titlesize : large # fontsize of the axes title\naxes.labelsize : medium # fontsize of the x and y labels\n```\n(As far as I can see, there is no way to set x and y label sizes separately.) Note that axes.titlesize does not affect suptitle; that one has to be set manually.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12444716\/how-do-i-set-the-figure-title-and-axes-labels-font-size","best_answers_votes":1262,"tags":["python","matplotlib","axis-labels","yaxis","x-axis"],"question_length":555,"response_length":784,"tag_count":5} {"question":"When to use cla(), clf() or close() for clearing a plot Matplotlib offers these functions: \n```py\ncla() # Clear axis\nclf() # Clear figure\nclose() # Close a figure window\n```\nWhen should I use each function and what exactly does it do?","response":"They all do different things, since matplotlib uses a hierarchical order in which a figure window contains a figure which may consist of many axes. Additionally, there are functions from the pyplot interface and there are methods on the Figure class. I will discuss both cases below. pyplot interface pyplot is a module that collects a couple of functions that allow matplotlib to be used in a functional manner. Here I assume that pyplot has been imported as import matplotlib.pyplot as plt. In this case, there are three different commands that remove stuff: See matplotlib.pyplot Functions: plt.cla() clears an axis, i.e. the currently active axis in the current figure. It leaves the other axes untouched. plt.clf() clears the entire current figure with all its axes, but leaves the window opened, such that it may be reused for other plots. plt.close() closes a window, which will be the current window, if not specified otherwise. Which function suits you best thus depends on your use case. The close() function furthermore allows one to specify which window should be closed. The argument can either be a number or a name given to a window when it was created using figure(number_or_name) or it can be a figure instance fig obtained, e.g., using fig = figure(). If no argument is given to close(), the currently active window will be closed. Furthermore, there is the syntax close('all'), which closes all figures. methods of the Figure class Additionally, the Figure class provides methods for clearing figures. 
I'll assume in the following that fig is an instance of a Figure: fig.clf() clears the entire figure. This call is equivalent to plt.clf() only if fig is the current figure. fig.clear() is a synonym for fig.clf(). Note that even del fig will not close the associated figure window. As far as I know, the only way to close a figure window is using plt.close(fig) as described above.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8213522\/when-to-use-cla-clf-or-close-for-clearing-a-plot","best_answers_votes":960,"tags":["python","matplotlib","plot"],"question_length":238,"response_length":1898,"tag_count":3} {"question":"What does the argument mean in fig.add_subplot(111)? Sometimes I come across code such as this: \n```\nimport matplotlib.pyplot as plt\nx = [1, 2, 3, 4, 5]\ny = [1, 4, 9, 16, 25]\nfig = plt.figure()\nfig.add_subplot(111)\nplt.scatter(x, y)\nplt.show()\n```\nWhich produces: I've been reading the documentation like crazy but I can't find an explanation for the 111. Sometimes I see a 212. What does the argument of fig.add_subplot() mean?","response":"I think this would be best explained by the following picture: To initialize the above, one would type: \n```\nimport matplotlib.pyplot as plt\nfig = plt.figure()\nfig.add_subplot(221) #top left\nfig.add_subplot(222) #top right\nfig.add_subplot(223) #bottom left\nfig.add_subplot(224) #bottom right \nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3584805\/what-does-the-argument-mean-in-fig-add-subplot111","best_answers_votes":638,"tags":["python","matplotlib","subplot","figure"],"question_length":428,"response_length":315,"tag_count":4} {"question":"How to change the figure size of a seaborn axes or figure level plot How do I change the size of my image so it's suitable for printing? For example, I'd like to use an A4 paper, whose dimensions are 11.7 inches by 8.27 inches in landscape orientation.","response":"You can also set the figure size by passing a dictionary with the key 'figure.figsize' to the rc parameter of seaborn's set_theme method (which replaces the set method, deprecated in v0.11.0 (September 2020)) \n```\nimport seaborn as sns\n\nsns.set_theme(rc={'figure.figsize':(11.7,8.27)})\n```\nAnother alternative is to use the figure.figsize key of rcParams to set the figure size, as below: \n```\nfrom matplotlib import rcParams\n\n# figure size in inches\nrcParams['figure.figsize'] = 11.7,8.27\n```\nMore details can be found in the matplotlib documentation","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31594549\/how-to-change-the-figure-size-of-a-seaborn-axes-or-figure-level-plot","best_answers_votes":568,"tags":["python","matplotlib","seaborn","figsize"],"question_length":252,"response_length":520,"tag_count":4} {"question":"How to remove xticks from a plot I have a semilogx plot and I would like to remove the xticks. I tried: \n```\nplt.gca().set_xticks([])\nplt.xticks([])\nax.set_xticks([])\n```\nThe grid disappears (ok), but small ticks (at the place of the main ticks) remain. How to remove them?","response":"The plt.tick_params method is very useful for stuff like this. This code turns off major and minor ticks and removes the labels from the x-axis. Note that there is also ax.tick_params for matplotlib.axes.Axes objects. 
\n```\nfrom matplotlib import pyplot as plt\nplt.plot(range(10))\nplt.tick_params(\n axis='x', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n bottom=False, # ticks along the bottom edge are off\n top=False, # ticks along the top edge are off\n labelbottom=False) # labels along the bottom edge are off\nplt.savefig('plot') # save before show(), otherwise the saved figure may be blank\nplt.show()\nplt.clf()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12998430\/how-to-remove-xticks-from-a-plot","best_answers_votes":751,"tags":["python","matplotlib","axis","xticks"],"question_length":273,"response_length":640,"tag_count":4} {"question":"Display image as grayscale I'm trying to display a grayscale image using matplotlib.pyplot.imshow(). My problem is that the grayscale image is displayed as a colormap. I need it to be grayscale because I want to draw on top of the image with color. I read in the image and convert to grayscale using PIL's Image.open().convert(\"L\") \n```\nimage = Image.open(file).convert(\"L\")\n```\nThen I convert the image to a matrix so that I can easily do some image processing using \n```\nmatrix = scipy.misc.fromimage(image, 0)\n```\nHowever, when I do \n```\nfigure() \nmatplotlib.pyplot.imshow(matrix) \nshow()\n```\nit displays the image using a colormap (i.e. it's not grayscale). What am I doing wrong here?","response":"The following code will load an image from a file image.png and will display it as grayscale. 
\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom PIL import Image\n\nfname = 'image.png'\nimage = Image.open(fname).convert(\"L\")\narr = np.asarray(image)\nplt.imshow(arr, cmap='gray', vmin=0, vmax=255)\nplt.show()\n```\nIf you want to display the inverse grayscale, switch the cmap to cmap='gray_r'.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3823752\/display-image-as-grayscale","best_answers_votes":520,"tags":["python","matplotlib","grayscale","imshow"],"question_length":691,"response_length":397,"tag_count":4} {"question":"How can I convert an RGB image into grayscale in Python? I'm trying to use matplotlib to read in an RGB image and convert it to grayscale. In matlab I use this: \n```\nimg = rgb2gray(imread('image.png'));\n```\nIn the matplotlib tutorial they don't cover it. They just read in the image \n```\nimport matplotlib.image as mpimg\nimg = mpimg.imread('image.png')\n```\nand then they slice the array, but that's not the same thing as converting RGB to grayscale from what I understand. \n```\nlum_img = img[:,:,0]\n```\nI find it hard to believe that numpy or matplotlib doesn't have a built-in function to convert from rgb to gray. Isn't this a common operation in image processing? I wrote a very simple function that works with the image imported using imread in 5 minutes. It's horribly inefficient, but that's why I was hoping for a professional implementation built-in. Sebastian has improved my function, but I'm still hoping to find the built-in one. matlab's (NTSC\/PAL) implementation: \n```\nimport numpy as np\n\ndef rgb2gray(rgb):\n\n r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]\n gray = 0.2989 * r + 0.5870 * g + 0.1140 * b\n\n return gray\n```","response":"How about doing it with Pillow: \n```\nfrom PIL import Image\nimg = Image.open('image.png').convert('L')\nimg.save('greyscale.png')\n```\nIf an alpha (transparency) channel is present in the input image and should be preserved, use mode LA: \n```\nimg = Image.open('image.png').convert('LA')\n```\nUsing matplotlib and the formula \n```\nY' = 0.2989 R + 0.5870 G + 0.1140 B\n```\nyou could do: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\ndef rgb2gray(rgb):\n return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140])\n\nimg = mpimg.imread('image.png') \ngray = rgb2gray(img) \nplt.imshow(gray, cmap=plt.get_cmap('gray'), vmin=0, vmax=1)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12201577\/how-can-i-convert-an-rgb-image-into-grayscale-in-python","best_answers_votes":488,"tags":["python","matplotlib"],"question_length":1140,"response_length":683,"tag_count":2} {"question":"Is there a way to detach matplotlib plots so that the computation can continue? After these instructions in the Python interpreter one gets a window with a plot: \n```\nfrom matplotlib.pyplot import *\nplot([1,2,3])\nshow()\n# other code\n```\nUnfortunately, I don't know how to continue to interactively explore the figure created by show() while the program does further calculations. Is it possible at all? 
Sometimes calculations are long and it would help if they would proceed during examination of intermediate results.","response":"Use matplotlib's calls that won't block: Using draw(): \n```\nfrom matplotlib.pyplot import plot, draw, show\nplot([1,2,3])\ndraw()\nprint('continue computation')\n\n# at the end call show to ensure the window won't close.\nshow()\n```\nUsing interactive mode: \n```\nfrom matplotlib.pyplot import plot, ion, show\nion() # enables interactive mode\nplot([1,2,3]) # result shows immediately (implicit draw())\n\nprint('continue computation')\n\n# at the end call show to ensure the window won't close.\nshow()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/458209\/is-there-a-way-to-detach-matplotlib-plots-so-that-the-computation-can-continue","best_answers_votes":265,"tags":["python","matplotlib","plot"],"question_length":518,"response_length":486,"tag_count":3} {"question":"Rotate label text in seaborn I have a simple factorplot \n```\nimport seaborn as sns\ng = sns.factorplot(\"name\", \"miss_ratio\", \"policy\", dodge=.2, \n linestyles=[\"none\", \"none\", \"none\", \"none\"], data=df[df[\"level\"] == 2])\n```\nThe problem is that the x labels all run together, making them unreadable. How do you rotate the text so that the labels are readable?","response":"I had a problem with the answer by @mwaskorn, namely that \n```\ng.set_xticklabels(rotation=30)\n```\nfails, because this also requires the labels. A bit easier than the answer by @Aman is to just add \n```\nplt.xticks(rotation=30)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26540035\/rotate-label-text-in-seaborn","best_answers_votes":449,"tags":["python","matplotlib","seaborn","x-axis"],"question_length":359,"response_length":229,"tag_count":4} {"question":"Set markers for individual points on a line I have used Matplotlib to plot lines on a figure. Now I would like to set the style, specifically the marker, for individual points on the line. How do I do this? To clarify my question, I want to be able to set the style for individual markers on a line, not every marker on said line.","response":"Specify the keyword args linestyle and\/or marker in your call to plot. For example, using a dashed line and blue circle markers: \n```\nplt.plot(range(10), linestyle='--', marker='o', color='b', label='line with marker')\nplt.legend()\n```\nA shortcut call for the same thing: \n```\nplt.plot(range(10), '--bo', label='line with marker')\nplt.legend()\n```\nHere is a list of the possible line and marker styles: \n```\n================ ===============================\ncharacter description\n================ ===============================\n - solid line style\n -- dashed line style\n -. dash-dot line style\n : dotted line style\n . point marker\n , pixel marker\n o circle marker\n v triangle_down marker\n ^ triangle_up marker\n < triangle_left marker\n > triangle_right marker\n 1 tri_down marker\n 2 tri_up marker\n 3 tri_left marker\n 4 tri_right marker\n s square marker\n p pentagon marker\n * star marker\n h hexagon1 marker\n H hexagon2 marker\n + plus marker\n x x marker\n D diamond marker\n d thin_diamond marker\n | vline marker\n _ hline marker\n================ ===============================\n```\nEdit: with an example of marking an arbitrary subset of points, as requested in the comments: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nxs = np.linspace(-np.pi, np.pi, 30)\nys = np.sin(xs)\nmarkers_on = [12, 17, 18, 19]\nplt.plot(xs, ys, '-gD', markevery=markers_on, label='line with select markers')\nplt.legend()\nplt.show()\n```\nThis last example using the markevery kwarg is possible since 1.4+, due to the merge of this feature branch. If you are stuck on an older version of matplotlib, you can still achieve the result by overlaying a scatterplot on the line plot. See the edit history for more details.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8409095\/set-markers-for-individual-points-on-a-line","best_answers_votes":552,"tags":["python","matplotlib"],"question_length":334,"response_length":2157,"tag_count":2} {"question":"How to convert a NumPy array to PIL image applying matplotlib colormap I want to take a NumPy 2D array which represents a grayscale image, and convert it to an RGB PIL image while applying some of the matplotlib colormaps. I can get a reasonable PNG output by using the pyplot.figure.figimage command: \n```\ndpi = 100.0\nw, h = myarray.shape[1]\/dpi, myarray.shape[0]\/dpi\nfig = plt.figure(figsize=(w,h), dpi=dpi)\nfig.figimage(sub, cmap=cm.gist_earth)\nplt.savefig('out.png')\n```\nAlthough I could adapt this to get what I want (probably using StringIO to get the PIL image), I wonder if there is not a simpler way to do that, since it seems to be a very natural problem of image visualization. Let's say, something like this: \n```\ncolored_PIL_image = magic_function(array, cmap)\n```","response":"Quite a busy one-liner, but here it is: First ensure your NumPy array, myarray, is normalised with the max value at 1.0. Apply the colormap directly to myarray. Rescale to the 0-255 range. Convert to integers, using np.uint8(). Use Image.fromarray(). And you're done: \n```\nfrom PIL import Image\nfrom matplotlib import cm\nim = Image.fromarray(np.uint8(cm.gist_earth(myarray)*255))\n```\n(The original answer compares the resulting images: one produced with plt.savefig(), one with im.save().)","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10965417\/how-to-convert-a-numpy-array-to-pil-image-applying-matplotlib-colormap","best_answers_votes":400,"tags":["python","numpy","matplotlib","python-imaging-library","color-mapping"],"question_length":777,"response_length":419,"tag_count":5} {"question":"Reduce left and right margins in matplotlib plot I'm struggling to deal with my plot margins in matplotlib. I've used the code below to produce my chart: \n```py\nplt.imshow(g)\nc = plt.colorbar()\nc.set_label(\"Number of Slabs\")\nplt.savefig(\"OutputToUse.png\")\n```\nHowever, I get an output figure with lots of white space on either side of the plot. 
I've searched Google and read the matplotlib documentation, but I can't seem to find how to reduce this.","response":"One way to automatically do this is the bbox_inches='tight' kwarg to plt.savefig. E.g. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\ndata = np.arange(3000).reshape((100,30))\nplt.imshow(data)\nplt.savefig('test.png', bbox_inches='tight')\n```\nAnother way is to use fig.tight_layout() \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nxs = np.linspace(0, 1, 20); ys = np.sin(xs)\n\nfig = plt.figure()\naxes = fig.add_subplot(1,1,1)\naxes.plot(xs, ys)\n\n# This should be called after all axes have been added\nfig.tight_layout()\nfig.savefig('test.png')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4042192\/reduce-left-and-right-margins-in-matplotlib-plot","best_answers_votes":392,"tags":["python","matplotlib"],"question_length":447,"response_length":562,"tag_count":2} {"question":"Getting individual colors from a color map in matplotlib If you have a Colormap cmap, for example: \n```\ncmap = matplotlib.cm.get_cmap('Spectral')\n```\nHow can you get a particular colour out of it between 0 and 1, where 0 is the first colour in the map and 1 is the last colour in the map? Ideally, I would be able to get the middle colour in the map by doing: \n```python\ndo_some_magic(cmap, 0.5) # Return an RGBA tuple\n# Output: (0.1, 0.2, 0.3, 1.0)\n```","response":"You can do this with the code below; the code in your question was actually very close to what you needed, and all you have to do is call the cmap object you have. \n```\nimport matplotlib\n\ncmap = matplotlib.cm.get_cmap('Spectral')\n\nrgba = cmap(0.5)\nprint(rgba) # (0.99807766255210428, 0.99923106502084169, 0.74602077638401709, 1.0)\n```\nFor values outside of the range [0.0, 1.0] it will return the under and over colour (respectively). This, by default, is the minimum and maximum colour within the range (so 0.0 and 1.0). This default can be changed with cmap.set_under() and cmap.set_over(). For \"special\" numbers such as np.nan and np.inf the default is to use the 0.0 value; this can be changed using cmap.set_bad() similarly to under and over as above. Finally, it may be necessary for you to normalize your data such that it conforms to the range [0.0, 1.0]. This can be done using matplotlib.colors.Normalize simply as shown in the small example below where the arguments vmin and vmax describe what numbers should be mapped to 0.0 and 1.0 respectively. \n```\nimport matplotlib\n\nnorm = matplotlib.colors.Normalize(vmin=10.0, vmax=20.0)\n\nprint(norm(15.0)) # 0.5\n```\nA logarithmic normaliser (matplotlib.colors.LogNorm) is also available for data ranges with a large range of values. (Thanks to both Joe Kington and tcaswell for suggestions on how to improve the answer.)","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25408393\/getting-individual-colors-from-a-color-map-in-matplotlib","best_answers_votes":415,"tags":["python","matplotlib","colors"],"question_length":441,"response_length":1373,"tag_count":3} {"question":"How to do a scatter plot with empty circles in Python? In Python, with Matplotlib, how can a scatter plot with empty circles be plotted? The goal is to draw empty circles around some of the colored disks already plotted by scatter(), so as to highlight them, ideally without having to redraw the colored circles. 
I tried facecolors=None, to no avail.","response":"From the documentation for scatter: \n```\nOptional kwargs control the Collection properties; in particular:\n\n edgecolors:\n The string \u2018none\u2019 to plot faces with no outlines\n facecolors:\n The string \u2018none\u2019 to plot unfilled outlines\n```\nTry the following: \n```\nimport matplotlib.pyplot as plt \nimport numpy as np \n\nx = np.random.randn(60) \ny = np.random.randn(60)\n\nplt.scatter(x, y, s=80, facecolors='none', edgecolors='r')\nplt.show()\n```\nNote: For other types of plots see this post on the use of markeredgecolor and markerfacecolor.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4143502\/how-to-do-a-scatter-plot-with-empty-circles-in-python","best_answers_votes":408,"tags":["python","matplotlib","geometry","scatter-plot","scatter"],"question_length":691,"response_length":550,"tag_count":5} {"question":"How to plot in multiple subplots I am a little confused about how this code works: \n```\nfig, axes = plt.subplots(nrows=2, ncols=2)\nplt.show()\n```\nHow does the fig, axes work in this case? What does it do? Also why wouldn't this work to do the same thing: \n```\nfig = plt.figure()\naxes = fig.subplots(nrows=2, ncols=2)\n```","response":"There are several ways to do it. The subplots method creates the figure along with the subplots that are then stored in the ax array. For example: \n```\nimport matplotlib.pyplot as plt\n\nx = range(10)\ny = range(10)\n\nfig, ax = plt.subplots(nrows=2, ncols=2)\n\nfor row in ax:\n for col in row:\n col.plot(x, y)\n\nplt.show()\n```\nHowever, something like this will also work; it's not so \"clean\", though, since you are creating a figure with subplots and then adding on top of them: \n```\nfig = plt.figure()\n\nplt.subplot(2, 2, 1)\nplt.plot(x, y)\n\nplt.subplot(2, 2, 2)\nplt.plot(x, y)\n\nplt.subplot(2, 2, 3)\nplt.plot(x, y)\n\nplt.subplot(2, 2, 4)\nplt.plot(x, y)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31726643\/how-to-plot-in-multiple-subplots","best_answers_votes":343,"tags":["python","pandas","matplotlib","seaborn","subplot"],"question_length":320,"response_length":664,"tag_count":5} {"question":"How to set xlim and ylim for a subplot [duplicate] I would like to limit the X and Y axis in matplotlib for a specific subplot. The subplot figure itself doesn't have any axis property. I want for example to change only the limits for the second plot: \n```\nimport matplotlib.pyplot as plt\nfig=plt.subplot(131)\nplt.scatter([1,2],[3,4])\nfig=plt.subplot(132)\nplt.scatter([10,20],[30,40])\nfig=plt.subplot(133)\nplt.scatter([15,23],[35,43])\nplt.show()\n```","response":"You should use the object-oriented interface to matplotlib, rather than the state machine interface. Almost all of the plt.* functions are thin wrappers that basically do gca().*. plt.subplot returns an axes object. Once you have a reference to the axes object you can plot directly to it, change its limits, etc. \n```\nimport matplotlib.pyplot as plt\n\nax1 = plt.subplot(131)\nax1.scatter([1, 2], [3, 4])\nax1.set_xlim([0, 5])\nax1.set_ylim([0, 5])\n\n\nax2 = plt.subplot(132)\nax2.scatter([1, 2],[3, 4])\nax2.set_xlim([0, 5])\nax2.set_ylim([0, 5])\n```\nand so on for as many axes as you want. 
Or better, wrap it all up in a loop: \n```\nimport matplotlib.pyplot as plt\n\nDATA_x = ([1, 2],\n [2, 3],\n [3, 4])\n\nDATA_y = DATA_x[::-1]\n\nXLIMS = [[0, 10]] * 3\nYLIMS = [[0, 10]] * 3\n\nfor j, (x, y, xlim, ylim) in enumerate(zip(DATA_x, DATA_y, XLIMS, YLIMS)):\n ax = plt.subplot(1, 3, j + 1)\n ax.scatter(x, y)\n ax.set_xlim(xlim)\n ax.set_ylim(ylim)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15858192\/how-to-set-xlim-and-ylim-for-a-subplot","best_answers_votes":370,"tags":["python","matplotlib","plot","subplot"],"question_length":556,"response_length":957,"tag_count":4} {"question":"Date ticks and rotation [duplicate] I am having an issue trying to get my date ticks rotated in matplotlib. A small sample program is below. If I try to rotate the ticks at the end, the ticks do not get rotated. If I try to rotate the ticks as shown under the comment 'crashes', then matplotlib crashes. This only happens if the x-values are dates. If I replace the variable dates with the variable t in the call to avail_plot, the xticks(rotation=70) call works just fine inside avail_plot. Any ideas? \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport datetime as dt\n\ndef avail_plot(ax, x, y, label, lcolor):\n ax.plot(x,y,'b')\n ax.set_ylabel(label, rotation='horizontal', color=lcolor)\n ax.get_yaxis().set_ticks([])\n\n #crashes\n #plt.xticks(rotation=70)\n\n ax2 = ax.twinx()\n ax2.plot(x, [1 for a in y], 'b')\n ax2.get_yaxis().set_ticks([])\n ax2.set_ylabel('testing')\n\nf, axs = plt.subplots(2, sharex=True, sharey=True)\nt = np.arange(0.01, 5, 1)\ns1 = np.exp(t)\nstart = dt.datetime.now()\ndates=[]\nfor val in t:\n next_val = start + dt.timedelta(0,val)\n dates.append(next_val)\n start = next_val\n\navail_plot(axs[0], dates, s1, 'testing', 'green')\navail_plot(axs[1], dates, s1, 'testing2', 'red')\nplt.subplots_adjust(hspace=0, bottom=0.3)\nplt.yticks([0.5,],(\"\",\"\"))\n#doesn't crash, but does not rotate the xticks\n#plt.xticks(rotation=70)\nplt.show()\n```","response":"If you prefer a non-object-oriented approach, move plt.xticks(rotation=70) to right before the two avail_plot calls, e.g. \n```\nplt.xticks(rotation=70)\navail_plot(axs[0], dates, s1, 'testing', 'green')\navail_plot(axs[1], dates, s1, 'testing2', 'red')\n```\nThis sets the rotation property before setting up the labels. Since you have two axes here, plt.xticks gets confused after you've made the two plots. At the point when plt.xticks doesn't do anything, plt.gca() does not give you the axes you want to modify, and so plt.xticks, which acts on the current axes, is not going to work. For an object-oriented approach not using plt.xticks, you can use \n```\nplt.setp( axs[1].xaxis.get_majorticklabels(), rotation=70 )\n```\nafter the two avail_plot calls. This sets the rotation on the correct axes specifically.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11264521\/date-ticks-and-rotation","best_answers_votes":319,"tags":["python","matplotlib","xticks"],"question_length":1492,"response_length":805,"tag_count":3} {"question":"Format y axis as percent I have an existing plot that was created with pandas like this: \n```\ndf['myvar'].plot(kind='bar')\n```\nThe y axis is formatted as float and I want to change the y axis to percentages. 
All of the solutions I found use ax.xyz syntax and I can only place code below the line above that creates the plot (I cannot add ax=ax to the line above.) How can I format the y axis as percentages without changing the line above? Here is the solution I found but requires that I redefine the plot: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib.ticker as mtick\n\ndata = [8,12,15,17,18,18.5]\nperc = np.linspace(0,100,len(data))\n\nfig = plt.figure(1, (7,4))\nax = fig.add_subplot(1,1,1)\n\nax.plot(perc, data)\n\nfmt = '%.0f%%' # Format you want the ticks, e.g. '40%'\nxticks = mtick.FormatStrFormatter(fmt)\nax.xaxis.set_major_formatter(xticks)\n\nplt.show()\n```\nLink to the above solution: Pyplot: using percentage on x axis","response":"This is a few months late, but I have created PR#6251 with matplotlib to add a new PercentFormatter class. With this class you just need one line to reformat your axis (two if you count the import of matplotlib.ticker): \n```\nimport ...\nimport matplotlib.ticker as mtick\n\nax = df['myvar'].plot(kind='bar')\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\n```\nPercentFormatter() accepts three arguments, xmax, decimals, symbol. xmax allows you to set the value that corresponds to 100% on the axis. This is nice if you have data from 0.0 to 1.0 and you want to display it from 0% to 100%. Just do PercentFormatter(1.0). The other two parameters allow you to set the number of digits after the decimal point and the symbol. They default to None and '%', respectively. decimals=None will automatically set the number of decimal points based on how much of the axes you are showing. Update PercentFormatter was introduced into Matplotlib proper in version 2.1.0.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31357611\/format-y-axis-as-percent","best_answers_votes":376,"tags":["python","pandas","matplotlib","plot"],"question_length":947,"response_length":963,"tag_count":4} {"question":"reducing number of plot ticks I have too many ticks on my graph and they are running into each other. How can I reduce the number of ticks? For example, I have ticks: \n```\n1E-6, 1E-5, 1E-4, ... 1E6, 1E7\n```\nAnd I only want: \n```\n1E-5, 1E-3, ... 1E5, 1E7\n```\nI've tried playing with the LogLocator, but I haven't been able to figure this out.","response":"Alternatively, if you want to simply set the number of ticks while allowing matplotlib to position them (currently only with MaxNLocator), there is pyplot.locator_params, \n```\npyplot.locator_params(nbins=4)\n```\nYou can specify specific axis in this method as mentioned below, default is both: \n```\n# To specify the number of ticks on both or any single axes\npyplot.locator_params(axis='y', nbins=6)\npyplot.locator_params(axis='x', nbins=10)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6682784\/reducing-number-of-plot-ticks","best_answers_votes":368,"tags":["python","matplotlib","xticks","yticks"],"question_length":341,"response_length":444,"tag_count":4} {"question":"How to plot multiple dataframes in subplots I have a few Pandas DataFrames sharing the same value scale, but having different columns and indices. When invoking df.plot(), I get separate plot images. 
what I really want is to have them all in the same plot as subplots, but I'm unfortunately failing to come up with a solution to how and would highly appreciate some help.","response":"You can manually create the subplots with matplotlib, and then plot the dataframes on a specific subplot using the ax keyword. For example for 4 subplots (2x2): \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=2, ncols=2)\n\ndf1.plot(ax=axes[0,0])\ndf2.plot(ax=axes[0,1])\n...\n```\nHere axes is an array which holds the different subplot axes, and you can access one just by indexing axes. If you want a shared x-axis, then you can provide sharex=True to plt.subplots.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22483588\/how-to-plot-multiple-dataframes-in-subplots","best_answers_votes":414,"tags":["python","pandas","matplotlib","seaborn","subplot"],"question_length":371,"response_length":483,"tag_count":5} {"question":"How do I equalize the scales of the x-axis and y-axis? How do I create a plot where the scales of x-axis and y-axis are the same? This equal ratio should be maintained even if I change the window size. Currently, my graph scales together with the window size. I tried: \n```\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.axis('equal')\n```","response":"Use Axes.set_aspect in the following manner: \n```\nfrom matplotlib import pyplot as plt\nplt.plot(range(5))\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nax = plt.gca()\nax.set_aspect('equal', adjustable='box')\nplt.draw()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17990845\/how-do-i-equalize-the-scales-of-the-x-axis-and-y-axis","best_answers_votes":323,"tags":["python","matplotlib","axis","aspect-ratio"],"question_length":327,"response_length":208,"tag_count":4} {"question":"Set Colorbar Range I have the following code: \n```py\nimport matplotlib.pyplot as plt\n\ncdict = {\n 'red' : ( (0.0, 0.25, .25), (0.02, .59, .59), (1., 1., 1.)),\n 'green': ( (0.0, 0.0, 0.0), (0.02, .45, .45), (1., .97, .97)),\n 'blue' : ( (0.0, 1.0, 1.0), (0.02, .75, .75), (1., 0.45, 0.45))\n}\n \ncm = m.colors.LinearSegmentedColormap('my_colormap', cdict, 1024)\n \nplt.clf()\nplt.pcolor(X, Y, v, cmap=cm)\nplt.loglog()\nplt.xlabel('X Axis')\nplt.ylabel('Y Axis')\n \nplt.colorbar()\nplt.show()\n```\nThis produces a graph of the values v on the axes X vs Y, using the specified colormap. The X and Y axes are perfect, but the colormap spreads between the min and max of v. I would like to force the colormap to range between 0 and 1. I thought of using: \n```py\nplt.axis(...)\n```\nTo set the ranges of the axes, but this only takes arguments for the min and max of X and Y, not the colormap. Edit: For clarity, let's say I have one graph whose values range (0 ... 0.3), and another graph whose values (0.2 ... 0.8). In both graphs, I will want the range of the colorbar to be (0 ... 1). In both graphs, I want this range of colour to be identical using the full range of cdict above (so 0.25 in both graphs will be the same colour). In the first graph, all colours between 0.3 and 1.0 won't feature in the graph, but will in the colourbar key at the side. In the other, all colours between 0 and 0.2, and between 0.8 and 1 will not feature in the graph, but will in the colourbar at the side.","response":"Using vmin and vmax forces the range for the colors. 
Here's an example: \n```\nimport matplotlib as m\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ncdict = {\n 'red' : ( (0.0, 0.25, .25), (0.02, .59, .59), (1., 1., 1.)),\n 'green': ( (0.0, 0.0, 0.0), (0.02, .45, .45), (1., .97, .97)),\n 'blue' : ( (0.0, 1.0, 1.0), (0.02, .75, .75), (1., 0.45, 0.45))\n}\n\ncm = m.colors.LinearSegmentedColormap('my_colormap', cdict, 1024)\n\nx = np.arange(0, 10, .1)\ny = np.arange(0, 10, .1)\nX, Y = np.meshgrid(x,y)\n\ndata = 2*( np.sin(X) + np.sin(3*Y) )\n\ndef do_plot(n, f, title):\n #plt.clf()\n plt.subplot(1, 3, n)\n plt.pcolor(X, Y, f(data), cmap=cm, vmin=-4, vmax=4)\n plt.title(title)\n plt.colorbar()\n\nplt.figure()\ndo_plot(1, lambda x:x, \"all\")\ndo_plot(2, lambda x:np.clip(x, -4, 0), \"<0\")\ndo_plot(3, lambda x:np.clip(x, 0, 4), \">0\")\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3373256\/set-colorbar-range","best_answers_votes":242,"tags":["python","matplotlib","graph","colorbar","colormap"],"question_length":1482,"response_length":852,"tag_count":5} {"question":"How to put individual tags for a matplotlib scatter plot? I am trying to do a scatter plot in matplotlib and I couldn't find a way to add tags to the points. For example: \n```\nscatter1=plt.scatter(data1[\"x\"], data1[\"y\"], marker=\"o\",\n c=\"blue\",\n facecolors=\"white\",\n edgecolors=\"blue\")\n```\nI want for the points in \"y\" to have labels as \"point 1\", \"point 2\", etc. I couldn't figure it out.","response":"Perhaps use plt.annotate: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nN = 10\ndata = np.random.random((N, 4))\nlabels = ['point{0}'.format(i) for i in range(N)]\n\nplt.subplots_adjust(bottom = 0.1)\nplt.scatter(\n data[:, 0], data[:, 1], marker='o', c=data[:, 2], s=data[:, 3] * 1500,\n cmap=plt.get_cmap('Spectral'))\n\nfor label, x, y in zip(labels, data[:, 0], data[:, 1]):\n plt.annotate(\n label,\n xy=(x, y), xytext=(-20, 20),\n textcoords='offset points', ha='right', va='bottom',\n bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),\n arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0'))\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5147112\/how-to-put-individual-tags-for-a-matplotlib-scatter-plot","best_answers_votes":384,"tags":["python","matplotlib"],"question_length":448,"response_length":676,"tag_count":2} {"question":"Plotting time on the independent axis I have an array of timestamps in the format (HH:MM:SS.mmmmmm) and another array of floating point numbers, each corresponding to a value in the timestamp array. Can I plot time on the x axis and the numbers on the y-axis using Matplotlib? I was trying to, but somehow it was only accepting arrays of floats. How can I get it to plot the time? Do I have to modify the format in any way?","response":"Update: This answer is outdated since matplotlib version 3.5. The plot function now handles datetime data directly. See https:\/\/matplotlib.org\/3.5.1\/api\/_as_gen\/matplotlib.pyplot.plot_date.html The use of plot_date is discouraged. This method exists for historic reasons and may be deprecated in the future. datetime-like data should directly be plotted using plot. If you need to plot plain numeric data as Matplotlib date format or need to set a timezone, call ax.xaxis.axis_date \/ ax.yaxis.axis_date before plot. See Axis.axis_date. 
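A minimal sketch of the modern approach (an illustration based on the quoted docs, not part of the original answer; it reuses the sample values from the old answer below and assumes matplotlib 3.5+, where plot handles datetime values directly): \n```\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\n\nx_values = [datetime(2021, 11, 18, 12), datetime(2021, 11, 18, 14), datetime(2021, 11, 18, 16)]\ny_values = [1.0, 3.0, 2.0]\n\n# datetime values are plotted directly; no date2num conversion is needed\nplt.plot(x_values, y_values)\nplt.show()\n```\n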
Old, outdated answer: You must first convert your timestamps to Python datetime objects (use datetime.strptime). Then use date2num to convert the dates to matplotlib format. Plot the dates and values using plot_date: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.dates\n\nfrom datetime import datetime\n\nx_values = [datetime(2021, 11, 18, 12), datetime(2021, 11, 18, 14), datetime(2021, 11, 18, 16)]\ny_values = [1.0, 3.0, 2.0]\n\ndates = matplotlib.dates.date2num(x_values)\nplt.plot_date(dates, y_values)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1574088\/plotting-time-on-the-independent-axis","best_answers_votes":226,"tags":["python","matplotlib","timestamp","x-axis"],"question_length":423,"response_length":1048,"tag_count":4} {"question":"How do I tell matplotlib that I am done with a plot? The following code plots to two PostScript (.ps) files, but the second one contains both lines. \n```\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport matplotlib.mlab as mlab\n\nplt.subplot(111)\nx = [1,10]\ny = [30, 1000]\nplt.loglog(x, y, basex=10, basey=10, ls=\"-\")\nplt.savefig(\"first.ps\")\n\n\nplt.subplot(111)\nx = [10,100]\ny = [10, 10000]\nplt.loglog(x, y, basex=10, basey=10, ls=\"-\")\nplt.savefig(\"second.ps\")\n```\nHow can I tell matplotlib to start afresh for the second plot?","response":"There is a clear figure command, and it should do it for you: \n```\nplt.clf()\n```\nIf you have multiple subplots in the same figure \n```\nplt.cla()\n```\nclears the current axes.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/741877\/how-do-i-tell-matplotlib-that-i-am-done-with-a-plot","best_answers_votes":227,"tags":["python","matplotlib","plot"],"question_length":532,"response_length":173,"tag_count":3} {"question":"How do I create a second (new) plot, then later plot on the old one? I want to plot data, then create a new figure and plot data2, and finally come back to the original plot and plot data3, kinda like this: \n```\nimport numpy as np\nimport matplotlib as plt\n\nx = arange(5)\ny = np.exp(5)\nplt.figure()\nplt.plot(x, y)\n\nz = np.sin(x)\nplt.figure()\nplt.plot(x, z)\n\nw = np.cos(x)\nplt.figure(\"\"\"first figure\"\"\") # Here's the part I need\nplt.plot(x, w)\n```\nFYI How do I tell matplotlib that I am done with a plot? does something similar, but not quite! It doesn't let me get access to that original plot.","response":"If you find yourself doing things like this regularly it may be worth investigating the object-oriented interface to matplotlib. 
In your case: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(5)\ny = np.exp(x)\nfig1, ax1 = plt.subplots()\nax1.plot(x, y)\nax1.set_title(\"Axis 1 title\")\nax1.set_xlabel(\"X-label for axis 1\")\n\nz = np.sin(x)\nfig2, (ax2, ax3) = plt.subplots(nrows=2, ncols=1) # two axes on figure\nax2.plot(x, z)\nax3.plot(x, -z)\n\nw = np.cos(x)\nax1.plot(x, w) # can continue plotting on the first axis\n```\nIt is a little more verbose but it's much clearer and easier to keep track of, especially with several figures each with multiple subplots.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6916978\/how-do-i-create-a-second-new-plot-then-later-plot-on-the-old-one","best_answers_votes":197,"tags":["python","matplotlib","plot","figure"],"question_length":593,"response_length":672,"tag_count":4} {"question":"How to plot a high resolution graph I've used matplotlib for plotting some experimental results (discussed here: Looping over files and plotting). However, saving the picture by right-clicking the image gives very bad quality \/ low resolution images. \n```\nfrom glob import glob\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\n# loop over all files in the current directory ending with .txt\nfor fname in glob(\".\/*.txt\"):\n # read file, skip header (1 line) and unpack into 3 variables\n WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True)\n\n # first plot\n plt.plot(WL, T, label='BN', color='blue')\n\n plt.xlabel('Wavelength (nm)')\n plt.xlim(200,1000)\n plt.ylim(0,100)\n plt.ylabel('Transmittance, %')\n mpl.rcParams.update({'font.size': 14})\n #plt.legend(loc='lower center')\n plt.title('')\n plt.show()\n plt.clf()\n\n\n # second plot\n plt.plot(WL, ABS, label='BN', color='red')\n plt.xlabel('Wavelength (nm)')\n plt.xlim(200,1000)\n plt.ylabel('Absorbance, A')\n mpl.rcParams.update({'font.size': 14})\n #plt.legend()\n plt.title('')\n plt.show()\n plt.clf()\n```\nExample graph of what I'm looking for: example graph","response":"You can use savefig() to export to an image file: \n```\nplt.savefig('filename.png')\n```\nIn addition, you can set the dpi argument to a scalar value (default is 100). 
For example: \n```\nplt.savefig('filename.png', dpi=300)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/39870642\/how-to-plot-a-high-resolution-graph","best_answers_votes":303,"tags":["python","matplotlib"],"question_length":1212,"response_length":230,"tag_count":2} {"question":"How to display an image I tried to use IPython.display with the following code: \n```\nfrom IPython.display import display, Image\ndisplay(Image(filename='MyImage.png'))\n```\nI also tried to use matplotlib with the following code: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nplt.imshow(mpimg.imread('MyImage.png'))\n```\nIn both cases, nothing is displayed, not even an error message.","response":"If you are using matplotlib and want to show the image in your interactive notebook, try the following: \n```\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimg = mpimg.imread('your_image.png')\nimgplot = plt.imshow(img)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35286540\/how-to-display-an-image","best_answers_votes":388,"tags":["python","opencv","matplotlib","imshow"],"question_length":404,"response_length":270,"tag_count":4} {"question":"Plotting a list of (x, y) coordinates I have a list of pairs (a, b) that I would like to plot with matplotlib in python as actual x-y coordinates. Currently, it is making two plots, where the index of the list gives the x-coordinate, and the first plot's y values are the a's in the pairs and the second plot's y values are the b's in the pairs. To clarify, my data looks like this: li = [(a,b), (c,d), ... , (t, u)] and I want to do a one-liner that just calls plt.plot(). If I didn't require a one-liner I could trivially do: \n```py\nxs = [x[0] for x in li]\nys = [x[1] for x in li]\nplt.plot(xs, ys)\n```\nHow can I get matplotlib to plot these pairs as x-y coordinates? Sample data \n```py\n# sample data\nli = list(zip(range(1, 14), range(14, 27)))\n\nli \u2192 [(1, 14), (2, 15), (3, 16), (4, 17), (5, 18), (6, 19), (7, 20), (8, 21), (9, 22), (10, 23), (11, 24), (12, 25), (13, 26)]\n```\nIncorrect Plot \n```py\nplt.plot(li)\nplt.title('Incorrect Plot:\\nEach index of the tuple plotted as separate lines')\n```\nDesired Plot This produces the correct plot, but too many lines of code are used to unpack li. I need to unpack and plot with a single line of code, not multiple list-comprehensions. \n```py\nxs = [x[0] for x in li]\nys = [x[1] for x in li]\nplt.plot(xs, ys)\nplt.title('Correct Plot:\\nBut uses too many lines to unpack li')\n```","response":"Given li in the question: \n```\nli = list(zip(range(1, 14), range(14, 27)))\n```\nTo unpack the data from pairs into lists use zip: \n```\nx, y = zip(*li)\n\nx \u2192 (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13)\ny \u2192 (14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26)\n```\nThe one-liner uses the unpacking operator (*) to unpack the list of tuples for zip, and unpacks the zip object into the plot API. 
\n```\nplt.scatter(*zip(*li))\n```\n```\nplt.plot(*zip(*li))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21519203\/plotting-a-list-of-x-y-coordinates","best_answers_votes":280,"tags":["python","list","matplotlib","plot","coordinates"],"question_length":1316,"response_length":450,"tag_count":5} {"question":"Plt.show shows full graph but savefig is cropping the image My code is successfully saving images to file, but it is cropping important details from the right hand side. Answers exist for fixing this problem when it arises for plt.show, but it is the savefig command that is incorrectly producing the graph in this example. How can this be fixed? The relevant sample of my code: \n```\nimport glob\nimport os\nfor file in glob.glob(\"*.oax\"):\n try:\n spc_file = open(file, 'r').read()\n newName = file[6:8] + '-' + file[4:6] + '-' + file[0:4] + ' ' + file[8:12] + ' UTC (Observed) - No Sea Breeze Day'\n plt.title(newName, fontsize=12, loc='left')\n plt.savefig('X:\/' + newName + '.png') \n plt.show()\n except Exception:\n pass\n```\nAnd the images (top is from plt.show and bottom is the file produced by savefig):","response":"You may try \n```\nplt.savefig('X:\/' + newName + '.png', bbox_inches='tight')\n```\nOr you may define figure size like \n```\nfig = plt.figure(figsize=(9, 11))\n...\nplt.savefig(filename, bbox_inches='tight')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/37427362\/plt-show-shows-full-graph-but-savefig-is-cropping-the-image","best_answers_votes":351,"tags":["python","matplotlib"],"question_length":849,"response_length":206,"tag_count":2} {"question":"How to rotate x-axis tick labels in a pandas plot With the following code: \n```\nimport matplotlib\nmatplotlib.style.use('ggplot')\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndf = pd.DataFrame({ 'celltype':[\"foo\",\"bar\",\"qux\",\"woz\"], 's1':[5,9,1,7], 's2':[12,90,13,87]})\ndf = df[[\"celltype\",\"s1\",\"s2\"]]\ndf.set_index([\"celltype\"],inplace=True)\ndf.plot(kind='bar',alpha=0.75)\nplt.xlabel(\"\")\n```\nI made this plot: How can I rotate the x-axis tick labels to 0 degrees? I tried adding this but it did not work: \n```\nplt.set_xticklabels(df.index,rotation=90)\n```","response":"Pass param rot=0 to rotate the xticklabels: \n```\nimport matplotlib\nmatplotlib.style.use('ggplot')\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndf = pd.DataFrame({ 'celltype':[\"foo\",\"bar\",\"qux\",\"woz\"], 's1':[5,9,1,7], 's2':[12,90,13,87]})\ndf = df[[\"celltype\",\"s1\",\"s2\"]]\ndf.set_index([\"celltype\"],inplace=True)\ndf.plot(kind='bar',alpha=0.75, rot=0)\nplt.xlabel(\"\")\nplt.show()\n```\nyields plot:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/32244019\/how-to-rotate-x-axis-tick-labels-in-a-pandas-plot","best_answers_votes":329,"tags":["python","pandas","matplotlib"],"question_length":559,"response_length":398,"tag_count":3} {"question":"Remove or adapt border of frame of legend using matplotlib When plotting a plot using matplotlib: How to remove the box of the legend? How to change the color of the border of the legend box? How to remove only the border of the box of the legend?","response":"When plotting a plot using matplotlib: How to remove the box of the legend? \n```\nplt.legend(frameon=False)\n```\nHow to change the color of the border of the legend box? 
\n```\nleg = plt.legend()\nleg.get_frame().set_edgecolor('b')\n```\nHow to remove only the border of the box of the legend? \n```\nleg = plt.legend()\nleg.get_frame().set_linewidth(0.0)\n```\nFor the matplotlib object-oriented approach: \n```\naxes.legend(frameon=False)\n\nleg = axes.legend()\nleg.get_frame().set_edgecolor('b')\nleg.get_frame().set_linewidth(0.0)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25540259\/remove-or-adapt-border-of-frame-of-legend-using-matplotlib","best_answers_votes":327,"tags":["python","matplotlib"],"question_length":247,"response_length":521,"tag_count":2} {"question":"How to pick a new color for each plotted line within a figure I'd like to NOT specify a color for each plotted line, and have each line get a distinct color. But if I run: \n```\nfrom matplotlib import pyplot as plt\nfor i in range(20):\n plt.plot([0, 1], [i, i])\n\nplt.show()\n```\nthen I get this output: If you look at the image above, you can see that matplotlib attempts to pick colors for each line that are different, but eventually it re-uses colors - the top ten lines use the same colors as the bottom ten. I just want to stop it from repeating already used colors AND\/OR feed it a list of colors to use.","response":"I usually use the second one of these: \n```py\nfrom matplotlib.pyplot import cm\nimport numpy as np\n\n#variable n below should be number of curves to plot\n\n#version 1:\n\ncolor = cm.rainbow(np.linspace(0, 1, n))\nfor i, c in enumerate(color):\n plt.plot(x, y, c=c)\n\n#or version 2:\n\ncolor = iter(cm.rainbow(np.linspace(0, 1, n)))\nfor i in range(n):\n c = next(color)\n plt.plot(x, y, c=c)\n```\nExample of 2:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4971269\/how-to-pick-a-new-color-for-each-plotted-line-within-a-figure","best_answers_votes":223,"tags":["python","matplotlib","colormap"],"question_length":610,"response_length":402,"tag_count":3} {"question":"How to share x axes of two subplots after they have been created I'm trying to share two subplots' axes, but I need to share the x axis after the figure was created. E.g. I create this figure: \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nt = np.arange(1000)\/100.\nx = np.sin(2*np.pi*10*t)\ny = np.cos(2*np.pi*10*t)\n\nfig = plt.figure()\nax1 = plt.subplot(211)\nplt.plot(t,x)\nax2 = plt.subplot(212)\nplt.plot(t,y)\n\n# some code to share both x axes\n\nplt.show()\n```\nInstead of the comment I want to insert some code to share both x axes. How do I do this? There are some relevant-sounding attributes _shared_x_axes and _shared_y_axes when I check the figure axes (fig.get_axes()) but I don't know how to link them.","response":"The usual way to share axes is to create the shared properties at creation. Either \n```\nfig=plt.figure()\nax1 = plt.subplot(211)\nax2 = plt.subplot(212, sharex = ax1)\n```\nor \n```\nfig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True)\n```\nSharing the axes after they have been created should therefore not be necessary. However, if for any reason you need to share axes after they have been created (actually, using a different library which creates some subplots, like here, might be a reason), there would still be a solution: Using \n```\nax2.sharex(ax1)\n```\ncreates a link between the two axes, ax1 and ax2. 
In contrast to the sharing at creation time, you will have to set the xticklabels off manually for one of the axes (in case that is wanted). A complete example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nt= np.arange(1000)\/100.\nx = np.sin(2*np.pi*10*t)\ny = np.cos(2*np.pi*10*t)\n\nfig=plt.figure()\nax1 = plt.subplot(211)\nax2 = plt.subplot(212)\n\nax1.plot(t,x)\nax2.plot(t,y)\n\nax2.sharex(ax1)\nax1.set_xticklabels([])\n# ax2.autoscale() ## call autoscale if needed\n\nplt.show()\n```\nFor a list of axes you would do: \n```\nfor ax in axes[1:]:\n ax.sharex(axes[0])\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42973223\/how-to-share-x-axes-of-two-subplots-after-they-have-been-created","best_answers_votes":265,"tags":["python","matplotlib","axis"],"question_length":717,"response_length":1176,"tag_count":3} {"question":"How to connect scatterplot points with line using matplotlib I have two lists, dates and values. I want to plot them using matplotlib. The following creates a scatter plot of my data. \n```py\nimport matplotlib.pyplot as plt\n\nplt.scatter(dates,values)\nplt.show()\n```\nplt.plot(dates, values) creates a line graph. But what I really want is a scatterplot where the points are connected by a line. Similar to in R: \n```r\nplot(dates, values)\nlines(dates, value, type=\"l\")\n```\nwhich gives me a scatterplot of points overlaid with a line connecting the points. How do I do this in python?","response":"I think @Evert has the right answer: \n```\nplt.scatter(dates,values)\nplt.plot(dates, values)\nplt.show()\n```\nWhich is pretty much the same as \n```\nplt.plot(dates, values, '-o')\nplt.show()\n```\nYou can replace -o with another suitable format string as described in the documentation. You can also split the choices of line and marker styles using the linestyle= and marker= keyword arguments.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20130227\/how-to-connect-scatterplot-points-with-line-using-matplotlib","best_answers_votes":224,"tags":["python","matplotlib","scatter-plot"],"question_length":580,"response_length":388,"tag_count":3} {"question":"How to plot multiple functions on the same figure How can I plot the following 3 functions (i.e. sin, cos and the addition), on the domain t, in the same figure? \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nt = np.linspace(0, 2*np.pi, 400)\n\na = np.sin(t)\nb = np.cos(t)\nc = a + b\n```","response":"To plot multiple graphs on the same figure you will have to do: \n```\nfrom numpy import *\nimport math\nimport matplotlib.pyplot as plt\n\nt = linspace(0, 2*math.pi, 400)\na = sin(t)\nb = cos(t)\nc = a + b\n\nplt.plot(t, a, 'r') # plotting t, a separately \nplt.plot(t, b, 'b') # plotting t, b separately \nplt.plot(t, c, 'g') # plotting t, c separately \nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22276066\/how-to-plot-multiple-functions-on-the-same-figure","best_answers_votes":263,"tags":["python","function","matplotlib","plot","graph"],"question_length":296,"response_length":357,"tag_count":5} {"question":"Moving x-axis to the top of a plot in matplotlib Based on this question about heatmaps in matplotlib, I wanted to move the x-axis titles to the top of the plot. 
\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\ncolumn_labels = list('ABCD')\nrow_labels = list('WXYZ')\ndata = np.random.rand(4,4)\nfig, ax = plt.subplots()\nheatmap = ax.pcolor(data, cmap=plt.cm.Blues)\n\n# put the major ticks at the middle of each cell\nax.set_xticks(np.arange(data.shape[0])+0.5, minor=False)\nax.set_yticks(np.arange(data.shape[1])+0.5, minor=False)\n\n# want a more natural, table-like display\nax.invert_yaxis()\nax.xaxis.set_label_position('top') # <-- This doesn't work!\n\nax.set_xticklabels(row_labels, minor=False)\nax.set_yticklabels(column_labels, minor=False)\nplt.show()\n```\nHowever, calling matplotlib's set_label_position (as notated above) doesn't seem to have the desired effect. Here's my output: What am I doing wrong?","response":"Use \n```\nax.xaxis.tick_top()\n```\nto place the tick marks at the top of the image. The command \n```\nax.set_xlabel('X LABEL') \nax.xaxis.set_label_position('top')\n```\naffects the label, not the tick marks. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\ncolumn_labels = list('ABCD')\nrow_labels = list('WXYZ')\ndata = np.random.rand(4, 4)\nfig, ax = plt.subplots()\nheatmap = ax.pcolor(data, cmap=plt.cm.Blues)\n\n# put the major ticks at the middle of each cell\nax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False)\nax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False)\n\n# want a more natural, table-like display\nax.invert_yaxis()\nax.xaxis.tick_top()\n\nax.set_xticklabels(column_labels, minor=False)\nax.set_yticklabels(row_labels, minor=False)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14406214\/moving-x-axis-to-the-top-of-a-plot-in-matplotlib","best_answers_votes":242,"tags":["python","matplotlib","plot","visualization"],"question_length":910,"response_length":770,"tag_count":4} {"question":"Prevent scientific notation I have the following code: \n```\nplt.plot(range(2003,2012,1),range(200300,201200,100))\n# several solutions from other questions have not worked, including\n# plt.ticklabel_format(style='sci', axis='x', scilimits=(-1000000,1000000))\n# ax.get_xaxis().get_major_formatter().set_useOffset(False)\nplt.show()\n```\nwhich produces the following plot: How do I prevent scientific notation here? Is ticklabel_format broken? The following also does not resolve the issue of actually removing the offset: \n```py\nplt.plot(np.arange(1e6, 3 * 1e7, 1e6))\nplt.ticklabel_format(useOffset=False)\n```","response":"In your case, you actually want to disable the offset. Using scientific notation is a separate setting from showing things in terms of an offset value. However, ax.ticklabel_format(useOffset=False) should have worked (though you've listed it as one of the things that didn't). For example: \n```\nfig, ax = plt.subplots()\nax.plot(range(2003,2012,1),range(200300,201200,100))\nax.ticklabel_format(useOffset=False)\nplt.show()\n```\nIf you want to disable both the offset and scientific notation, you'd use ax.ticklabel_format(useOffset=False, style='plain'). Difference between \"offset\" and \"scientific notation\" In matplotlib axis formatting, \"scientific notation\" refers to a multiplier for the numbers shown, while the \"offset\" is a separate term that is added.
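A quick sketch for inspecting what the formatter is doing (this assumes the default ScalarFormatter; the offset text is only populated once the figure has actually been drawn): \n```\nfmt = ax.xaxis.get_major_formatter()\nfmt.get_offset() # the offset\/multiplier text drawn at the axis corner ('' if none)\n```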
Consider this example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(1000, 1001, 100)\ny = np.linspace(1e-9, 1e9, 100)\n\nfig, ax = plt.subplots()\nax.plot(x, y)\nplt.show()\n```\nThe x-axis will have an offset (note the + sign) and the y-axis will use scientific notation (as a multiplier -- No plus sign). We can disable either one separately. The most convenient way is the ax.ticklabel_format method (or plt.ticklabel_format). For example, if we call: \n```\nax.ticklabel_format(style='plain')\n```\nWe'll disable the scientific notation on the y-axis: And if we call \n```\nax.ticklabel_format(useOffset=False)\n```\nWe'll disable the offset on the x-axis, but leave the y-axis scientific notation untouched: Finally, we can disable both through: \n```\nax.ticklabel_format(useOffset=False, style='plain')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28371674\/prevent-scientific-notation","best_answers_votes":262,"tags":["python","matplotlib","plot","scientific-notation","xticks"],"question_length":586,"response_length":1586,"tag_count":5} {"question":"How to maximize a plt.show() window Just for curiosity I would like to know how to do this in the code below. I have been searching for an answer but is useless. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\ndata=np.random.exponential(scale=180, size=10000)\nprint ('el valor medio de la distribucion exponencial es: ')\nprint np.average(data)\nplt.hist(data,bins=len(data)**0.5,normed=True, cumulative=True, facecolor='red', label='datos tamano paqutes acumulativa', alpha=0.5)\nplt.legend()\nplt.xlabel('algo')\nplt.ylabel('algo')\nplt.grid()\nplt.show()\n```","response":"I am on a Windows (WIN7), running Python 2.7.5 & Matplotlib 1.3.1. I was able to maximize Figure windows for TkAgg, QT4Agg, and wxAgg using the following lines: \n```py\nfrom matplotlib import pyplot as plt\n\n### for 'TkAgg' backend\nplt.figure(1)\nplt.switch_backend('TkAgg') #TkAgg (instead Qt4Agg)\nprint '#1 Backend:',plt.get_backend()\nplt.plot([1,2,6,4])\nmng = plt.get_current_fig_manager()\n### works on Ubuntu??? 
>> did NOT work on Windows\n# mng.resize(*mng.window.maxsize())\nmng.window.state('zoomed') #works fine on Windows!\nplt.show() #close the figure to run the next section\n\n### for 'wxAgg' backend\nplt.figure(2)\nplt.switch_backend('wxAgg')\nprint '#2 Backend:',plt.get_backend()\nplt.plot([1,2,6,4])\nmng = plt.get_current_fig_manager()\nmng.frame.Maximize(True)\nplt.show() #close the figure to run the next section\n\n### for 'Qt4Agg' backend\nplt.figure(3)\nplt.switch_backend('QT4Agg') #default on my system\nprint '#3 Backend:',plt.get_backend()\nplt.plot([1,2,6,4])\nfigManager = plt.get_current_fig_manager()\nfigManager.window.showMaximized()\nplt.show()\n```\nif you want to maximize multiple figures you can use \n```\nfor fig in figs:\n mng = fig.canvas.manager\n # ...\n```\nHope this summary of the previous answers (and some additions) combined in a working example (at least for Windows) helps.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12439588\/how-to-maximize-a-plt-show-window","best_answers_votes":212,"tags":["python","matplotlib"],"question_length":562,"response_length":1303,"tag_count":2} {"question":"How to create major and minor gridlines with different linestyles I am currently using matplotlib.pyplot to create graphs and would like to have the major gridlines solid and black and the minor ones either greyed or dashed. In the grid properties, which=both\/major\/minor, and then color and linestyle are defined simply by linestyle. Is there a way to specify minor linestyle only? The appropriate code I have so far is \n```\nplt.plot(current, counts, 'rd', markersize=8)\nplt.yscale('log')\nplt.grid(b=True, which='both', color='0.65', linestyle='-')\n```","response":"Actually, it is as simple as setting major and minor separately: \n```python\nplot([23, 456, 676, 89, 906, 34, 2345])\n# [<matplotlib.lines.Line2D object at 0x...>]\nyscale('log')\ngrid(visible=True, which='major', color='b', linestyle='-')\ngrid(visible=True, which='minor', color='r', linestyle='--')\n```\nThe gotcha with minor grids is that you have to have minor tick marks turned on too. In the above code this is done by yscale('log'), but it can also be done with plt.minorticks_on(). Note: before matplotlib 3.5, the visible parameter was named b.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9127434\/how-to-create-major-and-minor-gridlines-with-different-linestyles","best_answers_votes":230,"tags":["python","matplotlib","gridlines"],"question_length":552,"response_length":578,"tag_count":3} {"question":"Adding an arbitrary line to a matplotlib plot in ipython notebook I'm rather new to both python\/matplotlib and using it through the ipython notebook. I'm trying to add some annotation lines to an existing graph and I can't figure out how to render the lines on a graph. So, for example, if I plot the following: \n```\nimport numpy as np\nnp.random.seed(5)\nx = arange(1, 101)\ny = 20 + 3 * x + np.random.normal(0, 60, 100)\np = plot(x, y, \"o\")\n```\nI get the following graph: So how would I add a vertical line from (70,100) up to (70,250)? What about a diagonal line from (70,100) to (90,200)? I've tried a few things with Line2D() resulting in nothing but confusion on my part. In R I would simply use the segments() function which would add line segments.
Is there an equivalent in matplotlib?","response":"You can directly plot the lines you want by feeding the plot command with the corresponding data (boundaries of the segments): plot([x1, x2], [y1, y2], color='k', linestyle='-', linewidth=2) (of course you can choose the color, line width, line style, etc.) From your example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(5)\nx = np.arange(1, 101)\ny = 20 + 3 * x + np.random.normal(0, 60, 100)\nplt.plot(x, y, \"o\")\n\n\n# draw vertical line from (70,100) to (70, 250)\nplt.plot([70, 70], [100, 250], 'k-', lw=2)\n\n# draw diagonal line from (70, 90) to (90, 200)\nplt.plot([70, 90], [90, 200], 'k-')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12864294\/adding-an-arbitrary-line-to-a-matplotlib-plot-in-ipython-notebook","best_answers_votes":231,"tags":["matplotlib","ipython"],"question_length":791,"response_length":633,"tag_count":2} {"question":"How to display multiple images in one figure [duplicate] This question already has answers here: Multiple figures in a single window (7 answers) Closed 7 years ago. I am trying to display 20 random images on a single Figure. The images are indeed displayed, but they are overlaid. I am using: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nw=10\nh=10\nfig=plt.figure()\nfor i in range(1,20):\n img = np.random.randint(10, size=(h,w))\n fig.add_subplot(i,2,1)\n plt.imshow(img)\nplt.show()\n```\nI would like them to appear naturally in a grid layout (say 4x5), each with the same size. Part of the problem is that I do not know what the arguments to add_subplot mean. The documentation states that the arguments are the number of rows, number of columns, and plot number. There is no positioning argument. Additionally, the plot number can only be 1 or 2. How can I achieve this?","response":"Here is my approach that you may try: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nw = 10\nh = 10\nfig = plt.figure(figsize=(8, 8))\ncolumns = 4\nrows = 5\nfor i in range(1, columns*rows +1):\n img = np.random.randint(10, size=(h,w))\n fig.add_subplot(rows, columns, i)\n plt.imshow(img)\nplt.show()\n```\nThe resulting image: (Original answer date: Oct 7 '17 at 4:20) Edit 1 Since this answer is popular beyond my expectation. And I see that a small change is needed to enable flexibility for the manipulation of the individual plots. So that I offer this new version to the original code. 
In essence, it provides:\n- access to individual axes of subplots\n- the possibility to plot more features on selected axes\/subplots\nNew code: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nw = 10\nh = 10\nfig = plt.figure(figsize=(9, 13))\ncolumns = 4\nrows = 5\n\n# prep (x,y) for extra plotting\nxs = np.linspace(0, 2*np.pi, 60) # from 0 to 2pi\nys = np.abs(np.sin(xs)) # absolute of sine\n\n# ax enables access to manipulate each of subplots\nax = []\n\nfor i in range(columns*rows):\n img = np.random.randint(10, size=(h,w))\n # create subplot and append to ax\n ax.append( fig.add_subplot(rows, columns, i+1) )\n ax[-1].set_title(\"ax:\"+str(i)) # set title\n plt.imshow(img, alpha=0.25)\n\n# do extra plots on selected axes\/subplots\n# note: index starts with 0\nax[2].plot(xs, 3*ys)\nax[19].plot(ys**2, xs)\n\nplt.show() # finally, render the plot\n```\nThe resulting plot: Edit 2 In the previous example, the code provides access to the sub-plots with a single index, which is inconvenient when the figure has many rows\/columns of sub-plots. Here is an alternative. The code below provides access to the sub-plots with [row_index][column_index], which is more suitable for manipulating an array of many sub-plots. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# settings\nh, w = 10, 10 # for raster image\nnrows, ncols = 5, 4 # array of sub-plots\nfigsize = [6, 8] # figure size, inches\n\n# prep (x,y) for extra plotting on selected sub-plots\nxs = np.linspace(0, 2*np.pi, 60) # from 0 to 2pi\nys = np.abs(np.sin(xs)) # absolute of sine\n\n# create figure (fig), and array of axes (ax)\nfig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize)\n\n# plot simple raster image on each sub-plot\nfor i, axi in enumerate(ax.flat):\n # i runs from 0 to (nrows*ncols-1)\n # axi is equivalent with ax[rowid][colid]\n img = np.random.randint(10, size=(h,w))\n axi.imshow(img, alpha=0.25)\n # get indices of row\/column\n rowid = i \/\/ ncols\n colid = i % ncols\n # write row\/col indices as axes' title for identification\n axi.set_title(\"Row:\"+str(rowid)+\", Col:\"+str(colid))\n\n# one can access the axes by ax[row_id][col_id]\n# do additional plotting on ax[row_id][col_id] of your choice\nax[0][2].plot(xs, 3*ys, color='red', linewidth=3)\nax[4][3].plot(ys**2, xs, color='green', linewidth=3)\n\nplt.tight_layout()\nplt.show()\n```\nThe resulting plot: Ticks and Tick-labels for Array of Subplots Some of the ticks and tick-labels accompanying the subplots can be hidden to get a cleaner plot if all of the subplots share the same value ranges. All of the ticks and tick-labels can be hidden except for the outside edges on the left and bottom, like in this plot. To achieve the plot with only shared tick-labels on the left and bottom edges, you can do the following:\n- Add the options sharex=True, sharey=True in fig, ax = plt.subplots(). That line of code will become: \n```\nfig,ax=plt.subplots(nrows=nrows,ncols=ncols,figsize=figsize,sharex=True,sharey=True)\n```\n- To specify the required number of ticks and labels to plot, add this code inside the body of for i, axi in enumerate(ax.flat): \n```\naxi.xaxis.set_major_locator(plt.MaxNLocator(5))\naxi.yaxis.set_major_locator(plt.MaxNLocator(4))\n```\nThe numbers 5 and 4 are the number of ticks\/tick_labels to plot.
You may need other values that suit your plots.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/46615554\/how-to-display-multiple-images-in-one-figure","best_answers_votes":332,"tags":["python","matplotlib","imshow"],"question_length":888,"response_length":3944,"tag_count":3} {"question":"Can Pandas plot a histogram of dates? I've taken my Series and coerced it to a datetime column of dtype=datetime64[ns] (though only need day resolution...not sure how to change). \n```\nimport pandas as pd\ndf = pd.read_csv('somefile.csv')\ncolumn = df['date']\ncolumn = pd.to_datetime(column, coerce=True)\n```\nbut plotting doesn't work: \n```\nipdb> column.plot(kind='hist')\n*** TypeError: ufunc add cannot use operands with types dtype('<M8[ns]') and dtype('<M8[ns]')\n```\n```\n# Traceback (most recent call last):\n# File \"<stdin>\", line 1, in <module>\n# File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/axes.py\", line 4904, in bar\n# self.add_patch(r)\n# File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/axes.py\", line 1570, in add_patch\n# self._update_patch_limits(p)\n# File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/axes.py\", line 1588, in _update_patch_limits\n# xys = patch.get_patch_transform().transform(vertices)\n# File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/patches.py\", line 580, in get_patch_transform\n# self._update_patch_transform()\n# File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/patches.py\", line 576, in _update_patch_transform\n# bbox = transforms.Bbox.from_bounds(x, y, width, height)\n# File \"\/usr\/local\/lib\/python2.7\/site-packages\/matplotlib\/transforms.py\", line 786, in from_bounds\n# return Bbox.from_extents(x0, y0, x0 + width, y0 + height)\n# TypeError: coercing to Unicode: need string or buffer, float found\n```","response":"You can do it in two lines by first plotting the bar chart and then setting the appropriate ticks: \n```\nimport matplotlib.pyplot as plt\n\nD = {u'Label1':26, u'Label2': 17, u'Label3':30}\n\nplt.bar(range(len(D)), list(D.values()), align='center')\nplt.xticks(range(len(D)), list(D.keys()))\n# # for python 2.x:\n# plt.bar(range(len(D)), D.values(), align='center') # python 2.x\n# plt.xticks(range(len(D)), D.keys()) # in python 2.x\n\nplt.show()\n```\nNote that the penultimate line should read plt.xticks(range(len(D)), list(D.keys())) in python3, because D.keys() returns a view object, which matplotlib cannot use directly.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16010869\/plot-a-bar-using-matplotlib-using-a-dictionary","best_answers_votes":208,"tags":["python","matplotlib","plot"],"question_length":1441,"response_length":615,"tag_count":3} {"question":"OpenCV giving wrong color to colored images on loading I'm loading in a color image in Python OpenCV and plotting the same. However, the image I get has its colors all mixed up.
Here is the code: \n```\nimport cv2\nimport numpy as np\nfrom numpy import array, arange, uint8 \nfrom matplotlib import pyplot as plt\n\n\nimg = cv2.imread('lena_caption.png', cv2.IMREAD_COLOR)\nbw_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\nimages = []\nimages.append(img)\nimages.append(bw_img)\n\ntitles = ['Original Image','BW Image']\n\nfor i in xrange(len(images)):\n plt.subplot(1,2,i+1),plt.imshow(images[i],'gray')\n plt.title(titles[i])\n plt.xticks([]),plt.yticks([])\n\nplt.show()\n```\nHere is the original image: And here is the plotted image:","response":"OpenCV uses BGR as its default colour order for images, matplotlib uses RGB. When you display an image loaded with OpenCV in matplotlib the channels will be back to front. The easiest way of fixing this is to use OpenCV to explicitly convert it back to RGB, much like you do when creating the greyscale image. \n```\nRGB_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n```\nAnd then use that in your plot.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/39316447\/opencv-giving-wrong-color-to-colored-images-on-loading","best_answers_votes":258,"tags":["python","opencv","matplotlib","colors","rgb"],"question_length":725,"response_length":397,"tag_count":5} {"question":"plot with custom text for x axis points I am drawing a plot using matplotlib and python like the sample code below. \n```\nx = array([0,1,2,3])\ny = array([20,21,22,23])\nplot(x,y)\nshow()\n```\nWith the code above, on the x axis I will see the values 0.0, 0.5, 1.0, 1.5, i.e. the same values as my reference x values. Is there any way to map each point of x to a different string? So for example I want the x axis to show month names (strings Jun, July, ...) or other strings like people's names (\"John\", \"Arnold\", ...) or clock times (\"12:20\", \"12:21\", \"12:22\", ...). Do you know what I can do or what function to have a look at? Could matplotlib.ticker be of help for my purpose?","response":"You can manually set xticks (and yticks) using pyplot.xticks: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.array([0,1,2,3])\ny = np.array([20,21,22,23])\nmy_xticks = ['John','Arnold','Mavis','Matt']\nplt.xticks(x, my_xticks)\nplt.plot(x, y)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3100985\/plot-with-custom-text-for-x-axis-points","best_answers_votes":260,"tags":["python","matplotlib"],"question_length":677,"response_length":270,"tag_count":2} {"question":"Scatter plot and Color mapping in Python I have a range of points x and y stored in numpy arrays. Those represent x(t) and y(t) where t=0...T-1 I am plotting a scatter plot using \n```\nimport matplotlib.pyplot as plt\n\nplt.scatter(x,y)\nplt.show()\n```\nI would like to have a colormap representing the time (therefore coloring the points depending on the index in the numpy arrays) What is the easiest way to do so?","response":"Here is an example \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.random.rand(100)\ny = np.random.rand(100)\nt = np.arange(100)\n\nplt.scatter(x, y, c=t)\nplt.show()\n```\nHere you are setting the color based on the index, t, which is just an array of [0, 1, ..., 99].
Perhaps an easier-to-understand example is the slightly simpler \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(100)\ny = x\nt = x\nplt.scatter(x, y, c=t)\nplt.show()\n```\nNote that the array you pass as c doesn't need to have any particular order or type, i.e. it doesn't need to be sorted or integers as in these examples. The plotting routine will scale the colormap such that the minimum\/maximum values in c correspond to the bottom\/top of the colormap. Colormaps You can change the colormap by adding \n```\nimport matplotlib.cm as cm\nplt.scatter(x, y, c=t, cmap=cm.cmap_name)\n```\nImporting matplotlib.cm is optional as you can call colormaps as cmap=\"cmap_name\" just as well. There is a reference page of colormaps showing what each looks like. Also know that you can reverse a colormap by simply calling it as cmap_name_r. So either \n```\nplt.scatter(x, y, c=t, cmap=cm.cmap_name_r)\n# or\nplt.scatter(x, y, c=t, cmap=\"cmap_name_r\")\n```\nwill work. Examples are \"jet_r\" or cm.plasma_r. Here's an example with the new 1.5 colormap viridis: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(100)\ny = x\nt = x\nfig, (ax1, ax2) = plt.subplots(1, 2)\nax1.scatter(x, y, c=t, cmap='viridis')\nax2.scatter(x, y, c=t, cmap='viridis_r')\nplt.show()\n```\nColorbars You can add a colorbar by using \n```\nplt.scatter(x, y, c=t, cmap='viridis')\nplt.colorbar()\nplt.show()\n```\nNote that if you are using figures and subplots explicitly (e.g. fig, ax = plt.subplots() or ax = fig.add_subplot(111)), adding a colorbar can be a bit more involved. Good examples can be found here for a single subplot colorbar and here for 2 subplots 1 colorbar.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17682216\/scatter-plot-and-color-mapping-in-python","best_answers_votes":241,"tags":["python","matplotlib"],"question_length":411,"response_length":1938,"tag_count":2} {"question":"Change figure window title in pylab How can I set a figure window's title in pylab\/python? \n```\nfig = figure(9) # 9 is now the title of the window\nfig.set_title(\"Test\") #doesn't work\nfig.title = \"Test\" #doesn't work\n```","response":"If you want to actually change the window you can do: \n```\nfig = pylab.gcf()\nfig.canvas.manager.set_window_title('Test')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5812960\/change-figure-window-title-in-pylab","best_answers_votes":185,"tags":["python","matplotlib"],"question_length":219,"response_length":124,"tag_count":2} {"question":"Defining the midpoint of a colormap in matplotlib I want to set the middle point of a colormap, i.e., my data goes from -5 to 10 and I want zero to be the middle point. I think the way to do it is by subclassing normalize and using the norm, but I didn't find any example and it is not clear to me, what exactly have I to implement?","response":"I know this is late to the game, but I just went through this process and came up with a solution that perhaps less robust than subclassing normalize, but much simpler. I thought it'd be good to share it here for posterity. The function \n```\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import AxesGrid\n\ndef shiftedColorMap(cmap, start=0, midpoint=0.5, stop=1.0, name='shiftedcmap'):\n '''\n Function to offset the \"center\" of a colormap. 
Useful for\n data with a negative min and positive max and you want the\n middle of the colormap's dynamic range to be at zero.\n\n Input\n -----\n cmap : The matplotlib colormap to be altered\n start : Offset from lowest point in the colormap's range.\n Defaults to 0.0 (no lower offset). Should be between\n 0.0 and `midpoint`.\n midpoint : The new center of the colormap. Defaults to \n 0.5 (no shift). Should be between 0.0 and 1.0. In\n general, this should be 1 - vmax \/ (vmax + abs(vmin))\n For example if your data range from -15.0 to +5.0 and\n you want the center of the colormap at 0.0, `midpoint`\n should be set to 1 - 5\/(5 + 15)) or 0.75\n stop : Offset from highest point in the colormap's range.\n Defaults to 1.0 (no upper offset). Should be between\n `midpoint` and 1.0.\n '''\n cdict = {\n 'red': [],\n 'green': [],\n 'blue': [],\n 'alpha': []\n }\n\n # regular index to compute the colors\n reg_index = np.linspace(start, stop, 257)\n\n # shifted index to match the data\n shift_index = np.hstack([\n np.linspace(0.0, midpoint, 128, endpoint=False), \n np.linspace(midpoint, 1.0, 129, endpoint=True)\n ])\n\n for ri, si in zip(reg_index, shift_index):\n r, g, b, a = cmap(ri)\n\n cdict['red'].append((si, r, r))\n cdict['green'].append((si, g, g))\n cdict['blue'].append((si, b, b))\n cdict['alpha'].append((si, a, a))\n\n newcmap = matplotlib.colors.LinearSegmentedColormap(name, cdict)\n plt.register_cmap(cmap=newcmap)\n\n return newcmap\n```\nAn example \n```\nbiased_data = np.random.random_integers(low=-15, high=5, size=(37,37))\n\norig_cmap = matplotlib.cm.coolwarm\nshifted_cmap = shiftedColorMap(orig_cmap, midpoint=0.75, name='shifted')\nshrunk_cmap = shiftedColorMap(orig_cmap, start=0.15, midpoint=0.75, stop=0.85, name='shrunk')\n\nfig = plt.figure(figsize=(6,6))\ngrid = AxesGrid(fig, 111, nrows_ncols=(2, 2), axes_pad=0.5,\n label_mode=\"1\", share_all=True,\n cbar_location=\"right\", cbar_mode=\"each\",\n cbar_size=\"7%\", cbar_pad=\"2%\")\n\n# normal cmap\nim0 = grid[0].imshow(biased_data, interpolation=\"none\", cmap=orig_cmap)\ngrid.cbar_axes[0].colorbar(im0)\ngrid[0].set_title('Default behavior (hard to see bias)', fontsize=8)\n\nim1 = grid[1].imshow(biased_data, interpolation=\"none\", cmap=orig_cmap, vmax=15, vmin=-15)\ngrid.cbar_axes[1].colorbar(im1)\ngrid[1].set_title('Centered zero manually,\\nbut lost upper end of dynamic range', fontsize=8)\n\nim2 = grid[2].imshow(biased_data, interpolation=\"none\", cmap=shifted_cmap)\ngrid.cbar_axes[2].colorbar(im2)\ngrid[2].set_title('Recentered cmap with function', fontsize=8)\n\nim3 = grid[3].imshow(biased_data, interpolation=\"none\", cmap=shrunk_cmap)\ngrid.cbar_axes[3].colorbar(im3)\ngrid[3].set_title('Recentered cmap with function\\nand shrunk range', fontsize=8)\n\nfor ax in grid:\n ax.set_yticks([])\n ax.set_xticks([])\n```\nResults of the example:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7404116\/defining-the-midpoint-of-a-colormap-in-matplotlib","best_answers_votes":101,"tags":["python","matplotlib","colormap"],"question_length":332,"response_length":3530,"tag_count":3} {"question":"Change grid interval and specify tick labels I am trying to plot counts in gridded plots, but I haven't been able to figure out how to go about it. I want: to have dotted grids at an interval of 5; to have major tick labels only every 20; for the ticks to be outside the plot; and to have \"counts\" inside those grids. 
I have checked for potential duplicates, such as here and here, but have not been able to figure it out. This is my code: \n```py\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import MultipleLocator, FormatStrFormatter\n\nfor x, y, count in data.values():\n\n fig = plt.figure()\n ax = fig.add_subplot(111)\n\n ax.annotate(count, xy = (x, y), size = 5)\n # overwrites and I only get the last data point\n\n plt.close()\n # Without this, I get a \"fail to allocate bitmap\" error.\n\nplt.suptitle('Number of counts', fontsize = 12)\nax.set_xlabel('x')\nax.set_ylabel('y')\nplt.axes().set_aspect('equal')\n\nplt.axis([0, 1000, 0, 1000])\n# This gives an interval of 200.\n\nmajorLocator = MultipleLocator(20)\nmajorFormatter = FormatStrFormatter('%d')\nminorLocator = MultipleLocator(5)\n# I want the minor grid to be 5 and the major grid to be 20.\nplt.grid()\n```\nThis is what I get.","response":"There are several problems in your code. First the big ones:\n- You are creating a new figure and a new axes in every iteration of your loop → put fig = plt.figure() and ax = fig.add_subplot(1,1,1) outside of the loop.\n- Don't use the Locators. Call the functions ax.set_xticks() and ax.grid() with the correct keywords.\n- With plt.axes() you are creating a new axes again. Use ax.set_aspect('equal').\nThe minor things:\n- You should not mix the MATLAB-like syntax like plt.axis() with the object-oriented syntax. Use ax.set_xlim(a,b) and ax.set_ylim(a,b).\nThis should be a working minimal example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\n\n# Major ticks every 20, minor ticks every 5\nmajor_ticks = np.arange(0, 101, 20)\nminor_ticks = np.arange(0, 101, 5)\n\nax.set_xticks(major_ticks)\nax.set_xticks(minor_ticks, minor=True)\nax.set_yticks(major_ticks)\nax.set_yticks(minor_ticks, minor=True)\n\n# And a corresponding grid\nax.grid(which='both')\n\n# Or if you want different settings for the grids:\nax.grid(which='minor', alpha=0.2)\nax.grid(which='major', alpha=0.5)\n\nplt.show()\n```\nOutput is this:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24943991\/change-grid-interval-and-specify-tick-labels","best_answers_votes":280,"tags":["python","matplotlib","xticks","gridlines","yticks"],"question_length":1203,"response_length":1136,"tag_count":5} {"question":"How to set xticks in subplots If I plot a single imshow plot I can use \n```py\nfig, ax = plt.subplots()\nax.imshow(data)\nplt.xticks( [4, 14, 24], [5, 15, 25] )\n```\nto replace my xtick labels. Now, I am plotting 12 imshow plots using \n```py\nf, axarr = plt.subplots(4, 3)\naxarr[i, j].imshow(data)\n```\nHow can I change xticks just for one of these subplots? I can only access the axes of the subplots with axarr[i, j]. How can I access plt just for one particular subplot?","response":"
As an example (this also illustrates using setp to change the properties of all of the subplots): \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=3, ncols=4)\n\n# Set the ticks and ticklabels for all axes\nplt.setp(axes, xticks=[0.1, 0.5, 0.9], xticklabels=['a', 'b', 'c'],\n yticks=[1, 2, 3])\n\n# Use the pyplot interface to change just one subplot...\nplt.sca(axes[1, 1])\nplt.xticks(range(3), ['A', 'Big', 'Cat'], color='red')\n\nfig.tight_layout()\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19626530\/how-to-set-xticks-in-subplots","best_answers_votes":221,"tags":["python","matplotlib","subplot"],"question_length":468,"response_length":686,"tag_count":3} {"question":"Getting vertical gridlines to appear in line plot in matplotlib I want to get both horizontal and vertical grid lines on my plot but only the horizontal grid lines are appearing by default. I am using a pandas.DataFrame from an sql query in python to generate a line plot with dates on the x-axis. I'm not sure why they do not appear on the dates and I have tried to search for an answer to this but couldn't find one. All I have used to plot the graph is the simple code below. \n```\ndata.plot()\ngrid('on')\n```\ndata is the DataFrame which contains the dates and the data from the sql query. I have also tried adding the code below but I still get the same output with no vertical grid lines. \n```\nax = plt.axes() \nax.yaxis.grid() # horizontal lines\nax.xaxis.grid() # vertical lines\n```\nAny suggestions?","response":"You may need to give boolean arg in your calls, e.g. use ax.yaxis.grid(True) instead of ax.yaxis.grid(). Additionally, since you are using both of them you can combine into ax.grid, which works on both, rather than doing it once for each dimension. \n```\nax = plt.gca()\nax.grid(True)\n```\nThat should sort you out.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16074392\/getting-vertical-gridlines-to-appear-in-line-plot-in-matplotlib","best_answers_votes":127,"tags":["python","matplotlib","pandas"],"question_length":809,"response_length":312,"tag_count":3} {"question":"prevent plot from showing in jupyter notebook How can I prevent a specific plot to be shown in Jupyter notebook? I have several plots in a notebook but I want a subset of them to be saved to a file and not shown on the notebook as this slows considerably. A minimal working example for a Jupyter notebook is: \n```\n%matplotlib inline \nfrom numpy.random import randn\nfrom matplotlib.pyplot import plot, figure\na=randn(3)\nb=randn(3)\nfor i in range(10):\n fig=figure()\n plot(b)\n fname='s%03d.png'%i\n fig.savefig(fname)\n if(i%5==0):\n figure()\n plot(a)\n```\nAs you can see I have two types of plots, a and b. I want a's to be plotted and shown and I don't want the b plots to be shown, I just want them them to be saved in a file. Hopefully this will speed things a bit and won't pollute my notebook with figures I don't need to see. Thank you for your time","response":"Perhaps just clear the axis, for example: \n```\nfig = plt.figure()\nplt.plot(range(10))\nfig.savefig(\"save_file_name.pdf\")\nplt.close()\n```\nThis will not plot the output in inline mode. 
I can't work out if it is really clearing the data though.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18717877\/prevent-plot-from-showing-in-jupyter-notebook","best_answers_votes":163,"tags":["python","matplotlib","jupyter-notebook","figures"],"question_length":878,"response_length":240,"tag_count":4} {"question":"How to generate random colors in matplotlib? What's the trivial example of how to generate random colors for passing to plotting functions? I'm calling scatter inside a loop and want each plot a different color. \n```\nfor X,Y in data:\n scatter(X, Y, c=??)\n```\nc: a color. c can be a single color format string, or a sequence of color specifications of length N, or a sequence of N numbers to be mapped to colors using the cmap and norm specified via kwargs (see below). Note that c should not be a single numeric RGB or RGBA sequence because that is indistinguishable from an array of values to be colormapped. c can be a 2-D array in which the rows are RGB or RGBA, however.","response":"I'm calling scatter inside a loop and want each plot in a different color. Based on that, and on your answer: It seems to me that you actually want n distinct colors for your datasets; you want to map the integer indices 0, 1, ..., n-1 to distinct RGB colors. Something like: Here is the function to do it: \n```\nimport matplotlib.pyplot as plt\n\ndef get_cmap(n, name='hsv'):\n '''Returns a function that maps each index in 0, 1, ..., n-1 to a distinct \n RGB color; the keyword argument name must be a standard mpl colormap name.'''\n return plt.cm.get_cmap(name, n)\n```\nUsage in your pseudo-code snippet in the question: \n```\ncmap = get_cmap(len(data))\nfor i, (X, Y) in enumerate(data):\n scatter(X, Y, c=cmap(i))\n```\nI generated the figure in my answer with the following code: \n```\nimport matplotlib.pyplot as plt\n\ndef get_cmap(n, name='hsv'):\n '''Returns a function that maps each index in 0, 1, ..., n-1 to a distinct \n RGB color; the keyword argument name must be a standard mpl colormap name.'''\n return plt.cm.get_cmap(name, n)\n\ndef main():\n N = 30\n fig=plt.figure()\n ax=fig.add_subplot(111) \n plt.axis('scaled')\n ax.set_xlim([ 0, N])\n ax.set_ylim([-0.5, 0.5])\n cmap = get_cmap(N)\n for i in range(N):\n rect = plt.Rectangle((i, -0.5), 1, 1, facecolor=cmap(i))\n ax.add_artist(rect)\n ax.set_yticks([])\n plt.show()\n\nif __name__=='__main__':\n main()\n```\nTested with both Python 2.7 & matplotlib 1.5, and with Python 3.5 & matplotlib 2.0. It works as expected.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14720331\/how-to-generate-random-colors-in-matplotlib","best_answers_votes":201,"tags":["python","matplotlib"],"question_length":676,"response_length":1526,"tag_count":2} {"question":"Plotting images side by side using matplotlib I was wondering how I am able to plot images side by side using matplotlib for example something like this: The closest I got is this: This was produced by using this code: \n```\nf, axarr = plt.subplots(2,2)\naxarr[0,0] = plt.imshow(image_datas[0])\naxarr[0,1] = plt.imshow(image_datas[1])\naxarr[1,0] = plt.imshow(image_datas[2])\naxarr[1,1] = plt.imshow(image_datas[3])\n```\nBut I can't seem to get the other images to show. I'm thinking that there must be a better way to do this as I would imagine trying to manage the indexes would be a pain. 
I have looked through the documentation although I have a feeling I may be looking at the wrong one. Would anyone be able to provide me with an example or point me in the right direction? EDIT: See the answer from @duhaime if you want a function to automatically determine the grid size.","response":"The problem you face is that you try to assign the return of imshow (which is a matplotlib.image.AxesImage) to an existing axes object. The correct way of plotting image data to the different axes in axarr would be \n```\nf, axarr = plt.subplots(2,2)\naxarr[0,0].imshow(image_datas[0])\naxarr[0,1].imshow(image_datas[1])\naxarr[1,0].imshow(image_datas[2])\naxarr[1,1].imshow(image_datas[3])\n```\nThe concept is the same for all subplots, and in most cases the axes instances provide the same methods as the pyplot (plt) interface. E.g. if ax is one of your subplot axes, for plotting a normal line plot you'd use ax.plot(..) instead of plt.plot(). This can actually be found exactly in the source from the page you link to.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41793931\/plotting-images-side-by-side-using-matplotlib","best_answers_votes":182,"tags":["python","matplotlib"],"question_length":872,"response_length":717,"tag_count":2} {"question":"Changing the color of an axis Is there a way to change the color of an axis (not the ticks) in matplotlib? I have been looking through the docs for Axes, Axis, and Artist, but no luck; the matplotlib gallery also has no hint. Any idea?","response":"When using figures, you can easily change the spine color with: \n```\nax.spines['bottom'].set_color('#dddddd')\nax.spines['top'].set_color('#dddddd') \nax.spines['right'].set_color('red')\nax.spines['left'].set_color('red')\n```\nUse the following to change only the ticks (which=\"both\" changes both the major and minor tick colors): \n```py\nax.tick_params(axis='x', colors='red')\nax.tick_params(axis='y', colors='red')\n```\nAnd the following to change only the label: \n```\nax.yaxis.label.set_color('red')\nax.xaxis.label.set_color('red')\n```\nAnd finally the title: \n```\nax.title.set_color('red')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1982770\/changing-the-color-of-an-axis","best_answers_votes":237,"tags":["python","matplotlib","axis"],"question_length":235,"response_length":589,"tag_count":3} {"question":"Histogram Matplotlib So I have a little problem. I have a data set in scipy that is already in the histogram format, so I have the center of the bins and the number of events per bin. How can I now plot it as a histogram? I tried just doing \n```\nbins, n=hist()\n```\nbut it didn't like that.
Any recommendations?","response":"```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nmu, sigma = 100, 15\nx = mu + sigma * np.random.randn(10000)\nhist, bins = np.histogram(x, bins=50)\nwidth = 0.7 * (bins[1] - bins[0])\ncenter = (bins[:-1] + bins[1:]) \/ 2\nplt.bar(center, hist, align='center', width=width)\nplt.show()\n```\nThe object-oriented interface is also straightforward: \n```\nfig, ax = plt.subplots()\nax.bar(center, hist, align='center', width=width)\nfig.savefig(\"1.png\")\n```\nIf you are using custom (non-constant) bins, you can compute the widths using np.diff, pass the widths to ax.bar and use ax.set_xticks to label the bin edges: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nmu, sigma = 100, 15\nx = mu + sigma * np.random.randn(10000)\nbins = [0, 40, 60, 75, 90, 110, 125, 140, 160, 200]\nhist, bins = np.histogram(x, bins=bins)\nwidth = np.diff(bins)\ncenter = (bins[:-1] + bins[1:]) \/ 2\n\nfig, ax = plt.subplots(figsize=(8,3))\nax.bar(center, hist, align='center', width=width)\nax.set_xticks(bins)\nfig.savefig(\"\/tmp\/out.png\")\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5328556\/histogram-matplotlib","best_answers_votes":268,"tags":["python","numpy","matplotlib","scipy","histogram"],"question_length":310,"response_length":1033,"tag_count":5} {"question":"Linear regression with matplotlib \/ numpy I'm trying to generate a linear regression on a scatter plot I have generated, however my data is in list format, and all of the examples I can find of using polyfit require using arange. arange doesn't accept lists though. I have searched high and low about how to convert a list to an array and nothing seems clear. Am I missing something? Following on, how best can I use my list of integers as inputs to the polyfit? Here is the polyfit example I am following: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(data)\ny = np.arange(data)\n\nm, b = np.polyfit(x, y, 1)\n\nplt.plot(x, y, 'yo', x, m*x+b, '--k')\nplt.show()\n```","response":"arange generates lists (well, numpy arrays); type help(np.arange) for the details. You don't need to call it on existing lists.
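If you do want an explicit array at some point, a one-line sketch (np.polyfit accepts plain lists just as well): \n```\nimport numpy as np\narr = np.asarray([1, 2, 3, 4]) # list -> ndarray\n```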
\n```py\nx = [1,2,3,4]\ny = [3,5,7,9]\n\nm, b = np.polyfit(x, y, 1)\n\nm # 2.0000000000000009\nb # 0.99999999999999833\n```\nI should add that I tend to use poly1d here rather than write out \"m*x+b\" and the higher-order equivalents, so my version of your code would look something like this: \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = [1,2,3,4]\ny = [3,5,7,10] # 10, not 9, so the fit isn't perfect\n\ncoef = np.polyfit(x,y,1)\npoly1d_fn = np.poly1d(coef) \n# poly1d_fn is now a function which takes in x and returns an estimate for y\n\nplt.plot(x,y, 'yo', x, poly1d_fn(x), '--k') #'--k'=black dashed line, 'yo' = yellow circle marker\n\nplt.xlim(0, 5)\nplt.ylim(0, 12)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6148207\/linear-regression-with-matplotlib-numpy","best_answers_votes":246,"tags":["python","numpy","matplotlib","linear-regression","curve-fitting"],"question_length":685,"response_length":821,"tag_count":5} {"question":"Stop matplotlib repeating labels in legend Here is a very simplified example: \n```\nxvalues = [2,3,4,6]\n\nfor x in xvalues:\n plt.axvline(x,color='b',label='xvalues')\n\nplt.legend()\n```\nThe legend will now show 'xvalues' as a blue line 4 times in the legend. Is there a more elegant way of fixing this than the following? \n```\nfor i,x in enumerate(xvalues):\n if not i:\n plt.axvline(x,color='b',label='xvalues')\n else:\n plt.axvline(x,color='b')\n```","response":"plt.legend takes as parameters:\n- a list of axis handles which are Artist objects\n- a list of labels which are strings\nThese parameters are both optional, defaulting to plt.gca().get_legend_handles_labels(). You can remove duplicate labels by putting them in a dictionary before calling legend. This is because dicts can't have duplicate keys. For example: For Python versions < 3.7 \n```\nfrom collections import OrderedDict\nimport matplotlib.pyplot as plt\n\nhandles, labels = plt.gca().get_legend_handles_labels()\nby_label = OrderedDict(zip(labels, handles))\nplt.legend(by_label.values(), by_label.keys())\n```\nFor Python versions >= 3.7 As of Python 3.7, dictionaries retain insertion order by default. Thus, there is no need for OrderedDict from the collections module. \n```\nimport matplotlib.pyplot as plt\n\nhandles, labels = plt.gca().get_legend_handles_labels()\nby_label = dict(zip(labels, handles))\nplt.legend(by_label.values(), by_label.keys())\n```\nDocs for plt.legend","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13588920\/stop-matplotlib-repeating-labels-in-legend","best_answers_votes":219,"tags":["python","matplotlib","legend"],"question_length":466,"response_length":961,"tag_count":3} {"question":"Why doesn't plt.imshow() display the image? I have this code, copied from a tutorial: \n```\nimport numpy as np\nnp.random.seed(123)\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D\nfrom keras.utils import np_utils\nfrom keras.datasets import mnist\n(X_train,y_train),(X_test,y_test) = mnist.load_data()\nprint X_train.shape\nfrom matplotlib import pyplot as plt\nplt.imshow(X_train[0])\n```\nNo image was displayed. Why not? There doesn't appear to be anything wrong with the backend of matplotlib on my computer.
I tested that like so: \n```\nimport matplotlib.pyplot as plt\n\ndata = [[0, 0.25], [0.5, 0.75]]\n\nfig, ax = plt.subplots()\nim = ax.imshow(data, cmap=plt.get_cmap('hot'), interpolation='nearest',\n vmin=0, vmax=1)\nfig.colorbar(im)\nplt.show()\n```\nand was able to produce an image: I also tried printing X_train[0] and it looks right.","response":"The solution was as simple as adding plt.show() at the end of the code snippet: \n```\nimport numpy as np\nnp.random.seed(123)\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D\nfrom keras.utils import np_utils\nfrom keras.datasets import mnist\n(X_train,y_train),(X_test,y_test) = mnist.load_data()\nprint X_train.shape\nfrom matplotlib import pyplot as plt\nplt.imshow(X_train[0])\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42812230\/why-doesnt-plt-imshow-display-the-image","best_answers_votes":257,"tags":["python","matplotlib","keras"],"question_length":946,"response_length":488,"tag_count":3} {"question":"How to add a second x-axis I have a very simple question. I need to have a second x-axis on my plot and I want that this axis has a certain number of tics that correspond to certain position of the first axis. Let's try with an example. Here I am plotting the dark matter mass as a function of the expansion factor, defined as 1\/(1+z), that ranges from 0 to 1. \n```\nsemilogy(1\/(1+z),mass_acc_massive,'-',label='DM')\nxlim(0,1)\nylim(1e8,5e12)\n```\nI would like to have another x-axis, on the top of my plot, showing the corresponding z for some values of the expansion factor. Is that possible? If yes, how can I have xtics ax","response":"I'm taking a cue from the comments in @Dhara's answer, it sounds like you want to set a list of new_tick_locations by a function from the old x-axis to the new x-axis. The tick_function below takes in a numpy array of points, maps them to a new value and formats them: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax1 = fig.add_subplot(111)\nax2 = ax1.twiny()\n\nX = np.linspace(0,1,1000)\nY = np.cos(X*20)\n\nax1.plot(X,Y)\nax1.set_xlabel(r\"Original x-axis: $X$\")\n\nnew_tick_locations = np.array([.2, .5, .9])\n\ndef tick_function(X):\n V = 1\/(1+X)\n return [\"%.3f\" % z for z in V]\n\nax2.set_xlim(ax1.get_xlim())\nax2.set_xticks(new_tick_locations)\nax2.set_xticklabels(tick_function(new_tick_locations))\nax2.set_xlabel(r\"Modified x-axis: $1\/(1+X)$\")\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10514315\/how-to-add-a-second-x-axis","best_answers_votes":157,"tags":["python","matplotlib","twiny"],"question_length":623,"response_length":788,"tag_count":3} {"question":"How to set the range of y-axis for a seaborn boxplot [duplicate] This question already has answers here: How to set the axis limits in Matplotlib? (10 answers) Closed 2 years ago. From the official seaborn documentation, I learned that you can create a boxplot as below: \n```py\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\ntips = sns.load_dataset(\"tips\")\nax = sns.boxplot(x=\"day\", y=\"total_bill\", data=tips)\n```\nMy question is: how do I limit the range of y-axis of this plot? For example, I want the y-axis to be within [10, 40]. 
Is there any easy way to do this?","response":"It is standard matplotlib.pyplot: \n```\nimport matplotlib.pyplot as plt\n\nplt.ylim(10, 40)\n```\nOr simpler, as mwaskom comments below: \n```\nax.set(ylim=(10, 40))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33227473\/how-to-set-the-range-of-y-axis-for-a-seaborn-boxplot","best_answers_votes":174,"tags":["python","matplotlib","plot","seaborn","boxplot"],"question_length":568,"response_length":162,"tag_count":5} {"question":"How to forget previous plots - how can I flush\/refresh? How do you get matplotlib.pyplot to \"forget\" previous plots? I am trying to plot multiple times using matplotlib.pyplot. The code looks like this: \n```\ndef plottest():\n import numpy as np\n import matplotlib.pyplot as plt\n\n\n a=np.random.rand(10,)\n b=np.random.rand(10,)\n c=np.random.rand(10,)\n\n\n plt.plot(a,label='a')\n plt.plot(b,label='b')\n plt.plot(c,label='c')\n plt.legend(loc='upper left')\n plt.ylabel('mag')\n plt.xlabel('element')\n plt.show()\n\n e=np.random.rand(10,)\n f=np.random.rand(10,)\n g=np.random.rand(10,)\n\n\n plt.plot(e,label='e')\n plt.plot(f,label='f')\n plt.plot(g,label='g')\n plt.legend(loc='upper left')\n plt.ylabel('mag')\n plt.xlabel('element')\n plt.show()\n```\nUnfortunately I keep getting the same plot (actually from some other code which I ran and completed a while ago) no matter what I do. Similar code has worked previously for me. I have looked at these questions: How to \"clean the slate\"? Matplotlib pyplot show() doesn't work once closed (python) matplotlib pyplot show() .. blocking or not? and tried using plt.show(), plt.clf() and plt.close to no avail. Any ideas?","response":"I would rather use plt.clf() after every plt.show() to just clear the current figure instead of closing and reopening it, keeping the window size and giving you a better performance and much better memory usage. Similarly, you could do plt.cla() to just clear the current axes. To clear a specific axes, useful when you have multiple axes within one figure, you could do for example: \n```\nfig, axes = plt.subplots(nrows=2, ncols=2)\n\naxes[0, 1].clear()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17106288\/how-to-forget-previous-plots-how-can-i-flush-refresh","best_answers_votes":137,"tags":["python","matplotlib"],"question_length":1213,"response_length":455,"tag_count":2} {"question":"How to draw a line with matplotlib? I cannot find a way to draw an arbitrary line with the matplotlib Python library. It allows drawing horizontal and vertical lines (with matplotlib.pyplot.axhline and matplotlib.pyplot.axvline, for example), but I do not see how to draw a line through two given points (x1, y1) and (x2, y2). Is there a way? Is there a simple way?","response":"This will draw a line that passes through the points (-1, 1) and (12, 4), and another one that passes through the points (1, 3) and (10, 2). x1 are the x coordinates of the points for the first line, y1 are the y coordinates for the same -- the elements in x1 and y1 must be in sequence. x2 and y2 are the same for the other line. \n```\nimport matplotlib.pyplot as plt\nx1, y1 = [-1, 12], [1, 4]\nx2, y2 = [1, 10], [3, 2]\nplt.plot(x1, y1, x2, y2, marker = 'o')\nplt.show()\n```\nI suggest you spend some time reading \/ studying the basic tutorials found on the very rich matplotlib website to familiarize yourself with the library. What if I don't want line segments?
[edit]: As shown by @thomaskeefe, starting with matplotlib 3.3, this is now builtin as a convenience: plt.axline((x1, y1), (x2, y2)), rendering the following obsolete. There are no direct ways to have lines extend to infinity... matplotlib will either resize\/rescale the plot so that the furthest point will be on the boundary and the other inside, drawing line segments in effect; or you must choose points outside of the boundary of the surface you want to set visible, and set limits for the x and y axis. As follows: \n```\nimport matplotlib.pyplot as plt\nx1, y1 = [-1, 12], [1, 10]\nx2, y2 = [-1, 10], [3, -1]\nplt.xlim(0, 8), plt.ylim(-2, 8)\nplt.plot(x1, y1, x2, y2, marker = 'o')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36470343\/how-to-draw-a-line-with-matplotlib","best_answers_votes":128,"tags":["python","python-3.x","matplotlib"],"question_length":361,"response_length":1357,"tag_count":3} {"question":"Relationship between dpi and figure size I have created a figure using matplotlib but I have realized the plot axis and the drawn line gets zoomed out. Reading this earlier discussion thread, it explains how to set the figure size. \n```\nfig, ax = plt.subplots()\nfig.set_size_inches(3, 1.5)\nplt.savefig('file.jpeg', edgecolor='black', dpi=400, facecolor='black', transparent=True)\n```\nWith the above code (other configurations removed for brevity), I do get a resulting image file with 1200 x 600 desired dimensions (should we say resolution too?) and desired file size. The projected image is scaled out in an unusual way, annotations for example are enlarged. While I can set the size of the labels on the axis, the figure doesn't look proportional with respect to the scale since the bottom and right spines are large and so are the plotted lines. The question, therefore, is, what configurations are going wrong?","response":"Figure size (figsize) determines the size of the figure in inches. This gives the amount of space the axes (and other elements) have inside the figure. The default figure size is (6.4, 4.8) inches in matplotlib 2. A larger figure size will allow for longer texts, more axes or more ticklabels to be shown. Dots per inch (dpi) determines how many pixels the figure comprises. The default dpi in matplotlib is 100. A figure of figsize=(w,h) will have \n```\npx, py = w*dpi, h*dpi # pixels\n# e.g.\n# 6.4 inches * 100 dpi = 640 pixels\n```\nSo in order to obtain a figure with a pixel size of e.g. (1200,600) you may choose several combinations of figure size and dpi, e.g. \n```\nfigsize=(15,7.5), dpi= 80\nfigsize=(12,6) , dpi=100\nfigsize=( 8,4) , dpi=150\nfigsize=( 6,3) , dpi=200\netc.\n```\nNow, what is the difference? This is determined by the size of the elements inside the figure. Most elements like lines, markers, texts have a size given in points. Matplotlib figures use Points per inch (ppi) of 72. A line with thickness 1 point will be 1.\/72. inch wide. A text with fontsize 12 points will be 12.\/72. inch high. Of course, if you change the figure size in inches, points will not change, so a larger figure in inches still has the same size of the elements. Changing the figure size is thus like taking a piece of paper of a different size. Doing so would, of course, not change the width of the line drawn with the same pen. On the other hand, changing the dpi scales those elements. At 72 dpi, a line of 1 point size is one pixel strong. At 144 dpi, this line is 2 pixels strong.
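A small sketch of the underlying arithmetic (the 72 points-per-inch constant is fixed in matplotlib; the helper name here is ours, not an mpl API): \n```\ndef points_to_pixels(points, dpi):\n # 1 point = 1\/72 inch, so pixels = points \/ 72 * dpi\n return points * dpi \/ 72.0\n\npoints_to_pixels(1, 72) # 1.0 -> one pixel\npoints_to_pixels(1, 144) # 2.0 -> two pixels\n```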
A larger dpi will therefore act like a magnifying glass. All elements are scaled by the magnifying power of the lens. A comparison for constant figure size and varying dpi is shown in the image below on the left. On the right you see a constant dpi and varying figure size. Figures in each row have the same pixel size. Code to reproduce: \n```\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef plot(fs,dpi):\n fig, ax=plt.subplots(figsize=fs, dpi=dpi)\n ax.set_title(\"Figsize: {}, dpi: {}\".format(fs,dpi))\n ax.plot([2,4,1,5], label=\"Label\")\n ax.legend()\n\nfigsize=(2,2)\nfor i in range(1,4):\n plot(figsize, i*72)\n\ndpi=72\nfor i in [2,4,6]:\n plot((i,i), dpi)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47633546\/relationship-between-dpi-and-figure-size","best_answers_votes":243,"tags":["matplotlib","plot","graph","visualization"],"question_length":912,"response_length":2266,"tag_count":4} {"question":"Get matplotlib color cycle state Is it possible to query the current state of the matplotlib color cycle? In other words, is there a function get_cycle_state that will behave in the following way? \n```\n\n```python\nplot(x1, y1)\n```\n\n```python\nplot(x2, y2)\n```\n\n```python\nstate = get_cycle_state()\n```\n\n```python\nprint state\n#Output\n#2\n#```\n#Where I expect the state to be the index of the next color that will be used in a plot. Alternatively, if it returned the next color (\"r\" for the default cycle in the example above), that would be fine too.\n```","response":"Accessing the color cycle iterator There's no \"user-facing\" (a.k.a. \"public\") method to access the underlying iterator, but you can access it through \"private\" (by convention) methods. However, you can't get the state of an iterator without changing it. Setting the color cycle Quick aside: You can set the color\/property cycle in a variety of ways (e.g. ax.set_color_cycle in versions <1.5, or ax.set_prop_cycle in versions >=1.5). Have a look at the example here for version 1.5 or greater, or the previous style here. Accessing the underlying iterator However, while there's no public-facing method to access the iterable, you can access it for a given axes object (ax) through the _get_lines helper class instance. ax._get_lines is a touch confusingly named, but it's the behind-the-scenes machinery that allows the plot command to process all of the odd and varied ways that plot can be called. Among other things, it's what keeps track of what colors to automatically assign. Similarly, there's ax._get_patches_for_fill to control cycling through default fill colors and patch properties. At any rate, the color cycle iterable is ax._get_lines.color_cycle for lines and ax._get_patches_for_fill.color_cycle for patches. On matplotlib >=1.5, this has changed to use the cycler library, and the iterable is called prop_cycler instead of color_cycle and yields a dict of properties instead of only a color. All in all, you'd do something like: \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\ncolor_cycle = ax._get_lines.color_cycle\n# or ax._get_lines.prop_cycler on version >= 1.5\n# Note that prop_cycler cycles over dicts, so you'll want next(cycle)['color']\n```\nYou can't view the state of an iterator However, this object is a \"bare\" iterator. We can easily get the next item (e.g. next_color = next(color_cycle)), but that means that the next color after that is what will be plotted.
By design, there's no way to get the current state of an iterator without changing it. In v1.5 or greater, it would be nice to get the cycler object that's used, as we could infer its current state. However, the cycler object itself isn't accessible (publicly or privately) anywhere. Instead, only the itertools.cycle instance created from the cycler object is accessible. Either way, there's no way to get to the underlying state of the color\/property cycler. Match the color of the previously plotted item instead In your case, it sounds like you're wanting to match the color of something that was just plotted. Instead of trying to determine what the color\/property will be, set the color\/etc of your new item based on the properties of what's plotted. For example, in the case you described, I'd do something like this: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef custom_plot(x, y, **kwargs):\n ax = kwargs.pop('ax', plt.gca())\n base_line, = ax.plot(x, y, **kwargs)\n ax.fill_between(x, 0.9*y, 1.1*y, facecolor=base_line.get_color(), alpha=0.5)\n\nx = np.linspace(0, 1, 10)\ncustom_plot(x, x)\ncustom_plot(x, 2*x)\ncustom_plot(x, -x, color='yellow', lw=3)\n\nplt.show()\n```\nIt's not the only way, but it's cleaner than trying to get the color of the plotted line beforehand, in this case.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13831549\/get-matplotlib-color-cycle-state","best_answers_votes":130,"tags":["python","matplotlib"],"question_length":493,"response_length":3178,"tag_count":2} {"question":"How do I write a Latex formula in the legend of a plot using Matplotlib inside a .py file? I am writing a script in Python (.py file) and I am using Matplotlib to plot an array. I want to add a legend with a formula to the plot, but I haven't been able to do it. I have done this before in IPython or the terminal. In this case, writing something like this: \n```\nlegend(ur'$The_formula$')\n```\nworked perfectly. However, this doesn't work when I call my .py script from the terminal\/IPython.","response":"The easiest way is to assign the label when you plot the data, e.g.: \n```\nimport matplotlib.pyplot as plt\nax = plt.gca() # or any other way to get an axis object\nax.plot(x, y, label=r'$\\sin (x)$')\n\nax.legend()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14016217\/how-do-i-write-a-latex-formula-in-the-legend-of-a-plot-using-matplotlib-inside-a","best_answers_votes":118,"tags":["python","matplotlib","latex"],"question_length":490,"response_length":214,"tag_count":3} {"question":"Drawing average line in histogram I am drawing a histogram using matplotlib in python, and would like to draw a line representing the average of the dataset, overlaid on the histogram as a dotted line (or maybe some other color would do too). Any ideas on how to draw a line overlaid on the histogram? I am using the plot() command, but not sure how to draw a vertical line (i.e. what value should I give for the y-axis)? Thanks!","response":"You can use plot or vlines to draw a vertical line, but to draw a vertical line from the bottom to the top of the y axis, axvline is probably the simplest function to use.
Here's an example: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n```python\nnp.random.seed(6789)\n```\n\n```python\nx = np.random.gamma(4, 0.5, 1000)\n```\n\n```python\nresult = plt.hist(x, bins=20, color='c', edgecolor='k', alpha=0.65)\n```\n\n```python\nplt.axvline(x.mean(), color='k', linestyle='dashed', linewidth=1)\n#Output\n#<matplotlib.lines.Line2D object at 0x...>\n#```\n#Result:\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16180946\/drawing-average-line-in-histogram","best_answers_votes":178,"tags":["python","matplotlib","axis"],"question_length":428,"response_length":560,"tag_count":3} {"question":"How to remove lines in a Matplotlib plot How can I remove a line (or lines) of a matplotlib axes in such a way that it actually gets garbage collected and releases the memory back? The below code appears to delete the line, but never releases the memory (even with explicit calls to gc.collect()) \n```\nfrom matplotlib import pyplot\nimport numpy\na = numpy.arange(int(1e7))\n# large so you can easily see the memory footprint on the system monitor.\nfig = pyplot.figure()\nax = fig.add_subplot(1, 1, 1)\nlines = ax.plot(a) # this uses up an additional 230 Mb of memory.\n# can I get the memory back?\nl = lines[0]\nl.remove()\ndel l\ndel lines\n# not releasing memory\nax.cla() # this does release the memory, but also wipes out all other lines.\n```\nSo is there a way to just delete one line from an axes and get the memory back? This potential solution also does not work.","response":"This is a very long explanation that I typed up for a coworker of mine. I think it would be helpful here as well. Be patient, though. I get to the real issue that you are having toward the end. Just as a teaser, it's an issue of having extra references to your Line2D objects hanging around. WARNING: One other note before we dive in. If you are using IPython to test this out, IPython keeps references of its own and not all of them are weakrefs. So, testing garbage collection in IPython does not work. It just confuses matters. Okay, here we go. Each matplotlib object (Figure, Axes, etc) provides access to its child artists via various attributes. The following example is getting quite long, but should be illuminating. We start out by creating a Figure object, then add an Axes object to that figure. Note that ax and fig.axes[0] are the same object (same id()).
\n```\n\n```python\n#Create a figure\n```\n\n```python\nfig = plt.figure()\n```\n\n```python\nfig.axes\n#Output\n#[]\n```\n\n```python\n#Add an axes object\n```\n\n```python\nax = fig.add_subplot(1,1,1)\n```\n\n```python\n#The object in ax is the same as the object in fig.axes[0], which is \n```\n\n```python\n# a list of axes objects attached to fig \n```\n\n```python\nprint ax\n#Output\n#Axes(0.125,0.1;0.775x0.8)\n```\n\n```python\nprint fig.axes[0]\n#Output\n#Axes(0.125,0.1;0.775x0.8) #Same as \"print ax\"\n```\n\n```python\nid(ax), id(fig.axes[0])\n#Output\n#(212603664, 212603664) #Same ids => same objects\n#```\n#This also extends to lines in an axes object: \n#```\n```\n\n```python\n#Add a line to ax\n```\n\n```python\nlines = ax.plot(np.arange(1000))\n```\n\n```python\n#Lines and ax.lines contain the same line2D instances \n```\n\n```python\nprint lines\n#Output\n#[<matplotlib.lines.Line2D object at 0x...>]\n```\n\n```python\nprint ax.lines\n#Output\n#[<matplotlib.lines.Line2D object at 0x...>]\n```\n\n```python\nprint lines[0]\n#Output\n#Line2D(_line0)\n```\n\n```python\nprint ax.lines[0]\n#Output\n#Line2D(_line0)\n```\n\n```python\n#Same ID => same object\n```\n\n```python\nid(lines[0]), id(ax.lines[0])\n#Output\n#(216550352, 216550352)\n#```\n#If you were to call plt.show() using what was done above, you would see a figure containing a set of axes and a single line: Now, while we have seen that the contents of lines and ax.lines is the same, it is very important to note that the object referenced by the lines variable is not the same as the object referenced by ax.lines, as can be seen by the following: \n#```\n```\n\n```python\nid(lines), id(ax.lines)\n#Output\n#(212754584, 211335288)\n#```\n#As a consequence, removing an element from lines does nothing to the current plot, but removing an element from ax.lines removes that line from the current plot. So: \n#```\n```\n\n```python\n#THIS DOES NOTHING:\n```\n\n```python\nlines.pop(0)\n```\n\n```python\n#THIS REMOVES THE FIRST LINE:\n```\n\n```python\nax.lines.pop(0)\n#Output\n#```\n#So, if you were to run the second line of code, you would remove the Line2D object contained in ax.lines[0] from the current plot and it would be gone. Note that this can also be done via ax.lines.remove() meaning that you can save a Line2D instance in a variable, then pass it to ax.lines.remove() to delete that line, like so: \n#```\n```\n\n```python\n#Create a new line\n```\n\n```python\nlines.append(ax.plot(np.arange(1000)\/2.0))\n```\n\n```python\nax.lines\n#Output\n#[<matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>]\n```\n\n```python\n#Remove that new line\n```\n\n```python\nax.lines.remove(lines[0])\n```\n\n```python\nax.lines\n#Output\n#[<matplotlib.lines.Line2D object at 0x...>]\n#```\n#All of the above works for fig.axes just as well as it works for ax.lines Now, the real problem here. If we store the reference contained in ax.lines[0] into a weakref.ref object, then attempt to delete it, we will notice that it doesn't get garbage collected: \n#```\n```\n\n```python\n#Create weak reference to Line2D object\n```\n\n```python\nfrom weakref import ref\n```\n\n```python\nwr = ref(ax.lines[0])\n```\n\n```python\nprint wr\n#Output\n#<weakref at 0x...; to 'Line2D' at 0x...>\n```\n\n```python\nprint wr()\n#Output\n#<matplotlib.lines.Line2D object at 0x...>\n```\n\n```python\n#Delete the line from the axes\n```\n\n```python\nax.lines.remove(wr())\n```\n\n```python\nax.lines\n#Output\n#[]\n```\n\n```python\n#Test weakref again\n```\n\n```python\nprint wr\n#Output\n#<weakref at 0x...; to 'Line2D' at 0x...>\n```\n\n```python\nprint wr()\n#Output\n#<matplotlib.lines.Line2D object at 0x...>\n#```\n#The reference is still live! Why? This is because there is still another reference to the Line2D object that the reference in wr points to.
Remember how lines didn't have the same ID as ax.lines but contained the same elements? Well, that's the problem. \n#```\n```\n\n```python\n#Print out lines\n```\n\n```python\nprint lines\n#Output\n#[<matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>]\n```\n\nTo fix this problem, we simply need to delete `lines`, empty it, or let it go out of scope.\n\n```python\n#Reinitialize lines to empty list\n```\n\n```python\nlines = []\n```\n\n```python\nprint lines\n#Output\n#[]\n```\n\n```python\nprint wr\n#Output\n#<weakref at 0x...; dead>\n#```\n#So, the moral of the story is, clean up after yourself. If you expect something to be garbage collected but it isn't, you are likely leaving a reference hanging out somewhere.\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4981815\/how-to-remove-lines-in-a-matplotlib-plot","best_answers_votes":95,"tags":["python","matplotlib","plot"],"question_length":862,"response_length":4588,"tag_count":3} {"question":"Matplotlib\/Pyplot: How to zoom subplots together? I have plots of 3-axis accelerometer time-series data (t,x,y,z) in separate subplots I'd like to zoom together. That is, when I use the \"Zoom to Rectangle\" tool on one plot, all 3 plots zoom together when I release the mouse. Previously, I simply plotted all 3 axes on a single plot using different colors. But this is useful only with small amounts of data: I have over 2 million data points, so the last axis plotted obscures the other two. Hence the need for separate subplots. I know I can capture matplotlib\/pyplot mouse events (http:\/\/matplotlib.sourceforge.net\/users\/event_handling.html), and I know I can catch other events (http:\/\/matplotlib.sourceforge.net\/api\/backend_bases_api.html#matplotlib.backend_bases.ResizeEvent), but I don't know how to tell what zoom has been requested on any one subplot, and how to replicate it on the other two subplots. I suspect I have all the pieces, and need only that one last precious clue... -BobC","response":"The easiest way to do this is by using the sharex and\/or sharey keywords when creating the axes: \n```\nfrom matplotlib import pyplot as plt\n\nax1 = plt.subplot(2,1,1)\nax1.plot(...)\nax2 = plt.subplot(2,1,2, sharex=ax1)\nax2.plot(...)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4200586\/matplotlib-pyplot-how-to-zoom-subplots-together","best_answers_votes":161,"tags":["zooming","matplotlib"],"question_length":999,"response_length":233,"tag_count":2} {"question":"Is there a parameter in matplotlib\/pandas to have the Y axis of a histogram as percentage? I would like to compare two histograms by having the Y axis show the percentage of each column from the overall dataset size instead of an absolute value. Is that possible? I am using Pandas and matplotlib. Thanks","response":"The density=True option (normed=True for matplotlib < 2.2.0) returns a histogram for which np.sum(pdf * np.diff(bins)) equals 1. If you want the sum of the histogram to be 1 you can use Numpy's histogram() and normalize the results yourself.
\n```\nx = np.random.randn(30)\n\nfig, ax = plt.subplots(1,2, figsize=(10,4))\n\nax[0].hist(x, density=True, color='grey')\n\nhist, bins = np.histogram(x)\nax[1].bar(bins[:-1], hist.astype(np.float32) \/ hist.sum(), width=(bins[1]-bins[0]), color='grey')\n\nax[0].set_title('normed=True')\nax[1].set_title('hist = hist \/ hist.sum()')\n```\nBtw: Strange plotting glitch at the first bin of the left plot.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17874063\/is-there-a-parameter-in-matplotlib-pandas-to-have-the-y-axis-of-a-histogram-as-p","best_answers_votes":108,"tags":["python","pandas","matplotlib"],"question_length":304,"response_length":623,"tag_count":3} {"question":"Plotting a 3d cube, a sphere and a vector I am searching for a way to plot something with as few instructions as possible with Matplotlib, but I can't find any help for this in the documentation. I want to plot the following things: a wireframe cube centered at 0 with a side length of 2, a \"wireframe\" sphere centered at 0 with a radius of 1, a point at coordinates [0, 0, 0], and a vector that starts at this point and goes to [1, 1, 1]. How do I do that?","response":"It is a little complicated, but you can draw all the objects with the following code: \n```\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom itertools import product, combinations\n\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.set_aspect(\"equal\")\n\n# draw cube\nr = [-1, 1]\nfor s, e in combinations(np.array(list(product(r, r, r))), 2):\n if np.sum(np.abs(s-e)) == r[1]-r[0]:\n ax.plot3D(*zip(s, e), color=\"b\")\n\n# draw sphere\nu, v = np.mgrid[0:2*np.pi:20j, 0:np.pi:10j]\nx = np.cos(u)*np.sin(v)\ny = np.sin(u)*np.sin(v)\nz = np.cos(v)\nax.plot_wireframe(x, y, z, color=\"r\")\n\n# draw a point\nax.scatter([0], [0], [0], color=\"g\", s=100)\n\n# draw a vector\nfrom matplotlib.patches import FancyArrowPatch\nfrom mpl_toolkits.mplot3d import proj3d\n\n\nclass Arrow3D(FancyArrowPatch):\n\n def __init__(self, xs, ys, zs, *args, **kwargs):\n FancyArrowPatch.__init__(self, (0, 0), (0, 0), *args, **kwargs)\n self._verts3d = xs, ys, zs\n\n def draw(self, renderer):\n xs3d, ys3d, zs3d = self._verts3d\n xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)\n self.set_positions((xs[0], ys[0]), (xs[1], ys[1]))\n FancyArrowPatch.draw(self, renderer)\n\na = Arrow3D([0, 1], [0, 1], [0, 1], mutation_scale=20,\n lw=1, arrowstyle=\"-|>\", color=\"k\")\nax.add_artist(a)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11140163\/plotting-a-3d-cube-a-sphere-and-a-vector","best_answers_votes":216,"tags":["python","matplotlib","geometry","matplotlib-3d"],"question_length":431,"response_length":1369,"tag_count":4} {"question":"How to show two figures using matplotlib? I am having some trouble drawing two figures at the same time, not shown in a single plot. I wrote the code according to the documentation, but only figure one shows. I think maybe I missed something important. Could anyone help me figure it out? Thanks. (The *tlist_first* used in the code is a list of data.)
\n```\nplt.figure(1)\nplt.hist(tlist_first, bins=2000000, normed = True, histtype =\"step\", cumulative = True, color = 'g',label = 'first answer')\nplt.ylabel('Percentage of answered questions')\nplt.xlabel('Minutes elapsed after questions are posted')\n\nplt.axvline(x = 30, ymin = 0, ymax = 1, color = 'r', linestyle = '--', label = '30 min')\nplt.axvline(x = 60, ymin = 0, ymax = 1, color = 'c', linestyle = '--', label = '1 hour')\nplt.legend()\nplt.xlim(0,120)\nplt.ylim(0,1) \nplt.show()\nplt.close() ### not working either with this line or without it\n\nplt.figure(2)\nplt.hist(tlist_first, bins=2000000, normed = True, histtype =\"step\", cumulative = True, color = 'g',label = 'first answer')\n\nplt.ylabel('Percentage of answered questions')\nplt.xlabel('Minutes elapsed after questions are posted')\n\nplt.axvline(x = 240, ymin = 0, ymax = 1, color = 'r', linestyle = '--', label = '30 min')\nplt.axvline(x = 1440, ymin = 0, ymax = 1, color = 'c', linestyle = '--', label = '1 hour')\nplt.legend(loc= 4)\nplt.xlim(0,2640)\nplt.ylim(0,1)\nplt.show()\n```","response":"Alternatively to calling plt.show() at the end of the script, you can also control each figure separately by doing: \n```\nf = plt.figure(1)\nplt.hist........\n............\nf.show()\n\ng = plt.figure(2)\nplt.hist(........\n................\ng.show()\n\nraw_input()\n```\nIn this case you must call raw_input to keep the figures alive. This way you can select dynamically which figures you want to show. Note: raw_input() was renamed to input() in Python 3","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7744697\/how-to-show-two-figures-using-matplotlib","best_answers_votes":110,"tags":["python","matplotlib"],"question_length":1396,"response_length":438,"tag_count":2} {"question":"Putting newline in matplotlib label with TeX in Python? How can I add a newline to a plot's label (e.g. xlabel or ylabel) in matplotlib? For example, \n```\nplt.bar([1, 2], [4, 5])\nplt.xlabel(\"My x label\")\nplt.ylabel(r\"My long label with $\\Sigma_{C}$ math \\n continues here\")\n```\nIdeally I'd like the y-label to be centered too. Is there a way to do this? It's important that the label have both TeX (enclosed in '$') and the newline.","response":"You can have the best of both worlds: automatic \"escaping\" of LaTeX commands and newlines: \n```\nplt.ylabel(r\"My long label with unescaped {\\LaTeX} $\\Sigma_{C}$ math\"\n \"\\n\" # Newline: the backslash is interpreted as usual\n r\"continues here with $\\pi$\")\n```\n(instead of using three lines, separating the strings by single spaces is another option). In fact, Python automatically concatenates string literals that follow each other, and you can mix raw strings (r\"…\") and strings with character interpolation (\"\\n\").","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2660319\/putting-newline-in-matplotlib-label-with-tex-in-python","best_answers_votes":143,"tags":["python","plot","graphing","matplotlib"],"question_length":434,"response_length":534,"tag_count":4} {"question":"How to display custom values on a bar plot I'm looking to see how to do two things in Seaborn when using a bar chart to display values that are in the dataframe, but not in the graph. I'm looking to display the values of one field in a dataframe while graphing another.
For example, below, I'm graphing 'tip', but I would like to place the value of 'total_bill' centered above each of the bars (i.e. 325.88 above Friday, 1778.40 above Saturday, etc.) Is there a way to scale the colors of the bars, with the lowest value of 'total_bill' having the lightest color (in this case Friday) and the highest value of 'total_bill' having the darkest? Obviously, I'd stick with one color (i.e., blue) when I do the scaling. While I see that others think that this is a duplicate of another problem (or two), I am missing the part of how I use a value that is not in the graph as the basis for the label or the shading. How do I say: use total_bill as the basis? I'm sorry, but I just can't figure it out based on those answers. Starting with the following code, \n```\nimport pandas as pd\nimport seaborn as sns\n%matplotlib inline\n\ndf = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',')\ngroupedvalues = df.groupby('day').sum().reset_index()\ng = sns.barplot(x='day', y='tip', data=groupedvalues)\n```\nI get the following result: Interim Solution: \n```\nfor index, row in groupedvalues.iterrows():\n g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha=\"center\")\n```\nOn the shading, using the example below, I tried the following: \n```\nimport pandas as pd\nimport seaborn as sns\n%matplotlib inline\n\ndf = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',')\ngroupedvalues = df.groupby('day').sum().reset_index()\n\npal = sns.color_palette(\"Greens_d\", len(data))\nrank = groupedvalues.argsort().argsort()\ng = sns.barplot(x='day', y='tip', data=groupedvalues)\n\nfor index, row in groupedvalues.iterrows():\n g.text(row.name, row.tip, round(row.total_bill, 2), color='black', ha=\"center\")\n```\nBut that gave me the following error: AttributeError: 'DataFrame' object has no attribute 'argsort' So I tried a modification: \n```\nimport pandas as pd\nimport seaborn as sns\n%matplotlib inline\n\ndf = pd.read_csv(\"https:\/\/raw.githubusercontent.com\/wesm\/pydata-book\/1st-edition\/ch08\/tips.csv\", sep=',')\ngroupedvalues = df.groupby('day').sum().reset_index()\n\npal = sns.color_palette(\"Greens_d\", len(data))\nrank = groupedvalues['total_bill'].rank(ascending=True)\ng = sns.barplot(x='day', y='tip', data=groupedvalues, palette=np.array(pal[::-1])[rank])\n```\nand that leaves me with IndexError: index 4 is out of bounds for axis 0 with size 4","response":"New in matplotlib 3.4.0 There is now a built-in Axes.bar_label to automatically label bar containers: \n```py\nax = sns.barplot(x='day', y='tip', data=groupedvalues)\nax.bar_label(ax.containers[0]) # only 1 container needed unless using `hue`\n```\nFor custom labels (e.g., tip bars with total_bill values), use the labels parameter: \n```py\nax = sns.barplot(x='day', y='tip', data=groupedvalues)\nax.bar_label(ax.containers[0], labels=groupedvalues['total_bill'])\n```\nFor multi-group bar plots (i.e., with hue), there will be multiple bar containers that need to be iterated: \n```py\nax = sns.barplot(x='day', y='tip', hue='sex', data=df)\nfor container in ax.containers:\n ax.bar_label(container)\n```\nMore details: How to label percentage counts (fmt param) How to rotate labels (rotation param) How to vertically center labels (label_type param) How to add spacing to labels (padding param) Color-ranked version Is there a way to scale the colors of the bars, with the
lowest value of total_bill having the lightest color (in this case Friday) and the highest value of total_bill having the darkest? Find the rank of each total_bill value: Either use Series.sort_values: \n```py\nranks = groupedvalues.total_bill.sort_values().index\n# Int64Index([1, 0, 3, 2], dtype='int64')\n```\nOr condense Ernest's Series.rank version by chaining Series.sub: \n```py\nranks = groupedvalues.total_bill.rank().sub(1).astype(int).array\n# [1, 0, 3, 2]\n```\nThen reindex the color palette using ranks: \n```py\npalette = sns.color_palette('Blues_d', len(ranks))\nax = sns.barplot(x='day', y='tip', palette=np.array(palette)[ranks], data=groupedvalues)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43214978\/how-to-display-custom-values-on-a-bar-plot","best_answers_votes":165,"tags":["python","pandas","matplotlib","seaborn","bar-chart"],"question_length":2721,"response_length":1689,"tag_count":5} {"question":"Matplotlib scatter plot legend I created a 4D scatter plot graph to represent different temperatures in a specific area. When I create the legend, the legend shows the correct symbol and color but adds a line through it. The code I'm using is: \n```\ncolors=['b', 'c', 'y', 'm', 'r']\nlo = plt.Line2D(range(10), range(10), marker='x', color=colors[0])\nll = plt.Line2D(range(10), range(10), marker='o', color=colors[0])\nl = plt.Line2D(range(10), range(10), marker='o',color=colors[1])\na = plt.Line2D(range(10), range(10), marker='o',color=colors[2])\nh = plt.Line2D(range(10), range(10), marker='o',color=colors[3])\nhh = plt.Line2D(range(10), range(10), marker='o',color=colors[4])\nho = plt.Line2D(range(10), range(10), marker='x', color=colors[4])\nplt.legend((lo,ll,l,a, h, hh, ho),('Low Outlier', 'LoLo','Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'),numpoints=1, loc='lower left', ncol=3, fontsize=8)\n```\nI tried changing Line2D to Scatter and scatter. Scatter returned an error and scatter changed the graph and returned an error. With scatter, I changed the range(10) to the lists containing the data points. Each list contains either the x, y, or z variable. \n```\nlo = plt.scatter(xLOutlier, yLOutlier, zLOutlier, marker='x', color=colors[0])\nll = plt.scatter(xLoLo, yLoLo, zLoLo, marker='o', color=colors[0])\nl = plt.scatter(xLo, yLo, zLo, marker='o',color=colors[1])\na = plt.scatter(xAverage, yAverage, zAverage, marker='o',color=colors[2])\nh = plt.scatter(xHi, yHi, zHi, marker='o',color=colors[3])\nhh = plt.scatter(xHiHi, yHiHi, zHiHi, marker='o',color=colors[4])\nho = plt.scatter(xHOutlier, yHOutlier, zHOutlier, marker='x', color=colors[4])\nplt.legend((lo,ll,l,a, h, hh, ho),('Low Outlier', 'LoLo','Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'),scatterpoints=1, loc='lower left', ncol=3, fontsize=8)\n```\nWhen I run this, the legend no longer exists, it is a small white box in the corner with nothing in it. Any advice?","response":"2D scatter plot Using the scatter method of the matplotlib.pyplot module should work (at least with matplotlib 1.2.1 with Python 2.7.5), as in the example code below. Also, if you are using scatter plots, use scatterpoints=1 rather than numpoints=1 in the legend call to have only one point for each legend entry. In the code below I've used random values rather than plotting the same range over and over, making all the plots visible (i.e. not overlapping each other). 
\n```\nimport matplotlib.pyplot as plt\nfrom numpy.random import random\n\ncolors = ['b', 'c', 'y', 'm', 'r']\n\nlo = plt.scatter(random(10), random(10), marker='x', color=colors[0])\nll = plt.scatter(random(10), random(10), marker='o', color=colors[0])\nl = plt.scatter(random(10), random(10), marker='o', color=colors[1])\na = plt.scatter(random(10), random(10), marker='o', color=colors[2])\nh = plt.scatter(random(10), random(10), marker='o', color=colors[3])\nhh = plt.scatter(random(10), random(10), marker='o', color=colors[4])\nho = plt.scatter(random(10), random(10), marker='x', color=colors[4])\n\nplt.legend((lo, ll, l, a, h, hh, ho),\n ('Low Outlier', 'LoLo', 'Lo', 'Average', 'Hi', 'HiHi', 'High Outlier'),\n scatterpoints=1,\n loc='lower left',\n ncol=3,\n fontsize=8)\n\nplt.show()\n```\n3D scatter plot To plot a scatter in 3D, use the plot method, as the legend does not support Patch3DCollection as is returned by the scatter method of an Axes3D instance. To specify the markerstyle you can include this as a positional argument in the method call, as seen in the example below. Optionally one can include arguments to both the linestyle and marker parameters. \n```\nimport matplotlib.pyplot as plt\nfrom numpy.random import random\nfrom mpl_toolkits.mplot3d import Axes3D\n\ncolors=['b', 'c', 'y', 'm', 'r']\n\nax = plt.subplot(111, projection='3d')\n\nax.plot(random(10), random(10), random(10), 'x', color=colors[0], label='Low Outlier')\nax.plot(random(10), random(10), random(10), 'o', color=colors[0], label='LoLo')\nax.plot(random(10), random(10), random(10), 'o', color=colors[1], label='Lo')\nax.plot(random(10), random(10), random(10), 'o', color=colors[2], label='Average')\nax.plot(random(10), random(10), random(10), 'o', color=colors[3], label='Hi')\nax.plot(random(10), random(10), random(10), 'o', color=colors[4], label='HiHi')\nax.plot(random(10), random(10), random(10), 'x', color=colors[4], label='High Outlier')\n\nplt.legend(loc='upper left', numpoints=1, ncol=3, fontsize=8, bbox_to_anchor=(0, 0))\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17411940\/matplotlib-scatter-plot-legend","best_answers_votes":163,"tags":["python","matplotlib","legend","scatter-plot"],"question_length":1930,"response_length":2538,"tag_count":4} {"question":"How to add line based on slope and intercept In R, there is a function called abline in which a line can be drawn on a plot based on the specification of the intercept (first argument) and the slope (second argument). For instance, \n```\nplot(1:10, 1:10)\nabline(0, 1)\n```\nwhere the line with an intercept of 0 and a slope of 1 spans the entire range of the plot. Is there such a function in Matplotlib?","response":"A lot of these solutions are focusing on adding a line to the plot that fits the data. Here's a simple solution for adding an arbitrary line to the plot based on a slope and intercept.
\n```\nimport matplotlib.pyplot as plt \nimport numpy as np \n\ndef abline(slope, intercept):\n \"\"\"Plot a line from slope and intercept\"\"\"\n axes = plt.gca()\n x_vals = np.array(axes.get_xlim())\n y_vals = intercept + slope * x_vals\n plt.plot(x_vals, y_vals, '--')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7941226\/how-to-add-line-based-on-slope-and-intercept","best_answers_votes":128,"tags":["python","matplotlib"],"question_length":403,"response_length":462,"tag_count":2} {"question":"How to use matplotlib tight layout with Figure? [duplicate] This question already has answers here: Improve subplot size\/spacing with many subplots (12 answers) Closed 2 years ago. I found the tight_layout function for pyplot and want to use it. In my application I embed matplotlib plots into a Qt GUI and use figure and not pyplot. Is there any way I can apply tight_layout there? Would it also work if I have several axes in one figure?","response":"Just call fig.tight_layout() as you normally would. (pyplot is just a convenience wrapper. In most cases, you only use it to quickly generate figure and axes objects and then call their methods directly.) There shouldn't be a difference between the QtAgg backend and the default backend (or if there is, it's a bug). E.g. \n```\nimport matplotlib.pyplot as plt\n\n#-- In your case, you'd do something more like:\n# from matplotlib.figure import Figure\n# fig = Figure()\n#-- ...but we want to use it interactively for a quick example, so \n#-- we'll do it this way\nfig, axes = plt.subplots(nrows=4, ncols=4)\n\nfor i, ax in enumerate(axes.flat, start=1):\n ax.set_title('Test Axes {}'.format(i))\n ax.set_xlabel('X axis')\n ax.set_ylabel('Y axis')\n\nplt.show()\n```\nBefore Tight Layout After Tight Layout \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=4, ncols=4)\n\nfor i, ax in enumerate(axes.flat, start=1):\n ax.set_title('Test Axes {}'.format(i))\n ax.set_xlabel('X axis')\n ax.set_ylabel('Y axis')\n\nfig.tight_layout()\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9603230\/how-to-use-matplotlib-tight-layout-with-figure","best_answers_votes":152,"tags":["python","matplotlib","figure"],"question_length":433,"response_length":1060,"tag_count":3} {"question":"Can I cycle through line styles in matplotlib I know how to cycle through a list of colors in matplotlib. But is it possible to do something similar with line styles (plain, dotted, dashed, etc.)? I'd need to do that so my graphs would be easier to read when printed.
Any suggestions how to do that?","response":"Something like this might do the trick: \n```\nimport matplotlib.pyplot as plt\nfrom itertools import cycle\nlines = [\"-\",\"--\",\"-.\",\":\"]\nlinecycler = cycle(lines)\nplt.figure()\nfor i in range(10):\n x = range(i,i+10)\n plt.plot(range(10),x,next(linecycler))\nplt.show()\n```\nResult: Edit for newer version (v2.2.2) \n```\nimport matplotlib.pyplot as plt\nfrom cycler import cycler\n\n# set the linestyle cycle once, before any axes are created\nlinestyle_cycler = cycler('linestyle',['-','--',':','-.'])\nplt.rc('axes', prop_cycle=linestyle_cycler)\n\nplt.figure()\nfor i in range(5):\n x = range(i,i+5)\n plt.plot(range(5),x)\nplt.legend(['first','second','third','fourth','fifth'], loc='upper left', fancybox=True, shadow=True)\nplt.show()\n```\nFor more detailed information consult the matplotlib tutorial on \"Styling with cycler\". To see the output click \"show figure\".","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7799156\/can-i-cycle-through-line-styles-in-matplotlib","best_answers_votes":136,"tags":["python","matplotlib"],"question_length":268,"response_length":810,"tag_count":2} {"question":"Barchart with vertical ytick labels I'm using matplotlib to generate a (vertical) barchart. The problem is my labels are rather long. Is there any way to display them vertically, either in the bar or above it or below it?","response":"Do you mean something like this: \n```\n\n```python\nfrom matplotlib import *\n```\n\n```python\nplot(xrange(10))\n```\n\n```python\nyticks(xrange(10), rotation='vertical')\n#Output\n#```\n#? In general, to show any text in matplotlib with a vertical orientation, you can add the keyword rotation='vertical'. For further options, you can look at help(matplotlib.pyplot.text). The yticks function plots the ticks on the y axis; I am not sure whether you originally meant this or the ylabel function, but the procedure is always the same: you have to add rotation='vertical'. You may also find the options 'verticalalignment' and 'horizontalalignment' useful, which allow you to define how to align the text with respect to the ticks or the other elements.\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1221108\/barchart-with-vertical-ytick-labels","best_answers_votes":116,"tags":["python","matplotlib","bar-chart","yaxis"],"question_length":221,"response_length":705,"tag_count":4} {"question":"What are the differences between add_axes and add_subplot? In a previous answer it was recommended to me to use add_subplot instead of add_axes to show axes correctly, but searching the documentation I couldn't understand when and why I should use either one of these functions. Can anyone explain the differences?","response":"Common grounds Both add_axes and add_subplot add an axes to a figure. They both return a (subclass of a) matplotlib.axes.Axes object. However, the mechanism which is used to add the axes differs substantially. add_axes The calling signature of add_axes is add_axes(rect), where rect is a list [x0, y0, width, height] denoting the lower left point of the new axes in figure coordinates (x0,y0) and its width and height. So the axes is positioned in absolute coordinates on the canvas. E.g. \n```\nfig = plt.figure()\nax = fig.add_axes([0,0,1,1])\n```\nplaces an axes on the canvas that is exactly as large as the canvas itself.
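Because add_axes positions an axes in absolute figure coordinates, it is handy when exact placement matters. As a minimal sketch (an illustration added here, not part of the original answer), the same mechanism can place a small inset axes inside a larger one: \n```\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # main axes\ninset = fig.add_axes([0.6, 0.6, 0.25, 0.25]) # inset axes in the upper right\nax.plot([1, 2, 3], [2, 4, 3])\ninset.plot([1, 2, 3], [3, 1, 2], color='r')\nplt.show()\n```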
add_subplot The calling signature of add_subplot does not directly provide the option to place the axes at a predefined position. It rather allows you to specify where the axes should be situated according to a subplot grid. The usual and easiest way to specify this position is the 3 integer notation, \n```\nfig = plt.figure()\nax = fig.add_subplot(231)\n```\nIn this example a new axes is created at the first position (1) on a grid of 2 rows and 3 columns. To produce only a single axes, add_subplot(111) would be used (first plot on a 1 by 1 subplot grid). (In newer matplotlib versions, add_subplot() without any arguments is possible as well.) The advantage of this method is that matplotlib takes care of the exact positioning. By default add_subplot(111) would produce an axes positioned at [0.125,0.11,0.775,0.77] or similar, which already leaves enough space around the axes for the title and the (tick)labels. However, this position may also change depending on other elements in the plot, titles set, etc. It can also be adjusted using pyplot.subplots_adjust(...) or pyplot.tight_layout(). In most cases, add_subplot would be the preferred method to create axes for plots on a canvas. Only in cases where exact positioning matters, add_axes might be useful. Example \n```\nimport matplotlib.pyplot as plt\nplt.rcParams[\"figure.figsize\"] = (5,3)\n\nfig = plt.figure()\nfig.add_subplot(241)\nfig.add_subplot(242)\nax = fig.add_subplot(223)\nax.set_title(\"subplots\")\n\nfig.add_axes([0.77,.3,.2,.6])\nax2 = fig.add_axes([0.67,.5,.2,.3])\nfig.add_axes([0.6,.1,.35,.3])\nax2.set_title(\"random axes\")\n\nplt.tight_layout()\nplt.show()\n```\nAlternative The easiest way to obtain one or more subplots together with their handles is plt.subplots(). For one axes, use \n```\nfig, ax = plt.subplots()\n```\nor, if more subplots are needed, \n```\nfig, axes = plt.subplots(nrows=3, ncols=4)\n```\nThe initial question In the initial question an axes was placed using fig.add_axes([0,0,1,1]), such that it sits tight to the figure boundaries. The disadvantage of this is of course that ticks, ticklabels, axes labels and titles are cut off. Therefore I suggested in one of the comments to the answer to use fig.add_subplot as this will automatically allow for enough space for those elements, and, if this is not enough, can be adjusted using pyplot.subplots_adjust(...) or pyplot.tight_layout().","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43326680\/what-are-the-differences-between-add-axes-and-add-subplot","best_answers_votes":172,"tags":["python","matplotlib","subplot","figure","axes"],"question_length":314,"response_length":2983,"tag_count":5} {"question":"Automatically run %matplotlib inline in IPython Notebook Every time I launch IPython Notebook, the first command I run is \n```\n%matplotlib inline\n```\nIs there some way to change my config file so that when I launch IPython, it is automatically in this mode?","response":"The configuration way IPython has profiles for configuration, located at ~\/.ipython\/profile_*. The default profile is called profile_default. Within this folder there are two primary configuration files: ipython_config.py ipython_kernel_config.py Add the inline option for matplotlib to ipython_kernel_config.py: \n```\nc = get_config()\n# ... Any other configurables you want to set\nc.InteractiveShellApp.matplotlib = \"inline\"\n```\nmatplotlib vs. pylab Usage of %pylab to get inline plotting is discouraged.
It introduces all sorts of gunk into your namespace that you just don't need. %matplotlib on the other hand enables inline plotting without injecting anything into your namespace. You'll need to make explicit calls to get matplotlib and numpy imported: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\nThe small price of typing out your imports explicitly should be completely overcome by the fact that you now have reproducible code.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21176731\/automatically-run-matplotlib-inline-in-ipython-notebook","best_answers_votes":87,"tags":["python","matplotlib","jupyter-notebook"],"question_length":257,"response_length":934,"tag_count":3} {"question":"How to create grouped boxplots Is there a way to group boxplots in matplotlib? Assume we have three groups \"A\", \"B\", and \"C\" and for each we want to create a boxplot for both \"apples\" and \"oranges\". If a grouping is not possible directly, we can create all six combinations and place them linearly side by side. What would be the simplest way to visualize the groupings? I'm trying to avoid setting the tick labels to something like \"A + apples\" since my scenario involves much longer names than \"A\".","response":"How about using colors to differentiate between \"apples\" and \"oranges\" and spacing to separate \"A\", \"B\" and \"C\"? Something like this: \n```\nfrom pylab import plot, show, savefig, xlim, figure, \\\n hold, ylim, legend, boxplot, setp, axes\n\n# function for setting the colors of the box plots pairs\ndef setBoxColors(bp):\n setp(bp['boxes'][0], color='blue')\n setp(bp['caps'][0], color='blue')\n setp(bp['caps'][1], color='blue')\n setp(bp['whiskers'][0], color='blue')\n setp(bp['whiskers'][1], color='blue')\n setp(bp['fliers'][0], color='blue')\n setp(bp['fliers'][1], color='blue')\n setp(bp['medians'][0], color='blue')\n\n setp(bp['boxes'][1], color='red')\n setp(bp['caps'][2], color='red')\n setp(bp['caps'][3], color='red')\n setp(bp['whiskers'][2], color='red')\n setp(bp['whiskers'][3], color='red')\n setp(bp['fliers'][2], color='red')\n setp(bp['fliers'][3], color='red')\n setp(bp['medians'][1], color='red')\n\n# Some fake data to plot\nA= [[1, 2, 5,], [7, 2]]\nB = [[5, 7, 2, 2, 5], [7, 2, 5]]\nC = [[3,2,5,7], [6, 7, 3]]\n\nfig = figure()\nax = axes()\nhold(True)\n\n# first boxplot pair\nbp = boxplot(A, positions = [1, 2], widths = 0.6)\nsetBoxColors(bp)\n\n# second boxplot pair\nbp = boxplot(B, positions = [4, 5], widths = 0.6)\nsetBoxColors(bp)\n\n# third boxplot pair\nbp = boxplot(C, positions = [7, 8], widths = 0.6)\nsetBoxColors(bp)\n\n# set axes limits and labels\nxlim(0,9)\nylim(0,9)\nax.set_xticklabels(['A', 'B', 'C'])\nax.set_xticks([1.5, 4.5, 7.5])\n\n# draw temporary red and blue lines and use them to create a legend\nhB, = plot([1,1],'b-')\nhR, = plot([1,1],'r-')\nlegend((hB, hR),('Apples', 'Oranges'))\nhB.set_visible(False)\nhR.set_visible(False)\n\nsavefig('boxcompare.png')\nshow()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16592222\/how-to-create-grouped-boxplots","best_answers_votes":119,"tags":["python","matplotlib","boxplot"],"question_length":499,"response_length":1733,"tag_count":3} {"question":"Superscript in Python plots I want to label my x axis as follows: \n```\npylab.xlabel('metres 10^1')\n```\nBut I don't want to have the ^ symbol included. \n```
\n```\npylab.xlabel('metres 10$^{one}$')\n```\nThis method works and will superscript letters but doesn't seem to work for numbers . If I try : \n```\npylab.xlabel('metres 10$^1$')\n```\nIt superscripts a letter N for some reason . Anyone know how to superscript numbers in python plots ? thanks .","response":"You just need to have the full expression inside the $. Basically, you need \"meters $10^1$\". You don't need usetex=True to do this (or most any mathematical formula). You may also want to use a raw string (e.g. r\"\\t\", vs \"\\t\") to avoid problems with things like \\n, \\a, \\b, \\t, \\f, etc. For example: \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.set(title=r'This is an expression $e^{\\sin(\\omega\\phi)}$',\n xlabel='meters $10^1$', ylabel=r'Hertz $(\\frac{1}{s})$')\nplt.show()\n```\nIf you don't want the superscripted text to be in a different font than the rest of the text, use \\mathregular (or equivalently \\mathdefault). Some symbols won't be available, but most will. This is especially useful for simple superscripts like yours, where you want the expression to blend in with the rest of the text. \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.set(title=r'This is an expression $\\mathregular{e^{\\sin(\\omega\\phi)}}$',\n xlabel='meters $\\mathregular{10^1}$',\n ylabel=r'Hertz $\\mathregular{(\\frac{1}{s})}$')\nplt.show()\n```\nFor more information (and a general overview of matplotlib's \"mathtext\"), see: http:\/\/matplotlib.org\/users\/mathtext.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21226868\/superscript-in-python-plots","best_answers_votes":164,"tags":["python","matplotlib"],"question_length":442,"response_length":1199,"tag_count":2} {"question":"How to draw grid lines behind matplotlib bar graph \n```\nx = ['01-02', '02-02', '03-02', '04-02', '05-02']\ny = [2, 2, 3, 7, 2]\n\nfig, ax = plt.subplots(1, 1)\nax.bar(range(len(y)), y, width=0.3,align='center',color='skyblue')\nplt.xticks(range(len(y)), x, size='small')\nplt.savefig('\/home\/user\/graphimages\/foo2.png')\nplt.close()\n```\nI want to draw grid lines (of x & y) behind the bar graph.","response":"To add a grid you simply need to add ax.grid() If you want the grid to be behind the bars then add \n```\nax.grid(zorder=0)\nax.bar(range(len(y)), y, width=0.3, align='center', color='skyblue', zorder=3)\n```\nThe important part is that the zorder of the bars is greater than grid. Experimenting it seems zorder=3 is the lowest value that actually gives the desired effect. I have no idea why zorder=1 isn't sufficient. EDIT: I have noticed this question has already been answered here using a different method although it suffers some link rot. Both methods yield the same result as far as I can see but andrew cooke's answer is more elegant.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23357798\/how-to-draw-grid-lines-behind-matplotlib-bar-graph","best_answers_votes":147,"tags":["python","matplotlib"],"question_length":387,"response_length":638,"tag_count":2} {"question":"How to hide axes and gridlines I would like to be able to hide the axes and gridlines on a 3D matplotlib graph. I want to do this because when zooming in and out the image gets pretty nasty. I'm not sure what code to include here but this is what I use to create the graph. 
\n```\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.view_init(30, -90)\nax.set_xlabel(\"X\")\nax.set_ylabel(\"Y\")\nax.set_zlabel(\"Z\")\nplt.xlim(0,pL)\nplt.ylim(0,pW)\nax.set_aspect(\"equal\")\n\nplt.show()\n```\nThis is an example of the plot that I am looking at:","response":"```\n# Hide grid lines\nax.grid(False)\n\n# Hide axes ticks\nax.set_xticks([])\nax.set_yticks([])\nax.set_zticks([])\n```\nNote, you need matplotlib>=1.2 for set_zticks() to work.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45148704\/how-to-hide-axes-and-gridlines","best_answers_votes":181,"tags":["python","matplotlib","matplotlib-3d"],"question_length":528,"response_length":170,"tag_count":3} {"question":"Pandas dataframe groupby plot I have a dataframe which is structured as: \n```\nDate ticker adj_close \n0 2016-11-21 AAPL 111.730 \n1 2016-11-22 AAPL 111.800 \n2 2016-11-23 AAPL 111.230 \n3 2016-11-25 AAPL 111.790 \n4 2016-11-28 AAPL 111.570 \n... \n8 2016-11-21 ACN 119.680 \n9 2016-11-22 ACN 119.480 \n10 2016-11-23 ACN 119.820 \n11 2016-11-25 ACN 120.740 \n...\n```\nHow can I plot adj_close versus Date, based on the ticker?","response":"For a simple plot, you can use: \n```\ndf.plot(x='Date',y='adj_close')\n```\nOr you can set the index to be Date beforehand, then it's easy to plot the column you want: \n```\ndf.set_index('Date', inplace=True)\ndf['adj_close'].plot()\n```\nIf you want a chart with one series per ticker on it, you need to group by ticker first: \n```\ndf.set_index('Date', inplace=True)\ndf.groupby('ticker')['adj_close'].plot(legend=True)\n```\nIf you want a chart with individual subplots: \n```\ngrouped = df.groupby('ticker')\n\nncols=2\nnrows = int(np.ceil(grouped.ngroups\/ncols))\n\nfig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(12,4), sharey=True)\n\nfor (key, ax) in zip(grouped.groups.keys(), axes.flatten()):\n grouped.get_group(key).plot(ax=ax)\n\nax.legend()\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41494942\/pandas-dataframe-groupby-plot","best_answers_votes":158,"tags":["python","pandas","matplotlib","time-series","seaborn"],"question_length":565,"response_length":746,"tag_count":5} {"question":"Histogram values of a Pandas Series I have some values in a Python Pandas Series (type: pandas.core.series.Series) \n```\n\n```python\nseries = pd.Series([0.0,950.0,-70.0,812.0,0.0,-90.0,0.0,0.0,-90.0,0.0,-64.0,208.0,0.0,-90.0,0.0,-80.0,0.0,0.0,-80.0,-48.0,840.0,-100.0,190.0,130.0,-100.0,-100.0,0.0,-50.0,0.0,-100.0,-100.0,0.0,-90.0,0.0,-90.0,-90.0,63.0,-90.0,0.0,0.0,-90.0,-80.0,0.0,])\n```\n\n```python\nseries.min()\n#Output\n#-100.0\n```\n\n```python\nseries.max()\n#Output\n#950.0\n#```\n#I would like to get the values of the histogram (not necessarily plotting the histogram)... I just need to get the frequency for each interval. Let's say that my intervals are going from [-200; -150] to [950; 1000] so lower bounds are \n#```\n#lwb = range(-200,1000,50)\n#```\n#and upper bounds are \n#```\n#upb = range(-150,1050,50)\n#```\n#I don't know how to get the frequency (the number of values that are inside each interval) now... I'm sure that defining lwb and upb is not necessary... but I don't know what function I should use to perform this! (after diving in the Pandas doc, I think the cut function can help me because it's a discretization problem...
but I don't understand how to use it.) After being able to do this, I will have a look at the way to display the histogram (but that's another problem).","response":"You just need to use the histogram function of NumPy: \n```\nimport numpy as np\ncount, division = np.histogram(series)\n```\nwhere division is the automatically calculated border for your bins and count is the population inside each bin. If you need to fix a certain number of bins, you can use the argument bins and specify a number of bins, or give it directly the boundaries between each bin. \n```\ncount, division = np.histogram(series, bins = [-201,-149,949,1001])\n```\nTo plot the results you can use the matplotlib function hist, but if you are working in pandas each Series has its own handle to the hist function, and you can give it the chosen binning: \n```\nseries.hist(bins=division)\n```\nEdit: As mentioned by another poster, Pandas is built on top of NumPy. Since OP is explicitly using Pandas, we can do away with the additional import by accessing NumPy through Pandas: \n```py\ncount, division = pd.np.histogram(series)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13129618\/histogram-values-of-a-pandas-series","best_answers_votes":122,"tags":["python","pandas","numpy","matplotlib"],"question_length":1232,"response_length":930,"tag_count":4} {"question":"matplotlib: drawing lines between points ignoring missing data I have a set of data which I want plotted as a line-graph. For each series, some data is missing (but different for each series). Currently matplotlib does not draw lines which skip missing data: for example \n```\nimport matplotlib.pyplot as plt\n\nxs = range(8)\nseries1 = [1, 3, 3, None, None, 5, 8, 9]\nseries2 = [2, None, 5, None, 4, None, 3, 2]\n\nplt.plot(xs, series1, linestyle='-', marker='o')\nplt.plot(xs, series2, linestyle='-', marker='o')\n\nplt.show()\n```\nresults in a plot with gaps in the lines. How can I tell matplotlib to draw lines through the gaps? (I'd rather not have to interpolate the data).","response":"You can mask the NaN values this way: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nxs = np.arange(8)\nseries1 = np.array([1, 3, 3, None, None, 5, 8, 9]).astype(np.double)\ns1mask = np.isfinite(series1)\nseries2 = np.array([2, None, 5, None, 4, None, 3, 2]).astype(np.double)\ns2mask = np.isfinite(series2)\n\nplt.plot(xs[s1mask], series1[s1mask], linestyle='-', marker='o')\nplt.plot(xs[s2mask], series2[s2mask], linestyle='-', marker='o')\n\nplt.show()\n```\nThis leads to","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14399689\/matplotlib-drawing-lines-between-points-ignoring-missing-data","best_answers_votes":119,"tags":["python","matplotlib"],"question_length":669,"response_length":474,"tag_count":2} {"question":"GridSpec with shared axes in Python This solution to another thread suggests using gridspec.GridSpec instead of plt.subplots. However, when I share axes between subplots, I usually use a syntax like the following \n```\nfig, axes = plt.subplots(N, 1, sharex='col', sharey=True, figsize=(3,18))\n```\nHow can I specify sharex and sharey when I use GridSpec?","response":"First off, there's an easier workaround for your original problem, as long as you're okay with being slightly imprecise.
Just reset the top extent of the subplots to the default after calling tight_layout: \n```\nfig, axes = plt.subplots(ncols=2, sharey=True)\nplt.setp(axes, title='Test')\nfig.suptitle('An overall title', size=20)\n\nfig.tight_layout()\nfig.subplots_adjust(top=0.9) \n\nplt.show()\n```\nHowever, to answer your question, you'll need to create the subplots at a slightly lower level to use gridspec. If you want to replicate the hiding of shared axes like subplots does, you'll need to do that manually, by using the sharey argument to Figure.add_subplot and hiding the duplicated ticks with plt.setp(ax.get_yticklabels(), visible=False). As an example: \n```\nimport matplotlib.pyplot as plt\nfrom matplotlib import gridspec\n\nfig = plt.figure()\ngs = gridspec.GridSpec(1,2)\nax1 = fig.add_subplot(gs[0])\nax2 = fig.add_subplot(gs[1], sharey=ax1)\nplt.setp(ax2.get_yticklabels(), visible=False)\n\nplt.setp([ax1, ax2], title='Test')\nfig.suptitle('An overall title', size=20)\ngs.tight_layout(fig, rect=[0, 0, 1, 0.97])\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22511550\/gridspec-with-shared-axes-in-python","best_answers_votes":95,"tags":["python","matplotlib"],"question_length":353,"response_length":1131,"tag_count":2} {"question":"Fitting a Normal distribution to 1D data I have a 1 dimensional array. I can compute the \"mean\" and \"standard deviation\" of this sample and plot the \"Normal distribution\", but I have a problem: I want to plot the data and the Normal distribution in the same figure. I don't know how to plot both the data and the normal distribution. Any idea about \"Gaussian probability density function in scipy.stats\"? \n```\ns = np.std(array)\nm = np.mean(array)\nplt.plot(norm.pdf(array,m,s))\n```","response":"You can use matplotlib to plot the histogram and the PDF (as in the link in @MrE's answer). For fitting and for computing the PDF, you can use scipy.stats.norm, as follows. \n```\nimport numpy as np\nfrom scipy.stats import norm\nimport matplotlib.pyplot as plt\n\n\n# Generate some data for this demonstration.\ndata = norm.rvs(10.0, 2.5, size=500)\n\n# Fit a normal distribution to the data:\nmu, std = norm.fit(data)\n\n# Plot the histogram.\nplt.hist(data, bins=25, density=True, alpha=0.6, color='g')\n\n# Plot the PDF.\nxmin, xmax = plt.xlim()\nx = np.linspace(xmin, xmax, 100)\np = norm.pdf(x, mu, std)\nplt.plot(x, p, 'k', linewidth=2)\ntitle = \"Fit results: mu = %.2f, std = %.2f\" % (mu, std)\nplt.title(title)\n\nplt.show()\n```\nHere's the plot generated by the script:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20011122\/fitting-a-normal-distribution-to-1d-data","best_answers_votes":191,"tags":["python","numpy","matplotlib","scipy"],"question_length":474,"response_length":755,"tag_count":4} {"question":"Delete a subplot I'm trying to figure out a way of deleting (dynamically) subplots in matplotlib. I see they have a remove method, but I get the error \n```\nNotImplementedError: cannot remove artist\n```\nI'm surprised that I can't find this anywhere. Does anyone know how to do this?
\n```py\nfrom matplotlib import pyplot as plt\n\nfig, axs = plt.subplots(1,3)\n\naxs[0].plot([1,2],[3,4])\naxs[2].plot([0,1],[2,3])\n\nplt.draw()\nplt.tight_layout()\n```","response":"Use fig.delaxes or plt.delaxes to remove unwanted subplots: \n```py\nfig, axs = plt.subplots(1,3)\naxs[0].plot([1,2],[3,4])\naxs[2].plot([0,1],[2,3])\n\nfig.delaxes(axs[1])\n\nplt.draw()\nplt.tight_layout()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14694501\/delete-a-subplot","best_answers_votes":167,"tags":["python","matplotlib","subplot"],"question_length":441,"response_length":200,"tag_count":3} {"question":"Keras - Plot training, validation and test set accuracy I want to plot the output of this simple neural network: \n```\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\nhistory = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)\n\nmodel.test_on_batch(x_test, y_test)\nmodel.metrics_names\n```\nI have plotted accuracy and loss of training and validation: \n```\nprint(history.history.keys())\n# \"Accuracy\"\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'validation'], loc='upper left')\nplt.show()\n# \"Loss\"\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'validation'], loc='upper left')\nplt.show()\n```\nNow I want to add and plot test set's accuracy from model.test_on_batch(x_test, y_test), but from model.metrics_names I obtain the same value 'acc' utilized for plotting accuracy on training data, plt.plot(history.history['acc']). How could I plot test set's accuracy?","response":"```\nimport keras\nfrom matplotlib import pyplot as plt\nhistory = model1.fit(train_x, train_y,validation_split = 0.1, epochs=50, batch_size=4)\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('model accuracy')\nplt.ylabel('accuracy')\nplt.xlabel('epoch')\nplt.legend(['train', 'val'], loc='upper left')\nplt.show()\n```\n```\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'val'], loc='upper left')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41908379\/keras-plot-training-validation-and-test-set-accuracy","best_answers_votes":152,"tags":["python","matplotlib","keras"],"question_length":1142,"response_length":554,"tag_count":3} {"question":"creating over 20 unique legend colors using matplotlib I am plotting 20 different lines on a single plot using matplotlib. I use a for loop for plotting and label every line with its key and then use the legend function \n```\nfor key in dict.keys():\n plot(x,dict[key], label = key)\ngraph.legend()\n```\nBut this way, the graph repeats a lot of colors in the legend. Is there any way to ensure a unique color is assigned to each line using matplotlib and over 20 lines? thanks","response":"The answer to your question is related to two other SO questions. The answer to How to pick a new color for each plotted line within a figure in matplotlib? explains how to define the default list of colors that is cycled through to pick the next color to plot. This is done with the Axes.set_color_cycle method. 
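(In Matplotlib 1.5+ the same thing is spelled Axes.set_prop_cycle; a minimal sketch of that call, assuming nothing about your data: \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.set_prop_cycle(color=['r', 'g', 'b']) # every later ax.plot call cycles r, g, b\nfor i in range(6):\n ax.plot([0, 1], [i, i + 1])\nplt.show()\n```\n) 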
You want to get the correct list of colors though, and this is most easily done using a color map, as is explained in the answer to this question: Create a color generator from given colormap in matplotlib. There, a color map takes a value from 0 to 1 and returns a color. So for your 20 lines, you want to cycle from 0 to 1 in steps of 1\/20. Specifically you want to cycle from 0 to 19\/20, because 1 maps back to 0. This is done in this example: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nNUM_COLORS = 20\n\ncm = plt.get_cmap('gist_rainbow')\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.set_prop_cycle(color=[cm(1.*i\/NUM_COLORS) for i in range(NUM_COLORS)])\nfor i in range(NUM_COLORS):\n ax.plot(np.arange(10)*(i+1))\n\nfig.savefig('moreColors.png')\nplt.show()\n```\nThis is the resulting figure: Alternative, better (debatable) solution There is an alternative way that uses a ScalarMappable object to convert a range of values to colors. The advantage of this method is that you can use a non-linear Normalization to convert from line index to actual color. The following code produces exactly the same result: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as mplcm\nimport matplotlib.colors as colors\nimport numpy as np\n\nNUM_COLORS = 20\n\ncm = plt.get_cmap('gist_rainbow')\ncNorm = colors.Normalize(vmin=0, vmax=NUM_COLORS-1)\nscalarMap = mplcm.ScalarMappable(norm=cNorm, cmap=cm)\nfig = plt.figure()\nax = fig.add_subplot(111)\n# old way:\n#ax.set_prop_cycle(color=[cm(1.*i\/NUM_COLORS) for i in range(NUM_COLORS)])\n# new way:\nax.set_prop_cycle(color=[scalarMap.to_rgba(i) for i in range(NUM_COLORS)])\nfor i in range(NUM_COLORS):\n ax.plot(np.arange(10)*(i+1))\n\nfig.savefig('moreColors.png')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8389636\/creating-over-20-unique-legend-colors-using-matplotlib","best_answers_votes":149,"tags":["python","matplotlib","legend"],"question_length":481,"response_length":2040,"tag_count":3} {"question":"Reset color cycle in Matplotlib Say I have data about 3 trading strategies, each with and without transaction costs. I want to plot, on the same axes, the time series of each of the 6 variants (3 strategies * 2 trading costs). I would like the \"with transaction cost\" lines to be plotted with alpha=1 and linewidth=1 while I want the \"no transaction costs\" to be plotted with alpha=0.25 and linewidth=5. But I would like the color to be the same for both versions of each strategy. I would like something along the lines of: \n```\nfig, ax = plt.subplots(1, 1, figsize=(10, 10))\n\nfor c in with_transaction_frame.columns:\n ax.plot(with_transaction_frame[c], label=c, alpha=1, linewidth=1)\n\n****SOME MAGIC GOES HERE TO RESET THE COLOR CYCLE\n\nfor c in no_transaction_frame.columns:\n ax.plot(no_transaction_frame[c], label=c, alpha=0.25, linewidth=5)\n\nax.legend()\n```\nWhat is the appropriate code to put on the indicated line to reset the color cycle so it is \"back to the start\" when the second loop is invoked?","response":"In Matplotlib <1.5.0, you can reset the color cycle to the original with Axes.set_color_cycle. 
Looking at the code for this, there is a function to do the actual work: \n```\ndef set_color_cycle(self, clist=None):\n if clist is None:\n clist = rcParams['axes.color_cycle']\n self.color_cycle = itertools.cycle(clist)\n```\nAnd a method on the Axes which uses it: \n```\ndef set_color_cycle(self, clist):\n \"\"\"\n Set the color cycle for any future plot commands on this Axes.\n\n *clist* is a list of mpl color specifiers.\n \"\"\"\n self._get_lines.set_color_cycle(clist)\n self._get_patches_for_fill.set_color_cycle(clist)\n```\nThis basically means you can call set_color_cycle with None as the only argument, and it will be replaced with the default cycle found in rcParams['axes.color_cycle']. I tried this with the following code and got the expected result: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfor i in range(3):\n plt.plot(np.arange(10) + i)\n\n# for Matplotlib version < 1.5\nplt.gca().set_color_cycle(None)\n# for Matplotlib version >= 1.5\nplt.gca().set_prop_cycle(None)\n\nfor i in range(3):\n plt.plot(np.arange(10, 1, -1) + i)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24193174\/reset-color-cycle-in-matplotlib","best_answers_votes":121,"tags":["python","matplotlib","pandas"],"question_length":1012,"response_length":1186,"tag_count":3} {"question":"Change figure size and figure format in matplotlib [duplicate] I want to obtain fig1 sized exactly 4 by 3 inches, and in TIFF format, by correcting the program below: \n```py\nimport matplotlib.pyplot as plt\n\nlist1 = [3,4,5,6,9,12]\nlist2 = [8,12,14,15,17,20]\n\nplt.plot(list1, list2)\nplt.savefig('fig1.png', dpi = 300)\nplt.close()\n```","response":"You can set the figure size if you explicitly create the figure with \n```\nplt.figure(figsize=(3,4))\n```\nYou need to set the figure size before calling plt.plot(). To change the format of the saved figure just change the extension in the file name. However, I don't know if any of matplotlib's backends support TIFF","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17109608\/change-figure-size-and-figure-format-in-matplotlib","best_answers_votes":159,"tags":["python","python-2.7","python-3.x","matplotlib"],"question_length":460,"response_length":307,"tag_count":4} {"question":"How to embed matplotlib in pyqt I want to embed a graph in a pyqt4 user interface. I do not understand how people did the embedding in the examples I found - this one (at the bottom) and that one. Could anyone post a step-by-step explanation, or at least a very small, very simple example creating just a graph and a button in one PyQt4 GUI?","response":"It is not that complicated actually. Relevant Qt widgets are in matplotlib.backends.backend_qt4agg. FigureCanvasQTAgg and NavigationToolbar2QT are usually what you need. These are regular Qt widgets. You treat them as any other widget. Below is a very simple example with a Figure, Navigation and a single button that draws some random data. I've added comments to explain things. 
\n```\nimport sys\nfrom PyQt4 import QtGui\n\nfrom matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas\nfrom matplotlib.backends.backend_qt4agg import NavigationToolbar2QT as NavigationToolbar\nfrom matplotlib.figure import Figure\n\nimport random\n\nclass Window(QtGui.QDialog):\n def __init__(self, parent=None):\n super(Window, self).__init__(parent)\n\n # a figure instance to plot on\n self.figure = Figure()\n\n # this is the Canvas Widget that displays the `figure`\n # it takes the `figure` instance as a parameter to __init__\n self.canvas = FigureCanvas(self.figure)\n\n # this is the Navigation widget\n # it takes the Canvas widget and a parent\n self.toolbar = NavigationToolbar(self.canvas, self)\n\n # Just some button connected to `plot` method\n self.button = QtGui.QPushButton('Plot')\n self.button.clicked.connect(self.plot)\n\n # set the layout\n layout = QtGui.QVBoxLayout()\n layout.addWidget(self.toolbar)\n layout.addWidget(self.canvas)\n layout.addWidget(self.button)\n self.setLayout(layout)\n\n def plot(self):\n ''' plot some random stuff '''\n # random data\n data = [random.random() for i in range(10)]\n\n # create an axis\n ax = self.figure.add_subplot(111)\n\n # discards the old graph\n ax.clear()\n\n # plot data\n ax.plot(data, '*-')\n\n # refresh canvas\n self.canvas.draw()\n\nif __name__ == '__main__':\n app = QtGui.QApplication(sys.argv)\n\n main = Window()\n main.show()\n\n sys.exit(app.exec_())\n```\nEdit: Updated to reflect comments and API changes: NavigationToolbar2QTAgg changed to NavigationToolbar2QT; Figure is imported directly instead of pyplot; the deprecated ax.hold(False) was replaced with ax.clear().","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12459811\/how-to-embed-matplotlib-in-pyqt","best_answers_votes":124,"tags":["python","matplotlib","pyqt4"],"question_length":342,"response_length":2208,"tag_count":3} {"question":"How to prevent numbers being changed to exponential form in a plot I'm using Matplotlib in Python to plot simple x-y datasets. This produces nice-looking graphs, although when I \"zoom in\" too close on various sections of the plotted graph using the Figure View (which appears when you execute plt.show()), the x-axis values change from standard number form (1050, 1060, 1070 etc.) to scientific form with exponential notation (e.g. 1, 1.5, 2.0 with the x-axis label given as +1.057e3). I'd prefer my figures to retain the simple numbering of the axis, rather than using exponential form. Is there a way I can force Matplotlib to do this?","response":"The formatting of tick labels is controlled by a Formatter object, which assuming you haven't done anything fancy will be a ScalarFormatter by default. This formatter will use a constant shift if the fractional change of the values visible is very small. 
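You can check which formatter is in play on any fresh Axes; a quick sketch with made-up data (the printed class is what I would expect here, but verify on your version): \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.plot([1000, 1050, 1090], [0, 1, 2]) # narrow range of large values\nprint(type(ax.xaxis.get_major_formatter())) # typically matplotlib.ticker.ScalarFormatter\n```\nIt is this formatter's offset behaviour that produces the shift (whether it actually kicks in depends on your Matplotlib version and the axes.formatter.offset_threshold rcParam). 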
To avoid this, simply turn it off: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.plot(np.arange(0,100,10) + 1000, np.arange(0,100,10))\nax = plt.gca()\nax.get_xaxis().get_major_formatter().set_useOffset(False)\nplt.draw()\n```\nIf you want to avoid scientific notation in general, \n```\nax.get_xaxis().get_major_formatter().set_scientific(False)\n```\nYou can control this globally via the axes.formatter.useoffset rcParam.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14711655\/how-to-prevent-numbers-being-changed-to-exponential-form-in-a-plot","best_answers_votes":106,"tags":["python","matplotlib","figure"],"question_length":638,"response_length":627,"tag_count":3} {"question":"How to generate audio from a numpy array? I want to create a \"heart rate monitor\" effect from a 2D array in numpy and want the tone to reflect the values in the array.","response":"You can use the write function from scipy.io.wavfile to create a wav file which you can then play however you wish. Note that the array must contain integers, so if you have floats, you might want to scale them appropriately: \n```\nimport numpy as np\nfrom scipy.io.wavfile import write\n\nrate = 44100\ndata = np.random.uniform(-1, 1, rate) # 1 second worth of random samples between -1 and 1\nscaled = np.int16(data \/ np.max(np.abs(data)) * 32767)\nwrite('test.wav', rate, scaled)\n```\nIf you want Python to actually play audio, then this page provides an overview of some of the packages\/modules.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10357992\/how-to-generate-audio-from-a-numpy-array","best_answers_votes":112,"tags":["python","audio","numpy","matplotlib"],"question_length":165,"response_length":586,"tag_count":4} {"question":"How to recover matplotlib defaults after setting stylesheet In an ipython notebook, I used a matplotlib stylesheet to change the look of my plots using \n```\nfrom matplotlib.pyplot import *\n%matplotlib inline\nstyle.use('ggplot')\n```\nMy version of matplotlib is 1.4.0. How do I go back to the default matplotlib styling? I tried all the available styles in \n```\nprint style.available\n```\nbut there doesn't seem to be a \"default\" option. I also tried \n```\nmatplotlib.rcdefaults()\n```\nFor some reason, this gives me a gray background. It also changes the text from gray (ggplot style) to black, which may be the default, but also could be another random style.","response":"You should be able to set it back to default by: \n```\nimport matplotlib as mpl\nmpl.rcParams.update(mpl.rcParamsDefault)\n```\nIn ipython, things are a little different, especially with the inline backend: \n```\n>>> %matplotlib inline\n>>> import matplotlib as mpl\n>>> import matplotlib.pyplot as plt\n>>> inline_rc = dict(mpl.rcParams)\n>>> plt.plot(range(10))\n>>> mpl.rcParams.update(mpl.rcParamsDefault)\n>>> plt.plot(range(10))\n>>> mpl.rcParams.update(inline_rc)\n>>> plt.plot(range(10))\n```\nBasically, %matplotlib inline uses its own rcParams. 
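(Separately: if you only need non-default settings temporarily, matplotlib.rc_context scopes rcParams changes to a with-block so nothing has to be restored by hand; a sketch, independent of the inline backend: \n```\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nwith mpl.rc_context({'lines.linewidth': 4}):\n plt.plot(range(10)) # thick line inside the block\nplt.figure()\nplt.plot(range(10)) # back to the default linewidth here\n```\n) 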
You can grab the inline backend's rcParams from the source, but the arguably easier way is probably to just save them as inline_rc right after the %matplotlib inline cell magic, as in this example, and reuse them later.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26413185\/how-to-recover-matplotlib-defaults-after-setting-stylesheet","best_answers_votes":110,"tags":["python","matplotlib"],"question_length":656,"response_length":902,"tag_count":2} {"question":"How to overplot a line on a scatter plot in python? I have two vectors of data and I've put them into pyplot.scatter(). Now I'd like to overplot a linear fit to these data. How would I do this? I've tried using scikitlearn and np.polyfit().","response":"```\nimport numpy as np\nfrom numpy.polynomial.polynomial import polyfit\nimport matplotlib.pyplot as plt\n\n# Sample data\nx = np.arange(10)\ny = 5 * x + 10\n\n# Fit with polyfit\nb, m = polyfit(x, y, 1)\n\nplt.plot(x, y, '.')\nplt.plot(x, b + m * x, '-')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19068862\/how-to-overplot-a-line-on-a-scatter-plot-in-python","best_answers_votes":152,"tags":["python","numpy","matplotlib","linear-regression","scatter-plot"],"question_length":241,"response_length":258,"tag_count":5} {"question":"How to set font size of Matplotlib axis Legend? I have a code like this: \n```\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import *\nfrom matplotlib.font_manager import FontProperties\n\nfontP = FontProperties()\nfontP.set_size('xx-small')\nfig=plt.figure()\nax1=fig.add_subplot(111)\nplot([1,2,3], label=\"test1\")\nax1.legend(loc=0, ncol=1, bbox_to_anchor=(0, 0, 1, 1),\n prop = fontP,fancybox=True,shadow=False,title='LEGEND')\nplt.show()\n```\nIt can be seen in the plot that the fontsize setting does not affect the legend title font size. How do I set the font size of the legend title to a smaller size?","response":"This is definitely an old question, but was frustrating me too and none of the other answers changed the legend title fontsize at all, but instead just changed the rest of the text. So after banging my head against the matplotlib documentation for a while I came up with this. \n```\nlegend = ax1.legend(loc=0, ncol=1, bbox_to_anchor=(0, 0, 1, 1),\n prop = fontP,fancybox=True,shadow=False,title='LEGEND')\n\nplt.setp(legend.get_title(),fontsize='xx-small')\n```\nAs of Matplotlib 3.0.3, you can also set it globally with \n```\nplt.rcParams['legend.title_fontsize'] = 'xx-small'\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12402561\/how-to-set-font-size-of-matplotlib-axis-legend","best_answers_votes":114,"tags":["python","matplotlib","font-size","legend"],"question_length":617,"response_length":583,"tag_count":4} {"question":"Difference between \"axes\" and \"axis\" in matplotlib? I'm confused about what the difference between axes and axis is in matplotlib. 
Could someone please explain in an easy-to-understand way?","response":"This figure from the documentation will answer your question: You can find this image here (in the Matplotlib 1.x docs); it's actually been replaced in the Matplotlib 2.x docs.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5575451\/difference-between-axes-and-axis-in-matplotlib","best_answers_votes":121,"tags":["python","matplotlib"],"question_length":188,"response_length":176,"tag_count":2} {"question":"Matplotlib xticks not lining up with histogram I'm generating some histograms with matplotlib and I'm having some trouble figuring out how to get the xticks of a histogram to align with the bars. Here's a sample of the code I use to generate the histogram: \n```\nfrom matplotlib import pyplot as py\n\npy.hist(histogram_data, 49, alpha=0.75)\npy.title(column_name)\npy.xticks(range(49))\npy.show()\n```\nI know that all of the values in the histogram_data array are in [0,1,...,48]. Which, assuming I did the math right, means there are 49 unique values. I'd like to show a histogram of each of those values. Here's a picture of what's generated. How can I set up the graph such that all of the xticks are aligned to the left, middle or right of each of the bars?","response":"Short answer: Use plt.hist(data, bins=range(50)) instead to get left-aligned bins, plt.hist(data, bins=np.arange(50)-0.5) to get center-aligned bins, etc. Also, if performance matters, because you want counts of unique integers, there are a couple of slightly more efficient methods (np.bincount) that I'll show at the end. Problem statement: As a stand-alone example of what you're seeing, consider the following: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Generate a random array of integers between 0-9\n# data.min() will be 0 and data.max() will be 9 (not 10)\ndata = np.random.randint(0, 10, 1000)\n\nplt.hist(data, bins=10)\nplt.xticks(range(10))\nplt.show()\n```\nAs you've noticed, the bins aren't aligned with integer intervals. This is basically because you asked for 10 bins between 0 and 9, which isn't quite the same as asking for bins for the 10 unique values. The number of bins you want isn't exactly the same as the number of unique values. What you actually should do in this case is manually specify the bin edges. To explain what's going on, let's skip matplotlib.pyplot.hist and just use the underlying numpy.histogram function. For example, let's say you have the values [0, 1, 2, 3]. Your first instinct would be to do: \n```\n>>> import numpy as np\n>>> np.histogram([0, 1, 2, 3], bins=4)\n(array([1, 1, 1, 1]), array([ 0.  ,  0.75,  1.5 ,  2.25,  3.  ]))\n```\nThe first array returned is the counts and the second is the bin edges (in other words, where bar edges would be in your plot). Notice that we get the counts we'd expect, but because we asked for 4 bins between the min and max of the data, the bin edges aren't on integer values. Next, you might try: \n```\n>>> np.histogram([0, 1, 2, 3], bins=3)\n(array([1, 1, 2]), array([ 0.,  1.,  2.,  3.]))\n```\nNote that the bin edges (the second array) are what you were expecting, but the counts aren't. That's because the last bin behaves differently than the others, as noted in the documentation for numpy.histogram: \n```\nNotes\n-----\nAll but the last (righthand-most) bin is half-open. In other words, if\n`bins` is::\n\n [1, 2, 3, 4]\n\nthen the first bin is ``[1, 2)`` (including 1, but excluding 2) and the\nsecond ``[2, 3)``. The last bin, however, is ``[3, 4]``, which *includes*\n4.\n```\nTherefore, what you actually should do is specify exactly what bin edges you want, and either include one beyond your last data point or shift the bin edges to the 0.5 intervals. For example: \n```\n>>> np.histogram([0, 1, 2, 3], bins=range(5))\n(array([1, 1, 1, 1]), array([0, 1, 2, 3, 4]))\n```\nBin alignment: Now let's apply this to the first example and see what it looks like: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Generate a random array of integers between 0-9\n# data.min() will be 0 and data.max() will be 9 (not 10)\ndata = np.random.randint(0, 10, 1000)\n\nplt.hist(data, bins=range(11)) # <- The only difference\nplt.xticks(range(10))\nplt.show()\n```\nOkay, great! However, we now effectively have left-aligned bins. What if we wanted center-aligned bins to better reflect the fact that these are unique values? The quick way is to just shift the bin edges: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Generate a random array of integers between 0-9\n# data.min() will be 0 and data.max() will be 9 (not 10)\ndata = np.random.randint(0, 10, 1000)\n\nbins = np.arange(11) - 0.5\nplt.hist(data, bins)\nplt.xticks(range(10))\nplt.xlim([-1, 10])\n\nplt.show()\n```\nSimilarly for right-aligned bins, just shift by -1. Another approach: For the particular case of unique integer values, there's another, more efficient approach we can take. If you're dealing with unique integer counts starting with 0, you're better off using numpy.bincount than using numpy.histogram. For example: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.randint(0, 10, 1000)\ncounts = np.bincount(data)\n\n# Switching to the OO-interface. You can do all of this with \"plt\" as well.\nfig, ax = plt.subplots()\nax.bar(range(10), counts, width=1, align='center')\nax.set(xticks=range(10), xlim=[-1, 10])\n\nplt.show()\n```\nThere are two big advantages to this approach. One is speed. numpy.histogram (and therefore plt.hist) basically runs the data through numpy.digitize and then numpy.bincount. Because you're dealing with unique integer values, there's no need to take the numpy.digitize step. However, the bigger advantage is more control over display. If you'd prefer thinner rectangles, just use a smaller width: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.randint(0, 10, 1000)\ncounts = np.bincount(data)\n\n# Switching to the OO-interface. You can do all of this with \"plt\" as well.\nfig, ax = plt.subplots()\nax.bar(range(10), counts, width=0.8, align='center')\nax.set(xticks=range(10), xlim=[-1, 10])\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27083051\/matplotlib-xticks-not-lining-up-with-histogram","best_answers_votes":190,"tags":["python","matplotlib","xticks"],"question_length":751,"response_length":4864,"tag_count":3} {"question":"Python equivalent to 'hold on' in Matlab Is there an explicit equivalent command in Python's matplotlib for Matlab's hold on? I'm trying to plot all my graphs on the same axes. 
Some graphs are generated inside a for loop, and these are plotted separately from su and sl: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfor i in np.arange(1,5):\n z = 68 + 4 * np.random.randn(50)\n zm = np.cumsum(z) \/ range(1,len(z)+1)\n plt.plot(zm)\n plt.axis([0,50,60,80])\n\nplt.show()\n\nn = np.arange(1,51)\nsu = 68 + 4 \/ np.sqrt(n)\nsl = 68 - 4 \/ np.sqrt(n)\n\nplt.plot(n,su,n,sl)\n\nplt.axis([0,50,60,80])\nplt.show()\n```","response":"Just call plt.show() at the end: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.axis([0,50,60,80])\nfor i in np.arange(1,5):\n z = 68 + 4 * np.random.randn(50)\n zm = np.cumsum(z) \/ range(1,len(z)+1)\n plt.plot(zm) \n\nn = np.arange(1,51)\nsu = 68 + 4 \/ np.sqrt(n)\nsl = 68 - 4 \/ np.sqrt(n)\n\nplt.plot(n,su,n,sl)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21465988\/python-equivalent-to-hold-on-in-matlab","best_answers_votes":65,"tags":["python","matlab","graph","matplotlib"],"question_length":618,"response_length":345,"tag_count":4} {"question":"Using Colormaps to set color of line in matplotlib How does one set the color of a line in matplotlib with scalar values provided at run time using a colormap (say jet)? I tried a couple of different approaches here and I think I'm stumped. values[] is a sorted array of scalars. curves are a set of 1-d arrays, and labels are an array of text strings. Each of the arrays has the same length. \n```\nfig = plt.figure()\nax = fig.add_subplot(111)\njet = colors.Colormap('jet')\ncNorm = colors.Normalize(vmin=0, vmax=values[-1])\nscalarMap = cmx.ScalarMappable(norm=cNorm, cmap=jet)\nlines = []\nfor idx in range(len(curves)):\n line = curves[idx]\n colorVal = scalarMap.to_rgba(values[idx])\n retLine, = ax.plot(line, color=colorVal)\n #retLine.set_color()\n lines.append(retLine)\nax.legend(lines, labels, loc='upper right')\nax.grid()\nplt.show()\n```","response":"The error you are receiving is due to how you define jet. You are creating the base class Colormap with the name 'jet', but this is very different from getting the default definition of the 'jet' colormap. This base class should never be created directly, and only the subclasses should be instantiated. What you've found with your example is a buggy behavior in Matplotlib. There should be a clearer error message generated when this code is run. 
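The fix is to fetch a registered colormap instead of instantiating the base class, e.g. via plt.get_cmap('jet'); a quick sanity check, assuming nothing else about your setup: \n```\nimport matplotlib.pyplot as plt\n\ncmap = plt.get_cmap('jet')\nprint(cmap(0.5)) # a usable colormap is callable and returns an RGBA tuple\n```\n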
This is an updated version of your example: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport matplotlib.cm as cmx\nimport numpy as np\n\n# define some random data that emulates your intended code:\nNCURVES = 10\nnp.random.seed(101)\ncurves = [np.random.random(20) for i in range(NCURVES)]\nvalues = range(NCURVES)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n# replace the next line \n#jet = colors.Colormap('jet')\n# with\njet = cm = plt.get_cmap('jet') \ncNorm = colors.Normalize(vmin=0, vmax=values[-1])\nscalarMap = cmx.ScalarMappable(norm=cNorm, cmap=jet)\nprint(scalarMap.get_clim())\n\nlines = []\nfor idx in range(len(curves)):\n line = curves[idx]\n colorVal = scalarMap.to_rgba(values[idx])\n colorText = (\n 'color: (%4.2f,%4.2f,%4.2f)'%(colorVal[0],colorVal[1],colorVal[2])\n )\n retLine, = ax.plot(line,\n color=colorVal,\n label=colorText)\n lines.append(retLine)\n#added this to get the legend to work\nhandles,labels = ax.get_legend_handles_labels()\nax.legend(handles, labels, loc='upper right')\nax.grid()\nplt.show()\n```\nResulting in: Using a ScalarMappable is an improvement over the approach presented in my related answer: creating over 20 unique legend colors using matplotlib","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8931268\/using-colormaps-to-set-color-of-line-in-matplotlib","best_answers_votes":93,"tags":["python","matplotlib"],"question_length":853,"response_length":1720,"tag_count":2} {"question":"Plot an histogram with y-axis as percentage (using FuncFormatter?) I have a list of data in which the numbers are between 1000 and 20 000. \n```\ndata = [1000, 1000, 5000, 3000, 4000, 16000, 2000]\n```\nWhen I plot a histogram using the hist() function, the y-axis represents the number of occurrences of the values within a bin. Instead of the number of occurrences, I would like to have the percentage of occurrences. Code for the above plot: \n```\nf, ax = plt.subplots(1, 1, figsize=(10,5))\nax.hist(data, bins = len(list(set(data))))\n```\nI've been looking at this post which describes an example using FuncFormatter but I can't figure out how to adapt it to my problem. Some help and guidance would be welcome :) EDIT: Main issue with the to_percent(y, position) function used by the FuncFormatter. The y corresponds to one given value on the y-axis I guess. I need to divide this value by the total number of elements, which I apparently can't pass to the function... EDIT 2: Current solution I dislike because of the use of a global variable: \n```\ndef to_percent(y, position):\n # Ignore the passed in position. This has the effect of scaling the default\n # tick locations.\n global n\n\n s = str(round(100 * y \/ n, 3))\n print (y)\n\n # The percent symbol needs escaping in latex\n if matplotlib.rcParams['text.usetex'] is True:\n return s + r'$\\%$'\n else:\n return s + '%'\n\ndef plotting_hist(folder, output):\n global n\n\n data = list()\n # Do stuff to create data from folder\n\n n = len(data)\n f, ax = plt.subplots(1, 1, figsize=(10,5))\n ax.hist(data, bins = len(list(set(data))), rwidth = 1)\n\n formatter = FuncFormatter(to_percent)\n plt.gca().yaxis.set_major_formatter(formatter)\n\n plt.savefig(\"{}.png\".format(output), dpi=500)\n```\nEDIT 3: Method with density = True Actual desired output (method with global variable):","response":"Other answers seem utterly complicated. 
A histogram which shows the proportion instead of the absolute amount can easily be produced by weighting the data with 1\/n, where n is the number of datapoints. Then a PercentFormatter can be used to show the proportion (e.g. 0.45) as a percentage (45%). \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import PercentFormatter\n\ndata = [1000, 1000, 5000, 3000, 4000, 16000, 2000]\n\nplt.hist(data, weights=np.ones(len(data)) \/ len(data))\n\nplt.gca().yaxis.set_major_formatter(PercentFormatter(1))\nplt.show()\n```\nHere we see that three of the 7 values are in the first bin, i.e. 3\/7=43%.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/51473993\/plot-an-histogram-with-y-axis-as-percentage-using-funcformatter","best_answers_votes":169,"tags":["python","matplotlib"],"question_length":1874,"response_length":649,"tag_count":2} {"question":"Does matplotlib have a function for drawing diagonal lines in axis coordinates? Matplotlib Axes have the functions axhline and axvline for drawing horizontal or vertical lines at a given y or x coordinate (respectively) independently of the data scale on an Axes. Is there a similar function for plotting a constant diagonal? For example, if I have a scatterplot of variables with a similar domain, it is often useful to know whether they fall above or below the line of y = x: \n```\nmean, cov = [0, 0], [(1, .6), (.6, 1)]\nx, y = np.random.multivariate_normal(mean, cov, 100).T\ny += x + 1\nf, ax = plt.subplots(figsize=(6, 6))\nax.scatter(x, y, c=\".3\")\nax.plot([-3, 3], [-3, 3], ls=\"--\", c=\".3\")\nax.set(xlim=(-3, 3), ylim=(-3, 3))\n```\nThis can of course be done programmatically by grabbing the axis limits, (ax.get_xlim(), etc.), but that a) takes a few extra steps and b) is brittle in cases where more data might end up on the plot and shift the limits. (Actually in some cases just adding the constant line itself stretches the axes). It would be preferable to just do, e.g., ax.axdline(ls=\"--\", c=\".3\"), but it's not clear if something like this exists in the matplotlib codebase. All you would need to do would be modify the axhline code to plot from [0, 1] in axes coordinates for both x and y, I think.","response":"Drawing a diagonal from the lower left to the upper right corners of your plot would be accomplished by the following ax.plot([0, 1], [0, 1], transform=ax.transAxes) Using transform=ax.transAxes, the supplied x and y coordinates are interpreted as axes coordinates instead of data coordinates. This, as @fqq pointed out, is only the identity line when your x and y limits are equal. To draw the line y=x such that it always extends to the limits of your plot, an approach similar to the one given by @Ffisegydd would work, and can be written as the following function. 
\n```\ndef add_identity(axes, *line_args, **line_kwargs):\n identity, = axes.plot([], [], *line_args, **line_kwargs)\n def callback(axes):\n low_x, high_x = axes.get_xlim()\n low_y, high_y = axes.get_ylim()\n low = max(low_x, low_y)\n high = min(high_x, high_y)\n identity.set_data([low, high], [low, high])\n callback(axes)\n axes.callbacks.connect('xlim_changed', callback)\n axes.callbacks.connect('ylim_changed', callback)\n return axes\n```\nExample usage: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nmean, cov = [0, 0], [(1, .6), (.6, 1)]\nx, y = np.random.multivariate_normal(mean, cov, 100).T\ny += x + 1\n\nf, ax = plt.subplots(figsize=(6, 6))\nax.scatter(x, y, c=\".3\")\nadd_identity(ax, color='r', ls='--')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22104256\/does-matplotlib-have-a-function-for-drawing-diagonal-lines-in-axis-coordinates","best_answers_votes":68,"tags":["python","numpy","matplotlib"],"question_length":1307,"response_length":1346,"tag_count":3} {"question":"how to extract a subset of a colormap as a new colormap in matplotlib? I would like to use a colormap from matplotlib, e.g. CMRmap. But I don't want to use the \"black\" color at the beginning and the \"white\" color at the end. I'm interested in plotting my data using the in-between colors. I think people use this quite often, but I was searching over the internet and could not manage to find any simple solution. I'd appreciate it if someone could suggest a solution.","response":"The staticmethod colors.LinearSegmentedColormap.from_list can be used to create new LinearSegmentedColormaps. Below, I sample the original colormap at 100 points between 0.2 and 0.8: \n```\ncmap(np.linspace(0.2, 0.8, 100))\n```\nand use these colors to generate a new colormap: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport numpy as np\n\ndef truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100):\n new_cmap = colors.LinearSegmentedColormap.from_list(\n 'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval, b=maxval),\n cmap(np.linspace(minval, maxval, n)))\n return new_cmap\n\narr = np.linspace(0, 50, 100).reshape((10, 10))\nfig, ax = plt.subplots(ncols=2)\n\ncmap = plt.get_cmap('jet')\nnew_cmap = truncate_colormap(cmap, 0.2, 0.8)\nax[0].imshow(arr, interpolation='nearest', cmap=cmap)\nax[1].imshow(arr, interpolation='nearest', cmap=new_cmap)\nplt.show()\n```\nThe plot on the left shows the image using the original colormap (in this example, jet). The plot on the right shows the same image using new_cmap.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18926031\/how-to-extract-a-subset-of-a-colormap-as-a-new-colormap-in-matplotlib","best_answers_votes":121,"tags":["python","matplotlib"],"question_length":447,"response_length":1055,"tag_count":2} {"question":"matplotlib iterate subplot axis array through single list Is there a simple\/clean way to iterate over the array of axes returned by subplots, like \n```\nnrow = ncol = 2\na = []\nfig, axs = plt.subplots(nrows=nrow, ncols=ncol)\nfor i, row in enumerate(axs):\n for j, ax in enumerate(row):\n a.append(ax)\n\nfor i, ax in enumerate(a):\n ax.set_ylabel(str(i))\n```\nwhich even works for nrow or ncol == 1. 
I tried list comprehension like: \n```\n[element for tupl in tupleOfTuples for element in tupl]\n```\nbut that fails if nrows or ncols == 1","response":"The ax return value is a numpy array, which can be reshaped, I believe, without any copying of the data. If you use the following, you'll get a linear array that you can iterate over cleanly. \n```\nnrow = 1; ncol = 2;\nfig, axs = plt.subplots(nrows=nrow, ncols=ncol)\n\nfor i, ax in enumerate(axs.reshape(-1)):\n ax.set_ylabel(str(i))\n```\nThis doesn't hold when ncols and nrows are both 1, since the return value is not an array; you could turn the return value into an array with one element for consistency, though it feels a bit like a kludge: \n```\nnrow = 1; ncol = 1;\nfig, axs = plt.subplots(nrows=nrow, ncols=ncol)\naxs = np.array(axs)\n\nfor i, ax in enumerate(axs.reshape(-1)):\n ax.set_ylabel(str(i))\n```\nreshape docs. The argument -1 causes reshape to infer the dimensions of the output.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20288842\/matplotlib-iterate-subplot-axis-array-through-single-list","best_answers_votes":106,"tags":["python","matplotlib"],"question_length":533,"response_length":758,"tag_count":2} {"question":"How do I plot list of tuples? I have the following data set. I would like to use Python or Gnuplot to plot the data. The tuples are of the form (x, y). The Y-axis should be a log axis, that is, log(y). A scatter plot or line plot would be ideal. How can this be done? \n```\n[(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), \n (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), \n (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)]\n```","response":"If I get your question correctly, you could do something like this. \n```\n>>> import matplotlib.pyplot as plt\n>>> testList = [(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08),\n (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09),\n (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)]\n>>> from math import log\n>>> testList2 = [(elem1, log(elem2)) for elem1, elem2 in testList]\n>>> testList2\n[(0, -16.617236475334405), (1, -17.67799605473062), (2, -18.691431541177973), (3, -18.9767093108359), (4, -19.420021520728017), (5, -19.298411635970396)]\n>>> zip(*testList2)\n[(0, 1, 2, 3, 4, 5), (-16.617236475334405, -17.67799605473062, -18.691431541177973, -18.9767093108359, -19.420021520728017, -19.298411635970396)]\n>>> plt.scatter(*zip(*testList2))\n>>> plt.show()\n```\nwhich would give you something like Or as a line plot, \n```\n>>> plt.plot(*zip(*testList2))\n>>> plt.show()\n```\nEDIT - If you want to add a title and labels for the axes, you could do something like \n```\n>>> plt.scatter(*zip(*testList2))\n>>> plt.title('Random Figure')\n>>> plt.xlabel('X-Axis')\n>>> plt.ylabel('Y-Axis')\n>>> plt.show()\n```\nwhich would give you","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18458734\/how-do-i-plot-list-of-tuples","best_answers_votes":119,"tags":["python","numpy","matplotlib","scipy","gnuplot"],"question_length":455,"response_length":1137,"tag_count":5} {"question":"How to map number to color using matplotlib's colormap? 
Consider a variable x containing a floating point number. I want to use matplotlib's colormaps to map this number to a color, but not plot anything. Basically, I want to be able to choose the colormap with mpl.cm.autumn for example, use mpl.colors.Normalize(vmin = -20, vmax = 10) to set the range, and then map x to the corresponding color. But I really don't get the documentation of mpl.cm, so if anyone could give me a hint.","response":"It's as simple as cm.hot(0.3): \n```py\nimport matplotlib.cm as cm\n \nprint(cm.hot(0.3))\n```\n```\n(0.8240081481370484, 0.0, 0.0, 1.0)\n```\nIf you also want to have the normalizer, use \n```py\nimport matplotlib as mpl\nimport matplotlib.cm as cm\n \nnorm = mpl.colors.Normalize(vmin=-20, vmax=10)\ncmap = cm.hot\nx = 0.3\n\nm = cm.ScalarMappable(norm=norm, cmap=cmap)\nprint(m.to_rgba(x))\n```\n```\n(1.0, 0.8225486412996345, 0.0, 1.0)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15140072\/how-to-map-number-to-color-using-matplotlibs-colormap","best_answers_votes":130,"tags":["python","matplotlib"],"question_length":484,"response_length":426,"tag_count":2} {"question":"How to get matplotlib figure size For a project, I need to know the current size (in pixels) of my matplotlib figure, but I can't find how to do this. Does anyone know how to do this?","response":"```\nimport matplotlib.pyplot as plt\nfig = plt.figure()\nsize = fig.get_size_inches()*fig.dpi # size in pixels\n```\nTo do it for the current figure, \n```\nfig = plt.gcf()\nsize = fig.get_size_inches()*fig.dpi # size in pixels\n```\nYou can get the same info by doing: \n```\nbbox = fig.get_window_extent().transformed(fig.dpi_scale_trans.inverted())\nwidth, height = bbox.width*fig.dpi, bbox.height*fig.dpi\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29702424\/how-to-get-matplotlib-figure-size","best_answers_votes":103,"tags":["python","matplotlib","size","pixel","figure"],"question_length":184,"response_length":390,"tag_count":5} {"question":"How to set the default color cycle for all subplots with matplotlib? How can I set a default set of colors for plots made with matplotlib? I can set a particular color map like this \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig=plt.figure(i)\nax=plt.gca()\ncolormap = plt.get_cmap('jet')\nax.set_color_cycle([colormap(k) for k in np.linspace(0, 1, 10)])\n```\nbut is there some way to set the same set of colors for all plots, including subplots?","response":"Sure! Either specify axes.color_cycle in your .matplotlibrc file or set it at runtime using matplotlib.rcParams or matplotlib.rc. As an example of the latter: \n```py\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport numpy as np\n\n# Set the default color cycle\nmpl.rcParams['axes.prop_cycle'] = mpl.cycler(color=[\"r\", \"k\", \"c\"]) \n\nx = np.linspace(0, 20, 100)\n\nfig, axes = plt.subplots(nrows=2)\n\nfor i in range(10):\n axes[0].plot(x, i * (x - 10)**2)\n\nfor i in range(10):\n axes[1].plot(x, i * np.cos(x))\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9397944\/how-to-set-the-default-color-cycle-for-all-subplots-with-matplotlib","best_answers_votes":93,"tags":["python","matplotlib","colors"],"question_length":456,"response_length":535,"tag_count":3} {"question":"How do I assign multiple legend labels at once? 
I have the following dataset: \n```\nx = [0, 1, 2, 3, 4]\ny = [ [0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9],\n [9, 8, 7, 6, 5] ]\n```\nNow I plot it with: \n```\nimport matplotlib.pyplot as plt\nplt.plot(x, y)\n```\nHowever, I want to label the 3 y-datasets with this command, which raises an error when .legend() is called: \n```\nlineObjects = plt.plot(x, y, label=['foo', 'bar', 'baz'])\nplt.legend()\n\nFile \".\/plot_nmos.py\", line 33, in \n plt.legend()\n...\nAttributeError: 'list' object has no attribute 'startswith'\n```\nWhen I inspect the lineObjects: \n```\n>>> lineObjects[0].get_label()\n['foo', 'bar', 'baz']\n>>> lineObjects[1].get_label()\n['foo', 'bar', 'baz']\n>>> lineObjects[2].get_label()\n['foo', 'bar', 'baz']\n```\nQuestion: Is there an elegant way to assign multiple labels by just using the .plot() method?","response":"You can iterate over your line objects list, so labels are individually assigned. An example with the built-in python iter function: \n```\nlineObjects = plt.plot(x, y)\nplt.legend(iter(lineObjects), ('foo', 'bar', 'baz'))\n```\nEdit: after updating to matplotlib 1.1.1, it looks like the plt.plot(x, y), with y as a list of lists (as provided by the author of the question), doesn't work anymore. The one step plotting without iteration over the y arrays is still possible though after passing y as numpy.array (assuming numpy, http:\/\/numpy.scipy.org\/, has been previously imported). In this case, use plt.plot(x, y) (if the data in the 2D y array are arranged as columns [axis 1]) or plt.plot(x, y.transpose()) (if the data in the 2D y array are arranged as rows [axis 0]) Edit 2: as pointed out by @pelson (see commentary below), the iter function is unnecessary and a simple plt.legend(lineObjects, ('foo', 'bar', 'baz')) works perfectly","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11481644\/how-do-i-assign-multiple-legend-labels-at-once","best_answers_votes":70,"tags":["python","matplotlib","legend","legend-properties"],"question_length":858,"response_length":934,"tag_count":4} {"question":"Embedding small plots inside subplots in matplotlib If you want to insert a small plot inside a bigger one you can use Axes, like here. The problem is that I don't know how to do the same inside a subplot. I have several subplots and I would like to plot a small plot inside each subplot. The example code would be something like this: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\n\nfor i in range(4):\n ax = fig.add_subplot(2,2,i)\n ax.plot(np.arange(11),np.arange(11),'b')\n\n #b = ax.axes([0.7,0.7,0.2,0.2]) \n #it gives an error, AxesSubplot is not callable\n\n #b = plt.axes([0.7,0.7,0.2,0.2])\n #plt.plot(np.arange(3),np.arange(3)+11,'g')\n #it plots the small plot in the selected position of the whole figure, not inside the subplot\n```\nAny ideas?","response":"I wrote a function very similar to plt.axes. You could use it for plotting your sub-subplots. There is an example... 
\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n#def add_subplot_axes(ax,rect,facecolor='w'): # matplotlib 2.0+\ndef add_subplot_axes(ax,rect,axisbg='w'):\n fig = plt.gcf()\n box = ax.get_position()\n width = box.width\n height = box.height\n inax_position = ax.transAxes.transform(rect[0:2])\n transFigure = fig.transFigure.inverted()\n infig_position = transFigure.transform(inax_position) \n x = infig_position[0]\n y = infig_position[1]\n width *= rect[2]\n height *= rect[3] # <= Typo was here\n #subax = fig.add_axes([x,y,width,height],facecolor=facecolor) # matplotlib 2.0+\n subax = fig.add_axes([x,y,width,height],axisbg=axisbg)\n x_labelsize = subax.get_xticklabels()[0].get_size()\n y_labelsize = subax.get_yticklabels()[0].get_size()\n x_labelsize *= rect[2]**0.5\n y_labelsize *= rect[3]**0.5\n subax.xaxis.set_tick_params(labelsize=x_labelsize)\n subax.yaxis.set_tick_params(labelsize=y_labelsize)\n return subax\n \ndef example1():\n fig = plt.figure(figsize=(10,10))\n ax = fig.add_subplot(111)\n rect = [0.2,0.2,0.7,0.7]\n ax1 = add_subplot_axes(ax,rect)\n ax2 = add_subplot_axes(ax1,rect)\n ax3 = add_subplot_axes(ax2,rect)\n plt.show()\n\ndef example2():\n fig = plt.figure(figsize=(10,10))\n axes = []\n subpos = [0.2,0.6,0.3,0.3]\n x = np.linspace(-np.pi,np.pi)\n for i in range(4):\n axes.append(fig.add_subplot(2,2,i+1)) # subplot numbering is 1-based\n for axis in axes:\n axis.set_xlim(-np.pi,np.pi)\n axis.set_ylim(-1,3)\n axis.plot(x,np.sin(x))\n subax1 = add_subplot_axes(axis,subpos)\n subax2 = add_subplot_axes(subax1,subpos)\n subax1.plot(x,np.sin(x))\n subax2.plot(x,np.sin(x))\nif __name__ == '__main__':\n example2()\n plt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17458580\/embedding-small-plots-inside-subplots-in-matplotlib","best_answers_votes":70,"tags":["matplotlib","plot","embedding","axes","subplot"],"question_length":797,"response_length":1882,"tag_count":5} {"question":"Remove line through legend marker with .plot I have a matplotlib plot generated with the following code: \n```\nimport matplotlib.pyplot as pyplot\n\nFig, ax = pyplot.subplots()\nfor i, (mark, color) in enumerate(zip(\n ['s', 'o', 'D', 'v'], ['r', 'g', 'b', 'purple'])):\n ax.plot(i+1, i+1, color=color,\n marker=mark,\n markerfacecolor='None',\n markeredgecolor=color,\n label=i)\n\nax.set_xlim(0,5)\nax.set_ylim(0,5)\nax.legend()\n```\nwith this as the generated figure: I don't like the lines through the markers in the legend. How can I get rid of them?","response":"You can specify linestyle='None' or linestyle='' as a keyword argument in the plot command. Also, ls= can replace linestyle=. 
\n```py\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nfor i, (mark, color) in enumerate(zip(\n ['s', 'o', 'D', 'v'], ['r', 'g', 'b', 'purple'])):\n ax.plot(i+1, i+1, color=color,\n marker=mark,\n markerfacecolor='None',\n markeredgecolor=color,\n linestyle='None',\n label=i)\n\nax.set_xlim(0, 5)\nax.set_ylim(0, 5)\nax.legend(numpoints=1)\nplt.show()\n```\nSince you're only plotting single points, you can't see the line attribute except in the legend.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21285885\/remove-line-through-legend-marker-with-plot","best_answers_votes":125,"tags":["python","matplotlib","legend","linechart","line-plot"],"question_length":590,"response_length":642,"tag_count":5} {"question":"Interactive matplotlib plot with two sliders I used matplotlib to create some plot, which depends on 8 variables. I would like to study how the plot changes when I change some of them. I created a script that calls the matplotlib one and generates different snapshots that I later convert into a movie; it is not bad, but a bit clumsy. I wonder if somehow I could interact with the plot regeneration using keyboard keys to increase \/ decrease values of some of the variables and see instantly how the plot changes. What is the best approach for this? Also, could you point me to interesting links, or a link to a plot example with just two sliders?","response":"In addition to what @triplepoint mentioned, have a look at the slider widget. There's an example on the matplotlib examples page. It's a graphical slider bar rather than keyboard bindings, but it works quite well for what you want to do. Also note that to guarantee the sliders and buttons remain responsive and not garbage-collected, references to the objects (amp_slider, freq_slider, etc.) should be maintained by yourself. (I'm making this community wiki, as I'm just copy-pasting from the example. This particular example teaches bad habits (e.g. from pylab import *), but it gets the point across. The example has been fixed to avoid the use of pylab.) 
\n```\nfrom numpy import pi, sin\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.widgets import Slider, Button, RadioButtons\n\ndef signal(amp, freq):\n return amp * sin(2 * pi * freq * t)\n\naxis_color = 'lightgoldenrodyellow'\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\n# Adjust the subplots region to leave some space for the sliders and buttons\nfig.subplots_adjust(left=0.25, bottom=0.25)\n\nt = np.arange(0.0, 1.0, 0.001)\namp_0 = 5\nfreq_0 = 3\n\n# Draw the initial plot\n# The 'line' variable is used for modifying the line later\n[line] = ax.plot(t, signal(amp_0, freq_0), linewidth=2, color='red')\nax.set_xlim([0, 1])\nax.set_ylim([-10, 10])\n\n# Add two sliders for tweaking the parameters\n\n# Define an axes area and draw a slider in it\namp_slider_ax = fig.add_axes([0.25, 0.15, 0.65, 0.03], facecolor=axis_color)\namp_slider = Slider(amp_slider_ax, 'Amp', 0.1, 10.0, valinit=amp_0)\n\n# Draw another slider\nfreq_slider_ax = fig.add_axes([0.25, 0.1, 0.65, 0.03], facecolor=axis_color)\nfreq_slider = Slider(freq_slider_ax, 'Freq', 0.1, 30.0, valinit=freq_0)\n\n# Define an action for modifying the line when any slider's value changes\ndef sliders_on_changed(val):\n line.set_ydata(signal(amp_slider.val, freq_slider.val))\n fig.canvas.draw_idle()\namp_slider.on_changed(sliders_on_changed)\nfreq_slider.on_changed(sliders_on_changed)\n\n# Add a button for resetting the parameters\nreset_button_ax = fig.add_axes([0.8, 0.025, 0.1, 0.04])\nreset_button = Button(reset_button_ax, 'Reset', color=axis_color, hovercolor='0.975')\ndef reset_button_on_clicked(mouse_event):\n freq_slider.reset()\n amp_slider.reset()\nreset_button.on_clicked(reset_button_on_clicked)\n\n# Add a set of radio buttons for changing color\ncolor_radios_ax = fig.add_axes([0.025, 0.5, 0.15, 0.15], facecolor=axis_color)\ncolor_radios = RadioButtons(color_radios_ax, ('red', 'blue', 'green'), active=0)\ndef color_radios_on_clicked(label):\n line.set_color(label)\n fig.canvas.draw_idle()\ncolor_radios.on_clicked(color_radios_on_clicked)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6697259\/interactive-matplotlib-plot-with-two-sliders","best_answers_votes":87,"tags":["python","keyboard","matplotlib","interactive"],"question_length":652,"response_length":2694,"tag_count":4} {"question":"\\text does not work in a matplotlib label I am using matplotlib together with latex labels for the axis, title and colorbar labels While it works really great most of the time, it has some issues when you have a formula using \\text. One really simple example. \n```\nfrom matplotlib import pyplot as plt\nplt.plot([1,2,3])\nplt.title(r\"$f_{\\text{cor, r}}$\")\n\nplt.show()\n```\nThis will result in an error message like: \n```\nIPython\/core\/formatters.py:239: FormatterWarning: Exception in image\/png formatter: \nf_{\\text{1cor, r}}\n ^\nUnknown symbol: \\text (at char 3), (line:1, col:4)\n FormatterWarning,\n```\nIs there an easy way to use \\text in there?","response":"\\text won't work because it requires the amsmath package (not included in mathtext - the math rendering engine of matplotlib). 
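A minimal reproduction, assuming any backend (the failure only surfaces once the figure is actually rendered): \n```\nimport matplotlib.pyplot as plt\n\nplt.title(r'$f_{\\text{cor}}$') # mathtext cannot parse \\text\nplt.savefig('out.png') # rendering raises: Unknown symbol: \\text\n```\n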
So you basically have two options: (1) use LaTeX-based font rendering \n```py\nfrom matplotlib import pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['text.usetex'] = True\nmpl.rcParams['text.latex.preamble'] = r'\usepackage{amsmath}' # for the \text command (newer Matplotlib expects a string here, not a list)\nplt.plot([1,2,3])\nplt.title(r\"$f_{\\text{cor, r}}$\")\nplt.show()\n```\nor (2) use mathtext, but with \\mathrm instead of \\text \n```py\nfrom matplotlib import pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['text.usetex'] = False # not really needed\nplt.plot([1,2,3])\nplt.title(r\"$f_{\\mathrm{cor, r}}$\")\nplt.show()\n```\nThe latter approach creates a figure like Be aware that unlike with the \\text command, spaces inside the \\mathrm environment are not respected. If you want more space between the variables you have to use latex style commands (\\, \\;, ...).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23824687\/text-does-not-work-in-a-matplotlib-label","best_answers_votes":101,"tags":["python","matplotlib"],"question_length":645,"response_length":927,"tag_count":2} {"question":"show origin axis (x,y) in matplotlib plot I have the following simple plot, and I would like to display the origin axis (x, y). I already have a grid, but I need the x, y axis to be emphasized. This is my code: \n```\nx = linspace(0.2,10,100)\nplot(x, 1\/x)\nplot(x, log(x))\naxis('equal')\ngrid()\n```\nI have seen this question. The accepted answer suggests to use \"Axis spine\" and just links to some example. The example is however too complicated, using subplots. I am unable to figure out how to use \"Axis spine\" in my simple example.","response":"Using subplots is not too complicated, the spines might be. Dumb, simple way: \n```\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0.2,10,100)\nfig, ax = plt.subplots()\nax.plot(x, 1\/x)\nax.plot(x, np.log(x))\nax.set_aspect('equal')\nax.grid(True, which='both')\n\nax.axhline(y=0, color='k')\nax.axvline(x=0, color='k')\n```\nAnd I get: (you can't see the vertical axis since the lower x-limit is zero.) Alternative using simple spines: \n```\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0.2,10,100)\nfig, ax = plt.subplots()\nax.plot(x, 1\/x)\nax.plot(x, np.log(x))\nax.set_aspect('equal')\nax.grid(True, which='both')\n\n# set the x-spine (see below for more info on `set_position`)\nax.spines['left'].set_position('zero')\n\n# turn off the right spine\/ticks\nax.spines['right'].set_color('none')\nax.yaxis.tick_left()\n\n# set the y-spine\nax.spines['bottom'].set_position('zero')\n\n# turn off the top spine\/ticks\nax.spines['top'].set_color('none')\nax.xaxis.tick_bottom()\n```\nAlternative using seaborn (my favorite): \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn\nseaborn.set(style='ticks')\n\nx = np.linspace(0.2,10,100)\nfig, ax = plt.subplots()\nax.plot(x, 1\/x)\nax.plot(x, np.log(x))\nax.set_aspect('equal')\nax.grid(True, which='both')\nseaborn.despine(ax=ax, offset=0) # the important part here\n```\nUsing the set_position method of a spine: Here are the docs for the set_position method of spines: Spine position is specified by a 2 tuple of (position type, amount). The position types are: 'outward' : place the spine out from the data area by the specified number of points. (Negative values specify placing the spine inward.) 'axes' : place the spine at the specified Axes coordinate (from 0.0-1.0). 
'data' : place the spine at the specified data coordinate. Additionally, shorthand notations define a special positions: 'center' -> ('axes',0.5) 'zero' -> ('data', 0.0) So you can place, say the left spine anywhere with: ax.spines['left'].set_position((system, poisition)) where system is 'outward', 'axes', or 'data' and position in the place in that coordinate system.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25689238\/show-origin-axis-x-y-in-matplotlib-plot","best_answers_votes":129,"tags":["matplotlib","ipython","jupyter-notebook"],"question_length":525,"response_length":2151,"tag_count":3} {"question":"How to plot empirical CDF (ECDF) How can I plot the empirical CDF of an array of numbers with Matplotlib in Python? I'm looking for the CDF analog of Pylab\u2019s hist function. One thing I can think of is: \n```\nfrom scipy.stats import cumfreq\na = array([...]) # my array of numbers\nnum_bins = 20\nb = cumfreq(a, num_bins)\nplt.plot(b)\n```","response":"If you like linspace and prefer one-liners, you can do: \n```\nplt.plot(np.sort(a), np.linspace(0, 1, len(a), endpoint=False))\n```\nGiven my tastes, I almost always do: \n```\n# a is the data array\nx = np.sort(a)\ny = np.arange(len(x))\/float(len(x))\nplt.plot(x, y)\n```\nWhich works for me even if there are >O(1e6) data values. If you really need to downsample I'd set \n```\nx = np.sort(a)[::down_sampling_step]\n```\nEdit to respond to comment\/edit on why I use endpoint=False or the y as defined above. The following are some technical details. The empirical CDF is usually formally defined as \n```\nCDF(x) = \"number of samples <= x\"\/\"number of samples\"\n```\nin order to exactly match this formal definition you would need to use y = np.arange(1,len(x)+1)\/float(len(x)) so that we get y = [1\/N, 2\/N ... 1]. This estimator is an unbiased estimator that will converge to the true CDF in the limit of infinite samples Wikipedia ref.. I tend to use y = [0, 1\/N, 2\/N ... (N-1)\/N] since: (a) it is easier to code\/more idiomatic, (b) but is still formally justified since one can always exchange CDF(x) with 1-CDF(x) in the convergence proof, and (c) works with the (easy) downsampling method described above. In some particular cases, it is useful to define \n```\ny = (arange(len(x))+0.5)\/len(x)\n```\nwhich is intermediate between these two conventions. Which, in effect, says \"there is a 1\/(2N) chance of a value less than the lowest one I've seen in my sample, and a 1\/(2N) chance of a value greater than the largest one I've seen so far. Note that the selection of this convention interacts with the where parameter used in the plt.step if it seems more useful to display the CDF as a piecewise constant function. In order to exactly match the formal definition mentioned above, one would need to use where=pre the suggested y=[0,1\/N..., 1-1\/N] convention, or where=post with the y=[1\/N, 2\/N ... 1] convention, but not the other way around. 
However, for large samples, and reasonable distributions, the convention is given in the main body of the answer is easy to write, is an unbiased estimator of the true CDF, and works with the downsampling methodology.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3209362\/how-to-plot-empirical-cdf-ecdf","best_answers_votes":134,"tags":["python","numpy","matplotlib","seaborn","ecdf"],"question_length":333,"response_length":2143,"tag_count":5} {"question":"Use a loop to plot n charts Python I have a set of data that I load into python using a pandas dataframe. What I would like to do is create a loop that will print a plot for all the elements in their own frame, not all on one. My data is in an excel file structured in this fashion: \n```\nIndex | DATE | AMB CO 1 | AMB CO 2 |...|AMB CO_n | TOTAL\n1 | 1\/1\/12| 14 | 33 |...| 236 | 1600\n. | ... | ... | ... |...| ... | ...\n. | ... | ... | ... |...| ... | ...\n. | ... | ... | ... |...| ... | ...\nn\n```\nThis is what I have for code so far: \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\nambdf = pd.read_excel('Ambulance.xlsx', \n sheetname='Sheet2', index_col=0, na_values=['NA'])\nprint type(ambdf)\nprint ambdf\nprint ambdf['EAS']\n\namb_plot = plt.plot(ambdf['EAS'], linewidth=2)\nplt.title('EAS Ambulance Numbers')\nplt.xlabel('Month')\nplt.ylabel('Count of Deliveries')\nprint amb_plot\n\nfor i in ambdf:\n print plt.plot(ambdf[i], linewidth = 2)\n```\nI am thinking of doing something like this: \n```\nfor i in ambdf:\n ambdf_plot = plt.plot(ambdf, linewidth = 2)\n```\nThe above was not remotely what i wanted and it stems from my unfamiliarity with Pandas, MatplotLib etc, looking at some documentation though to me it looks like matplotlib is not even needed (question 2) So A) How can I produce a plot of data for every column in my df and B) do I need to use matplotlib or should I just use pandas to do it all? Thank you,","response":"Ok, so the easiest method to create several plots is this: \n```\nimport matplotlib.pyplot as plt\nx=[[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]\ny=[[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]\nfor i in range(len(x)):\n plt.figure()\n plt.plot(x[i],y[i])\n # Show\/save figure as desired.\n plt.show()\n# Can show all four figures at once by calling plt.show() here, outside the loop.\n#plt.show()\n```\nNote that you need to create a figure every time or pyplot will plot in the first one created. If you want to create several data series all you need to do is: \n```\nimport matplotlib.pyplot as plt\nplt.figure()\nx=[[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]\ny=[[1,2,3,4],[2,3,4,5],[3,4,5,6],[7,8,9,10]]\nplt.plot(x[0],y[0],'r',x[1],y[1],'g',x[2],y[2],'b',x[3],y[3],'k')\n```\nYou could automate it by having a list of colours like ['r','g','b','k'] and then just calling both entries in this list and corresponding data to be plotted in a loop if you wanted to. If you just want to programmatically add data series to one plot something like this will do it (no new figure is created each time so everything is plotted in the same figure): \n```\nimport matplotlib.pyplot as plt\nx=[[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]\ny=[[1,2,3,4],[2,3,4,5],[3,4,5,6],[7,8,9,10]]\ncolours=['r','g','b','k']\nplt.figure() # In this example, all the plots will be in one figure. \nfor i in range(len(x)):\n plt.plot(x[i],y[i],colours[i])\nplt.show()\n```\nIf anything matplotlib has a very good documentation page with plenty of examples. 
17 Dec 2019: added plt.show() and plt.figure() calls to clarify this part of the story.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19189488\/use-a-loop-to-plot-n-charts-python","best_answers_votes":104,"tags":["python-2.7","matplotlib","pandas"],"question_length":1525,"response_length":1599,"tag_count":3} {"question":"How to move a tick label I would like to move some ticks' labels horizontally along the x-axis, without moving the corresponding ticks. More specifically, when rotating labels with plt.setp, the centers of the labels' text stay aligned with the ticks. I would like to shift those labels to the right, so that the near ends of the labels get aligned instead as suggested on the image below. I am aware of this post and this one, however the answers are interesting kludges rather than strict answers to the question. my code: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport datetime\n\n# my fake data\ndates = np.array([datetime.datetime(2000,1,1) + datetime.timedelta(days=i) for i in range(365*5)])\ndata = np.sin(np.arange(365*5)\/365.0*2*np.pi - 0.25*np.pi) + np.random.rand(365*5) \/3\n\n# creates fig with 2 subplots\nfig = plt.figure(figsize=(10.0, 6.0))\nax = plt.subplot2grid((2,1), (0, 0))\nax2 = plt.subplot2grid((2,1), (1, 0))\n## plot dates\nax2.plot_date( dates, data )\n\n# rotates labels \nplt.setp( ax2.xaxis.get_majorticklabels(), rotation=-45 ) \n\n# try to shift labels to the right\nax2.xaxis.get_majorticklabels()[2].set_y(-.1)\nax2.xaxis.get_majorticklabels()[2].set_x(10**99)\n\nplt.show()\n```\nStrangely enough, set_y behaves as expected, but even if I set x to a fantasillion, the labels would not move by one iota. (The use of plot_date may introduce additional confusion, but the same actually happens with plot.)","response":"First of all, let's use a mcve to show the problem. \n```\nimport numpy as np\nimport datetime\nimport matplotlib.pyplot as plt\nplt.rcParams[\"date.autoformatter.month\"] = \"%b %Y\"\n\n# my fake data\ndates = np.array([datetime.datetime(2000,1,1) + datetime.timedelta(days=i) for i in range(365)])\ndata = np.sin(np.arange(365)\/365.0*2*np.pi - 0.25*np.pi) + np.random.rand(365) \/3\n\n# creates fig with 2 subplots\nfig, ax = plt.subplots(figsize=(6,2))\n## plot dates\nax.plot_date( dates, data )\n\n# rotates labels \nplt.setp( ax.xaxis.get_majorticklabels(), rotation=-45 ) \n\nplt.tight_layout()\nplt.show()\n```\nNow as other anwers pointed out already, you may use horizontal alignment of the text. \n```\n# rotates labels and aligns them horizontally to left \nplt.setp( ax.xaxis.get_majorticklabels(), rotation=-45, ha=\"left\" )\n```\nYou may use the rotation_mode argument to let the rotation happen about the top left point of the text, giving a slightly nicer result in this case. \n```\n# rotates labels and aligns them horizontally to left \nplt.setp( ax.xaxis.get_majorticklabels(), rotation=-45, ha=\"left\", rotation_mode=\"anchor\")\n```\nIn case those options are not fine grained enough, i.e. you want to position the labels more accurately, e.g. shifting it to the side by some points, you may use a transform. The following would offset the label by 5 points in horizontal direction, using a matplotlib.transforms.ScaledTranslation. \n```\nimport matplotlib.transforms\n\nplt.setp( ax.xaxis.get_majorticklabels(), rotation=-45) \n\n# Create offset transform by 5 points in x direction\ndx = 5\/72.; dy = 0\/72. 
\noffset = matplotlib.transforms.ScaledTranslation(dx, dy, fig.dpi_scale_trans)\n\n# apply offset transform to all x ticklabels.\nfor label in ax.xaxis.get_majorticklabels():\n label.set_transform(label.get_transform() + offset)\n```\nThe advantage of this, compared to e.g. the solution provided by @explorerDude is that the offset is independent on the data in the graph, such that it is generally applicable to any plot and would look the same for a given fontsize.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28615887\/how-to-move-a-tick-label","best_answers_votes":108,"tags":["python","matplotlib","label"],"question_length":1432,"response_length":2047,"tag_count":3} {"question":"How to show matplotlib plots? I am sure the configuration of matplotlib for python is correct since I have used it to plot some figures. But today it just stop working for some reason. I tested it with really simple code like: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nx = np.arange(0, 5, 0.1)\ny = np.sin(x)\nplt.plot(x, y)\n```\nThere's no error but just no figure shown up. I am using python 2.6, Eclipse in Ubuntu","response":"In matplotlib you have two main options: Create your plots and draw them at the end: \n```\nimport matplotlib.pyplot as plt\n\nplt.plot(x, y)\nplt.plot(z, t)\nplt.show()\n```\nCreate your plots and draw them as soon as they are created: \n```\nimport matplotlib.pyplot as plt\nfrom matplotlib import interactive\ninteractive(True)\n\nplt.plot(x, y)\nraw_input('press return to continue')\n\nplt.plot(z, t)\nraw_input('press return to end')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8575062\/how-to-show-matplotlib-plots","best_answers_votes":96,"tags":["python","matplotlib"],"question_length":427,"response_length":425,"tag_count":2} {"question":"How can I change the x axis so there is no white space? So currently learning how to import data and work with it in matplotlib and I am having trouble even tho I have the exact code from the book. This is what the plot looks like, but my question is how can I get it where there is no white space between the start and the end of the x-axis. Here is the code: \n```\nimport csv\n\nfrom matplotlib import pyplot as plt\nfrom datetime import datetime\n\n# Get dates and high temperatures from file.\nfilename = 'sitka_weather_07-2014.csv'\nwith open(filename) as f:\n reader = csv.reader(f)\n header_row = next(reader)\n\n #for index, column_header in enumerate(header_row):\n #print(index, column_header)\n dates, highs = [], []\n for row in reader:\n current_date = datetime.strptime(row[0], \"%Y-%m-%d\")\n dates.append(current_date)\n\n high = int(row[1])\n highs.append(high)\n\n# Plot data. \nfig = plt.figure(dpi=128, figsize=(10,6))\nplt.plot(dates, highs, c='red')\n\n\n# Format plot.\nplt.title(\"Daily high temperatures, July 2014\", fontsize=24)\nplt.xlabel('', fontsize=16)\nfig.autofmt_xdate()\nplt.ylabel(\"Temperature (F)\", fontsize=16)\nplt.tick_params(axis='both', which='major', labelsize=16)\n\nplt.show()\n```","response":"There is an automatic margin set at the edges, which ensures the data to be nicely fitting within the axis spines. In this case such a margin is probably desired on the y axis. By default it is set to 0.05 in units of axis span. To set the margin to 0 on the x axis, use \n```\nplt.margins(x=0)\n```\nor \n```\nax.margins(x=0)\n```\ndepending on the context. Also see the documentation. 
In case you want to get rid of the margin in the whole script, you can use \n```\nplt.rcParams['axes.xmargin'] = 0\n```\nat the beginning of your script (same for y of course). If you want to get rid of the margin entirely and forever, you might want to change the according line in the matplotlib rc file: \n```\naxes.xmargin : 0\naxes.ymargin : 0\n```\nExample \n```py\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ntips = sns.load_dataset('tips')\n\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))\ntips.plot(ax=ax1, title='Default Margin')\ntips.plot(ax=ax2, title='Margins: x=0')\nax2.margins(x=0)\n```\nAlternatively, use plt.xlim(..) or ax.set_xlim(..) to manually set the limits of the axes such that there is no white space left.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42045767\/how-can-i-change-the-x-axis-so-there-is-no-white-space","best_answers_votes":137,"tags":["python","matplotlib"],"question_length":1238,"response_length":1114,"tag_count":2} {"question":"How to hide in IPython notebook [duplicate] This question already has answers here: Disable the output of matplotlib pyplot (4 answers) Closed 1 year ago. I am plotting a NumPy array of values, I, using IPython notebook in %matplotlib inline mode with the plot command plt.plot(I,'o'). The resulting output is: \n```\n\nOut[159]:\n[,\n ,\n ,\n ,\n ,\n ,\n ,\n ....\n ....\n]\n```\nThen my plot shows up below these lines of output. Is there a way to just show the plot and hide the from the output?","response":"You can use a semi-colon ; to end the line. This suppresses the unwanted output when generating plots: \n```\nplt.plot(I,'o');\n```\nIn general, using a semi-colon stops IPython from printing any output value from that line of a code block. For example, the executing the cell containing the code 1+1; would not output 2. An alternative way would be to bind a variable to the plot: \n```\n_ = plt.plot(a)\n```\nThis way, IPython only shows you the plots and the name _ is bound to the unwanted output.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25790062\/how-to-hide-matplotlib-lines-line2d-in-ipython-notebook","best_answers_votes":118,"tags":["python","matplotlib","plot","ipython","jupyter-notebook"],"question_length":806,"response_length":493,"tag_count":5} {"question":"How to shade region under the curve in matplotlib I want to use matplotlib to illustrate the definite integral between two regions: x_0, and x_1. How can I shade a region under a curve in matplotlib from x=-1, to x=1 given the following plot \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\ndef f(t):\n return t * t\n\nt = np.arange(-4,4,1\/40.)\nplt.plot(t,f(t))\n```","response":"The final answer I came up with is to use fill_between. I thought there would have been a simple shade between type method, but this does exactly what I want. \n```\nsection = np.arange(-1, 1, 1\/20.)\nplt.fill_between(section,f(section))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10046262\/how-to-shade-region-under-the-curve-in-matplotlib","best_answers_votes":84,"tags":["python","numpy","matplotlib"],"question_length":377,"response_length":238,"tag_count":3} {"question":"How can I use seaborn without changing the matplotlib defaults? I am trying to use seaborn, because of its distplot function. But I prefer the default matplotlib settings. 
When I import seaborn, it changes automatically the appearance of my figure. How can I use seaborn functions without changing the look of the plots?","response":"Version 0.8 (july 2017) changed this behaviour. From https:\/\/seaborn.pydata.org\/whatsnew.html#v0-8-0-july-2017: The default (seaborn) style is no longer applied when seaborn is imported. It is now necessary to explicitly call set() or one or more of set_style(), set_context(), and set_palette(). Correspondingly, the seaborn.apionly module has been deprecated. For older versions, Import seaborn like this: \n```\nimport seaborn.apionly as sns\n```\nand then you should be able to use sns.distplot but maintain the default matplotlib styling + your personal rc configuration.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25393936\/how-can-i-use-seaborn-without-changing-the-matplotlib-defaults","best_answers_votes":82,"tags":["python","matplotlib","seaborn"],"question_length":320,"response_length":572,"tag_count":3} {"question":"Jupyter Notebook: interactive plot with widgets I am trying to generate an interactive plot that depends on widgets. The problem I have is that when I change parameters using the slider, a new plot is done after the previous one, instead I would expect only one plot changing according to the parameters. Example: \n```\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport numpy as np\n\ndef plot_func(freq):\n x = np.linspace(0, 2*np.pi)\n y = np.sin(x * freq)\n plt.plot(x, y)\n\ninteract(plot_func, freq = widgets.FloatSlider(value=7.5,\n min=1,\n max=5.0,\n step=0.5))\n```\nAfter moving the slider to 4.0, I have: while I just want one figure to change as I move the slider. How can I achieve this? (I am using Python 2.7, matplotlib 2.0 and I have just updated notebook and jupyter to the latest version. let me know if further info is needed.)","response":"As you want to change the figure, instead of creating a new one, may I suggest the following way: Use an interactive backend; %matplotlib notebook Update the line in the plot, instead of drawing new ones. So the code could look something like this: \n```\n%matplotlib notebook\nfrom ipywidgets import *\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 2 * np.pi)\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\nline, = ax.plot(x, np.sin(x))\n\ndef update(w = 1.0):\n line.set_ydata(np.sin(w * x))\n fig.canvas.draw_idle()\n\ninteract(update);\n```\nAlternatively you may use plt.show() as in this answer.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44329068\/jupyter-notebook-interactive-plot-with-widgets","best_answers_votes":76,"tags":["python","matplotlib","jupyter-notebook","jupyter"],"question_length":1089,"response_length":622,"tag_count":4} {"question":"Remove seaborn lineplot legend title I would like to remove the title from my seaborn lineplot legend. I tried using this answer to no avail: \n```\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\nfmri = sns.load_dataset(\"fmri\")\nfig, ax = plt.subplots()\ng = sns.lineplot(x=\"timepoint\", y=\"signal\", hue=\"event\", data=fmri, ax=ax)\nax.legend().set_title('')\n```\nI get the same if I try to set the title to None. 
Interestingly, setting the title to something else seems to prepend to the existing title: \n```\nax.legend().set_title('Something else')\n```\nIt almost looks like seaborn is treating the title as a hidden legend entry. How can I resolve this?","response":"Important: This answer is about the case when a hue is used that appears as a legend title. In all other cases, the question itself already contains the usual way to get rid of a title. Indeed, seaborn is misusing a legend label as a (subgroup-)title. Hence the idea can be to either remove this label, or replace it with custom text. Replacing with custom text: \n```\nlegend = ax.legend()\nlegend.texts[0].set_text(\"Whatever else\")\n```\nRemoving the label: \n```\nhandles, labels = ax.get_legend_handles_labels()\nax.legend(handles=handles[1:], labels=labels[1:])\n```\nAfter having removed the label you may of course still set another (real) title: \n```\nhandles, labels = ax.get_legend_handles_labels()\nax.legend(handles=handles[1:], labels=labels[1:], title=\"Whatever else\")\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/51579215\/remove-seaborn-lineplot-legend-title","best_answers_votes":111,"tags":["python","matplotlib","seaborn","legend","line-plot"],"question_length":664,"response_length":774,"tag_count":5} {"question":"How to change the plot line color from blue to black I am stuck when I have generated a set of data and tried to color the plot line in python. For example I would like to change the line color from blue to black here. This is what I have and returns is the set of data that I got from pandas. \n```\nax=plt.gca()\nax.set_axis_bgcolor('#cccccc')\nreturns.plot()\n```","response":"The usual way to set the line color in matplotlib is to specify it in the plot command. This can either be done by a string after the data, e.g. \"r-\" for a red line, or by explicitely stating the color argument. \n```\nimport matplotlib.pyplot as plt\n\nplt.plot([1,2,3], [2,3,1], \"r-\") # red line\nplt.plot([1,2,3], [5,5,3], color=\"blue\") # blue line\n\nplt.show()\n```\nSee also the plot command's documentation. In case you already have a line with a certain color, you can change that with the lines2D.set_color() method. \n```\nline, = plt.plot([1,2,3], [4,5,3], color=\"blue\")\nline.set_color(\"black\")\n```\nSetting the color of a line in a pandas plot is also best done at the point of creating the plot: \n```\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndf = pd.DataFrame({ \"x\" : [1,2,3,5], \"y\" : [3,5,2,6]})\ndf.plot(\"x\", \"y\", color=\"r\") #plot red line\n\nplt.show()\n```\nIf you want to change this color later on, you can do so by \n```\nplt.gca().get_lines()[0].set_color(\"black\")\n```\nThis will get you the first (possibly the only) line of the current active axes. In case you have more axes in the plot, you could loop through them \n```\nfor ax in plt.gcf().axes:\n ax.get_lines()[0].set_color(\"black\")\n```\nand if you have more lines you can loop over them as well.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41709257\/how-to-change-the-plot-line-color-from-blue-to-black","best_answers_votes":129,"tags":["python","matplotlib"],"question_length":361,"response_length":1266,"tag_count":2} {"question":"Resize a figure automatically in matplotlib Is there a way to automatically resize a figure to properly fit contained plots in a matplotlib\/pylab image? 
I'm creating heatmap (sub)plots that differ in aspect ratio according to the data used. I realise I could calculate the aspect ratio and manually set it, but surely there's an easier way?","response":"Use bbox_inches='tight' \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n\nX = 10*np.random.rand(5,3)\n\nfig = plt.figure(figsize=(15,5),facecolor='w') \nax = fig.add_subplot(111)\nax.imshow(X, cmap=cm.jet)\n\nplt.savefig(\"image.png\",bbox_inches='tight',dpi=100)\n```\n...only works when saving images though, not showing them.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1271023\/resize-a-figure-automatically-in-matplotlib","best_answers_votes":77,"tags":["python","matplotlib"],"question_length":340,"response_length":352,"tag_count":2} {"question":"Matplotlib: how to show legend elements horizontally? I'd like to set the legend to be displayed horizontally. I do not mean the text of the legend like described in the post Matplotlib legend vertical rotation. My actual case includes an arbitrary number of series specified with a widget. But the following example represents the gist of the challenge: Snippet: \n```\n# Imports\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# data\nnp.random.seed(123)\nx = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum()\ny = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum()\nz = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum()\ndf = pd.concat([x,y,z], axis = 1)\n\n# plot \nax = df.plot()\nplt.legend(loc=\"lower left\")\nplt.show()\n```\nPlot: The default layout seems to be vertical. Looking at the details of help(ax.legend) and the docs , there does not seem to be a straight forward way to change this to horizontal. Or is there? Edit - Desired Legend: (using MS Paint)","response":"Specify the ncol parameter in legend. In your case something like: \n```\nplt.legend(loc=\"lower left\", ncol=len(df.columns))\n```\nThis is the only line I changed in your script. Working full code: \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# data\nnp.random.seed(123)\nx = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum()\ny = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum()\nz = pd.Series(np.random.randn(100),index=pd.date_range('1\/1\/2000', periods=100)).cumsum()\ndf = pd.concat([x,y,z], axis = 1)\n\n# plot\nax = plt.subplot()\nfor col in (df.columns):\n plt.plot(df[col])\nplt.legend(loc=\"lower left\", ncol=len(df.columns))\nplt.xticks(rotation=90)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/54870585\/matplotlib-how-to-show-legend-elements-horizontally","best_answers_votes":83,"tags":["python","matplotlib"],"question_length":1085,"response_length":765,"tag_count":2} {"question":"Reverse the order of a legend I use the following code to plot the bar graph and need to present a legend in reverse order. How can I do it? 
\n```\ncolorsArr = plt.cm.BuPu(np.linspace(0, 0.5, len(C2)))\np = numpy.empty(len(C2), dtype=object)\nplt.figure(figsize=(11, 11))\n\nprevBar = 0\nfor index in range(len(C2)):\n plt.bar(ind, C2[index], width, bottom=prevBar, color=colorsArr[index],\n label=C0[index])\n prevBar = prevBar + C2[index]\n\n# Positions of the x-axis ticks (center of the bars as bar labels)\ntick_pos = [i + (width\/2) for i in ind]\n\nplt.ylabel('Home Category')\nplt.title('Affinity - Retail Details(Home category)')\n\n# Set the x ticks with names\nplt.xticks(tick_pos, C1)\nplt.yticks(np.arange(0, 70000, 3000))\nplt.legend(title=\"Line\", loc='upper left')\n\n# Set a buffer around the edge\nplt.xlim(-width*2, width*2)\nplt.show()\n```","response":"You could call \n```\nhandles, labels = ax.get_legend_handles_labels()\nax.legend(handles[::-1], labels[::-1], title='Line', loc='upper left')\n```\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nnp.random.seed(2016)\n\nC0 = list('ABCDEF')\nC2 = np.random.randint(20000, size=(len(C0), 3))\nwidth = 1.0\nC1 = ['foo', 'bar', 'baz']\nind = np.linspace(-width, width, len(C1))\n\n\ncolorsArr = plt.cm.BuPu(np.linspace(0, 0.5, len(C2)))\nfig = plt.figure(figsize=(11,11))\nax = fig.add_subplot(1, 1, 1)\n\nprevBar = 0\nfor height, color, label in zip(C2, colorsArr, C0):\n h = ax.bar(ind, height, width, bottom=prevBar, color=color, label=label)\n prevBar = prevBar + height\n\nplt.ylabel('Home Category')\nplt.title('Affinity - Retail Details(Home category)')\n\n# positions of the x-axis ticks (center of the bars as bar labels)\ntick_pos = [i+(width\/2.0) for i in ind]\n# set the x ticks with names\nplt.xticks(tick_pos, C1)\nplt.yticks(np.arange(0,70000,3000))\n\nhandles, labels = ax.get_legend_handles_labels()\nax.legend(handles[::-1], labels[::-1], title='Line', loc='upper left')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34576059\/reverse-the-order-of-a-legend","best_answers_votes":102,"tags":["python","matplotlib","reverse","legend"],"question_length":849,"response_length":1081,"tag_count":4} {"question":"Python Matplotlib Venn diagram I want to plot variables that belongs to certain groups. Say that I have 6 variables that I want to sort into these 3 groups and plot like a venn diagram. I would like to annotate the variable names into the three bubbles. In this simple example we could say that 1 variable is in group 1, 3 variables in group 2 and 2 variables in group 3. Could anyone help me with a simple example of how to do it in matplotlib?","response":"There is a beautiful Venn diagram add-on for matplotlib called matplotlib-venn. It looks like it can be completely customized to do what you are looking for, from the size of the circles (proportional to the set size), to inner and outer labels. 
Using the example code on the website gives a plot like: Edit: Per the comments below the following code gives non-overlapping circles with text using the same library: \n```\nimport pylab as plt\nfrom matplotlib_venn import venn3, venn3_circles\n\nv = venn3(subsets=(1,1,0,1,0,0,0))\nv.get_label_by_id('100').set_text('First')\nv.get_label_by_id('010').set_text('Second')\nv.get_label_by_id('001').set_text('Third')\nplt.title(\"Not a Venn diagram\")\nplt.show()\n```\nGives the diagram:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19841535\/python-matplotlib-venn-diagram","best_answers_votes":85,"tags":["python","matplotlib","venn-diagram","matplotlib-venn"],"question_length":445,"response_length":720,"tag_count":4} {"question":"Plotting a decision boundary separating 2 classes using Matplotlib's pyplot I could really use a tip to help me plotting a decision boundary to separate to classes of data. I created some sample data (from a Gaussian distribution) via Python NumPy. In this case, every data point is a 2D coordinate, i.e., a 1 column vector consisting of 2 rows. E.g., \n```\n[ 1\n 2 ]\n```\nLet's assume I have 2 classes, class1 and class2, and I created 100 data points for class1 and 100 data points for class2 via the code below (assigned to the variables x1_samples and x2_samples). \n```\nmu_vec1 = np.array([0,0])\ncov_mat1 = np.array([[2,0],[0,2]])\nx1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)\nmu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector\n\nmu_vec2 = np.array([1,2])\ncov_mat2 = np.array([[1,0],[0,1]])\nx2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)\nmu_vec2 = mu_vec2.reshape(1,2).T\n```\nWhen I plot the data points for each class, it would look like this: Now, I came up with an equation for an decision boundary to separate both classes and would like to add it to the plot. However, I am not really sure how I can plot this function: \n```\ndef decision_boundary(x_vec, mu_vec1, mu_vec2):\n g1 = (x_vec-mu_vec1).T.dot((x_vec-mu_vec1))\n g2 = 2*( (x_vec-mu_vec2).T.dot((x_vec-mu_vec2)) )\n return g1 - g2\n```\nI would really appreciate any help! EDIT: Intuitively (If I did my math right) I would expect the decision boundary to look somewhat like this red line when I plot the function...","response":"Your question is more complicated than a simple plot : you need to draw the contour which will maximize the inter-class distance. Fortunately it's a well-studied field, particularly for SVM machine learning. 
The easiest method is to download the scikit-learn module, which provides a lot of cool methods to draw boundaries: scikit-learn: Support Vector Machines Code : \n```\n# -*- coding: utf-8 -*-\n\nimport numpy as np\nimport matplotlib\nfrom matplotlib import pyplot as plt\nimport scipy\nfrom sklearn import svm\n\n\nmu_vec1 = np.array([0,0])\ncov_mat1 = np.array([[2,0],[0,2]])\nx1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)\nmu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector\n\nmu_vec2 = np.array([1,2])\ncov_mat2 = np.array([[1,0],[0,1]])\nx2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)\nmu_vec2 = mu_vec2.reshape(1,2).T\n\n\nfig = plt.figure()\n\n\nplt.scatter(x1_samples[:,0],x1_samples[:,1], marker='+')\nplt.scatter(x2_samples[:,0],x2_samples[:,1], c= 'green', marker='o')\n\nX = np.concatenate((x1_samples,x2_samples), axis = 0)\nY = np.array([0]*100 + [1]*100)\n\nC = 1.0 # SVM regularization parameter\nclf = svm.SVC(kernel = 'linear', gamma=0.7, C=C )\nclf.fit(X, Y)\n```\nLinear Plot \n```\nw = clf.coef_[0]\na = -w[0] \/ w[1]\nxx = np.linspace(-5, 5)\nyy = a * xx - (clf.intercept_[0]) \/ w[1]\n\nplt.plot(xx, yy, 'k-')\n```\nMultiLinear Plot \n```\nC = 1.0 # SVM regularization parameter\nclf = svm.SVC(kernel = 'rbf', gamma=0.7, C=C )\nclf.fit(X, Y)\n\nh = .02 # step size in the mesh\n# create a mesh to plot in\nx_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1\ny_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n\n\n# Plot the decision boundary. For that, we will assign a color to each\n# point in the mesh [x_min, m_max]x[y_min, y_max].\nZ = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.contour(xx, yy, Z, cmap=plt.cm.Paired)\n```\nImplementation If you want to implement it yourself, you need to solve the following quadratic equation: The Wikipedia article Unfortunately, for non-linear boundaries like the one you draw, it's a difficult problem relying on a kernel trick but there isn't a clear cut solution.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22294241\/plotting-a-decision-boundary-separating-2-classes-using-matplotlibs-pyplot","best_answers_votes":54,"tags":["python","numpy","matplotlib"],"question_length":1523,"response_length":2287,"tag_count":3} {"question":"When to use imshow over pcolormesh I often find myself needing to create heatmap-style visualizations in Python with matplotlib. Matplotlib provides several functions which apparently do the same thing. pcolormesh is recommended instead of pcolor but what is the difference (from a practical point of view as a data plotter) between imshow and pcolormesh? What are the pros\/cons of using one over the other? In what scenarios would one or the other be a clear winner?","response":"Fundamentally, imshow assumes that all data elements in your array are to be rendered at the same size, whereas pcolormesh\/pcolor associates elements of the data array with rectangular elements whose size may vary over the rectangular grid. If your mesh elements are uniform, then imshow with interpolation set to \"nearest\" will look very similar to the default pcolormesh display (without the optional X and Y args). The obvious differences are that the imshow y-axis will be inverted (w.r.t. 
pcolormesh) and the aspect ratio is maintained, although those characteristics can be altered to look like the pcolormesh output as well. From a practical point of view, pcolormesh is more convenient if you want to visualize the data array as cells, particularly when the rectangular mesh is non-uniform or when you want to plot the boundaries\/edges of the cells. Otherwise, imshow is more convenient if you have a fixed cell size, want to maintain aspect ratio, want control over pixel interpolation, or want to specify RGB values directly.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21166679\/when-to-use-imshow-over-pcolormesh","best_answers_votes":72,"tags":["python","matplotlib","imshow"],"question_length":467,"response_length":1035,"tag_count":3} {"question":"How to add a grid line at a specific location in matplotlib plot? How do I add grid at a specific location on the y axis in a matplotlib plot?","response":"Yes. It's very simple. Use the set_[x|y]ticks methods of axes object and toggle the grid as normal: \n```\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots()\nax.set_yticks([0.2, 0.6, 0.8], minor=False)\nax.set_yticks([0.3, 0.55, 0.7], minor=True)\nax.yaxis.grid(True, which='major')\nax.yaxis.grid(True, which='minor')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14608483\/how-to-add-a-grid-line-at-a-specific-location-in-matplotlib-plot","best_answers_votes":100,"tags":["grid","matplotlib"],"question_length":142,"response_length":334,"tag_count":2} {"question":"find length of sequences of identical values in a numpy array (run length encoding) In a pylab program (which could probably be a matlab program as well) I have a numpy array of numbers representing distances: d[t] is the distance at time t (and the timespan of my data is len(d) time units). The events I'm interested in are when the distance is below a certain threshold, and I want to compute the duration of these events. It's easy to get an array of booleans with b = d0 and b[i-1] and b[i]:\n counter+=1\n if (b[i-1] and not b[i]) or i==len(b)-1:\n durations.append(counter)\n\nprint '.'\n```","response":"Fully numpy vectorized and generic RLE for any array (works with strings, booleans etc too). Outputs tuple of run lengths, start positions, and values. \n```\nimport numpy as np\n\ndef rle(inarray):\n \"\"\" run length encoding. Partial credit to R rle function. 
\n Multi datatype arrays catered for including non Numpy\n returns: tuple (runlengths, startpositions, values) \"\"\"\n ia = np.asarray(inarray) # force numpy\n n = len(ia)\n if n == 0: \n return (None, None, None)\n else:\n y = ia[1:] != ia[:-1] # pairwise unequal (string safe)\n i = np.append(np.where(y), n - 1) # must include last element posi\n z = np.diff(np.append(-1, i)) # run lengths\n p = np.cumsum(np.append(0, z))[:-1] # positions\n return(z, p, ia[i])\n```\nPretty fast (i7): \n```\nxx = np.random.randint(0, 5, 1000000)\n%timeit yy = rle(xx)\n100 loops, best of 3: 18.6 ms per loop\n```\nMultiple data types: \n```\nrle([True, True, True, False, True, False, False])\nOut[8]: \n(array([3, 1, 1, 2]),\n array([0, 3, 4, 5]),\n array([ True, False, True, False], dtype=bool))\n\nrle(np.array([5, 4, 4, 4, 4, 0, 0]))\nOut[9]: (array([1, 4, 2]), array([0, 1, 5]), array([5, 4, 0]))\n\nrle([\"hello\", \"hello\", \"my\", \"friend\", \"okay\", \"okay\", \"bye\"])\nOut[10]: \n(array([2, 1, 1, 2, 1]),\n array([0, 2, 3, 4, 6]),\n array(['hello', 'my', 'friend', 'okay', 'bye'], \n dtype='|S6'))\n```\nSame results as Alex Martelli above: \n```\nxx = np.random.randint(0, 2, 20)\n\nxx\nOut[60]: array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1])\n\nam = runs_of_ones_array(xx)\n\ntb = rle(xx)\n\nam\nOut[63]: array([4, 5, 2, 5])\n\ntb[0][tb[2] == 1]\nOut[64]: array([4, 5, 2, 5])\n\n%timeit runs_of_ones_array(xx)\n10000 loops, best of 3: 28.5 \u00b5s per loop\n\n%timeit rle(xx)\n10000 loops, best of 3: 38.2 \u00b5s per loop\n```\nSlightly slower than Alex (but still very fast), and much more flexible.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1066758\/find-length-of-sequences-of-identical-values-in-a-numpy-array-run-length-encodi","best_answers_votes":81,"tags":["python","matlab","numpy","matplotlib"],"question_length":1495,"response_length":1963,"tag_count":4} {"question":"seaborn color_palette as matplotlib colormap Seaborn offers a function called color_palette, which allows you to easily create new color_palettes for plots. \n```\ncolors = [\"#67E568\",\"#257F27\",\"#08420D\",\"#FFF000\",\"#FFB62B\",\"#E56124\",\"#E53E30\",\"#7F2353\",\"#F911FF\",\"#9F8CA6\"]\n\ncolor_palette = sns.color_palette(colors)\n```\nI want to transform color_palette to a cmap, which I can use in matplotlib, but I don't see how I can do this. Sadly just functions like \"cubehelix_palette\",\"light_palette\",\u2026 have an \"as_cmap\" paramater. 
\"color_palette\" doesn't, unfortunately.","response":"You have to convert a list of colors from seaborn palette to color map of matplolib (thx to @RafaelLopes for proposed changes): \n```\nimport seaborn as sns\nimport matplotlib.pylab as plt\nimport numpy as np\nfrom matplotlib.colors import ListedColormap\n\n# construct cmap\nflatui = [\"#9b59b6\", \"#3498db\", \"#95a5a6\", \"#e74c3c\", \"#34495e\", \"#2ecc71\"]\nmy_cmap = ListedColormap(sns.color_palette(flatui).as_hex())\n\nN = 500\ndata1 = np.random.randn(N)\ndata2 = np.random.randn(N)\ncolors = np.linspace(0,1,N)\nplt.scatter(data1, data2, c=colors, cmap=my_cmap)\nplt.colorbar()\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/37902459\/seaborn-color-palette-as-matplotlib-colormap","best_answers_votes":61,"tags":["python","matplotlib","plot","seaborn"],"question_length":563,"response_length":575,"tag_count":4} {"question":"Writing numerical values on the plot with Matplotlib Is it possible, with Matplotlib, to print the values of each point on the graph? For example, if I have: \n```\nx = numpy.range(0,10)\ny = numpy.array([5,3,4,2,7,5,4,6,3,2])\npyplot.plot(x,y)\n```\nHow can I display y values on the plot (e.g. print a 5 near the (0,5) point, print a 3 near the (1,3) point, etc.)?","response":"You can use the annotate command to place text annotations at any x and y values you want. To place them exactly at the data points you could do this \n```\nimport numpy\nfrom matplotlib import pyplot\n\nx = numpy.arange(10)\ny = numpy.array([5,3,4,2,7,5,4,6,3,2])\n\nfig = pyplot.figure()\nax = fig.add_subplot(111)\nax.set_ylim(0,10)\npyplot.plot(x,y)\nfor i,j in zip(x,y):\n ax.annotate(str(j),xy=(i,j))\n\npyplot.show()\n```\nIf you want the annotations offset a little, you could change the annotate line to something like \n```\nax.annotate(str(j),xy=(i,j+0.5))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6282058\/writing-numerical-values-on-the-plot-with-matplotlib","best_answers_votes":97,"tags":["python","matplotlib"],"question_length":360,"response_length":555,"tag_count":2} {"question":"How do I get a list of axes for a figure in pyplot? Regarding matplotlib.figure, the documentation says there is a class matplotlib.figure.AxesStack and that The AxesStack is a callable, where ax_stack() returns the current axes However, when I call fig.ax_stack(), I get the error: \n```\nAttributeError: 'Figure' object has no attribute 'ax_stack'\n```","response":"The property .axes returns a list of the Axes objects in the Figure object: \n```\nax_list = fig.axes\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24104990\/how-do-i-get-a-list-of-axes-for-a-figure-in-pyplot","best_answers_votes":104,"tags":["python","matplotlib"],"question_length":351,"response_length":103,"tag_count":2} {"question":"How to change figuresize using seaborn factorplot \n```\n%pylab inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib as mpl\nimport seaborn as sns\n\ntypessns = pd.DataFrame.from_csv('C:\/data\/testesns.csv', index_col=False, sep=';')\n\nmpl.rc(\"figure\", figsize=(45, 10))\nsns.factorplot(\"MONTH\", \"VALUE\", hue=\"REGION\", data=typessns, kind=\"box\", palette=\"OrRd\");\n```\nI always get a small size figure, no matter what size I 've specified in figsize... 
How to fix it?","response":"Note added in 2019: In modern seaborn versions the size argument has been renamed to height. To be a little more concrete: \n```\n%matplotlib inline\n\nimport seaborn as sns\n\nexercise = sns.load_dataset(\"exercise\")\n\n# Defaults are size=5, aspect=1\nsns.factorplot(\"kind\", \"pulse\", \"diet\", exercise, kind=\"point\", size=2, aspect=1)\nsns.factorplot(\"kind\", \"pulse\", \"diet\", exercise, kind=\"point\", size=4, aspect=1)\nsns.factorplot(\"kind\", \"pulse\", \"diet\", exercise, kind=\"point\", size=4, aspect=2)\n```\nYou want to pass in the arguments 'size' or 'aspect' to the sns.factorplot() when constructing your plot. Size will change the height, while maintaining the aspect ratio (so it will also also get wider if only size is changed.) Aspect will change the width while keeping the height constant. The above code should be able to be run locally in an ipython notebook. Plot sizes are reduced in these examples to show the effects, and because the plots from the above code were fairly large when saved as png's. This also shows that size\/aspect includes the legend in the margin. size=2, aspect=1 size=4, aspect=1 size=4, aspect=2 Also, all other useful parameters\/arguments and defaults for this plotting function can be viewed with once the 'sns' module is loaded: \n```\nhelp(sns.factorplot)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26163702\/how-to-change-figuresize-using-seaborn-factorplot","best_answers_votes":118,"tags":["python","matplotlib","seaborn"],"question_length":469,"response_length":1285,"tag_count":3} {"question":"Row titles for matplotlib subplot In matplotlib, Is it possible to set a a separate title for each row of subplots in addition to the title set for the entire figure and the title set for each individual plot? This would correspond to the orange text in the figure below. If not, how would you get around this problem? Create a separate column of empty subplots to the left and fill them with the orange text? I am aware that it is possible to manually position each single title using text() or annotate(), but that usually requires a lot of tweaking and I have many subplots. Is there a smoother solution?","response":"New in matplotlib 3.4.0 Row titles can now be implemented as subfigure suptitles: The new subfigure feature allows creating virtual figures within figures with localized artists (e.g., colorbars and suptitles) that only pertain to each subfigure. See how to plot subfigures for further details. How to reproduce OP's reference figure: Either Figure.subfigures (most straightforward) Create 3x1 fig.subfigures where each subfig gets its own 1x3 subfig.subplots and subfig.suptitle: \n```py\nfig = plt.figure(constrained_layout=True)\nfig.suptitle('Figure title')\n\n# create 3x1 subfigs\nsubfigs = fig.subfigures(nrows=3, ncols=1)\nfor row, subfig in enumerate(subfigs):\n subfig.suptitle(f'Subfigure title {row}')\n\n # create 1x3 subplots per subfig\n axs = subfig.subplots(nrows=1, ncols=3)\n for col, ax in enumerate(axs):\n ax.plot()\n ax.set_title(f'Plot title {col}')\n```\nOr Figure.add_subfigure (onto existing subplots) If you already have 3x1 plt.subplots, then add_subfigure into the underlying gridspec. 
Again each subfig will get its own 1x3 subfig.subplots and subfig.suptitle: \n```py\n# create 3x1 subplots\nfig, axs = plt.subplots(nrows=3, ncols=1, constrained_layout=True)\nfig.suptitle('Figure title')\n\n# clear subplots\nfor ax in axs:\n ax.remove()\n\n# add subfigure per subplot\ngridspec = axs[0].get_subplotspec().get_gridspec()\nsubfigs = [fig.add_subfigure(gs) for gs in gridspec]\n\nfor row, subfig in enumerate(subfigs):\n subfig.suptitle(f'Subfigure title {row}')\n\n # create 1x3 subplots per subfig\n axs = subfig.subplots(nrows=1, ncols=3)\n for col, ax in enumerate(axs):\n ax.plot()\n ax.set_title(f'Plot title {col}')\n```\nOutput of either example (after some styling):","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27426668\/row-titles-for-matplotlib-subplot","best_answers_votes":66,"tags":["python","matplotlib","subplot"],"question_length":607,"response_length":1722,"tag_count":3} {"question":"How to plot the lines first and points last I have a simple plot with several sets of points and lines connecting each set. I want the points to be plotted on top of the lines (so that the line doesn't show inside the point). Regardless of order of the plot and scatter calls, this plot comes out the same, and not as I'd like. Is there a simple way to do it? \n```\nimport math\nimport matplotlib.pyplot as plt\n\ndef poisson(m):\n def f(k):\n e = math.e**(-m)\n f = math.factorial(k)\n g = m**k\n return g*e\/f\n return f\n\nR = range(20)\nL = list()\nmeans = (1,4,10)\nfor m in means:\n f = poisson(m)\n L.append([f(k) for k in R])\ncolors = ['r','b','purple']\n\nfor c,P in zip(colors,L):\n plt.plot(R,P,color='0.2',lw=1.5)\n plt.scatter(R,P,s=150,color=c)\n\nax = plt.axes()\nax.set_xlim(-0.5,20)\nax.set_ylim(-0.01,0.4)\nplt.savefig('example.png')\n```","response":"You need to set the Z-order. \n```\nplt.plot(R,P,color='0.2',lw=1.5, zorder=1)\nplt.scatter(R,P,s=150,color=c, zorder=2)\n```\nCheck out this example. http:\/\/matplotlib.sourceforge.net\/examples\/pylab_examples\/zorder_demo.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2314379\/how-to-plot-the-lines-first-and-points-last","best_answers_votes":104,"tags":["python","matplotlib"],"question_length":874,"response_length":220,"tag_count":2} {"question":"scatter plot logarithmic scale In my code, I take the logarithm of two data series and plot them. I would like to change each tick value of the x-axis by raising it to the power of e (anti-log of natural logarithm). In other words. I want to graph the logarithms of both series but have x-axis in levels. Here is the code that I'm using. 
\n```\nfrom pylab import scatter\nimport pylab\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom pandas import Series, DataFrame\nimport numpy as np\n\nfile_name = '\/Users\/joedanger\/Desktop\/Python\/scatter_python.csv'\n\ndata = DataFrame(pd.read_csv(file_name))\n\ny = np.log(data['o_value'], dtype='float64')\nx = np.log(data['time_diff_day'], dtype='float64')\n\nfig = plt.figure()\nplt.scatter(x, y, c='blue', alpha=0.05, edgecolors='none')\nfig.suptitle('test title', fontsize=20)\nplt.xlabel('time_diff_day', fontsize=18)\nplt.ylabel('o_value', fontsize=16)\nplt.xticks([-8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4])\n\nplt.grid(True)\npylab.show()\n```","response":"let matplotlib take the log for you: \n```\nfig = plt.figure()\nax = plt.gca()\nax.scatter(data['o_value'] ,data['time_diff_day'] , c='blue', alpha=0.05, edgecolors='none')\nax.set_yscale('log')\nax.set_xscale('log')\n```\nIf you are using all the same size and color markers, it is faster to use plot \n```\nfig = plt.figure()\nax = plt.gca()\nax.plot(data['o_value'] ,data['time_diff_day'], 'o', c='blue', alpha=0.05, markeredgecolor='none')\nax.set_yscale('log')\nax.set_xscale('log')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18773662\/scatter-plot-logarithmic-scale","best_answers_votes":121,"tags":["python","matplotlib","logarithm"],"question_length":970,"response_length":477,"tag_count":3} {"question":"How to plot bar graphs with same X coordinates side by side ('dodged') \n```\nimport matplotlib.pyplot as plt\n\ngridnumber = range(1,4)\n\nb1 = plt.bar(gridnumber, [0.2, 0.3, 0.1], width=0.4,\n label=\"Bar 1\", align=\"center\")\n\nb2 = plt.bar(gridnumber, [0.3, 0.2, 0.2], color=\"red\", width=0.4,\n label=\"Bar 2\", align=\"center\")\n\n\nplt.ylim([0,0.5])\nplt.xlim([0,4])\nplt.xticks(gridnumber)\nplt.legend()\nplt.show()\n```\nCurrently b1 and b2 overlap each other. 
How do I plot them separately like so:","response":"Below answer will explain each and every line of code in the simplest manner possible: \n```\n# Numbers of pairs of bars you want\nN = 3\n\n# Data on X-axis\n\n# Specify the values of blue bars (height)\nblue_bar = (23, 25, 17)\n# Specify the values of orange bars (height)\norange_bar = (19, 18, 14)\n\n# Position of bars on x-axis\nind = np.arange(N)\n\n# Figure size\nplt.figure(figsize=(10,5))\n\n# Width of a bar \nwidth = 0.3 \n\n# Plotting\nplt.bar(ind, blue_bar , width, label='Blue bar label')\nplt.bar(ind + width, orange_bar, width, label='Orange bar label')\n\nplt.xlabel('Here goes x-axis label')\nplt.ylabel('Here goes y-axis label')\nplt.title('Here goes title of the plot')\n\n# xticks()\n# First argument - A list of positions at which ticks should be placed\n# Second argument - A list of labels to place at the given locations\nplt.xticks(ind + width \/ 2, ('Xtick1', 'Xtick3', 'Xtick3'))\n\n# Finding the best position for legends and putting it\nplt.legend(loc='best')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10369681\/how-to-plot-bar-graphs-with-same-x-coordinates-side-by-side-dodged","best_answers_votes":65,"tags":["python","matplotlib","bar-chart"],"question_length":513,"response_length":975,"tag_count":3} {"question":"Logarithmic y-axis bins in python I'm trying to create a histogram of a data column and plot it logarithmically (y-axis) and I'm not sure why the following code does not work: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\ndata = np.loadtxt('foo.bar')\nfig = plt.figure()\nax = fig.add_subplot(111)\nplt.hist(data, bins=(23.0, 23.5,24.0,24.5,25.0,25.5,26.0,26.5,27.0,27.5,28.0))\nax.set_xlim(23.5, 28)\nax.set_ylim(0, 30)\nax.grid(True)\nplt.yscale('log')\nplt.show()\n```\nI've also tried instead of plt.yscale('log') adding Log=true in the plt.hist line and also I tried ax.set_yscale('log'), but nothing seems to work. I either get an empty plot, either the y-axis is indeed logarithmic (with the code as shown above), but there is no data plotted (no bins).","response":"try \n```\nplt.yscale('log', nonposy='clip')\n```\nhttp:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.yscale The issue is with the bottom of bars being at y=0 and the default is to mask out in-valid points (log(0) -> undefined) when doing the log transformation (there was discussion of changing this, but I don't remember which way it went) so when it tries to draw the rectangles for you bar plot, the bottom edge is masked out -> no rectangles.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17952279\/logarithmic-y-axis-bins-in-python","best_answers_votes":112,"tags":["python","matplotlib","histogram","logarithm"],"question_length":760,"response_length":452,"tag_count":4} {"question":"How do you directly overlay a scatter plot on top of a jpg image in matplotlib \/ Python? I need to rapidly plot jpg frames that result as the output of a tracking algorithm. Companion with the jpg frames are text files containing simple (x,y) data locating the image targets that are being tracked. I would like to use matplotlib to plot the jpg images, then overlay a scatter plot of the (x,y) data which gets read from the text file and stored into a Pythonic list. 
Below is code that will plot the jpg image, but in all of the scouring I have done of matplotlib, scipy, and PIL manuals and help pages, I cannot find anything that explains how to maintain this plot window and simply overlay a scatter plot of simple markers at various (x,y) locations in the image. Any help is greatly appreciated. \n```\nimport matplotlib.pyplot as plt;\nim = plt.imread(image_name);\nimplot = plt.imshow(im);\nplt.show()\n```","response":"The pyplot.scatter() function was tailor made for this reason: \n```\nimport matplotlib.pyplot as plt\nim = plt.imread(image_name)\nimplot = plt.imshow(im)\n\n# put a blue dot at (10, 20)\nplt.scatter([10], [20])\n\n# put a red dot, size 40, at 2 locations:\nplt.scatter(x=[30, 40], y=[50, 60], c='r', s=40)\n\nplt.show()\n```\nSee the documentation for more info.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5073386\/how-do-you-directly-overlay-a-scatter-plot-on-top-of-a-jpg-image-in-matplotlib","best_answers_votes":93,"tags":["python","image","matplotlib","overlay","plot"],"question_length":907,"response_length":350,"tag_count":5} {"question":"Is \"from matplotlib import pyplot as plt\" == \"import matplotlib.pyplot as plt\"? \n```\nfrom matplotlib import pyplot as plt\n\nimport matplotlib.pyplot as plt\n```\nAre the above statements equivalent? Which is more readable\/better form?","response":"Even though they are equivalent, I think there is a pretty good argument that the second form import matplotlib.pyplot as plt is objectively more readable: It is generally customary to use import matplotlib.pyplot as plt and suggested in the matplotlib documentation (see http:\/\/matplotlib.org\/users\/pyplot_tutorial.html etc...) so this will be more familiar to most readers. import matplotlib.pyplot as plt is shorter but no less clear. import matplotlib.pyplot as plt gives an unfamiliar reader a hint that pyplot is a module, rather than a function which could be incorrectly assumed from the first form.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/30558087\/is-from-matplotlib-import-pyplot-as-plt-import-matplotlib-pyplot-as-plt","best_answers_votes":64,"tags":["python","matplotlib","python-import"],"question_length":231,"response_length":607,"tag_count":3} {"question":"Is it possible to have multiple PyPlot windows? Or am I limited to subplots? I'm not sure how to word my question more clearly. Basically, is PyPlot limited to one instance\/window? Any hack or workaround I try either causes my program to freeze or for the second pyplot window to be queued until the first one is closed.","response":"Sure, just open a new figure: \n```\nimport matplotlib.pyplot as plt\n\nplt.plot(range(10))\n\nplt.figure()\nplt.plot(range(10), 'ro-')\n\nplt.figure(), plt.plot(...)\n\nplt.show() # only do this once, at the end\n```\nIf you're running this in the default python interpreter, this won't work, as each figure needs to enter the gui's mainloop. If you want to run things in an interactive shell, look into IPython. If you just run this normally (i.e. 
put it into a file and call python filename.py) it will work fine, though.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5993206\/is-it-possible-to-have-multiple-pyplot-windows-or-am-i-limited-to-subplots","best_answers_votes":108,"tags":["python","matplotlib","plot","interpreter","freeze"],"question_length":320,"response_length":511,"tag_count":5} {"question":"Figure to image as a numpy array I'm trying to get a numpy array image from a Matplotlib figure and I'm currently doing it by saving to a file, then reading the file back in, but I feel like there has to be a better way. Here's what I'm doing now: \n```\nfrom matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas\nfrom matplotlib.figure import Figure\n\nfig = Figure()\ncanvas = FigureCanvas(fig)\nax = fig.gca()\n\nax.text(0.0,0.0,\"Test\", fontsize=45)\nax.axis('off')\n\ncanvas.print_figure(\"output.png\")\nimage = plt.imread(\"output.png\")\n```\nI tried this: \n```\nimage = np.fromstring( canvas.tostring_rgb(), dtype='uint8' )\n```\nfrom an example I found but it gives me an error saying that 'FigureCanvasAgg' object has no attribute 'renderer'.","response":"In order to get the figure contents as RGB pixel values, the matplotlib.backend_bases.Renderer needs to first draw the contents of the canvas. You can do this by manually calling canvas.draw(): \n```\nimport numpy as np\nfrom matplotlib.figure import Figure\n\nfig = Figure()\ncanvas = fig.canvas\nax = fig.gca()\n\nax.text(0.0,0.0,\"Test\", fontsize=45)\nax.axis('off')\n\ncanvas.draw() # Draw the canvas, cache the renderer\n\nimage_flat = np.frombuffer(canvas.tostring_rgb(), dtype='uint8') # (H * W * 3,)\n# NOTE: reversed converts (W, H) from get_width_height to (H, W)\nimage = image_flat.reshape(*reversed(canvas.get_width_height()), 3) # (H, W, 3)\n```\nSee here for more info on the Matplotlib API.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35355930\/figure-to-image-as-a-numpy-array","best_answers_votes":65,"tags":["python","numpy","matplotlib"],"question_length":748,"response_length":671,"tag_count":3} {"question":"percent label position in pie chart Is there a way to change the default position of the percent label in a matplotlib pie chart? Here is an example pie chart: Which I have created using: \n```\nplt.pie(sizes, labels=labels, colors=colors, explode=explode, autopct='%1.0f%%')\n```\nNow I don't like how some percent labels are intruding on other sections' territory (actually the only perpetrator in this example is the 9m section). Ideally I would like such labels to be outside the pie chart with an arrow of some sort pointing to the section, or alternatively just outside the section.","response":"You can control the distance of the percents and labels from the center of the pie using pctdistance= and labeldistance=, try this on your code: \n```\nplt.pie(sizes, labels=labels, autopct='%1.0f%%', pctdistance=1.1, labeldistance=1.2)\n```\nYou can also set a radius of the pie using radius= (the default is 1)","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21572870\/percent-label-position-in-pie-chart","best_answers_votes":117,"tags":["python","matplotlib","pie-chart","plot-annotations"],"question_length":581,"response_length":307,"tag_count":4} {"question":"How to change marker border width and hatch width? 
In this example of a marker from my scatter plot, I have set the color to green, and edge color to black, and hatch to \"|\". For the hatch pattern to show up at all, I must set the edgecolor; however when I do, I get a very thick border around the marker. Two questions: How can I to set the size of this border (preferably to 0)? How can I increase the thickness of the hatch lines?","response":"You just need to set the linewidth to control the marker border thickness. You can increase the density of hatching, by repeating symbols (in the example below, the '|' is repeated in the R\/H pane; note that to obtain NW->SE diagonal lines the symbol must be escaped so needs twice as many characters to really double it -- '\\\\\\\\' is density 2 while '||||' is density 4). However, I don't think the thickness of individual lines within hatching is controllable. See the code example below to produce scatter plots such as these: \n```\nimport matplotlib.pyplot as plt\n# generate some data\nx = [1,2,3,4,5,8]\ny= [i**2 for i in x]\ny2= [60-i**2+3*i for i in x]\n\n# plot markers with thick borders\nplt.subplot(121)\nplt.scatter(x,y, s=500, marker='s', edgecolor='black', linewidth=3, facecolor='green', hatch='|')\n# compare with no borders, and denser hatch.\nplt.subplot(122)\nplt.scatter(x,y2, s=500, marker='s', edgecolor='black', linewidth=0, facecolor='green', hatch='||||')\n\nplt.show()\n```\nmatplotlib documentation on collections and scatter.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14325773\/how-to-change-marker-border-width-and-hatch-width","best_answers_votes":92,"tags":["python","matplotlib","scatter-plot"],"question_length":433,"response_length":1037,"tag_count":3} {"question":"Creating a Colormap Legend in Matplotlib I am using imshow() in matplotlib like so: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nmat = '''SOME MATRIX'''\nplt.imshow(mat, origin=\"lower\", cmap='gray', interpolation='nearest')\nplt.show()\n```\nHow do I add a legend showing the numeric value for the different shades of gray. Sadly, my googling has not uncovered an answer :( Thank you in advance for the help. Vince","response":"Simple, just plt.colorbar(): \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nmat = np.random.random((10,10))\nplt.imshow(mat, origin=\"lower\", cmap='gray', interpolation='nearest')\nplt.colorbar()\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2451264\/creating-a-colormap-legend-in-matplotlib","best_answers_votes":62,"tags":["python","matplotlib"],"question_length":421,"response_length":216,"tag_count":2} {"question":"Automatically Rescale ylim and xlim I'm plotting data in Python using matplotlib. I am updating the data of the plot based upon some calculations and want the ylim and xlim to be rescaled automatically. Instead what happens is the scale is set based upon the limits of the initial plot. A MWE is \n```\nimport random\nimport matplotlib.pyplot as pyplot\n\npyplot.ion()\n\nx = range(10)\ny = lambda m: [m*random.random() for i in range(10)]\n\npLine, = pyplot.plot(x, y(1))\n\nfor i in range(10):\n pLine.set_ydata(y(i+1))\n pyplot.draw()\n```\nThe first plot command generates a plot from [0,1] and I can see everything just fine. At the end, the y-data array goes from [0,10) with most of it greater than 1, but the y-limits of the figure remain [0,1]. 
I know I can manually change the limits using pyplot.ylim(...), but I don't know what to change them to. In the for loop, can I tell pyplot to scale the limits as if it was the first time being plotted?","response":"You will need to update the axes' dataLim, then subsequently update the axes' viewLim based on the dataLim. The appropriate methods are axes.relim() and ax.autoscale_view(). Your example then looks like: \n```\nimport random\nimport matplotlib.pyplot as pyplot\n\npyplot.ion()\n\nx = range(10)\ny = lambda m: [m*random.random() for i in range(10)]\n\npLine, = pyplot.plot(x, y(1))\n\nfor i in range(10):\n pLine.set_ydata(y(i+1))\n\nax = pyplot.gca()\n\n# recompute the ax.dataLim\nax.relim()\n# update ax.viewLim using the new dataLim\nax.autoscale_view()\npyplot.draw()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10984085\/automatically-rescale-ylim-and-xlim","best_answers_votes":80,"tags":["python","matplotlib"],"question_length":946,"response_length":564,"tag_count":2} {"question":"Remove the extra plot in the matplotlib subplot I want to plot 5 data frames in a 2 by 3 setting (i.e. 2 rows and 3 columns). This is my code: However there is an extra empty plot in the 6th position (second row and third column) which I want to get rid of. I am wondering how I could remove it so that I have three plots in the first row and two plots in the second row. \n```\nimport matplotlib.pyplot as plt\nfig, axes = plt.subplots(nrows=2, ncols=3)\n\nfig.set_figheight(8)\nfig.set_figwidth(15)\n\n\n\ndf[2].plot(kind='bar',ax=axes[0,0]); axes[0,0].set_title('2')\n\ndf[4].plot(kind='bar',ax=axes[0,1]); axes[0,1].set_title('4')\n\ndf[6].plot(kind='bar',ax=axes[0,2]); axes[0,2].set_title('6')\n\ndf[8].plot(kind='bar',ax=axes[1,0]); axes[1,0].set_title('8')\n\ndf[10].plot(kind='bar',ax=axes[1,1]); axes[1,1].set_title('10')\n\nplt.setp(axes, xticks=np.arange(len(observations)), xticklabels=map(str,observations),\n yticks=[0,1])\n\nfig.tight_layout()\n```","response":"Try this: \n```\nfig.delaxes(axes[1][2])\n```\nA much more flexible way to create subplots is the fig.add_axes() method. The parameter is a list of rect coordinates: fig.add_axes([x, y, xsize, ysize]). The values are relative to the canvas size, so an xsize of 0.5 means the subplot has half the width of the window.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44980658\/remove-the-extra-plot-in-the-matplotlib-subplot","best_answers_votes":100,"tags":["python","matplotlib","subplot"],"question_length":950,"response_length":313,"tag_count":3} {"question":"Get the list of figures in matplotlib I would like to: \n```\npylab.figure()\npylab.plot(x)\npylab.figure()\npylab.plot(y)\n# ...\nfor i, figure in enumerate(pylab.MagicFunctionReturnsListOfAllFigures()):\n figure.savefig('figure%d.png' % i)\n```\nWhat is the magic function that returns a list of current figures in pylab? Websearch didn't help...","response":"Pyplot has get_fignums method that returns a list of figure numbers. 
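For instance, a minimal sketch of what the call returns (figure numbers, not figure objects): \n```\nimport matplotlib.pyplot as plt\n\nplt.figure() # figure 1\nplt.figure() # figure 2\nprint(plt.get_fignums()) # [1, 2] -- pass a number to plt.figure(i) to make that figure current\n```\n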
This should do what you want: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(100)\ny = -x\n\nplt.figure()\nplt.plot(x)\nplt.figure()\nplt.plot(y)\n\nfor i in plt.get_fignums():\n plt.figure(i)\n plt.savefig('figure%d.png' % i)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3783217\/get-the-list-of-figures-in-matplotlib","best_answers_votes":115,"tags":["python","matplotlib"],"question_length":339,"response_length":319,"tag_count":2} {"question":"How to change the legend edgecolor and facecolor While rcParams['legend.frameon'] = 'False', is there a simple way to fill the legend area background with a given colour? More specifically I would like the grid not to be seen on the legend area because it disturbs the text reading. The keyword framealpha sounds like what I need but it doesn't change anything. \n```\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nmpl.rcParams['legend.frameon'] = 'False'\nplt.plot(range(5), label = u\"line\")\nplt.grid(True)\nplt.legend(loc='best')\nplt.show()\n```\nI've also tried: \n```\nlegend = plt.legend(frameon = 1)\nframe = legend.get_frame()\nframe.set_color('white')\n```\nbut then I need to ask how can I change the background colour while keeping the frame on? Sometimes I want it ON with a background colour other than white. And also, is there a way of changing the colour of the frame? With the above code I was expecting to change the colour of the frame only, not the background.","response":"You can set the edge color and the face color separately like this: \n```\nframe.set_facecolor('green')\nframe.set_edgecolor('red')\n```\nThere's more information under FancyBboxPatch here.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19863368\/how-to-change-the-legend-edgecolor-and-facecolor","best_answers_votes":55,"tags":["python","matplotlib","legend","legend-properties"],"question_length":977,"response_length":184,"tag_count":4} {"question":"Annotate data points while plotting from Pandas DataFrame I would like to annotate the data points with their values next to the points on the plot. The examples I found only deal with x and y as vectors. However, I would like to do this for a pandas DataFrame that contains multiple columns. 
\n```\nax = plt.figure().add_subplot(1, 1, 1)\ndf.plot(ax = ax)\nplt.show()\n```\nWhat is the best way to annotate all the points for a multi-column DataFrame?","response":"Here's a (very) slightly slicker version of Dan Allan's answer: \n```\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport string\n\ndf = pd.DataFrame({'x':np.random.rand(10), 'y':np.random.rand(10)}, \n index=list(string.ascii_lowercase[:10]))\n```\nWhich gives: \n```\nx y\na 0.541974 0.042185\nb 0.036188 0.775425\nc 0.950099 0.888305\nd 0.739367 0.638368\ne 0.739910 0.596037\nf 0.974529 0.111819\ng 0.640637 0.161805\nh 0.554600 0.172221\ni 0.718941 0.192932\nj 0.447242 0.172469\n```\nAnd then: \n```\nfig, ax = plt.subplots()\ndf.plot('x', 'y', kind='scatter', ax=ax)\n\nfor k, v in df.iterrows():\n ax.annotate(k, v)\n```\nFinally, if you're in interactive mode you might need to refresh the plot: \n```\nfig.canvas.draw()\n```\nWhich produces: Or, since that looks incredibly ugly, you can beautify things a bit pretty easily: \n```\nfrom matplotlib import cm\ncmap = cm.get_cmap('Spectral')\ndf.plot('x', 'y', kind='scatter', ax=ax, s=120, linewidth=0, \n c=range(len(df)), colormap=cmap)\n\nfor k, v in df.iterrows():\n ax.annotate(k, v,\n xytext=(10,-5), textcoords='offset points',\n family='sans-serif', fontsize=18, color='darkslategrey')\n```\nWhich looks a lot nicer:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15910019\/annotate-data-points-while-plotting-from-pandas-dataframe","best_answers_votes":64,"tags":["matplotlib","pandas"],"question_length":446,"response_length":1253,"tag_count":2} {"question":"Python ASCII plots in terminal With Octave I am able to plot arrays to the terminal, for example, plotting an array with values for the function x^2 gives this output in my terminal: \n```\n10000 ++---------+-----------+----------+-----------+---------++\n ++ + + + + ++\n |+ : : : : +|\n |++ : : : : ++|\n | + : : : : + |\n | ++ : : : : ++ |\n 8000 ++.+..................................................+.++\n | ++ : : : : ++ |\n | ++ : : : : ++ |\n | + : : : : + |\n | ++ : : : : ++ |\n | + : : : : + |\n 6000 ++....++..........................................++....++\n | ++ : : : : ++ |\n | + : : : : + |\n | ++ : : : : ++ |\n | ++: : : :++ |\n 4000 ++........++..................................++........++\n | + : : + |\n | ++ : : ++ |\n | :++ : : ++: |\n | : ++ : : ++ : |\n | : ++ : : ++ : |\n 2000 ++.............++........................++.............++\n | : ++ : : ++ : |\n | : +++ : : +++ : |\n | : ++ : : ++ : |\n | : +++: :+++ : |\n + + ++++ ++++ + +\n 0 ++---------+-----------+----------+-----------+---------++\n 0 20000 40000 60000 80000 100000\n```\nIs there some way I can do something similar in Python, specifically with matplotlib? bashplotlib seems to offer some of this functionality but appears to be quite basic compared to Octave's offering.","response":"As few answers already suggested the gnuplot is a great choice. However, there is no need to call a gnuplot subprocess, it might be much easier to use a python gnuplotlib library. 
Example (from: https:\/\/github.com\/dkogan\/gnuplotlib): \n```\n>>> import numpy as np\n>>> import gnuplotlib as gp\n\n>>> x = np.linspace(-5,5,100)\n\n>>> gp.plot( x, np.sin(x) )\n[ graphical plot pops up showing a simple sinusoid ]\n\n>>> gp.plot( (x, np.sin(x), {'with': 'boxes'}),\n... (x, np.cos(x), {'legend': 'cosine'}),\n... _with = 'lines',\n... terminal = 'dumb 80,40',\n... unset = 'grid')\n\n[ ascii plot printed on STDOUT]\n 1 +-+---------+----------+-----------+-----------+----------+---------+-+\n + +|||+ + + +++++ +++|||+ + +\n | |||||+ + + +|||||| cosine +-----+ |\n 0.8 +-+ |||||| + + ++||||||+ +-+\n | ||||||+ + ++||||||||+ |\n | ||||||| + ++||||||||| |\n | |||||||+ + ||||||||||| |\n 0.6 +-+ |||||||| + +||||||||||+ +-+\n | ||||||||+ | ++||||||||||| |\n | ||||||||| + ||||||||||||| |\n 0.4 +-+ ||||||||| | ++||||||||||||+ +-+\n | ||||||||| + +|||||||||||||| |\n | |||||||||+ + ||||||||||||||| |\n | ||||||||||+ | ++||||||||||||||+ + |\n 0.2 +-+ ||||||||||| + ||||||||||||||||| + +-+\n | ||||||||||| | +||||||||||||||||+ | |\n | ||||||||||| + |||||||||||||||||| + |\n 0 +-+ +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +-+\n | + ||||||||||||||||||+ | ++|||||||||| |\n | | +||||||||||||||||| + ||||||||||| |\n | + ++|||||||||||||||| | +|||||||||| |\n-0.2 +-+ + ||||||||||||||||| + ||||||||||| +-+\n | | ++||||||||||||||+ | ++||||||||| |\n | + ||||||||||||||| + ++|||||||| |\n | | +|||||||||||||| + ||||||||| |\n-0.4 +-+ + ++||||||||||||+ | +|||||||| +-+\n | + ||||||||||||| + ||||||||| |\n | | +|||||||||||+ + ++||||||| |\n-0.6 +-+ + ++|||||||||| | +||||||| +-+\n | + ||||||||||| + ++|||||| |\n | + +|||||||||+ + ||||||| |\n | + ++|||||||| + +++||||| |\n-0.8 +-+ + + ++||||||+ + + +||||| +-+\n | + + +|||||| + + ++|||| |\n + + + ++ ++|||++ + + ++ + + ++||| +\n -1 +-+---------+----------+-----------+-----------+----------+---------+-+\n -6 -4 -2 0 2 4 6\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20295646\/python-ascii-plots-in-terminal","best_answers_votes":28,"tags":["python","matplotlib"],"question_length":2499,"response_length":3493,"tag_count":2} {"question":"Plot multiple columns of pandas DataFrame using Seaborn suppose I have DataFrame with columns ['X_Axis','col_2','col_3',...,'col_n',] I need to plot the first column on X-Axis and rest on Y-Axis. FYI : all the values have been grouped according to X-Axis, the X-Axis values range from 0-25 and all other column values have been normalized to the scale of 0 - 1. I want it on same graph plot, not subplots. Preferred : FactorPlot , normal line graph.","response":"Some seaborn plots will accept a wide dataframe, sns.pointplot(data=df, x='X_Axis', y='col_2'), but not sns.pointplot(data=df, x='X_Axis', y=['col_2', 'col_3']), so it's better to reshape the DataFrame. Reshape the DataFrame from wide to long with pandas.DataFrame.melt. Converting the dataframe from a wide to long form is standard for all seaborn plots, not just the examples shown. 
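As a quick sketch of what the reshape does (a made-up two-row frame, not the data from the question): \n```py\nimport pandas as pd\n\nwide = pd.DataFrame({'X_Axis': [1, 2], 'col_2': [.4, .5], 'col_3': [.7, .8]})\nlong = wide.melt('X_Axis', var_name='cols', value_name='vals')\nprint(long) # one row per (X_Axis, cols) pair -- the long form seaborn's hue= expects\n```\n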
Tested in python 3.8.12, pandas 1.3.4, matplotlib 3.4.3, seaborn 0.11.2 Sample DataFrame \n```py\nimport pandas as pd\nimport seaborn as sns\n\ndf = pd.DataFrame({'X_Axis':[1,3,5,7,10,20],\n 'col_2':[.4,.5,.4,.5,.5,.4],\n 'col_3':[.7,.8,.9,.4,.2,.3],\n 'col_4':[.1,.3,.5,.7,.1,.0],\n 'col_5':[.5,.3,.6,.9,.2,.4]})\n\n# display(df)\n X_Axis col_2 col_3 col_4 col_5\n0 1 0.4 0.7 0.1 0.5\n1 3 0.5 0.8 0.3 0.3\n2 5 0.4 0.9 0.5 0.6\n3 7 0.5 0.4 0.7 0.9\n4 10 0.5 0.2 0.1 0.2\n5 20 0.4 0.3 0.0 0.4\n\n# convert to long (tidy) form\ndfm = df.melt('X_Axis', var_name='cols', value_name='vals')\n\n# display(dfm.head())\n X_Axis cols vals\n0 1 col_2 0.4\n1 3 col_2 0.5\n2 5 col_2 0.4\n3 7 col_2 0.5\n4 10 col_2 0.5\n```\nCurrent Plot Methods catplot: figure-level Use seaborn.catplot with kind= (e.g. kind='point' to reproduce the FactorPlot default): \n```py\ng = sns.catplot(x=\"X_Axis\", y=\"vals\", hue='cols', data=dfm, kind='point')\n```\npointplot: axes-level \n```py\nsns.pointplot(x=\"X_Axis\", y=\"vals\", hue='cols', data=dfm)\n```\nOriginal factorplot: was renamed to catplot v0.9.0 (July 2018) New versions of seaborn give a warning: The factorplot function has been renamed to catplot. The original name will be removed in a future release. Please update your code. Note that the default kind in factorplot ('point') has changed to 'strip' in catplot. \n```py\ng = sns.factorplot(x=\"X_Axis\", y=\"vals\", hue='cols', data=dfm)\n\n# using pd.melt instead of pd.DataFrame.melt for pandas < 0.20.0\n# dfm = pd.melt(df, 'X_Axis', var_name='cols', value_name='vals')\n# g = sns.factorplot(x=\"X_Axis\", y=\"vals\", hue='cols', data=dfm)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44941082\/plot-multiple-columns-of-pandas-dataframe-using-seaborn","best_answers_votes":123,"tags":["python","pandas","matplotlib","dataframe","seaborn"],"question_length":449,"response_length":2193,"tag_count":5} {"question":"How to smooth matplotlib contour plot? I have a numpy array with this shape: (33,10). When I plot the contour I get an ugly image like this: while contour() doesn't seem to have any argument about smoothing or some sort of interpolation feature. I somehow expected that a tool which offers contour plots should offer smoothing too. Is there a straightforward way to do it in MPL?","response":"As others have already pointed out, you need to interpolate your data. There are a number of different ways to do this, but for starters, consider scipy.ndimage.zoom. As a quick example: \n```\nimport numpy as np\nimport scipy.ndimage\nimport matplotlib.pyplot as plt\n\ndata = np.loadtxt('data.txt')\n\n# Resample your data grid by a factor of 3 using cubic spline interpolation.\ndata = scipy.ndimage.zoom(data, 3)\n\nplt.contour(data)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12274529\/how-to-smooth-matplotlib-contour-plot","best_answers_votes":68,"tags":["python","matplotlib"],"question_length":366,"response_length":441,"tag_count":2} {"question":"How does one insert statistical annotations (stars or p-values) This seems like a trivial question, but I've been searching for a while and can't seem to find an answer. It also seems like something that should be a standard part of these packages. Does anyone know if there is a standard way to include statistical annotation between distribution plots in seaborn? 
For example, between two box or swarmplots?","response":"A brace \/ bracket can be plotted direct with matplotlib.pyplot.plot or matplotlib.axes.Axes.plot, and annotations can be added with matplotlib.pyplot.text or matplotlib.axes.Axes.text. seaborn categorical plots are 0 indexed, whereas box plots, by default, with matplotlib and pandas, start at range(1, N+1), which can be adjusted with the positions parameter. seaborn is a high-level API for matplotlib, and pandas.DataFrame.plot uses matplotlib as the default backend. Imports and DataFrame \n```py\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n# dataframe in long form for seaborn\ntips = sns.load_dataset(\"tips\")\n\n# dataframe in wide form for plotting with pandas.DataFrame.plot\ndf = tips.pivot(columns='day', values='total_bill')\n\n# data as a list of lists for plotting directly with matplotlib (no nan values allowed)\ndata = [df[c].dropna().tolist() for c in df.columns]\n```\nseaborn \n```py\nsns.boxplot(x=\"day\", y=\"total_bill\", data=tips, palette=\"PRGn\")\n\n# statistical annotation\nx1, x2 = 2, 3 # columns 'Sat' and 'Sun' (first column: 0, see plt.xticks())\ny, h, col = tips['total_bill'].max() + 2, 2, 'k'\n\nplt.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)\nplt.text((x1+x2)*.5, y+h, \"ns\", ha='center', va='bottom', color=col)\n\nplt.show()\n```\npandas.DataFrame.plot \n```py\nax = df.plot(kind='box', positions=range(len(df.columns)))\n\nx1, x2 = 2, 3\ny, h, col = df.max().max() + 2, 2, 'k'\n\nax.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)\nax.text((x1+x2)*.5, y+h, \"ns\", ha='center', va='bottom', color=col)\n```\nmatplotlib \n```py\nplt.boxplot(data, positions=range(len(data)))\n\nx1, x2 = 2, 3\n\ny, h, col = max(map(max, data)) + 2, 2, 'k'\n\nplt.plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col)\nplt.text((x1+x2)*.5, y+h, \"ns\", ha='center', va='bottom', color=col)\n```\ntips.head() \n```none\ntotal_bill tip sex smoker day time size\n0 16.99 1.01 Female No Sun Dinner 2\n1 10.34 1.66 Male No Sun Dinner 3\n2 21.01 3.50 Male No Sun Dinner 3\n3 23.68 3.31 Male No Sun Dinner 2\n4 24.59 3.61 Female No Sun Dinner 4\n```\ndf.head() \n```none\nday Thur Fri Sat Sun\n0 NaN NaN NaN 16.99\n1 NaN NaN NaN 10.34\n2 NaN NaN NaN 21.01\n3 NaN NaN NaN 23.68\n4 NaN NaN NaN 24.59\n```\ndata \n```none\n[[27.2, 22.76, 17.29, ..., 20.53, 16.47, 18.78],\n [28.97, 22.49, 5.75, ..., 13.42, 16.27, 10.09],\n [20.65, 17.92, 20.29, ..., 29.03, 27.18, 22.67, 17.82],\n [16.99, 10.34, 21.01, ..., 18.15, 23.1, 15.69]]\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36578458\/how-does-one-insert-statistical-annotations-stars-or-p-values","best_answers_votes":81,"tags":["python","pandas","matplotlib","seaborn","plot-annotations"],"question_length":409,"response_length":2550,"tag_count":5} {"question":"How to remove or hide x-axis labels from a plot I have a boxplot and need to remove the x-axis ('user_type' and 'member_gender') label. How do I do this given the below format? \n```\nsb.boxplot(x=\"user_type\", y=\"Seconds\", data=df, color = default_color, ax = ax[0,0], sym='').set_title('User-Type (0=Non-Subscriber, 1=Subscriber)')\nsb.boxplot(x=\"member_gender\", y=\"Seconds\", data=df, color = default_color, ax = ax[1,0], sym='').set_title('Gender (0=Male, 1=Female, 2=Other)')\n```","response":"After creating the boxplot, use .set(). .set(xticklabels=[]) should remove tick labels. 
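For instance, a minimal sketch (with stand-in data, assuming an axes-level seaborn plot assigned to ax): \n```py\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nax = sns.boxplot(x=['a', 'a', 'b', 'b'], y=[3, 5, 4, 6]) # stand-in data\nax.set(xticklabels=[]) # remove the tick labels\nax.tick_params(bottom=False) # remove the ticks themselves\nplt.show()\n```\n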
This doesn't work if you use .set_title(), but you can use .set(title=''). Do not use sns.boxplot(...).set(xticklabels=[]) because, while this works, the object type is changed from matplotlib.axes._axes.Axes for sns.boxplot(...), to list. .set(xlabel=None) should remove the axis label. .tick_params(bottom=False) will remove the ticks. Similarly, for the y-axis: How to remove or hide y-axis ticklabels from a plot Tested in python 3.11, pandas 1.5.2, matplotlib 3.6.2, seaborn 0.12.1 From the OP: No sample data \n```py\nfig, ax = plt.subplots(2, 1)\n\ng1 = sb.boxplot(x=\"user_type\", y=\"Seconds\", data=df, color = default_color, ax = ax[0], sym='')\ng1.set(xticklabels=[])\ng1.set(title='User-Type (0=Non-Subscriber, 1=Subscriber)')\ng1.set(xlabel=None)\n\ng2 = sb.boxplot(x=\"member_gender\", y=\"Seconds\", data=df, color = default_color, ax = ax[1], sym='')\ng2.set(xticklabels=[])\ng2.set(title='Gender (0=Male, 1=Female, 2=Other)')\ng2.set(xlabel=None)\n```\nExample 1 With xticks and xlabel \n```py\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n# load data\nexercise = sns.load_dataset('exercise')\npen = sns.load_dataset('penguins')\n\n# create figures\nfig, ax = plt.subplots(2, 1, figsize=(8, 8))\n\n# plot data\ng1 = sns.boxplot(x='time', y='pulse', hue='kind', data=exercise, ax=ax[0])\n\ng2 = sns.boxplot(x='species', y='body_mass_g', hue='sex', data=pen, ax=ax[1])\n\nplt.show()\n```\nWithout xticks and xlabel \n```py\nfig, ax = plt.subplots(2, 1, figsize=(8, 8))\n\ng1 = sns.boxplot(x='time', y='pulse', hue='kind', data=exercise, ax=ax[0])\n\ng1.set(xticklabels=[]) # remove the tick labels\ng1.set(title='Exercise: Pulse by Time for Exercise Type') # add a title\ng1.set(xlabel=None) # remove the axis label\n\ng2 = sns.boxplot(x='species', y='body_mass_g', hue='sex', data=pen, ax=ax[1])\n\ng2.set(xticklabels=[]) \ng2.set(title='Penguins: Body Mass by Species for Gender')\ng2.set(xlabel=None)\ng2.tick_params(bottom=False) # remove the ticks\n\nplt.show()\n```\nExample 2 \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n# sinusoidal sample data\nsample_length = range(1, 1+1) # number of columns of frequencies\nrads = np.arange(0, 2*np.pi, 0.01)\ndata = np.array([(np.cos(t*rads)*10**67) + 3*10**67 for t in sample_length])\ndf = pd.DataFrame(data.T, index=pd.Series(rads.tolist(), name='radians'), columns=[f'freq: {i}x' for i in sample_length])\ndf.reset_index(inplace=True)\n\n# plot\nfig, ax = plt.subplots(figsize=(8, 8))\nax.plot('radians', 'freq: 1x', data=df)\n\n# or skip the previous two lines and plot df directly\n# ax = df.plot(x='radians', y='freq: 1x', figsize=(8, 8), legend=False)\n```\nRemove Labels \n```py\n# plot\nfig, ax = plt.subplots(figsize=(8, 8))\nax.plot('radians', 'freq: 1x', data=df)\n\n# or skip the previous two lines and plot df directly\n# ax = df.plot(x='radians', y='freq: 1x', figsize=(8, 8), legend=False)\n\nax.set(xticklabels=[]) # remove the tick labels\nax.tick_params(bottom=False) # remove the ticks\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/58476654\/how-to-remove-or-hide-x-axis-labels-from-a-plot","best_answers_votes":110,"tags":["python","python-3.x","pandas","matplotlib","seaborn"],"question_length":479,"response_length":3031,"tag_count":5} {"question":"No handles with labels found to put in legend I'm trying to create a parallelogram in PyPlot. 
I'm not up to drawing the parallelogram--first I'm putting in the vector arrows--using the following code: \n```\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.spines['left'].set_position('zero')\nax.spines['right'].set_color('none')\nax.spines['bottom'].set_position('zero')\nax.spines['top'].set_color('none')\nplt.axis([-5,5,-5,5])\nax.xaxis.set_ticks_position('bottom')\nax.yaxis.set_ticks_position('left')\nplt.grid()\nplt.arrow(0,0, 3,1, head_width=0.2, color='r', length_includes_head=True, label='u')\nplt.arrow(0,0, 1,3, head_width=0.2, color='r', length_includes_head=True, label='v')\nplt.arrow(0,0, 4,4, head_width=0.2, color='r', length_includes_head=True, label='u+v')\nplt.legend()\n```\nThis returns the following error: \n```none\nNo handles with labels found to put in legend.\n```\nI'm not sure why, because, based on the documentation for plt.arrow(), label is an acceptable kwarg, and plt.legend() should ostensibly be reading that. The rest of the figure draws fine; it's just missing the legend.","response":"It might be late but for anyone with the same issue the solution is using the method legend() for the corresponding ax not as for plt \n```\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.spines['left'].set_position('zero')\nax.spines['right'].set_color('none')\nax.spines['bottom'].set_position('zero')\nax.spines['top'].set_color('none')\nplt.axis([-5,5,-5,5])\nax.xaxis.set_ticks_position('bottom')\nax.yaxis.set_ticks_position('left')\nplt.grid()\nplt.arrow(0,0, 3,1, head_width=0.2, color='r', length_includes_head=True, label='u')\nplt.arrow(0,0, 1,3, head_width=0.2, color='r', length_includes_head=True, label='v')\nplt.arrow(0,0, 4,4, head_width=0.2, color='r', length_includes_head=True, label='u+v')\nax.legend()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/59346731\/no-handles-with-labels-found-to-put-in-legend","best_answers_votes":56,"tags":["python","matplotlib","legend"],"question_length":1095,"response_length":715,"tag_count":3} {"question":"How to export figures to files from IPython Notebook I use the IPython Notebook with the --pylab inline option, since I don't want plots to show up in a different window. Now I'd like to save the plots I see in the notebook to PDF or PNG files. Some code examples use \n```\nimport matplotlib as plt\n\nplt.savefig(\"figure.png\") # save as png\n```\nbut this does not seem to work in inline mode. Of course I could simply save the PNG that is generated out of the browser, but I'd like to do this with a line of Python. I am also interested in PDF export.","response":"try this (note that the files get saved to the default notebook folder): \n```\nplot(range(80))\nxlabel('foo')\nylabel('bar')\nlegend(['myline'])\naxis([0, 80, 0, 120])\nsavefig('sample.pdf')\n```\nif you want png just change it to 'sample.png'. Note that the savefig() call should be in the same notebook cell as the plotting commands.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13642528\/how-to-export-figures-to-files-from-ipython-notebook","best_answers_votes":57,"tags":["python","matplotlib","jupyter-notebook"],"question_length":548,"response_length":327,"tag_count":3} {"question":"How can I set the background color on specific areas of a figure? 
I've managed to plot a series of points with the following code: \n```\nplt = pp.figure()\nfor i in range(spt.shape[1]):\n spktrain = spt[0,i]\n for trial in spktrain:\n non_z = np.nonzero(trial)\n non_z = non_z[0]\n pp.plot(t[non_z], trial[non_z], 'bo')\n```\nI would like to place alternating bands of white and gray background on the figure in order to separate the data from each iteration of the outer for loop. In other words, I would like the data from each \"spktrain\" to have its own background color (the data does not overlap). How can I go about changing the background color of a figure in a specific region?","response":"You can use axhspan and\/or axvspan like this: \n```\nimport matplotlib.pyplot as plt\n\nplt.figure()\nplt.xlim(0, 5)\nplt.ylim(0, 5)\n\nfor i in range(0, 5):\n plt.axhspan(i, i+.2, facecolor='0.2', alpha=0.5)\n plt.axvspan(i, i+.5, facecolor='b', alpha=0.5)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9957637\/how-can-i-set-the-background-color-on-specific-areas-of-a-figure","best_answers_votes":123,"tags":["python","matplotlib"],"question_length":704,"response_length":269,"tag_count":2} {"question":"fill between multiple lines I would like to fill between 3 lines in matplotlib.pyplot but unfortunately the fill_between gives me opportunity to fill between only two lines. Any ideas how to deal with this? Edit: Ok, I did not explain what I really mean since I cannot add the picture with my current reputation so maybe in that way: I try to fill the polygon bounded by these lines and I have no idea how because fill_between gives me opportunity to fill only area between two of them. Below are the fill equations: \n```\ny <= 4- 2x\ny <= 3 - 1\/2x\ny <= 1 - x\ny >= 0\nx >= 0\n```\nthe x and y bigger than 0 is obvious. I start the plot from (0,0) but I still have 3 lines... \n```\ny <= 4- 2x\ny <= 3 - 1\/2x\ny <= 1 - x\n```","response":"If you start the plot at point (0, 0), and therefore do not need to consider the area of the polygon not in the first quadrant, then this should do the trick in this particular situation: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(0,10,0.1)\n\n# The lines to plot\ny1 = 4 - 2*x\ny2 = 3 - 0.5*x\ny3 = 1 -x\n\n# The upper edge of polygon (min of lines y1 & y2)\ny4 = np.minimum(y1, y2)\n\n# Set y-limit, making neg y-values not show in plot\nplt.ylim(0, 5)\n\n# Plotting of lines\nplt.plot(x, y1,\n x, y2,\n x, y3)\n\n# Filling between line y3 and line y4\nplt.fill_between(x, y3, y4, color='grey', alpha=0.5)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16417496\/fill-between-multiple-lines","best_answers_votes":62,"tags":["python","matplotlib"],"question_length":709,"response_length":649,"tag_count":2} {"question":"What is the process to create pdf reports with charts from a DB? [closed] 
I have a database generated by a survey to evaluate university professors. What I want is a python script that takes the information from that database, generates a graphing table for each user, creates graphs for each user, and then renders it in a template to export it to a pdf. What does the database look like? \n```\nUser Professor_evaluated Category Question Answer\n_________________________________________________________________\nMike Professor Criss respect 1 3\nMike Professor Criss respect 2 4\nMike Professor Criss wisdom 3 5\nMike Professor Criss wisdom 4 3\nCharles Professor Criss respect 1 3\nCharles Professor Criss respect 2 4\nCharles Professor Criss wisdom 3 5\nCharles Professor Criss wisdom 4 3\n```\nEach teacher has several categories assigned to be evaluated (respect, wisdom, etc.) and in turn each category has associated questions. In other words, a category has several questions. Each row of the DB is the answer to a question from a student evaluating a teacher. What do I need? I need to create a script to automatically generate pdf reports that summarize this information through charts, for example a chart with the overall score of each teacher, another chart with the score of each teacher by category, another chart with the average of each student, etc. Finally, every teacher would have a report. I want a report like this What is my question? My question is about which python packages and modules I would need to do this task. And what would be the general process of doing so. I don't need the code, because I know the answer is very general, but the knowledge of how I could do it. For example: you would first need to process the information with pandas, to create a table that summarizes the information you want to graph, then plot it, then create a template of your report with XYZ module and then export it to pdf with XYZ module.","response":"There are a lot of options for creating a pdf in python. Some of these options are ReportLab, PyPDF2, pdfdocument and FPDF. The FPDF library is fairly straightforward to use and is what I've used in this example. FPDF Documentation can be found here. It's perhaps also good to think about what python modules you might want to use to create graphs and tables. In my example, I use matplotlib (link to docs) and I also use Pandas to create a dataframe using pandas.DataFrame(). I've posted a rather lengthy but fully reproducible example below, using pandas, matplotlib and fpdf. The data are a subset of what the OP provided in the question. I loop through the dataframe in my example to create the table, but there are alternative and perhaps more efficient ways to do this. 
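(One hedged alternative, sketched here with a tiny made-up frame rather than the full report below: iterating with df.itertuples() instead of positional .iloc indexing.) \n```\nimport pandas as pd\nfrom fpdf import FPDF\n\ndf = pd.DataFrame({'Question': ['Q1', 'Q2'], 'Charles': [3, 4], 'Mike': [3, 3]})\npdf = FPDF()\npdf.add_page()\npdf.set_font('arial', '', 12)\nfor row in df.itertuples(): # named tuples avoid repeated .iloc lookups\n pdf.cell(50, 10, str(row.Question), 1, 0, 'C')\n pdf.cell(40, 10, str(row.Charles), 1, 0, 'C')\n pdf.cell(40, 10, str(row.Mike), 1, 2, 'C')\n pdf.cell(-90)\npdf.output('tiny_table.pdf', 'F')\n```\nThe full, reproducible example follows. 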
\n```\nimport pandas as pd\nimport matplotlib\nfrom pylab import title, figure, xlabel, ylabel, xticks, bar, legend, axis, savefig\nfrom fpdf import FPDF\n\n\ndf = pd.DataFrame()\ndf['Question'] = [\"Q1\", \"Q2\", \"Q3\", \"Q4\"]\ndf['Charles'] = [3, 4, 5, 3]\ndf['Mike'] = [3, 3, 4, 4]\n\ntitle(\"Professor Criss's Ratings by Users\")\nxlabel('Question Number')\nylabel('Score')\n\nc = [2.0, 4.0, 6.0, 8.0]\nm = [x - 0.5 for x in c]\n\nxticks(c, df['Question'])\n\nbar(m, df['Mike'], width=0.5, color=\"#91eb87\", label=\"Mike\")\nbar(c, df['Charles'], width=0.5, color=\"#eb879c\", label=\"Charles\")\n\nlegend()\naxis([0, 10, 0, 8])\nsavefig('barchart.png')\n\npdf = FPDF()\npdf.add_page()\npdf.set_xy(0, 0)\npdf.set_font('arial', 'B', 12)\npdf.cell(60)\npdf.cell(75, 10, \"A Tabular and Graphical Report of Professor Criss's Ratings by Users Charles and Mike\", 0, 2, 'C')\npdf.cell(90, 10, \" \", 0, 2, 'C')\npdf.cell(-40)\npdf.cell(50, 10, 'Question', 1, 0, 'C')\npdf.cell(40, 10, 'Charles', 1, 0, 'C')\npdf.cell(40, 10, 'Mike', 1, 2, 'C')\npdf.cell(-90)\npdf.set_font('arial', '', 12)\nfor i in range(0, len(df)):\n pdf.cell(50, 10, '%s' % (df['Question'].iloc[i]), 1, 0, 'C')\n pdf.cell(40, 10, '%s' % (str(df.Charles.iloc[i])), 1, 0, 'C')\n pdf.cell(40, 10, '%s' % (str(df.Mike.iloc[i])), 1, 2, 'C')\n pdf.cell(-90)\npdf.cell(90, 10, \" \", 0, 2, 'C')\npdf.cell(-30)\npdf.image('barchart.png', x = None, y = None, w = 0, h = 0, type = '', link = '')\npdf.output('test.pdf', 'F')\n```\nExpected test.pdf: Update (April 2020): I made an edit to the original answer in April 2020 to replace use of pandas.DataFrame.ix() since this is deprecated. In my example I was able to replace its use with pandas.DataFrame.iloc and the output is the same as before.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/51864730\/what-is-the-process-to-create-pdf-reports-with-charts-from-a-db","best_answers_votes":76,"tags":["python","pandas","matplotlib","jinja2","plotly"],"question_length":2749,"response_length":2472,"tag_count":5} {"question":"Common title to many subplots in Matplotlib [duplicate] I am making a chart in matplotlib and I have many subplots in it each of them with a different title, but on the top I also want to put a title to the whole chart. How can this be done?","response":"You can use the pyplot.suptitle command to add a centered title to the figure in addition to sub plot titles.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10717104\/common-title-to-many-subplots-in-matplotlib","best_answers_votes":91,"tags":["python","matplotlib"],"question_length":354,"response_length":109,"tag_count":2} {"question":"TypeError: Invalid dimensions for image data when plotting array with imshow() For the following code \n```\n# Numerical operation\nSN_map_final = (new_SN_map - mean_SN) \/ sigma_SN \n\n# Plot figure\nfig12 = plt.figure(12)\nfig_SN_final = plt.imshow(SN_map_final, interpolation='nearest')\nplt.colorbar()\n\nfig12 = plt.savefig(outname12)\n```\nwith new_SN_map being a 1D array and mean_SN and sigma_SN being constants, I get the following error. 
\n```\nTraceback (most recent call last):\n File \"c:\\Users\\Valentin\\Desktop\\Stage M2\\density_map_simple.py\", line 546, in \n fig_SN_final = plt.imshow(SN_map_final, interpolation='nearest')\n File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\pyplot.py\", line 3022, in imshow\n **kwargs)\n File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\__init__.py\", line 1812, in inner\n return func(ax, *args, **kwargs)\n File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\axes\\_axes.py\", line 4947, in imshow\n im.set_data(X)\n File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\image.py\", line 453, in set_data\n raise TypeError(\"Invalid dimensions for image data\")\nTypeError: Invalid dimensions for image data\n```\nWhat is the source of this error? I thought my numerical operations were allowed.","response":"There is a (somewhat) related question on StackOverflow: Showing an image with pylab.imshow() Here the problem was that an array of shape (nx,ny,1) is still considered a 3D array, and must be squeezed or sliced into a 2D array. More generally, the reason for the Exception TypeError: Invalid dimensions for image data is shown here: matplotlib.pyplot.imshow() needs a 2D array, or a 3D array with the third dimension being of shape 3 or 4! You can easily check this with (these checks are done by imshow, this function is only meant to give a more specific message in case it's not a valid input): \n```\nfrom __future__ import print_function\nimport numpy as np\n\ndef valid_imshow_data(data):\n data = np.asarray(data)\n if data.ndim == 2:\n return True\n elif data.ndim == 3:\n if 3 <= data.shape[2] <= 4:\n return True\n else:\n print('The \"data\" has 3 dimensions but the last dimension '\n 'must have a length of 3 (RGB) or 4 (RGBA), not \"{}\".'\n ''.format(data.shape[2]))\n return False\n else:\n print('To visualize an image the data must be 2 dimensional or '\n '3 dimensional, not \"{}\".'\n ''.format(data.ndim))\n return False\n```\nIn your case: \n```\n\n```python\nnew_SN_map = np.array([1,2,3])\n```\n\n```python\nvalid_imshow_data(new_SN_map)\n#Output\n#To visualize an image the data must be 2 dimensional or 3 dimensional, not \"1\".\n#False\n#```\n#The np.asarray is what is done internally by matplotlib.pyplot.imshow so it's generally best you do it too. If you have a numpy array it's obsolete but if not (for example a list) it's necessary. In your specific case you got a 1D array, so you need to add a dimension with np.expand_dims() \n#```\n#import matplotlib.pyplot as plt\n#a = np.array([1,2,3,4,5])\n#a = np.expand_dims(a, axis=0) # or axis=1\n#plt.imshow(a)\n#plt.show()\n#```\n#or just use something that accepts 1D arrays like plot: \n#```\n#a = np.array([1,2,3,4,5])\n#plt.plot(a)\n#plt.show()\n#```\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36431496\/typeerror-invalid-dimensions-for-image-data-when-plotting-array-with-imshow","best_answers_votes":77,"tags":["python","arrays","numpy","matplotlib"],"question_length":1392,"response_length":1976,"tag_count":4} {"question":"bbox_to_anchor and loc in matplotlib I came across matplotlib code which customizes legend location using keywords loc and bbox_to_anchor. 
For example : \n```\nfig.legend([line1, line2], ['series1', 'series2'], bbox_to_anchor=[0.5, 0.5], \n loc='center', ncol=2)\n```\nI have seen variation of above where bbox_to_anchor is used after loc. I understand the purpose of using bbox_to_anchor and loc separately. However, is there any benefit of using both in the same legend specification? From my understanding and usage, it appears to me that if bbox_to_anchor is specified, then the loc parameter is pretty much don't care. Can anyone confirm this? I don't see any documentation regarding this.","response":"When bbox_to_anchor and loc are used together, the loc argument will inform matplotlib which part of the bounding box of the legend should be placed at the arguments of bbox_to_anchor. For example (I've simplified the command a bit), the three options below will produce different locations for your legend, \n```\nfig.legend([line1], ['series1'], bbox_to_anchor=[0.5, 0.5], loc='center')\n fig.legend([line1], ['series1'], bbox_to_anchor=[0.5, 0.5], loc='center left')\n fig.legend([line1], ['series1'], bbox_to_anchor=[0.5, 0.5], loc='center right')\n```\nThe first command will put the center of the bounding box at axes coordinates 0.5,0.5. The second will put the center left edge of the bounding box at the same coordinates (i.e. shift the legend to the right). Finally, the third option will put the center right edge of the bounding box at the coordinates (i.e. shift the legend to the left).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25068384\/bbox-to-anchor-and-loc-in-matplotlib","best_answers_votes":76,"tags":["python","matplotlib"],"question_length":699,"response_length":894,"tag_count":2} {"question":"Dynamically add\/create subplots in matplotlib I want to create a plot consisting of several subplots with shared x\/y axes. It should look something like this from the documentation (though my subplots will be scatterblots): (code here) But I want to create the subplots dynamically! So the number of subplots depends on the output of a previous function. (It will probably be around 3 to 15 subplots per diagram, each from a distinct dataset, depending on the input of my script.) Can anyone tell me how to accomplish that?","response":"Suppose you know total subplots and total columns you want to use: \n```\nimport matplotlib.pyplot as plt\n\n# Subplots are organized in a Rows x Cols Grid\n# Tot and Cols are known\n\nTot = number_of_subplots\nCols = number_of_columns\n\n# Compute Rows required\n\nRows = Tot \/\/ Cols \n\n# EDIT for correct number of rows:\n# If one additional row is necessary -> add one:\n\nif Tot % Cols != 0:\n Rows += 1\n\n# Create a Position index\n\nPosition = range(1,Tot + 1)\n```\nFirst instance of Rows accounts only for rows completely filled by subplots, then is added one more Row if 1 or 2 or ... Cols - 1 subplots still need location. Then create figure and add subplots with a for loop. 
\n```\n# Create main figure\n\nfig = plt.figure(1)\nfor k in range(Tot):\n\n # add every single subplot to the figure with a for loop\n\n ax = fig.add_subplot(Rows,Cols,Position[k])\n ax.plot(x,y) # Or whatever you want in the subplot\n\nplt.show()\n```\nPlease note that you need the range Position to move the subplots into the right place.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12319796\/dynamically-add-create-subplots-in-matplotlib","best_answers_votes":46,"tags":["python","matplotlib"],"question_length":523,"response_length":1011,"tag_count":2} {"question":"How to add percentages on top of grouped bars Given the following count plot how do I place percentages on top of the bars? \n```\nimport seaborn as sns\nsns.set(style=\"darkgrid\")\ntitanic = sns.load_dataset(\"titanic\")\nax = sns.countplot(x=\"class\", hue=\"who\", data=titanic)\n```\nFor example for \"First\" I want total First men\/total First, total First women\/total First, and total First children\/total First on top of their respective bars.","response":"The seaborn.catplot organizing function returns a FacetGrid, which gives you access to the fig, the ax, and its patches. If you add the labels when nothing else has been plotted you know which bar-patches came from which variables. From @LordZsolt's answer I picked up the order argument to catplot: I like making that explicit because now we aren't relying on the barplot function using the order we think of as default. \n```\nimport seaborn as sns\nfrom itertools import product\n\ntitanic = sns.load_dataset(\"titanic\")\n\nclass_order = ['First','Second','Third'] \nhue_order = ['child', 'man', 'woman']\nbar_order = product(class_order, hue_order)\n\ncatp = sns.catplot(data=titanic, kind='count', \n x='class', hue='who',\n order = class_order, \n hue_order = hue_order )\n\n# As long as we haven't plotted anything else into this axis,\n# we know the rectangles in it are our barplot bars\n# and we know the order, so we can match up graphic and calculations:\n\nspots = zip(catp.ax.patches, bar_order)\nfor spot in spots:\n class_total = len(titanic[titanic['class']==spot[1][0]])\n class_who_total = len(titanic[(titanic['class']==spot[1][0]) & \n (titanic['who']==spot[1][1])])\n height = spot[0].get_height() \n catp.ax.text(spot[0].get_x(), height+3, '{:1.2f}'.format(class_who_total\/class_total))\n\n #checking the patch order, not for final:\n #catp.ax.text(spot[0].get_x(), -3, spot[1][0][0]+spot[1][1][0])\n```\nproduces An alternate approach is to do the sub-summing explicitly, e.g. with the excellent pandas, and plot with matplotlib, and also do the styling yourself. (Though you can get quite a lot of styling from sns context even when using matplotlib plotting functions. Try it out -- )","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31749448\/how-to-add-percentages-on-top-of-grouped-bars","best_answers_votes":70,"tags":["python","matplotlib","seaborn","plot-annotations","grouped-bar-chart"],"question_length":434,"response_length":1757,"tag_count":5} {"question":"How to adjust transparency (alpha) in seaborn pairplot? 
I can create beautiful scatter plots with seaborn's regplot, and obtain the right level of transparency through the scatter_kws as in \n```\nsns.regplot(x='logAssets', y='logLTIFR', lowess=True, data=df, scatter_kws={'alpha':0.15}, line_kws={'color': 'red'})\n```\nand obtain this: Is there an option in a seaborn pairplot to tweak transparency?","response":"Ok I was very close to the solution. Seaborn pairplots have plot_kws that takes as arguments a dictionary of the kind of modifications you would do in a regplot. The following line is exactly what I needed: \n```\ng = sns.pairplot(df, kind='reg', plot_kws={'line_kws':{'color':'red'}, 'scatter_kws': {'alpha': 0.1}})\n```\nAnd this is the outcome: If you don't do the regression but just the scatter plot (kind='scatter'), within plot keywords you don't have to do the division between line and scatter keywords: \n```\ng = sns.pairplot(df, kind='scatter', plot_kws={'alpha':0.1})\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47200033\/how-to-adjust-transparency-alpha-in-seaborn-pairplot","best_answers_votes":86,"tags":["python","matplotlib","seaborn"],"question_length":390,"response_length":578,"tag_count":3} {"question":"How to put text outside of plots I am plotting two time series and computing various indices for them. How to write these indices for these plots outside the plot using annotation or text in python? Below is my code \n```\nimport matplotlib.pyplot as plt\n\nobs_graph=plt.plot(obs_df['cms'], '-r', label='Observed')\nplt.legend(loc='best')\nplt.hold(True)\nsim_graph=plt.plot(sim_df['cms'], '-g', label=\"Simulated\")\nplt.legend(loc='best')\nplt.ylabel('Daily Discharge (m^3\/s)')\nplt.xlabel('Year')\nplt.title('Observed vs Simulated Daily Discharge')\ntextstr = 'NSE=%.2f\\nRMSE=%.2f\\n'%(NSE, RMSE)\n# print textstr\nplt.text(2000, 2000, textstr, fontsize=14)\nplt.grid(True)\nplt.show()\n```\nI want to print textstr outside the plots. Here is the current plot:","response":"It's probably best to define the position in figure coordinates instead of data coordinates as you'd probably not want the text to change its position when changing the data. Using figure coordinates can be done either by specifying the figure transform (fig.transFigure) \n```\nplt.text(0.02, 0.5, textstr, fontsize=14, transform=plt.gcf().transFigure)\n```\nor by using the text method of the figure instead of that of the axes. \n```\nplt.gcf().text(0.02, 0.5, textstr, fontsize=14)\n```\nIn both cases the coordinates to place the text are in figure coordinates, where (0,0) is the bottom left and (1,1) is the top right of the figure. At the end you still may want to provide some extra space for the text to fit next to the axes, using plt.subplots_adjust(left=0.3) or so.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42435446\/how-to-put-text-outside-of-plots","best_answers_votes":117,"tags":["python","matplotlib","text","plot-annotations"],"question_length":742,"response_length":770,"tag_count":4} {"question":"How to increase\/reduce the fontsize of x and y tick labels [duplicate] I seem to have a problem in figuring out how to increase or decrease the fontsize of both the x and y tick labels while using matplotlib. 
I am aware that there is the set_xticklabels(labels, fontdict=None, minor=False, **kwargs) function, but I failed to understand how to control the fontsize in it. I expected something somehow explicit, like \n```\ntitle_string=('My Title')\nplt.suptitle(title_string, y=1.0, fontsize=17)\n```\nbut I haven't found anything like that so far. What am I missing?","response":"You can set the fontsize directly in the call to set_xticklabels and set_yticklabels (as noted in previous answers). This will only affect one Axes at a time. \n```\nax.set_xticklabels(x_ticks, rotation=0, fontsize=8)\nax.set_yticklabels(y_ticks, rotation=0, fontsize=8)\n```\nNote this method should only be used if you are fixing the positions of the ticks first (e.g. using ax.set_xticks). If you are not changing the tick positions from the default ones, you can just change the font size of the tick labels without changing the text using ax.tick_params \n```\nax.tick_params(axis='x', labelsize=8)\nax.tick_params(axis='y', labelsize=8)\n```\nor \n```\nax.tick_params(axis='both', labelsize=8)\n```\nYou can also set the ticklabel font size globally (i.e. for all figures\/subplots in a script) using rcParams: \n```\nimport matplotlib.pyplot as plt\n\nplt.rc('xtick',labelsize=8)\nplt.rc('ytick',labelsize=8)\n```\nOr, equivalently: \n```\nplt.rcParams['xtick.labelsize']=8\nplt.rcParams['ytick.labelsize']=8\n```\nFinally, if this is a setting that you would like to be set for all your matplotlib plots, you could also set these two rcParams in your matplotlibrc file: \n```\nxtick.labelsize : 8 # fontsize of the x tick labels\nytick.labelsize : 8 # fontsize of the y tick labels\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34001751\/how-to-increase-reduce-the-fontsize-of-x-and-y-tick-labels","best_answers_votes":77,"tags":["python","text","matplotlib","axis-labels","keyword-argument"],"question_length":670,"response_length":1273,"tag_count":5} {"question":"How to plot vectors in python using matplotlib I am taking a course on linear algebra and I want to visualize the vectors in action, such as vector addition, normal vector, so on. For instance: \n```\nV = np.array([[1,1],[-2,2],[4,-7]])\n```\nIn this case I want to plot 3 vectors V1 = (1,1), M2 = (-2,2), M3 = (4,-7). Then I should be able to add V1,V2 to plot a new vector V12(all together in one figure). when I use the following code, the plot is not as intended \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nM = np.array([[1,1],[-2,2],[4,-7]])\n\nprint(\"vector:1\")\nprint(M[0,:])\n# print(\"vector:2\")\n# print(M[1,:])\nrows,cols = M.T.shape\nprint(cols)\n\nfor i,l in enumerate(range(0,cols)):\n print(\"Iteration: {}-{}\".format(i,l))\n print(\"vector:{}\".format(i))\n print(M[i,:])\n v1 = [0,0],[M[i,0],M[i,1]]\n # v1 = [M[i,0]],[M[i,1]]\n print(v1)\n plt.figure(i)\n plt.plot(v1)\n plt.show()\n```","response":"How about something like \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nV = np.array([[1,1], [-2,2], [4,-7]])\norigin = np.array([[0, 0, 0],[0, 0, 0]]) # origin point\n\nplt.quiver(*origin, V[:,0], V[:,1], color=['r','b','g'], scale=21)\nplt.show()\n```\nThen to add up any two vectors and plot them to the same figure, do so before you call plt.show(). 
Something like: \n```\nplt.quiver(*origin, V[:,0], V[:,1], color=['r','b','g'], scale=21)\nv12 = V[0] + V[1] # adding up the 1st (red) and 2nd (blue) vectors\nplt.quiver(*origin, v12[0], v12[1], scale=21)\nplt.show()\n```\nNOTE: in Python2 use origin[0], origin[1] instead of *origin","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42281966\/how-to-plot-vectors-in-python-using-matplotlib","best_answers_votes":77,"tags":["python","python-2.7","numpy","matplotlib","vector"],"question_length":916,"response_length":634,"tag_count":5} {"question":"Putting arrowheads on vectors in a 3d plot I plotted the eigenvectors of some 3D data and was wondering if there is currently (already) a way to put arrowheads on the lines? Would be awesome if someone has a tip for me. \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n####################################################\n# This part is just for reference if\n# you are interested where the data is\n# coming from\n# The plot is at the bottom\n#####################################################\n\n# Generate some example data\nmu_vec1 = np.array([0,0,0])\ncov_mat1 = np.array([[1,0,0],[0,1,0],[0,0,1]])\nclass1_sample = np.random.multivariate_normal(mu_vec1, cov_mat1, 20)\n\nmu_vec2 = np.array([1,1,1])\ncov_mat2 = np.array([[1,0,0],[0,1,0],[0,0,1]])\nclass2_sample = np.random.multivariate_normal(mu_vec2, cov_mat2, 20)\n\n# concatenate data for PCA\nsamples = np.concatenate((class1_sample, class2_sample), axis=0)\n\n# mean values\nmean_x = np.mean(samples[:,0])\nmean_y = np.mean(samples[:,1])\nmean_z = np.mean(samples[:,2])\n\n#eigenvectors and eigenvalues\neig_val, eig_vec = np.linalg.eig(cov_mat1)\n\n################################\n#plotting eigenvectors\n################################ \n\nfig = plt.figure(figsize=(15,15))\nax = fig.add_subplot(111, projection='3d')\n\nax.plot(samples[:,0], samples[:,1], samples[:,2], 'o', markersize=10, color='green', alpha=0.2)\nax.plot([mean_x], [mean_y], [mean_z], 'o', markersize=10, color='red', alpha=0.5)\nfor v in eig_vec:\n ax.plot([mean_x, v[0]], [mean_y, v[1]], [mean_z, v[2]], color='red', alpha=0.8, lw=3)\nax.set_xlabel('x_values')\nax.set_ylabel('y_values')\nax.set_zlabel('z_values')\n\nplt.title('Eigenvectors')\n\nplt.draw()\nplt.show()\n```","response":"To add arrow patches to a 3D plot, the simple solution is to use the FancyArrowPatch class defined in \/matplotlib\/patches.py. However, it only works for 2D plots (at the time of writing), as its posA and posB are supposed to be tuples of length 2. Therefore we create a new arrow patch class, name it Arrow3D, which inherits from FancyArrowPatch. The only thing we need to do is override its posA and posB. To do that, we initialize Arrow3D with posA and posB of (0,0)s. The 3D coordinates xs, ys, zs are then projected from 3D to 2D using proj3d.proj_transform(), and the resultant 2D coordinates get assigned to posA and posB using the .set_positions() method, replacing the (0,0)s. This way we get the 3D arrow to work. The projection steps go into the .draw method, which overrides the .draw method of the FancyArrowPatch object. This might appear like a hack. However, the mplot3d currently only provides (again, only) simple 3D plotting capacity by supplying 3D-2D projections and essentially does all the plotting in 2D, which is not truly 3D.
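One caveat before the code: the draw override below reads renderer.M, which matplotlib 3.5+ no longer provides, so the class as written fails on current releases. For newer versions, a minimal sketch of the same idea using the do_3d_projection hook should work instead (this variant is my own adaptation to the newer API, not part of the original recipe): \n```\nclass Arrow3D(FancyArrowPatch):\n def __init__(self, xs, ys, zs, *args, **kwargs):\n super().__init__((0, 0), (0, 0), *args, **kwargs)\n self._verts3d = xs, ys, zs\n\n def do_3d_projection(self, renderer=None):\n # project the stored 3D endpoints into 2D display coordinates\n xs3d, ys3d, zs3d = self._verts3d\n xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, self.axes.M)\n self.set_positions((xs[0], ys[0]), (xs[1], ys[1]))\n # return the minimum projected z so mplot3d can depth-sort artists\n return np.min(zs)\n```\nThe original (pre-3.5) version follows.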
\n```\nimport numpy as np\nfrom numpy import *\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib.patches import FancyArrowPatch\nfrom mpl_toolkits.mplot3d import proj3d\n\nclass Arrow3D(FancyArrowPatch):\n def __init__(self, xs, ys, zs, *args, **kwargs):\n FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs)\n self._verts3d = xs, ys, zs\n\n def draw(self, renderer):\n xs3d, ys3d, zs3d = self._verts3d\n xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)\n self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))\n FancyArrowPatch.draw(self, renderer)\n\n####################################################\n# This part is just for reference if\n# you are interested where the data is\n# coming from\n# The plot is at the bottom\n#####################################################\n\n# Generate some example data\nmu_vec1 = np.array([0,0,0])\ncov_mat1 = np.array([[1,0,0],[0,1,0],[0,0,1]])\nclass1_sample = np.random.multivariate_normal(mu_vec1, cov_mat1, 20)\n\nmu_vec2 = np.array([1,1,1])\ncov_mat2 = np.array([[1,0,0],[0,1,0],[0,0,1]])\nclass2_sample = np.random.multivariate_normal(mu_vec2, cov_mat2, 20)\n```\nActual drawing. Note that we only need to change one line of your code, which adds a new arrow artist: \n```\n# concatenate data for PCA\nsamples = np.concatenate((class1_sample, class2_sample), axis=0)\n\n# mean values\nmean_x = mean(samples[:,0])\nmean_y = mean(samples[:,1])\nmean_z = mean(samples[:,2])\n\n#eigenvectors and eigenvalues\neig_val, eig_vec = np.linalg.eig(cov_mat1)\n\n################################\n#plotting eigenvectors\n################################ \n\nfig = plt.figure(figsize=(15,15))\nax = fig.add_subplot(111, projection='3d')\n\nax.plot(samples[:,0], samples[:,1], samples[:,2], 'o', markersize=10, color='g', alpha=0.2)\nax.plot([mean_x], [mean_y], [mean_z], 'o', markersize=10, color='red', alpha=0.5)\nfor v in eig_vec:\n #ax.plot([mean_x,v[0]], [mean_y,v[1]], [mean_z,v[2]], color='red', alpha=0.8, lw=3)\n #I will replace this line with:\n a = Arrow3D([mean_x, v[0]], [mean_y, v[1]], \n [mean_z, v[2]], mutation_scale=20, \n lw=3, arrowstyle=\"-|>\", color=\"r\")\n ax.add_artist(a)\nax.set_xlabel('x_values')\nax.set_ylabel('y_values')\nax.set_zlabel('z_values')\n\nplt.title('Eigenvectors')\n\nplt.draw()\nplt.show()\n```\nPlease check this post, which inspired this question, for further details.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22867620\/putting-arrowheads-on-vectors-in-a-3d-plot","best_answers_votes":80,"tags":["python","matplotlib","plot","matplotlib-3d"],"question_length":1721,"response_length":3468,"tag_count":4} {"question":"How to add group labels for bar charts I want to plot data of the following form, using matplotlib bar plot: \n```\ndata = {'Room A':\n {'Shelf 1':\n {'Milk': 10,\n 'Water': 20},\n 'Shelf 2':\n {'Sugar': 5,\n 'Honey': 6}\n },\n 'Room B':\n {'Shelf 1':\n {'Wheat': 4,\n 'Corn': 7},\n 'Shelf 2':\n {'Chicken': 2,\n 'Cow': 1}\n }\n }\n```\nThe bar chart is supposed to look like this: the bar groups should be visible from the labels on the x axis.
Is there any way to do this with matplotlib?","response":"Since I could not find a built-in solution for this in matplotlib, I coded my own: \n```\n#!\/usr\/bin\/env python\n\nfrom matplotlib import pyplot as plt\n\ndef mk_groups(data):\n try:\n newdata = data.items()\n except:\n return\n\n thisgroup = []\n groups = []\n for key, value in newdata:\n newgroups = mk_groups(value)\n if newgroups is None:\n thisgroup.append((key, value))\n else:\n thisgroup.append((key, len(newgroups[-1])))\n if groups:\n groups = [g + n for n, g in zip(newgroups, groups)]\n else:\n groups = newgroups\n return [thisgroup] + groups\n\ndef add_line(ax, xpos, ypos):\n line = plt.Line2D([xpos, xpos], [ypos + .1, ypos],\n transform=ax.transAxes, color='black')\n line.set_clip_on(False)\n ax.add_line(line)\n\ndef label_group_bar(ax, data):\n groups = mk_groups(data)\n xy = groups.pop()\n x, y = zip(*xy)\n ly = len(y)\n xticks = range(1, ly + 1)\n\n ax.bar(xticks, y, align='center')\n ax.set_xticks(xticks)\n ax.set_xticklabels(x)\n ax.set_xlim(.5, ly + .5)\n ax.yaxis.grid(True)\n\n scale = 1. \/ ly\n for pos in xrange(ly + 1): # change xrange to range for python3\n add_line(ax, pos * scale, -.1)\n ypos = -.2\n while groups:\n group = groups.pop()\n pos = 0\n for label, rpos in group:\n lxpos = (pos + .5 * rpos) * scale\n ax.text(lxpos, ypos, label, ha='center', transform=ax.transAxes)\n add_line(ax, pos * scale, ypos)\n pos += rpos\n add_line(ax, pos * scale, ypos)\n ypos -= .1\n\nif __name__ == '__main__':\n data = {'Room A':\n {'Shelf 1':\n {'Milk': 10,\n 'Water': 20},\n 'Shelf 2':\n {'Sugar': 5,\n 'Honey': 6}\n },\n 'Room B':\n {'Shelf 1':\n {'Wheat': 4,\n 'Corn': 7},\n 'Shelf 2':\n {'Chicken': 2,\n 'Cow': 1}\n }\n }\n fig = plt.figure()\n ax = fig.add_subplot(1,1,1)\n label_group_bar(ax, data)\n fig.subplots_adjust(bottom=0.3)\n fig.savefig('label_group_bar_example.png')\n```\nThe mk_groups function takes a dictionary (or anything with an items() method, like collections.OrderedDict) and converts it to a data format that is then used to create the chart. It is basically a list of the form: \n```\n[ [(label, bars_to_span), ...], ..., [(tick_label, bar_value), ...] ]\n```\nThe add_line function creates a vertical line in the subplot at the specified positions (in axes coordinates). The label_group_bar function takes a dictionary and creates the bar chart in the subplot with the labels beneath. The result from the example then looks like this. Easier or better solutions and suggestions are still very much appreciated.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19184484\/how-to-add-group-labels-for-bar-charts","best_answers_votes":73,"tags":["python","matplotlib","bar-chart","x-axis"],"question_length":650,"response_length":2945,"tag_count":4} {"question":"Scatter plot error bars (the error on each point is unique) I am attempting a scatter plot of 2 arrays for which I have a third array containing the absolute error (error in y direction) on each point. I want the error bars to extend between (point a - error on a) and (point a + error on a).
Is there a way of achieving this with pylab, and if not, any ideas on how else I could do it?","response":"This is almost like the other answer but you don't need a scatter plot at all, you can simply specify a scatter-plot-like format (fmt-parameter) for errorbar: \n```\nimport matplotlib.pyplot as plt\nx = [1, 2, 3, 4]\ny = [1, 4, 9, 16]\ne = [0.5, 1., 1.5, 2.]\nplt.errorbar(x, y, yerr=e, fmt='o')\nplt.show()\n```\nResult: A list of the available fmt parameters can be found for example in the plot documentation: \n```\ncharacter description\n'-' solid line style\n'--' dashed line style\n'-.' dash-dot line style\n':' dotted line style\n'.' point marker\n',' pixel marker\n'o' circle marker\n'v' triangle_down marker\n'^' triangle_up marker\n'<' triangle_left marker\n'>' triangle_right marker\n'1' tri_down marker\n'2' tri_up marker\n'3' tri_left marker\n'4' tri_right marker\n's' square marker\n'p' pentagon marker\n'*' star marker\n'h' hexagon1 marker\n'H' hexagon2 marker\n'+' plus marker\n'x' x marker\n'D' diamond marker\n'd' thin_diamond marker\n'|' vline marker\n'_' hline marker\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22364565\/scatter-plot-error-bars-the-error-on-each-point-is-unique","best_answers_votes":73,"tags":["python","matplotlib","scatter-plot","errorbar"],"question_length":465,"response_length":1058,"tag_count":4} {"question":"Seaborn multiple barplots I have a pandas dataframe that looks like this: \n```\nclass men woman children\n0 first 0.91468 0.667971 0.660562\n1 second 0.30012 0.329380 0.882608\n2 third 0.11899 0.189747 0.121259\n```\nHow would I create a plot using seaborn that looks like this? Do I have to rearrange my data in some way? (source: mwaskom at stanford.edu)","response":"Tested in python 3.12.0, pandas 2.1.1, matplotlib 3.8.0, seaborn 0.13.0 Reshape the DataFrame with pandas.DataFrame.melt or pandas.melt: \n```\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n# convert the dataframe to a long format\ndfm = pd.melt(df, id_vars=\"class\", var_name=\"sex\", value_name=\"survival rate\")\ndfm\nOut: \n class sex survival rate\n0 first men 0.914680\n1 second men 0.300120\n2 third men 0.118990\n3 first woman 0.667971\n4 second woman 0.329380\n5 third woman 0.189747\n6 first children 0.660562\n7 second children 0.882608\n8 third children 0.121259\n```\nConsolidate the plot by creating a single facet with grouped bars, instead of multiple facets with single bars.
Plot with the figure-level method sns.catplot \n```\ng = sns.catplot(x='class', y='survival rate', hue='sex', data=dfm, kind='bar', height=5, aspect=1)\n```\nPlot with the axes-level method sns.barplot \n```\n# the following code matches the plot produced by catplot\nplt.figure(figsize=(5, 5))\nax = sns.barplot(x='class', y='survival rate', hue='sex', data=dfm)\nax.spines[['top', 'right']].set_visible(False)\nsns.move_legend(ax, bbox_to_anchor=(1, 0.5), loc='center left', frameon=False)\n```\nDeprecated factorplot (v0.8.1 or earlier): \n```\nsns.factorplot(x='class', y='survival rate', hue='sex', data=df, kind='bar')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38807895\/seaborn-multiple-barplots","best_answers_votes":90,"tags":["python","pandas","matplotlib","seaborn","grouped-bar-chart"],"question_length":390,"response_length":1427,"tag_count":5} {"question":"How to draw horizontal grid only (using pandas plot + pyplot) I would like to get only a horizontal grid using pandas plot. The integrated parameter of pandas only has grid=True or grid=False, so I tried with matplotlib pyplot, changing the axes parameters, specifically with this code: \n```py\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfig = plt.figure()\nax2 = plt.subplot()\nax2.grid(axis='x')\ndf.plot(kind='bar',ax=ax2, fontsize=10, sort_columns=True)\nplt.show(fig)\n```\nBut I get no grid, neither horizontal nor vertical. Is Pandas overwriting the axes? Or am I doing something wrong?","response":"Try setting the grid after plotting the DataFrame. Also, to get the horizontal grid, you need to use ax2.grid(axis='y'). Below is an answer using a sample DataFrame. I have restructured how you define ax2 by making use of subplots. \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})\n\nfig, ax2 = plt.subplots()\n\ndf.plot(kind='bar',ax=ax2, fontsize=10, sort_columns=True)\nax2.grid(axis='y')\nplt.show()\n```\nAlternatively, you can also do the following: Use the axis object returned from the DataFrame plot directly to turn on the horizontal grid \n```\nfig = plt.figure()\n\nax2 = df.plot(kind='bar', fontsize=10, sort_columns=True)\nax2.grid(axis='y')\n```\nA third option, as suggested by @ayorgo in the comments, is to chain the two commands as \n```\ndf.plot(kind='bar',ax=ax2, fontsize=10, sort_columns=True).grid(axis='y')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/54714018\/how-to-draw-horizontal-grid-only-using-pandas-plot-pyplot","best_answers_votes":100,"tags":["python","python-3.x","pandas","matplotlib","plot"],"question_length":593,"response_length":886,"tag_count":5} {"question":"Rotate xtick labels in seaborn boxplot? I have a question that is basically the same as a question back from 2014 (see here). However, my script still throws an error. Here is what I do: I have a pandas dataframe with a few columns. I plot a simple boxplot comparison. \n```\ng = sns.boxplot(x='categories', y='oxygen', hue='target', data=df)\ng.set_xticklabels(rotation=30)\n```\nThe graph looks like this: I'd like to rotate the x-labels by 30 degrees. Hence I use g.set_xticklabels(rotation=30). However, I get the following error: set_xticklabels() missing 1 required positional argument: 'labels' I don't know how to pass the matplotlib labels argument to seaborn's sns.boxplot.
Any ideas?","response":"The question you link to uses a factorplot. A factorplot returns its own class which has a method called set_xticklabels(rotation). This is different from the set_xticklabels method of the matplotlib Axes. In the linked question's answers there are also other options which you may use \n```\nax = sns.boxplot(x='categories', y='oxygen', hue='target', data=df)\nax.set_xticklabels(ax.get_xticklabels(),rotation=30)\n```\nor \n```\nax = sns.boxplot(x='categories', y='oxygen', hue='target', data=df)\nplt.setp(ax.get_xticklabels(), rotation=45)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44954123\/rotate-xtick-labels-in-seaborn-boxplot","best_answers_votes":90,"tags":["python-3.x","pandas","matplotlib","seaborn"],"question_length":688,"response_length":539,"tag_count":4} {"question":"How to space overlapping annotations I want to annotate the bars in a graph with some text but if the bars are close together and have comparable height, the annotations are above ea. other and thus hard to read (the coordinates for the annotations were taken from the bar position and height). Is there a way to shift one of them if there is a collision? Edit: The bars are very thin and very close sometimes so just aligning vertically doesn't solve the problem... A picture might clarify things:","response":"I've written a quick solution, which checks each annotation position against default bounding boxes for all the other annotations. If there is a collision it changes its position to the next available collision free place. It also puts in nice arrows. For a fairly extreme example, it will produce this (none of the numbers overlap): Instead of this: Here is the code: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy.random import *\n\ndef get_text_positions(x_data, y_data, txt_width, txt_height):\n a = zip(y_data, x_data)\n text_positions = y_data.copy()\n for index, (y, x) in enumerate(a):\n local_text_positions = [i for i in a if i[0] > (y - txt_height) \n and (abs(i[1] - x) < txt_width * 2) and i != (y,x)]\n if local_text_positions:\n sorted_ltp = sorted(local_text_positions)\n if abs(sorted_ltp[0][0] - y) < txt_height: #True == collision\n differ = np.diff(sorted_ltp, axis=0)\n a[index] = (sorted_ltp[-1][0] + txt_height, a[index][1])\n text_positions[index] = sorted_ltp[-1][0] + txt_height\n for k, (j, m) in enumerate(differ):\n #j is the vertical distance between words\n if j > txt_height * 2: #if True then room to fit a word in\n a[index] = (sorted_ltp[k][0] + txt_height, a[index][1])\n text_positions[index] = sorted_ltp[k][0] + txt_height\n break\n return text_positions\n\ndef text_plotter(x_data, y_data, text_positions, axis,txt_width,txt_height):\n for x,y,t in zip(x_data, y_data, text_positions):\n axis.text(x - txt_width, 1.01*t, '%d'%int(y),rotation=0, color='blue')\n if y != t:\n axis.arrow(x, t,0,y-t, color='red',alpha=0.3, width=txt_width*0.1, \n head_width=txt_width, head_length=txt_height*0.5, \n zorder=0,length_includes_head=True)\n```\nHere is the code producing these plots, showing the usage: \n```\n#random test data:\nx_data = random_sample(100)\ny_data = random_integers(10,50,(100))\n\n#GOOD PLOT:\nfig2 = plt.figure()\nax2 = fig2.add_subplot(111)\nax2.bar(x_data, y_data,width=0.00001)\n#set the bbox for the text. 
Increase txt_width for wider text.\ntxt_height = 0.04*(plt.ylim()[1] - plt.ylim()[0])\ntxt_width = 0.02*(plt.xlim()[1] - plt.xlim()[0])\n#Get the corrected text positions, then write the text.\ntext_positions = get_text_positions(x_data, y_data, txt_width, txt_height)\ntext_plotter(x_data, y_data, text_positions, ax2, txt_width, txt_height)\n\nplt.ylim(0,max(text_positions)+2*txt_height)\nplt.xlim(-0.1,1.1)\n\n#BAD PLOT:\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.bar(x_data, y_data, width=0.0001)\n#write the text:\nfor x,y in zip(x_data, y_data):\n ax.text(x - txt_width, 1.01*y, '%d'%int(y),rotation=0)\nplt.ylim(0,max(text_positions)+2*txt_height)\nplt.xlim(-0.1,1.1)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8850142\/how-to-space-overlapping-annotations","best_answers_votes":63,"tags":["python","matplotlib","plot-annotations"],"question_length":498,"response_length":2943,"tag_count":3} {"question":"Arrow on a line plot I'd like to add an arrow to a line plot with matplotlib like in the plot below (drawn with pgfplots). How can I do this (the position and direction of the arrow should ideally be parameters)? Here is some code to experiment. \n```\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nt = np.linspace(-2, 2, 100)\nplt.plot(t, np.sin(t))\nplt.show()\n```\nThanks.","response":"In my experience this works best by using annotate. Thereby you avoid the weird warping you get with ax.arrow which is somehow hard to control. EDIT: I've wrapped it into a little function. \n```\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\n\ndef add_arrow(line, position=None, direction='right', size=15, color=None):\n \"\"\"\n add an arrow to a line.\n\n line: Line2D object\n position: x-position of the arrow. If None, mean of xdata is taken\n direction: 'left' or 'right'\n size: size of the arrow in fontsize points\n color: if None, line color is taken.\n \"\"\"\n if color is None:\n color = line.get_color()\n\n xdata = line.get_xdata()\n ydata = line.get_ydata()\n\n if position is None:\n position = xdata.mean()\n # find closest index\n start_ind = np.argmin(np.absolute(xdata - position))\n if direction == 'right':\n end_ind = start_ind + 1\n else:\n end_ind = start_ind - 1\n\n line.axes.annotate('',\n xytext=(xdata[start_ind], ydata[start_ind]),\n xy=(xdata[end_ind], ydata[end_ind]),\n arrowprops=dict(arrowstyle=\"->\", color=color),\n size=size\n )\n\n\nt = np.linspace(-2, 2, 100)\ny = np.sin(t)\n# return the handle of the line\nline = plt.plot(t, y)[0]\n\nadd_arrow(line)\n\nplt.show()\n```\nIt's not very intuitive but it works. You can then fiddle with the arrowprops dictionary until it looks right.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34017866\/arrow-on-a-line-plot","best_answers_votes":37,"tags":["python","matplotlib"],"question_length":365,"response_length":1416,"tag_count":2} {"question":"why does my colorbar have lines in it? Edit: Since this seems to be a popular post, here's the solution that seems to be working well for me. Thanks @gazzar and @mfra. \n```\ncbar.solids.set_rasterized(True)\ncbar.solids.set_edgecolor(\"face\")\n```\nDoes anyone know why my colorbar has what appear to be lines in it? Or rather why is the color transition not smooth? I'm using basemap, obviously, but that shouldn't matter since it's all matplotlib calls under the hood AFAICT.
I create the map doing something like \n```\ngrays = plt.cm.get_cmap(\"Greys\")\nsc = mymap.scatter(xpoints, ypoints, s=sizes, c=color_values, cmap=grays, alpha=.75,\n marker=\"o\", zorder=10, vmin=0, vmax=1)\ncbar = mymap.colorbar(sc, drawedges=True, location=\"bottom\")\n```\nI tried with and without alpha and the result was the same. Maybe it is because my color_values array is not fine enough? Can I set the underlying values that are mapped to the colorbar somewhere? I don't see how, and I don't see this problem elsewhere. I.e., I can replicate the matplotlib show_colorbars example without this problem.","response":"In case you create vector graphics, have you tried this (taken from http:\/\/matplotlib.org\/api\/pyplot_api.html?highlight=colorbar#matplotlib.pyplot.colorbar): \"It is known that some vector graphics viewer (svg and pdf) renders white gaps between segments of the colorbar. This is due to bugs in the viewers not matplotlib. As a workaround the colorbar can be rendered with overlapping segments: \n```\ncbar = colorbar()\ncbar.solids.set_edgecolor(\"face\")\ndraw()\n```\nHowever this has negative consequences in other circumstances. Particularly with semi transparent images (alpha < 1) and colorbar extensions and is not enabled by default see (issue #1188).\"","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15003353\/why-does-my-colorbar-have-lines-in-it","best_answers_votes":33,"tags":["python","matplotlib","matplotlib-basemap"],"question_length":1094,"response_length":652,"tag_count":3} {"question":"One colorbar for seaborn heatmaps in subplot Here is an example that shows a colorbar for each subplot: \n```\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.random((10,10,)))\n\nfig,axn = plt.subplots(2, 2, sharex=True, sharey=True)\n\nfor ax in axn.flat:\n sns.heatmap(df, ax=ax)\n```\nHow can I remove the colorbars for each subplot? I'd like to have only one colorbar that is either vertically or horizontally oriented. I know I have access to each colorbar axes via fig.get_axes()[:-4], but how can I remove them entirely from the plot? I don't think there is an option to opt out of drawing the colorbar when heatmap is called.","response":"The cbar parameter controls whether a colorbar should be added, and the cbar_ax parameter can optionally specify the axes where the colorbar should go. So, you could do: \n```\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.random((10,10,)))\n\nfig, axn = plt.subplots(2, 2, sharex=True, sharey=True)\ncbar_ax = fig.add_axes([.91, .3, .03, .4])\n\nfor i, ax in enumerate(axn.flat):\n sns.heatmap(df, ax=ax,\n cbar=i == 0,\n vmin=0, vmax=1,\n cbar_ax=None if i else cbar_ax)\n\nfig.tight_layout(rect=[0, 0, .9, 1])\n```\n(You'll get a warning about tight_layout here, but it actually is correct because we placed cbar_ax explicitly.
If you don't like seeing the warning, you can also call tight_layout before plotting, but it won't be as tight).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28356359\/one-colorbar-for-seaborn-heatmaps-in-subplot","best_answers_votes":87,"tags":["python","matplotlib","heatmap","seaborn"],"question_length":710,"response_length":852,"tag_count":4} {"question":"Plotting a histogram from pre-counted data in Matplotlib I'd like to use Matplotlib to plot a histogram over data that's been pre-counted. For example, say I have the raw data \n```\ndata = [1, 2, 2, 3, 4, 5, 5, 5, 5, 6, 10]\n```\nGiven this data, I can use \n```\npylab.hist(data, bins=[...])\n```\nto plot a histogram. In my case, the data has been pre-counted and is represented as a dictionary: \n```\ncounted_data = {1: 1, 2: 2, 3: 1, 4: 1, 5: 4, 6: 1, 10: 1}\n```\nIdeally, I'd like to pass this pre-counted data to a histogram function that lets me control the bin widths, plot range, etc, as if I had passed it the raw data. As a workaround, I'm expanding my counts into the raw data: \n```\ndata = list(chain.from_iterable(repeat(value, count)\n for (value, count) in counted_data.iteritems()))\n```\nThis is inefficient when counted_data contains counts for millions of data points. Is there an easier way to use Matplotlib to produce a histogram from my pre-counted data? Alternatively, if it's easiest to just bar-plot data that's been pre-binned, is there a convenience method to \"roll-up\" my per-item counts into binned counts?","response":"You can use the weights keyword argument to np.histogram (which plt.hist calls underneath) \n```\nval, weight = zip(*[(k, v) for k,v in counted_data.items()])\nplt.hist(val, weights=weight)\n```\nAssuming you only have integers as the keys, you can also use bar directly: \n```\nmin_bin = np.min(list(counted_data.keys())) # list() needed on Python 3, where dict.keys() is a view\nmax_bin = np.max(list(counted_data.keys()))\n\nbins = np.arange(min_bin, max_bin + 1)\nvals = np.zeros(max_bin - min_bin + 1)\n\nfor k,v in counted_data.items():\n vals[k - min_bin] = v\n\nplt.bar(bins, vals, ...)\n```\nwhere ... is whatever arguments you want to pass to bar (doc) If you want to re-bin your data see Histogram with separate list denoting frequency","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19212508\/plotting-a-histogram-from-pre-counted-data-in-matplotlib","best_answers_votes":36,"tags":["python","matplotlib","histogram"],"question_length":1135,"response_length":662,"tag_count":3} {"question":"Difference between plt.draw() and plt.show() in matplotlib I was wondering why some people put a plt.draw() into their code before the plt.show(). For my code, the behavior of the plt.draw() didn't seem to change anything about the output. I did a search on the internet but couldn't find anything useful. (assuming we imported pyplot as from matplotlib import pyplot as plt)","response":"plt.show() will display the current figure that you are working on. plt.draw() will re-draw the figure. This allows you to work in interactive mode and, should you have changed your data or formatting, allows the graph itself to change.
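As a minimal sketch of that interactive workflow (assuming an interactive backend is available): \n```\nimport matplotlib.pyplot as plt\n\nplt.ion() # interactive mode: the figure is shown without blocking\nline, = plt.plot([1, 2, 3])\nline.set_ydata([3, 2, 1]) # alter the data in place...\nplt.draw() # ...and ask matplotlib to re-draw the altered figure\n```\n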
The plt.draw docs state: This is used in interactive mode to update a figure that has been altered using one or more plot object method calls; it is not needed if figure modification is done entirely with pyplot functions, if a sequence of modifications ends with a pyplot function, or if matplotlib is in non-interactive mode and the sequence of modifications ends with show() or savefig(). This seems to suggest that using plt.draw() before plt.show() when not in interactive mode will be redundant the vast majority of the time. The only time you may need it is if you are doing some very strange modifications that don't involve using pyplot functions. Refer to the Matplotlib doc, \"Interactive figures\" for more information.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23141452\/difference-between-plt-draw-and-plt-show-in-matplotlib","best_answers_votes":61,"tags":["python","matplotlib"],"question_length":375,"response_length":1089,"tag_count":2} {"question":"AttributeError while adding colorbar in matplotlib The following code fails to run on Python 2.5.4: \n```\nfrom matplotlib import pylab as pl\nimport numpy as np\n\ndata = np.random.rand(6,6)\nfig = pl.figure(1)\nfig.clf()\nax = fig.add_subplot(1,1,1)\nax.imshow(data, interpolation='nearest', vmin=0.5, vmax=0.99)\npl.colorbar()\n\npl.show()\n```\nThe error message is \n```\nC:\\temp>python z.py\nTraceback (most recent call last):\n File \"z.py\", line 10, in \n pl.colorbar()\n File \"C:\\Python25\\lib\\site-packages\\matplotlib\\pyplot.py\", line 1369, in colorbar\n ret = gcf().colorbar(mappable, cax = cax, ax=ax, **kw)\n File \"C:\\Python25\\lib\\site-packages\\matplotlib\\figure.py\", line 1046, in colorbar\n cb = cbar.Colorbar(cax, mappable, **kw)\n File \"C:\\Python25\\lib\\site-packages\\matplotlib\\colorbar.py\", line 622, in __init__\n mappable.autoscale_None() # Ensure mappable.norm.vmin, vmax\nAttributeError: 'NoneType' object has no attribute 'autoscale_None'\n```\nHow can I add colorbar to this code? Following is the interpreter information: \n```\nPython 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\n```","response":"(This is a very old question, I know.) The reason you are seeing this issue is that you have mixed the use of the state machine (matplotlib.pyplot) with the OO approach of adding images to an axes. The plt.imshow function differs from the ax.imshow method in just one subtle way. The method ax.imshow: creates and returns an Image which has been added to the axes The function plt.imshow: creates and returns an Image which has been added to the current axes, and sets the image to be the \"current\" image\/mappable (which can then be automatically picked up by the plt.colorbar function).
If you want to be able to use the plt.colorbar (which in all but the most extreme cases, you do) with the ax.imshow method, you will need to pass the returned image (which is an instance of a ScalarMappable) to plt.colorbar as the first argument: \n```\nplt.imshow(image_file)\nplt.colorbar()\n```\nis equivalent (without using the state machine) to: \n```\nimg = ax.imshow(image_file)\nplt.colorbar(img, ax=ax)\n```\nIf ax is the current axes in pyplot, then the kwarg ax=ax is not needed.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2643953\/attributeerror-while-adding-colorbar-in-matplotlib","best_answers_votes":90,"tags":["python","matplotlib"],"question_length":1210,"response_length":1079,"tag_count":2} {"question":"How to save the Pandas dataframe\/series data as a figure? It sounds somewhat weird, but I need to save the Pandas console output string to png pics. For example: \n```\n>>> df\n sales net_pft ROE ROIC\nSTK_ID RPT_Date \n600809 20120331 22.1401 4.9253 0.1651 0.6656\n 20120630 38.1565 7.8684 0.2567 1.0385\n 20120930 52.5098 12.4338 0.3587 1.2867\n 20121231 64.7876 13.2731 0.3736 1.2205\n 20130331 27.9517 7.5182 0.1745 0.3723\n 20130630 40.6460 9.8572 0.2560 0.4290\n 20130930 53.0501 11.8605 0.2927 0.4369\n```\nIs there any way, like df.output_as_png(filename='df_data.png'), to generate a pic file which just displays the above content?","response":"Option-1: use matplotlib table functionality, with some additional styling: \n```\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame()\ndf['date'] = ['2016-04-01', '2016-04-02', '2016-04-03']\ndf['calories'] = [2200, 2100, 1500]\ndf['sleep hours'] = [8, 7.5, 8.2]\ndf['gym'] = [True, False, False]\n\ndef render_mpl_table(data, col_width=3.0, row_height=0.625, font_size=14,\n header_color='#40466e', row_colors=['#f1f1f2', 'w'], edge_color='w',\n bbox=[0, 0, 1, 1], header_columns=0,\n ax=None, **kwargs):\n if ax is None:\n size = (np.array(data.shape[::-1]) + np.array([0, 1])) * np.array([col_width, row_height])\n fig, ax = plt.subplots(figsize=size)\n ax.axis('off')\n mpl_table = ax.table(cellText=data.values, bbox=bbox, colLabels=data.columns, **kwargs)\n mpl_table.auto_set_font_size(False)\n mpl_table.set_fontsize(font_size)\n\n for k, cell in mpl_table._cells.items():\n cell.set_edgecolor(edge_color)\n if k[0] == 0 or k[1] < header_columns:\n cell.set_text_props(weight='bold', color='w')\n cell.set_facecolor(header_color)\n else:\n cell.set_facecolor(row_colors[k[0]%len(row_colors) ])\n return ax.get_figure(), ax\n\nfig,ax = render_mpl_table(df, header_columns=0, col_width=2.0)\nfig.savefig(\"table_mpl.png\")\n```\nOption-2: Use Plotly + kaleido \n```\nimport plotly.figure_factory as ff\nimport pandas as pd\n\ndf = pd.DataFrame()\ndf['date'] = ['2016-04-01', '2016-04-02', '2016-04-03']\ndf['calories'] = [2200, 2100, 1500]\ndf['sleep hours'] = [8, 7.5, 8.2]\ndf['gym'] = [True, False, False]\n\nfig = ff.create_table(df)\nfig.update_layout(\n autosize=False,\n width=500,\n height=200,\n)\nfig.write_image(\"table_plotly.png\", scale=2)\nfig.show()\n```\nFor the above, the font size can be changed using the font attribute: \n```\nfig.update_layout(\n autosize=False,\n width=500,\n height=200,
font={'size':8}\n)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19726663\/how-to-save-the-pandas-dataframe-series-data-as-a-figure","best_answers_votes":68,"tags":["python","matplotlib","pandas"],"question_length":754,"response_length":1996,"tag_count":3} {"question":"ValueError: Unknown projection '3d' (once again) When executing this line of code: \n```\nimport matplotlib.pyplot as plt\n\n#your code\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\n```\nI have an output error: \n```\nraise ValueError(\"Unknown projection %r\" % projection)\n\nValueError: Unknown projection '3d'\n\n
\n```\nThe error also appears when I use Spyder as the IDE. The version of matplotlib is \n```\nprint('matplotlib: {}'.format(matplotlib.__version__))\nmatplotlib: 1.5.0rc3\n```\nBut I had the same problem even with other versions of matplotlib. A similar error was reported in this question (Stackoverflow) but the answers do not help. Any suggestions on how to modify the code? (Also seen with matplotlib 3.0.2.)","response":"You will have to import Axes3D to enable the 3d plotting in matplotlib. The official tutorials on 3d plotting can be found here. So the correct imports and code would look like \n```\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D # <--- This is important for 3d plotting \n\n#your code\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\n```\nNote that on recent matplotlib versions the Axes3D import is no longer required, and fig.gca(projection='3d') has been deprecated in favour of fig.add_subplot(projection='3d').","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/56222259\/valueerror-unknown-projection-3d-once-again","best_answers_votes":109,"tags":["python","matplotlib","conda","spyder","matplotlib-3d"],"question_length":735,"response_length":361,"tag_count":5} {"question":"Matplotlib boxplot without outliers Is there any way of hiding the outliers when plotting a boxplot in matplotlib (python)? I'm using the simplest way of plotting it: \n```\nfrom pylab import *\nboxplot([1,2,3,4,5,10])\nshow()\n```\nThis gives me the following plot: (I cannot post the image because I don't have enough reputation, but basically it is a boxplot with Q1 at y=1, Q3 at y=5, and the outlier at y=10) I would like to remove the outlier at y=10, so that the plot only shows from Q1 to Q3 (in this case from 1 to 5).","response":"In current versions of matplotlib you can do: \n```\nboxplot([1,2,3,4,5,10], showfliers=False)\n```\nor \n```\nboxplot([1,2,3,4,5,10], sym='')\n```\nIn older versions, only the second approach will work. The docs for boxplot do mention this, btw, as \"Enter an empty string (‘’) if you don’t want to show fliers.\", though, at least for myself, \"outliers\" is the more familiar word.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22028064\/matplotlib-boxplot-without-outliers","best_answers_votes":101,"tags":["python","matplotlib","boxplot","outliers"],"question_length":523,"response_length":372,"tag_count":4} {"question":"Changing the background color of the axes planes of a 3D plot On the basis of the scatterplot example of matplotlib, how can I change the gray background color of the 3 axes grid planes? I would like to set it to white, keeping the grid lines with the default gray color. I found this question but I couldn't apply it to the example.
Thanks.","response":"For a slightly different approach, see below: \n```\n# Get rid of colored axes planes\n# First remove fill\nax.xaxis.pane.fill = False\nax.yaxis.pane.fill = False\nax.zaxis.pane.fill = False\n\n# Now set color to white (or whatever is \"invisible\")\nax.xaxis.pane.set_edgecolor('w')\nax.yaxis.pane.set_edgecolor('w')\nax.zaxis.pane.set_edgecolor('w')\n\n# Bonus: To get rid of the grid as well:\nax.grid(False)\n```\nSee this blog post that I used as my source.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11448972\/changing-the-background-color-of-the-axes-planes-of-a-3d-plot","best_answers_votes":48,"tags":["python","matplotlib","scatter-plot","matplotlib-3d"],"question_length":341,"response_length":444,"tag_count":4} {"question":"How to set the line width of error bar caps How can the line width of the error bar caps in Matplotlib be changed? I tried the following code: \n```\n(_, caplines, _) = matplotlib.pyplot.errorbar(\n data['distance'], data['energy'], yerr=data['energy sigma'],\n capsize=10, elinewidth=3)\n\nfor capline in caplines:\n capline.set_linewidth(10)\n capline.set_color('red')\n\npp.draw()\n```\nUnfortunately, this updates the color of the caps, but does not update the line width of the caps! The resulting effect is similar to the \"fat error bar lines \/ thin caps\" in the following image: It would be nice to have \"fat\" bar caps, in the case; how can this be done, in Matplotlib? Drawing the bar caps \"manually\", one by one with plot() would work, but a simpler alternative would be best.","response":"EOL, you were very close.., \n```\ndistance = [1,3,7,9]\nenergy = [10,20,30,40]\nsigma = [1,3,2,5]\n\n(_, caps, _) = plt.errorbar(distance, energy, sigma, capsize=20, elinewidth=3)\n\nfor cap in caps:\n cap.set_color('red')\n cap.set_markeredgewidth(10)\n\nplt.show\n```\nset_markeredgewidth sets the width of the cap lines. Matplotlib objects have so many attributes that often it is difficult to remember the right ones for a given object. IPython is a very useful tool for introspecting matplotlib. I used it to analyze the properties of the 2Dlines correponding to the error cap lines and I found that and other marker properties. Cheers","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7601334\/how-to-set-the-line-width-of-error-bar-caps","best_answers_votes":41,"tags":["python","matplotlib"],"question_length":785,"response_length":633,"tag_count":2} {"question":"Hiding Axis Labels I'm trying to hide the axis labels on the first subplot at 211. I'd like to label the figure, not just a subplot (reference: \"Isub Event Characteristics\"). How can I control font properties like size, font, color? \n```\nf = Figure()\n\nvdsvgsPlot = f.add_subplot(211)\nvdsvgsPlot.plot(theLister()[3],theLister()[0])\nvdsvgsPlot.plot(theLister()[3],theLister()[1])\n\nisubPlot = f.add_subplot(212)\nisubPlot.plot(theLister()[3],theLister()[2])\n\nplotCanvas = FigureCanvasTkAgg(f, master)\ntoolbar = NavigationToolbar2TkAgg(plotCanvas, master)\n\nplotCanvas.get_tk_widget().pack()\n```\nThank you in advance.","response":"You have several different questions here... Let me break them up a bit... By \"hide the axis labels on the first subplot\" do you mean the actual axis labels (which aren't there unless you specify them), the tick labels (i.e. the numbers along the axis), the axis ticks, or all of the above? 
If you mean \"all of the above\", just do ax.xaxis.set_visible(False) and the same for the y-axis. (ax here would be vdsvgsPlot in your example code above) If you mean the axis tick labels, just set them to [], i.e.: ax.set_xticklabels([]). (and set_yticklabels for the y-axis) If you mean the axis ticks, you can do something similar: ax.set_xticks([]) and ax.set_yticks([]) which will turn off both the ticks and ticklabels. As to the second question, use suptitle to title the entire figure. i.e.: fig.suptitle('whatever') (f.suptitle... in your example code above). As for how to control the font properties, you can either pass various keyword arguments to suptitle (or anything else that creates text on a plot) or set them after you create the text. For example fig.suptitle('This is a title', size=20, horizontalalignment='left', font='Times', color='red') In general, I would suggest you look through the various user's guide, gallery of examples (all of which have the source code included), the pyplot api docs, and the detailed api docs.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4079795\/hiding-axis-labels","best_answers_votes":92,"tags":["python","matplotlib","tkinter"],"question_length":611,"response_length":1338,"tag_count":3} {"question":"Plot Normal distribution with Matplotlib [duplicate] This question already has answers here: How to plot normal distribution (10 answers) Closed 3 years ago. please help me to plot the normal distribution of the folowing data: DATA: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import norm\n\nh = [186, 176, 158, 180, 186, 168, 168, 164, 178, 170, 189, 195, 172,\n 187, 180, 186, 185, 168, 179, 178, 183, 179, 170, 175, 186, 159,\n 161, 178, 175, 185, 175, 162, 173, 172, 177, 175, 172, 177, 180]\n\nstd = np.std(h) \nmean = np.mean(h) \nplt.plot(norm.pdf(h,mean,std))\n```\noutput: \n```\nStandard Deriviation = 8.54065575872 \nmean = 176.076923077\n```\nthe plot is incorrect, what is wrong with my code?","response":"Note: This solution is using pylab, not matplotlib.pyplot You may try using hist to put your data info along with the fitted curve as below: \n```\nimport numpy as np\nimport scipy.stats as stats\nimport pylab as pl\n\nh = sorted([186, 176, 158, 180, 186, 168, 168, 164, 178, 170, 189, 195, 172,\n 187, 180, 186, 185, 168, 179, 178, 183, 179, 170, 175, 186, 159,\n 161, 178, 175, 185, 175, 162, 173, 172, 177, 175, 172, 177, 180]) #sorted\n\nfit = stats.norm.pdf(h, np.mean(h), np.std(h)) #this is a fitting indeed\n\npl.plot(h,fit,'-o')\n\npl.hist(h,normed=True) #use this to draw histogram of your data\n\npl.show() #use may also need add this\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20011494\/plot-normal-distribution-with-matplotlib","best_answers_votes":100,"tags":["python","numpy","matplotlib","plot","scipy"],"question_length":730,"response_length":666,"tag_count":5} {"question":"Hide axis label only, not entire axis, in Pandas plot I can clear the text of the xlabel in a Pandas plot with: \n```py\nplt.xlabel(\"\")\n```\nInstead, is it possible to hide the label? May be something like .xaxis.label.set_visible(False).","response":"From the Pandas docs - The plot method on Series and DataFrame is just a simple wrapper around plt.plot(): This means that anything you can do with matplolib, you can do with a Pandas DataFrame plot. 
pyplot has an axis() method that lets you set axis properties. Calling plt.axis('off') before calling plt.show() will turn off both axes. \n```\ndf.plot()\nplt.axis('off')\nplt.show()\nplt.close()\n```\nTo control a single axis, you need to set its properties via the plot's Axes. For the x axis - (pyplot.axes().get_xaxis().....) \n```\ndf.plot()\nax1 = plt.axes()\nx_axis = ax1.axes.get_xaxis()\nx_axis.set_visible(False)\nplt.show()\nplt.close()\n```\nSimilarly, to control an axis label, get the label and turn it off. \n```\ndf.plot()\nax1 = plt.axes()\nx_axis = ax1.axes.get_xaxis()\nx_axis.set_label_text('foo')\nx_label = x_axis.get_label()\n##print isinstance(x_label, matplotlib.artist.Artist)\nx_label.set_visible(False)\nplt.show()\nplt.close()\n```\nYou can also get to the x axis like this: \n```\nax1 = plt.axes()\nx_axis = ax1.xaxis\nx_axis.set_label_text('foo')\nx_axis.label.set_visible(False)\n```\nOr this: \n```\nax1 = plt.axes()\nax1.xaxis.set_label_text('foo')\nax1.xaxis.label.set_visible(False)\n```\nDataFrame.plot returns a matplotlib.axes.Axes or numpy.ndarray of them so you can get it\/them when you call it. \n```\naxs = df.plot()\n```\n.set_visible() is an Artist method. The axes and their labels are Artists so they have Artist methods\/attributes as well as their own. There are many ways to customize your plots. Sometimes you can find the feature you want by browsing the Gallery and Examples","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/40705614\/hide-axis-label-only-not-entire-axis-in-pandas-plot","best_answers_votes":87,"tags":["python","python-3.x","pandas","matplotlib","axis-labels"],"question_length":235,"response_length":1576,"tag_count":5} {"question":"Non-ASCII characters in Matplotlib I have a problem displaying non-ASCII characters in Matplotlib: these characters are rendered as small boxes instead of a proper font; it looks like this (I filled these boxes with red paint to highlight them): How do I fix it? A related question is Accented characters in Matplotlib.","response":"This problem may actually have a couple of different causes. (1) The default font does not include these glyphs. You may change the default font using the following (before any plotting is done!) \n```\nmatplotlib.rc('font', family='Arial')\n```\nIn some versions of matplotlib you'll have to set the family: \n```\nmatplotlib.rc('font', **{'sans-serif' : 'Arial',\n 'family' : 'sans-serif'})\n```\n(Note that because sans-serif contains a hyphen, the **{} syntax is actually necessary.) The first command changes the sans-serif font family to contain only one font (in my case it was Arial), the second sets the default font family to sans-serif. Other options are included in the documentation.
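If you are not sure which font families are actually available on your machine, a quick way to list the ones matplotlib has discovered is the following small sketch using matplotlib's font manager (the slicing is just to keep the output short): \n```\nfrom matplotlib import font_manager\n\n# names of the TrueType fonts matplotlib discovered on this system\nnames = sorted({f.name for f in font_manager.fontManager.ttflist})\nprint(names[:20])\n```\n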
(2) You have improperly created\/passed string objects into Matplotlib. Even if the font contains proper glyphs, if you forgot to use u to create Unicode constants, Matplotlib will have this behaviour: \n```\nplt.xlabel(\"Średnia odległość między stacjami wsparcia a modelowaną [km]\")\n```\nSo you need to add u: \n```\nplt.xlabel(u\"Średnia odległość między stacjami wsparcia a modelowaną [km]\")\n```\nAnother cause is that you forgot to put a UTF-8 magic comment on top of the file (I read that this might be the source of the problem): \n```\n# -*- coding: utf-8 -*-\n```\n","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10960463\/non-ascii-characters-in-matplotlib","best_answers_votes":80,"tags":["python","unicode","matplotlib","ascii"],"question_length":315,"response_length":1272,"tag_count":4} {"question":"Specify figure size in centimeter in matplotlib I am wondering whether you can specify the size of a figure in matplotlib in centimeters. At the moment I write: \n```\ndef cm2inch(value):\n return value\/2.54\n\nfig = plt.figure(figsize=(cm2inch(12.8), cm2inch(9.6)))\n```\nBut is there a native approach?","response":"This is not an answer to the question \"Is there a native way?\", but I think that there is a more elegant way: \n```py\ndef cm2inch(*tupl):\n inch = 2.54\n if isinstance(tupl[0], tuple):\n return tuple(i\/inch for i in tupl[0])\n else:\n return tuple(i\/inch for i in tupl)\n```\nThen one can issue plt.figure(figsize=cm2inch(12.8, 9.6)), which I think is a much cleaner way. The implementation also allows us to use cm2inch((12.8, 9.6)), which I personally do not prefer, but some people may do. Even though there isn't any way of doing this natively at the moment, I found a discussion here.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14708695\/specify-figure-size-in-centimeter-in-matplotlib","best_answers_votes":34,"tags":["python","matplotlib","figsize"],"question_length":299,"response_length":606,"tag_count":3} {"question":"Multiple histograms in Pandas I would like to create the following histogram (see image below) taken from the book \"Think Stats\". However, I cannot get them on the same plot. Each DataFrame takes its own subplot. I have the following code: \n```\nimport nsfg\nimport matplotlib.pyplot as plt\ndf = nsfg.ReadFemPreg()\npreg = nsfg.ReadFemPreg()\nlive = preg[preg.outcome == 1]\n\nfirst = live[live.birthord == 1]\nothers = live[live.birthord != 1]\n\n#fig = plt.figure()\n#ax1 = fig.add_subplot(111)\n\nfirst.hist(column = 'prglngth', bins = 40, color = 'teal', \\\n alpha = 0.5)\nothers.hist(column = 'prglngth', bins = 40, color = 'blue', \\\n alpha = 0.5)\nplt.show()\n```\nThe above code does not work when I use ax = ax1 as suggested in: pandas multiple plots not working as hists, nor does this example do what I need: Overlaying multiple histograms using pandas. When I use the code as it is, it creates two windows with histograms. Any ideas how to combine them? Here's an example of how I'd like the final figure to look:","response":"As far as I can tell, pandas can't handle this situation. That's ok since all of their plotting methods are for convenience only. You'll need to use matplotlib directly.
Here's how I do it: \n```\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas\n#import seaborn\n#seaborn.set(style='ticks')\n\nnp.random.seed(0)\ndf = pandas.DataFrame(np.random.normal(size=(37,2)), columns=['A', 'B'])\nfig, ax = plt.subplots()\n\na_heights, a_bins = np.histogram(df['A'])\nb_heights, b_bins = np.histogram(df['B'], bins=a_bins)\n\nwidth = (a_bins[1] - a_bins[0])\/3\n\nax.bar(a_bins[:-1], a_heights, width=width, facecolor='cornflowerblue')\nax.bar(b_bins[:-1]+width, b_heights, width=width, facecolor='seagreen')\n#seaborn.despine(ax=ax, offset=10)\n```\nAnd that gives me:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25539195\/multiple-histograms-in-pandas","best_answers_votes":57,"tags":["python","matplotlib","pandas","histogram"],"question_length":1024,"response_length":775,"tag_count":4} {"question":"Setting axes.linewidth without changing the rcParams global dict So, it seems one cannot do the following (it raises an error, since axes does not have a set_linewidth method): \n```\naxes_style = {'linewidth':5}\naxes_rect = [0.1, 0.1, 0.9, 0.9]\n\naxes(axes_rect, **axes_style)\n```\nand has to use the following old trick instead: \n```\nrcParams['axes.linewidth'] = 5 # set the value globally\n\n... # some code\n\nrcdefaults() # restore [global] defaults\n```\nIs there an easy \/ clean way (maybe one can set x- and y- axes parameters individually, etc)? If not, why?","response":"(An earlier version of this answer did not work, as explained in the comments.) I suggest using spines. As mentioned in a comment by Czechnology, consider changing the ticks too. \n```py\nimport matplotlib.pyplot as plt\n\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))\n\nax1.set_title('Normal spine and ticks')\nax2.set_title('Adjusted spine and ticks')\n\n# change each spine separately:\n# ax.spines['right'].set_linewidth(0.5)\n\n# change all spines\nfor axis in ['top','bottom','left','right']:\n ax2.spines[axis].set_linewidth(4)\n\n# increase tick width\nax2.tick_params(width=4)\n\nplt.show()\n```\nSee more about spines at: http:\/\/matplotlib.org\/api\/spines_api.html http:\/\/matplotlib.org\/examples\/pylab_examples\/multiple_yaxis_with_spines.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2553521\/setting-axes-linewidth-without-changing-the-rcparams-global-dict","best_answers_votes":103,"tags":["python","matplotlib","plot","graphing"],"question_length":557,"response_length":725,"tag_count":4} {"question":"matplotlib: disregard outliers when plotting I'm plotting some data from various tests. Sometimes in a test I happen to have one outlier (say 0.1), while all other values are three orders of magnitude smaller. With matplotlib, I plot against the range [0, max_data_value]. How can I just zoom into my data and not display outliers, which would mess up the x-axis in my plot? Should I simply take the 95th percentile and have the range [0, 95_percentile] on the x-axis?","response":"There's no single \"best\" test for an outlier. Ideally, you should incorporate a-priori information (e.g. \"This parameter shouldn't be over x because of blah...\"). Most tests for outliers use the median absolute deviation, rather than the 95th percentile or some other variance-based measurement. Otherwise, the variance\/stddev that is calculated will be heavily skewed by the outliers. Here's a function that implements one of the more common outlier tests.
\n```\ndef is_outlier(points, thresh=3.5):\n \"\"\"\n Returns a boolean array with True if points are outliers and False \n otherwise.\n\n Parameters:\n -----------\n points : A numobservations by numdimensions array of observations\n thresh : The modified z-score to use as a threshold. Observations with\n a modified z-score (based on the median absolute deviation) greater\n than this value will be classified as outliers.\n\n Returns:\n --------\n mask : A numobservations-length boolean array.\n\n References:\n ----------\n Boris Iglewicz and David Hoaglin (1993), \"Volume 16: How to Detect and\n Handle Outliers\", The ASQC Basic References in Quality Control:\n Statistical Techniques, Edward F. Mykytka, Ph.D., Editor. \n \"\"\"\n if len(points.shape) == 1:\n points = points[:,None]\n median = np.median(points, axis=0)\n diff = np.sum((points - median)**2, axis=-1)\n diff = np.sqrt(diff)\n med_abs_deviation = np.median(diff)\n\n modified_z_score = 0.6745 * diff \/ med_abs_deviation\n\n return modified_z_score > thresh\n```\nAs an example of using it, you'd do something like the following: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# The function above... In my case it's in a local utilities module\nfrom sci_utilities import is_outlier\n\n# Generate some data\nx = np.random.random(100)\n\n# Append a few \"bad\" points\nx = np.r_[x, -3, -10, 100]\n\n# Keep only the \"good\" points\n# \"~\" operates as a logical not operator on boolean numpy arrays\nfiltered = x[~is_outlier(x)]\n\n# Plot the results\nfig, (ax1, ax2) = plt.subplots(nrows=2)\n\nax1.hist(x)\nax1.set_title('Original')\n\nax2.hist(filtered)\nax2.set_title('Without Outliers')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11882393\/matplotlib-disregard-outliers-when-plotting","best_answers_votes":81,"tags":["python","plot","matplotlib","percentile","outliers"],"question_length":465,"response_length":2202,"tag_count":5} {"question":"How to format axis tick labels from number to thousands or Millions (125,436 to 125.4K) \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nimport seaborn as sns\nimport pandas as pd\nsns.set(style=\"darkgrid\") \nfig, ax = plt.subplots(figsize=(8, 5)) \npalette = sns.color_palette(\"bright\", 6)\ng = sns.scatterplot(ax=ax, x=\"Area\", y=\"Rent\/Sqft\", hue=\"Region\", marker='o', data=df, s=100, palette= palette)\ng.legend(bbox_to_anchor=(1, 1), ncol=1)\ng.set(xlim = (50000,250000))\n```\nHow can I change the axis format from a number to a custom format?
For example, 125000 to 125.00K","response":"IIUC you can format the xticks and set these: \n```python\n#generate some pseudo data\ndf = pd.DataFrame({'num':[50000, 75000, 100000, 125000], 'Rent\/Sqft':np.random.randn(4), 'Region':list('abcd')})\ndf\n#Output\n#num Rent\/Sqft Region\n#0 50000 0.109196 a\n#1 75000 0.566553 b\n#2 100000 -0.274064 c\n#3 125000 -0.636492 d\n```\n\n```python\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nimport seaborn as sns\nimport pandas as pd\nsns.set(style=\"darkgrid\") \nfig, ax = plt.subplots(figsize=(8, 5)) \npalette = sns.color_palette(\"bright\", 4)\ng = sns.scatterplot(ax=ax, x=\"num\", y=\"Rent\/Sqft\", hue=\"Region\", marker='o', data=df, s=100, palette= palette)\ng.legend(bbox_to_anchor=(1, 1), ncol=1)\ng.set(xlim = (50000,250000))\nxlabels = ['{:,.2f}'.format(x) + 'K' for x in g.get_xticks()\/1000]\ng.set_xticklabels(xlabels)\n```\nOutput: The key bit here is this line: \n```\nxlabels = ['{:,.2f}'.format(x) + 'K' for x in g.get_xticks()\/1000]\ng.set_xticklabels(xlabels)\n```\nSo this divides all the ticks by 1000, then formats them and sets the xtick labels. UPDATE: Thanks to @ScottBoston who has suggested a better method: \n```\nax.xaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{:,.2f}'.format(x\/1000) + 'K'))\n```\nsee the docs","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/53747298\/how-to-format-axis-tick-labels-from-number-to-thousands-or-millions-125-436-to","best_answers_votes":60,"tags":["python","matplotlib","seaborn","xticks","yticks"],"question_length":600,"response_length":1277,"tag_count":5}
{"question":"How to create a stacked bar chart for my DataFrame using seaborn [duplicate] This question already has answers here: How to have clusters of stacked bars (10 answers) Closed 7 years ago. I have a DataFrame df: \n```py\ndf = pd.DataFrame(columns=[\"App\",\"Feature1\", \"Feature2\",\"Feature3\", \"Feature4\",\"Feature5\", \"Feature6\",\"Feature7\",\"Feature8\"], data=[['SHA', 0, 0, 1, 1, 1, 0, 1, 0], ['LHA', 1, 0, 1, 1, 0, 1, 1, 0], ['DRA', 0, 0, 0, 0, 0, 0, 1, 0], ['FRA', 1, 0, 1, 1, 1, 0, 1, 1], ['BRU', 0, 0, 1, 0, 1, 0, 0, 0], ['PAR', 0, 1, 1, 1, 1, 0, 1, 0], ['AER', 0, 0, 1, 1, 0, 1, 1, 0], ['SHE', 0, 0, 0, 1, 0, 0, 1, 0]])\n\n# display(df)\n App Feature1 Feature2 Feature3 Feature4 Feature5 Feature6 Feature7 Feature8\n0 SHA 0 0 1 1 1 0 1 0\n1 LHA 1 0 1 1 0 1 1 0\n2 DRA 0 0 0 0 0 0 1 0\n3 FRA 1 0 1 1 1 0 1 1\n4 BRU 0 0 1 0 1 0 0 0\n5 PAR 0 1 1 1 1 0 1 0\n6 AER 0 0 1 1 0 1 1 0\n7 SHE 0 0 0 1 0 0 1 0\n```\nI want to create a stacked bar chart so that each stack would correspond to App while the Y axis would contain the count of 1 values and the X axis would be Feature. 
It should be similar to this bar chart with the only difference that now I want to see stacked bars and a legend with colors: \n```py\ndf_c = df.iloc[:, 1:].eq(1).sum().rename_axis('Feature').reset_index(name='Count')\ndf_c = df_c.sort_values('Count')\nplt.figure(figsize=(12,8))\nax = sns.barplot(x=\"Feature\", y='Count', data=df_c, palette=sns.color_palette(\"GnBu\", 10))\nplt.xticks(rotation='vertical')\nax.grid(b=True, which='major', color='#d3d3d3', linewidth=1.0)\nax.grid(b=True, which='minor', color='#d3d3d3', linewidth=0.5)\nplt.show()\n```","response":"You could use pandas plot as @Bharath suggests: \n```\nimport seaborn as sns\nsns.set()\ndf.set_index('App').T.plot(kind='bar', stacked=True)\n```\nOutput: Updated: \n```\nfrom matplotlib.colors import ListedColormap\n\ndf.set_index('App')\\\n  .reindex_axis(df.set_index('App').sum().sort_values().index, axis=1)\\\n  .T.plot(kind='bar', stacked=True,\n          colormap=ListedColormap(sns.color_palette(\"GnBu\", 10)),\n          figsize=(12,6))\n```\nUpdated Pandas 0.21.0+ reindex_axis is deprecated, use reindex \n```\nfrom matplotlib.colors import ListedColormap\n\ndf.set_index('App')\\\n  .reindex(df.set_index('App').sum().sort_values().index, axis=1)\\\n  .T.plot(kind='bar', stacked=True,\n          colormap=ListedColormap(sns.color_palette(\"GnBu\", 10)), \n          figsize=(12,6))\n```\nOutput:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47138271\/how-to-create-a-stacked-bar-chart-for-my-dataframe-using-seaborn","best_answers_votes":87,"tags":["python","pandas","matplotlib","seaborn"],"question_length":2122,"response_length":742,"tag_count":4}
{"question":"Draw graph in NetworkX I'm trying to draw any graph in NetworkX, but get nothing, not even errors: \n```\nimport networkx as nx\nimport matplotlib.pyplot as plt\ng1=nx.petersen_graph()\nnx.draw(g1)\n```","response":"Add to the end: \n```\nplt.show()\n```\n```\nimport networkx as nx\nimport matplotlib.pyplot as plt\ng1 = nx.petersen_graph()\nnx.draw(g1)\nplt.show()\n```\nWhen run from an interactive shell where plt.ion() has been called, the plt.show() is not needed. This is probably why it is omitted in a lot of examples. If you run these commands from a script (where plt.ion() has not been called), the plt.show() is needed. plt.ion() is okay for interactive sessions, but is not recommended for scripts.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19212979\/draw-graph-in-networkx","best_answers_votes":75,"tags":["python","matplotlib","ipython","jupyter-notebook","networkx"],"question_length":196,"response_length":485,"tag_count":5}
{"question":"Plot with fewer markers than data points (or a better way to plot CDFs?) [matplotlib, or general plotting help] I am plotting Cumulative Distribution Functions, with a large number of data points. I am plotting a few lines on the same plot, which are identified with markers as it will be printed in black and white. What I would like are markers evenly spaced in the x-dimension. What I am getting is one marker per data point (and given the number of points, they all overlap). I'm not sure if it's my understanding of how to plot well, or just a lack of understanding of matplotlib. I can't find a 'marker frequency' setting. An easy solution for one line would be to take every N'th value from the line, and use that as a separate line with linestyle='', but I would like the markers to be vertically aligned, and the different x arrays have different lengths. 
\n```\n# in reality, many thousands of values\nx_example = [ 567, 460, 66, 1034, 275, 26, 628, 99, 287, 157, 705, 421, 1093, \\ \n 139, 204, 14, 240, 179, 94, 139, 645, 670, 47, 520, 891, 450, 56, 964, \\\n 1728, 99, 277, 356, 1628, 745, 364, 88, 112, 810, 816, 523, 401, 89, \\ \n 278, 917, 370, 53, 39, 90, 853, 356 ] \nx = sort(x_example)\ny = linspace(0,1,len(x))\n\nax = subplot(1,1,1)\nplots[w] = ax.plot(x,y, marker='o')\n```","response":"You can do plot(x,y,marker='o',markevery=5) to mark every fifth point, but I don't think there is any built-in support for setting marks at even intervals. You could decide on the x locations where you want the marks, use e.g. numpy.searchsorted to find which data points the locations fall between, and then interpolate between the neighboring points to find the y coordinates.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2040306\/plot-with-fewer-markers-than-data-points-or-a-better-way-to-plot-cdfs-matplo","best_answers_votes":76,"tags":["matplotlib","distribution","plot","marker","cumulative-frequency"],"question_length":1296,"response_length":378,"tag_count":5} {"question":"Plot histogram with colors taken from colormap I want to plot a simple 1D histogram where the bars should follow the color-coding of a given colormap. Here's an MWE: \n```\nimport numpy as n\nimport matplotlib.pyplot as plt\n\n# Random gaussian data.\nNtotal = 1000\ndata = 0.05 * n.random.randn(Ntotal) + 0.5\n\n# This is the colormap I'd like to use.\ncm = plt.cm.get_cmap('RdYlBu_r')\n\n# Plot histogram.\nn, bins, patches = plt.hist(data, 25, normed=1, color='green')\n\nplt.show()\n```\nwhich outputs this: Instead of the color being green for the entire histogram, I'd like the columns to follow a color-coding given by the colormap defined in cm and the values of the bins. This would mean that bins closer to zero (not in height but in position) should look bluer and those closer to one redder, according to the chosen colormap RdYlBu_r. Since plt.histo doesn't take a cmap argument I don't know how to tell it to use the colormap defined in cm.","response":"The hist command returns a list of patches, so you can iterate over them and set their color like so: \n```\nimport numpy as n\nimport matplotlib.pyplot as plt\n\n# Random gaussian data.\nNtotal = 1000\ndata = 0.05 * n.random.randn(Ntotal) + 0.5\n\n# This is the colormap I'd like to use.\ncm = plt.cm.get_cmap('RdYlBu_r')\n\n# Plot histogram.\nn, bins, patches = plt.hist(data, 25, normed=1, color='green')\nbin_centers = 0.5 * (bins[:-1] + bins[1:])\n\n# scale values to interval [0,1]\ncol = bin_centers - min(bin_centers)\ncol \/= max(col)\n\nfor c, p in zip(col, patches):\n plt.setp(p, 'facecolor', cm(c))\n\nplt.show()\n```\nTo get the colors, you need to call the colormap with a value between 0 and 1. Resulting figure:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23061657\/plot-histogram-with-colors-taken-from-colormap","best_answers_votes":55,"tags":["python","matplotlib","histogram"],"question_length":937,"response_length":705,"tag_count":3} {"question":"How to remove outline of circle marker when using pyplot.plot in matplotlib I'm producing a scatter plot using pyplot.plot (instead of scatter - I'm having difficulties with the colormap) I am plotting using the 'o' marker to get a circle, but the circle always has a black outline. 
How do I remove the outline, or adjust its colour?","response":"To remove the outline of a marker, and adjust its color, use markeredgewidth (aka mew), and markeredgecolor (aka mec) respectively. Using this as a guide: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(0, 5, 0.1)\ny = np.sin(x)\n\nplt.plot(x,\n y,\n color='blue',\n marker='o',\n fillstyle='full',\n markeredgecolor='red',\n markeredgewidth=0.0)\n```\nThis produces: As you notice, even though the marker edge color is set, because the width of it is set to zero it doesn't show up.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28403179\/how-to-remove-outline-of-circle-marker-when-using-pyplot-plot-in-matplotlib","best_answers_votes":55,"tags":["python","matplotlib"],"question_length":333,"response_length":543,"tag_count":2} {"question":"matplotlib Axes.plot() vs pyplot.plot() What is the difference between the Axes.plot() and pyplot.plot() methods? Does one use another as a subroutine? It seems that my options for plotting are \n```\nline = plt.plot(data)\n```\nor \n```\nax = plt.axes()\nline = ax.plot(data)\n```\nor even \n```\nfig = plt.figure()\nax = fig.add_axes([0,0,1,1])\nline = ax.plot(data)\n```\nAre there situations where it is preferable to use one over the other?","response":"For drawing a single plot, the best practice is probably \n```\nfig = plt.figure()\nplt.plot(data)\nfig.show()\n```\nNow, lets take a look in to 3 examples from the question and explain what they do. Takes the current figure and axes (if none exists it will create a new one) and plot into them. \n```\nline = plt.plot(data)\n```\nIn your case, the behavior is same as before with explicitly stating the axes for plot. \n```\nax = plt.axes()\n line = ax.plot(data)\n```\nThis approach of using ax.plot(...) is a must, if you want to plot into multiple axes (possibly in one figure). For example when using a subplots. Explicitly creates new figure - you will not add anything to previous one. Explicitly creates a new axes with given rectangle shape and the rest is the same as with 2. \n```\nfig = plt.figure()\n ax = fig.add_axes([0,0,1,1])\n line = ax.plot(data)\n```\npossible problem using figure.add_axes is that it may add a new axes object to the figure, which will overlay the first one (or others). This happens if the requested size does not match the existing ones.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43482191\/matplotlib-axes-plot-vs-pyplot-plot","best_answers_votes":32,"tags":["python","matplotlib","plot"],"question_length":430,"response_length":1056,"tag_count":3} {"question":"Adjust Axes3D label positioning I am having trouble with axes labels overlapping ticks labels in matplotlib. I've tried to reposition the labels \"manually\" by applying transforms or by calling set_y(), but no avail. Here's a snippet that reproduces the problem: \n```\nimport matplotlib\nmatplotlib.use(\"TKAGG\")\nimport matplotlib.pyplot as pyplot\nimport mpl_toolkits.mplot3d\n\nfigure = pyplot.figure()\nfigure.subplots_adjust(bottom=0.25, top=0.75)\naxes = figure.gca(projection='3d')\nxLabel = axes.set_xlabel('XXX xxxxxx xxxx x xx x')\nyLabel = axes.set_ylabel('YY (y) yyyyyy')\nzLabel = axes.set_zlabel('Z zzzz zzz (z)')\nplot = axes.plot([1,2,3],[1,2,3])\n\npyplot.show()\n```\nNote how the x and y labels clash with the ticks. Can I solve this elegantly ?","response":"I share your frustration. 
I worked on it for a good half hour and got nowhere. The docs say set_xlabel takes an arg labelpad but I get an error (AttributeError: Unknown property labelpad)! Setting it after the fact doesn't do anything, on xaxis or w_xaxis. Here's a crude workaround: \n```\nimport matplotlib\nmatplotlib.use(\"TKAGG\")\nimport matplotlib.pyplot as pyplot\nimport mpl_toolkits.mplot3d\n\nfigure = pyplot.figure(figsize=(8,4), facecolor='w')\nax = figure.gca(projection='3d')\n\nxLabel = ax.set_xlabel('\nXXX xxxxxx xxxx x xx x', linespacing=3.2)\nyLabel = ax.set_ylabel('\nYY (y) yyyyyy', linespacing=3.1)\nzLabel = ax.set_zlabel('\nZ zzzz zzz (z)', linespacing=3.4)\nplot = ax.plot([1,2,3],[1,2,3])\nax.dist = 10\n\npyplot.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5525782\/adjust-axes3d-label-positioning","best_answers_votes":55,"tags":["python","matplotlib","matplotlib-3d"],"question_length":746,"response_length":732,"tag_count":3}
{"question":"Matplotlib, horizontal bar chart (barh) is upside-down TL;DR, the vertical bar charts are shown in a conventional way -- things line up from left to right. However, when it is converted to a horizontal bar chart (from bar to barh), everything is upside-down. I.e., for a grouped bar chart, not only is the order of the grouped bars wrong, the order of each group is wrong as well. For example, the graph from http:\/\/dwheelerau.com\/2014\/05\/28\/pandas-data-analysis-new-zealanders-and-their-sheep\/ If you look closely, you will find that the bars and legend are in reverse order -- Beef shows on top in legend but on bottom in the graph. As the simplest demo, I changed kind='bar' to kind='barh' in this graph https:\/\/plot.ly\/pandas\/bar-charts\/#pandas-grouped-bar-chart and the result looks like this: https:\/\/plot.ly\/7\/~xpt\/ I.e., the bars in the horizontal grouped bar chart are ordered upside-down. How to fix it? EDIT: @Ajean, it is actually not only the order of the grouped bars that is wrong, the order of each group is wrong as well. The graph from Simple customization of matplotlib\/pandas bar chart (labels, ticks, etc.) shows it clearly: We can see that the order is unconventional too, because people would expect the graph to be top-down, with \"AAA\" at the top, not the bottom. If you search for \"Excel upside-down\", you will find people complaining about this in Excel all over the place. Microsoft Excel has a fix for it; do Matplotlib\/Pandas\/Seaborn\/Plotly\/etc. have a fix for it?","response":"I believe the joint wrong order of groups and subgroups boils down to a single feature: that the y axis increases upwards, as in a usual plot. 
Try reversing the y axis of your axes as in this pandas-less example: \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = range(5)\ny = np.random.randn(5)\n\n# plot 1: bar\nplt.figure()\nplt.bar(x, y)\n\n# plot 2: barh, wrong order\nplt.figure()\nplt.barh(x, y)\n\n# plot 3: barh with correct order: top-down y axis\nplt.figure()\nplt.barh(x, y)\nplt.gca().invert_yaxis()\n\nplt.show()\n```\nSpecifically for pandas, pandas.DataFrame.plot and its various plotting submethods return a matplotlib axes object, so you can invert its y axis directly: \n```py\nax = df.plot.barh() # or df.plot(), or similar\nax.invert_yaxis()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34076177\/matplotlib-horizontal-bar-chart-barh-is-upside-down","best_answers_votes":100,"tags":["python","pandas","matplotlib","bar-chart"],"question_length":1501,"response_length":759,"tag_count":4}
{"question":"Plot a bar plot from a Pandas DataFrame Assuming I have a DataFrame that looks like this: \n```none\nHour V1 V2 A1 A2\n 0 15 13 25 37\n 1 26 52 21 45\n 2 18 45 45 25\n 3 65 38 98 14\n```\nI'm trying to create a bar plot to compare columns V1 and V2 by the Hour. When I do: \n```py\nimport matplotlib.pyplot as plt\nax = df.plot(kind='bar', title =\"V comp\",figsize=(15,10),legend=True, fontsize=12)\nax.set_xlabel(\"Hour\",fontsize=12)\nax.set_ylabel(\"V\",fontsize=12)\n```\nI get a plot and a legend with all the columns' values and names. How can I modify my code so the plot and legend only display the columns V1 and V2?","response":"To plot just a selection of your columns you can select the columns of interest by passing a list to the subscript operator: \n```\nax = df[['V1','V2']].plot(kind='bar', title =\"V comp\", figsize=(15, 10), legend=True, fontsize=12)\n```\nWhat you tried was df['V1','V2']; this will raise a KeyError since, correctly, no column exists with that label. Although it looks funny at first, you have to consider that you are passing a list, hence the double square brackets [[]]. \n```\nimport matplotlib.pyplot as plt\nax = df[['V1','V2']].plot(kind='bar', title =\"V comp\", figsize=(15, 10), legend=True, fontsize=12)\nax.set_xlabel(\"Hour\", fontsize=12)\nax.set_ylabel(\"V\", fontsize=12)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29498652\/plot-a-bar-plot-from-a-pandas-dataframe","best_answers_votes":84,"tags":["python","pandas","dataframe","matplotlib","plot"],"question_length":634,"response_length":679,"tag_count":5}
{"question":"what does axes.flat in matplotlib do? I have seen various programs using matplotlib that use the axes.flat function, like this code: \n```\nfor i, ax in enumerate(axes.flat):\n```\nwhat does this do?","response":"Let's look at a minimal example, where we create some axes with plt.subplots, also see this question, \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(ncols=2,nrows=3, sharex=True, sharey=True)\n\nfor i, ax in enumerate(axes.flat):\n    ax.scatter([i\/\/2+1, i],[i,i\/\/3])\n\nplt.show()\n```\nHere, axes is a numpy array of axes, \n```\nprint(type(axes))\n> <class 'numpy.ndarray'>\nprint(axes.shape)\n> (3L, 2L)\n```\naxes.flat is not a function, it's an attribute of the numpy.ndarray: numpy.ndarray.flat ndarray.flat A 1-D iterator over the array. This is a numpy.flatiter instance, which acts similarly to, but is not a subclass of, Python's built-in iterator object. 
Example: \n```\nimport numpy as np\n\na = np.array([[2,3],\n              [4,5],\n              [6,7]])\n\nfor i in a.flat:\n    print(i)\n```\nwhich would print the numbers 2 3 4 5 6 7. Being an iterator over the array, you can use it to loop over all the axes from the 3x2 array of axes, \n```\nfor i, ax in enumerate(axes.flat):\n```\nFor each iteration it would yield the next axes from that array, such that you may easily plot to all axes in a single loop. An alternative would be to use axes.flatten(), where flatten() is a method of the numpy array. Instead of an iterator, it returns a flattened version of the array: \n```\nfor i, ax in enumerate(axes.flatten()):\n```\nThere is no difference seen from the outside between the two. However an iterator does not actually create a new array and may hence be slightly faster (although this will never be noticeable in the case of matplotlib axes objects). \n```\nflat1 = [ax for ax in axes.flat]\nflat2 = axes.flatten()\nprint(flat1 == flat2)\n> [ True True True True True True]\n```\nIterating a flattened version of the axes array has the advantage that you will save one loop, compared to the naive approach of iterating over rows and columns separately, \n```\nfor row in axes:\n    for ax in row:\n        ax.scatter(...)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/46862861\/what-does-axes-flat-in-matplotlib-do","best_answers_votes":79,"tags":["python","matplotlib"],"question_length":196,"response_length":1933,"tag_count":2}
{"question":"How to create a scatter plot legend with only one symbol for each label? How can I create a scatter plot legend without two symbols showing up each time? I can understand why you'd want this when you're joining symbols by lines, but for a pure scatter plot, all I want in the legend is one example of the symbol. This plot from a previous stackoverflow post shows the kind of thing I mean:","response":"In the legend command you can use the scatterpoints option: \n```\nax.legend(loc=0, scatterpoints = 1)\n```\nFor a normal plot, it is the option numpoints. Here you can find more information about the keyword arguments for the legend: http:\/\/matplotlib.sourceforge.net\/api\/pyplot_api.html#matplotlib.pyplot.legend","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6099799\/how-to-create-a-scatter-plot-legend-with-only-one-symbol-for-each-label","best_answers_votes":72,"tags":["matplotlib","legend"],"question_length":389,"response_length":309,"tag_count":2}
{"question":"Labels for clustermap in seaborn I have several questions about labeling for clustermap in seaborn. First, is it possible to extract the distance values for the hierarchical clustering, and plot the values on the tree structure visualization (maybe only the first three levels)? Here is my example code for creating a clustermap plot: \n```\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nget_ipython().magic(u'matplotlib inline')\n\nm = np.random.rand(50, 50)\ndf = pd.DataFrame(m, columns=range(4123, 4173), index=range(4123, 4173))\nsns.clustermap(df, metric=\"correlation\")\n```\nThe other two questions are: - How to rotate the y labels since they overlap. - How to move the color bar to the bottom or right. (There was a question for heatmap, but it does not work for my case. Also it does not address the color bar position)","response":"I had the exact same issue with the labels on the y-axis being rotated and found a solution. 
The issue is that if you do plt.yticks(rotation=0) like suggested in the question you referenced, it will rotate the labels on your colorbar due to the way ClusterGrid works. To solve it and rotate the right labels, you need to reference the Axes from the underlying Heatmap and rotate these: \n```\ncg = sns.clustermap(df, metric=\"correlation\")\nplt.setp(cg.ax_heatmap.yaxis.get_majorticklabels(), rotation=0)\n```\nFor your other question about the colorbar placement, I don't think this is supported at the moment, as indicated by this Github issue unfortunately. And finally for the hierarchical clustering distance values, you can access the linkage matrices for rows or columns with: \n```\ncg = sns.clustermap(df, metric=\"correlation\")\ncg.dendrogram_col.linkage # linkage matrix for columns\ncg.dendrogram_row.linkage # linkage matrix for rows\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34572177\/labels-for-clustermap-in-seaborn","best_answers_votes":59,"tags":["python","matplotlib","seaborn"],"question_length":842,"response_length":937,"tag_count":3}
{"question":"How to set my xlabel at the end of xaxis I want my x axis to have its label in this format \n```\n0 1 2 3 4 5 Xlabel\n```\nbut when I try the code below it results in 2 lines \n```\nself.axes.set_xticks(np.arange(0,6,1))\nself.axes.set_xlabel('Xlabel', fontsize=9,x=1,y=1)\n```\n=> my result :( \n```\n0 1 2 3 4 5 \n            Xlabel\n```","response":"When setting the xlabel, the x parameter assigns the position in axis units, so 0 is the origin and 1 is the right edge of the plot. y is ignored as it's expected to be a default value, just below the tick marks. To override this behavior, you can set the position in axis units using the Axis set_label_coords method. You can use other units by also providing a transform. Here is an example of this: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nax = plt.gca()\nax.set_xticks(np.arange(0,6,1))\nlabel = ax.set_xlabel('Xlabel', fontsize = 9)\nax.xaxis.set_label_coords(1.05, -0.025)\n\nplt.savefig('labelAtEnd.png')\nplt.show()\n```\nResulting in: The x value (1.05) was chosen to position the label outside the Axes frame. The y value (-0.025) was chosen as a best guess to the position you desired. Using a transform, it might be possible to automatically position the text in line with the Tick labels. EDIT: Here's an extended example using a transform. It is not necessarily more helpful to use the last ticklabel's transform, because it does not take into account the size of the text and how it is aligned. So to get a somewhat desired effect, I had to 1) use the same font size for my x label, 2) position the vertical alignment (va) to 'top', and 3) position the horizontal alignment to 'left'. The transform for each tick is set for data units for x (because it's an xaxis) and axis units for y (0 to 1), but displaced by a fixed padding (in pixels) from the x axis. 
\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nax = plt.gca()\nax.set_xticks(np.arange(0,6,1))\nax.set_yticks(np.arange(0,6,1))\nlabel = ax.set_xlabel('xlabel', ha='left', va = 'top', )#fontsize = 9)\n\n# need to draw the figure first to position the tick labels\nfig = plt.gcf()\nfig.draw(fig.canvas.get_renderer())\n\n# get a tick and will position things next to the last one\nticklab = ax.xaxis.get_ticklabels()[0]\ntrans = ticklab.get_transform()\nax.xaxis.set_label_coords(5.1, 0, transform=trans)\n\nplt.savefig('labelAtEnd2.png')\nplt.show()\n```\nThis results in:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9290938\/how-to-set-my-xlabel-at-the-end-of-xaxis","best_answers_votes":73,"tags":["python","matplotlib"],"question_length":313,"response_length":2045,"tag_count":2} {"question":"How to show an AxesSubplot in Python? I have an object fig2 that is a class mathplotlib.axes.axessubplot, but when I try to execute fig2.show(), python says axessubplot object has no attribute show. How can I show AxesSubplot?","response":"You should call matplotlib.pyplot.show(), which is a method that displays all the figures. If you have imported as plt, then: \n```\nimport matplotlib.pyplot as plt\n\n# create fig1 (of type plt.figure)\n# create fig2\n\nplt.show() # will display fig1 and fig2 in different windows\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26540567\/how-to-show-an-axessubplot-in-python","best_answers_votes":69,"tags":["python","matplotlib"],"question_length":226,"response_length":279,"tag_count":2} {"question":"Matplotlib imshow() stretch to \"fit width\" I've got an image, and a measure associated with each column of its pixels. I'm using pyplot to create a figure with the image on top, and a plot of the column measurements below. I'm using something like this: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nA = np.random.rand(34*52).reshape(34,52)\nmeans = np.average(A,axis=0)\n\nplt.figure()\n\nplt.subplot(2,1,1)\nplt.imshow(A, interpolation='nearest' )\n\nplt.subplot(2,1,2)\nplt.plot(means)\n\nplt.show()\n```\nHow can I stretch the image's width to the match that of the plots. That way, when looking at the measurements in the plot, the souce pixels will be in a column directly above it.","response":"Turns out that it's as simple as giving aspect='auto' to the imshow call. \n```\nplt.imshow(A, interpolation='nearest', aspect='auto')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12806481\/matplotlib-imshow-stretch-to-fit-width","best_answers_votes":94,"tags":["python","matplotlib"],"question_length":686,"response_length":136,"tag_count":2} {"question":"Different colors for points and line in Seaborn regplot All examples listed in Seaborn's regplot documentation show the same color for dots and the regression line. Changing the color argument changes both. How can one set a different color for the points as the line?","response":"You are right in that the color argument changes all the plot elements. However, if you read the last bit of the relevant sentence in the documentation: color : matplotlib color Color to apply to all plot elements; will be superseded by colors passed in scatter_kws or line_kws. Therefore, using scatter_kws or line_kws we can change the color of them individually. 
Taking the first example given in the documentation: \n```\nimport seaborn as sns\n\ntips = sns.load_dataset(\"tips\")\nax = sns.regplot(x=\"total_bill\", y=\"tip\", data=tips,\n scatter_kws={\"color\": \"black\"}, line_kws={\"color\": \"red\"})\n\nplt.show()\n```\nGives:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/48145924\/different-colors-for-points-and-line-in-seaborn-regplot","best_answers_votes":79,"tags":["python","matplotlib","seaborn"],"question_length":268,"response_length":630,"tag_count":3} {"question":"how to plot and annotate hierarchical clustering dendrograms in scipy\/matplotlib I'm using dendrogram from scipy to plot hierarchical clustering using matplotlib as follows: \n```\nmat = array([[1, 0.5, 0.9],\n [0.5, 1, -0.5],\n [0.9, -0.5, 1]])\nplt.subplot(1,2,1)\nplt.title(\"mat\")\ndist_mat = mat\nlinkage_matrix = linkage(dist_mat,\n \"single\")\nprint \"linkage2:\"\nprint linkage(1-dist_mat, \"single\")\ndendrogram(linkage_matrix,\n color_threshold=1,\n labels=[\"a\", \"b\", \"c\"],\n show_leaf_counts=True)\nplt.subplot(1,2,2)\nplt.title(\"1 - mat\")\ndist_mat = 1 - mat\nlinkage_matrix = linkage(dist_mat,\n \"single\")\ndendrogram(linkage_matrix,\n color_threshold=1,\n labels=[\"a\", \"b\", \"c\"],\n show_leaf_counts=True)\n```\nMy questions are: first, why does mat and 1-mat give identical clusterings here? and second, how can I annotate the distance along each branch of the tree using dendrogram so that the distances between pairs of nodes can be compared? finally it seems that show_leaf_counts flag is ignored, is there a way to turn it on so that the number of objects in each class is shown? thanks.","response":"The input to linkage() is either an n x m array, representing n points in m-dimensional space, or a one-dimensional array containing the condensed distance matrix. In your example, mat is 3 x 3, so you are clustering three 3-d points. Clustering is based on the distance between these points. Why does mat and 1-mat give identical clusterings here? The arrays mat and 1-mat produce the same clustering because the clustering is based on distances between the points, and neither a reflection (-mat) nor a translation (mat + offset) of the entire data set change the relative distances between the points. How can I annotate the distance along each branch of the tree using dendrogram so that the distances between pairs of nodes can be compared? In the code below, I show how you can use the data returned by dendrogram to label the horizontal segments of the diagram with the corresponding distance. The values associated with the keys icoord and dcoord give the x and y coordinates of each three-segment inverted-U of the figure. In augmented_dendrogram this data is used to add a label of the distance (i.e. y value) of each horizontal line segment in dendrogram. \n```\nfrom scipy.cluster.hierarchy import dendrogram\nimport matplotlib.pyplot as plt\n\n\ndef augmented_dendrogram(*args, **kwargs):\n\n ddata = dendrogram(*args, **kwargs)\n\n if not kwargs.get('no_plot', False):\n for i, d in zip(ddata['icoord'], ddata['dcoord']):\n x = 0.5 * sum(i[1:3])\n y = d[1]\n plt.plot(x, y, 'ro')\n plt.annotate(\"%.3g\" % y, (x, y), xytext=(0, -8),\n textcoords='offset points',\n va='top', ha='center')\n\n return ddata\n```\nFor your mat array, the augmented dendrogram is So point 'a' and 'c' are 1.01 units apart, and point 'b' is 1.57 units from the cluster ['a', 'c']. 
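As a cross-check - a minimal sketch, reusing the 3x3 mat from the question - the same distances can be read straight from the third column of the array returned by linkage(), without touching the plot at all: \n```\nfrom scipy.cluster.hierarchy import linkage\n\nlinkage_matrix = linkage(mat, 'single')\n# each row of the linkage matrix is [cluster_i, cluster_j, distance, size]\nfor row in linkage_matrix:\n    print('merge at distance %.3g' % row[2])\n```\nThese are exactly the values that the annotations above display. 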
It seems that show_leaf_counts flag is ignored, is there a way to turn it on so that the number of objects in each class is shown? The flag show_leaf_counts only applies when not all the original data points are shown as leaves. For example, when truncate_mode = \"lastp\", only the last p nodes are shown. Here's an example with 100 points: \n```\nimport numpy as np\nfrom scipy.cluster.hierarchy import linkage\nimport matplotlib.pyplot as plt\nfrom augmented_dendrogram import augmented_dendrogram\n\n\n# Generate a random sample of `n` points in 2-d.\nnp.random.seed(12312)\nn = 100\nx = np.random.multivariate_normal([0, 0], np.array([[4.0, 2.5], [2.5, 1.4]]),\n                                  size=(n,))\n\nplt.figure(1, figsize=(6, 5))\nplt.clf()\nplt.scatter(x[:, 0], x[:, 1])\nplt.axis('equal')\nplt.grid(True)\n\nlinkage_matrix = linkage(x, \"single\")\n\nplt.figure(2, figsize=(10, 4))\nplt.clf()\n\nplt.subplot(1, 2, 1)\nshow_leaf_counts = False\nddata = augmented_dendrogram(linkage_matrix,\n               color_threshold=1,\n               p=6,\n               truncate_mode='lastp',\n               show_leaf_counts=show_leaf_counts,\n               )\nplt.title(\"show_leaf_counts = %s\" % show_leaf_counts)\n\nplt.subplot(1, 2, 2)\nshow_leaf_counts = True\nddata = augmented_dendrogram(linkage_matrix,\n               color_threshold=1,\n               p=6,\n               truncate_mode='lastp',\n               show_leaf_counts=show_leaf_counts,\n               )\nplt.title(\"show_leaf_counts = %s\" % show_leaf_counts)\n\nplt.show()\n```\nThese are the points in the data set: With p=6 and truncate_mode=\"lastp\", dendrogram only shows the \"top\" of the dendrogram. The following shows the effect of show_leaf_counts.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11917779\/how-to-plot-and-annotate-hierarchical-clustering-dendrograms-in-scipy-matplotlib","best_answers_votes":70,"tags":["python","numpy","matplotlib","scipy","dendrogram"],"question_length":1206,"response_length":3535,"tag_count":5}
{"question":"How to get default blue colour of matplotlib.pyplot.scatter? How do I get the shade of blue that is used as default in matplotlib.pyplot.scatter? When giving the keyword argument c='b', it gives a darker shade of blue. In this documentation of matplotlib.pyplot.scatter, it says the default is supposed to be 'b', yet it looks different. See example below: \n```\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots()\nax.scatter(-1, 0)\nax.text(-1, 0, 'Default blue')\nax.scatter(1, 0, c='b')\nax.text(1, 0, 'Darker blue')\nax.set_xlim(-2, 2)\n```\nI'm using Python 3.5 with Matplotlib 2.0.0. The reason why I'm asking this, is because I would like to use the same blue colour when plotting some of the points one by one with plt.plot().","response":"The default colour cycle was changed in matplotlib version 2 as shown in the docs. 
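A quick way to check which colours your install actually cycles through - a minimal sketch relying only on rcParams - is to print the property cycle itself: \n```\nimport matplotlib.pyplot as plt\n\n# the first entry is the default; on matplotlib >= 2.0 it is '#1f77b4'\nprint(plt.rcParams['axes.prop_cycle'].by_key()['color'])\n```\n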
Therefore, to plot the \"new\" default blue you can do 2 things: \n```\nfig, ax = plt.subplots()\n\nax.scatter(-1, 1)\nax.text(-0.9, 1, 'Default blue')\n\nax.scatter(1, 1, c='#1f77b4')\nax.text(1.1, 1, 'Using hex value')\n\nax.scatter(0, 0.5, c='C0')\nax.text(0.1, 0.5, 'Using \"C0\" notation')\n\nax.set_xlim(-2, 3)\nax.set_ylim(-1,2)\nplt.show()\n```\nWhich gives: Alternatively you can change the colour cycle back to what it was: \n```\nimport matplotlib as mpl\nfrom cycler import cycler\n\nmpl.rcParams['axes.prop_cycle'] = cycler(color='bgrcmyk')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47074423\/how-to-get-default-blue-colour-of-matplotlib-pyplot-scatter","best_answers_votes":69,"tags":["python","matplotlib"],"question_length":732,"response_length":614,"tag_count":2}
{"question":"How do I show logarithmically spaced grid lines at all ticks on a log-log plot using Matplotlib? I'm trying to plot a log-log graph that shows logarithmically spaced grid lines at all of the ticks that you see along the bottom and left hand side of the plot. I've been able to show some gridlines by using matplotlib.pyplot.grid(True), but this is only showing grid lines for me at power of 10 intervals. So as an example, here is what I'm currently getting: I'd really like something with grid lines looking more like this, where the gridlines aren't all evenly spaced: How would I go about achieving this in Matplotlib?","response":"Basically, you just need to put in the parameter which=\"both\" in the grid command so that it becomes: \n```\nmatplotlib.pyplot.grid(True, which=\"both\")\n```\nOther options for which are 'minor' and 'major': the major ticks are the ones shown in your graph, and the minor ticks are the ones you are missing. If you want solid lines then you can use ls=\"-\" as a parameter to grid() as well. Here is an example for kicks: \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\nx = np.arange(0, 100, .5)\ny = 2 * x**3\n\nplt.loglog(x, y)\nplt.grid(True, which=\"both\", ls=\"-\")\nplt.show()\n```\nwhich generates: More details on the Matplotlib Docs","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3590905\/how-do-i-show-logarithmically-spaced-grid-lines-at-all-ticks-on-a-log-log-plot-u","best_answers_votes":65,"tags":["grid","matplotlib"],"question_length":621,"response_length":638,"tag_count":2}
{"question":"Axes class - set explicitly size (width\/height) of axes in given units I want to create a figure using matplotlib where I can explicitly specify the size of the axes, i.e. I want to set the width and height of the axes bbox. I have looked around all over and I cannot find a solution for this. What I typically find is how to adjust the size of the complete Figure (including ticks and labels), for example using fig, ax = plt.subplots(figsize=(w, h)). This is very important for me as I want to have a 1:1 scale of the axes, i.e. 1 unit on paper is equal to 1 unit in reality. For example, if xrange is 0 to 10 with major tick = 1 and x axis is 10cm, then 1 major tick = 1cm. I will save this figure as pdf to import it to a latex document. This question brought up a similar topic but the answer does not solve my problem (using plt.gca().set_aspect('equal', adjustable='box') code) From this other question I see that it is possible to get the axes size, but not how to modify them explicitly. 
Any ideas how I can set the axes box size and not just the figure size. The figure size should adapt to the axes size. Thanks! For those familiar with pgfplots in latex, it will like to have something similar to the scale only axis option (see here for example).","response":"The axes size is determined by the figure size and the figure spacings, which can be set using figure.subplots_adjust(). In reverse this means that you can set the axes size by setting the figure size taking into acount the figure spacings: \n```\nimport matplotlib.pyplot as plt\n\ndef set_size(w,h, ax=None):\n \"\"\" w, h: width, height in inches \"\"\"\n if not ax: ax=plt.gca()\n l = ax.figure.subplotpars.left\n r = ax.figure.subplotpars.right\n t = ax.figure.subplotpars.top\n b = ax.figure.subplotpars.bottom\n figw = float(w)\/(r-l)\n figh = float(h)\/(t-b)\n ax.figure.set_size_inches(figw, figh)\n \nfig, ax=plt.subplots()\n\nax.plot([1,3,2])\n\nset_size(5,5)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44970010\/axes-class-set-explicitly-size-width-height-of-axes-in-given-units","best_answers_votes":44,"tags":["python","matplotlib","plot","axis","axes"],"question_length":1261,"response_length":695,"tag_count":5} {"question":"How to increase the font size of the legend in seaborn I have the following codes to create a Seaborn strip plot. I am having a hard time figuring out how to increase the font size of the legend appearing in the plot. \n```py\ng=sns.stripplot(x=\"Market\", y=\"Rate\", hue=\"Group\",data=myBenchmarkData, jitter=True, size=12, alpha=0.5)\ng.axes.set_title(\"4* Rate Market and by Hotel Groups for Year 2016\",fontsize=25)\ng.set_xlabel(\"Market\",fontsize=20)\ng.set_ylabel(\"Rate (in EUR)\",fontsize=20)\ng.tick_params(labelsize=15)\nplt.savefig ('benchmark1.png')\n```\nI am OK with my x-axis and y-axis labels font size but the font size of the legend in my plot is small. How to change it?","response":"Use matplotlib function setp according to this example: \n```\nimport seaborn as sns\nimport matplotlib.pylab as plt\nsns.set_style(\"whitegrid\")\ntips = sns.load_dataset(\"tips\")\n\nax = sns.stripplot(x=\"sex\", y=\"total_bill\", hue=\"day\", data=tips, jitter=True)\nplt.setp(ax.get_legend().get_texts(), fontsize='22') # for legend text\nplt.setp(ax.get_legend().get_title(), fontsize='32') # for legend title\n\nplt.show()\n```\nAnother way is to change font_scale of all graph with plotting_context: http:\/\/seaborn.pydata.org\/generated\/seaborn.plotting_context.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44880444\/how-to-increase-the-font-size-of-the-legend-in-seaborn","best_answers_votes":72,"tags":["python","matplotlib","legend","seaborn"],"question_length":672,"response_length":549,"tag_count":4} {"question":"How to add title and xlabel and ylabel Is there a way to add title (and xlabel and ylabel) to plt.scatter(x,y,...) or plt.plot(x,y,...) directly without writing additional lines? It is easy to add it when we use Series_name.plot in which we simply write Series_name.plot(...,title='name') but it does not work for me if I write: plt.scatter(...,title='name') or plt.plot(...,title='name') [plt<< import matplotlib.pyplot as plt] I am using Python 3.","response":"From the documentation of plt.scatter() there is no such arguments to set the title or labels. But neither does the plt.plot() command have such arguments. 
plt.plot(x,y, title=\"title\") throws an error AttributeError: Unknown property title. So I wonder why this should work in either case. In any case, the usual way to set the title is plt.title. The usual way to set the labels is plt.xlabel and plt.ylabel. \n```\nimport matplotlib.pyplot as plt\n\nx= [8,3,5]; y = [3,4,5]\nplt.scatter(x,y)\nplt.title(\"title\")\nplt.xlabel(\"x-label\")\nplt.ylabel(\"y-label\")\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42223587\/how-to-add-title-and-xlabel-and-ylabel","best_answers_votes":79,"tags":["python","matplotlib","plot"],"question_length":449,"response_length":565,"tag_count":3}
{"question":"Seaborn heatmap not displaying all xticks and yticks I have a pandas dataframe of shape (39, 67). When I plot its seaborn heatmap, I don't get as many labels on the X and Y axes. The .get_xticklabels() method also returns only 23 labels. matplotlib doesn't show any labels (only numbers) as well. Both these heatmaps are for the same dataframe (39, 67).","response":"To ensure the labels are visible, you have to set the parameters xticklabels, yticklabels to True, like so. \n```\nimport seaborn as sns 
\nsns.heatmap(dataframe, xticklabels=True, yticklabels=True)\n```\nHere's the documentation for the heatmap function.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/50754471\/seaborn-heatmap-not-displaying-all-xticks-and-yticks","best_answers_votes":84,"tags":["python","pandas","matplotlib","seaborn"],"question_length":350,"response_length":249,"tag_count":4}
{"question":"Wrapping long y labels in matplotlib tight layout using setp I've been trying to wrap text for long labels in my code. I tried the textwrap method suggested earlier here, but my code defines yticklabels through an array imported from a csv using the pyplot.setp() method. I'm using tight_layout() for the formatting otherwise. So the question is - is there a way to wrap the really long y labels to newlines easily? Here is some sample code that I'd like a fix for: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlabels=('Really really really really really really long label 1', 'Really really really really really really long label 2', 'Really really really really really really long label 3')\nvalues=(30,50,40)\n\nfig = plt.figure()\nax=fig.add_subplot(111)\n\nplt.ylim((0,40))\nfor i in np.arange(3):\n    plt.barh(15*i, values[i])\n\nplt.yticks(15*np.arange(3))\nplt.setp(ax.set_yticklabels(labels))\n\nplt.tight_layout()\nplt.show()\n```\nThis plots something like this. I'd like the labels to go to newlines after a fixed width. Any ideas?","response":"I have tried using textwrap on the labels and it works for me. \n```\nfrom textwrap import wrap\nlabels=['Really really really really really really long label 1',\n        'Really really really really really really long label 2',\n        'Really really really really really really long label 3']\nlabels = [ '\n'.join(wrap(l, 20)) for l in labels ]\n```\nInserting this in your code gives us:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15740682\/wrapping-long-y-labels-in-matplotlib-tight-layout-using-setp","best_answers_votes":73,"tags":["python","matplotlib"],"question_length":1039,"response_length":385,"tag_count":2}
{"question":"Remove colorbar from figure This should be easy but I'm having a hard time with it. 
Basically, I have a subplot in matplotlib that I'm drawing a hexbin plot in every time a function is called, but every time I call the function I get a new colorbar, so what I'd really like to do is update the colorbar. Unfortunately, this doesn't seem to work since the object the colorbar is attached to is being recreated by subplot.hexbin. \n```\ndef foo(self):\n self.subplot.clear()\n hb = self.subplot.hexbin(...)\n if self.cb:\n self.cb.update_bruteforce() # Doesn't work (hb is new)\n else:\n self.cb = self.figure.colorbar(hb)\n```\nI'm now in this annoying place where I'm trying to delete the colorbar axes altogether and simply recreate it. Unfortunately, when I delete the colorbar axes, the subplot axes don't reclaim the space, and calling self.subplot.reset_position() isn't doing what I thought it would. \n```\ndef foo(self):\n self.subplot.clear()\n hb = self.subplot.hexbin(...)\n if self.cb:\n self.figure.delaxes(self.figure.axes[1])\n del self.cb\n # TODO: resize self.subplot so it fills the \n # whole figure before adding the new colorbar\n self.cb = self.figure.colorbar(hb)\n```","response":"I think the problem is that with del you cancel the variable, but not the referenced object colorbar. If you want the colorbar to be removed from plot and disappear, you have to use the method remove of the colorbar instance and to do this you need to have the colorbar in a variable, for which you have two options: holding the colorbar in a value at the moment of creation, as shown in other answers e.g. cb=plt.colorbar() retrieve an existing colorbar, that you can do following (and upvoting :)) what I wrote here: How to retrieve colorbar instance from figure in matplotlib then: cb.remove() plt.draw() #update plot Full code and result: \n```\nfrom matplotlib import pyplot as plt \nimport numpy as np\n\nplt.ion() \nplt.imshow(np.random.random(15).reshape((5,3))) \ncb = plt.colorbar() \nplt.savefig('test01.png') \ncb.remove() \nplt.savefig('test02.png')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5263034\/remove-colorbar-from-figure","best_answers_votes":39,"tags":["python","matplotlib","colorbar"],"question_length":1219,"response_length":856,"tag_count":3} {"question":"How to obtain the same font(-style, -size etc.) in matplotlib output as in latex output? I have one .tex-document in which one graph is made by the python module matplotlib. What I want is, that the graph blends in to the document as good as possible. So I want the characters used in the graph to look exactly like the other same characters in the rest of the document. My first try looks like this (the matplotlibrc-file): \n```\ntext.usetex : True\ntext.latex.preamble: \\usepackage{lmodern} #Used in .tex-document\nfont.size : 11.0 #Same as in .tex-document\nbackend: PDF\n```\nFor compiling of the .tex in which the PDF output of matplotlib is included, pdflatex is used. Now, the output looks not bad, but it looks somewhat different, the characters in the graph seem weaker in stroke width. What is the best approach for this? 
EDIT: Minimum example: LaTeX-Input: \n```\n\\documentclass[11pt]{scrartcl}\n\n\\usepackage[T1]{fontenc}\n\\usepackage[utf8]{inputenc}\n\\usepackage{lmodern}\n\\usepackage{graphicx}\n\n\\begin{document}\n\n\\begin{figure}\n\\includegraphics{.\/graph}\n\\caption{Excitation-Energy}\n\\label{fig:graph}\n\\end{figure}\n\n\\end{document}\n```\nPython-Script: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nplt.plot([1,2,3,4])\nplt.xlabel(\"Excitation-Energy\")\nplt.ylabel(\"Intensität\")\nplt.savefig(\"graph.pdf\")\n```\nPDF output:","response":"The difference in the fonts can be caused by incorrect parameter settings when exporting pictures with matplotlib, or by wrong integration of the picture into the final document. I think the problem is in text.latex.preamble: \\usepackage{lmodern}. This thing works very badly, and even the developers do not guarantee its workability, as you can find here. In my case it did not work at all. Minimal differences in the font are associated with the font family. To fix this you need 'font.family' : 'lmodern' in rc. Other options and more detailed settings can be found here. To suppress this problem, I used a slightly different, direct method: plt.rcParams['text.latex.preamble']=[r\"\\usepackage{lmodern}\"]. Strangely enough, it worked. Further information can be found at the link above. To prevent these effects I suggest taking a look at this code: \n```\nimport matplotlib.pyplot as plt\n\n#Direct input \nplt.rcParams['text.latex.preamble']=[r\"\\usepackage{lmodern}\"]\n#Options\nparams = {'text.usetex' : True,\n          'font.size' : 11,\n          'font.family' : 'lmodern',\n          'text.latex.unicode': True,\n          }\nplt.rcParams.update(params) \n\nfig = plt.figure()\n\n#You must select the correct size of the plot in advance\nfig.set_size_inches(3.54,3.54) \n\nplt.plot([1,2,3,4])\nplt.xlabel(\"Excitation-Energy\")\nplt.ylabel(\"Intensität\")\nplt.savefig(\"graph.pdf\", \n            #This is a simple recommendation for publication plots\n            dpi=1000, \n            # Plot will occupy a maximum of available space\n            bbox_inches='tight', \n            )\n```\nAnd finally, move on to the LaTeX: \n```\n\\documentclass[11pt]{scrartcl}\n\n\\usepackage[T1]{fontenc}\n\\usepackage[utf8]{inputenc}\n\\usepackage{lmodern}\n\\usepackage{graphicx}\n\n\\begin{document}\n\n\\begin{figure}\n  \\begin{center}\n    \\includegraphics{.\/graph}\n    \\caption{Excitation-Energy}\n    \\label{fig:graph}\n  \\end{center}\n\\end{figure}\n\n\\end{document}\n```\nResults: As can be seen from a comparison of the two fonts, differences do not exist (1 - MatPlotlib, 2 - pdfLaTeX)","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17687213\/how-to-obtain-the-same-font-style-size-etc-in-matplotlib-output-as-in-latex","best_answers_votes":33,"tags":["python","matplotlib","tex"],"question_length":1328,"response_length":1999,"tag_count":3}
{"question":"Matplotlib - Border around scatter plot points I am following this tutorial. I would like to use Matplotlib to create a scatter plot with points that are colored inside, but have a black border, such as this plot: However, when I copy the code exactly, I get this plot instead. 
Here is the code: \n```\ncolors = ['black', 'blue', 'purple', 'yellow', 'white', 'red', 'lime', 'cyan', 'orange', 'gray']\nfor i in range(len(colors)):\n    x = reduced_data_rpca[:, 0][digits.target == i]\n    y = reduced_data_rpca[:, 1][digits.target == i]\n    plt.scatter(x, y, c=colors[i])\nplt.legend(digits.target_names, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nplt.xlabel('First Principal Component')\nplt.ylabel('Second Principal Component')\nplt.title(\"PCA Scatter Plot\")\nplt.show()\n```\nI tried adjusting the style, but that didn't help.","response":"When you use a scatter plot, you set a color for both face and edge. In the official documentation you can find an additional parameter, edgecolors, which allows setting the edge color. edgecolors : color or sequence of color, optional, default: None If None, defaults to ‘face’ If ‘face’, the edge color will always be the same as the face color. If it is ‘none’, the patch boundary will not be drawn. For non-filled markers, the edgecolors kwarg is ignored and forced to ‘face’ internally. So, after all, you need only plt.scatter(x, y, c=colors[i], edgecolors='black')","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/50706901\/matplotlib-border-around-scatter-plot-points","best_answers_votes":71,"tags":["python","matplotlib"],"question_length":825,"response_length":568,"tag_count":2}
{"question":"How to adjust space between legend markers and labels I want to adjust the space between legend markers and labels. Sometimes the space is too much by default. Does anyone know how to do this? Thanks.","response":"legend() has a kwarg called handletextpad which will do what you are looking for. By default, this is set to 0.8. From the docs: handletextpad : float or None The pad between the legend handle and text. Measured in font-size units. Default is None which will take the value from the legend.handletextpad rcParam. So when you call legend, add that kwarg, and experiment with the value. Something like: \n```\nax.legend(handletextpad=0.1)\n```\nConsider the following: \n```\nimport matplotlib.pyplot as plt\n\nfig, (ax1, ax2) = plt.subplots(ncols=2)\n\nax1.plot(range(5), 'ro', label='handletextpad=0.8')\nax2.plot(range(5), 'bo', label='handletextpad=0.1')\n\nax1.legend()\nax2.legend(handletextpad=0.1)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41827406\/how-to-adjust-space-between-legend-markers-and-labels","best_answers_votes":74,"tags":["python","matplotlib","legend"],"question_length":195,"response_length":708,"tag_count":3}
{"question":"How to set x axis values in matplotlib python? [duplicate] This question already has answers here: Modify tick label text (13 answers) Closed 2 years ago. I want to draw this graph using matplotlib. I wrote the code but it's not changing the x axis values. 
\n```\nimport matplotlib.pyplot as plt\nx = [0.00001,0.001,0.01,0.1,0.5,1,5]\ny = [0.945,0.885,0.893,0.9,0.996,1.25,1.19]\nplt.xlim(0.00001,5)\nplt.ylim(0.8,1.4)\nplt.plot(x, y, marker='o', linestyle='--', color='r', \nlabel='Square') \nplt.xlabel('x')\nplt.ylabel('y') \nplt.title('compare')\nplt.legend() \nplt.show()\n```\nHow I can draw the blue line of the given graph using matplotlib?","response":"The scaling on your example figure is a bit strange but you can force it by plotting the index of each x-value and then setting the ticks to the data points: \n```\nimport matplotlib.pyplot as plt\nx = [0.00001,0.001,0.01,0.1,0.5,1,5]\n# create an index for each tick position\nxi = list(range(len(x)))\ny = [0.945,0.885,0.893,0.9,0.996,1.25,1.19]\nplt.ylim(0.8,1.4)\n# plot the index for the x-values\nplt.plot(xi, y, marker='o', linestyle='--', color='r', label='Square') \nplt.xlabel('x')\nplt.ylabel('y') \nplt.xticks(xi, x)\nplt.title('compare')\nplt.legend() \nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44813601\/how-to-set-x-axis-values-in-matplotlib-python","best_answers_votes":69,"tags":["python","matplotlib"],"question_length":633,"response_length":566,"tag_count":2} {"question":"How to create a draggable legend in matplotlib? I'm drawing a legend on an axes object in matplotlib but the default positioning which claims to place it in a smart place doesn't seem to work. Ideally, I'd like to have the legend be draggable by the user. How can this be done?","response":"Note: This is now built into matplotlib \n```\nleg = plt.legend()\nif leg:\n leg.draggable()\n```\nwill work as expected Well, I found bits and pieces of the solution scattered among mailing lists. I've come up with a nice modular chunk of code that you can drop in and use... here it is: \n```\nclass DraggableLegend:\n def __init__(self, legend):\n self.legend = legend\n self.gotLegend = False\n legend.figure.canvas.mpl_connect('motion_notify_event', self.on_motion)\n legend.figure.canvas.mpl_connect('pick_event', self.on_pick)\n legend.figure.canvas.mpl_connect('button_release_event', self.on_release)\n legend.set_picker(self.my_legend_picker)\n\n def on_motion(self, evt):\n if self.gotLegend:\n dx = evt.x - self.mouse_x\n dy = evt.y - self.mouse_y\n loc_in_canvas = self.legend_x + dx, self.legend_y + dy\n loc_in_norm_axes = self.legend.parent.transAxes.inverted().transform_point(loc_in_canvas)\n self.legend._loc = tuple(loc_in_norm_axes)\n self.legend.figure.canvas.draw()\n\n def my_legend_picker(self, legend, evt): \n return self.legend.legendPatch.contains(evt) \n\n def on_pick(self, evt): \n if evt.artist == self.legend:\n bbox = self.legend.get_window_extent()\n self.mouse_x = evt.mouseevent.x\n self.mouse_y = evt.mouseevent.y\n self.legend_x = bbox.xmin\n self.legend_y = bbox.ymin \n self.gotLegend = 1\n\n def on_release(self, event):\n if self.gotLegend:\n self.gotLegend = False\n```\n...and in your code... \n```\ndef draw(self): \n ax = self.figure.add_subplot(111)\n scatter = ax.scatter(np.random.randn(100), np.random.randn(100))\n\n\nlegend = DraggableLegend(ax.legend())\n```\nI emailed the Matplotlib-users group and John Hunter was kind enough to add my solution it to SVN HEAD. On Thu, Jan 28, 2010 at 3:02 PM, Adam Fraser wrote: I thought I'd share a solution to the draggable legend problem since it took me forever to assimilate all the scattered knowledge on the mailing lists... Cool -- nice example. I added the code to legend.py. 
Now you can do leg = ax.legend() leg.draggable() to enable draggable mode. You can repeatedly call this func to toggle the draggable state. I hope this is helpful to people working with matplotlib.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2539477\/how-to-create-a-draggable-legend-in-matplotlib","best_answers_votes":32,"tags":["python","matplotlib","draggable","legend"],"question_length":277,"response_length":2363,"tag_count":4} {"question":"matplotlib legend location numbers I am beginning to use Python for my scientific computing, and I am really liking it a lot; however, I am confused by a feature of the matplotlib.pylab.legend function. In particular, the location feature allows one to specify the location of their legend using numbers, following this scheme: best -- 0 upper right -- 1 upper left -- 2 lower left -- 3 lower right -- 4 right -- 5 center left -- 6 center right -- 7 lower center -- 8 upper center -- 9 center -- 10 Does anyone know why you wouldn't use the ordering on the numpad? I.e. center -- 5, upper right -- 9, etc. I am just curious if anyone knows.","response":"The docs show this example: \n```\nlegend( ('label1', 'label2', 'label3'), loc='upper left')\n```\nPresumably, you could write loc=2, but why would you? It's much more readable to use the English word. As to why they didn't enumerate the values to align with the numeric keypad, I presume they weren't thinking about the numeric keypad at the time. Edit: It's worth including here the full text of Joe Kington's comment: Actually, they were deliberately mimicking matlab's behavior at the time. See the \"obsolete location values\" section in the documentation for MATLAB's legend: mathworks.com\/help\/techdoc\/ref\/legend.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10824156\/matplotlib-legend-location-numbers","best_answers_votes":38,"tags":["python","matplotlib"],"question_length":640,"response_length":618,"tag_count":2} {"question":"copy an axes content and show it in a new figure Let's say I have this code: \n```\nnum_rows = 10\nnum_cols = 1\nfig, axs = plt.subplots(num_rows, num_cols, sharex=True)\nfor i in xrange(num_rows):\n ax = axs[i]\n ax.plot(np.arange(10), np.arange(10)**i)\nplt.show()\n```\nThe resulting figure has too much info, and now I want to pick one of the axes and draw it alone in a new figure. I tried doing something like this \n```\ndef on_click(event):\n axes = event.inaxes.get_axes()\n fig2 = plt.figure(15)\n fig2.axes.append(axes)\n fig2.show()\n\nfig.canvas.mpl_connect('button_press_event', on_click)\n```\nbut it didn't quite work. What would be the correct way to do it? Searching through the docs and through SE gave hardly any useful results. edit: I don't mind redrawing the chosen axes, but I'm not sure how I can tell which of the axes was chosen, so if that information is available somehow, then it is a valid solution for me edit #2: so I've managed to do something like this: \n```\ndef on_click(event):\n fig2 = plt.figure(15)\n fig2.clf()\n for line in event.inaxes.axes.get_lines():\n xydata = line.get_xydata()\n plt.plot(xydata[:, 0], xydata[:, 1])\n fig2.show()\n```\nwhich seems to be \"working\" (all the other information is lost - labels, line colors, line styles, line widths, xlim, ylim, etc...) 
but I feel like there must be a nicer way to do it","response":"Copying the axes The inital answer here does not work, we keep it for future reference and also to see why a more sophisticated approach is needed. \n```css\n#There are some pitfalls on the way with the initial approach. \r\n#Adding an `axes` to a figure can be done via `fig.add_axes(axes)`. However, at this point, \r\n#the axes' figure needs to be the figure the axes should be added to. \r\n#This may sound a bit like running in circles but we can actually set the axes' \r\n#figure as `axes.figure = fig2` and hence break out of this.\r\n\r\n#One might then also position the axes in the new figure to take the usual dimensions. \r\n#For this a dummy axes can be added first, the axes can change its position to the position \r\n#of the dummy axes and then the dummy axes is removed again. In total, this would look as follows.\r\n\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np\r\n\r\nnum_rows = 10\r\nnum_cols = 1\r\nfig, axs = plt.subplots(num_rows, num_cols, sharex=True)\r\nfor i in xrange(num_rows):\r\n ax = axs[i]\r\n ax.plot(np.arange(10), np.arange(10)**i)\r\n \r\n \r\ndef on_click(event):\r\n axes = event.inaxes\r\n if not axes: return \r\n fig2 = plt.figure()\r\n axes.figure=fig2\r\n fig2.axes.append(axes)\r\n fig2.add_axes(axes)\r\n \r\n dummy = fig2.add_subplot(111)\r\n axes.set_position(dummy.get_position())\r\n dummy.remove()\r\n fig2.show()\r\n\r\nfig.canvas.mpl_connect('button_press_event', on_click)\r\n\r\n\r\nplt.show()\r\n\r\n#So far so good, however, be aware that now after a click the axes is somehow \r\n#residing in both figures, which can cause all sorts of problems, e.g. if you\r\n# want to resize or save the initial figure.\n```\nInstead, the following will work: Pickling the figure The problem is that axes cannot be copied (even deepcopy will fail). Hence to obtain a true copy of an axes, you may need to use pickle. The following will work. It pickles the complete figure and removes all but the one axes to show. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pickle\nimport io\n\nnum_rows = 10\nnum_cols = 1\nfig, axs = plt.subplots(num_rows, num_cols, sharex=True)\nfor i in range(num_rows):\n ax = axs[i]\n ax.plot(np.arange(10), np.arange(10)**i)\n\ndef on_click(event):\n\n if not event.inaxes: return\n inx = list(fig.axes).index(event.inaxes)\n buf = io.BytesIO()\n pickle.dump(fig, buf)\n buf.seek(0)\n fig2 = pickle.load(buf) \n\n for i, ax in enumerate(fig2.axes):\n if i != inx:\n fig2.delaxes(ax)\n else:\n axes=ax\n\n axes.change_geometry(1,1,1)\n fig2.show()\n\nfig.canvas.mpl_connect('button_press_event', on_click)\n\nplt.show()\n```\nRecreate plots The alternative to the above is of course to recreate the plot in a new figure each time the axes is clicked. To this end one may use a function that creates a plot on a specified axes and with a specified index as input. Using this function during figure creation as well as later for replicating the plot in another figure ensures to have the same plot in all cases. 
\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nnum_rows = 10\nnum_cols = 1\ncolors = plt.rcParams[\"axes.prop_cycle\"].by_key()[\"color\"]\nlabels = [\"Label {}\".format(i+1) for i in range(num_rows)]\n\ndef myplot(i, ax):\n ax.plot(np.arange(10), np.arange(10)**i, color=colors[i])\n ax.set_ylabel(labels[i])\n\n\nfig, axs = plt.subplots(num_rows, num_cols, sharex=True)\nfor i in xrange(num_rows):\n myplot(i, axs[i])\n\n\ndef on_click(event):\n axes = event.inaxes\n if not axes: return\n inx = list(fig.axes).index(axes)\n fig2 = plt.figure()\n ax = fig2.add_subplot(111)\n myplot(inx, ax)\n fig2.show()\n\nfig.canvas.mpl_connect('button_press_event', on_click)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45810557\/copy-an-axes-content-and-show-it-in-a-new-figure","best_answers_votes":35,"tags":["python","matplotlib"],"question_length":1373,"response_length":3707,"tag_count":2} {"question":"Scientific notation colorbar I am trying to put a colorbar to my image using matplotlib. The issue comes when I try to force the ticklabels to be written in scientific notation. How can I force the scientific notation (ie, 1x10^0, 2x10^0, ..., 1x10^2, and so on) in the ticks of the color bar? Example, let's create and plot and image with its color bar: \n```\nimport matplotlib as plot\nimport numpy as np\n\nimg = np.random.randn(300,300)\nmyplot = plt.imshow(img)\nplt.colorbar(myplot)\nplt.show()\n```\nWhen I do this, I get the following image: However, I would like to see the ticklabels in scientific notation... Is there any one line command to do this? Otherwise, is there any hint out there?","response":"You could use colorbar's format parameter: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib.ticker as ticker\n\nimg = np.random.randn(300,300)\nmyplot = plt.imshow(img)\n\ndef fmt(x, pos):\n a, b = '{:.2e}'.format(x).split('e')\n b = int(b)\n return r'${} \\times 10^{{{}}}$'.format(a, b)\n\nplt.colorbar(myplot, format=ticker.FuncFormatter(fmt))\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25983218\/scientific-notation-colorbar","best_answers_votes":57,"tags":["python","matplotlib","scientific-notation","colorbar"],"question_length":692,"response_length":384,"tag_count":4} {"question":"Logscale plots with zero values in matplotlib I am currently using logscale in order to have greater possibilities of plotting my data. Nevertheless, my data consists also of zero values. I know that these zero values will not work on logscale as log(0) is not defined. So e.g., \n```\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot([0,1,2],[10,10,100],marker='o',linestyle='-')\nax.set_yscale('log')\nax.set_xscale('log')\n```\ncompletely omits the zero value. Is this behavior acceptable? At least there should be some kind of warning. I only recognized it by accident. Is there maybe also a way of plotting zero value data in logscale? Thanks! P.S.: I hope this fits to stackoverflow. I did not find a mailing list of matplotlib.","response":"It's easiest to use a \"symlog\" plot for this purpose. The interval near 0 will be on a linear scale, so 0 can be displayed. 
\n```\nimport matplotlib.pyplot as plt\nfig, ax = plt.subplots()\nax.plot([0,1,2],[10,10,100],marker='o',linestyle='-')\nax.set_yscale('symlog')\nax.set_xscale('symlog')\nplt.show()\n```\nSymlog sets a small interval near zero (both above and below) to use a linear scale. This allows things to cross 0 without causing log(x) to explode (or go to -inf, rather). There's a nice visual comparison as an SO answer here: https:\/\/stackoverflow.com\/a\/3513150\/325565","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16904755\/logscale-plots-with-zero-values-in-matplotlib","best_answers_votes":64,"tags":["python","matplotlib"],"question_length":732,"response_length":574,"tag_count":2} {"question":"Scatter plot from dataframe with index on x-axis I've got a pandas DataFrame, df, with an index named date and the columns columnA, columnB and columnC. I am trying to scatter plot the index on the x-axis and columnA on the y-axis using the DataFrame syntax. When I try: \n```\ndf.plot(kind='scatter', x='date', y='columnA')\n```\nI am getting an error KeyError: 'date', probably because date is not a column \n```\ndf.plot(kind='scatter', y='columnA')\n```\nI am getting an error: \n```none\nValueError: scatter requires and x and y column\n```\nso there is no default index on the x-axis. \n```\ndf.plot(kind='scatter', x=df.index, y='columnA')\n```\nI am getting the error \n```none\nKeyError: \"DatetimeIndex(['1818-01-01', '1818-01-02', '1818-01-03', '1818-01-04',\\n\n '1818-01-05', '1818-01-06', '1818-01-07', '1818-01-08',\\n\n '1818-01-09', '1818-01-10',\\n ...\\n \n '2018-03-22', '2018-03-23', '2018-03-24', '2018-03-25',\\n\n '2018-03-26', '2018-03-27', '2018-03-28', '2018-03-29',\\n \n '2018-03-30', '2018-03-31'],\\n \ndtype='datetime64[ns]', name='date', length=73139, freq=None) not in index\"\n```\nI can plot it if I use matplotlib.pyplot directly \n```\nplt.scatter(df.index, df['columnA'])\n```\nIs there a way to plot the index as the x-axis using the DataFrame syntax?","response":"This is kind of ugly (I think the matplotlib solution you used in your question is better, FWIW), but you can always create a temporary DataFrame with the index as a column using \n```\ndf.reset_index()\n```\nIf the index was nameless, the default name will be 'index'. Assuming this is the case, you could use \n```\ndf.reset_index().plot(kind='scatter', x='index', y='columnA')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/49834883\/scatter-plot-form-dataframe-with-index-on-x-axis","best_answers_votes":31,"tags":["python","pandas","matplotlib"],"question_length":1360,"response_length":378,"tag_count":3} {"question":"Is it possible to plot implicit equations? I would like to plot implicit equations (of the form f(x, y)=g(x, y), e.g. x^y=y^x) in Matplotlib. Is this possible?","response":"I don't believe there's very good support for this, but you could try something like \n```\nimport matplotlib.pyplot\nfrom numpy import arange\nfrom numpy import meshgrid\n\ndelta = 0.025\nxrange = arange(-5.0, 20.0, delta)\nyrange = arange(-5.0, 20.0, delta)\nX, Y = meshgrid(xrange,yrange)\n\n# F is one side of the equation, G is the other\nF = Y**X\nG = X**Y\n\nmatplotlib.pyplot.contour(X, Y, (F - G), [0])\nmatplotlib.pyplot.show()\n```\nSee the API docs for contour: if the fourth argument is a sequence then it specifies which contour lines to plot. 
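(As an aside to the contour trick in this answer, and since the record is also tagged sympy: sympy ships a plot_implicit helper for equations like this, using interval-arithmetic based adaptive sampling where it can. A minimal sketch, assuming sympy is installed:)

```python
from sympy import Eq, plot_implicit, symbols

x, y = symbols("x y")

# plot the implicit curve x**y = y**x on a positive range
# (both sides are only real-valued for positive x and y here)
plot_implicit(Eq(x**y, y**x), (x, 0.1, 5), (y, 0.1, 5))
```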
But the plot will only be as good as the resolution of your ranges, and there are certain features it may never get right, often at self-intersection points.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2484527\/is-it-possible-to-plot-implicit-equations","best_answers_votes":40,"tags":["python","matplotlib","equation","implicit","sympy"],"question_length":157,"response_length":697,"tag_count":5} {"question":"How to Display Custom Images in Tensorboard (e.g. Matplotlib Plots)? The Image Dashboard section of the Tensorboard ReadMe says: Since the image dashboard supports arbitrary pngs, you can use this to embed custom visualizations (e.g. matplotlib scatterplots) into TensorBoard. I see how a pyplot image could be written to file, read back in as a tensor, and then used with tf.image_summary() to write it to TensorBoard, but this statement from the readme suggests there is a more direct way. Is there? If so, is there any further documentation and\/or examples of how to do this efficiently?","response":"It is quite easy to do if you have the image in a memory buffer. Below, I show an example, where a pyplot is saved to a buffer and then converted to a TF image representation which is then sent to an image summary. \n```\nimport io\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\n\ndef gen_plot():\n \"\"\"Create a pyplot plot and save to buffer.\"\"\"\n plt.figure()\n plt.plot([1, 2])\n plt.title(\"test\")\n buf = io.BytesIO()\n plt.savefig(buf, format='png')\n buf.seek(0)\n return buf\n\n\n# Prepare the plot\nplot_buf = gen_plot()\n\n# Convert PNG buffer to TF image\nimage = tf.image.decode_png(plot_buf.getvalue(), channels=4)\n\n# Add the batch dimension\nimage = tf.expand_dims(image, 0)\n\n# Add image summary\nsummary_op = tf.summary.image(\"plot\", image)\n\n# Session\nwith tf.Session() as sess:\n # Run\n summary = sess.run(summary_op)\n # Write summary\n writer = tf.train.SummaryWriter('.\/logs')\n writer.add_summary(summary)\n writer.close()\n```\nThis gives the following TensorBoard visualization:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38543850\/how-to-display-custom-images-in-tensorboard-e-g-matplotlib-plots","best_answers_votes":48,"tags":["python","tensorflow","matplotlib","pytorch","tensorboard"],"question_length":590,"response_length":1023,"tag_count":5} {"question":"How to create a grouped bar plot The goal here is to create a grouped bar plot, not subplots like the image below Is there a simple way to create a grouped bar plot in Python? Right now I get separate bar plots, instead of separate bars on one plot. \n```py\nimport pandas as pd\n\ndf = pd.DataFrame([['g1', 'c1', 10], ['g1', 'c2', 12], ['g1', 'c3', 13], ['g2', 'c1', 8], ['g2', 'c2', 10], ['g2', 'c3', 12]], columns=['group', 'column', 'val'])\n\n group column val\n0 g1 c1 10\n1 g1 c2 12\n2 g1 c3 13\n3 g2 c1 8\n4 g2 c2 10\n5 g2 c3 12\n \n\ndf.groupby(['group']).plot(kind='bar')\n```","response":"Pandas will show grouped bars by columns. Entries in each row but different columns will constitute a group in the resulting plot. Hence you need to \"reshape\" your dataframe to have the \"group\" as columns. In this case you can pivot like \n```\ndf.pivot(\"column\", \"group\", \"val\")\n```\nproducing \n```\ngroup g1 g2\ncolumn \nc1 10 8\nc2 12 10\nc3 13 12\n```\nPlotting this will result in a grouped bar chart. 
\n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame([['g1','c1',10],['g1','c2',12],['g1','c3',13],['g2','c1',8],\n ['g2','c2',10],['g2','c3',12]],columns=['group','column','val'])\n\ndf.pivot(\"column\", \"group\", \"val\").plot(kind='bar')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47796264\/how-to-create-a-grouped-bar-plot","best_answers_votes":49,"tags":["python","pandas","matplotlib","seaborn","grouped-bar-chart"],"question_length":630,"response_length":715,"tag_count":5} {"question":"How to display all minor tick marks on a semi-log plot When making a semi-log plot (y is log), the minor tick marks (8 in a decade) on the y axis appear automatically, but it seems that when the axis range exceeds 10**10, they disappear. I tried many ways to force them back in, but to no avail. It might be that they go away for large ranges to avoid overcrowding, but one should have a choice?","response":"solution for matplotlib >= 2.0.2 Let's consider the following example which is produced by this code: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker\nimport numpy as np\n\ny = np.arange(12)\nx = 10.0**y\n\nfig, ax=plt.subplots()\nax.plot(x,y)\nax.set_xscale(\"log\")\nplt.show()\n```\nThe minor ticklabels are indeed gone and usual ways to show them (like plt.tick_params(axis='x', which='minor')) fail. The first step would then be to show all powers of 10 on the axis, \n```\nlocmaj = matplotlib.ticker.LogLocator(base=10,numticks=12) \nax.xaxis.set_major_locator(locmaj)\n```\nwhere the trick is to set numticks to a number equal or larger the number of ticks (i.e. 12 or higher in this case). Then, we can add minor ticklabels as \n```\nlocmin = matplotlib.ticker.LogLocator(base=10.0,subs=(0.2,0.4,0.6,0.8),numticks=12)\nax.xaxis.set_minor_locator(locmin)\nax.xaxis.set_minor_formatter(matplotlib.ticker.NullFormatter())\n```\nNote that I restricted this to include 4 minor ticks per decade (using 8 is equally possible but in this example would overcrowd the axes). Also note that numticks is again (quite unintuitively) 12 or larger. Finally we need to use a NullFormatter() for the minor ticks, in order not to have any ticklabels appear for them. solution for matplotlib 2.0.0 The following works in matplotlib 2.0.0 or below, but it does not work in matplotlib 2.0.2. Let's consider the following example which is produced by this code: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker\nimport numpy as np\n\ny = np.arange(12)\nx = 10.0**y\n\nfig, ax=plt.subplots()\nax.plot(x,y)\nax.set_xscale(\"log\")\nplt.show()\n```\nThe minor ticklabels are indeed gone and usual ways to show them (like plt.tick_params(axis='x', which='minor')) fail. The first step would then be to show all powers of 10 on the axis, \n```\nlocmaj = matplotlib.ticker.LogLocator(base=10.0, subs=(0.1,1.0, ))\nax.xaxis.set_major_locator(locmaj)\n```\nThen, we can add minor ticklabels as \n```\nlocmin = matplotlib.ticker.LogLocator(base=10.0, subs=(0.1,0.2,0.4,0.6,0.8,1,2,4,6,8,10 )) \nax.xaxis.set_minor_locator(locmin)\nax.xaxis.set_minor_formatter(matplotlib.ticker.NullFormatter())\n```\nNote that I restricted this to include 4 minor ticks per decade (using 8 is equally possible but in this example would overcrowd the axes). 
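(For comparison, the eight-ticks-per-decade variant mentioned in that note would pass all the standard sub-decade multiples as subs. A sketch in the style of the answer's own >= 2.0.2 example; the same numticks caveat applies:)

```python
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np

y = np.arange(12)
x = 10.0 ** y

fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_xscale("log")

ax.xaxis.set_major_locator(matplotlib.ticker.LogLocator(base=10, numticks=12))

# minor ticks at 2, 3, ..., 9 times each power of ten (8 per decade)
locmin = matplotlib.ticker.LogLocator(
    base=10.0, subs=(0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9), numticks=12)
ax.xaxis.set_minor_locator(locmin)
ax.xaxis.set_minor_formatter(matplotlib.ticker.NullFormatter())

plt.show()
```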
Also note - and that may be the key here - that the subs argument, which gives the multiples of integer powers of the base at which to place ticks (see documentation), is given a list ranging over two decades instead of one. Finally we need to use a NullFormatter() for the minor ticks, in order not to have any ticklabels appear for them.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44078409\/how-to-display-all-minor-tick-marks-on-a-semi-log-plot","best_answers_votes":62,"tags":["python","matplotlib","axis","xticks"],"question_length":395,"response_length":2636,"tag_count":4} {"question":"How to plot 1-d data at given y-value with pylab I want to plot the data points that are in a 1-D array just along the horizontal axis [edit: at a given y-value], like in this plot: How can I do this with pylab?","response":"Staven already edited his post to include how to plot the values along y-value 1, but he was using Python lists. A variant that should be faster (although I did not measure it) only uses numpy arrays: \n```\nimport numpy as np\nimport matplotlib.pyplot as pp\nval = 0. # this is the value where you want the data to appear on the y-axis.\nar = np.arange(10) # just as an example array\npp.plot(ar, np.zeros_like(ar) + val, 'x')\npp.show()\n```\nAs a nice-to-use function that offers all usual matplotlib refinements via kwargs this would be: \n```\ndef plot_at_y(arr, val, **kwargs):\n pp.plot(arr, np.zeros_like(arr) + val, 'x', **kwargs)\n pp.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7352220\/how-to-plot-1-d-data-at-given-y-value-with-pylab","best_answers_votes":42,"tags":["python","matplotlib"],"question_length":211,"response_length":648,"tag_count":2} {"question":"Matplotlib histogram with collection bin for high values I have an array with values, and I want to create a histogram of it. I am mainly interested in the low end numbers, and want to collect every number above 300 in one bin. This bin should have the same width as all other (equally wide) bins. How can I do this? Note: this question is related to this question: Defining bin width\/x-axis scale in Matplotlib histogram This is what I tried so far: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef plot_histogram_01():\n np.random.seed(1)\n values_A = np.random.choice(np.arange(600), size=200, replace=True).tolist()\n values_B = np.random.choice(np.arange(600), size=200, replace=True).tolist()\n\n bins = [0, 25, 50, 75, 100, 125, 150, 175, 200, 225, 250, 275, 300, 600]\n\n fig, ax = plt.subplots(figsize=(9, 5))\n _, bins, patches = plt.hist([values_A, values_B], normed=1, # normed is deprecated and will be replaced by density\n bins=bins,\n color=['#3782CC', '#AFD5FA'],\n label=['A', 'B'])\n\n xlabels = np.array(bins[1:], dtype='|S4')\n xlabels[-1] = '300+'\n\n N_labels = len(xlabels)\n plt.xlim([0, 600])\n plt.xticks(25 * np.arange(N_labels) + 12.5)\n ax.set_xticklabels(xlabels)\n\n plt.yticks([])\n plt.title('')\n plt.setp(patches, linewidth=0)\n plt.legend()\n\n fig.tight_layout()\n plt.savefig('my_plot_01.png')\n plt.close()\n```\nThis is the result, which does not look nice: I then changed the line with xlim in it: \n```\nplt.xlim([0, 325])\n```\nWith the following result: It looks more or less as I want it, but the last bin is not visible now. 
Which trick am I missing to visualize this last bin with a width of 25?","response":"Numpy has a handy function for dealing with this: np.clip. Despite what the name may sound like, it doesn't remove values, it just limits them to the range you specify. Basically, it does Artem's \"dirty hack\" inline. You can leave the values as they are, but in the hist call, just wrap the array in an np.clip call, like so \n```\nplt.hist(np.clip(values_A, bins[0], bins[-1]), bins=bins)\n```\nThis is nicer for a number of reasons: It's way faster \u2014 at least for large numbers of elements. Numpy does its work at the C level. Operating on python lists (as in Artem's list comprehension) has a lot of overhead for each element. Basically, if you ever have the option to use numpy, you should. You do it right where it's needed, which reduces the chance of making mistakes in your code. You don't need to keep a second copy of the array hanging around, which reduces memory usage (except within this one line) and further reduces the chances of making mistakes. Using bins[0], bins[-1] instead of hard-coding the values reduces the chances of making mistakes again, because you can change the bins just where bins was defined; you don't need to remember to change them in the call to clip or anywhere else. So to put it all together as in the OP: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef plot_histogram_01():\n np.random.seed(1)\n values_A = np.random.choice(np.arange(600), size=200, replace=True)\n values_B = np.random.choice(np.arange(600), size=200, replace=True)\n\n bins = np.arange(0,350,25)\n\n fig, ax = plt.subplots(figsize=(9, 5))\n _, bins, patches = plt.hist([np.clip(values_A, bins[0], bins[-1]),\n np.clip(values_B, bins[0], bins[-1])],\n # normed=1, # normed is deprecated; replace with density\n density=True,\n bins=bins, color=['#3782CC', '#AFD5FA'], label=['A', 'B'])\n\n xlabels = bins[1:].astype(int).astype(str)\n xlabels[-1] += '+'\n\n N_labels = len(xlabels)\n plt.xlim([0, 325])\n plt.xticks(25 * np.arange(N_labels) + 12.5)\n ax.set_xticklabels(xlabels)\n\n plt.yticks([])\n plt.title('')\n plt.setp(patches, linewidth=0)\n plt.legend(loc='upper left')\n\n fig.tight_layout()\nplot_histogram_01()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26218704\/matplotlib-histogram-with-collection-bin-for-high-values","best_answers_votes":58,"tags":["python","matplotlib","histogram","bins"],"question_length":1772,"response_length":2294,"tag_count":4} {"question":"Understanding matplotlib: plt, figure, ax(arr)? I'm not really new to matplotlib and I'm deeply ashamed to admit I have always used it as a tool for getting a solution as quick and easy as possible. So I know how to get basic plots, subplots and stuff and have quite a few code which gets reused from time to time...but I have no \"deep(er) knowledge\" of matplotlib. Recently I thought I should change this and work myself through some tutorials. However, I am still confused about matplotlibs plt, fig(ure) and ax(arr). What is really the difference? In most cases, for some \"quick'n'dirty' plotting I see people using just pyplot as plt and directly plot with plt.plot. Since I am having multiple stuff to plot quite often, I frequently use f, axarr = plt.subplots()...but most times you see only code putting data into the axarr and ignoring the figure f. So, my question is: what is a clean way to work with matplotlib? 
When to use plt only? What is a figure, and what should it be used for? Should subplots contain just data? Or is it valid and good practice to do everything, like styling, clearing a plot, ..., inside of subplots? I hope this is not too wide-ranging. Basically I am asking for some advice on the true purposes of plt, fig and ax(arr) (and when\/how to use them properly). Tutorials would also be welcome. The matplotlib documentation is rather confusing to me. When one searches for something really specific, like rescaling a legend or different plot markers and colors, the official documentation is really precise, but the more general information is not that good in my opinion. Too many different examples, no real explanations of the purposes... it looks more or less like a big listing of all possible API methods and arguments.","response":"pyplot is the 'scripting' level API in matplotlib (its highest level API to do a lot with matplotlib). It allows you to use matplotlib through a procedural interface, in a similar way as you can do it with Matlab. pyplot has a notion of 'current figure' and 'current axes' that all the functions delegate to (@tacaswell dixit). So, when you use the functions available in the module pyplot, you are plotting to the 'current figure' and 'current axes'. If you want 'fine-grained' control of where\/what you are plotting, then you should use an object-oriented API with instances of Figure and Axes. Functions available in pyplot have an equivalent method on the Axes. From the repo anatomy of matplotlib: The Figure is the top-level container in this hierarchy. It is the overall window\/page that everything is drawn on. You can have multiple independent figures and Figures can contain multiple Axes. But... Most plotting occurs on an Axes. The axes is effectively the area that we plot data on and any ticks\/labels\/etc associated with it. Usually we'll set up an Axes with a call to subplot (which places Axes on a regular grid), so in most cases, Axes and Subplot are synonymous. Each Axes has an XAxis and a YAxis. These contain the ticks, tick locations, labels, etc. If you want to know the anatomy of a plot you can visit this link.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35677767\/understanding-matplotlib-plt-figure-axarr","best_answers_votes":21,"tags":["python","matplotlib","plot","figure"],"question_length":1744,"response_length":1332,"tag_count":4} {"question":"How to center labels in histogram plot I have a numpy array results that looks like \n```\n[ 0. 2. 0. 0. 0. 0. 3. 0. 0. 0. 0. 0. 0. 0. 0. 2. 0. 0.\n 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.\n 0. 1. 1. 0. 0. 0. 0. 2. 0. 3. 1. 0. 0. 2. 2. 0. 0. 0.\n 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 2. 0. 0. 0. 0.\n 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 1. 0. 0. 0. 0. 0.\n 0. 0. 0. 1. 0. 0. 0. 1. 2. 2.]\n```\nI would like to plot a histogram of it. I have tried \n```\nimport matplotlib.pyplot as plt\nplt.hist(results, bins=range(5))\nplt.show()\n```\nThis gives me a histogram with the x-axis labelled 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0. I would like the x-axis to be labelled 0 1 2 3 instead, with the labels in the center of each bar. How can you do that?","response":"The other answers just don't do it for me. 
The benefit of using plt.bar over plt.hist is that bar can use align='center': \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\narr = np.array([ 0., 2., 0., 0., 0., 0., 3., 0., 0., 0., 0., 0., 0.,\n 0., 0., 2., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.,\n 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1.,\n 0., 0., 0., 0., 2., 0., 3., 1., 0., 0., 2., 2., 0.,\n 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0.,\n 0., 0., 2., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 3., 1., 0., 0., 0., 0., 0., 0.,\n 0., 0., 1., 0., 0., 0., 1., 2., 2.])\n\nlabels, counts = np.unique(arr, return_counts=True)\nplt.bar(labels, counts, align='center')\nplt.gca().set_xticks(labels)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23246125\/how-to-center-labels-in-histogram-plot","best_answers_votes":66,"tags":["python","numpy","matplotlib","histogram"],"question_length":845,"response_length":881,"tag_count":4} {"question":"matplotlib ticks thickness Is there a way to increase the thickness and size of ticks in matplotlib without having to write a long piece of code like this: \n```\nfor line in ax1.yaxis.get_ticklines():\n line.set_markersize(25)\n line.set_markeredgewidth(3)\n```\nThe problem with this piece of code is that it uses a loop which costs usually a lot of CPU usage.","response":"A simpler way is to use the set_tick_params function of axis objects: \n```\nax.xaxis.set_tick_params(width=5)\nax.yaxis.set_tick_params(width=5)\n```\nDoing it this way means you can change this on a per-axis basis with out worrying about global state and with out making any assumptions about the internal structure of mpl objects. If you want to set this for all the ticks in your axes, \n```\nax = plt.gca()\nax.tick_params(width=5,...)\n```\nTake a look at set_tick_params doc and tick_params valid keywords","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14705904\/matplotlib-ticks-thickness","best_answers_votes":60,"tags":["matplotlib"],"question_length":362,"response_length":502,"tag_count":1} {"question":"pyplot zooming in I am trying to plot some data from Flexible Image Transport System (FITS) files, and I wanted to know if anyone knows how to focus on certain regions of a plot's axis. Here is some example code: \n```py\nimport pyfits\nfrom matplotlib import pyplot as plt\nfrom matplotlib import pylab\nfrom pylab import *\n#Assuming I have my data in the current directory\na = pyfits.getdata('fits1.fits')\nx = a['data1'] # Let's assume data1 is the column: [0, 1, 1.3, 1.5, 2, 4, 8]\ny = a['data2'] # And data2 is the column: [0, 0.5, 1, 1.5, 2, 2.5,3]\nplt.plot(x,y)\n```\nHow could I only plot the region from [1.3 to 4] in the x-axis?","response":"Use the plt.axis() function with your limits. \n```\nplt.axis([x_min, x_max, y_min, y_max])\n```\nwhere x_min, x_max, y_min, and y_max are the coordinate limits for both axes.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11400579\/pyplot-zooming-in","best_answers_votes":62,"tags":["python","matplotlib","zooming"],"question_length":640,"response_length":171,"tag_count":3} {"question":"Bar Chart: How to choose color if value is positive vs value is negative I have a pandas dataframe with positive and negative values and want to plot it as a bar chart. I want to plot the positive colors 'green' and the negative values 'red' (very original...lol). 
I'm not sure how to pass if > 0 'green' else < 0 'red'? \n```\ndata = pd.DataFrame([[-15], [10], [8], [-4.5]],\n index=['a', 'b', 'c', 'd'],\n columns=['values'])\ndata.plot(kind='barh')\n```","response":"I would create a dummy column for whether the observation is larger than 0. \n```\n\n```python\ndata['positive'] = data['values'] > 0\n```\n\n```python\ndata\n#Output\n#values positive\n#a -15.0 False\n#b 10.0 True\n#c 8.0 True\n#d -4.5 False\n#\n#[4 rows x 2 columns]\n```\n\n```python\ndata['values'].plot(kind='barh',\n color=data.positive.map({True: 'g', False: 'r'}))\n```\nAlso, you may want to be careful not to have column names that overlap with DataFrame attributes. DataFrame.values give the underlying numpy array for a DataFrame. Having overlapping names prevents you from using the df. syntax.\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22311139\/bar-chart-how-to-choose-color-if-value-is-positive-vs-value-is-negative","best_answers_votes":40,"tags":["python","matplotlib","pandas"],"question_length":488,"response_length":624,"tag_count":3} {"question":"Running infinite loops using threads in python My program is designed in the following way: First part of the program takes real time values from a sensor and plots it using Matplotlib. This has to be done for long durations. And also, it logs information into a database. The second part is the IP Camera. I have to get the input from an IP Camera and display it. For displaying I am using OpenCV's imshow method. Also, I am storing the video from the IP Camera. Question: I have the algorithms in place, the problem is I need to run both these in a while loops. The condition is that I cannot exit from any of them. Now threading is a good alternative for this but I have read about the GIL, so how do I go about running two infinite loops? \n```\nfrom multiprocessing import Process\n\ndef methodA():\n while TRUE:\n do something\n\ndef methodB():\n while TRUE:\n do something\n\np=Process(target=methodA())\np.start()\np1=Process(target=methodB())\np1.start()\n```\nNow when I start process p it starts executing, after that how do I start p1 to run simultaneously?","response":"As far as I understood your question, you have two different tasks that you want them to perform continuously. Now regarding your questions: how do I go about running two infinite loops? You can create two different threads that will run these infinite loops for you. The first thread will perform your task1 and second one will perform task2. Also, once I start executing a thread, how do I execute the other thread when the first thread is running continuously\/infinitely? If you are using two different threads then you don't need to be worried about this issue. If the threads are not sharing any resource then you don't need to worry about this fact. How ever if you want to stop\/pause one thread from the other thread or vice versa then you can implement a mechanism using flags or locks. These questions will help in this case: Is there any way to kill a Thread in Python? Why does the python threading.Thread object has 'start', but not 'stop'? 
making-a-program-munltithreaded Sample example using threading: \n```\nfrom threading import Thread\n\nclass myClassA(Thread):\n def __init__(self):\n Thread.__init__(self)\n self.daemon = True\n self.start()\n def run(self):\n while True:\n print 'A'\n\nclass myClassB(Thread):\n def __init__(self):\n Thread.__init__(self)\n self.daemon = True\n self.start()\n def run(self):\n while True:\n print 'B'\n\n\nmyClassA()\nmyClassB()\nwhile True:\n pass\n```\nFor shared resources? Use Locks for them. Here are some examples. One, two and How to synchronize threads in python? what if I don't want to run it using classes? How do I do this using only methods? \n```\nfrom threading import Thread\n\ndef runA():\n while True:\n print 'A\\n'\n\ndef runB():\n while True:\n print 'B\\n'\n\nif __name__ == \"__main__\":\n t1 = Thread(target = runA)\n t2 = Thread(target = runB)\n t1.setDaemon(True)\n t2.setDaemon(True)\n t1.start()\n t2.start()\n while True:\n pass\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23100704\/running-infinite-loops-using-threads-in-python","best_answers_votes":72,"tags":["python","multithreading","opencv","matplotlib"],"question_length":1072,"response_length":2006,"tag_count":4} {"question":"Superimpose scatter plots I am using Python matplotlib. i want to superimpose scatter plots. I know how to superimpose continuous line plots with commands: \n```\n\n```python\nplt.plot(seriesX)\n```\n\n```python\nplt.plot(Xresampl)\n```\n\n```python\nplt.show()\n#Output\n#```\n#But it does not seem to work the same way with scatter. Or maybe using plot() with a further argument specifying line style. How to proceed? thanks\n```","response":"You simply call the scatter function twice, matplotlib will superimpose the two plots for you. You might want to specify a color, as the default for all scatter plots is blue. This is perhaps why you were only seeing one plot. \n```\nimport numpy as np\nimport pylab as plt\n\nX = np.linspace(0,5,100)\nY1 = X + 2*np.random.random(X.shape)\nY2 = X**2 + np.random.random(X.shape)\n\nplt.scatter(X,Y1,color='k')\nplt.scatter(X,Y2,color='g')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11190735\/superimpose-scatter-plots","best_answers_votes":56,"tags":["python","plot","matplotlib","scatter-plot"],"question_length":372,"response_length":443,"tag_count":4} {"question":"Overlapping y-axis tick label and x-axis tick label in matplotlib If I create a plot with matplotlib using the following code: \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\nxx = np.arange(0,5, .5)\nyy = np.random.random( len(xx) )\nplt.plot(xx,yy)\nplt.imshow()\n```\nI get a result that looks like the attached image. The problem is the bottom-most y-tick label overlaps the left-most x-tick label. This looks unprofessional. I was wondering if there was an automatic way to delete the bottom-most y-tick label, so I don't have the overlap problem. The fewer lines of code, the better.","response":"In the ticker module there is a class called MaxNLocator that can take a prune kwarg. 
Using that you can remove the first tick: \n```\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import MaxNLocator\nimport numpy as np\nxx = np.arange(0,5, .5)\nyy = np.random.random( len(xx) )\nplt.plot(xx,yy)\nplt.gca().xaxis.set_major_locator(MaxNLocator(prune='lower'))\nplt.show()\n```\nResult:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9422587\/overlapping-y-axis-tick-label-and-x-axis-tick-label-in-matplotlib","best_answers_votes":39,"tags":["python","matplotlib"],"question_length":596,"response_length":383,"tag_count":2} {"question":"Drawing rectangle with border only in matplotlib So I found the following code here: \n```\nfrom matplotlib import pyplot as plt\nfrom matplotlib.patches import Rectangle\nsomeX, someY = 0.5, 0.5\nplt.figure()\ncurrentAxis = plt.gca()\ncurrentAxis.add_patch(Rectangle((someX - .1, someY - .1), 0.2, 0.2,alpha=1))\nplt.show()\n```\nWhich gives: But what I want is a rectangle with only a blue border and inside of it to be transparent. How can I do this?","response":"You just need to set the facecolor to the string 'none' (not the python None) \n```\nfrom matplotlib import pyplot as plt\nfrom matplotlib.patches import Rectangle\nsomeX, someY = 0.5, 0.5\nfig,ax = plt.subplots()\ncurrentAxis = plt.gca()\ncurrentAxis.add_patch(Rectangle((someX - 0.1, someY - 0.1), 0.2, 0.2,\n alpha=1, facecolor='none'))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21445005\/drawing-rectangle-with-border-only-in-matplotlib","best_answers_votes":41,"tags":["python","matplotlib"],"question_length":443,"response_length":356,"tag_count":2} {"question":"Overcome ValueError for empty array In this discussion I tried to fix an issue in plotting limits for y-axis, after the twiny() messes up my plot. I thought this: \n```\nax.set_ylim([y.min()-0.05, y.max()+0.05])\n```\nwas a good solution. And probably it is, for continuous set of data. As I said in that discussion, anyway, my data are noisy, and sometimes with gaps. So it happens that some plotted ranges have no data. In that case, naturally, the use of the .min() raises the error: \n```\nValueError: zero-size array to reduction operation minimum which has no identity\n```\nbecause the array is empty. How to work around it, so that the code just does not care about putting limits on the y-axis? (Hoping that this is the only issue the empty array will cause)","response":"Just catch the exception and ignore it: \n```\ntry:\n ax.set_ylim([y.min()-0.05, y.max()+0.05])\nexcept ValueError: #raised if `y` is empty.\n pass\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22903114\/overcome-valueerror-for-empty-array","best_answers_votes":59,"tags":["python","numpy","matplotlib"],"question_length":759,"response_length":153,"tag_count":3} {"question":"Suggestions to plot overlapping lines in matplotlib? Does anybody have a suggestion on what's the best way to present overlapping lines on a plot? I have a lot of them, and I had the idea of having full lines of different colors where they don't overlap, and having dashed lines where they do overlap so that all colors are visible and overlapping colors are seen. But still, how do I that.","response":"I have the same issue on a plot with a high degree of discretization. 
Here is the starting situation: \n```\nimport matplotlib.pyplot as plt\ngrid=[x for x in range(10)]\ngraphs=[\n [1,1,1,4,4,4,3,5,6,0],\n [1,1,1,5,5,5,3,5,6,0],\n [1,1,1,0,0,3,3,2,4,0],\n [1,2,4,4,3,2,3,2,4,0],\n [1,2,3,3,4,4,3,2,6,0],\n [1,1,3,3,0,3,3,5,4,3],\n ]\n\nfor gg,graph in enumerate(graphs):\n plt.plot(grid,graph,label='g'+str(gg))\n\nplt.legend(loc=3,bbox_to_anchor=(1,0))\nplt.show()\n```\nNo one can tell where the green and blue lines run exactly. And here is my \"solution\": \n```\nimport matplotlib.pyplot as plt\ngrid=[x for x in range(10)]\ngraphs=[\n [1,1,1,4,4,4,3,5,6,0],\n [1,1,1,5,5,5,3,5,6,0],\n [1,1,1,0,0,3,3,2,4,0],\n [1,2,4,4,3,2,3,2,4,0],\n [1,2,3,3,4,4,3,2,6,0],\n [1,1,3,3,0,3,3,5,4,3],\n ]\n\nfor gg,graph in enumerate(graphs):\n lw=10-8*gg\/len(graphs)\n ls=['-','--','-.',':'][gg%4]\n plt.plot(grid,graph,label='g'+str(gg), linestyle=ls, linewidth=lw)\n\nplt.legend(loc=3,bbox_to_anchor=(1,0))\nplt.show()\n```\nI am grateful for suggestions on improvement!","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/40766909\/suggestions-to-plot-overlapping-lines-in-matplotlib","best_answers_votes":42,"tags":["python","matplotlib"],"question_length":390,"response_length":1116,"tag_count":2} {"question":"Manipulation on vertical space in matplotlib subplots I want to reduce the vertical spacing between subplots. Surfing the web, I have only found how to reduce the horizontal spacing, something like \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=4, ncols=4)\nfig.tight_layout() # Or equivalently, \"plt.tight_layout()\"\nfig.subplots_adjust(hspace=0.5)\nplt.show()\n```\nThe hspace thing is the one that manipulates such behaviour, but apparently there's no vspace. EDIT: This does not reduce the space between the y-axes, which is what I want to manipulate.","response":"As you said in your question, hspace reduces the vertical spacing between subplots. The equivalent for horizontal spacing between subplots is wspace. Below is an example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 2 * np.pi, 400)\ny = np.sin(x ** 2)\n\n\nfig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(nrows=2, ncols=2)\nfig.tight_layout()\nax1.plot(x, y)\nax2.scatter(x, y)\nax3.scatter(x, y)\nax4.scatter(x, y)\n\nfig.subplots_adjust(wspace=0.2)\nplt.show()\n```\nUsing a value of 1 for wspace gives wide gaps between the columns; using 0.2 as the value of wspace gives much tighter ones.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35754511\/manipulation-on-vertical-space-in-matplotlib-subplots","best_answers_votes":50,"tags":["matplotlib"],"question_length":575,"response_length":502,"tag_count":1} {"question":"python matplotlib dash-dot-dot - how to? I am using python and matplotlib to generate graphical output. Is there a simple way to generate a dash-dot-dot line style? I am aware of the '--', '-.', and ':' options. Unfortunately, '-..' does not result in a dash-dot-dot line. I have looked at the set_dashes command, but that seems to control the length of the dashes and the space between two adjacent dashes. One option may be to plot two lines on top of each other; one dashed with ample space between the dashes - and one dotted, with the dots as large as the dashes are wide and spaced so that two dots are in between each of the dashes. I do not doubt this can be done, I am simply hoping for an easier way. 
Did I overlook an option?","response":"You can define custom dashes: \n```\nimport matplotlib.pyplot as plt\n\nline, = plt.plot([1,5,2,4], '-')\nline.set_dashes([8, 4, 2, 4, 2, 4]) \nplt.show()\n```\n[8, 4, 2, 4, 2, 4] means 8 points on, (dash) 4 points off, 2 points on, (dot) 4 points off, 2 points on, (dot) 4 points off. @Achim noted you can also specify the dashes parameter: \n```\nplt.plot([1,5,2,4], '-', dashes=[8, 4, 2, 4, 2, 4])\nplt.show()\n```\nproduces the same result shown above.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14710221\/python-matplotlib-dash-dot-dot-how-to","best_answers_votes":63,"tags":["python","matplotlib","linestyle"],"question_length":736,"response_length":443,"tag_count":3} {"question":"Plotting results of Pandas GroupBy I'm starting to learn Pandas and am trying to find the most Pythonic (or panda-thonic?) ways to do certain tasks. Suppose we have a DataFrame with columns A, B, and C. Column A contains boolean values: each row's A value is either true or false. Column B has some important values we want to plot. What we want to discover is the subtle distinctions between B values for rows that have A set to false, vs. B values for rows that have A is true. In other words, how can I group by the value of column A (either true or false), then plot the values of column B for both groups on the same graph? The two datasets should be colored differently to be able to distinguish the points. Next, let's add another feature to this program: before graphing, we want to compute another value for each row and store it in column D. This value is the mean of all data stored in B for the entire five minutes before a record - but we only include rows that have the same boolean value stored in A. In other words, if I have a row where A=True and time=t, I want to compute a value for column D that is the mean of B for all records from time t-5 to t that have the same A=True. In this case, how can we execute the groupby on values of A, then apply this computation to each individual group, and finally plot the D values for the two groups?","response":"I think @herrfz hit all the high points. I'll just flesh out the details: \n```\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nsin = np.sin\ncos = np.cos\npi = np.pi\nN = 100\n\nx = np.linspace(0, pi, N)\na = sin(x)\nb = cos(x)\n\ndf = pd.DataFrame({\n 'A': [True]*N + [False]*N,\n 'B': np.hstack((a,b))\n })\n\nfor key, grp in df.groupby(['A']):\n plt.plot(grp['B'], label=key)\n grp['D'] = pd.rolling_mean(grp['B'], window=5) \n plt.plot(grp['D'], label='rolling ({k})'.format(k=key))\nplt.legend(loc='best') \nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15465645\/plotting-results-of-pandas-groupby","best_answers_votes":43,"tags":["python","matplotlib","group-by","pandas","data-analysis"],"question_length":1360,"response_length":556,"tag_count":5} {"question":"How to change the tables' fontsize I'm drawing a table with matplotlib.axes.Axes.table like this: \n```\nsub_axes.table(cellText=table_vals,\n colWidths = [0.15, 0.25],\n rowLabels=row_labels,\n loc='right')\n```\nI'd like to change the fontsize of table's content, and found there is a fontsize property. 
So it becomes: \n```\nsub_axes.table(cellText=table_vals,\n colWidths = [0.15, 0.25],\n rowLabels=row_labels,\n fontsize=12,\n loc='right')\n```\nBut when I execute the code, I got an error: \n```\nTypeError: table() got an unexpected keyword argument 'fontsize'\n```\nIs this property deprecated? And how can I change the fontsize of table with pyplot?","response":"I think the documentation is either hinting at a parameter-to-be (notice fontsize is not a link like the other parameters) or perhaps is a bit misleading at the moment. There is no fontsize parameter. Digging through the source code, I found the Table.set_fontsize method: \n```\ntable = sub_axes.table(cellText=table_vals,\n colWidths = [0.15, 0.25],\n rowLabels=row_labels,\n loc='right')\ntable.set_fontsize(14)\ntable.scale(1.5, 1.5) # may help\n```\nHere is an example with a grossly exaggerated fontsize just to show the effect. \n```\nimport matplotlib.pyplot as plt\n# Based on http:\/\/stackoverflow.com\/a\/8531491\/190597 (Andrey Sobolev)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\ny = [1, 2, 3, 4, 5, 4, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1] \ncol_labels = ['col1', 'col2', 'col3']\nrow_labels = ['row1', 'row2', 'row3']\ntable_vals = [[11, 12, 13], [21, 22, 23], [31, 32, 33]]\n\nthe_table = plt.table(cellText=table_vals,\n colWidths=[0.1] * 3,\n rowLabels=row_labels,\n colLabels=col_labels,\n loc='center right')\nthe_table.auto_set_font_size(False)\nthe_table.set_fontsize(24)\nthe_table.scale(2, 2)\n\nplt.plot(y)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15514005\/how-to-change-the-tables-fontsize","best_answers_votes":66,"tags":["python","matplotlib"],"question_length":703,"response_length":1264,"tag_count":2} {"question":"Plot a plane based on a normal vector and a point in Matlab or matplotlib How would one go plotting a plane in matlab or matplotlib from a normal vector and a point?","response":"For all the copy\/pasters out there, here is similar code for Python using matplotlib: \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\npoint = np.array([1, 2, 3])\nnormal = np.array([1, 1, 2])\n\n# a plane is a*x+b*y+c*z+d=0\n# [a,b,c] is the normal. Thus, we have to calculate\n# d and we're set\nd = -point.dot(normal)\n\n# create x,y\nxx, yy = np.meshgrid(range(10), range(10))\n\n# calculate corresponding z\nz = (-normal[0] * xx - normal[1] * yy - d) * 1. \/normal[2]\n\n# plot the surface\nplt3d = plt.figure().gca(projection='3d')\nplt3d.plot_surface(xx, yy, z)\nplt.show()\n```\nEDIT For newer matplotlib>3.6: \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\npoint = np.array([1, 2, 3])\nnormal = np.array([1, 1, 2])\n\n# a plane is a*x+b*y+c*z+d=0\n# [a,b,c] is the normal. Thus, we have to calculate\n# d and we're set\nd = -point.dot(normal)\n\n# create x,y\nxx, yy = np.meshgrid(range(10), range(10))\n\n# calculate corresponding z\nz = (-normal[0] * xx - normal[1] * yy - d) * 1. 
\/normal[2]\n\n# plot the surface\nfig = plt.figure()\nplt3d = fig.add_subplot(projection='3d')\nplt3d.plot_surface(xx, yy, z)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3461869\/plot-a-plane-based-on-a-normal-vector-and-a-point-in-matlab-or-matplotlib","best_answers_votes":59,"tags":["matlab","matplotlib","plot","scipy"],"question_length":165,"response_length":1198,"tag_count":4} {"question":"How to add a variable to Python plt.title? [duplicate] This question already has answers here: How do I put a variable\u2019s value inside a string (interpolate it into the string)? (9 answers) Closed 5 months ago. I am trying to plot lots of diagrams, and for each diagram, I want to use a variable to label them. How can I add a variable to plt.title? For example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.figure(1)\nplt.ylabel('y')\nplt.xlabel('x')\n\nfor t in xrange(50, 61):\n plt.title('f model: T=t')\n\n for i in xrange(4, 10):\n plt.plot(1.0 \/ i, i ** 2, 'ro')\n\n plt.legend\n plt.show()\n```\nIn the argument of plt.title(), I want t to be variable changing with the loop.","response":"You can change a value in a string by using %. Documentation can be found here. For example: \n```\nnum = 2\nprint \"1 + 1 = %i\" % num # i represents an integer\n```\nThis will output: 1 + 1 = 2 You can also do this with floats and you can choose how many decimal place it will print: \n```\nnum = 2.000\nprint \"1.000 + 1.000 = %1.3f\" % num # f represents a float\n```\ngives: 1.000 + 1.000 = 2.000 Using this in your example to update t in the figure title: \n```\nplt.figure(1)\nplt.ylabel('y')\nplt.xlabel('x')\n\nfor t in xrange(50,61):\n plt.title('f model: T=%i' %t)\n\n for i in xrange(4,10):\n plt.plot(1.0\/i,i**2,'ro')\n\n plt.legend\n plt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43757820\/how-to-add-a-variable-to-python-plt-title","best_answers_votes":40,"tags":["python","python-3.x","matplotlib"],"question_length":704,"response_length":654,"tag_count":3} {"question":"ggplot styles in Python When I look at the plotting style in the Pandas documentation, the plots look different from the default one. It seems to mimic the ggplot \"look and feel\". Same thing with the seaborn's package. How can I load that style? (even if I am not using a notebook?)","response":"Update: If you have matplotlib >= 1.4, there is a new style module which has a ggplot style by default. To activate this, use: \n```\nfrom matplotlib import pyplot as plt\nplt.style.use('ggplot')\n```\nTo see all the available styles, you can check plt.style.available. Similarly, for seaborn styling you can do: \n```\nplt.style.use('seaborn-white')\n```\nor, you can use seaborn's own machinery to set up the styling: \n```\nimport seaborn as sns\nsns.set()\n```\nThe set() function has more options to select a specific style (see docs). Note that seaborn previously did the above automatically on import, but with the latest versions (>= 0.8) this is no longer the case. If you actually want a ggplot-like syntax in Python as well (and not only the styling), take a look at the plotnine package, which is a grammar of graphics implementation in Python with a syntax very similar to R's ggplot2. Note: the old answer mentioned to do pd.options.display.mpl_style = 'default' . 
This was however deprecated in pandas in favor of matplotlib's styling using plt.style(..), and in the meantime this functionality is even removed from pandas.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22543208\/ggplot-styles-in-python","best_answers_votes":57,"tags":["python","matplotlib","pandas","ipython"],"question_length":282,"response_length":1124,"tag_count":4} {"question":"Changing the formatting of a datetime axis I have a series whose index is datetime that I wish to plot. I want to plot the values of the series on the y axis and the index of the series on the x axis. The Series looks as follows: \n```\n2014-01-01 7\n2014-02-01 8\n2014-03-01 9\n2014-04-01 8\n...\n```\nI generate a graph using plt.plot(series.index, series.values). But the graph looks like: The problem is that I would like to have only year and month (yyyy-mm or 2016 March). However, the graph contains hours, minutes and seconds. How can I remove them so that I get my desired formatting?","response":"```\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\n# sample data\nN = 30\ndrange = pd.date_range(\"2014-01\", periods=N, freq=\"MS\")\nnp.random.seed(365) # for a reproducible example of values\nvalues = {'values':np.random.randint(1,20,size=N)}\ndf = pd.DataFrame(values, index=drange)\n\nfig, ax = plt.subplots()\nax.plot(df.index, df.values)\nax.set_xticks(df.index)\n\n# use formatters to specify major and minor ticks\nax.xaxis.set_major_formatter(mdates.DateFormatter(\"%Y-%m\"))\nax.xaxis.set_minor_formatter(mdates.DateFormatter(\"%Y-%m\"))\n_ = plt.xticks(rotation=90)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43968985\/changing-the-formatting-of-a-datetime-axis","best_answers_votes":46,"tags":["python","pandas","datetime","matplotlib","python-datetime"],"question_length":601,"response_length":619,"tag_count":5} {"question":"Fixing x axis scale and autoscale y axis I would like to plot only part of the array, fixing the x part, but letting the y part autoscale. I tried as shown below, but it does not work. Any suggestions? \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata=[np.arange(0,101,1),300-0.1*np.arange(0,101,1)]\n\nplt.figure()\n\nplt.scatter(data[0], data[1])\nplt.xlim([50,100])\nplt.autoscale(enable=True, axis='y')\n\nplt.show()\n```","response":"While Joe Kington certainly proposes the most sensible answer when he recommends that only the necessary data be plotted, there are situations where it would be best to plot all of the data and just zoom to a certain section. Additionally, it would be nice to have an \"autoscale_y\" function that only requires the axes object (i.e., unlike the answer here, which requires direct use of the data.) 
Here is a function that just rescales the y-axis based on the data that is in the visible x-region: \n```\ndef autoscale_y(ax,margin=0.1):\n \"\"\"This function rescales the y-axis based on the data that is visible given the current xlim of the axis.\n ax -- a matplotlib axes object\n margin -- the fraction of the total height of the y-data to pad the upper and lower ylims\"\"\"\n\n import numpy as np\n\n def get_bottom_top(line):\n xd = line.get_xdata()\n yd = line.get_ydata()\n lo,hi = ax.get_xlim()\n y_displayed = yd[((xd>lo) & (xd<hi))]\n h = np.max(y_displayed) - np.min(y_displayed)\n bot = np.min(y_displayed)-margin*h\n top = np.max(y_displayed)+margin*h\n return bot,top\n\n lines = ax.get_lines()\n bot,top = np.inf, -np.inf\n\n for line in lines:\n new_bot, new_top = get_bottom_top(line)\n if new_bot < bot: bot = new_bot\n if new_top > top: top = new_top\n\n ax.set_ylim(bot,top)\n```\nThis is something of a hack, and will probably not work in many situations, but for a simple plot, it works well. Here is a simple example using this function: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(-100,100,1000)\ny = x**2 + np.cos(x)*100\n\nfig,axs = plt.subplots(1,2,figsize=(8,5))\n\nfor ax in axs:\n ax.plot(x,y)\n ax.plot(x,y*2)\n ax.plot(x,y*10)\n ax.set_xlim(-10,10)\n\nautoscale_y(axs[1])\n\naxs[0].set_title('Rescaled x-axis')\naxs[1].set_title('Rescaled x-axis\\nand used \"autoscale_y\"')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29461608\/fixing-x-axis-scale-and-autoscale-y-axis","best_answers_votes":36,"tags":["python","pandas","matplotlib","yaxis","x-axis"],"question_length":428,"response_length":1916,"tag_count":5} {"question":"Python matplotlib.pyplot pie charts: How to remove the label on the left side? I plot a piechart using pyplot. \n```\nimport pylab\nimport pandas as pd\ntest = pd.Series(['male', 'male', 'male', 'male', 'female'], name=\"Sex\")\ntest = test.astype(\"category\")\ngroups = test.groupby([test]).agg(len)\ngroups.plot(kind='pie', shadow=True)\npylab.show()\n```\nThe result: However, I'm unable to remove the label on the left (marked red in the picture). I already tried \n```\nplt.axes().set_xlabel('')\n```\nand \n```\nplt.axes().set_ylabel('')\n```\nbut that did not work.","response":"You could just set the ylabel by calling pylab.ylabel: \n```\npylab.ylabel('')\n```\nor \n```\npylab.axes().set_ylabel('')\n```\nIn your example, plt.axes().set_ylabel('') will not work because you don't have import matplotlib.pyplot as plt in your code, so plt doesn't exist. Alternatively, the groups.plot command returns the Axes instance, so you could use that to set the ylabel: \n```\nax=groups.plot(kind='pie', shadow=True)\nax.set_ylabel('')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34094596\/python-matplotlib-pyplot-pie-charts-how-to-remove-the-label-on-the-left-side","best_answers_votes":29,"tags":["python","pandas","matplotlib","charts","pie-chart"],"question_length":551,"response_length":441,"tag_count":5} {"question":"How to plot a dashed line on seaborn lineplot? I'm simply trying to plot a dashed line using seaborn. This is the code I'm using and the output I'm getting \n```\nimport seaborn as sns\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nn = 11\nx = np.linspace(0,2,n)\ny = np.sin(2*np.pi*x)\n\nsns.lineplot(x,y, linestyle='--')\nplt.show()\n```\nWhat am I doing wrong? Thanks","response":"It seems that the linestyle= argument doesn't work with lineplot(), and the argument dashes= is a bit more complicated than it might seem.
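(Note: depending on your seaborn version this may have changed — newer releases forward extra keyword arguments to matplotlib, so the minimal sketch below might just work; treat that kwargs pass-through as an assumption to verify on your install.) \n```\n# hedged sketch: assumes lineplot() passes extra kwargs through to matplotlib's plot()\nsns.lineplot(x=x, y=y, linestyle='--')\nplt.show()\n```\nIf that is not available to you, read on.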
A (relatively) simple way of doing it might be to get a list of the Line2D objects on the plot using ax.lines and then set the linestyle manually: \n```\nimport seaborn as sns\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nn = 11\nx = np.linspace(0,2,n)\ny = np.sin(2*np.pi*x)\n\nax = sns.lineplot(x,y)\n\n# Might need to loop through the list if there are multiple lines on the plot\nax.lines[0].set_linestyle(\"--\")\n\nplt.show()\n```\nUpdate: It appears the dashes argument applies only when plotting multiple lines (usually using a pandas dataframe). Dashes are specified the same as in matplotlib, a tuple of (segment, gap) lengths. Therefore, you need to pass a list of tuples. \n```\nn = 100\nx = np.linspace(0,4,n)\ny1 = np.sin(2*np.pi*x)\ny2 = np.cos(2*np.pi*x)\n\ndf = pd.DataFrame(np.c_[y1, y2]) # modified @Elliots dataframe production\n\nax = sns.lineplot(data=df, dashes=[(2, 2), (2, 2)])\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/51963725\/how-to-plot-a-dashed-line-on-seaborn-lineplot","best_answers_votes":50,"tags":["python","matplotlib","seaborn"],"question_length":366,"response_length":1034,"tag_count":3} {"question":"Changing the color of matplotlib's violin plots Is there a way to change the color of the violin plots in matplotlib? The default color is this \"brownish\" color, which is not too bad, but I'd like to color e.g., the first 3 violins differently to highlight them. I don't find any parameter in the documentation. Any ideas or hacks to color the violins differently?","response":"matplotlib.pyplot.violinplot() says it returns: A dictionary mapping each component of the violinplot to a list of the corresponding collection instances created. The dictionary has the following keys: bodies: A list of the matplotlib.collections.PolyCollection instances containing the filled area of each violin. [...among others...] Methods of PolyCollections include: set_color(c) which sets both the facecolor and edgecolor, set_facecolor(c) and set_edgecolor(c) all of which take a \"matplotlib color arg or sequence of rgba tuples\" So, it looks like you could just loop through the result's body list and modify the facecolor of each: \n```\nviolin_parts = plt.violinplot(...)\n\nfor pc in violin_parts['bodies']:\n pc.set_facecolor('red')\n pc.set_edgecolor('black')\n```\nIt is a bit strange though that you can't set this when creating it like the common plot types. I'd guess it's probably because the operation creates so many bits (the aforementioned PolyCollection along with 5 other LineCollections), that additional arguments would be ambiguous.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26291479\/changing-the-color-of-matplotlibs-violin-plots","best_answers_votes":54,"tags":["python","matplotlib","violin-plot"],"question_length":364,"response_length":1058,"tag_count":3} {"question":"How to use `GridSpec()` with `subplots()` There are several (example) SO posts dealing with how to use GridSpec with subplotting in general. 
What I am trying to achieve, to no avail, is to use GridSpec with subplots, something like this, where I can replace the actual array and list indexes with some loop-controlled index: \n```\ngs = gridspec.GridSpec(4, 1, height_ratios=[2, 2, 1, 1]) \ntPlot, axes = plt.subplots(4, sharex=True, sharey=False)\ntPlot.suptitle(node, fontsize=20)\naxes[0].plot(targetDay[0], gs[0])\naxes[1].plot(targetDay[1], gs[1])\naxes[2].scatter(targetDay[2], gs[2])\naxes[3].plot(targetDay[3], gs[3])\n```\nNeedless to say, this code does not work; it is just an example.","response":"Rather than calling gridspec.GridSpec before subplots, you can send the kwargs to GridSpec from within your subplots call, using the gridspec_kw argument. From the docs: gridspec_kw : dict Dict with keywords passed to the GridSpec constructor used to create the grid the subplots are placed on. So, for example: \n```\nimport matplotlib.pyplot as plt\n\ntPlot, axes = plt.subplots(\n    nrows=4, ncols=1, sharex=True, sharey=False, \n    gridspec_kw={'height_ratios':[2,2,1,1]}\n    )\n\ntPlot.suptitle('node', fontsize=20)\n\naxes[0].plot(range(10),'ro-') \naxes[1].plot(range(10),'bo-') \naxes[2].plot(range(10),'go-') \naxes[3].plot(range(10),'mo-') \n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34268742\/how-to-use-gridspec-with-subplots","best_answers_votes":72,"tags":["python","python-2.7","matplotlib"],"question_length":707,"response_length":667,"tag_count":3} {"question":"Line plot with arrows I have a line graph that I want to plot using arrows instead of lines. That is, the line between successive pairs of points should be an arrow going from the first point to the second point. I know of the arrow function, but that only seems to do individual arrows. Before I work out a way to try and use this to do a whole plot, is there a nicer way to do it?","response":"You can do this with quiver, but it's a little tricky to get the keyword arguments right. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 2*np.pi, 10)\ny = np.sin(x)\n\nplt.figure()\nplt.quiver(x[:-1], y[:-1], x[1:]-x[:-1], y[1:]-y[:-1], scale_units='xy', angles='xy', scale=1)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7519467\/line-plot-with-arrows","best_answers_votes":70,"tags":["python","plot","matplotlib"],"question_length":382,"response_length":317,"tag_count":3} {"question":"Matplotlib: Horizontal Linelength in Legend Let's say I'm plotting a line with matplotlib and add a legend. Within the legend it says ------ Label. I find the default horizontal length of this line a bit too long when plotting small figure sizes for print. Is there a property to set ------ Label to --- Label?","response":"When you create your legend add the handlelength keyword (doc) which sets the length of the '---' (which is called the handle).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13766444\/matplotlib-horizontal-linelength-in-legend","best_answers_votes":62,"tags":["matplotlib"],"question_length":309,"response_length":127,"tag_count":1} {"question":"Can matplotlib add metadata to saved figures? I want to be able to ascertain the provenance of the figures I create using matplotlib, i.e. to know which version of my code and data created these figures. (See this essay for more on provenance.)
I imagine the most straightforward approach would be to add the revision numbers of the code and data to the metadata of the saved figures, or as comments in a postscript file for example. Is there any easy way to do this in Matplotlib? The savefig function doesn't seem to be capable of this but has someone come up with a workable solution?","response":"I don't know of a way using matplotlib, but you can add metadata to png's with PIL: \n```\nf = \"test.png\"\nMETADATA = {\"version\":\"1.0\", \"OP\":\"ihuston\"}\n\n# Create a sample image\nimport pylab as plt\nimport numpy as np\nX = np.random.random((50,50))\nplt.imshow(X)\nplt.savefig(f)\n\n# Use PIL to save some image metadata\nfrom PIL import Image\nfrom PIL import PngImagePlugin\n\nim = Image.open(f)\nmeta = PngImagePlugin.PngInfo()\n\nfor x in METADATA:\n    meta.add_text(x, METADATA[x])\nim.save(f, \"png\", pnginfo=meta)\n\nim2 = Image.open(f)\nprint im2.info\n```\nThis gives: \n```\n{'version': '1.0', 'OP': 'ihuston'}\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10532614\/can-matplotlib-add-metadata-to-saved-figures","best_answers_votes":21,"tags":["python","matplotlib","metadata"],"question_length":587,"response_length":598,"tag_count":3} {"question":"matplotlib matshow labels I started using matplotlib a month ago, so I'm still learning. I'm trying to do a heatmap with matshow. My code is the following: \n```\ndata = numpy.array(a).reshape(4, 4) \ncax = ax.matshow(data, interpolation='nearest', cmap=cm.get_cmap('PuBu'), norm=LogNorm()) \ncbar = fig.colorbar(cax)\n\nax.set_xticklabels(alpha) \nax.set_yticklabels(alpha)\n```\nwhere alpha is a model from django with 4 fields: 'ABC', 'DEF', 'GHI', 'JKL'. The thing is, I don't know why, but the label 'ABC' doesn't appear, leaving the last cell without a label. If someone has a clue how to modify my script so that 'ABC' appears, I would be grateful :)","response":"What's happening is that the xticks actually extend outside of the displayed figure when using matshow. (I'm not quite sure exactly why this is. I've almost never used matshow, though.) To demonstrate this, look at the output of ax.get_xticks(). In your case, it's array([-1.,  0.,  1.,  2.,  3.,  4.]). Therefore, when you set the xtick labels, \"ABC\" is at -1, and isn't displayed on the figure. The easiest solution is just to prepend a blank label to your list of labels, e.g. \n```\nax.set_xticklabels(['']+alpha)\nax.set_yticklabels(['']+alpha)\n```\nAs a full example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nalpha = ['ABC', 'DEF', 'GHI', 'JKL']\n\ndata = np.random.random((4,4))\n\nfig = plt.figure()\nax = fig.add_subplot(111)\ncax = ax.matshow(data, interpolation='nearest')\nfig.colorbar(cax)\n\nax.set_xticklabels(['']+alpha)\nax.set_yticklabels(['']+alpha)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3529666\/matplotlib-matshow-labels","best_answers_votes":71,"tags":["python","django","matplotlib","label"],"question_length":658,"response_length":879,"tag_count":4} {"question":"Why is matplotlib plotting my circles as ovals? Is there a way to get matplotlib to plot a perfect circle? They look more like ovals.","response":"Just to expand on DSM's correct answer. By default, plots have more pixels along one axis over the other. When you add a circle, it's traditionally added in data units.
If your axes have a symmetric range, that means one step along the x axis will involve a different number of pixels than one step along your y axis. So a symmetric circle in data units is asymmetric in your pixel units (what you actually see). As DSM correctly pointed out, you can force the x and y axes to have an equal number of pixels per data unit. This is done using the plt.axis(\"equal\") or ax.axis(\"equal\") methods (where ax is an instance of an Axes). You can also draw an Ellipse such that it is appropriately scaled to look like a circle on your plot. Here's an example of such a case: \n```\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse, Circle\n\n\nfig = plt.figure()\nax1 = fig.add_subplot(211)\n# calculate asymmetry of x and y axes:\nx0, y0 = ax1.transAxes.transform((0, 0)) # lower left in pixels\nx1, y1 = ax1.transAxes.transform((1, 1)) # upper right in pixels\ndx = x1 - x0\ndy = y1 - y0\nmaxd = max(dx, dy)\nwidth = .15 * maxd \/ dx\nheight = .15 * maxd \/ dy\n\n# a circle you expect to be a circle, but it is not\nax1.add_artist(Circle((.5, .5), .15))\n# an ellipse you expect to be an ellipse, but it's a circle\nax1.add_artist(Ellipse((.75, .75), width, height))\nax2 = fig.add_subplot(212)\n\nax2.axis('equal')\n# a circle you expect to be a circle, and it is\nax2.add_artist(Circle((.5, .5), .15))\n# an ellipse you expect to be an ellipse, and it is\nax2.add_artist(Ellipse((.75, .75), width, height))\n\nfig.savefig('perfectCircle1.png')\n```\nresulting in this figure: Alternatively, you can adjust your figure so that the Axes are square: \n```\n# calculate dimensions of axes 1 in figure units\nx0, y0, dx, dy = ax1.get_position().bounds\nmaxd = max(dx, dy)\nwidth = 6 * maxd \/ dx\nheight = 6 * maxd \/ dy\n\nfig.set_size_inches((width, height))\n\nfig.savefig('perfectCircle2.png')\n```\nresulting in: Notice how the second axes, which has the axis(\"equal\") option, now has the same range for the x and y axes. The figure has been scaled so that the data units of each are represented by the same number of pixels. You can also adjust your axes to be square, even if the figure is not. Or you can change the default transform for the Circle to None, which means the units used are pixels. I'm having difficulty successfully doing this at the moment (the circle is a circle, but not where I want it to be).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9230389\/why-is-matplotlib-plotting-my-circles-as-ovals","best_answers_votes":59,"tags":["python","matplotlib"],"question_length":133,"response_length":2484,"tag_count":2} {"question":"Matplotlib chart does not display in PyCharm I run the following code in PyCharm 3.4.1, and it highlights %matplotlib inline as a syntax error. If I delete the first line and run, I expect it to show me some charts, but it finishes normally with Process finished with exit code 0, and no chart is shown. My question is: 1. What is %matplotlib inline used for? 2. How do I get PyCharm to show the matplotlib chart? \n```\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj.
prior.\nfor k, N in enumerate(n_trials):\n    sx = plt.subplot(len(n_trials) \/ 2, 2, k + 1)\n    plt.xlabel(\"$p$, probability of heads\") \\\n        if k in [0, len(n_trials) - 1] else None\n    plt.setp(sx.get_yticklabels(), visible=False)\n    heads = data[:N].sum()\n    y = dist.pdf(x, 1 + heads, 1 + N - heads)\n    plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n    plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n    plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n    leg = plt.legend()\n    leg.get_frame().set_alpha(0.4)\n    plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n             y=1.02,\n             fontsize=14)\n\nplt.tight_layout()\n```","response":"The % notation is for magic functions. The particular magic function and argument you reference, %matplotlib inline, is meant for an IPython notebook session. You will get a syntax error using magic functions in a normal python session. The %matplotlib magic function is meant to specify a backend for matplotlib and the argument inline will produce an error if you are not in an IPython notebook session. To show your plot you should use plt.show for an interactive window or plt.savefig to save it to file. For example, \n```\nplt.show()\n```\nor \n```\nplt.savefig( 'myfig.png' )\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25163593\/matplotlib-chart-does-not-display-in-pycharm","best_answers_votes":55,"tags":["python","matplotlib","pycharm"],"question_length":1500,"response_length":580,"tag_count":3} {"question":"Matplotlib savefig does not save axes I'm trying to save a figure that works fine in IPython inline but does not save the figure to disk with the axes and titles included. I am using TKAgg backend by default in matplotlibrc. Any ideas what might be going wrong here? I have clearly set the xlabel and the tick marks work correctly in IPython inline plot. \n```\nimport matplotlib.pylab as plt \nx = [1,2,3,3]\ny = map(lambda(x): x * 2, x)\nfig = plt.figure()\nax = fig.add_axes([0,0,1,1])\nax.set_title(\"bleh\")\nax.set_xlabel(\"xlabel\")\nax.plot(x, y, 'r--')\nfig.savefig(\"fig.png\")\n```","response":"Defining fig = plt.figure(figsize=(15,10)) at the beginning, saving the file as .jpg, and setting bbox_inches='tight' solved the issue for me. \n```py\nplt.savefig('filename.jpg',bbox_inches='tight', dpi=150)\n```\nbbox_inches='tight' seems to fix cropping issues but it didn't work for .png.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19576317\/matplotlib-savefig-does-not-save-axes","best_answers_votes":29,"tags":["python","matplotlib","savefig"],"question_length":576,"response_length":288,"tag_count":3} {"question":"Adding a background image to a plot Say I am plotting a set of points with an image as a background. I've used the Lena image in the example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.misc import imread\n\nnp.random.seed(0)\nx = np.random.uniform(0.0,10.0,15)\ny = np.random.uniform(0.0,10.0,15)\nimg = imread(\"lena.jpg\")\nplt.scatter(x,y,zorder=1)\nplt.imshow(img,zorder=0)\nplt.show()\n```\nThis gives me the points plotted on top of the image. My question is: How can I specify the corner coordinates of the image in the plot? Let's say I'd like the bottom-left corner to be at x, y = 0.5, 1.0 and the top-right corner to be at x, y = 8.0, 7.0.","response":"Use the extent keyword of imshow.
The order of the argument is [left, right, bottom, top] \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(0)\nx = np.random.uniform(0.0,10.0,15)\ny = np.random.uniform(0.0,10.0,15)\n\ndatafile = 'lena.jpg'\nimg = plt.imread(datafile)\nplt.scatter(x,y,zorder=1)\nplt.imshow(img, zorder=0, extent=[0.5, 8.0, 1.0, 7.0])\nplt.show()\n```\nFor cases where it's desired to have an image in a small area of the scatter plot, change the order of the plots (.imshow then .scatter) and change the extent values. \n```py\nplt.imshow(img, zorder=0, extent=[3.0, 5.0, 3.0, 4.50])\nplt.scatter(x, y, zorder=1)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15160123\/adding-a-background-image-to-a-plot","best_answers_votes":49,"tags":["python","numpy","matplotlib"],"question_length":624,"response_length":653,"tag_count":3} {"question":"Pandas: Adding new column to dataframe which is a copy of the index column I have a dataframe which I want to plot with matplotlib, but the index column is the time and I cannot plot it. This is the dataframe (df3): but when I try the following: \n```\nplt.plot(df3['magnetic_mag mean'], df3['YYYY-MO-DD HH-MI-SS_SSS'], label='FDI')\n```\nI'm getting an error obviously: \n```\nKeyError: 'YYYY-MO-DD HH-MI-SS_SSS'\n```\nSo what I want to do is to add a new extra column to my dataframe (named 'Time) which is just a copy of the index column. How can I do it? This is the entire code: \n```\n#Importing the csv file into df\ndf = pd.read_csv('university2.csv', sep=\";\", skiprows=1)\n\n#Changing datetime\ndf['YYYY-MO-DD HH-MI-SS_SSS'] = pd.to_datetime(df['YYYY-MO-DD HH-MI-SS_SSS'], \n format='%Y-%m-%d %H:%M:%S:%f')\n\n#Set index from column\ndf = df.set_index('YYYY-MO-DD HH-MI-SS_SSS')\n\n#Add Magnetic Magnitude Column\ndf['magnetic_mag'] = np.sqrt(df['MAGNETIC FIELD X (\u03bcT)']**2 + df['MAGNETIC FIELD Y (\u03bcT)']**2 + df['MAGNETIC FIELD Z (\u03bcT)']**2)\n\n#Subtract Earth's Average Magnetic Field from 'magnetic_mag'\ndf['magnetic_mag'] = df['magnetic_mag'] - 30\n\n#Copy interesting values\ndf2 = df[[ 'ATMOSPHERIC PRESSURE (hPa)',\n 'TEMPERATURE (C)', 'magnetic_mag']].copy()\n\n#Hourly Average and Standard Deviation for interesting values \ndf3 = df2.resample('H').agg(['mean','std'])\ndf3.columns = [' '.join(col) for col in df3.columns]\n\ndf3.reset_index()\nplt.plot(df3['magnetic_mag mean'], df3['YYYY-MO-DD HH-MI-SS_SSS'], label='FDI')\n```\nThank you !!","response":"I think you need reset_index: \n```\ndf3 = df3.reset_index()\n```\nPossible solution, but I think inplace is not good practice, check this and this: \n```\ndf3.reset_index(inplace=True)\n```\nBut if you need new column, use: \n```\ndf3['new'] = df3.index\n```\nI think you can read_csv better: \n```\ndf = pd.read_csv('university2.csv', \n sep=\";\", \n skiprows=1,\n index_col='YYYY-MO-DD HH-MI-SS_SSS',\n parse_dates='YYYY-MO-DD HH-MI-SS_SSS') #if doesnt work, use pd.to_datetime\n```\nAnd then omit: \n```\n#Changing datetime\ndf['YYYY-MO-DD HH-MI-SS_SSS'] = pd.to_datetime(df['YYYY-MO-DD HH-MI-SS_SSS'], \n format='%Y-%m-%d %H:%M:%S:%f')\n#Set index from column\ndf = df.set_index('YYYY-MO-DD HH-MI-SS_SSS')\n```\nEDIT: If MultiIndex or Index is from groupby operation, possible solutions are: \n```\ndf = pd.DataFrame({'A':list('aaaabbbb'),\n 'B':list('ccddeeff'),\n 'C':range(8),\n 'D':range(4,12)})\nprint (df)\n A B C D\n0 a c 0 4\n1 a c 1 5\n2 a d 2 6\n3 a d 3 7\n4 b e 4 8\n5 b e 5 9\n6 b f 6 10\n7 b f 7 11\n\ndf1 = 
df.groupby(['A','B']).sum()\nprint (df1)\n C D\nA B \na c 1 9\n d 5 13\nb e 9 17\n f 13 21\n```\nAdd parameter as_index=False: \n```\ndf2 = df.groupby(['A','B'], as_index=False).sum()\nprint (df2)\n A B C D\n0 a c 1 9\n1 a d 5 13\n2 b e 9 17\n3 b f 13 21\n```\nOr add reset_index: \n```\ndf2 = df.groupby(['A','B']).sum().reset_index()\nprint (df2)\n A B C D\n0 a c 1 9\n1 a d 5 13\n2 b e 9 17\n3 b f 13 21\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36932759\/pandas-adding-new-column-to-dataframe-which-is-a-copy-of-the-index-column","best_answers_votes":41,"tags":["python","pandas","matplotlib"],"question_length":1578,"response_length":1653,"tag_count":3} {"question":"Importing an svg file into a matplotlib figure I like to produce high quality plots and therefore avoid rasterized graphics as much as possible. I am trying to import an svg file on to a matplotlib figure: \n```\nimport matplotlib.pyplot as plt\nearth = plt.imread('.\/gfx\/earth.svg')\nfig, ax = plt.subplots()\nim = ax.imshow(earth)\nplt.show()\n```\nThis works with png perfectly. Can somebody tell me how to do it with svg or at least point my to proper documentation. I know that a similar question has been asked (but not answered): here. Has anything changed since? P.S. I know that I could just export a high resolution png and achieve a similar effect. This is not the solution I am looking for. Here is the image I would like to import: .","response":"Maybe what you are looking for is svgutils \n```\nimport svgutils.compose as sc\nfrom IPython.display import SVG # \/!\\ note the 'SVG' function also in svgutils.compose\nimport numpy as np\n\n# drawing a random figure on top of your SVG\nfig, ax = plt.subplots(1, figsize=(4,4))\nax.plot(np.sin(np.linspace(0,2.*np.pi)), np.cos(np.linspace(0,2.*np.pi)), 'k--', lw=2.)\nax.plot(np.random.randn(20)*.3, np.random.randn(20)*.3, 'ro', label='random sampling')\nax.legend()\nax2 = plt.axes([.2, .2, .2, .2])\nax2.bar([0,1], [70,30])\nplt.xticks([0.5,1.5], ['water ', ' ground'])\nplt.yticks([0,50])\nplt.title('ratio (%)')\nfig.savefig('cover.svg', transparent=True)\n# here starts the assembling using svgutils \nsc.Figure(\"8cm\", \"8cm\", \n sc.Panel(sc.SVG(\".\/Worldmap_northern.svg\").scale(0.405).move(36,29)),\n sc.Panel(sc.SVG(\"cover.svg\"))\n ).save(\"compose.svg\")\nSVG('compose.svg')\n```\nOutput:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31452451\/importing-an-svg-file-into-a-matplotlib-figure","best_answers_votes":37,"tags":["python","matplotlib","svg","python-imaging-library"],"question_length":745,"response_length":880,"tag_count":4} {"question":"Is it possible to do additive blending with matplotlib? When dealing with overlapping high density scatter or line plots of different colors it can be convenient to implement additive blending schemes, where the RGB colors of each marker add together to produce the final color in the canvas. This is a common operation in 2D and 3D render engines. However, in Matplotlib I've only found support for alpha\/opacity blending. Is there any roundabout way of doing it or am I stuck with rendering to bitmap and then blending them in some paint program? Edit: Here's some example code and a manual solution. 
This will produce two partially overlapping random distributions: \n```\nx1 = randn(1000)\ny1 = randn(1000)\nx2 = randn(1000) * 5\ny2 = randn(1000)\nscatter(x1,y1,c='b',edgecolors='none')\nscatter(x2,y2,c='r',edgecolors='none')\n```\nThis will produce in matplotlib the following: As you can see, there are some overlapping blue points that are occluded by red points and we would like to see them. By using alpha\/opacity blending in matplotlib, you can do: \n```\nscatter(x1,y1,c='b',edgecolors='none',alpha=0.5)\nscatter(x2,y2,c='r',edgecolors='none',alpha=0.5)\n```\nWhich will produce the following: But what I really want is the following: I can do it manually by rendering each plot independently to a bitmap: \n```\nxlim = plt.xlim()\nylim = plt.ylim()\nscatter(x1,y1,c='b',edgecolors='none')\nplt.xlim(xlim)\nplt.ylim(ylim)\nscatter(x2,y2,c='r',edgecolors='none')\nplt.xlim(xlim)\nplt.ylim(ylim)\nplt.savefig(r'scatter_blue.png',transparent=True)\nplt.savefig(r'scatter_red.png',transparent=True)\n```\nWhich gives me the following images: What you can do then is load them as independent layers in Paint.NET\/PhotoShop\/gimp and just additive blend them. Now ideal would be to be able to do this programmatically in Matplotlib, since I'll be processing hundreds of these!","response":"If you only need an image as the result, you can get the canvas buffer as a numpy array, and then do the blending, here is an example: \n```\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfig, ax = plt.subplots()\nax.scatter(x1,y1,c='b',edgecolors='none')\nax.set_xlim(-4, 4)\nax.set_ylim(-4, 4)\nax.patch.set_facecolor(\"none\")\nax.patch.set_edgecolor(\"none\")\nfig.canvas.draw()\n\nw, h = fig.canvas.get_width_height()\nimg = np.frombuffer(fig.canvas.buffer_rgba(), np.uint8).reshape(h, w, -1).copy()\n\nax.clear()\nax.scatter(x2,y2,c='r',edgecolors='none')\nax.set_xlim(-4, 4)\nax.set_ylim(-4, 4)\nax.patch.set_facecolor(\"none\")\nax.patch.set_edgecolor(\"none\")\nfig.canvas.draw()\n\nimg2 = np.frombuffer(fig.canvas.buffer_rgba(), np.uint8).reshape(h, w, -1).copy()\n\nimg[img[:, :, -1] == 0] = 0\nimg2[img2[:, :, -1] == 0] = 0\n\nfig.clf()\n\nplt.imshow(np.maximum(img, img2))\nplt.subplots_adjust(0, 0, 1, 1)\nplt.axis(\"off\")\nplt.show()\n```\nthe result:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26702176\/is-it-possible-to-do-additive-blending-with-matplotlib","best_answers_votes":16,"tags":["python","matplotlib","blending","color-blending"],"question_length":1854,"response_length":935,"tag_count":4} {"question":"Barplot savefig() returning an AttributeError I'm converting an iPython notebook to a python script, just trying to output the results of a couple Seaborn plots as png files. Code: \n```\nimport seaborn as sns\n\n...\n\nsns.set_style(\"whitegrid\")\nax = sns.barplot(x=range(1,11), y=[ (x\/nrows)*100 for x in addr_pop ], palette=\"Blues_d\")\nax.savefig(\"html\/addr_depth.png\")\n```\nDon't worry about the variables, they're populated as expected, and I get a great-looking chart in iPyNB. Running the code within a script, however, yields RuntimeError: Invalid DISPLAY variable. Following another thread, I modified the code, putting this at the top of the script: \n```\nimport matplotlib\nmatplotlib.use('Agg')\n```\nAnd tried again. 
This time, it doesn't seem like the savefig() method is available for the plot at all: \n```\nAttributeError: 'AxesSubplot' object has no attribute 'savefig'\n```\nAll the results searching out this error are related to pandas and a plot that is already being displayed. I'm just trying to get Seaborn to output the fig to a file, ideally without displaying it at all. Any help is appreciated.","response":"I solved the issue by changing \n```\nax.savefig('file.png')\n```\nto \n```\nax.figure.savefig('file.png')\n```\nI guess accessing the figure directly is one way to get to the savefig() method for the barplot. @WoodChopper also has a working solution, but it requires another import statement, and utilizing pyplot's savefig() directly. Either solution does require setting matplotlib.use('Agg') to get around the DISPLAY variable error. As the referenced post noted, this has to be set before importing other matplotlib libraries.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33616557\/barplot-savefig-returning-an-attributeerror","best_answers_votes":122,"tags":["python","matplotlib","seaborn"],"question_length":1106,"response_length":523,"tag_count":3} {"question":"How can I draw scatter trend line? I want to draw a scatter trend line on matplot. How can I do that? Python \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\ncsv = pd.read_csv('\/tmp\/test.csv')\ndata = csv[['fee', 'time']]\nx = data['fee']\ny = data['time']\nplt.scatter(x, y)\nplt.show()\n```\nCSV \n```\nfee,time\n100,650\n90,700\n80,860\n70,800\n60,1000\n50,1200\n```\ntime is integer value. Scatter chart","response":"I'm sorry I found the answer by myself. How to add trendline in python matplotlib dot (scatter) graphs? Python \n```\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\ncsv = pd.read_csv('\/tmp\/test.csv')\ndata = csv[['fee', 'time']]\nx = data['fee']\ny = data['time']\nplt.scatter(x, y)\n\nz = np.polyfit(x, y, 1)\np = np.poly1d(z)\nplt.plot(x,p(x),\"r--\")\n\nplt.show()\n```\nChart","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41635448\/how-can-i-draw-scatter-trend-line","best_answers_votes":80,"tags":["python","pandas","numpy","matplotlib","trendline"],"question_length":398,"response_length":387,"tag_count":5} {"question":"How to disable the minor ticks of log-plot in Matplotlib? Here is a simple plot: 1) How to disable the ticks? 2) How to reduce their number? Here is a sample code: \n```\nfrom pylab import *\nimport numpy as np\n\nx = [5e-05, 5e-06, 5e-07, 5e-08, 5e-09, 5e-10]\ny = [-13, 14, 100, 120, 105, 93]\n\ndef myfunc(x,p):\n sl,yt,yb,ec=p \n y = yb + (yt-yb)\/(1+np.power(10, sl*(np.log10(x)-np.log10(ec))))\n return y\n\nxp = np.power(10, np.linspace(np.log10(min(x)\/10), np.log10(max(x)*10), 100))\n\npxp=myfunc(xp, [1,100,0,1e-6])\nsubplot(111,axisbg=\"#dfdfdf\")\nplt.plot(x, y, '.', xp, pxp, 'g-', linewidth=1) \nplt.xscale('log')\n\nplt.grid(True,ls=\"-\", linewidth=0.4, color=\"#ffffff\", alpha=0.5)\n\n\nplt.draw()\nplt.show()\n```\nWhich produces:","response":"```\nplt.minorticks_off()\n```\nTurns em off! To change the number of them\/position them, you can use the subsx parameter. like this: \n```\nplt.xscale('log', subsx=[2, 3, 4, 5, 6, 7, 8, 9])\n```\nFrom the docs: subsx\/subsy: Where to place the subticks between each major tick. Should be a sequence of integers. 
For example, in a log10 scale: [2, 3, 4, 5, 6, 7, 8, 9] will place 8 logarithmically spaced minor ticks between each major tick.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10781077\/how-to-disable-the-minor-ticks-of-log-plot-in-matplotlib","best_answers_votes":58,"tags":["python","matplotlib"],"question_length":730,"response_length":433,"tag_count":2} {"question":"Matplotlib graphic image to base64 Problem : Need to transform a graphic image of matplotlib to a base64 image Current Solution : Save the matplot image in a cache folder and read it with read() method and then convert to base64 New Problem : Annoyance : Need a workaround so I dont need to save the graphic as image in any folder. I want to just use the image in the memory. Doing unnecessary I\/O is a bad practice. \n```\ndef save_single_graphic_data(data, y_label=\"Loss\", x_label=\"Epochs\", save_as=\"data.png\"):\n total_epochs = len(data)\n plt.figure()\n plt.clf()\n\n plt.plot(total_epochs, data)\n\n ax = plt.gca()\n ax.ticklabel_format(useOffset=False)\n\n plt.ylabel(y_label)\n plt.xlabel(x_label)\n\n if save_as is not None:\n plt.savefig(save_as)\n\n plt.savefig(\"cache\/cached1.png\")\n\n cached_img = open(\"cache\/cached1.png\")\n\n cached_img_b64 = base64.b64encode(cached_img.read())\n\n os.remove(\"cache\/cached1.png\")\n\n return cached_img_b64\n```","response":"```\nimport cStringIO\nmy_stringIObytes = cStringIO.StringIO()\nplt.savefig(my_stringIObytes, format='jpg')\nmy_stringIObytes.seek(0)\nmy_base64_jpgData = base64.b64encode(my_stringIObytes.read())\n```\n[edit] in python3 it should be \n```\nimport io\nmy_stringIObytes = io.BytesIO()\nplt.savefig(my_stringIObytes, format='jpg')\nmy_stringIObytes.seek(0)\nmy_base64_jpgData = base64.b64encode(my_stringIObytes.read()).decode()\n```\nI think at least ... based on the documentation http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.savefig","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38061267\/matplotlib-graphic-image-to-base64","best_answers_votes":46,"tags":["python","python-2.7","matplotlib"],"question_length":980,"response_length":533,"tag_count":3} {"question":"Plot pie chart and table of pandas dataframe I have to plot pie-chart and a table side by side using matplotlib. For drawing the pie-chart, I use the below code: \n```\nimport matplotlib.pyplot as plt\ndf1.EventLogs.value_counts(sort=False).plot.pie()\nplt.show()\n```\nFor drawing a table, I use the below code: \n```\n%%chart table --fields MachineName --data df_result2\n```\ndf_result2 is a table with the list of MachineName's in it. Not sure whether we can place both pie chart and table side by side. 
Any help would be appreciated.","response":"Look at the code: \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom pandas.tools.plotting import table\n\n# sample data\nraw_data = {'officer_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],\n 'jan_arrests': [4, 24, 31, 2, 3],\n 'feb_arrests': [25, 94, 57, 62, 70],\n 'march_arrests': [5, 43, 23, 23, 51]}\ndf = pd.DataFrame(raw_data, columns = ['officer_name', 'jan_arrests', 'feb_arrests', 'march_arrests'])\ndf['total_arrests'] = df['jan_arrests'] + df['feb_arrests'] + df['march_arrests']\n\nplt.figure(figsize=(16,8))\n# plot chart\nax1 = plt.subplot(121, aspect='equal')\ndf.plot(kind='pie', y = 'total_arrests', ax=ax1, autopct='%1.1f%%', \n startangle=90, shadow=False, labels=df['officer_name'], legend = False, fontsize=14)\n\n# plot table\nax2 = plt.subplot(122)\nplt.axis('off')\ntbl = table(ax2, df, loc='center')\ntbl.auto_set_font_size(False)\ntbl.set_fontsize(14)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38337918\/plot-pie-chart-and-table-of-pandas-dataframe","best_answers_votes":46,"tags":["python","pandas","matplotlib"],"question_length":528,"response_length":907,"tag_count":3} {"question":"Saving an imshow-like image while preserving resolution I have an (n, m) array that I've been visualizing with matplotlib.pyplot.imshow. I'd like to save this data in some type of raster graphics file (e.g. a png) so that: The colors are the ones shown with imshow Each element of the underlying array is exactly one pixel in the saved image -- meaning that if the underlying array is (n, m) elements, the image is NxM pixels. (I'm not interested in interpolation='nearest' in imshow.) There is nothing in the saved image except for the pixels corresponding to the data in the array. (I.e. there's no white space around the edges, axes, etc.) How can I do this? I've seen some code that can kind of do this by using interpolation='nearest' and forcing matplotlib to (grudgingly) turn off axes, whitespace, etc. However, there must be some way to do this more directly -- maybe with PIL? After all, I have the underlying data. If I can get an RGB value for each element of the underlying array, then I can save it with PIL. Is there some way to extract the RGB data from imshow? I can write my own code to map the array values to RGB values, but I don't want to reinvent the wheel, since that functionality already exists in matplotlib.","response":"As you already guessed there is no need to create a figure. You basically need three steps. Normalize your data, apply the colormap, save the image. 
matplotlib provides all the necessary functionality: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# some data (512x512)\nimport scipy.misc\ndata = scipy.misc.lena()\n\n# a colormap and a normalization instance\ncmap = plt.cm.jet\nnorm = plt.Normalize(vmin=data.min(), vmax=data.max())\n\n# map the normalized data to colors\n# image is now RGBA (512x512x4) \nimage = cmap(norm(data))\n\n# save the image\nplt.imsave('test.png', image)\n```\nWhile the code above explains the single steps, you can also let imsave do all three steps (similar to imshow): \n```\nplt.imsave('test.png', data, cmap=cmap)\n```\nResult (test.png):","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31544130\/saving-an-imshow-like-image-while-preserving-resolution","best_answers_votes":66,"tags":["numpy","matplotlib","python-imaging-library"],"question_length":1235,"response_length":766,"tag_count":3} {"question":"set_data and autoscale_view matplotlib I have multiple lines to be drawn on the same axes, and each of them are dynamically updated (I use set_data), The issue being that i am not aware of the x and y limits of each of the lines. And axes.autoscale_view(True,True,True) \/ axes.set_autoscale_on(True) are not doing what they are supposed to. How do i auto scale my axes? \n```\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\naxes = fig.add_subplot(111)\n\naxes.set_autoscale_on(True)\naxes.autoscale_view(True,True,True)\n\nl1, = axes.plot([0,0.1,0.2],[1,1.1,1.2])\nl2, = axes.plot([0,0.1,0.2],[-0.1,0,0.1])\n\n#plt.show() #shows the auto scaled.\n\nl2.set_data([0,0.1,0.2],[-1,-0.9,-0.8])\n\n#axes.set_ylim([-2,2]) #this works, but i cannot afford to do this. \n\nplt.draw()\nplt.show() #does not show auto scaled\n```\nI have referred to these already, this , this. In all cases I have come across, the x,y limits are known. I have multiple lines on the axes and their ranges change, keeping track of the ymax for the entire data is not practical A little bit of exploring got me to this, \n```\nxmin,xmax,ymin,ymax = matplotlib.figure.FigureImage.get_extent(FigureImage)\n```\nBut here again, i do not know how to access FigureImage from the Figure instance. Using matplotlib 0.99.3","response":"From the matplotlib docs for autoscale_view: The data limits are not updated automatically when artist data are changed after the artist has been added to an Axes instance. In that case, use matplotlib.axes.Axes.relim() prior to calling autoscale_view. So, you'll need to add two lines before your plt.draw() call after the set_data call: \n```\naxes.relim()\naxes.autoscale_view(True,True,True)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7187504\/set-data-and-autoscale-view-matplotlib","best_answers_votes":54,"tags":["python","matplotlib"],"question_length":1267,"response_length":396,"tag_count":2} {"question":"change scatter plot marker thickness How can you change the thickness of marker lines in a scatter plot plt.scatter()? markeredgewidth is a valid attribute in plt.plot(); is there any equivalent attribute for scatter plots? For example, when you change the size of a scatter plot (with marker = 'x'), the markers only get bigger, but the line thickness doesn't change. I'm not trying to change the size but line thickness!","response":"you are looking for the kwarg linewidths. 
e.g.: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nx = y = np.arange(5)\n\nfig,ax = plt.subplots(1)\n\nax.scatter(x,y, s=100,marker='x',color='b',linewidths=1)\nax.scatter(x,y+1,s=100,marker='x',color='r',linewidths=2)\nax.scatter(x,y+2,s=100,marker='x',color='g',linewidths=3)\n\nplt.show()\n```\nNote: On some versions of matplotlib, it appears the kwarg is linewidth, not linewidths, despite what the manual currently says (April 2020). This is a known issue on the matplotlib github.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36498539\/change-scatter-plot-marker-thickness","best_answers_votes":42,"tags":["python","matplotlib"],"question_length":422,"response_length":531,"tag_count":2} {"question":"How to display print statements interlaced with matplotlib plots inline in Ipython? I would like to have the output of print statements interlaced with plots, in the order in which they were printed and plotted in the Ipython notebook cell. For example, consider the following code: (launching ipython with ipython notebook --no-browser --no-mathjax) \n```\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ni = 0\nfor data in manydata:\n fig, ax = plt.subplots()\n print \"data number i =\", i\n ax.hist(data)\n i = i + 1\n```\nIdeally the output would look like: \n```\ndata number i = 0\n(histogram plot)\ndata number i = 1\n(histogram plot)\n...\n```\nHowever, the actual output in Ipython will look like: \n```\ndata number i = 0\ndata number i = 1\n...\n(histogram plot)\n(histogram plot)\n...\n```\nIs there a direct solution in Ipython, or a workaround or alternate solution to get the interlaced output?","response":"There is simple solution, use matplotlib.pyplot.show() function after plotting. this will display graph before executing next line of the code \n```\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ni = 0\nfor data in manydata:\n fig, ax = plt.subplots()\n print \"data number i =\", i\n ax.hist(data)\n plt.show() # this will load image to console before executing next line of code\n i = i + 1\n```\nthis code will work as requested","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31483046\/how-to-display-print-statements-interlaced-with-matplotlib-plots-inline-in-ipyth","best_answers_votes":44,"tags":["python","matplotlib","ipython","jupyter-notebook"],"question_length":898,"response_length":440,"tag_count":4} {"question":"How to plot cdf in matplotlib in Python? I have a disordered list named d that looks like: \n```\n[0.0000, 123.9877,0.0000,9870.9876, ...]\n```\nI just simply want to plot a cdf graph based on this list by using Matplotlib in Python. 
But don't know if there's any function I can use \n```\nd = []\nd_sorted = []\nfor line in fd.readlines():\n (addr, videoid, userag, usertp, timeinterval) = line.split()\n d.append(float(timeinterval))\n\nd_sorted = sorted(d)\n\nclass discrete_cdf:\n def __init__(data):\n self._data = data # must be sorted\n self._data_len = float(len(data))\n\n def __call__(point):\n return (len(self._data[:bisect_left(self._data, point)]) \/ \n self._data_len)\n\ncdf = discrete_cdf(d_sorted)\nxvalues = range(0, max(d_sorted))\nyvalues = [cdf(point) for point in xvalues]\nplt.plot(xvalues, yvalues)\n```\nNow I am using this code, but the error message is : \n```\nTraceback (most recent call last):\nFile \"hitratioparea_0117.py\", line 43, in \ncdf = discrete_cdf(d_sorted)\nTypeError: __init__() takes exactly 1 argument (2 given)\n```","response":"I know I'm late to the party. But, there is a simpler way if you just want the cdf for your plot and not for future calculations: \n```\nplt.hist(put_data_here, normed=True, cumulative=True, label='CDF',\n histtype='step', alpha=0.8, color='k')\n```\nAs an example, \n```\nplt.hist(dataset, bins=bins, normed=True, cumulative=True, label='CDF DATA', \n histtype='step', alpha=0.55, color='purple')\n# bins and (lognormal \/ normal) datasets are pre-defined\n```\nEDIT: This example from the matplotlib docs may be more helpful.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9378420\/how-to-plot-cdf-in-matplotlib-in-python","best_answers_votes":49,"tags":["python","matplotlib"],"question_length":1081,"response_length":531,"tag_count":2} {"question":"Stacked Bar Chart with Centered Labels I'm trying to \"robustly\" center the data labels in a stacked bar chart. A simple code example and the result are given below. As you can see, the data labels aren't really centered in all rectangles. What am I missing? \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nA = [45, 17, 47]\nB = [91, 70, 72]\n\nfig = plt.figure(facecolor=\"white\")\n\nax = fig.add_subplot(1, 1, 1)\nbar_width = 0.5\nbar_l = np.arange(1, 4)\ntick_pos = [i + (bar_width \/ 2) for i in bar_l]\n\nax1 = ax.bar(bar_l, A, width=bar_width, label=\"A\", color=\"green\")\nax2 = ax.bar(bar_l, B, bottom=A, width=bar_width, label=\"B\", color=\"blue\")\nax.set_ylabel(\"Count\", fontsize=18)\nax.set_xlabel(\"Class\", fontsize=18)\nax.legend(loc=\"best\")\nplt.xticks(tick_pos, [\"C1\", \"C2\", \"C3\"], fontsize=16)\nplt.yticks(fontsize=16)\n\nfor r1, r2 in zip(ax1, ax2):\n h1 = r1.get_height()\n h2 = r2.get_height()\n plt.text(r1.get_x() + r1.get_width() \/ 2., h1 \/ 2., \"%d\" % h1, ha=\"center\", va=\"bottom\", color=\"white\", fontsize=16, fontweight=\"bold\")\n plt.text(r2.get_x() + r2.get_width() \/ 2., h1 + h2 \/ 2., \"%d\" % h2, ha=\"center\", va=\"bottom\", color=\"white\", fontsize=16, fontweight=\"bold\")\n\nplt.show()\n```","response":"The following method is more succinct, and easily scales. Putting the data into a pandas.DataFrame is the easiest way to plot a stacked bar plot. Using pandas.DataFrame.plot.bar(stacked=True), or pandas.DataFrame.plot(kind='bar', stacked=True), is the easiest way to plot a stacked bar plot. This method returns a matplotlib.axes.Axes or a numpy.ndarray of them. Since seaborn is just a high-level API for matplotlib, these solutions also work with seaborn plots, as shown in How to annotate a seaborn barplot with the aggregated value. 
For horizontal stacked bars, see Horizontal stacked bar plot and add labels to each section Tested in python 3.10, pandas 1.4.2, matplotlib 3.5.1, seaborn 0.11.2 Imports & Test DataFrame \n```py\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nA = [45, 17, 47]\nB = [91, 70, 72]\nC = [68, 43, 13]\n\n# pandas dataframe\ndf = pd.DataFrame(data={'A': A, 'B': B, 'C': C}, index=['C1', 'C2', 'C3'])\n\n A B C\nC1 45 91 68\nC2 17 70 43\nC3 47 72 13\n```\nUpdated for matplotlib v3.4.2 Use matplotlib.pyplot.bar_label, which will automatically center the values in the bar. See How to add value labels on a bar chart for additional details and examples with .bar_label. Tested with pandas v1.2.4, which is using matplotlib as the plot engine. If some sections of the bar plot will be zero, see my answer, which shows how to customize the labels for .bar_label(). ax.bar_label(c, fmt='%0.0f', label_type='center') will change the number format to show no decimal places, if needed. \n```py\nax = df.plot(kind='bar', stacked=True, figsize=(8, 6), rot=0, xlabel='Class', ylabel='Count')\nfor c in ax.containers:\n\n # Optional: if the segment is small or 0, customize the labels\n labels = [v.get_height() if v.get_height() > 0 else '' for v in c]\n \n # remove the labels parameter if it's not needed for customized labels\n ax.bar_label(c, labels=labels, label_type='center')\n```\nOther options for removing labels for small segments can use fmt With the matplotlib 3.7 Update, the fmt argument of bar_label now accepts {}-style format strings. fmt=lambda x: f'{x:.0f}' if x > 0 else '' fmt=lambda x: np.where(x > 0, f'{x:.0f}', '') with np.where \n```py\nax = df.plot(kind='bar', stacked=True, figsize=(8, 6), rot=0, xlabel='Class', ylabel='Count')\nfor c in ax.containers:\n ax.bar_label(c, fmt=lambda x: f'{x:.0f}' if x > 0 else '', label_type='center')\n```\nSeaborn Options seaborn is a high-level api for matplotlib The seaborn.barplot api doesn't have an option for stacking, but it \"can\" be implemented with sns.histplot, or sns.displot. 
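Another option, if you are on seaborn 0.12+ (an assumption about your environment), is the objects interface, which has a Stack move — a minimal sketch using the long-form df constructed in the next section: \n```\nimport seaborn.objects as so\n\n# hedged sketch: requires seaborn >= 0.12; df is the long-form frame built below\nso.Plot(df, x='cat', y='value', color='variable').add(so.Bar(), so.Stack()).show()\n```\nCentered labels would still have to be added through the underlying matplotlib Axes.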
Seaborn DataFrame Format \n```py\n# create the data frame\ndf = pd.DataFrame(data={'A': A, 'B': B, 'C': C, 'cat': ['C1', 'C2', 'C3']})\n\n A B C cat\n0 45 91 68 C1\n1 17 70 43 C2\n2 47 72 13 C3\n\n# convert the dataframe to a long form\ndf = df.melt(id_vars='cat')\n\n cat variable value\n0 C1 A 45\n1 C2 A 17\n2 C3 A 47\n3 C1 B 91\n4 C2 B 70\n5 C3 B 72\n6 C1 C 68\n7 C2 C 43\n8 C3 C 13\n```\naxes-level plot \n```py\n# plot\nax = sns.histplot(data=df, x='cat', hue='variable', weights='value', discrete=True, multiple='stack')\n\n# iterate through each container\nfor c in ax.containers:\n\n # Optional: if the segment is small or 0, customize the labels\n labels = [v.get_height() if v.get_height() > 0 else '' for v in c]\n \n # remove the labels parameter if it's not needed for customized labels\n ax.bar_label(c, labels=labels, label_type='center')\n```\nfigure-level plot \n```py\n# plot\ng = sns.displot(data=df, x='cat', hue='variable', weights='value', discrete=True, multiple='stack')\n\n# iterate through each axes\nfor ax in g.axes.flat:\n\n # iterate through each container\n for c in ax.containers:\n\n # Optional: if the segment is small or 0, customize the labels\n labels = [v.get_height() if v.get_height() > 0 else '' for v in c]\n\n # remove the labels parameter if it's not needed for customized labels\n ax.bar_label(c, labels=labels, label_type='center')\n```\nOriginal Answer Using the .patches method unpacks a list of matplotlib.patches.Rectangle objects, one for each of the sections of the stacked bar. Each .Rectangle has methods for extracting the various values that define the rectangle. Each .Rectangle is in order from left to right, and bottom to top, so all the .Rectangle objects, for each level, appear in order, when iterating through .patches. The labels are made using an f-string, label_text = f'{height}', so any additional text can be added as needed, such as label_text = f'{height}%' label_text = f'{height:0.0f}' will display numbers with no decimal places. Plot \n```py\nplt.style.use('ggplot')\n\nax = df.plot(stacked=True, kind='bar', figsize=(12, 8), rot='horizontal')\n\n# .patches is everything inside of the chart\nfor rect in ax.patches:\n # Find where everything is located\n height = rect.get_height()\n width = rect.get_width()\n x = rect.get_x()\n y = rect.get_y()\n \n # The height of the bar is the data value and can be used as the label\n label_text = f'{height}' # f'{height:.2f}' to format decimal values\n \n # ax.text(x, y, text)\n label_x = x + width \/ 2\n label_y = y + height \/ 2\n\n # plot only when height is greater than specified value\n if height > 0:\n ax.text(label_x, label_y, label_text, ha='center', va='center', fontsize=8)\n \nax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) \nax.set_ylabel(\"Count\", fontsize=18)\nax.set_xlabel(\"Class\", fontsize=18)\nplt.show()\n```\nTo plot a horizontal bar: kind='barh' label_text = f'{width}' if width > 0: Attribution: jsoma\/chart.py","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41296313\/stacked-bar-chart-with-centered-labels","best_answers_votes":67,"tags":["python","pandas","matplotlib","seaborn","stacked-bar-chart"],"question_length":1199,"response_length":5713,"tag_count":5} {"question":"decompose() for time series: ValueError: You must specify a period or x must be a pandas object with a DatetimeIndex with a freq not set to None I have some problems executing an additive model right. 
I have the following data frame: And when I run this code: \n```py\nimport matplotlib\nimport statsmodels.api as sm\ndecomposition = sm.tsa.seasonal_decompose(df, model = 'additive')\nfig = decomposition.plot()\nmatplotlib.rcParams['figure.figsize'] = [9.0,5.0]\n```\nI got this message: ValueError: You must specify a period or x must be a pandas object with a DatetimeIndex with a freq not set to None What should I do in order to get a result like that example? The screen above I took from this place","response":"Having the same ValueError, this is just the result of some testing and a little research on my own, without the claim to be complete or professional about it. Please comment or answer whoever finds something wrong. Of course, your data should be in the right order of the index values, which you would assure with df.sort_index(inplace=True), as you state it in your answer. This is not wrong as such, though the error message is not about the sort order, and I have checked this: the error does not go away in my case when I sort the index of a huge dataset I have at hand. It is true, I also have to sort the df.index, but decompose() can handle unsorted data as well, where items jump here and there in time: then you simply get a lot of blue lines from left to the right and back, until the whole graph is full of it. What is more, usually the sorting is already in the right order anyway. In my case, sorting does not help fix the error. Thus I also doubt that index sorting has fixed the error in your case, because: what does the error actually say? ValueError: You must specify: [either] a period or x must be a pandas object with a DatetimeIndex with a freq not set to None First of all, in case you have a list column so that your time series is nested up to now, see Convert pandas df with data in a "list column" into a time series in long format. Use three columns: [list of data] + [timestamp] + [duration] for details on how to unnest a list column. This would be needed for both 1.) and 2.). Details of 1.: "You must specify [either] a period ..." Definition of period "period, int, optional" from https:\/\/www.statsmodels.org\/stable\/generated\/statsmodels.tsa.seasonal.seasonal_decompose.html: Period of the series. Must be used if x is not a pandas object or if the index of x does not have a frequency. Overrides default periodicity of x if x is a pandas object with a timeseries index. The period parameter that is set with an integer means the number of cycles which you expect to be in the data. If you have a df with 1000 rows with a list column in it (call it df_nested), and each list with for example 100 elements, then you will have 100 elements per cycle. It is probably smart to take period = len(df_nested) (= number of cycles) in order to get the best split of seasonality and trend. If your elements per cycle vary over time, other values may be better. I am not sure how to set the parameter correctly, therefore the question statsmodels seasonal_decompose(): What is the right “period of the series” in the context of a list column (constant vs. varying number of items) on Cross Validated, which is not yet answered. The "period" parameter of option 1.) has a big advantage over option 2.). Though it uses the time index (DatetimeIndex) for its x-axis, it does not require an item to hit the frequency exactly, in contrast to option 2.).
Instead, it just joins together whatever is in a row, with the advantage that you do not need to fill any gaps: the last value of the previous event is just joined with the next value of the following event, whether it is already in the next second or on the next day. What is the max possible "period" value? In case you have a list column (call the df "df_nested" again), you should first unnest the list column to a normal column. The max period is len(df_unnested)\/2. Example 1: 20 items in x (x is the amount of all items of df_unnested) can maximally have a period = 10. Example 2: Taking the 20 items with period=20 instead throws the following error: ValueError: x must have 2 complete cycles requires 40 observations. x only has 20 observation(s) Another side-note: To get rid of the error in question, period = 1 should already take it away, but for time series analysis, "=1" does not reveal anything new: every cycle is just 1 item then, the trend is the same as the original data, the seasonality is 0, and the residuals are always 0. Example borrowed from Convert pandas df with data in a "list column" into a time series in long format. Use three columns: [list of data] + [timestamp] + [duration] \n```\ndf_test = pd.DataFrame({'timestamp': [1462352000000000000, 1462352100000000000, 1462352200000000000, 1462352300000000000],\n 'listData': [[1,2,1,9], [2,2,3,0], [1,3,3,0], [1,1,3,9]],\n 'duration_sec': [3.0, 3.0, 3.0, 3.0]})\ntdi = pd.DatetimeIndex(df_test.timestamp)\ndf_test.set_index(tdi, inplace=True)\ndf_test.drop(columns='timestamp', inplace=True)\ndf_test.index.name = 'datetimeindex'\n\ndf_test = df_test.explode('listData')\nsizes = df_test.groupby(level=0)['listData'].transform('size').sub(1)\nduration = df_test['duration_sec'].div(sizes)\ndf_test.index += pd.to_timedelta(df_test.groupby(level=0).cumcount() * duration, unit='s')\n```\nThe resulting df_test['listData'] looks as follows: \n```\n2016-05-04 08:53:20 1\n2016-05-04 08:53:21 2\n2016-05-04 08:53:22 1\n2016-05-04 08:53:23 9\n2016-05-04 08:55:00 2\n2016-05-04 08:55:01 2\n2016-05-04 08:55:02 3\n2016-05-04 08:55:03 0\n2016-05-04 08:56:40 1\n2016-05-04 08:56:41 3\n2016-05-04 08:56:42 3\n2016-05-04 08:56:43 0\n2016-05-04 08:58:20 1\n2016-05-04 08:58:21 1\n2016-05-04 08:58:22 3\n2016-05-04 08:58:23 9\n```\nNow have a look at different integer values for period. period = 1: \n```\nresult_add = seasonal_decompose(x=df_test['listData'], model='additive', extrapolate_trend='freq', period=1)\nplt.rcParams.update({'figure.figsize': (5,5)})\nresult_add.plot().suptitle('Additive Decompose', fontsize=22)\nplt.show()\n```\nperiod = 2: \n```\nresult_add = seasonal_decompose(x=df_test['listData'], model='additive', extrapolate_trend='freq', period=2)\nplt.rcParams.update({'figure.figsize': (5,5)})\nresult_add.plot().suptitle('Additive Decompose', fontsize=22)\nplt.show()\n```\nIf you take a quarter of all items as one cycle, which is 4 (out of 16 items) here. period = 4: \n```\nresult_add = seasonal_decompose(x=df_test['listData'], model='additive', extrapolate_trend='freq', period=int(len(df_test)\/4))\nplt.rcParams.update({'figure.figsize': (5,5)})\nresult_add.plot().suptitle('Additive Decompose', fontsize=22)\nplt.show()\n```\nOr if you take the max possible size of a cycle, which is 8 (out of 16 items) here.
period = 8: \n```\nresult_add = seasonal_decompose(x=df_test['listData'], model='additive', extrapolate_trend='freq', period=int(len(df_test)\/2))\nplt.rcParams.update({'figure.figsize': (5,5)})\nresult_add.plot().suptitle('Additive Decompose', fontsize=22)\nplt.show()\n```\nHave a look at how the y-axes change their scale. You will increase the period integer according to your needs. The max in your case of the question: \n```\nsm.tsa.seasonal_decompose(df, model = 'additive', period = int(len(df)\/2))\n```\nDetails of 2.: "... or x must be a pandas object with a DatetimeIndex with a freq not set to None" To get x to be a DatetimeIndex with a freq not set to None, you need to assign the freq of the DatetimeIndex using .asfreq('?'), with ? being your choice among a wide range of offset aliases from https:\/\/pandas.pydata.org\/pandas-docs\/stable\/user_guide\/timeseries.html#offset-aliases. In your case, this option 2. is the better suited one, as you seem to have a list without gaps. Your monthly data then should probably be introduced as "month start frequency" --> "MS" as offset alias: \n```\nsm.tsa.seasonal_decompose(df.asfreq('MS'), model = 'additive')\n```\nSee How to set frequency with pd.to_datetime()? for more details, also about how you would deal with gaps. If you have data that is highly scattered in time so that you have too many gaps to fill, or if gaps in time are nothing important, option 1 of using "period" is probably the better choice. In my example case of df_test, option 2. is not good. The data is totally scattered in time, and if I take a second as the frequency, you get this: Output of df_test.asfreq('s') (= frequency in seconds): \n```\n2016-05-04 08:53:20 1\n2016-05-04 08:53:21 2\n2016-05-04 08:53:22 1\n2016-05-04 08:53:23 9\n2016-05-04 08:53:24 NaN\n ...\n2016-05-04 08:58:19 NaN\n2016-05-04 08:58:20 1\n2016-05-04 08:58:21 1\n2016-05-04 08:58:22 3\n2016-05-04 08:58:23 9\nFreq: S, Name: listData, Length: 304, dtype: object\n```\nYou see here that although my data is only 16 rows, introducing a frequency in seconds forces the df to be 304 rows only to reach out from "08:53:20" till "08:58:23"; 288 gaps are caused here. What is more, here you have to hit the exact time. If you have 0.1 or even 0.12314 seconds as your real frequency instead, you will not hit most of the items with your index. Here is an example with min as the offset alias, df_test.asfreq('min'): \n```\n2016-05-04 08:53:20 1\n2016-05-04 08:54:20 NaN\n2016-05-04 08:55:20 NaN\n2016-05-04 08:56:20 NaN\n2016-05-04 08:57:20 NaN\n2016-05-04 08:58:20 1\n```\nWe see that only the first and the last minute are filled at all, the rest is not hit. Taking the day as the offset alias, df_test.asfreq('d'): \n```\n2016-05-04 08:53:20 1\n```\nWe see that you get only the first row as the resulting df, since there is only one day covered. It will give you the first item found; the rest is dropped.
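If you nevertheless wanted to force option 2. on data like this, the NaN gaps that .asfreq() introduces would have to be filled before decomposing. A minimal sketch of that idea (my own addition, not from the original answer, assuming linear interpolation is acceptable for your series): \n```\n# fill the gaps that .asfreq('s') introduces, then decompose (sketch)\nfilled = df_test['listData'].asfreq('s').astype(float).interpolate()\nresult_add = seasonal_decompose(x=filled, model='additive', extrapolate_trend='freq')\n```\nWhether such interpolated values are meaningful is for you to judge; for heavily scattered data they usually are not.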
The end of it all Putting together all of this, in your case, take option 2., while in my example case of df_test, option 1 is needed.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/60017052\/decompose-for-time-series-valueerror-you-must-specify-a-period-or-x-must-be","best_answers_votes":48,"tags":["python","pandas","matplotlib","time-series","decomposition"],"question_length":691,"response_length":9317,"tag_count":5} {"question":"matplotlib embed figures in auto generated html [duplicate] I want to embed a figure generated by python matplotlib into a html file with other content. Is that possible? What I have thought is saving figures as a png file and then using an <img> tag to refer to it. Some code I was trying to use is like: \n```\nimport matplotlib.pyplot as plt\nfig = plt.figure()\n#plot sth\nplt.savefig('test.png')\n\nhtml = 'Some html head' + '<img src=\"test.png\">' + 'Some more html'\n\nwith open('test.html','w') as f:\n f.write(html)\n```\nHowever, this will generate two files instead of one and I don't have a server to host the png file. Is it possible to embed the figure in the html? How do I do it in python? Thank you.","response":"You can write the image into an in-memory buffer, encode it with base64 and then embed the encoded base64 image into your html. Most modern browsers will correctly render the image. A short example modified from your code will be: \n```\nimport matplotlib.pyplot as plt\nimport base64\nfrom io import BytesIO\n\nfig = plt.figure()\n#plot sth\n\ntmpfile = BytesIO()\nfig.savefig(tmpfile, format='png')\nencoded = base64.b64encode(tmpfile.getvalue()).decode('utf-8')\n\nhtml = 'Some html head' + '<img src=\"data:image\/png;base64,{}\">'.format(encoded) + 'Some more html'\n\nwith open('test.html','w') as f:\n f.write(html)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/48717794\/matplotlib-embed-figures-in-auto-generated-html","best_answers_votes":58,"tags":["python","html","matplotlib"],"question_length":834,"response_length":612,"tag_count":3} {"question":"Exponential curve fitting in SciPy I have two NumPy arrays x and y. When I try to fit my data using an exponential function and curve_fit (SciPy) with this simple code \n```\n#!\/usr\/bin\/env python\nfrom pylab import *\nfrom scipy.optimize import curve_fit\n\nx = np.array([399.75, 989.25, 1578.75, 2168.25, 2757.75, 3347.25, 3936.75, 4526.25, 5115.75, 5705.25])\ny = np.array([109,62,39,13,10,4,2,0,1,2])\n\ndef func(x, a, b, c, d):\n return a*np.exp(b-c*x)+d\n\npopt, pcov = curve_fit(func, x, y)\n```\nI get wrong coefficients popt \n```\n[a,b,c,d] = [1., 1., 1., 24.19999988]\n```\nWhat is the problem?","response":"First comment: since a*exp(b - c*x) = (a*exp(b))*exp(-c*x) = A*exp(-c*x), a or b is redundant. I'll drop b and use: \n```\nimport matplotlib.pyplot as plt\n\ndef func(x, a, c, d):\n return a*np.exp(-c*x)+d\n```\nThat isn't the main issue. The problem is simply that curve_fit fails to converge to a solution to this problem when you use the default initial guess (which is all 1s). Check pcov; you'll see that it is inf.
This is not surprising, because if c is 1, most of the values of exp(-c*x) underflow to 0: \n```python\n>>> np.exp(-x)\narray([ 2.45912644e-174, 0.00000000e+000, 0.00000000e+000,\n 0.00000000e+000, 0.00000000e+000, 0.00000000e+000,\n 0.00000000e+000, 0.00000000e+000, 0.00000000e+000,\n 0.00000000e+000])\n```\nThis suggests that c should be small. A better initial guess is, say, p0 = (1, 1e-6, 1). Then I get: \n```python\n>>> popt, pcov = curve_fit(func, x, y, p0=(1, 1e-6, 1))\n>>> popt\narray([ 1.63561656e+02, 9.71142196e-04, -1.16854450e+00])\n```\nThis looks reasonable: \n```python\n>>> xx = np.linspace(300, 6000, 1000)\n>>> yy = func(xx, *popt)\n>>> plt.plot(x, y, 'ko')\n[<matplotlib.lines.Line2D object at ...>]\n>>> plt.plot(xx, yy)\n[<matplotlib.lines.Line2D object at ...>]\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21420792\/exponential-curve-fitting-in-scipy","best_answers_votes":53,"tags":["python","numpy","matplotlib","scipy","curve-fitting"],"question_length":587,"response_length":1286,"tag_count":5} {"question":"Invert an axis in a matplotlib graphic How can I invert the y-axis? Z is a np.array. Thank you \n```\nZ=TempLake \n\nX,Y=np.meshgrid(range(Z.shape[0]+1),range(Z.shape[1]+1)) \nim = plt.pcolormesh(X,Y,Z.transpose(), cmap='hot') \nplt.colorbar(im, orientation='horizontal') \nplt.show()\n```\nI have this: I need this:","response":"As @Chris said, this can be done with: \n```\nax = plt.gca()\nax.invert_yaxis()\n```\nBefore the 'plt.show()'.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11777381\/invert-an-axis-in-a-matplotlib-grafic","best_answers_votes":62,"tags":["python","matplotlib"],"question_length":305,"response_length":105,"tag_count":2} {"question":"Clearing a subplot in Matplotlib I have a number of subplots in a figure fig1, created via \n```\nax = fig1.add_subplot(221)\n```\nI then plot stuff in each of the subplots via \n```\nim=ax.plot(x,y)\n```\nand add some axis labels via \n```\nax.set_xlabel('xlabel')\n```\nI would then like to clear a specific subplot completely, as described in When to use cla(), clf() or close() for clearing a plot in matplotlib?. However the problem is that ax.cla() and ax.clear() seem to only clear the data from the plot, without removing the axes, axis tick labels etc. On the other hand plt.clf() clears the entire figure. Is there something in between? A clf-like command that clears everything in a subplot, including axis labels? Or have I simply used the commands in a wrong way?","response":"ax.clear() clears the axes. That is, it removes all settings and data from the axes such that you are left with an axes just as if it had just been created. ax.axis(\"off\") turns the axes off, such that all axes spines and ticklabels are hidden. ax.set_visible(False) turns the complete axes invisible, including the data that is in it. ax.remove() removes the axes from the figure.
Complete example: \n```\nimport matplotlib.pyplot as plt\n\nfig,axes = plt.subplots(2,3)\nfor ax in axes.flat:\n ax.plot([2,3,1])\n\naxes[0,1].clear()\naxes[1,0].axis(\"off\")\naxes[1,1].set_visible(False)\naxes[0,2].remove()\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47282918\/clearing-a-subplot-in-matplotlib","best_answers_votes":60,"tags":["matplotlib","subplot"],"question_length":763,"response_length":611,"tag_count":2} {"question":"How to repress scientific notation in factorplot y-axis Simple example below for this issue which I just can't solve. N.B. Some other Seaborn plotting methods seem to have arguments to repress the exponential form but seemingly not factorplots. I tried some Matplotlib solutions including those suggested in this similar question but none work. Also this is not a dupe of this question. I use factorplots very frequently and ideally want to find a proper solution as opposed to a workaround. \n```\ndata = {'reports': [4, 24, 31, 2, 3],'coverage': [35050800, 54899767, 57890789, 62890798, 70897871]}\ndf = pd.DataFrame(data)\ndf\n```\nProduces this dataframe: \n```\ncoverage reports\n0 35050800 4\n1 54899767 24\n2 57890789 31\n3 62890798 2\n4 70897871 3\n```\nAnd then this Seaborn code: \n```\nsns.factorplot(y=\"coverage\", x=\"reports\", kind='bar', data=df, label=\"Total\")\n```\nProduces this plot: Is there a way to get the y axis to display an appropriate numeric scale based on the coverage values?","response":"It looks like the following line solves the issue: \n```\nplt.ticklabel_format(style='plain', axis='y')\n```\nHere is the documentation link.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36780948\/how-to-repress-scientific-notation-in-factorplot-y-axis","best_answers_votes":46,"tags":["python","matplotlib","seaborn"],"question_length":1013,"response_length":137,"tag_count":3} {"question":"Getting empty tick labels before showing a plot in Matplotlib I'm experiencing a similar issue to the one reported here. I don't understand why the tick label text is an empty string: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0,2*np.pi,100)\ny = np.sin(x)**2\n\nfig, ax = plt.subplots()\nax.plot(x,y)\n\nlabels = ax.get_xticklabels()\nfor label in labels:\n print(label)\n\nplt.show()\n```\nOutput: \n```\nText(0,0,'')\nText(0,0,'')\nText(0,0,'')\nText(0,0,'')\nText(0,0,'')\nText(0,0,'')\nText(0,0,'')\nText(0,0,'')\n```\nI get the same result with ax.xaxis.get_ticklabels() but the plotted graph shows eight labelled ticks on the x-axis when saved or shown. However, if I ask for the labels after I show the plot, then the labels list is properly populated. Of course, it's a bit late to do anything about changing them then. \n```\nfig, ax = plt.subplots()\nax.plot(x,y)\nplt.show()\n\nlabels = ax.get_xticklabels()\nfor label in labels:\n print(label)\n```\nOutput: \n```\nText(0,0,'0')\nText(1,0,'1')\nText(2,0,'2')\nText(3,0,'3')\nText(4,0,'4')\nText(5,0,'5')\nText(6,0,'6')\nText(7,0,'7')\n```\nWhy does this happen (Mac OS X Yosemite, Matplotlib 1.5.1) and how can I get my labels before I show or save my plot?","response":"You've correctly identified the problem: before you call plt.show() nothing is explicitly set. This is because matplotlib avoids static positioning of the ticks unless it has to, because you're likely to want to interact with it: you can still call ax.set_xlim(), for example.
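That also means a drawn snapshot can go stale: if you change the limits afterwards, previously fetched labels no longer describe the plot. A small sketch of this (my addition, not part of the original answer): \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.plot(range(5))\nfig.canvas.draw()\nbefore = [t.get_text() for t in ax.get_xticklabels()]\n\nax.set_xlim(0, 100) # interact with the plot after the draw\nfig.canvas.draw()\nafter = [t.get_text() for t in ax.get_xticklabels()]\nprint(before != after) # True: the labels were recomputed for the new view\n```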
In your case, you can draw the figure canvas with fig.canvas.draw() to trigger tick positioning, so you can retrieve their value. Alternatively, you can explicitly set the xticks, which will in turn set the axis to FixedFormatter and FixedLocator and achieve the same result. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0,2*np.pi,100)\ny = np.sin(x)**2\n\nfig, ax = plt.subplots()\n\nax.plot(x,y)\nax.set_xlim(0,6)\n\n# Must draw the canvas to position the ticks\nfig.canvas.draw()\n# Or alternatively\n#ax.set_xticklabels(ax.get_xticks())\n\nlabels = ax.get_xticklabels()\nfor label in labels:\n print(label.get_text())\n\nplt.show()\n\nOut:\n0\n1\n2\n3\n4\n5\n6\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41122923\/getting-empty-tick-labels-before-showing-a-plot-in-matplotlib","best_answers_votes":41,"tags":["python","python-3.x","matplotlib","label"],"question_length":1212,"response_length":942,"tag_count":4} {"question":"Rotating axes label text in 3D How do I rotate the z-label so the text reads (bottom => top), rather than (top => bottom)? \n```\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.set_zlabel('label text flipped', rotation=90) \nax.azim = 225\nplt.show()\n```\nI want this to hold no matter what my ax.azim setting is. This seems to be an old feature request on github but there hasn't been any work on it. Is there a workaround?","response":"As a workaround, you could set the direction of the z-label manually by: \n```\nax.zaxis.set_rotate_label(False) # disable automatic rotation\nax.set_zlabel('label text', rotation=90)\n```\nPlease note that the direction of your z-label also depends on your viewpoint, e.g.: \n```\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfg = plt.figure(1); fg.clf()\naxx = [fg.add_subplot(4,1,1+i, projection='3d') for i in range(4)]\nfor ax,azel in zip(axx, [(115,10), (115,-10), (-115,10), (-115,-10)]):\n ax.set_title(u\"Azim, elev = {}\u00b0, {}\u00b0\".format(*azel))\n ax.set_zlabel('label text')\n ax.azim, ax.elev = azel\n\nfg.canvas.draw()\nplt.show()\n```\ngives Update: It is also possible to adjust the z-label direction of a plot which is already drawn (but not beforehand).
This is the adjusted version to modify the labels: \n```\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfg = plt.figure(1); fg.clf()\naxx = [fg.add_subplot(4,1,1+i, projection='3d') for i in range(4)]\nfor ax,azel in zip(axx, [(115,10), (115,-10), (-115,10), (-115,-10)]):\n ax.set_title(u\"Azim, elev = {}\u00b0, {}\u00b0\".format(*azel))\n ax.set_zlabel('label text')\n ax.azim, ax.elev = azel\nfg.canvas.draw() # the angles of the text are calculated here\n\n# Read drawn z-label rotations and switch them if needed\nfor ax in axx:\n ax.zaxis.set_rotate_label(False)\n a = ax.zaxis.label.get_rotation()\n if a<180:\n a += 180\n ax.zaxis.label.set_rotation(a)\n a = ax.zaxis.label.get_rotation() # put the actual angle in the z-label\n ax.set_zlabel(u'z-rot = {:.1f}\u00b0'.format(a))\nfg.canvas.draw()\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21918380\/rotating-axes-label-text-in-3d","best_answers_votes":37,"tags":["python","matplotlib","axis","axis-labels","matplotlib-3d"],"question_length":502,"response_length":1627,"tag_count":5} {"question":"FacetGrid change titles I am trying to create a FacetGrid in Seaborn. My code is currently: \n```\ng = sns.FacetGrid(df_reduced, col=\"ActualExternal\", margin_titles=True)\nbins = np.linspace(0, 100, 20)\ng.map(plt.hist, \"ActualDepth\", color=\"steelblue\", bins=bins, width=4.5)\n```\nThis gives me the Figure Now, instead of \"ActualExternal = 0.0\" and \"ActualExternal = 1.0\", I would like the titles \"Internal\" and \"External\" And, instead of \"ActualDepth\" I would like the xlabel to say \"Percentage Depth\" Finally, I would like to add a ylabel of \"Number of Defects\".","response":"Although you can iterate through the axes and set the titles individually using matplotlib commands, it is cleaner to use seaborn's built-in tools to control the title. For example: \n```\n# Add a column of appropriate labels\ndf_reduced['measure'] = df_reduced['ActualExternal'].replace({0: 'Internal',\n 1: 'External'})\n\ng = sns.FacetGrid(df_reduced, col=\"measure\", margin_titles=True)\ng.map(plt.hist, \"ActualDepth\", color=\"steelblue\", bins=bins, width=4.5)\n\n# Adjust title and axis labels directly\ng.set_titles(\"{col_name}\") # use this argument literally\ng.set_axis_labels(x_var=\"Percentage Depth\", y_var=\"Number of Defects\")\n```\nThis has the benefit of not needing modification regardless of whether you have 1D or 2D facets.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43920341\/facetgrid-change-titles","best_answers_votes":46,"tags":["python","matplotlib","seaborn","facet-grid"],"question_length":558,"response_length":786,"tag_count":4} {"question":"Hide contour linestroke on pyplot.contourf to get only fills I have a pet project to create images of maps, where I draw the roads and other stuff over a contour plot of the terrain elevation. It is intended to plan mountain bike routes (I have made some vectorial drawings by hand, in the past, and they work great for visualization).
Currently, I download a Digital Elevation Model, in GeoTIFF, from here: http:\/\/www.ecologia.ufrgs.br\/labgeo\/arquivos\/downloads\/dados\/SRTM\/geotiff\/rs.rar and then create the plot with GDAL and Matplotlib's contourf function: \n```\nfrom osgeo import gdal\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom pylab import cm\nimport numpy\n\nf = 'rs.tif'\n\nelev = gdal.Open(f)\n\na = elev.GetRasterBand(1).ReadAsArray()\n\nw = elev.RasterXSize\nh = elev.RasterYSize\nprint(w, h)\n\naltura = (0.35, 0.42)\nlargura = (0.70, 0.82)\n\na = a[int(h*altura[0]):int(h*altura[1]),\n int(w*largura[0]):int(w*largura[1])]\n\n\ncont = plt.contourf(a, origin='upper', cmap=cm.gist_earth, levels=numpy.arange(0,1000,20))\nplt.title('Altitudes - max: %d m; min: %d m' % (numpy.amax(a), numpy.amin(a)))\nplt.show()\n```\nWhich gives: The problem is that the contour lines are \"white\", and generate some visual pollution, which is undesired since I want to plot roads and rivers later. So, I am trying to modify the way contourf creates these lighter lines, either via parameter setting, or via hack (changing source code), similar to the one proposed here: How to format contour lines from Matplotlib Also, if anyone knows how to generate such a map in a more elegant way, using other libraries, I would appreciate the tip very much! Thanks for reading.","response":"I finally found a proper solution to this long-standing problem (currently in Matplotlib 3), which does not require multiple calls to contour or rasterizing the figure. Note that the problem illustrated in the question appears only in saved publication-quality figure formats like PDF, not in lower-quality raster files like PNG. My solution was inspired by this answer, related to a similar problem with the colorbar. A similar solution turns out to solve the contour plot as well, as follows: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(123)\nx, y = np.random.uniform(size=(100, 2)).T\nz = np.exp(-x**2 - y**2)\nlevels = np.linspace(0, 1, 100)\n\ncnt = plt.tricontourf(x, y, z, levels=levels, cmap=\"ocean\")\n\n# This is the fix for the white lines between contour levels\ncnt.set_edgecolor(\"face\")\n\nplt.savefig(\"test.pdf\")\n```\nHere below is an example of contours before the fix And here below is the same figure after the above fix","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8263769\/hide-contour-linestroke-on-pyplot-contourf-to-get-only-fills","best_answers_votes":63,"tags":["python","matplotlib","terrain","contour"],"question_length":1644,"response_length":953,"tag_count":4} {"question":"Grouped Bar graph Pandas I have a table in a pandas DataFrame named df: \n```\n+---------+------------+-------------+----------+------------+-----------+\n|avg_views| avg_orders | max_views |max_orders| min_views |min_orders |\n+---------+------------+-------------+----------+------------+-----------+\n| 23 | 123 | 135 | 500 | 3 | 1 |\n+---------+------------+-------------+----------+------------+-----------+\n```\nWhat I am looking for now is to plot a grouped bar graph which shows me (avg, max, min) of views and orders in one single bar chart. i.e. on the x axis there would be Views and Orders separated by a distance, and 3 bars of (avg, max, min) for views and similarly for orders. I have attached a sample bar graph image, just to know how the bar graph should look. Green color should be for avg, yellow for max and pink for min.
I took the following code from setting spacing between grouped bar plots in matplotlib but it is not working for me: \n```\nplt.figure(figsize=(13, 7), dpi=300)\n\ngroups = [[23, 135, 3], [123, 500, 1]]\ngroup_labels = ['views', 'orders']\nnum_items = len(group_labels)\nind = np.arange(num_items)\nmargin = 0.05\nwidth = (1. - 2. * margin) \/ num_items\n\ns = plt.subplot(1, 1, 1)\nfor num, vals in enumerate(groups):\n print('plotting: ', vals)\n # The position of the xdata must be calculated for each of the two data \n # series.\n xdata = ind + margin + (num * width)\n # Removing the \"align=center\" feature will left align graphs, which is \n # what this method of calculating positions assumes.\n gene_rects = plt.bar(xdata, vals, width)\ns.set_xticks(ind + 0.5)\ns.set_xticklabels(group_labels)\n```\nplotting: [23, 135, 3] ... ValueError: shape mismatch: objects cannot be broadcast to a single shape","response":"Using pandas: \n```\nimport pandas as pd\n\ngroups = [[23,135,3], [123,500,1]]\ngroup_labels = ['views', 'orders']\n\n# Convert data to pandas DataFrame.\ndf = pd.DataFrame(groups, index=group_labels).T\n\n# Plot.\npd.concat(\n [\n df.mean().rename('average'), \n df.min().rename('min'), \n df.max().rename('max')\n ],\n axis=1,\n).plot.bar()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/40420240\/grouped-bar-graph-pandas","best_answers_votes":39,"tags":["python","pandas","matplotlib","grouped-bar-chart"],"question_length":1780,"response_length":358,"tag_count":4} {"question":"Merge subplots with shared x-axis I have two graphs where both have the same x-axis, but with different y-axis scalings. The plot with regular axes is the data with a trend line depicting a decay, while the y semi-log scaling depicts the accuracy of the fit. \n```\nfig1 = plt.figure(figsize=(15,6))\nax1 = fig1.add_subplot(111)\n\n# Plot of the decay model \nax1.plot(FreqTime1,DecayCount1, '.', color='mediumaquamarine')\n\n# Plot of the optimized fit\nax1.plot(x1, y1M, '-k', label='Fitting Function: $f(t) = %.3f e^{%.3f\\t} \\\n %+.3f$' % (aR1,kR1,bR1))\n\nax1.set_xlabel('Time (sec)')\nax1.set_ylabel('Count')\nax1.set_title('Run 1 of Cesium-137 Decay')\n\n# Allows me to change scales\n# ax1.set_yscale('log')\nax1.legend(bbox_to_anchor=(1.0, 1.0), prop={'size':15}, fancybox=True, shadow=True)\n```\nNow, I'm trying to figure out how to implement both close together like the examples supplied by this link http:\/\/matplotlib.org\/examples\/pylab_examples\/subplots_demo.html In particular, this one When looking at the code for the example, I'm a bit confused on how to implement 3 things: 1) Scaling the axes differently 2) Keeping the figure size the same for the exponential decay graph but having the line graph have a smaller y size and same x size. For example: 3) Keeping the label of the function to appear in only the decay graph.
Any help would be most appreciated.","response":"Look at the code and comments in it: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom matplotlib import gridspec\n\n# Simple data to display in various forms\nx = np.linspace(0, 2 * np.pi, 400)\ny = np.sin(x ** 2)\n\nfig = plt.figure()\n# set height ratios for subplots\ngs = gridspec.GridSpec(2, 1, height_ratios=[2, 1]) \n\n# the first subplot\nax0 = plt.subplot(gs[0])\n# log scale for axis Y of the first subplot\nax0.set_yscale(\"log\")\nline0, = ax0.plot(x, y, color='r')\n\n# the second subplot\n# shared axis X\nax1 = plt.subplot(gs[1], sharex = ax0)\nline1, = ax1.plot(x, y, color='b', linestyle='--')\nplt.setp(ax0.get_xticklabels(), visible=False)\n# remove last tick label for the second subplot\nyticks = ax1.yaxis.get_major_ticks()\nyticks[-1].label1.set_visible(False)\n\n# put legend on first subplot\nax0.legend((line0, line1), ('red line', 'blue line'), loc='lower left')\n\n# remove vertical gap between subplots\nplt.subplots_adjust(hspace=.0)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/37737538\/merge-subplots-with-shared-x-axis","best_answers_votes":52,"tags":["python","matplotlib","subplot"],"question_length":1369,"response_length":959,"tag_count":3} {"question":"how to annotate heatmap with text in matplotlib I am plotting a heatmap in matplotlib using: plt.pcolor(rand(5,5)) How can I annotate the heatmap with the actual numbers plotted? Meaning, in each cell of the plotted heatmap, put the value corresponding to that cell in the 5x5 matrix passed to pcolor. Thanks.","response":"There is no automatic feature to do such a thing, but you could loop through each point and put text in the appropriate location: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.rand(5, 4)\nheatmap = plt.pcolor(data)\n\nfor y in range(data.shape[0]):\n for x in range(data.shape[1]):\n plt.text(x + 0.5, y + 0.5, '%.4f' % data[y, x],\n horizontalalignment='center',\n verticalalignment='center',\n )\n\nplt.colorbar(heatmap)\n\nplt.show()\n```\nHTH","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11917547\/how-to-annotate-heatmap-with-text-in-matplotlib","best_answers_votes":55,"tags":["python","numpy","matplotlib","scipy"],"question_length":308,"response_length":518,"tag_count":4} {"question":"Discrete legend in seaborn heatmap plot I am using the data present here to construct this heat map using seaborn and pandas. Code: \n```\nimport pandas\nimport matplotlib.pyplot as plt\nimport seaborn.apionly as sns\n\n# Read in csv file\ndf_trans = pandas.read_csv('LUH2_trans_matrix.csv')\n\nsns.set(font_scale=0.8)\ncmap = sns.cubehelix_palette(start=2.8, rot=.1, light=0.9, as_cmap=True)\ncmap.set_under('gray') # 0 values in activity matrix are shown in gray (inactive transitions)\ndf_trans = df_trans.set_index(['Unnamed: 0'])\nax = sns.heatmap(df_trans, cmap=cmap, linewidths=.5, linecolor='lightgray')\n\n# X - Y axis labels\nax.set_ylabel('FROM')\nax.set_xlabel('TO')\n\n# Rotate tick labels\nlocs, labels = plt.xticks()\nplt.setp(labels, rotation=0)\nlocs, labels = plt.yticks()\nplt.setp(labels, rotation=0)\n\n# revert matplotlib params\nsns.reset_orig()\n```\nAs you can see from the csv file, it contains 3 discrete values: 0, -1 and 1. I want a discrete legend instead of the colorbar. Labeling 0 as A, -1 as B and 1 as C.
How can I do that?","response":"Well, there's definitely more than one way to accomplish this. In this case, with only three colors needed, I would pick the colors myself by creating a LinearSegmentedColormap instead of generating them with cubehelix_palette. If there were enough colors to warrant using cubehelix_palette, I would define the segments on the colormap using the boundaries option of the cbar_kws parameter. Either way, the ticks can be manually specified using set_ticks and set_ticklabels. The following code sample demonstrates the manual creation of LinearSegmentedColormap, and includes comments on how to specify boundaries if using a cubehelix_palette instead. \n```\nimport matplotlib.pyplot as plt\nimport pandas\nimport seaborn.apionly as sns\nfrom matplotlib.colors import LinearSegmentedColormap\n\nsns.set(font_scale=0.8)\ndataFrame = pandas.read_csv('LUH2_trans_matrix.csv').set_index(['Unnamed: 0'])\n\n# For only three colors, it's easier to choose them yourself.\n# If you still really want to generate a colormap with cubehelix_palette instead,\n# add a cbar_kws={\"boundaries\": linspace(-1, 1, 4)} to the heatmap invocation\n# to have it generate a discrete colorbar instead of a continuous one.\nmyColors = ((0.8, 0.0, 0.0, 1.0), (0.0, 0.8, 0.0, 1.0), (0.0, 0.0, 0.8, 1.0))\ncmap = LinearSegmentedColormap.from_list('Custom', myColors, len(myColors))\n\nax = sns.heatmap(dataFrame, cmap=cmap, linewidths=.5, linecolor='lightgray')\n\n# Manually specify colorbar labelling after it's been generated\ncolorbar = ax.collections[0].colorbar\ncolorbar.set_ticks([-0.667, 0, 0.667])\ncolorbar.set_ticklabels(['B', 'A', 'C'])\n\n# X - Y axis labels\nax.set_ylabel('FROM')\nax.set_xlabel('TO')\n\n# Only y-axis labels need their rotation set, x-axis labels already have a rotation of 0\n_, labels = plt.yticks()\nplt.setp(labels, rotation=0)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38836154\/discrete-legend-in-seaborn-heatmap-plot","best_answers_votes":36,"tags":["python","pandas","matplotlib","seaborn"],"question_length":1067,"response_length":1816,"tag_count":4} {"question":"Animating \"growing\" line plot I want to produce a set of frames that can be used to animate a plot of a growing line. In the past, I have always used plt.draw() and set_ydata() to redraw the y-data as it changed over time. This time, I wish to draw a \"growing\" line, moving across the graph with time. Because of this, set_ydata doesn't work (xdata is changing length). For example, \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 10, 100)\ny = np.sin(x)\n\nplt.figure()\nfor n in range(len(x)):\n plt.plot(x[:n], y[:n], color='k')\n plt.axis([0, 10, 0, 1])\n plt.savefig('Frame%03d.png' %n)\n```\nWhile this works, it becomes very slow as it scales. Is there a faster way to do this?","response":"A couple of notes: First off, the reason that things become progressively slower is that you're drawing more and more overlapping lines in the same position.
A quick fix is to clear the plot each time: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 10, 100)\ny = np.sin(x)\n\nplt.figure()\nfor n in range(len(x)):\n plt.cla()\n plt.plot(x[:n], y[:n], color='k')\n plt.axis([0, 10, 0, 1])\n plt.savefig('Frame%03d.png' %n)\n```\nBetter yet, however, update both the x and y data at the same time: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 10, 100)\ny = np.sin(x)\n\nfig, ax = plt.subplots()\nline, = ax.plot(x, y, color='k')\n\nfor n in range(len(x)):\n line.set_data(x[:n], y[:n])\n ax.axis([0, 10, 0, 1])\n fig.canvas.draw()\n fig.savefig('Frame%03d.png' %n)\n```\nAnd if you'd like to use the animation module (side note: blit=True may not work properly on some backends (e.g. OSX), so try blit=False if you have issues): \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\n\nx = np.linspace(0, 10, 100)\ny = np.sin(x)\n\nfig, ax = plt.subplots()\nline, = ax.plot(x, y, color='k')\n\ndef update(num, x, y, line):\n line.set_data(x[:num], y[:num])\n line.axes.axis([0, 10, 0, 1])\n return line,\n\nani = animation.FuncAnimation(fig, update, len(x), fargs=[x, y, line],\n interval=25, blit=True)\nani.save('test.gif')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28074461\/animating-growing-line-plot","best_answers_votes":59,"tags":["python","matplotlib","graphics","matplotlib-animation"],"question_length":712,"response_length":1475,"tag_count":4} {"question":"Python: A4 size for a plot I have a code that saves a figure with: \n```\nsavefig(\"foo.eps\", orientation = 'portrait', format = 'eps')\n```\nIf I don't specify anything else, the figure is correctly saved, but when I print it, the figure fills only half of an A4 sheet. If I modify the string as: \n```\nsavefig(\"foo.eps\", papertype = 'a4', orientation = 'portrait', format = 'eps')\n```\nNothing changes! How can I set the size of the figure in a way that it fills the whole A4 sheet? Many thanks in advance.","response":"Try to set the size of the figure (in inches) before you save it. You can do this when you initialize the figure by doing: \n```\nfigure(figsize=(11.69,8.27)) # for landscape\n```\nor if the figure exists: \n```\nf = gcf() # f = figure(n) if you know the figure number\nf.set_size_inches(11.69,8.27)\n```\nor in advance for all plots, using \n```\nrc('figure', figsize=(11.69,8.27))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15571267\/python-a4-size-for-a-plot","best_answers_votes":55,"tags":["python","matplotlib"],"question_length":500,"response_length":376,"tag_count":2} {"question":"Lines to separate groups in seaborn heatmap I am plotting data as a Seaborn heatmap in Python. My data is intrinsically grouped into categories, and I'd like to have lines on the plot to indicate where the groups lie on the map. As a simple example, suppose I wanted to modify this plot from the documentation...
\n```\nimport seaborn as sns; sns.set()\nflights = sns.load_dataset(\"flights\")\nflights = flights.pivot(\"month\", \"year\", \"passengers\")\nax = sns.heatmap(flights, cbar=False)\n```\nWhere I wanted to emphasize the comparisons between quarters of the year by making a plot like the one below; how would I do that?","response":"You want ax.hlines: ax.hlines([3, 6, 9], *ax.get_xlim())","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/39352932\/lines-to-separate-groups-in-seaborn-heatmap","best_answers_votes":48,"tags":["python","matplotlib","heatmap","seaborn"],"question_length":616,"response_length":56,"tag_count":4} {"question":"How to plot a density map in python? I have a .txt file containing the x,y values of regularly spaced points in a 2D map, the 3rd coordinate being the density at that point. \n```\n4.882812500000000E-004 4.882812500000000E-004 0.9072267\n1.464843750000000E-003 4.882812500000000E-004 1.405174\n2.441406250000000E-003 4.882812500000000E-004 24.32851\n3.417968750000000E-003 4.882812500000000E-004 101.4136\n4.394531250000000E-003 4.882812500000000E-004 199.1388\n5.371093750000000E-003 4.882812500000000E-004 1278.898\n6.347656250000000E-003 4.882812500000000E-004 1636.955\n7.324218750000000E-003 4.882812500000000E-004 1504.590\n8.300781250000000E-003 4.882812500000000E-004 814.6337\n9.277343750000000E-003 4.882812500000000E-004 273.8610\n```\nWhen I plot this density map in gnuplot, with the following commands: \n```\nset palette rgbformulae 34,35,0\nset size square\nset pm3d map\nsplot \"dens_map.map\" u 1:2:(log10($3+10.)) title \"Density map\"\n```\nWhich gives me this beautiful image: Now I would like to have the same result with matplotlib.","response":"Here is my attempt at a more complete answer including choosing the color map and a logarithmic normalization of the color axis. \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nfrom matplotlib.colors import LogNorm\nimport numpy as np\nx, y, z = np.loadtxt('data.txt', unpack=True)\nN = int(len(z)**.5)\nz = z.reshape(N, N)\nplt.imshow(z+10, extent=(np.amin(x), np.amax(x), np.amin(y), np.amax(y)),\n cmap=cm.hot, norm=LogNorm())\nplt.colorbar()\nplt.show()\n```\nI assume here that your data can be transformed into a 2d array by a simple reshape. If this is not the case then you need to work a bit harder on getting the data in this form. Using imshow and not pcolormesh is more efficient here if your data lies on a grid (as it seems to do). The above code snippet results in the following image, which comes pretty close to what you wanted:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24119920\/how-to-plot-a-density-map-in-python","best_answers_votes":30,"tags":["python","matplotlib","histogram"],"question_length":1061,"response_length":852,"tag_count":3} {"question":"How to add a footnote under the x-axis of a plot I couldn't find the right function to add a footnote in my plot. The footnote I want to have is something like an explanation of one item in the legend, but it is too long to put in the legend box. So, I'd like to add a ref number, e.g. [1], to the legend item, and add the footnote at the bottom of the plot, under the x-axis.
Which function should I use?","response":"You would just use: \n```\nplt.figtext(0.5, 0.01, \"one text and next text\", ha=\"center\", fontsize=18, bbox={\"facecolor\":\"orange\", \"alpha\":0.5, \"pad\":5})\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7917107\/how-to-add-a-footnote-under-the-x-axis-of-a-plot","best_answers_votes":26,"tags":["python","matplotlib","plot-annotations"],"question_length":691,"response_length":157,"tag_count":3} {"question":"How to remove scientific notation on a matplotlib log-log plot I know that this question has been asked before, but I tried all the possible solutions and none of them worked for me. So, I have a log-log plot in matplotlib, and I would like to avoid scientific notation on the x-axis. This is my code: \n```\nfrom numpy import array, log, pi\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import curve_fit\nimport matplotlib.ticker as mticker\n\nplt.rc('axes.formatter', useoffset=False)\n\ntc = array([7499680.0, 12508380.0, 23858280.0, 34877020.0, 53970660.0, 89248580.0, 161032860.0, 326814160.0, 784460200.0])\n\ntheta = array([70, 60, 50, 45, 40, 35, 30, 25, 20])\n\nplt.scatter(theta,tc)\n\nax=plt.gca()\n\nax.set_xscale('log')\nax.set_yscale('log')\n\nax.xaxis.set_major_formatter(mticker.ScalarFormatter())\nax.xaxis.get_major_formatter().set_scientific(False)\nax.xaxis.get_major_formatter().set_useOffset(False)\n\nplt.show()\n```\nAnd this is the output: As you can see, the numbers on the x-axis are still in scientific notation. I would like to display them as 20, 30, 40... I tried every possible solution with no result. Thank you very much to everyone that will help. NB. I can't use the plt.loglog() command, because I am doing some curve fitting on the data and I need it like that. NB2. I noticed a very weird thing happening: if I change the code to yaxis.get_major_formatter()..., it works on the y-axis! It is just on the x one that it's not working. How is it possible? Edit: maybe it is not clear, but if you look at the code, there are 3 methods that should affect the display of the x-ticks: plt.rc('axes.formatter', useoffset=False), ax.xaxis.set_major_formatter(mticker.ScalarFormatter()) and ax.xaxis.get_major_formatter().set_scientific(False). They are 3 methods that should all do the trick alone, according to what I found around, but they don't. Of course I also tried them one by one and not all together.","response":"Those are minor ticks on the x-axis (i.e. they are not on integer powers of 10), not major ticks. matplotlib automatically determines if it should label the major or minor ticks - in this case, because you don't have any major ticks displayed in the x range, the minor ticks are being labelled. So, you need to use the set_minor_formatter method: \n```\nax.xaxis.set_minor_formatter(mticker.ScalarFormatter())\n```\nThe reason it works on the y-axis is because those ticks are major ticks (i.e.
on integer powers of 10), not minor ticks.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/49750107\/how-to-remove-scientific-notation-on-a-matplotlib-log-log-plot","best_answers_votes":33,"tags":["python","matplotlib","plot","notation"],"question_length":1921,"response_length":532,"tag_count":4} {"question":"Fitting to Poisson histogram I am trying to fit a curve over the histogram of a Poisson distribution that looks like this. I have modified the fit function so that it resembles a Poisson distribution, with the parameter t as a variable. But the curve_fit function cannot be plotted and I am not sure why. \n```\ndef histo(bsize):\n N = bsize\n #binwidth\n bw = (dt.max()-dt.min())\/(N-1.)\n bin1 = dt.min()+ bw*np.arange(N)\n #define the array to hold the occurrence count\n bincount= np.array([])\n for bin in bin1:\n count = np.where((dt>=bin)&(dt [...]\n```\n[...]"} {"question":"Use matplotlib color map for color cycle [...]","response":"[...] \n```\ndef get_cycle(cmap, N=None, use_index=\"auto\"):\n [...]\n if use_index==\"auto\":\n if cmap.N > 100:\n use_index=False\n elif isinstance(cmap, LinearSegmentedColormap):\n use_index=False\n elif isinstance(cmap, ListedColormap):\n use_index=True\n if use_index:\n ind = np.arange(int(N)) % cmap.N\n return cycler(\"color\",cmap(ind))\n else:\n colors = cmap(np.linspace(0,1,N))\n return cycler(\"color\",colors)\n```\nUsage for the \"continuous\" case: \n```\nimport matplotlib.pyplot as plt\nN = 6\nplt.rcParams[\"axes.prop_cycle\"] = get_cycle(\"viridis\", N)\n\nfig, ax = plt.subplots()\nfor i in range(N):\n ax.plot([0,1], [i, 2*i])\n\nplt.show()\n```\nUsage for the \"discrete\" case \n```\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"axes.prop_cycle\"] = get_cycle(\"tab20c\")\n\nfig, ax = plt.subplots()\nfor i in range(15):\n ax.plot([0,1], [i, 2*i])\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/30079590\/use-matplotlib-color-map-for-color-cycle","best_answers_votes":42,"tags":["python","matplotlib"],"question_length":952,"response_length":2685,"tag_count":2} {"question":"Placement of text e.g. suptitle inside the frame So far I have placed my suptitles above the frame, like this: How can I get the suptitles from above the frame into the frame? So far I have a solution that just prints a text and sets it on the right position by computing xlim and ylim. However, this is error-prone and if the text is different it just looks awful. Is there a way to set the suplabel into the frame? Or just place text below the frame and centered? It would be really convenient if I did not need to know about the data that is displayed inside the frame.","response":"Your solution using text is also my go-to solution. However, you don't need to compute the position based on xlim and ylim. If you set transform=ax.transAxes the coordinates for positioning the text are taken as being relative to the axes bounding box (0,0 being the lower left corner). Like so: \n```\nfrom matplotlib.pyplot import figure, show\n\ndata = range(1,10)\nfig = figure()\nfor i in range(6):\n ax = fig.add_subplot(2,3,i+1) # subplot indices start at 1\n\n ax.text(.5,.9,'centered title',\n horizontalalignment='center',\n transform=ax.transAxes)\n\n ax.plot(data)\nshow()\n```\nHope that helps!","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12957801\/placement-of-text-e-g-suptitle-inside-the-frame","best_answers_votes":47,"tags":["python","matplotlib","title","suptitle"],"question_length":574,"response_length":540,"tag_count":4} {"question":"How do you just show the text label in plot legend? (e.g.
remove a label's line in the legend) I want to show the text for a line's label in the legend, but not a line too (as shown in the figure below): I have tried to minimise the legend's line and label, and overwrite only the new label too (as in the code below). However, the legend brings both back. \n```\nlegend = ax.legend(loc=0, shadow=False)\nfor label in legend.get_lines():\n label.set_linewidth(0.0)\nfor label in legend.get_texts():\n label.set_fontsize(0)\n\nax.legend(loc=0, title='New Title')\n```","response":"At that point, it's arguably easier to just use annotate. For example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.random.normal(0, 1, 1000).cumsum()\n\nfig, ax = plt.subplots()\nax.plot(data)\nax.annotate('Label', xy=(-12, -12), xycoords='axes points',\n size=14, ha='right', va='top',\n bbox=dict(boxstyle='round', fc='w'))\nplt.show()\n```\nHowever, if you did want to use legend, here's how you'd do it. You'll need to explicitly hide the legend handles in addition to setting their size to 0 and removing their padding. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.random.normal(0, 1, 1000).cumsum()\n\nfig, ax = plt.subplots()\nax.plot(data, label='Label')\n\nleg = ax.legend(handlelength=0, handletextpad=0, fancybox=True)\nfor item in leg.legendHandles:\n item.set_visible(False)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25123127\/how-do-you-just-show-the-text-label-in-plot-legend-e-g-remove-a-labels-line","best_answers_votes":28,"tags":["python","matplotlib","plot","legend","figure"],"question_length":588,"response_length":856,"tag_count":5} {"question":"Latex on python: \\alpha and \\beta don't work? I'm using matplotlib to produce some graphics, and I'm using latex for the legends. More specifically, I'm trying to use something like this: \n```\nloglog(x,x, '-r',label='$ \\alpha $')\nlegend()\nshow()\n```\nHowever, this code does not present a legend on the figure, and I get an error after I close the image. I'm using the enthought package (for mac), but the error comes from the pylab\/scipy. The error that appears is exactly: \n```\n$ lpha $ (at char 0), (line:1, col:1)\n```\nHowever, if I use \\mu or \\gamma, it works well!! I only found this problem with \\beta and \\alpha. Does anyone know what this can be? I believe python is interpreting \"\\a\" as some character... but I don't know how I should debug \/ avoid it.","response":"The issue is that \\a and \\b have special meaning inside a Python string literal. I recommend that you use raw strings whenever there is a backslash embedded inside a string: \n```\nr'$ \\alpha $'\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7836730\/latex-on-python-alpha-and-beta-dont-work","best_answers_votes":62,"tags":["python","matplotlib"],"question_length":762,"response_length":196,"tag_count":2} {"question":"How to change font properties of a matplotlib colorbar label? In matplotlib, I want to change the font properties for a colorbar label. For example, I want the label to appear bold.
Here is some example code: \n```\nfrom matplotlib.pylab import *\npcolor(arange(20).reshape(4,5))\ncb = colorbar(label='a label')\n```\nand the result, where I want \"a label\" to appear bold: All other answers on this site only answer how to change ticklabels or change all fonts in general (via modification of the matplotlibrc file)","response":"This two-liner can be used with any Text property (http:\/\/matplotlib.org\/api\/text_api.html#matplotlib.text.Text) \n```\ncb = plt.colorbar()\ncb.set_label(label='a label',weight='bold')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23172282\/how-to-change-font-properties-of-a-matplotlib-colorbar-label","best_answers_votes":27,"tags":["python","matplotlib"],"question_length":508,"response_length":185,"tag_count":2} {"question":"Linewidth is added to the length of a line When I draw a line segment in matplotlib the linewidth seems to be added to the length of the line. Below my code (not the most pythonic code, but it should do the trick). Am I doing something wrong or is this just a feature of matplotlib? \n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nL1 = 100\nL2 = 75\nL3 = 100\nY = 3\nN = 5\nl_prev = 0\nfor l, c in zip(np.linspace(0, L1, N), range(N)):\n plt.plot([l_prev, l], [0, 0], 'r', linewidth=20)\n l_prev = l\nl_prev = L1\nfor l, c in zip(np.linspace(L1, L1 + L2, N), range(N)):\n plt.plot([l_prev, l], [Y, Y], 'g', linewidth=1)\n l_prev = l\nl_prev = L1\nfor l, c in zip(np.linspace(L1, L1 + L3, N), range(N)):\n p = plt.plot([l_prev, l], [-Y, -Y], 'b', linewidth=10)\n l_prev = l\nplt.axvspan(xmin=L1, xmax=L1)\nplt.axis([-5, 205, -5, 5])\nplt.show()\n```\nWhat I expected to see is three line segments: [0,L1], [L1,L2] and [L1,L3]. But the first line [0,L1] extends to L1 + 'the diameter'....","response":"It looks like the default solid_capstyle is projecting, which isn't the one you want: \n```\nplt.figure()\nplt.plot([0, 100], [5, 5], linewidth=50, linestyle=\"-\", c=\"blue\",\n solid_capstyle=\"butt\")\nplt.plot([0, 100], [15, 15], linewidth=50, linestyle=\"-\", c=\"red\",\n solid_capstyle=\"round\")\nplt.plot([0, 100], [25, 25], linewidth=50, linestyle=\"-\", c=\"purple\",\n solid_capstyle=\"projecting\")\nplt.axvline(x=100, c=\"black\")\nplt.xlim(0, 125)\nplt.ylim(0, 30)\nplt.savefig(\"cap.png\")\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10297220\/linewidth-is-added-to-the-length-of-a-line","best_answers_votes":67,"tags":["python","matplotlib"],"question_length":998,"response_length":499,"tag_count":2} {"question":"White lines in matplotlib's pcolor In some pdf viewers such as Preview on OSX, plots made with matplotlib's pcolor have white lines (see image below). How can I get rid of them? The source code is very simple (choose any data for x,y,z): \n```\nimport matplotlib\nmatplotlib.use(\"pdf\")\nimport matplotlib.pyplot as pl\npl.figure()\npl.pcolormesh(x,y,z,cmap=\"Blues\",linewidth=0) \npl.savefig(\"heatmap.pdf\")\n```","response":"The comments have a good solution that utilizes imshow. When imshow is not appropriate for the input data (e.g. 
it is not uniformly spaced), the following generally solves the problem: \n```\npcol = pl.pcolormesh(x,y,z,cmap=\"Blues\",linewidth=0,)\npcol.set_edgecolor('face')\n```\nIf that approach does not reduce the lines sufficiently, you can also try this: \n```\npl.pcolormesh(x,y,z,cmap=\"Blues\",linewidth=0,rasterized=True)\n```\nIn addition to reducing the lines between squares, this approach also tends to reduce file size a bit, which is sometimes useful. In this case you may want to tune the dpi setting when saving (e.g. pl.savefig(\"heatmap.pdf\", dpi=300)) until you get something that is satisfactory.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27092991\/white-lines-in-matplotlibs-pcolor","best_answers_votes":40,"tags":["python","matplotlib"],"question_length":402,"response_length":695,"tag_count":2} {"question":"How to write annotation outside the drawing in data coords My graphic goes from y=-1 to y=10 I want to write a small piece of text in an arbitrary position, say at x=2000, y=5: \n```\nax.annotate('MgII', xy=(2000.0, 5.0), xycoords='data')\n```\nNow I want the same, but this time the piece of text must be outside the graphic, but in the exact position I mark in data coordinates: \n```\nax.annotate('MgII', xy=(2000.0, 10.5), xycoords='data')\n```\nBut it then disappears (remember my graphic goes from -1 to 10). There is plenty of space free on top of the graphic. And, if I specify \n```\nxy=(2000.0, 9.999)\n```\nthen the label appears nearly where I want it, only it is too close to the top border of the picture. I want it at y=10.5, specifically.","response":"```\nax.annotate('MgII', xy=(2000.0, 10.5), xycoords='data', annotation_clip=False)\n```\nBy default, an annotation given in data units is only drawn if it falls inside the axes. You might be better off using a blended transform: \n```\ntrans = ax.get_xaxis_transform() # x in data units, y in axes fraction\nann = ax.annotate('MgII', xy=(2000, 1.05), xycoords=trans)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18537879\/how-to-write-annotation-outside-the-drawing-in-data-coords","best_answers_votes":57,"tags":["python","matplotlib"],"question_length":744,"response_length":350,"tag_count":2} {"question":"Matplotlib with annotation cut off from the saved figure I'm using matplotlib to draw a figure with annotations. The problem I experienced is that an annotation which is outside the plotting area is cut off from the saved picture, as shown in the following figure. This following figure is the one I want: Does anybody know how to solve this problem? I noticed people suggest using plt.tight_layout() or fig.autolayout in rcParams, but that doesn't seem to work. Below is the code for producing the figure. 
\n```\nfig, ax = plt.subplots()\nax.set_xlim([-0.02,1.1])\nax.set_ylim([-0.02,1.1])\n\nax.plot([0,0,0,0.5,0.5,0.5,1,1,1], [0,0.5,1,0,0.5,1,0,0.5,1], 'go')\n\nax.annotate(\"Digit 2\",\n xy=(0.5, -0.1), xycoords='data',\n xytext=(0.5, -0.3), textcoords='data',\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"arc3\"),\n annotation_clip=False,\n fontsize = 12,\n ha='center',\n )\n\nax.annotate(\"Level 2\",\n xy=(-0.1, 1), xycoords='data',\n xytext=(-0.35, 1), textcoords='data',\n arrowprops=dict(arrowstyle=\"->\",\n connectionstyle=\"arc3\"),\n annotation_clip=False,\n fontsize = 12,\n va='center',\n )\n\nplt.savefig('sample.png', dpi = 300)\n```","response":"Save figure with the bbox_inches argument \n```\nplt.savefig('sample.png', bbox_inches=\"tight\")\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29901422\/matplotlib-with-annotation-cut-off-from-the-saved-figure","best_answers_votes":57,"tags":["python","matplotlib"],"question_length":1371,"response_length":97,"tag_count":2} {"question":"Adding a legend to a matplotlib boxplot with multiple plots on same axes I have a boxplot generated with matplotlib: However, I have no idea how to generate the legend. I get a warning saying: Legend does not support handles for dict instances. or No artists with labels found to put in legend. I've done a fair bit of searching and there doesn't seem to be an example showing how to achieve this. \n```\nbp1 = ax.boxplot(data1, positions=[1,4], notch=True, widths=0.35, patch_artist=True)\nbp2 = ax.boxplot(data2, positions=[2,5], notch=True, widths=0.35, patch_artist=True)\n\nax.legend([bp1, bp2], ['A', 'B'], loc='upper right')\n```","response":"The boxplot returns a dictionary of artists result : dict A dictionary mapping each component of the boxplot to a list of the matplotlib.lines.Line2D instances created. That dictionary has the following keys (assuming vertical boxplots): boxes: the main body of the boxplot showing the quartiles and the median\u2019s confidence intervals if enabled. [...] Using the boxes, you can get the legend artists as \n```\nax.legend([bp1[\"boxes\"][0], bp2[\"boxes\"][0]], ['A', 'B'], loc='upper right')\n```\nComplete example: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np; np.random.seed(1)\n\ndata1=np.random.randn(40,2)\ndata2=np.random.randn(30,2)\n\nfig, ax = plt.subplots()\nbp1 = ax.boxplot(data1, positions=[1,4], notch=True, widths=0.35, \n patch_artist=True, boxprops=dict(facecolor=\"C0\"))\nbp2 = ax.boxplot(data2, positions=[2,5], notch=True, widths=0.35, \n patch_artist=True, boxprops=dict(facecolor=\"C2\"))\n\nax.legend([bp1[\"boxes\"][0], bp2[\"boxes\"][0]], ['A', 'B'], loc='upper right')\n\nax.set_xlim(0,6)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47528955\/adding-a-legend-to-a-matplotlib-boxplot-with-multiple-plots-on-same-axes","best_answers_votes":46,"tags":["python","matplotlib","plot","boxplot","plot-annotations"],"question_length":630,"response_length":1043,"tag_count":5} {"question":"ConversionError: Failed to convert value(s) to axis units I have a dataframe dd containing 84 rows and 3 columns. 
Now I want to plot an Area Plot with it, and use its index as xticks, so I do the following: \n```py\ndd.plot(kind='area')\nplt.show()\n```\nBut I got this result: It turns out that some xticks are hidden automatically: there should be 84 xticks, but only 9 of them are shown. I found a similar question here, but when I tried the method mentioned in the link, I got a ConversionError: \n```py\nConversionError: Failed to convert value(s) to axis units: Index(['!', '\"', '#', '$', '%', '&', ''', '(', ')', '*', '+', ',', '-', '.',\n '\/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', ';', '<',\n '=', '>', '?', '@', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J',\n 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',\n 'Y', 'Z', '[', '\\', ']', '^', '_', '`', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't'],\n dtype='object')\n```\nI noticed that the difference between my question and the link above is that the index of my DataFrame has dtype object (they are strings), and I found that if I change the index to a list of ints, the error goes away. The code to reproduce the error: \n```py\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndd=pd.DataFrame(np.random.rand(84,3),index=[chr(ascii) for ascii in range(33,33+84)])\n\ndd.plot(kind='area',xticks=dd.index)\n\nplt.show()\n```","response":"We need to provide the index positions of the labels to the xticks function, and the labels' order should match those index positions. The xticks function takes 3 arguments: ticks should be the index positions of the labels, labels takes the list of label values, and rotation controls how the labels are oriented in the plot. \n```py\nx = df['state']\ny = df['sales']\nlabellist = list(x) # the desired tick labels\ntickvalues = range(0, len(x)) # or tickvalues = df.index\nplt.figure(figsize = (20,5))\nplt.xticks(ticks = tickvalues, labels = labellist, rotation = 'vertical')\nplt.plot(x,y)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/58814857\/conversionerror-failed-to-convert-values-to-axis-units","best_answers_votes":26,"tags":["python","pandas","matplotlib"],"question_length":1559,"response_length":533,"tag_count":3} {"question":"Python NetworkX -- set node color automatically based on number of attribute options I am using NetworkX to analyze and visualize social networks. Often, the nodes within the network have attribute information associated with them, such as division. However, I do not always know how many options there might be for the attribute division. For example, sometimes there might be only 3 divisions represented as node attributes within the network, other times there might be 30 divisions. I've figured out how to set node colors based on node attributes (see code below). However, in this example, I knew how many different options there were for the node attribute group (5 options), and set each color manually. When there are only 3 or 5 options for node attributes, it's not difficult to pick the node attribute colors, but this becomes unrealistic when there are many more options. What I would like to figure out is how I can automatically pick the optimal node attribute colors based on the number of node attribute options provided. Sometimes I might have 5 options for the attribute to color by, other times I might have 30 options for the node attribute to color by, and I don't want to set each node color individually. 
I'm not sure if this is something that I should be able to do with the colormap function, or if that is only for color nodes by numeric measures such as degree centrality. NETWORKX CODE \n```\nimport networkx as nx\n\npylab inline\n\n# create an empty graph\ng = nx.Graph()\n\n# open csv edgelist and read edges into graph\nfor line in open('phils_network_edgelist.csv', 'rb'):\n edge = line.rstrip().split(',')\n g.add_edge(edge[0], edge[1])\n\n# draw network without node color\nnx.draw(g, with_labels=False, node_size=25)\n```\n```\n# read in node attributes as list of tuples\ngroup_attr = []\nfor line in open('phils_network_attribute_group.csv', 'rb'):\n group_attr.append(tuple(line.rstrip().split(',')))\n\n# convert list of tuples into a dict\ngroup_attr_dict = dict(set(sorted(group_attr)))\n\n# set nodes attributes\nnx.set_node_attributes(g, \"group\", group_attr_dict)\n\n# create empty list for node colors\nnode_color = []\n\n# for each node in the graph\nfor node in g.nodes(data=True):\n\n # if the node has the attribute group1\n if 'group1' in node[1]['group']:\n node_color.append('blue')\n\n # if the node has the attribute group1\n elif 'group2' in node[1]['group']:\n node_color.append('red')\n\n # if the node has the attribute group1\n elif 'group3' in node[1]['group']:\n node_color.append('green')\n\n # if the node has the attribute group1\n elif 'group4' in node[1]['group']:\n node_color.append('yellow')\n\n # if the node has the attribute group1\n elif 'group5' in node[1]['group']:\n node_color.append('orange') \n\n# draw graph with node attribute color\nnx.draw(g, with_labels=False, node_size=25, node_color=node_color)\n```\nNETWORK DATA \n```\n\n```python\ng.nodes(data=True)\n#Output\n#[('BD', {'group': 'group5'}),\n# ('WC', {'group': 'group3'}),\n# ('BA', {'group': 'group4'}),\n# ('WM', {'group': 'group3'}),\n# ('JR', {'group': 'group1'}),\n# ('JS', {'group': 'group3'}),\n# ('JL', {'group': 'group4'}),\n# ('JM', {'group': 'group2'}),\n# ('JK', {'group': 'group2'}),\n# ('JF', {'group': 'group2'}),\n# ('JG', {'group': 'group2'}),\n# ('JA', {'group': 'group2'}),\n# ('JB', {'group': 'group4'}),\n# ('JC', {'group': 'group4'}),\n# ('RR', {'group': 'group3'}),\n# ('RS', {'group': 'group3'}),\n# ('TTI', {'group': 'group3'}),\n# ('RB', {'group': 'group1'}),\n# ('RL', {'group': 'group3'}),\n# ('RO', {'group': 'group4'}),\n# ('LHA', {'group': 'group2'}),\n# ('LHI', {'group': 'group1'}),\n# ('GF', {'group': 'group2'}),\n# ('GB', {'group': 'group4'}),\n# ('EM', {'group': 'group2'}),\n# ('HR', {'group': 'group5'}),\n# ('BS', {'group': 'group3'}),\n# ('HH', {'group': 'group4'}),\n# ('HA', {'group': 'group1'}),\n# ('PS', {'group': 'group1'}),\n# ('PW', {'group': 'group1'}),\n# ('PB', {'group': 'group1'}),\n# ('PC', {'group': 'group5'}),\n# ('MFR', {'group': 'group4'}),\n# ('JMA', {'group': 'group5'}),\n# ('PN', {'group': 'group4'}),\n# ('PL', {'group': 'group3'}),\n# ('ZL', {'group': 'group4'}),\n# ('EB', {'group': 'group2'}),\n# ('ET', {'group': 'group3'}),\n# ('EW', {'group': 'group1'}),\n# ('ER', {'group': 'group3'}),\n# ('MF', {'group': 'group3'}),\n# ('MA', {'group': 'group4'}),\n# ('MM', {'group': 'group2'}),\n# ('MN', {'group': 'group4'}),\n# ('MH', {'group': 'group3'}),\n# ('MK', {'group': 'group2'}),\n# ('JLA', {'group': 'group2'}),\n# ('MP', {'group': 'group1'}),\n# ('MS', {'group': 'group4'}),\n# ('MR', {'group': 'group4'}),\n# ('FI', {'group': 'group5'}),\n# ('CJ', {'group': 'group4'}),\n# ('CO', {'group': 'group5'}),\n# ('CM', {'group': 'group4'}),\n# ('CB', {'group': 
'group2'}),\n# ('CG', {'group': 'group2'}),\n# ('CF', {'group': 'group5'}),\n# ('CD', {'group': 'group3'}),\n# ('CS', {'group': 'group2'}),\n# ('CP', {'group': 'group2'}),\n# ('CV', {'group': 'group2'}),\n# ('KC', {'group': 'group1'}),\n# ('KB', {'group': 'group3'}),\n# ('SY', {'group': 'group2'}),\n# ('KF', {'group': 'group2'}),\n# ('KD', {'group': 'group3'}),\n# ('KH', {'group': 'group1'}),\n# ('SW', {'group': 'group1'}),\n# ('KL', {'group': 'group2'}),\n# ('KP', {'group': 'group3'}),\n# ('KW', {'group': 'group1'}),\n# ('SM', {'group': 'group2'}),\n# ('SB', {'group': 'group4'}),\n# ('DJ', {'group': 'group2'}),\n# ('DD', {'group': 'group2'}),\n# ('DV', {'group': 'group5'}),\n# ('BJ', {'group': 'group3'}),\n# ('DR', {'group': 'group2'}),\n# ('KWI', {'group': 'group4'}),\n# ('TW', {'group': 'group2'}),\n# ('TT', {'group': 'group2'}),\n# ('LH', {'group': 'group3'}),\n# ('LW', {'group': 'group3'}),\n# ('TM', {'group': 'group3'}),\n# ('LS', {'group': 'group3'}),\n# ('LP', {'group': 'group2'}),\n# ('TG', {'group': 'group3'}),\n# ('JCU', {'group': 'group2'}),\n# ('AL', {'group': 'group1'}),\n# ('AP', {'group': 'group3'}),\n# ('AS', {'group': 'group3'}),\n# ('IM', {'group': 'group4'}),\n# ('AW', {'group': 'group3'}),\n# ('HHI', {'group': 'group1'})]\n```\n\n```python\ng.edges(data=True)\n#Output\n#[('BD', 'ZL', {}),\n# ('BD', 'JCU', {}),\n# ('BD', 'DJ', {}),\n# ('BD', 'BA', {}),\n# ('BD', 'CB', {}),\n# ('BD', 'CG', {}),\n# ('BD', 'AS', {}),\n# ('BD', 'MH', {}),\n# ('BD', 'AP', {}),\n# ('BD', 'HH', {}),\n# ('BD', 'TM', {}),\n# ('BD', 'CF', {}),\n# ('BD', 'CP', {}),\n# ('BD', 'DR', {}),\n# ('BD', 'CV', {}),\n# ('BD', 'EB', {}),\n# ('WC', 'JCU', {}),\n# ('WC', 'JS', {}),\n# ('BA', 'JR', {}),\n# ('BA', 'JB', {}),\n# ('BA', 'RR', {}),\n# ('BA', 'RS', {}),\n# ('BA', 'LH', {}),\n# ('BA', 'PC', {}),\n# ('BA', 'TTI', {}),\n# ('BA', 'PL', {}),\n# ('BA', 'JCU', {}),\n# ('BA', 'CF', {}),\n# ('BA', 'EB', {}),\n# ('BA', 'GF', {}),\n# ('BA', 'AS', {}),\n# ('BA', 'IM', {}),\n# ('BA', 'BJ', {}),\n# ('BA', 'CS', {}),\n# ('BA', 'KH', {}),\n# ('BA', 'SW', {}),\n# ('BA', 'MH', {}),\n# ('BA', 'MR', {}),\n# ('BA', 'HHI', {}),\n# ('WM', 'EM', {}),\n# ('WM', 'JCU', {}),\n# ('WM', 'CO', {}),\n# ('WM', 'LP', {}),\n# ('WM', 'AW', {}),\n# ('WM', 'KD', {}),\n# ('WM', 'TT', {}),\n# ('WM', 'JS', {}),\n# ('WM', 'PB', {}),\n# ('WM', 'JM', {}),\n# ('WM', 'MFR', {}),\n# ('WM', 'RB', {}),\n# ('WM', 'MR', {}),\n# ('WM', 'DV', {}),\n# ('WM', 'TG', {}),\n# ('WM', 'JF', {}),\n# ('WM', 'JMA', {}),\n# ('WM', 'FI', {}),\n# ('WM', 'JB', {}),\n# ('JR', 'GF', {}),\n# ('JR', 'MFR', {}),\n# ('JR', 'KH', {}),\n# ('JR', 'JB', {}),\n# ('JS', 'EM', {}),\n# ('JS', 'PS', {}),\n# ('JS', 'MF', {}),\n# ('JS', 'JCU', {}),\n# ('JS', 'KD', {}),\n# ('JS', 'MH', {}),\n# ('JS', 'TTI', {}),\n# ('JS', 'RB', {}),\n# ('JS', 'TG', {}),\n# ('JL', 'KB', {}),\n# ('JL', 'MN', {}),\n# ('JL', 'LW', {}),\n# ('JL', 'CS', {}),\n# ('JL', 'ET', {}),\n# ('JL', 'ER', {}),\n# ('JM', 'EM', {}),\n# ('JM', 'PS', {}),\n# ('JM', 'KD', {}),\n# ('JM', 'CD', {}),\n# ('JM', 'JK', {}),\n# ('JM', 'TG', {}),\n# ('JM', 'RO', {}),\n# ('JM', 'CV', {}),\n# ('JK', 'HR', {}),\n# ('JK', 'PS', {}),\n# ('JF', 'EM', {}),\n# ('JF', 'PS', {}),\n# ('JF', 'LP', {}),\n# ('JF', 'LHA', {}),\n# ('JF', 'CD', {}),\n# ('JF', 'RB', {}),\n# ('JF', 'JG', {}),\n# ('JF', 'KF', {}),\n# ('JG', 'CJ', {}),\n# ('JG', 'SY', {}),\n# ('JG', 'KF', {}),\n# ('JG', 'LHA', {}),\n# ('JG', 'CD', {}),\n# ('JG', 'RB', {}),\n# ('JG', 'BS', {}),\n# ('JA', 'CS', {}),\n# ('JB', 'KC', {}),\n# ('JB', 'JCU', {}),\n# ('JB', 'MA', {}),\n# 
('JB', 'AW', {}),\n# ('JB', 'KWI', {}),\n# ('JB', 'KH', {}),\n# ('JB', 'CF', {}),\n# ('JB', 'EB', {}),\n# ('JB', 'PB', {}),\n# ('JB', 'MFR', {}),\n# ('JB', 'KW', {}),\n# ('JB', 'RB', {}),\n# ('JB', 'MR', {}),\n# ('JB', 'RL', {}),\n# ('JB', 'FI', {}),\n# ('JB', 'JMA', {}),\n# ('JC', 'SM', {}),\n# ('RR', 'MS', {}),\n# ('RR', 'SW', {}),\n# ('RR', 'LH', {}),\n# ('RS', 'LH', {}),\n# ('TTI', 'JCU', {}),\n# ('TTI', 'SW', {}),\n# ('TTI', 'CF', {}),\n# ('RB', 'EM', {}),\n# ('RB', 'PS', {}),\n# ('RB', 'SY', {}),\n# ('RB', 'JCU', {}),\n# ('RB', 'KD', {}),\n# ('RB', 'CF', {}),\n# ('RB', 'LHI', {}),\n# ('RB', 'CD', {}),\n# ('RB', 'MH', {}),\n# ('RB', 'CJ', {}),\n# ('RB', 'TG', {}),\n# ('RB', 'EB', {}),\n# ('RO', 'PS', {}),\n# ('LHA', 'CJ', {}),\n# ('LHA', 'SY', {}),\n# ('LHA', 'KF', {}),\n# ('LHA', 'CD', {}),\n# ('LHI', 'PS', {}),\n# ('LHI', 'CJ', {}),\n# ('GF', 'KC', {}),\n# ('GF', 'MA', {}),\n# ('GB', 'HR', {}),\n# ('GB', 'MM', {}),\n# ('GB', 'LS', {}),\n# ('EM', 'LP', {}),\n# ('EM', 'DV', {}),\n# ('EM', 'TG', {}),\n# ('HR', 'MM', {}),\n# ('HR', 'MH', {}),\n# ('HR', 'EB', {}),\n# ('HR', 'LS', {}),\n# ('BS', 'CD', {}),\n# ('HH', 'ZL', {}),\n# ('HH', 'CB', {}),\n# ('HH', 'CP', {}),\n# ('HH', 'DR', {}),\n# ('HH', 'CV', {}),\n# ('HA', 'SM', {}),\n# ('PS', 'KD', {}),\n# ('PS', 'CF', {}),\n# ('PS', 'TG', {}),\n# ('PW', 'CM', {}),\n# ('PW', 'TW', {}),\n# ('PW', 'TT', {}),\n# ('PW', 'MH', {}),\n# ('PW', 'AL', {}),\n# ('PW', 'MP', {}),\n# ('PW', 'CS', {}),\n# ('PW', 'HHI', {}),\n# ('PW', 'EW', {}),\n# ('PB', 'CO', {}),\n# ('PB', 'KH', {}),\n# ('PB', 'CF', {}),\n# ('PB', 'MFR', {}),\n# ('PB', 'AW', {}),\n# ('PB', 'MA', {}),\n# ('PC', 'CS', {}),\n# ('PC', 'JCU', {}),\n# ('PC', 'SW', {}),\n# ('MFR', 'KC', {}),\n# ('MFR', 'JCU', {}),\n# ('MFR', 'KH', {}),\n# ('MFR', 'MH', {}),\n# ('MFR', 'MR', {}),\n# ('JMA', 'KWI', {}),\n# ('JMA', 'AW', {}),\n# ('PN', 'SB', {}),\n# ('PL', 'HHI', {}),\n# ('PL', 'MK', {}),\n# ('PL', 'LH', {}),\n# ('ZL', 'CB', {}),\n# ('ZL', 'AP', {}),\n# ('ZL', 'CP', {}),\n# ('ZL', 'DR', {}),\n# ('ZL', 'CV', {}),\n# ('EB', 'JCU', {}),\n# ('EB', 'DJ', {}),\n# ('EB', 'CM', {}),\n# ('EB', 'SW', {}),\n# ('EB', 'MM', {}),\n# ('EB', 'LS', {}),\n# ('EB', 'CS', {}),\n# ('EB', 'CP', {}),\n# ('EB', 'CV', {}),\n# ('ET', 'LW', {}),\n# ('ET', 'ER', {}),\n# ('ET', 'KB', {}),\n# ('EW', 'TW', {}),\n# ('EW', 'TT', {}),\n# ('EW', 'HHI', {}),\n# ('EW', 'AL', {}),\n# ('ER', 'LW', {}),\n# ('ER', 'KB', {}),\n# ('MA', 'KW', {}),\n# ('MA', 'AW', {}),\n# ('MA', 'MR', {}),\n# ('MM', 'LS', {}),\n# ('MH', 'JCU', {}),\n# ('MH', 'SY', {}),\n# ('MH', 'DJ', {}),\n# ('MH', 'CM', {}),\n# ('MH', 'AL', {}),\n# ('MH', 'SW', {}),\n# ('MH', 'CF', {}),\n# ('MH', 'LS', {}),\n# ('MH', 'CS', {}),\n# ('MH', 'TG', {}),\n# ('MH', 'CP', {}),\n# ('MH', 'CV', {}),\n# ('MK', 'LH', {}),\n# ('MK', 'KL', {}),\n# ('MK', 'JLA', {}),\n# ('MK', 'MS', {}),\n# ('MK', 'CS', {}),\n# ('JLA', 'CM', {}),\n# ('JLA', 'KL', {}),\n# ('JLA', 'MS', {}),\n# ('JLA', 'CS', {}),\n# ('JLA', 'SB', {}),\n# ('JLA', 'HHI', {}),\n# ('MP', 'TW', {}),\n# ('MP', 'TT', {}),\n# ('MP', 'HHI', {}),\n# ('MS', 'CS', {}),\n# ('MS', 'HHI', {}),\n# ('FI', 'KW', {}),\n# ('FI', 'AW', {}),\n# ('FI', 'CF', {}),\n# ('CJ', 'SY', {}),\n# ('CJ', 'DD', {}),\n# ('CJ', 'CD', {}),\n# ('CO', 'AW', {}),\n# ('CM', 'TW', {}),\n# ('CM', 'TT', {}),\n# ('CM', 'AL', {}),\n# ('CM', 'CS', {}),\n# ('CB', 'DJ', {}),\n# ('CB', 'CP', {}),\n# ('CB', 'CV', {}),\n# ('CG', 'CF', {}),\n# ('CF', 'JCU', {}),\n# ('CF', 'AW', {}),\n# ('CF', 'KH', {}),\n# ('CF', 'LH', {}),\n# ('CF', 'AP', {}),\n# ('CF', 'AS', {}),\n# 
('CF', 'KW', {}),\n# ('CF', 'CS', {}),\n# ('CF', 'CV', {}),\n# ('CD', 'SY', {}),\n# ('CD', 'LP', {}),\n# ('CD', 'KF', {}),\n# ('CS', 'JCU', {}),\n# ('CS', 'TW', {}),\n# ('CS', 'TT', {}),\n# ('CS', 'AS', {}),\n# ('CS', 'LH', {}),\n# ('CS', 'SB', {}),\n# ('CS', 'HHI', {}),\n# ('CP', 'DJ', {}),\n# ('CP', 'AP', {}),\n# ('CP', 'DR', {}),\n# ('CP', 'CV', {}),\n# ('CV', 'DJ', {}),\n# ('CV', 'AP', {}),\n# ('CV', 'DR', {}),\n# ('KB', 'LW', {}),\n# ('SY', 'KF', {}),\n# ('KF', 'AP', {}),\n# ('KD', 'TG', {}),\n# ('SW', 'BJ', {}),\n# ('SW', 'IM', {}),\n# ('SW', 'LH', {}),\n# ('KL', 'TT', {}),\n# ('KP', 'TM', {}),\n# ('KW', 'JCU', {}),\n# ('SB', 'AL', {}),\n# ('DJ', 'TG', {}),\n# ('BJ', 'IM', {}),\n# ('KWI', 'AW', {}),\n# ('TW', 'TT', {}),\n# ('TW', 'AL', {}),\n# ('TW', 'HHI', {}),\n# ('TT', 'AL', {}),\n# ('TT', 'HHI', {}),\n# ('LH', 'JCU', {}),\n# ('JCU', 'AP', {}),\n# ('JCU', 'AS', {}),\n# ('AL', 'HHI', {})]\n#```\n```","response":"Here is an example of how to use a colormap. It's a little tricky. If you want a customized discrete colormap you can try this SO answer Matplotlib discrete colorbar \n```\nimport matplotlib.pyplot as plt\n# create number for each group to allow use of colormap\nfrom itertools import count\n# get unique groups\ngroups = set(nx.get_node_attributes(g,'group').values())\nmapping = dict(zip(sorted(groups),count()))\nnodes = g.nodes()\ncolors = [mapping[g.nodes[n]['group']] for n in nodes]\n\n# drawing nodes and edges separately so we can capture collection for colobar\npos = nx.spring_layout(g)\nec = nx.draw_networkx_edges(g, pos, alpha=0.2)\nnc = nx.draw_networkx_nodes(g, pos, nodelist=nodes, node_color=colors, node_size=100, cmap=plt.cm.jet)\nplt.colorbar(nc)\nplt.axis('off')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28910766\/python-networkx-set-node-color-automatically-based-on-number-of-attribute-opt","best_answers_votes":43,"tags":["python","matplotlib","networkx"],"question_length":11965,"response_length":783,"tag_count":3} {"question":"Centering x-tick labels between tick marks in matplotlib I want to have the x-tick date labels centered between the tick marks, instead of centered about the tick marks as shown in the photo below. I have read the documentation but to no avail - does anyone know a way to do this? Here is everything that I've used for my x-axis tick formatting if it helps: \n```\nday_fmt = '%d' \nmyFmt = mdates.DateFormatter(day_fmt)\nax.xaxis.set_major_formatter(myFmt) \nax.xaxis.set_major_locator(matplotlib.dates.DayLocator(interval=1)) \n\nfor tick in ax.xaxis.get_major_ticks():\n tick.tick1line.set_markersize(0)\n tick.tick2line.set_markersize(0)\n tick.label1.set_horizontalalignment('center')\n```","response":"One way to do it is to use the minor ticks. The idea is that you set the minor ticks so that they are located halfway between the major ticks, and you manually specify the labels. For example: \n```\nimport matplotlib.ticker as ticker\n\n# a is an axes object, e.g. from figure.get_axes()\n\n# Hide major tick labels\na.xaxis.set_major_formatter(ticker.NullFormatter())\n\n# Customize minor tick labels\na.xaxis.set_minor_locator(ticker.FixedLocator([1.5,2.5,3.5,4.5,5.5]))\na.xaxis.set_minor_formatter(ticker.FixedFormatter(['1','2','3','4','5']))\n```\nThe three lines: \"Hide\" the 1,2,3,4,... that you have on the major ticks Set minor ticks halfway between the major ticks (assuming your major ticks are at 1,2,3...) Manually specifies the labels for the minor ticks. 
Here, '1' would be between 1.0 and 2.0 on the graph. This is just a simple example. You would probably want to streamline it a bit by populating the lists in a loop or something. You can also experiment with other locators or formatters. Edit: Alternatively, as suggested in the comments: \n```\n# Hide major tick labels\na.set_xticklabels('')\n\n# Customize minor tick labels\na.set_xticks([1.5,2.5,3.5,4.5,5.5], minor=True)\na.set_xticklabels(['1','2','3','4','5'], minor=True)\n```\nExample: Before: After:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17158382\/centering-x-tick-labels-between-tick-marks-in-matplotlib","best_answers_votes":30,"tags":["python","matplotlib","pandas"],"question_length":700,"response_length":1263,"tag_count":3} {"question":"The preferred way to set matplotlib figure\/axes properties Say I have a matplotlib axes called ax, and I want to set several of its properties. Currently, I do it like this: \n```\nax.set_yscale('log')\nax.set_xlim([0,10])\nax.set_xlabel('some label')\n```\nBut it gets tedious after a while. Then I ran into this method: \n```\nax.set(yscale='log', xlim=[0,10], xlabel='some label')\n```\nMuch more concise, but it seems a bit undocumented. I mean all the documentation says is \"A tkstyle set command, pass kwargs to set properties\". What is the preferred or idiomatic way? Is the set method api stable?","response":"The Pyplot tutorial appears to recommend the ax.set_xxx() functions, but also mentions .setp(xxx=). On the other hand, the .set(xxx=) function is not used, and .setp(xxx=), while documented, is not used in any examples (Pyplot API). My understanding is that matplotlib supports both an imperative programming style and a Matlab-like style, the reason being its two target user bases: those already familiar with Python and those who used Matlab before. I conclude that the recommended matplotlib API is .set_xxx(). A quick check through the gallery confirms this. Edit: it appears there are examples of both in the gallery now. A similar duality exists for keyword arguments to plot functions, except that there the imperative API is not as obvious.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13399293\/the-preferred-way-to-set-matplotlib-figure-axes-properties","best_answers_votes":18,"tags":["python","matplotlib","idioms"],"question_length":594,"response_length":707,"tag_count":3} {"question":"Creating square subplots (of equal height and width) in matplotlib When I run this code \n```\nfrom pylab import *\n\nfigure()\nax1 = subplot(121)\nplot([1, 2, 3], [1, 2, 3])\nsubplot(122, sharex=ax1, sharey=ax1)\nplot([1, 2, 3], [1, 2, 3])\ndraw()\nshow()\n```\nI get two subplots which are \"squished\" in the X-dimension. How do I get these subplots such that the height of the Y-axis equals the width of the X-axis, for both subplots? I am using matplotlib v.0.99.1.2 on Ubuntu 10.04. Update 2010-07-08: Let's look at some things that don't work. After Googling around all day, I thought that it might be related to auto-scaling. So I tried fiddling with that. \n```\nfrom pylab import *\n\nfigure()\nax1 = subplot(121, autoscale_on=False)\nplot([1, 2, 3], [1, 2, 3])\nsubplot(122, sharex=ax1, sharey=ax1)\nplot([1, 2, 3], [1, 2, 3])\ndraw()\nshow()\n```\nmatplotlib insists on auto-scaling. 
\n```\nfrom pylab import *\n\nfigure()\nax1 = subplot(121, autoscale_on=False)\nplot([1, 2, 3], [1, 2, 3])\nsubplot(122, sharex=ax1, sharey=ax1, autoscale_on=False)\nplot([1, 2, 3], [1, 2, 3])\ndraw()\nshow()\n```\nIn this one, the data completely disappears. WTF, matplotlib? Just WTF? Okay, well maybe if we fix the aspect ratio? \n```\nfrom pylab import *\n\nfigure()\nax1 = subplot(121, autoscale_on=False)\nplot([1, 2, 3], [1, 2, 3])\naxes().set_aspect('equal')\nsubplot(122, sharex=ax1, sharey=ax1)\nplot([1, 2, 3], [1, 2, 3])\ndraw()\nshow()\n```\nThis one causes the first subplot to disappear entirely. That's hilarious! Who came up with that one? In all seriousness, now... should this really be such a hard thing to accomplish?","response":"Your problem in setting the aspect of the plots is coming in when you're using sharex and sharey. One workaround is to just not use shared axes. For example, you could do this: \n```\nfrom pylab import *\n\nfigure()\nsubplot(121, aspect='equal')\nplot([1, 2, 3], [1, 2, 3])\nsubplot(122, aspect='equal')\nplot([1, 2, 3], [1, 2, 3])\nshow()\n```\nHowever, a better workaround is to change the \"adjustable\" kwarg... You want adjustable='box', but when you're using shared axes, it has to be adjustable='datalim' (and setting it back to 'box' gives an error). However, there's a third option for adjustable to handle exactly this case: adjustable=\"box-forced\". For example: \n```\nfrom pylab import *\n\nfigure()\nax1 = subplot(121, aspect='equal', adjustable='box-forced')\nplot([1, 2, 3], [1, 2, 3])\nsubplot(122, aspect='equal', adjustable='box-forced', sharex=ax1, sharey=ax1)\nplot([1, 2, 3], [1, 2, 3])\nshow()\n```\nOr in more modern style (note: this part of the answer wouldn't have worked in 2010): \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(ncols=2, sharex=True, sharey=True)\nfor ax in axes:\n ax.plot([1, 2, 3], [1, 2, 3])\n ax.set(adjustable='box-forced', aspect='equal')\n\nplt.show()\n```\nEither way, you'll get something similar to:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3207850\/creating-square-subplots-of-equal-height-and-width-in-matplotlib","best_answers_votes":27,"tags":["python","matplotlib"],"question_length":1583,"response_length":1247,"tag_count":2} {"question":"Plot all pandas dataframe columns separately I have a pandas dataframe that just has numeric columns, and I am trying to create a separate histogram for all the features \n```\nind group people value value_50\n 1 1 5 100 1\n 1 2 2 90 1\n 2 1 10 80 1\n 2 2 20 40 0\n 3 1 7 10 0\n 3 2 23 30 0\n```\nbut in my real-life data there are 50+ columns. How can I create a separate plot for all of them? I have tried \n```\ndf.plot.hist( subplots = True, grid = True)\n```\nIt gave me an overlapping, unclear plot. How can I arrange them using pandas subplots=True? The example below gets me graphs in a (2,2) grid for four columns, but it's a long method for all 50 columns: \n```\nfig, [(ax1,ax2),(ax3,ax4)] = plt.subplots(2,2, figsize = (20,10))\n```","response":"Pandas subplots=True will arrange the axes in a single column. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndf = pd.DataFrame(np.random.rand(7,20))\n\ndf.plot(subplots=True)\n\nplt.tight_layout()\nplt.show()\n```\nHere, tight_layout isn't applied, because the figure is too small to arrange the axes nicely. One can use a bigger figure (figsize=(...)) though. 
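For instance, a minimal sketch of that (the figsize values here are just illustrative, not prescriptive):
```
df.plot(subplots=True, figsize=(8, 14))
plt.tight_layout()
plt.show()
```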
In order to have the axes on a grid, one can use the layout parameter, e.g. \n```\ndf.plot(subplots=True, layout=(4,5))\n```\nThe same can be achieved by creating the axes via plt.subplots() \n```\nfig, axes = plt.subplots(nrows=4, ncols=5)\ndf.plot(subplots=True, ax=axes)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/55567706\/plot-all-pandas-dataframe-columns-separately","best_answers_votes":75,"tags":["python","pandas","matplotlib","subplot"],"question_length":816,"response_length":653,"tag_count":4} {"question":"How to write text above the bars on a bar plot (Python)? I have this graph: I want to write the count above each column. These values are in the first and second lists. Can you help me solve this problem? I tried something without success. This is the code for the graph: \n```\ncountListFast = [1492.0, 497.0, 441.0, 218.0, 101.0, 78.0, 103.0]\ncountListSlow = [1718.0, 806.0, 850.0, 397.0, 182.0, 125.0, 106.0]\n\nerrorRateListOfFast = ['9.09', '9.09', '9.38', '9.40', '7.89', '8.02', '10.00']\nerrorRateListOfSlow = ['10.00', '13.04', '14.29', '12.50', '14.29', '14.53', '11.11']\n\nopacity = 0.4\nbar_width = 0.35\n \nplt.xlabel('Tasks')\nplt.ylabel('Error Rate')\nplt.xticks(range(len(errorRateListOfFast)),('[10-20)', '[20-30)', '[30-50)', '[50-70)','[70-90)', '[90-120)', ' [120 < )'), rotation=30)\nplt.bar(np.arange(len(errorRateListOfFast))+ bar_width, errorRateListOfFast, bar_width, align='center', alpha=opacity, color='b', label='Fast <= 6 sec.')\nplt.bar(range(len(errorRateListOfSlow)), errorRateListOfSlow, bar_width, align='center', alpha=opacity, color='r', label='Slower > 6 sec.')\nplt.legend()\nplt.tight_layout()\nplt.show()\n```","response":"plt.bar() returns a list of rectangles that can be used to position suitable text above each of the bars as follows: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nerrorRateListOfFast = ['9.09', '9.09', '9.38', '9.40', '7.89', '8.02', '10.00']\nerrorRateListOfSlow = ['10.00', '13.04', '14.29', '12.50', '14.29', '14.53', '11.11']\n\n# Convert to floats\nerrorRateListOfFast = [float(x) for x in errorRateListOfFast]\nerrorRateListOfSlow = [float(x) for x in errorRateListOfSlow]\n\nopacity = 0.4\nbar_width = 0.35\n\nplt.xlabel('Tasks')\nplt.ylabel('Error Rate')\n\nplt.xticks(range(len(errorRateListOfFast)),('[10-20)', '[20-30)', '[30-50)', '[50-70)','[70-90)', '[90-120)', ' [120 < )'), rotation=30)\nbar1 = plt.bar(np.arange(len(errorRateListOfFast)) + bar_width, errorRateListOfFast, bar_width, align='center', alpha=opacity, color='b', label='Fast <= 6 sec.')\nbar2 = plt.bar(range(len(errorRateListOfSlow)), errorRateListOfSlow, bar_width, align='center', alpha=opacity, color='r', label='Slower > 6 sec.')\n\n# Add counts above the two bar graphs\nfor rect in bar1 + bar2:\n height = rect.get_height()\n plt.text(rect.get_x() + rect.get_width() \/ 2.0, height, f'{height:.0f}', ha='center', va='bottom')\n\nplt.legend()\nplt.tight_layout()\nplt.show()\n```\nGiving you: ha='center' and va='bottom' refer to how the text is aligned in relation to the x and y co-ordinates, i.e. 
horizontal and vertical alignment.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/40489821\/how-to-write-text-above-the-bars-on-a-bar-plot-python","best_answers_votes":56,"tags":["python","matplotlib","plot","graph"],"question_length":1140,"response_length":1409,"tag_count":4} {"question":"Matplotlib how to change figsize for matshow How to change figsize for matshow() in jupyter notebook? For example, this code changes the figure size: \n```\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nd = pd.DataFrame({'one' : [1, 2, 3, 4, 5],\n 'two' : [4, 3, 2, 1, 5]})\nplt.figure(figsize=(10,5))\nplt.plot(d.one, d.two)\n```\nBut the code below doesn't work: \n```\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nd = pd.DataFrame({'one' : [1, 2, 3, 4, 5],\n 'two' : [4, 3, 2, 1, 5]})\nplt.figure(figsize=(10,5))\nplt.matshow(d.corr())\n```","response":"By default, plt.matshow() produces its own figure, so in combination with plt.figure() two figures will be created and the one that hosts the matshow plot is not the one that has the figsize set. There are two options: Use the fignum argument \n```\nplt.figure(figsize=(10,5))\nplt.matshow(d.corr(), fignum=1)\n```\nPlot the matshow using matplotlib.axes.Axes.matshow instead of pyplot.matshow. \n```\nfig, ax = plt.subplots(figsize=(10,5))\nax.matshow(d.corr())\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43021762\/matplotlib-how-to-change-figsize-for-matshow","best_answers_votes":50,"tags":["python","pandas","matplotlib"],"question_length":605,"response_length":458,"tag_count":3} {"question":"Fit a gaussian function I have a histogram (see below) and I am trying to find the mean and standard deviation, along with code which fits a curve to my histogram. I think there is something in SciPy or matplotlib that can help, but every example I've tried doesn't work. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nwith open('gau_b_g_s.csv') as f:\n v = np.loadtxt(f, delimiter= ',', dtype=\"float\", skiprows=1, usecols=None)\n\nfig, ax = plt.subplots()\n\nplt.hist(v, bins=500, color='#7F38EC', histtype='step')\n\nplt.title(\"Gaussian\")\nplt.axis([-1, 2, 0, 20000])\n\nplt.show()\n```","response":"Take a look at this answer for fitting arbitrary curves to data. Basically you can use scipy.optimize.curve_fit to fit any function you want to your data. The code below shows how you can fit a Gaussian to some random data (credit to this SciPy-User mailing list post). \n```\nimport numpy\nfrom scipy.optimize import curve_fit\nimport matplotlib.pyplot as plt\n\n# Define some test data which is close to Gaussian\ndata = numpy.random.normal(size=10000)\n\nhist, bin_edges = numpy.histogram(data, density=True)\nbin_centres = (bin_edges[:-1] + bin_edges[1:])\/2\n\n# Define model function to be used to fit to the data above:\ndef gauss(x, *p):\n A, mu, sigma = p\n return A*numpy.exp(-(x-mu)**2\/(2.*sigma**2))\n\n# p0 is the initial guess for the fitting coefficients (A, mu and sigma above)\np0 = [1., 0., 1.]\n\ncoeff, var_matrix = curve_fit(gauss, bin_centres, hist, p0=p0)\n\n# Get the fitted curve\nhist_fit = gauss(bin_centres, *coeff)\n\nplt.plot(bin_centres, hist, label='Test data')\nplt.plot(bin_centres, hist_fit, label='Fitted data')\n\n# Finally, let's get the fitting parameters, i.e. 
the mean and standard deviation:\nprint('Fitted mean = ', coeff[1])\nprint('Fitted standard deviation = ', coeff[2])\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11507028\/fit-a-gaussian-function","best_answers_votes":49,"tags":["python","matplotlib","scipy","histogram","curve-fitting"],"question_length":589,"response_length":1205,"tag_count":5} {"question":"How to plot a bar graph from a pandas series? Consider my series as below: The first column is article_id and the second column is the frequency count. \n```\narticle_id \n1 39 \n2 49 \n3 187 \n4 159 \n5 158 \n ... \n16947 14 \n16948 7 \n16976 2 \n16977 1 \n16978 1 \n16980 1 \n\nName: article_id, dtype: int64\n```\nI got this series from a dataframe with the following command: \n```\nlogs.loc[logs['article_id'] <= 17029].groupby('article_id')['article_id'].count()\n```\nlogs is the dataframe here and article_id is one of the columns in it. How do I plot a bar chart (using Matplotlib) such that the article_id is on the X-axis and the frequency count on the Y-axis? My natural instinct was to convert it into a list using .tolist() but that doesn't preserve the article_id.","response":"IIUC you need Series.plot.bar: \n```\n#pandas 0.17.0 and above\ns.plot.bar()\n#pandas below 0.17.0\ns.plot('bar')\n```\nSample: \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ns = pd.Series({16976: 2, 1: 39, 2: 49, 3: 187, 4: 159, \n 5: 158, 16947: 14, 16977: 1, 16948: 7, 16978: 1, 16980: 1},\n name='article_id')\nprint (s)\n1 39\n2 49\n3 187\n4 159\n5 158\n16947 14\n16948 7\n16976 2\n16977 1\n16978 1\n16980 1\nName: article_id, dtype: int64\n\n\ns.plot.bar()\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/37514686\/how-to-plot-a-bar-graph-from-a-pandas-series","best_answers_votes":49,"tags":["pandas","matplotlib","plot","ipython","series"],"question_length":825,"response_length":558,"tag_count":5} {"question":"Show mean in the box plot I am new to Matplotlib, and as I am learning how to draw box plots in python, I was wondering if there is a way to show the mean in the box plots? Below is my code: \n```\nfrom pylab import *\nimport matplotlib.pyplot as plt\ndata1=np.random.rand(100,1)\ndata2=np.random.rand(100,1)\ndata_to_plot=[data1,data2]\n#Create a figure instance\nfig = plt.figure(1, figsize=(9, 6))\n# Create an axes instance\naxes = fig.add_subplot(111) \n# Create the boxplot\nbp = axes.boxplot(data_to_plot, showmeans=True)\n```\nEven though I have the showmeans flag on, it gives me the following error. \n```\nTypeError: boxplot() got an unexpected keyword argument 'showmeans'\n```","response":"This is a minimal example and produces the desired result: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata_to_plot = np.random.rand(100,5)\n\nfig = plt.figure(1, figsize=(9, 6))\nax = fig.add_subplot(111) \nbp = ax.boxplot(data_to_plot, showmeans=True)\n\nplt.show()\n```\nEDIT: If you want to achieve the same with matplotlib version 1.3.1 you'll have to plot the means manually. 
This is an example of how to do it: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata_to_plot = np.random.rand(100,5)\npositions = np.arange(5) + 1\n\nfig, ax = plt.subplots(1,2, figsize=(9,4))\n\n# matplotlib > 1.4\nbp = ax[0].boxplot(data_to_plot, positions=positions, showmeans=True)\nax[0].set_title(\"Using showmeans\")\n\n# matplotlib < 1.4\nbp = ax[1].boxplot(data_to_plot, positions=positions)\nmeans = [np.mean(data) for data in data_to_plot.T]\nax[1].plot(positions, means, 'rs')\nax[1].set_title(\"Plotting means manually\")\n\nplt.show()\n```\nResult:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29777017\/show-mean-in-the-box-plot","best_answers_votes":47,"tags":["python","boxplot","matplotlib"],"question_length":669,"response_length":943,"tag_count":3} {"question":"Removing Horizontal Lines in image (OpenCV, Python, Matplotlib) Using the following code I can remove horizontal lines in images. See result below. \n```\nimport cv2\nfrom matplotlib import pyplot as plt\n\nimg = cv2.imread('image.png',0)\n\nlaplacian = cv2.Laplacian(img,cv2.CV_64F)\nsobelx = cv2.Sobel(img,cv2.CV_64F,1,0,ksize=5)\n\nplt.subplot(2,2,1),plt.imshow(img,cmap = 'gray')\nplt.title('Original'), plt.xticks([]), plt.yticks([])\nplt.subplot(2,2,2),plt.imshow(laplacian,cmap = 'gray')\nplt.title('Laplacian'), plt.xticks([]), plt.yticks([])\nplt.subplot(2,2,3),plt.imshow(sobelx,cmap = 'gray')\nplt.title('Sobel X'), plt.xticks([]), plt.yticks([])\n\nplt.show()\n```\nThe result is pretty good, not perfect but good. What I want to achieve is the one shown here. I am using this code. Source image. One of my questions is: how to save the Sobel X without that grey effect applied? As the original, but processed. Also, is there a better way to do it? EDIT Using the following code for the source image works pretty well. 
\n```\nimport cv2\nimport numpy as np\n\nimg = cv2.imread(\"image.png\")\nimg=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n\nimg = cv2.bitwise_not(img)\nth2 = cv2.adaptiveThreshold(img,255, cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY,15,-2)\ncv2.imshow(\"th2\", th2)\ncv2.imwrite(\"th2.jpg\", th2)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\nhorizontal = th2\nvertical = th2\nrows,cols = horizontal.shape\n\n#inverse the image, so that lines are black for masking\nhorizontal_inv = cv2.bitwise_not(horizontal)\n#perform bitwise_and to mask the lines with provided mask\nmasked_img = cv2.bitwise_and(img, img, mask=horizontal_inv)\n#reverse the image back to normal\nmasked_img_inv = cv2.bitwise_not(masked_img)\ncv2.imshow(\"masked img\", masked_img_inv)\ncv2.imwrite(\"result2.jpg\", masked_img_inv)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\nhorizontalsize = int(cols \/ 30)\nhorizontalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (horizontalsize,1))\nhorizontal = cv2.erode(horizontal, horizontalStructure, (-1, -1))\nhorizontal = cv2.dilate(horizontal, horizontalStructure, (-1, -1))\ncv2.imshow(\"horizontal\", horizontal)\ncv2.imwrite(\"horizontal.jpg\", horizontal)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\nverticalsize = int(rows \/ 30)\nverticalStructure = cv2.getStructuringElement(cv2.MORPH_RECT, (1, verticalsize))\nvertical = cv2.erode(vertical, verticalStructure, (-1, -1))\nvertical = cv2.dilate(vertical, verticalStructure, (-1, -1))\ncv2.imshow(\"vertical\", vertical)\ncv2.imwrite(\"vertical.jpg\", vertical)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\nvertical = cv2.bitwise_not(vertical)\ncv2.imshow(\"vertical_bitwise_not\", vertical)\ncv2.imwrite(\"vertical_bitwise_not.jpg\", vertical)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n#step1\nedges = cv2.adaptiveThreshold(vertical,255, cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY,3,-2)\ncv2.imshow(\"edges\", edges)\ncv2.imwrite(\"edges.jpg\", edges)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n#step2\nkernel = np.ones((2, 2), dtype = \"uint8\")\ndilated = cv2.dilate(edges, kernel)\ncv2.imshow(\"dilated\", dilated)\ncv2.imwrite(\"dilated.jpg\", dilated)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n# step3\nsmooth = vertical.copy()\n\n#step 4\nsmooth = cv2.blur(smooth, (4,4))\ncv2.imshow(\"smooth\", smooth)\ncv2.imwrite(\"smooth.jpg\", smooth)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n#step 5\n(rows, cols) = np.where(img == 0)\nvertical[rows, cols] = smooth[rows, cols]\n\ncv2.imshow(\"vertical_final\", vertical)\ncv2.imwrite(\"vertical_final.jpg\", vertical)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n```\nBut what if I have this image? I tried to execute the code above and the result is really poor... Other images which I am working on are these...","response":"Obtain binary image. Load the image, convert to grayscale, then Otsu's threshold to obtain a binary black\/white image. Detect and remove horizontal lines. To detect horizontal lines, we create a special horizontal kernel and morph open to detect horizontal contours. From here we find contours on the mask and \"fill in\" the detected horizontal contours with white to effectively remove the lines. Repair image. At this point the image may have gaps if the horizontal lines intersected through characters. 
To repair the text, we create a vertical kernel and morph close to reverse the damage. After converting to grayscale, we apply Otsu's threshold to obtain a binary image \n```\nimage = cv2.imread('1.png')\ngray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)\nthresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]\n```\nNext we create a special horizontal kernel to detect horizontal lines. We draw these lines onto a mask and then find contours on the mask. To remove the lines, we fill in the contours with white. (Images: detected lines, mask, filled-in contours.) \n```\n# Remove horizontal\nhorizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,1))\ndetected_lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)\ncnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\ncnts = cnts[0] if len(cnts) == 2 else cnts[1]\nfor c in cnts:\n cv2.drawContours(image, [c], -1, (255,255,255), 2)\n```\nThe image currently has gaps. To fix this, we construct a vertical kernel to repair the image \n```\n# Repair image\nrepair_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,6))\nresult = 255 - cv2.morphologyEx(255 - image, cv2.MORPH_CLOSE, repair_kernel, iterations=1)\n```\nNote depending on the image, the size of the kernel will change. You can think of the kernel as (horizontal, vertical). For instance, to detect longer lines, we could use a (50,1) kernel instead. If we wanted thicker lines, we could increase the 2nd parameter to say (50,2). Here are the results with the other images (detected lines; original -> removed). Full code \n```\nimport cv2\n\nimage = cv2.imread('1.png')\ngray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)\nthresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]\n\n# Remove horizontal\nhorizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25,1))\ndetected_lines = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)\ncnts = cv2.findContours(detected_lines, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\ncnts = cnts[0] if len(cnts) == 2 else cnts[1]\nfor c in cnts:\n cv2.drawContours(image, [c], -1, (255,255,255), 2)\n\n# Repair image\nrepair_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,6))\nresult = 255 - cv2.morphologyEx(255 - image, cv2.MORPH_CLOSE, repair_kernel, iterations=1)\n\ncv2.imshow('thresh', thresh)\ncv2.imshow('detected_lines', detected_lines)\ncv2.imshow('image', image)\ncv2.imshow('result', result)\ncv2.waitKey()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/46274961\/removing-horizontal-lines-in-image-opencv-python-matplotlib","best_answers_votes":54,"tags":["python","opencv","matplotlib","image-processing","computer-vision"],"question_length":3626,"response_length":2998,"tag_count":5} {"question":"Annotate Subplots in a Figure with A, B, C When submitting papers to scientific journals one quite frequently needs to enumerate the different subplots of a figure with A, B, ... . This sounds like a very common problem and I was trying to find an elegant way to do that automatically with matplotlib, but I was surprised to find nothing on it. But maybe I am not using the right search terms. Ideally, I am searching for a way to annotate such that the letters stay in place relative to the subplot if the figure is resized or the subplot is moved via fig.subplots_adjust, fig.tight_layout, or similar. 
Any help or solution will be appreciated.","response":"If you want the annotation relative to the subplot then plotting it using ax.text seems the most convenient way to me. Consider something like: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport string\n\nfig, axs = plt.subplots(2,2,figsize=(8,8))\naxs = axs.flat\n\nfor n, ax in enumerate(axs):\n \n ax.imshow(np.random.randn(10,10), interpolation='none') \n ax.text(-0.1, 1.1, string.ascii_uppercase[n], transform=ax.transAxes, \n size=20, weight='bold')\n```\nedit: With the new plt.subplot_mosaic, the example above can be written as follows, which is perhaps slightly more elegant. And consider adding constrained_layout=True. \n```py\nfig, axs = plt.subplot_mosaic(\"AB;CD\", figsize=(10,10))\n\nfor n, (key, ax) in enumerate(axs.items()):\n\n ax.imshow(np.random.randn(10,10), interpolation='none') \n ax.text(-0.1, 1.1, key, transform=ax.transAxes, \n size=20, weight='bold')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25543978\/annotate-subplots-in-a-figure-with-a-b-c","best_answers_votes":52,"tags":["python","matplotlib","subplot","plot-annotations"],"question_length":642,"response_length":904,"tag_count":4} {"question":"Embedding a matplotlib figure inside a WxPython panel How do I embed a matplotlib figure object inside a WxPython panel? I googled around and saw complicated examples involving interactive graphics and other extra stuff. Can anybody help with a minimal example?","response":"This is a minimal example for a Panel with a matplotlib canvas: \n```\nfrom numpy import arange, sin, pi\nimport matplotlib\nmatplotlib.use('WXAgg')\n\nfrom matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas\nfrom matplotlib.backends.backend_wx import NavigationToolbar2Wx\nfrom matplotlib.figure import Figure\n\nimport wx\n\nclass CanvasPanel(wx.Panel):\n def __init__(self, parent):\n wx.Panel.__init__(self, parent)\n self.figure = Figure()\n self.axes = self.figure.add_subplot(111)\n self.canvas = FigureCanvas(self, -1, self.figure)\n self.sizer = wx.BoxSizer(wx.VERTICAL)\n self.sizer.Add(self.canvas, 1, wx.LEFT | wx.TOP | wx.GROW)\n self.SetSizer(self.sizer)\n self.Fit()\n\n def draw(self):\n t = arange(0.0, 3.0, 0.01)\n s = sin(2 * pi * t)\n self.axes.plot(t, s)\n\n\nif __name__ == \"__main__\":\n app = wx.PySimpleApp()\n fr = wx.Frame(None, title='test')\n panel = CanvasPanel(fr)\n panel.draw()\n fr.Show()\n app.MainLoop()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10737459\/embedding-a-matplotlib-figure-inside-a-wxpython-panel","best_answers_votes":47,"tags":["python","wxpython","matplotlib"],"question_length":261,"response_length":1032,"tag_count":3} {"question":"How to display LaTeX f-strings in matplotlib [duplicate] In Python 3.6, there is the new f-string to include variables in strings which is great, but how do you correctly apply these strings to get super or subscripts printed for matplotlib? 
(to actually see the result with the subscript, you need to draw the variable foo on a matplotlib plot) In other words, how do I get this behaviour: \n```\nvar = 123\nfoo = r'text$_{%s}$' % var\n# renders as: text₁₂₃ (the whole 123 subscripted)\n```\nusing the new f-string syntax? So far, I have tried using a raw-string literal combined with an f-string, but this only seems to apply the subscript to the first character of the variable: \n```\nvar = 123\nfoo = fr'text$_{var}$'\n# renders as: text₁23 (only the first character subscripted)\n```\nThis is because the { plays an ambiguous role: it could delimit what the LaTeX renderer treats as subscript, or mark the place where the f-string inserts the variable.","response":"You need to escape the curly brackets by doubling them up, and then add in one more to use in the LaTeX formula. This gives: \n```\nfoo = f'text$_{{{var}}}$'\n```\nExample: \n```\nplt.figure()\nplt.plot([1,2,3], [3,4,5])\nvar = 123\nplt.text(1, 4,f'text$_{{{var}}}$')\n```\nOutput: Incidentally, in this example, you don't actually need to use a raw-string literal.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/60150031\/how-to-display-latex-f-strings-in-matplotlib","best_answers_votes":46,"tags":["python","python-3.x","string","matplotlib","f-string"],"question_length":1109,"response_length":354,"tag_count":5} {"question":"Matplotlib boxplot x axis It's easier to ask this with a figure. At the moment I obtain the following boxplot graph using matplotlib: Is there a way to obtain a figure like that, but with each box in a position coherent with the corresponding x-axis number (like in a normal scatter plot, but with boxes instead of points)? At the moment the numbers on the x-axis are added by means of the labels= argument.","response":"You need to specify the positions argument to the boxplot constructor. \n```\nfrom matplotlib import pyplot as plt\n\nplt.boxplot([[1,4],[2,5],[3,6]], positions=[2,4,5.5])\n```\nBy default it uses the values [1, 2, ..., n] but you can specify a different x position for each bar and the xticks will be updated automatically.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35871627\/matplotlib-boxplot-x-axis","best_answers_votes":44,"tags":["python","matplotlib","boxplot"],"question_length":407,"response_length":318,"tag_count":3} {"question":"Matplotlib: Add colorbar to non-mappable object I have a series of lines representing the change of a variable; each with a unique color. For that reason I want to add a colorbar next to the plot. The desired output is shown below. The problem is that plot is a non-mappable object, i.e. the colorbar has to be added manually. I consider my current solution (below) sub-optimal as it involves size parameters that I have no interest in controlling. I'd prefer a similar solution as for a mappable object (example below current solution). 
Desired output Current solution \n```\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 5, 100)\nN = 20\ncmap = plt.get_cmap('jet',N)\n\nfig = plt.figure(figsize=(8,6))\nax1 = fig.add_axes([0.10,0.10,0.70,0.85])\n\nfor i,n in enumerate(np.linspace(0,2,N)):\n y = np.sin(x)*x**n\n ax1.plot(x,y,c=cmap(i))\n\nplt.xlabel('x')\nplt.ylabel('y')\n\nax2 = fig.add_axes([0.85,0.10,0.05,0.85])\nnorm = mpl.colors.Normalize(vmin=0,vmax=2)\ncb1 = mpl.colorbar.ColorbarBase(ax2,cmap=cmap,norm=norm,orientation='vertical')\n\nplt.show()\n```\nDesired solution (obviously replacing imshow) \n```\nfig,ax = plt.subplots()\ncax = ax.imshow(..)\ncbar = fig.colorbar(cax,aspect=10)\nplt.show()\n```","response":"You may define your own ScalarMappable and use it just as if it was present in the plot. (Note that I changed the number of colors to 21 to have nice spacings of 0.1) \n```\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 5, 100)\nN = 21\ncmap = plt.get_cmap('jet',N)\n\nfig = plt.figure(figsize=(8,6))\nax1 = fig.add_axes([0.10,0.10,0.70,0.85])\n\nfor i,n in enumerate(np.linspace(0,2,N)):\n y = np.sin(x)*x**n\n ax1.plot(x,y,c=cmap(i))\n\nplt.xlabel('x')\nplt.ylabel('y')\n\nnorm = mpl.colors.Normalize(vmin=0,vmax=2)\nsm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)\nsm.set_array([])\nplt.colorbar(sm, ticks=np.linspace(0,2,N), \n boundaries=np.arange(-0.05,2.1,.1))\n\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43805821\/matplotlib-add-colorbar-to-non-mappable-object","best_answers_votes":43,"tags":["python","matplotlib"],"question_length":1281,"response_length":733,"tag_count":2} {"question":"How to set Dataframe Column value as X-axis labels Say I have data in following format: \n```\nRegion Men Women\nCity1 10 5\nCity2 50 89\n```\nWhen I load it in Dataframe and plot graph, it shows index as X-axis labels instead of Region name. How do I get names on X-axis? So far I tried: \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt \nplt.style.use('ggplot')\nax = df[['Men','Women']].plot(kind='bar', title =\"Population\",figsize=(15,10),legend=True, fontsize=12)\nax.set_xlabel(\"Areas\",fontsize=12)\nax.set_ylabel(\"Population\",fontsize=12)\nplt.show()\n```\nCurrently it shows x ticks as 0,1,2..","response":"plot.bar() method inherits its arguments from plot(), which has rot argument: from the docs: rot : int, default None Rotation for ticks (xticks for vertical, yticks for horizontal plots) it also uses the index as ticks for the x axis by default: use_index : boolean, default True Use index as ticks for x axis \n```\n\n```python\ndf.plot.bar(x='Region', rot=0, title='Population', figsize=(15,10), fontsize=12)\n#Output\n#\n#```\n#Alternatively you can set the index explicitly - it might be useful for multi-level indexes (axes): \n#```\n#df.set_index('Region').plot.bar(rot=0, title='Population', figsize=(15,10), fontsize=12)\n#```\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38683709\/how-to-set-dataframe-column-value-as-x-axis-labels","best_answers_votes":21,"tags":["python","pandas","dataframe","matplotlib","bar-chart"],"question_length":613,"response_length":657,"tag_count":5} {"question":"How to set number of ticks in plt.colorbar? When I plot a matrix with a colorbar, then the colorbar has 10 ticks. 
Since the colorbar has to be pretty small, the ticklabels overlap. Therefore I want to reduce the number of ticks from 10 to 5. I do not want to reduce the font size! Is there an easy way to do this? I do not want to set the ticks manually...","response":"The MaxNLocator ticker might suit your purposes? class matplotlib.ticker.MaxNLocator Select no more than N intervals at nice locations For example: \n```\nfrom matplotlib import ticker\n\n# (generate plot here)\ncb = plt.colorbar()\ntick_locator = ticker.MaxNLocator(nbins=5)\ncb.locator = tick_locator\ncb.update_ticks()\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22012096\/how-to-set-number-of-ticks-in-plt-colorbar","best_answers_votes":51,"tags":["python","matplotlib","colorbar"],"question_length":356,"response_length":328,"tag_count":3} {"question":"Plotting a line over several graphs I don't know what this thing is called, or even how to describe it, so the title may be a little bit misleading. The first attached graph was created with pyplot. I would like to draw a straight line that goes through all graphs instead of the three red dots I currently use. Is it possible in pyplot? Second image is what I am looking for.","response":"You can pull this off by turning clipping off for the relevant lines. There's probably a cleaner way to do this -- you might be able to draw lines on the main frame directly -- but the following worked for me: \n```\nfrom matplotlib import pyplot as plt\nfrom numpy import arange, sin, cos\n\nxx = arange(100)\ncut = (xx > 0) & (xx % 17 == 0)\ny1 = sin(xx)\ny2 = (xx**2) % 2.0+cos(xx+0.5)\n\nfig = plt.figure()\nax1 = fig.add_subplot(211)\nax1.plot(xx, y1, c=\"blue\",zorder=1)\nax1.scatter(xx[cut], y1[cut], c=\"red\",zorder=2)\nax2 = fig.add_subplot(212)\nax2.plot(xx, y2, c=\"green\",zorder=1)\nax2.scatter(xx[cut], y2[cut], c=\"red\",zorder=2)\n\nfor x in xx[cut]:\n    ax1.axvline(x=x,ymin=-1.2,ymax=1,c=\"red\",linewidth=2,zorder=0, clip_on=False)\n    ax2.axvline(x=x,ymin=0,ymax=1.2,c=\"red\",linewidth=2, zorder=0,clip_on=False)\n\nplt.draw()\nfig.savefig('pic.png')\n```\nWith a bit more work you could modify the line drawing to handle the general case of multiple subplot windows, but I'm profoundly lazy. :^)","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6146290\/plotting-a-line-over-several-graphs","best_answers_votes":41,"tags":["python","plot","numerical","matplotlib"],"question_length":374,"response_length":984,"tag_count":4} {"question":"Python : Matplotlib annotate line break (with and without latex) I have a very basic question : how to do a line break with matplotlib in python with an \"annotate\" command. I tried \"\\\" and \"\\n\" but it does not work. And how to do this for a \"Latex\" annotation and for a normal text annotation ? Thank you very much.","response":"What exactly did you try? Were you, by chance, using a raw string (e.g. r\"whatever\")? '\\n' works perfectly, but if you're using a raw string to avoid latex sequences being interpreted as an escape, it will be interpreted by python as '\' and 'n' instead of a newline. 
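A quick way to convince yourself of this (just an illustrative check, independent of matplotlib): \n```\nprint(len('a\\nb'))   # 3 -- the escape collapsed into one real newline character\nprint(len(r'a\\nb'))  # 4 -- the raw string keeps backslash and 'n' as two characters\n```\n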
As an example: \n```\nimport matplotlib.pyplot as plt\n\nplt.annotate('Testing\\nThis\\nOut', xy=(0.5, 0.5))\n\nplt.show()\n```\nOn the other hand, if we use a raw string: \n```\nimport matplotlib.pyplot as plt\n\nplt.annotate(r'Testing\\nThis\\nOut', xy=(0.5, 0.5))\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10284847\/python-matplotlib-annotate-line-break-with-and-without-latex","best_answers_votes":23,"tags":["python","latex","matplotlib"],"question_length":315,"response_length":534,"tag_count":3} {"question":"How can I place a table on a plot in Matplotlib? I'm not having any success in getting the matplotlib table commands to work. Here's an example of what I'd like to do: Can anyone help with the table construction code? \n```\nimport pylab as plt\n\nplt.figure()\nax=plt.gca()\ny=[1,2,3,4,5,4,3,2,1,1,1,1,1,1,1,1]\nplt.plot([10,10,14,14,10],[2,4,4,2,2],'r')\ncol_labels=['col1','col2','col3']\nrow_labels=['row1','row2','row3']\ntable_vals=[11,12,13,21,22,23,31,32,33]\n# the rectangle is where I want to place the table\nplt.text(11,4.1,'Table Title',size=8)\nplt.plot(y)\nplt.show()\n```","response":"AFAIK, you can't arbitrarily place a table on the matplotlib plot using only native matplotlib features. What you can do is take advantage of the possibility of latex text rendering. However, in order to do this you should have working latex environment in your system. If you have one, you should be able to produce graphs such as below: \n```\nimport pylab as plt\nimport matplotlib as mpl\n\nmpl.rc('text', usetex=True)\nplt.figure()\nax=plt.gca()\ny=[1,2,3,4,5,4,3,2,1,1,1,1,1,1,1,1]\n#plt.plot([10,10,14,14,10],[2,4,4,2,2],'r')\ncol_labels=['col1','col2','col3']\nrow_labels=['row1','row2','row3']\ntable_vals=[11,12,13,21,22,23,31,32,33]\ntable = r'''\\begin{tabular}{ c | c | c | c } & col1 & col2 & col3 \\\\\\hline row1 & 11 & 12 & 13 \\\\\\hline row2 & 21 & 22 & 23 \\\\\\hline row3 & 31 & 32 & 33 \\end{tabular}'''\nplt.text(9,3.4,table,size=12)\nplt.plot(y)\nplt.show()\n```\nThe result is: Please take in mind that this is quick'n'dirty example; you should be able to place the table correctly by playing with text coordinates. Please also refer to the docs if you need to change fonts etc. UPDATE: more on pyplot.table According to the documentation, plt.table adds a table to current axes. From sources it's obvious, that table location on the graph is determined in relation to axes. Y coordinate can be controlled with keywords top (above graph), upper (in the upper half), center (in the center), lower (in the lower half) and bottom (below graph). X coordinate is controlled with keywords left and right. Any combination of the two works, e.g. any of top left, center right and bottom is OK to use. 
So the closest graph to what you want could be made with: \n```\nimport matplotlib.pylab as plt\n\nplt.figure()\nax=plt.gca()\ny=[1,2,3,4,5,4,3,2,1,1,1,1,1,1,1,1]\n#plt.plot([10,10,14,14,10],[2,4,4,2,2],'r')\ncol_labels=['col1','col2','col3']\nrow_labels=['row1','row2','row3']\ntable_vals=[[11,12,13],[21,22,23],[31,32,33]]\n# the rectangle is where I want to place the table\nthe_table = plt.table(cellText=table_vals,\n                  colWidths = [0.1]*3,\n                  rowLabels=row_labels,\n                  colLabels=col_labels,\n                  loc='center right')\nplt.text(12,3.4,'Table Title',size=8)\n\nplt.plot(y)\nplt.show()\n```\nAnd this gives you the resulting figure. Hope this helps!","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8524401\/how-can-i-place-a-table-on-a-plot-in-matplotlib","best_answers_votes":52,"tags":["python","matplotlib"],"question_length":572,"response_length":2257,"tag_count":2} {"question":"How to plot precision and recall of multiclass classifier I'm using scikit learn, and I want to plot the precision and recall curves. The classifier I'm using is RandomForestClassifier. All the resources in the scikit-learn documentation use binary classification. Also, can I plot a ROC curve for multiclass? Also, I only found a multilabel example for SVM, and it has a decision_function, which RandomForest doesn't have","response":"From scikit-learn documentation: Precision-Recall: Precision-recall curves are typically used in binary classification to study the output of a classifier. In order to extend the precision-recall curve and average precision to multi-class or multi-label classification, it is necessary to binarize the output. One curve can be drawn per label, but one can also draw a precision-recall curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging). Receiver Operating Characteristic (ROC): ROC curves are typically used in binary classification to study the output of a classifier. In order to extend ROC curve and ROC area to multi-class or multi-label classification, it is necessary to binarize the output. One ROC curve can be drawn per label, but one can also draw a ROC curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging). Therefore, you should binarize the output and consider precision-recall and roc curves for each class. Moreover, you are going to use predict_proba to get class probabilities. I divide the code into three parts: (1) general settings, learning and prediction, (2) precision-recall curve, (3) ROC curve. 1. general settings, learning and prediction \n```\nfrom sklearn.datasets import fetch_openml\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.metrics import precision_recall_curve, roc_curve\nfrom sklearn.preprocessing import label_binarize\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n#%matplotlib inline\n\nmnist = fetch_openml(\"mnist_784\")\ny = mnist.target\ny = y.astype(np.uint8)\nn_classes = len(set(y))\n\nY = label_binarize(mnist.target, classes=[*range(n_classes)])\n\nX_train, X_test, y_train, y_test = train_test_split(mnist.data,\n                                                    Y,\n                                                    random_state = 42)\n\nclf = OneVsRestClassifier(RandomForestClassifier(n_estimators=50,\n                             max_depth=3,\n                             random_state=0))\nclf.fit(X_train, y_train)\n\ny_score = clf.predict_proba(X_test)\n```\n2. 
precision-recall curve \n```\n# precision recall curve\nprecision = dict()\nrecall = dict()\nfor i in range(n_classes):\n    precision[i], recall[i], _ = precision_recall_curve(y_test[:, i],\n                                                        y_score[:, i])\n    plt.plot(recall[i], precision[i], lw=2, label='class {}'.format(i))\n    \nplt.xlabel(\"recall\")\nplt.ylabel(\"precision\")\nplt.legend(loc=\"best\")\nplt.title(\"precision vs. recall curve\")\nplt.show()\n```\n3. ROC curve \n```\n# roc curve\nfpr = dict()\ntpr = dict()\n\nfor i in range(n_classes):\n    fpr[i], tpr[i], _ = roc_curve(y_test[:, i],\n                                  y_score[:, i])\n    plt.plot(fpr[i], tpr[i], lw=2, label='class {}'.format(i))\n\nplt.xlabel(\"false positive rate\")\nplt.ylabel(\"true positive rate\")\nplt.legend(loc=\"best\")\nplt.title(\"ROC curve\")\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/56090541\/how-to-plot-precision-and-recall-of-multiclass-classifier","best_answers_votes":59,"tags":["python","matplotlib","scikit-learn","roc","precision-recall"],"question_length":420,"response_length":3030,"tag_count":5} {"question":"Show the values in the grid using matplotlib I'm trying to generate heatmaps for some data and my code is shown below: \n```\ndata = [['basis', 2007, 2008],\n [1, 2.2, 3.4],\n [2, 0, -2.2],\n [3, -4.1, -2.5],\n [4, -5.8, 1.2],\n [5, -5.4, -3.6],\n [6, 1.4, -5.9]]\n\nx_header = data[0][1:]\ny_header = [i for i in range(1, 13)]\ndata=data[1:]\nfor i in range(len(data)):\n    data[i] = data[i][1:]\narr = np.array(data)\nfig, ax = plt.subplots()\n#heatmap = plt.pcolor(arr, cmap = 'RdBu')\nnorm = MidpointNormalize(midpoint=0)\nim = ax.imshow(data, norm=norm, cmap=plt.cm.seismic, interpolation='none')\n\nax.set_xticks(np.arange(arr.shape[1]), minor=False)\nax.set_yticks(np.arange(arr.shape[0]), minor=False)\nax.xaxis.tick_top()\nax.set_xticklabels(x_header, rotation=90)\nax.set_yticklabels(y_header)\n\nfig.colorbar(im)\nplt.show()\n```\nIt generates the image. I also want to show values inside the grid. Is there any way to do that?","response":"Sure, just do something like: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.random((4, 4))\n\nfig, ax = plt.subplots()\n# Using matshow here just because it sets the ticks up nicely. imshow is faster.\nax.matshow(data, cmap='seismic')\n\nfor (i, j), z in np.ndenumerate(data):\n    ax.text(j, i, '{:0.1f}'.format(z), ha='center', va='center')\n\nplt.show()\n```\nHowever, the labels are hard to see, so you might want a box around them: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.random((4, 4))\n\nfig, ax = plt.subplots()\n# Using matshow here just because it sets the ticks up nicely. imshow is faster.\nax.matshow(data, cmap='seismic')\n\nfor (i, j), z in np.ndenumerate(data):\n    ax.text(j, i, '{:0.1f}'.format(z), ha='center', va='center',\n            bbox=dict(boxstyle='round', facecolor='white', edgecolor='0.3'))\n\nplt.show()\n```\nAlso, in many cases, ax.annotate is more useful than ax.text. It's much more flexible in how you can position text, but it's also more complex. 
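For instance, the labelling loop above could be written with annotate, placing each value a few points above its cell (a minimal sketch; the 5-point offset is an arbitrary choice): \n```\nfor (i, j), z in np.ndenumerate(data):\n    ax.annotate('{:0.1f}'.format(z), xy=(j, i), xytext=(0, 5),\n                textcoords='offset points', ha='center', va='bottom')\n```\n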
Have a look at the examples here: http:\/\/matplotlib.org\/users\/annotations_guide.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20998083\/show-the-values-in-the-grid-using-matplotlib","best_answers_votes":56,"tags":["python","matplotlib"],"question_length":954,"response_length":1109,"tag_count":2} {"question":"Matplotlib colorbar ticks on left\/opposite side One could generate a vertical colorbar like so (simplified): \n```\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\nplt.figure()\nc_ax=plt.subplot(111)\ncb = mpl.colorbar.ColorbarBase(c_ax,orientation='vertical')\nplt.savefig('my_colorbar.png')\n```\nResulting in something like this (rotated for space reasons): Is it possible to get the tick labels on the opposite side?","response":"You can switch the position of the ticks using c_ax.yaxis.set_ticks_position(). So for your example: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\nplt.viridis()\n\nfig=plt.figure()\nc_ax=plt.subplot(199)\n\ncb = mpl.colorbar.ColorbarBase(c_ax,orientation='vertical')\n\nc_ax.yaxis.set_ticks_position('left')\n\nplt.savefig('my_colorbar.png')\n```\nNote you can also move the colorbar label in a similar way: \n```\nc_ax.yaxis.set_label_position('left')\n```\nAnd finally, in case you have a horizontal colorbar, and want to move the ticks and labels to the top, you can use the equivalent function on the xaxis: \n```\nc_ax.xaxis.set_label_position('top')\nc_ax.xaxis.set_ticks_position('top')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36939063\/matplotlib-colorbar-ticks-on-left-opposite-side","best_answers_votes":49,"tags":["python","matplotlib","colorbar"],"question_length":422,"response_length":695,"tag_count":3} {"question":"How to hide lines with .plot [duplicate] I do not want to connect points with lines. I know that for that I can use scatter. But, scatter does not work after plot. So, basically I have two lists of points. The points from the first list I do want to connect with lines while the points from the second list should not be connected with lines. How can one achieve it in matplotlib? This is what I have tried: \n```\nplt.figure()\nplt.plot(xys[:,0], xys[:,1], marker='o', color='g')\n\n# WHAT SHOULD I DO HERE?\n#plt.plot(xys_bad[:,0], xys_bad[:,1], color='r', linewidth=0.0, markersize = 10.0)\nplt.scatter(xys_bad[:,0], xys_bad[:,1], color='r')\n\nplt.show()\n```","response":"As described in the matplotlib documentation you should use the 'None' linestyle: \n```\nplt.plot(xys_bad[:,0], xys_bad[:,1], color='r', linestyle='None', markersize = 10.0)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44736034\/how-to-hide-lines-with-plot","best_answers_votes":54,"tags":["python","matplotlib"],"question_length":767,"response_length":170,"tag_count":2} {"question":"Converting image from RGB to HSV color space I couldn't find such a function (i.e. RGB_to_HSV()) in Scipy or Matplotlib's documentations, and Google doesn't show pointers, except an ActiveState recipe which demonstrates an rgb2hsv function, though not usable on a Numpy array as is. Does someone know of a shortcut? Edit: Sorry, just found matplotlib.colors.rgb_to_hsv() which is exactly what I was looking for. 
Should I delete this question?","response":"Matplotlib provides RGB to HSV conversion function: matplotlib.colors.rgb_to_hsv(): matplotlib.colors.rgb_to_hsv(arr) convert rgb values in a numpy array to hsv values input and output arrays should have shape (M,N,3)","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15278323\/converting-image-from-rgb-to-hsv-color-space","best_answers_votes":53,"tags":["numpy","matplotlib","scipy","python-imaging-library"],"question_length":432,"response_length":217,"tag_count":4} {"question":"How to have actual values in matplotlib Pie Chart displayed I have a pie chart drawing the values extracted from a CSV file. The proportion of the values are currently displayed with the percentage displayed \"autopct='%1.1f%%'\". Is there a way to display the actual values which are represented in the dataset for each slice. \n```\n#Pie for Life Expectancy in Boroughs\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n# show plots inline\n%matplotlib inline\n\n# use ggplot style\nmatplotlib.style.use('ggplot')\n\n#read data\nlifeEx = pd.read_csv('LEpie.csv')\n\n#Select columns\ndf = pd.DataFrame()\ndf['LB'] = lifeEx[['Regions']]\ndf['LifeEx'] = lifeEx[['MinLF']]\ncolorz = ['#B5DF00','#AD1FFF', '#BF1B00','#5FB1FF','#FFC93F']\nexploda = (0, 0, 0, 0.1, 0)\n\n\n#plotting\nplt.pie(df['LifeEx'], labels=df['LB'], colors=colorz, autopct='%1.1f%%', explode = exploda, shadow = True,startangle=90)\n\n#labeling\nplt.title('Min Life expectancy across London Regions', fontsize=12)\n```","response":"Using the autopct keyword As we know that the percentage shown times the sum of all actual values must be the actual value, we can define this as a function and supply this function to plt.pie using the autopct keyword. \n```\nimport matplotlib.pyplot as plt\nimport numpy\n\nlabels = 'Frogs', 'Hogs', 'Dogs'\nsizes = numpy.array([5860, 677, 3200])\ncolors = ['yellowgreen', 'gold', 'lightskyblue']\n\ndef absolute_value(val):\n a = numpy.round(val\/100.*sizes.sum(), 0)\n return a\n\nplt.pie(sizes, labels=labels, colors=colors,\n autopct=absolute_value, shadow=True)\n\nplt.axis('equal')\nplt.show()\n```\nCare must be taken since the calculation involves some error, so the supplied value is only accurate to some decimal places. A little bit more advanced may be the following function, that tries to get the original value from the input array back by comparing the difference between the calculated value and the input array. This method does not have the problem of inaccuracy but relies on input values which are sufficiently distinct from one another. \n```\ndef absolute_value2(val):\n a = sizes[ numpy.abs(sizes - val\/100.*sizes.sum()).argmin() ]\n return a\n```\nChanging text after pie creation The other option is to first let the pie being drawn with the percentage values and replace them afterwards. To this end, one would store the autopct labels returned by plt.pie() and loop over them to replace the text with the values from the original array. Attention, plt.pie() only returns three arguments, the last one being the labels of interest, when autopct keyword is provided so we set it to an empty string here. 
\n```\nlabels = 'Frogs', 'Hogs', 'Dogs'\nsizes = numpy.array([5860, 677, 3200])\ncolors = ['yellowgreen', 'gold', 'lightskyblue']\n\np, tx, autotexts = plt.pie(sizes, labels=labels, colors=colors,\n autopct=\"\", shadow=True)\n\nfor i, a in enumerate(autotexts):\n a.set_text(\"{}\".format(sizes[i]))\n\nplt.axis('equal')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41088236\/how-to-have-actual-values-in-matplotlib-pie-chart-displayed","best_answers_votes":39,"tags":["python","matplotlib","graph","charts","pie-chart"],"question_length":981,"response_length":1957,"tag_count":5} {"question":"tick frequency when using seaborn\/matplotlib boxplot I am plotting with seaborn a series of boxplots with \n```py\nsns.boxplot(full_array)\n```\nwhere full_array contains 200 arrays. Therefore, I have 200 boxplots and ticks on the x-axis from 0 to 200. The xticks are too close to each other and I would like to show only some of them, for instance, a labeled xtick every 20, or so. I tried several solutions as those mentioned here but they did not work. Every time I sample the xticks, I get wrong labels for the ticks, as they get numbered from 0 to N, with unit spacing. For instance, with the line \n```py\nax.xaxis.set_major_locator(ticker.MultipleLocator(20))\n```\nI get a labelled xtick every 20 but the labels are 1, 2, 3, 4 instead of 20, 40, 60, 80...","response":"The seaborn boxplot uses a FixedLocator and a FixedFormatter, i.e. \n```\nprint ax.xaxis.get_major_locator()\nprint ax.xaxis.get_major_formatter()\n```\nprints \n```\n\n\n```\nIt's therefore not sufficient to set the locator to a MultipleLocator since the ticks' values would still be set by the fixed formatter. Instead you would want to set a ScalarFormatter, which sets the ticklabels to correspond to the numbers at their position. \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nimport seaborn.apionly as sns\nimport numpy as np\n\nax = sns.boxplot(data = np.random.rand(20,30))\n\nax.xaxis.set_major_locator(ticker.MultipleLocator(5))\nax.xaxis.set_major_formatter(ticker.ScalarFormatter())\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44521648\/tick-frequency-when-using-seaborn-matplotlib-boxplot","best_answers_votes":41,"tags":["python","matplotlib","seaborn","boxplot","xticks"],"question_length":755,"response_length":844,"tag_count":5} {"question":"Add custom border to certain cells in a matplotlib \/ seaborn plot Right now I`m using Seaborn's clustermap to generate some clustered heatmaps - so far so good. For a certain use case, I need to draw colored borders around specific cells. Is there a way to do that? Or with pcolormesh in matplotlib, or any other way?","response":"You can do this by overplotting a Rectangle patch on the cell that you would want to highlight. 
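In essence this is a single add_patch call on the heatmap axes (a minimal sketch; x and y are placeholder column and row indices of the cell to frame): \n```\nfrom matplotlib.patches import Rectangle\n\nax.add_patch(Rectangle((x, y), 1, 1, fill=False, edgecolor='red', lw=2))\n```\n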
Using the example plot from the seaborn docs \n```\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nsns.set()\nflights = sns.load_dataset(\"flights\")\nflights = flights.pivot(\"month\", \"year\", \"passengers\")\ng = sns.clustermap(flights)\n```\nWe can highlight a cell by doing \n```\nfrom matplotlib.patches import Rectangle\nax = g.ax_heatmap\n\nax.add_patch(Rectangle((3, 4), 1, 1, fill=False, edgecolor='blue', lw=3))\nplt.show()\n```\nThis will produce the plot with a highlighted cell like so: Note that the indexing of the cells is 0 based with the origin at the bottom left.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31290778\/add-custom-border-to-certain-cells-in-a-matplotlib-seaborn-plot","best_answers_votes":41,"tags":["python","matplotlib","seaborn"],"question_length":317,"response_length":663,"tag_count":3} {"question":"Python matplotlib change default color for values exceeding colorbar range When filling a grid with color such as when using contourf in pyplot, I need to find a way to change the color that pyplot uses to fill data that exceed the specified range of the colorbar. I wish to have a static colorbar that does not automatically change its range to fit the max\/min of the data, so having occasional extreme values that exceed its bounds is inevitable, and colors need to be specified for such values. The default color for values exceeding the bounds of the colorbar is white, which can glaringly clash with the surrounding data if the colormap does not have white as its end colors. Example image is shown below - notice the white fill when values exceed the negative range of the colorbar: I believe there is a way to specify which color to use at each bound if they are exceeded by using rcParams, but I have not been able to find information on this anywhere. Any help would be appreciated.","response":"The out-of-bounds colors can be set using the set_over and set_under methods of the colormap; see the documentation. You'll need to specify these values when you create your colormap. I don't see any matplotlibrc setting to set the default for this, though. You might also want to ask on the matplotlib mailing list. Edit: I see what is going on. The white area you describe is not beyond the limits of the color range. It is simply the blank background of the axes. Because you are only plotting certain levels, any levels outside that range will not be plotted at all, leaving those areas blank. To get what you want, do this: \n```\ncs = pyplot.contourf(x,y,z,levels=np.arange(50, 220, 20), cmap=pyplot.cm.jet, extend=\"both\")\ncs.cmap.set_under('k')\ncs.set_clim(50, 210)\ncb = pyplot.colorbar(cs)\n```\nThe \"extend\" argument is the key; it tells contourf to go ahead and plot all contours, but collapse all outside the given range into \"too big\" and \"too small\" categories. The cs.set_clim call is necessary to work around an oddity I discovered in contourf while debugging this; for some reason when you use extend, it manipulates the data limits, so we need to reset them back to what we want them to be. Also, just as a matter of style, you shouldn't be doing things like Colormap.set_under(cmap,color='k'). This is calling the class method and explicitly passing the instance in, which is an odd way to do it. 
Just do cmap.set_under(color=\"k\").","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11386054\/python-matplotlib-change-default-color-for-values-exceeding-colorbar-range","best_answers_votes":40,"tags":["python","matplotlib","contour","color-mapping","colorbar"],"question_length":991,"response_length":1445,"tag_count":5} {"question":"How to remove relative shift in matplotlib axis When I try to do a plot against a range with big enough numbers I get an axis with a relative shift for all the ticks. For example: \n```\nplot([1000, 1001, 1002], [1, 2, 3])\n```\nI get these ticks on the axis of abscissas: \n```\n0.0     0.5     1.0     1.5     2.0\n                                +1e3\n```\nThe question is how to remove +1e3 and get just: \n```\n1000.0     1000.5     1001.0     1001.5     1002.0\n```","response":"```\nplot([1000, 1001, 1002], [1, 2, 3])\ngca().get_xaxis().get_major_formatter().set_useOffset(False)\ndraw()\n```\nThis grabs the current axes, gets the x-axis axis object and then the major formatter object and sets useOffset to false (doc). In newer versions (1.4+) of matplotlib the default behavior can be changed via the axes.formatter.useoffset rcparam.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11855363\/how-to-remove-relative-shift-in-matplotlib-axis","best_answers_votes":31,"tags":["python","matplotlib","axis"],"question_length":440,"response_length":356,"tag_count":3} {"question":"Control the number of rows within a legend I am currently trying to plot a large amount of data on a single plot. I have structured my representation using repeated colors and symbols. However, when plotting the final results, the legend appears slightly off because I cannot control the number of rows within it. Thus, instead of getting 5 repeated green, then 5 repeated red, 5 repeated blue then 2 other, I get 5 - 4 - 4 - 4 (where I would have preferred 5 - 5 - 5 - 2). You can clearly see this in the attached image. Right now I use these options for the legend: \n```\naxp.legend(loc=\"lower right\",ncol=4)\n```","response":"I also had this problem a couple of times and use this workaround by adding dummy items to the legend to fill the last column, if there are more elegant methods available I would also be very interested to hear about them. \n```\nimport numpy as np\nimport matplotlib.pylab as pl\n\npl.figure()\n\npl.plot(np.arange(10), np.random.random([10,5]), color='r', label='red')\npl.plot(np.arange(10), np.random.random([10,5]), color='g', label='green')\npl.plot(np.arange(10), np.random.random([10,5]), color='b', label='blue')\npl.plot(np.arange(10), np.random.random([10,2]), color='k', label='black')\n\n# Add empty dummy legend items\npl.plot(np.zeros(1), np.zeros([1,3]), color='w', alpha=0, label=' ')\n\npl.legend(ncol=4)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34212241\/control-the-number-of-rows-within-a-legend","best_answers_votes":31,"tags":["matplotlib","legend"],"question_length":604,"response_length":711,"tag_count":2} {"question":"change strength of antialiasing in matplotlib Is it possible to increase the antialiasing in matplotlib? I can still see some aliasing in my data; I tried several backends and it is still there. The antialiasing flag of the lines is set. Here you can see what I mean: it's a sample taken from a screenshot. It's probably not the best example, but I guess one can see the stairs in the line. It was taken with the wxagg backend. 
I'm using matplotlib version 1.01 with Windows 7. Update: I don't have the code which produced the previous picture anymore, but I still have the problem. Below is a simple code example which shows the aliasing. \n```\nimport numpy as np\nimport matplotlib\nmatplotlib.use('wxAgg')\nimport matplotlib.pyplot as pl\nprint 'Backend:', pl.get_backend()\n\nx = np.linspace(0,6,100)\ny = np.sin(x)\n\nfor a in range(10):\n    pl.plot( x, a\/10.*x, linewidth=1)\n\npl.show()\n```\nIt prints Backend: WXAgg, and the resulting plot looks like the following. Especially the lower red curve shows clear aliasing.","response":"If you save the picture as a .svg file, you will have infinite precision. You can then edit this .svg with something like InkScape, and get as much precision\/antialiasing as you like.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7767340\/change-strength-of-antialiasing-in-matplotlib","best_answers_votes":6,"tags":["python","matplotlib","antialiasing"],"question_length":1011,"response_length":184,"tag_count":3} {"question":"Plot line graph from histogram data in matplotlib I have a numpy array of ints representing time periods, which I'm currently plotting in a histogram to get a nice distribution graph, using the following code: \n```\nax.hist(data,bins=100,range=(minimum,maximum),facecolor=\"r\")\n```\nHowever I'm trying to modify this graph to represent the exact same data using a line instead of bars, so I can overlay more samples to the same plot and have them be clear (otherwise the bars overlap each other). What I've tried so far is to collate the data array into an array of tuples containing (time, count), and then plot it using \n```\nax.plot(data[:,0],data[:,1],color=\"red\",lw=2)\n```\nHowever that's not giving me anything close, as I can't accurately simulate the bins option of the histogram in my plot. Is there a better way to do this?","response":"I am very late to the party - but maybe this will be useful to someone else. I think what you need to do is set the histtype parameter to 'step', i.e. \n```\nax.hist(data,bins=100,range=(minimum,maximum),facecolor=\"r\", histtype = 'step')\n```\nSee also http:\/\/matplotlib.sourceforge.net\/examples\/pylab_examples\/histogram_demo_extended.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8822370\/plot-line-graph-from-histogram-data-in-matplotlib","best_answers_votes":52,"tags":["python","numpy","matplotlib"],"question_length":828,"response_length":335,"tag_count":3} {"question":"How to automatically annotate maximum value in pyplot I'm trying to figure out how I can automatically annotate the maximum value in a figure window. I know you can do this by manually entering in x,y coordinates to annotate whatever point you want using the .annotate() method, but I want the annotation to be automatic, or to find the maximum point by itself. 
Here's my code so far: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom pandas import Series, DataFrame\n\ndf = pd.read_csv('macrodata.csv') #Read csv file into dataframe\nyears = df['year'] #Get years column\ninfl = df['infl'] #Get inflation rate column\n\nfig10 = plt.figure()\nwin = fig10.add_subplot(1,1,1)\nfig10 = plt.plot(years, infl, lw = 2)\n\nfig10 = plt.xlabel(\"Years\")\nfig10 = plt.ylabel(\"Inflation\")\nfig10 = plt.title(\"Inflation with Annotations\")\n```","response":"If x and y are the arrays to plot, you get the coordinates of the maximum via \n```\nxmax = x[numpy.argmax(y)]\nymax = y.max()\n```\nThis can be incorporated into a function that you may simply call with your data. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(-2,8, num=301)\ny = np.sinc((x-2.21)*3)\n\n\nfig, ax = plt.subplots()\nax.plot(x,y)\n\ndef annot_max(x,y, ax=None):\n xmax = x[np.argmax(y)]\n ymax = y.max()\n text= \"x={:.3f}, y={:.3f}\".format(xmax, ymax)\n if not ax:\n ax=plt.gca()\n bbox_props = dict(boxstyle=\"square,pad=0.3\", fc=\"w\", ec=\"k\", lw=0.72)\n arrowprops=dict(arrowstyle=\"->\",connectionstyle=\"angle,angleA=0,angleB=60\")\n kw = dict(xycoords='data',textcoords=\"axes fraction\",\n arrowprops=arrowprops, bbox=bbox_props, ha=\"right\", va=\"top\")\n ax.annotate(text, xy=(xmax, ymax), xytext=(0.94,0.96), **kw)\n\nannot_max(x,y)\n\n\nax.set_ylim(-0.3,1.5)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43374920\/how-to-automatically-annotate-maximum-value-in-pyplot","best_answers_votes":51,"tags":["python","pandas","numpy","matplotlib"],"question_length":848,"response_length":931,"tag_count":4} {"question":"How to plot multiple pandas columns I have dataframe total_year, which contains three columns (year, action, comedy). How can I plot two columns (action and comedy) on y-axis? My code plots only one: \n```\ntotal_year[-15:].plot(x='year', y='action', figsize=(10,5), grid=True)\n```","response":"Several column names may be provided to the y argument of the pandas plotting function. Those should be specified in a list, as follows. \n```\ndf.plot(x=\"year\", y=[\"action\", \"comedy\"])\n```\nComplete example: \n```\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndf = pd.DataFrame({\"year\": [1914,1915,1916,1919,1920],\n \"action\" : [2.6,3.4,3.25,2.8,1.75],\n \"comedy\" : [2.5,2.9,3.0,3.3,3.4] })\ndf.plot(x=\"year\", y=[\"action\", \"comedy\"])\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47775220\/how-to-plot-multiple-pandas-columns","best_answers_votes":54,"tags":["python","pandas","matplotlib","plot","data-science"],"question_length":279,"response_length":485,"tag_count":5} {"question":"matplotlib interactive mode: determine if figure window is still displayed I am using matplotlib in interactive mode to show the user a plot that will help them enter a range of variables. They have the option of hitting \"?\" to show this plot, and the prompt for variables will then be repeated. How do I know to not re-draw this plot if it's still being displayed? 
Superficially, I have this clunky (pseudo-ish) code: \n```\nanswer = None\ndone_plot = False\nwhile answer == None:\n    answer = get_answer()\n    if answer == '?':\n        if done_plot:\n            have_closed = True\n            ##user's already requested a plot - has s\/he closed it?\n            ## some check here needed:\n            have_closed = ?????\n\n        if have_closed == False:\n            print 'You already have the plot on display, will not re-draw'\n            answer = None\n            continue\n        plt.ion()\n        fig = plt.figure()\n        ### plotting stuff\n        done_plot = True\n        answer = None\n    else:\n        ###have an answer from the user...\n```\nwhat can I use (in terms of plt.gca(), fig etc...) to determine if I need to re-plot? Is there a status somewhere I can check? Many thanks, David","response":"In the same vein as unutbu's answer, you can also check whether a given figure is still opened with \n```\nimport matplotlib.pyplot as plt\n\nif plt.fignum_exists(fig.number
):\n # Figure is still opened\nelse:\n # Figure is closed\n```\nThe figure number of a figure is in fig.number. PS: Note that the \"number\" in figure(num=\u2026) can actually be a string: it is displayed in the window title. However, the figure still has a number attribute which is numeric. The original string num value can however be used with fignum_exists() (since 2015, according to Mark H, in the comments). PPS: That said, subplots(\u2026, num=) properly recovers the existing figure with the given string number. Thus, figures are still known by their string number in some parts of Matplotlib.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7557098\/matplotlib-interactive-mode-determine-if-figure-window-is-still-displayed","best_answers_votes":39,"tags":["python","matplotlib"],"question_length":1211,"response_length":767,"tag_count":2} {"question":"How can I convert numbers to a color scale in matplotlib? I'm making a bar plot and I want the colors of the bars to vary from red to blue according to a color gradient. I have a dimension of the data frame that tells me where on the red-blue scale each bar should be. My current method is to manually convert these values to RGB colors by linearly interpolating between the RGB red and blue colors but I want an automatic way of converting my numeric values to a color scale. I also need to be able to have a colorbar legend to help interpret it.","response":"It's pretty straight forward to create a barchart and set the bar colors according to a value from the dataframe. A colormap and a normalization instance help converting the values to colors, which are understood by the color argument of matplotlib.Axes.bar. The colorbar is then created from a ScalarMappable using the same normalization and colormap as the bars. \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.colors\nimport numpy as np; np.random.seed(0)\nimport pandas as pd\n\nx = np.arange(12)\ny = np.random.rand(len(x))*51\nc = np.random.rand(len(x))*3+1.5\ndf = pd.DataFrame({\"x\":x,\"y\":y,\"c\":c})\n\ncmap = plt.cm.rainbow\nnorm = matplotlib.colors.Normalize(vmin=1.5, vmax=4.5)\n\nfig, ax = plt.subplots()\nax.bar(df.x, df.y, color=cmap(norm(df.c.values)))\nax.set_xticks(df.x)\n\nsm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)\nsm.set_array([]) # only needed for matplotlib < 3.1\nfig.colorbar(sm)\n\nplt.show()\n```\nFor using a custom colormap with bar plots see Barplot colored according a colormap?","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43009724\/how-can-i-convert-numbers-to-a-color-scale-in-matplotlib","best_answers_votes":50,"tags":["python","matplotlib","colors"],"question_length":547,"response_length":1000,"tag_count":3} {"question":"How to plot complex numbers (Argand Diagram) using matplotlib I'd like to create an Argand Diagram from a set of complex numbers using matplotlib. Are there any pre-built functions to help me do this? Can anyone recommend an approach? Image by LeonardoG, CC-SA-3.0","response":"I'm not sure exactly what you're after here...you have a set of complex numbers, and want to map them to the plane by using their real part as the x coordinate and the imaginary part as y? If so you can get the real part of any python imaginary number with number.real and the imaginary part with number.imag. If you're using numpy, it also provides a set of helper functions numpy.real and numpy.imag etc. which work on numpy arrays. 
So for instance if you had an array of complex numbers stored something like this: \n```\n>>> a = n.arange(5) + 1j*n.arange(6,11)\n>>> a\narray([ 0. +6.j,  1. +7.j,  2. +8.j,  3. +9.j,  4.+10.j])\n```\n...you can just do \n```\n>>> fig,ax = subplots()\n>>> ax.scatter(a.real,a.imag)\n```\nThis plots dots on an Argand diagram for each point. edit: For the plotting part, you must of course have imported matplotlib.pyplot via from matplotlib.pyplot import * or (as I did) use the ipython shell in pylab mode.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17445720\/how-to-plot-complex-numbers-argand-diagram-using-matplotlib","best_answers_votes":23,"tags":["python","numpy","matplotlib","plot","complex-numbers"],"question_length":264,"response_length":963,"tag_count":5} {"question":"How can I display text over columns in a bar chart in matplotlib? I have a bar chart and I want to display some text over each column. How can I do that?","response":"I believe this will point you in the right direction: http:\/\/matplotlib.sourceforge.net\/examples\/pylab_examples\/barchart_demo.html. The part that you are most interested in is: \n```\ndef autolabel(rects):\n    for rect in rects:\n        height = rect.get_height()\n        plt.text(rect.get_x()+rect.get_width()\/2., 1.05*height, '%d'%int(height),\n                ha='center', va='bottom')\n```\nThe placement of the text is determined by the height function, or the height of the column, and the number that is put on top of each column is written by: '%d' %int(height). So all you need to do is create an array of strings, called 'name', that you want at the top of the columns and iterate through. Be sure to change the format to be for a string (%s) and not a double. \n```\ndef autolabel(rects):\n# attach some text labels\n    for ii,rect in enumerate(rects):\n        height = rect.get_height()\n        plt.text(rect.get_x()+rect.get_width()\/2., 1.02*height, '%s'% (name[ii]),\n                ha='center', va='bottom')\nautolabel(rects1)\n```\nThat should do it!","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7423445\/how-can-i-display-text-over-columns-in-a-bar-chart-in-matplotlib","best_answers_votes":39,"tags":["python","matplotlib","bar-chart"],"question_length":153,"response_length":1055,"tag_count":3} {"question":"Can matplotlib errorbars have a linestyle set? Is it possible to set the same linestyle on matplotlib errorbars as the data points? In the example below, two lines are plotted. One of them is dashed because of the ls='-.' parameter. However, the errorbars are solid lines. Is it possible to modify the style\/look of the errorbars to match the results line? 
\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.array(range(0,10))\ny = np.array(range(0,10))\nyerr = np.array(range(1,11)) \/ 5.0\nyerr2 = np.array(range(1,11)) \/ 4.0\n\ny2 = np.array(range(0,10)) * 1.2\n\nplt.errorbar(x, y, yerr=yerr, lw=8, errorevery=2, ls='-.')\nplt.errorbar(x, y2, yerr=yerr2, lw=8, errorevery=3)\nplt.show()\n```","response":"It is trivial; changing the linestyle of the errorbars only requires a simple .set_linestyle call: \n```\neb1=plt.errorbar(x, y, yerr=yerr, lw=2, errorevery=2, ls='-.')\neb1[-1][0].set_linestyle('--') #eb1[-1][0] is the LineCollection object of the errorbar lines\neb2=plt.errorbar(x, y2, yerr=yerr2, lw=2, errorevery=3)\neb2[-1][0].set_linestyle('-.')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22995797\/can-matplotlib-errorbars-have-a-linestyle-set","best_answers_votes":38,"tags":["python","matplotlib","plot","errorbar"],"question_length":697,"response_length":354,"tag_count":4} {"question":"canvas.mpl_connect in jupyter notebook I have the following code in test.py: \n```\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot(np.random.rand(10))\n\ndef onclick(event):\n    print('button=%d, x=%d, y=%d, xdata=%f, ydata=%f' %\n          (event.button, event.x, event.y, event.xdata, event.ydata))\n\ncid = fig.canvas.mpl_connect('button_press_event', onclick)\n```\nWhen I run test.py in the command line by \"python test.py\", 'button=%d, x=%d, y=%d, xdata=%f, ydata=%f' gets printed as I click the plot. However, the results are not printed in jupyter notebook. How can I fix it? Thanks in advance!","response":"It will depend on which backend you use in jupyter notebook. If you use the inline backend (i.e. %matplotlib inline), interactive features cannot work, because the plots are just png images. If you use the notebook backend (i.e. %matplotlib notebook) the interactive features do work, but the question would be where to print the result to. So in order to show the text one may add it to the figure as follows \n```\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot(np.random.rand(10))\ntext=ax.text(0,0, \"\", va=\"bottom\", ha=\"left\")\n\ndef onclick(event):\n    tx = 'button=%d, x=%d, y=%d, xdata=%f, ydata=%f' % (event.button, event.x, event.y, event.xdata, event.ydata)\n    text.set_text(tx)\n\ncid = fig.canvas.mpl_connect('button_press_event', onclick)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43923313\/canvas-mpl-connect-in-jupyter-notebook","best_answers_votes":35,"tags":["python","canvas","matplotlib","jupyter-notebook"],"question_length":595,"response_length":824,"tag_count":4} {"question":"matplotlib: limits when using plot and imshow in same axes I've been trying to plot an ellipse into an imshow plot. It works, but plotting the ellipse after plotting the image seems to increase xlim and ylim, resulting in a border, which I'd like to get rid of: Note that there is NO white border directly after calling imshow only. 
My code looks as follows: \n```\nself.dpi = 100\nself.fig = Figure((6.0, 6.0), dpi=self.dpi)\nself.canvas = FigureCanvas(self.fig)\nself.canvas.setMinimumSize(800, 400)\nself.cax = None\nself.axes = self.fig.add_subplot(111)\nself.axes.imshow(channel1, interpolation=\"nearest\")\nself.canvas.draw()\nself.axes.plot(dat[0], dat[1], \"b-\")\n```\nI've tried setting the limits before and after calling \"plot\", with no effect \n```\n# get limits after calling imshow\nxlim, ylim = pylab.xlim(), pylab.ylim()\n...\n# set limits before\/after calling plot\nself.axes.set_xlim(xlim)\nself.axes.set_ylim(ylim)\n```\nHow can I force plot not to increase existing figure limits? Solution (thanks to Joe): \n```\n#for newer matplotlib versions\nself.axes.imshow(channel1, interpolation=\"nearest\")\nself.axes.autoscale(False)\nself.axes.plot(dat[0], dat[1], \"b-\")\n\n#for older matplotlib versions (worked for me using 0.99.1.1)\nself.axes.imshow(channel1, interpolation=\"nearest\")\nself.axes.plot(dat[0], dat[1], \"b-\", scalex=False, scaley=False)\n```","response":"What's happening is that the axis is autoscaling to match the extents of each item you plot. Images are autoscaled much tighter than lines, etc (imshow basically calls ax.axis('image')). Getting the axis limits before and setting them after should have worked. (It's cleaner to just do limits = axes.axis() before and axes.axis(limits) after, though.) However, if you don't want things to autoscale, it's best to just turn autoscaling off after the initial plot. Try axes.autoscale(False) after plotting the image. As an example, compare this: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfig, ax = plt.subplots()\nax.imshow(np.random.random((10,10)))\nax.plot(range(11))\nplt.show()\n```\nWith this: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfig, ax = plt.subplots()\nax.imshow(np.random.random((10,10)))\nax.autoscale(False)\nax.plot(range(11))\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9120749\/matplotlib-limits-when-using-plot-and-imshow-in-same-axes","best_answers_votes":35,"tags":["python","plot","scipy","matplotlib"],"question_length":1339,"response_length":880,"tag_count":4} {"question":"Non-linear axes for imshow in matplotlib I am generating 2D arrays on log-spaced axes (for instance, the x pixel coordinates are generated using logspace(log10(0.95), log10(2.08), n). I want to display the image using a plain old imshow, in its native resolution and scaling (I don't need to stretch it; the data itself is already log scaled), but I want to add ticks, labels, lines that are in the correct place on the log axes. How do I do this? Ideally I could just use commands line axvline(1.5) and the line would be in the correct place (58% from the left), but if the only way is to manually translate between logscale coordinates and image coordinates, that's ok, too. For linear axes, using extents= in the call to imshow does what I want, but I don't see a way to do the same thing with a log axis. Example: \n```\nfrom matplotlib.colors import LogNorm\n\nx = logspace(log10(10), log10(1000), 5)\nimshow(vstack((x,x)), extent=[10, 1000, 0, 100], cmap='gray', norm=LogNorm(), interpolation='nearest')\naxvline(100, color='red')\n```\nThis example does not work, because extent= only applies to linear scales, so when you do axvline at 100, it does not appear in the center. 
I'd like the x axis to show 10, 100, 1000, and axvline(100) to put a line in the center at the 100 point, while the pixels remain equally spaced.","response":"In my view, it is better to use pcolor and regular (non-converted) x and y values. pcolor gives you more flexibility and regular x and y axis are less confusing. \n```\nimport pylab as plt\nimport numpy as np\nfrom matplotlib.colors import LogNorm\nfrom matplotlib.ticker import LogFormatterMathtext\n\nx=np.logspace(1, 3, 6)\ny=np.logspace(0, 2,3)\nX,Y=np.meshgrid(x,y)\nz = np.logspace(np.log10(10), np.log10(1000), 5)\nZ=np.vstack((z,z))\n\nim = plt.pcolor(X,Y,Z, cmap='gray', norm=LogNorm())\nplt.axvline(100, color='red')\n\nplt.xscale('log')\nplt.yscale('log')\n\nplt.colorbar(im, orientation='horizontal',format=LogFormatterMathtext())\nplt.show()\n```\nAs pcolor is slow, a faster solution is to use pcolormesh instead. \n```\nim = plt.pcolormesh(X,Y,Z, cmap='gray', norm=LogNorm())\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11488800\/non-linear-axes-for-imshow-in-matplotlib","best_answers_votes":22,"tags":["image","plot","matplotlib","logarithm"],"question_length":1320,"response_length":770,"tag_count":4} {"question":"What is the name of the default seaborn color palette? From the top of https:\/\/seaborn.pydata.org\/tutorial\/color_palettes.html: But what is the default one called? That is, how do I invoke it in sns.set_palette() if I changed to a different palette and want to revert to the default? I also couldn't find an answer at https:\/\/seaborn.pydata.org\/generated\/seaborn.color_palette.html.","response":"By default seaborn uses the matplotlib default colors. If you have not changed the default matplotlib parameters, this would be the colors from the \"tab10\" colormap. \n```\nsns.set_palette(\"tab10\")\n```\nIf however you have set some style beforehands, e.g. using sns.set(), this would set the palette to \"deep\". Hence you would get this back using \n```\nsns.set_palette(\"deep\")\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/48958426\/what-is-the-name-of-the-default-seaborn-color-palette","best_answers_votes":26,"tags":["matplotlib","seaborn"],"question_length":382,"response_length":376,"tag_count":2} {"question":"Figure title with several colors Is it possible to have multiple font colors in matplotlib figure titles? Something like this","response":"The following snippet seems to work. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(0, 5, 0.1);\ny = np.sin(x)\nfig1 = plt.figure(1)\nfig1.text(0.45, 0.95, \"Case A\", ha=\"center\", va=\"bottom\", size=\"medium\",color=\"red\")\nfig1.text(0.5, 0.95, \"&\", ha=\"center\", va=\"bottom\", size=\"medium\")\nfig1.text(0.55,0.95,\"Case B\", ha=\"center\", va=\"bottom\", size=\"medium\",color=\"blue\")\nplt.plot(x, y)\nplt.show()\n```\nAs far as I can see the title generated by matplotlib title function only contains one text object and hence can only have one font color. This is the reason for making multiple text elements on the figure.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9350171\/figure-title-with-several-colors","best_answers_votes":16,"tags":["python","matplotlib"],"question_length":125,"response_length":627,"tag_count":2} {"question":"How do you change the default font color for all text in matplotlib? 
This only changed it for chart titles, but left axis titles the default color: \n```\nimport matplotlib as mpl\nmpl.rcParams['text.color'] = 'blue'\n```","response":"You can set each text property separately, e.g.: \n```\nCOLOR = 'blue'\nmpl.rcParams['text.color'] = COLOR\nmpl.rcParams['axes.labelcolor'] = COLOR\nmpl.rcParams['xtick.color'] = COLOR\nmpl.rcParams['ytick.color'] = COLOR\n```\nThe full list of params is at https:\/\/matplotlib.org\/users\/customizing.html.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/48958208\/how-do-you-change-the-default-font-color-for-all-text-in-matplotlib","best_answers_votes":48,"tags":["python","matplotlib"],"question_length":217,"response_length":296,"tag_count":2} {"question":"How to label a line in matplotlib? I followed the documentation but still failed to label a line. \n```\nplt.plot([min(np.array(positions)[:,0]), max(np.array(positions)[:,0])], [0,0], color='k', label='East') # West-East\nplt.plot([0,0], [min(np.array(positions)[:,1]), max(np.array(positions)[:,1])], color='k', label='North') # South-North\n```\nIn the code snippet above, I am trying to plot out the North direction and the East direction. position contains the points to be plotted. But I end up with 2 straight lines with NO labels as follows: Where did I go wrong?","response":"The argument label is used to set the string that will be shown in the legend. For example consider the following snippet: \n```\nimport matplotlib.pyplot as plt\n\nplt.plot([1,2,3],'r-',label='Sample Label Red')\nplt.plot([0.5,2,3.5],'b-',label='Sample Label Blue')\nplt.legend()\nplt.show()\n```\nThis will plot 2 lines as shown: The arrow function supports labels. Do check this link: http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.arrow","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17941083\/how-to-label-a-line-in-matplotlib","best_answers_votes":51,"tags":["python","matplotlib"],"question_length":562,"response_length":451,"tag_count":2} {"question":"how to set readable xticks in seaborn's facetgrid? 
I have this plot of a dataframe with seaborn's facetgrid: \n```\nimport seaborn as sns\nimport matplotlib.pylab as plt\nimport pandas\nimport numpy as np\n\nplt.figure()\ndf = pandas.DataFrame({\"a\": map(str, np.arange(1001, 1001 + 30)),\n                      \"l\": [\"A\"] * 15 + [\"B\"] * 15,\n                      \"v\": np.random.rand(30)})\ng = sns.FacetGrid(row=\"l\", data=df)\ng.map(sns.pointplot, \"a\", \"v\")\nplt.show()\n```\nseaborn plots all the xtick labels instead of just picking a few and it looks horrible: Is there a way to customize it so that it plots every n-th tick on the x-axis instead of all of them?","response":"You have to skip x labels manually like in this example: \n```\nimport seaborn as sns\nimport matplotlib.pylab as plt\nimport pandas\nimport numpy as np\n\ndf = pandas.DataFrame({\"a\": range(1001, 1031),\n                      \"l\": [\"A\",] * 15 + [\"B\",] * 15,\n                      \"v\": np.random.rand(30)})\ng = sns.FacetGrid(row=\"l\", data=df)\ng.map(sns.pointplot, \"a\", \"v\")\n\n# iterate over axes of FacetGrid\nfor ax in g.axes.flat:\n    labels = ax.get_xticklabels() # get x labels\n    for i,l in enumerate(labels):\n        if(i%2 == 0): labels[i] = '' # skip even labels\n    ax.set_xticklabels(labels, rotation=30) # set new labels\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43727278\/how-to-set-readable-xticks-in-seaborns-facetgrid","best_answers_votes":34,"tags":["python","pandas","matplotlib","seaborn"],"question_length":649,"response_length":638,"tag_count":4} {"question":"boxplot using precalculated (summary) statistics I need to do a boxplot (in Python and matplotlib) but I do not have the original \"raw\" data. What I have are precalculated values for max, min, mean, median and IQR (normal distribution) but still I'd like to do a boxplot. Of course plotting outliers isn't possible, but besides that I guess all the information is there. I've searched all over to find an answer without success. The closest I've come is the same question but for R (which I'm unfamiliar with). See Is it possible to plot a boxplot from previously-calculated statistics easily (in R?)","response":"Thanks to the comment of @tacaswell I was able to find the required documentation and come up with an example using Matplotlib 1.4.3. However, this example does not automatically scale the figure to the correct size. 
\n```\nimport matplotlib.pyplot as plt\n\nitem = {}\n\nitem[\"label\"] = 'box' # not required\nitem[\"mean\"] = 5 # not required\nitem[\"med\"] = 5.5\nitem[\"q1\"] = 3.5\nitem[\"q3\"] = 7.5\n#item[\"cilo\"] = 5.3 # not required\n#item[\"cihi\"] = 5.7 # not required\nitem[\"whislo\"] = 2.0 # required\nitem[\"whishi\"] = 8.0 # required\nitem[\"fliers\"] = [] # required if showfliers=True\n\nstats = [item]\n\nfig, axes = plt.subplots(1, 1)\naxes.bxp(stats)\naxes.set_title('Default')\ny_axis = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\ny_values = [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]\nplt.yticks(y_axis, y_values)\n```\nRelevant links to the documentation: matplotlib.axes.Axes.bxp boxplot_stats datastructure Boxplot drawer function","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23655798\/boxplot-using-precalculated-summary-statistics","best_answers_votes":22,"tags":["python","matplotlib","boxplot"],"question_length":593,"response_length":909,"tag_count":3} {"question":"Turn off axis border for polar matplotlib plot I have a polar axes in matplotlib that has text which extends outside of the range of the axes. I would like to remove the border for the axis -- or set it to the color of the background so that the text is more legible. How can I do this? Simply increasing the size of the axes is not an acceptable solution (because the figure is embeddable in a GUI and it becomes too small if this is done). Changing the color of the background to be black so that the border is not visible is also not an acceptable solution. A considerable amount of code that does various parts of plotting things is omitted, but here is the generation of the figure and axes itself: \n```\nimport pylab as pl\nfig = pl.figure(figsize=(5,5), facecolor='white')\naxes = pl.subplot(111, polar=True, axisbg='white')\n\npl.xticks([])\npl.yticks([])\npl.ylim(0,10)\n\n# ... draw lots of things\n```","response":"Just add this line: axes.spines['polar'].set_visible(False) and it should go away! eewh, all the anatomy terms.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22847765\/turn-off-axis-border-for-polar-matplotlib-plot","best_answers_votes":35,"tags":["python","matplotlib","polar-coordinates"],"question_length":902,"response_length":111,"tag_count":3} {"question":"python matplotlib update scatter plot from a function I am trying to automatically update a scatter plot. The source of my X and Y values is external, and the data is pushed automatically into my code at unpredictable time intervals (rounds). I have only managed to plot all the data when the whole process ended, whereas I am trying to constantly add and plot data into my canvas. What I DO get (at the end of the whole run) is this: Whereas, what I am after is this: A simplified version of my code: \n```\nimport matplotlib.pyplot as plt\n\ndef read_data():\n    #This function gets the values of xAxis and yAxis\n    xAxis = [some values] #these values change in each run\n    yAxis = [other values] #these values change in each run\n\n    plt.scatter(xAxis,yAxis, label = 'myPlot', color = 'k', s=50) \n    plt.xlabel('x')\n    plt.ylabel('y')\n    plt.show()\n```","response":"There are several ways to animate a matplotlib plot. In the following let's look at two minimal examples using a scatter plot. (a) use interactive mode plt.ion() For an animation to take place we need an event loop. 
One way of getting the event loop is to use plt.ion() (\"interactive on\"). One then needs to first draw the figure and can then update the plot in a loop. Inside the loop, we need to draw the canvas and introduce a little pause for the window to process other events (like the mouse interactions etc.). Without this pause the window would freeze. Finally we call plt.waitforbuttonpress() to let the window stay open even after the animation has finished. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nplt.ion()\nfig, ax = plt.subplots()\nx, y = [],[]\nsc = ax.scatter(x,y)\nplt.xlim(0,10)\nplt.ylim(0,10)\n\nplt.draw()\nfor i in range(1000):\n x.append(np.random.rand(1)*10)\n y.append(np.random.rand(1)*10)\n sc.set_offsets(np.c_[x,y])\n fig.canvas.draw_idle()\n plt.pause(0.1)\n\nplt.waitforbuttonpress()\n```\n(b) using FuncAnimation Much of the above can be automated using matplotlib.animation.FuncAnimation. The FuncAnimation will take care of the loop and the redrawing and will constantly call a function (in this case animate()) after a given time interval. The animation will only start once plt.show() is called, thereby automatically running in the plot window's event loop. \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.animation\nimport numpy as np\n\nfig, ax = plt.subplots()\nx, y = [],[]\nsc = ax.scatter(x,y)\nplt.xlim(0,10)\nplt.ylim(0,10)\n\ndef animate(i):\n x.append(np.random.rand(1)*10)\n y.append(np.random.rand(1)*10)\n sc.set_offsets(np.c_[x,y])\n\nani = matplotlib.animation.FuncAnimation(fig, animate, \n frames=2, interval=100, repeat=True) \nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42722691\/python-matplotlib-update-scatter-plot-from-a-function","best_answers_votes":50,"tags":["python","matplotlib","scatter-plot"],"question_length":866,"response_length":1828,"tag_count":3} {"question":"How to control scientific notation in matplotlib? This is my data frame I'm trying to plot: \n```py\nmy_dic = {'stats': {'apr': 23083904, 'may': 16786816, 'june': 26197936}}\nmy_df = pd.DataFrame(my_dic)\n```\nThis is how I plot it: \n```py\nax = my_df['stats'].plot(kind='bar', legend=False)\nax.set_xlabel(\"Month\", fontsize=12)\nax.set_ylabel(\"Stats\", fontsize=12)\nax.ticklabel_format(useOffset=False) #AttributeError: This method only works with the ScalarFormatter.\nplt.show()\n```\nThe plot: I'd like to control the scientific notation. 
I tried to suppress it by this line, as was suggested in other questions, plt.ticklabel_format(useOffset=False) but I get this error back - \n```none\nAttributeError: This method only works with the ScalarFormatter\n```\nIdeally, I'd like to show my data in millions (mln).","response":"Adding this line helps to get numbers in a plain format but with ',' separators, which looks much nicer: \n```\nax.get_yaxis().set_major_formatter(\n    matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))\n```\nAnd then I can divide int(x) to convert to millions or thousands as I wish:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/46735745\/how-to-control-scientific-notation-in-matplotlib","best_answers_votes":26,"tags":["python","pandas","matplotlib","scientific-notation","yticks"],"question_length":790,"response_length":280,"tag_count":5} {"question":"Move and resize legends-box in matplotlib I'm creating plots using Matplotlib that I save as SVG, export to .pdf + .pdf_tex using Inkscape, and include the .pdf_tex-file in a LaTeX document. This means that I can input LaTeX-commands in titles, legends etc., giving an image like this which renders like this when I use it in my LaTeX document. Notice that the font for the numbers on the axes changes, and the LaTeX-code in the legend is compiled: Code for the plot (how to export to SVG not shown here, but can be shown on request): \n```\nimport numpy as np\nx = np.linspace(0,1,100)\ny = x**2\n\nimport matplotlib.pyplot as plt\nplt.plot(x, y, label = '{\\\\footnotesize \\$y = x^2\\$}')\nplt.legend(loc = 'best')\nplt.show()\n```\nThe problem is, as you can see, that the alignment and size of the box around the legend is wrong. This is because the size of the text of the label changes when the image is passed through Inkscape + pdflatex (because \\footnotesize etc. disappears, and the font size changes). I have figured out that I can choose the placement of the legend by either \n```\nplt.legend(loc = 'upper right')\n```\nor if I want more control I can use \n```\nplt.legend(bbox_to_anchor = [0.5, 0.2])\n```\nbut I haven't found any way of making the box around the legend smaller. Is this possible? An alternative to making the box smaller is to remove the outline of the box using something like \n```\nlegend = plt.legend()\nlegend.get_frame().set_edgecolor('1.0')\n```\nand then moving the legend to where I want it. In that case I would like to be able to set the placement of the legend by first letting python\/matplotlib place it using \n```\nplt.legend(loc = 'upper right')\n```\nand then for example moving it a bit to the right. Is this possible? I have tried using get_bbox_to_anchor() and set_bbox_to_anchor(), but can't seem to get it to work.","response":"You can move a legend after automatically placing it by drawing it, and then getting the bbox position. Here's an example: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Plot data\nx = np.linspace(0,1,100)\ny = x**2\nfig = plt.figure()\nax = fig.add_subplot(221) #small subplot to show how the legend has moved. \n\n# Create legend\nplt.plot(x, y, label = '{\\\\footnotesize \\$y = x^2\\$}')\nleg = plt.legend( loc = 'upper right')\n\nplt.draw() # Draw the figure so you can find the position of the legend. \n\n# Get the bounding box of the original legend\nbb = leg.get_bbox_to_anchor().transformed(ax.transAxes.inverted()) \n\n# Change the location of the legend. 
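\n# bb is now expressed in axes coordinates, so the offset applied below shifts\n# the legend by one and a half axes-widths to the right.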
\nxOffset = 1.5\nbb.x0 += xOffset\nbb.x1 += xOffset\nleg.set_bbox_to_anchor(bb, transform = ax.transAxes)\n\n# Update the plot\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23238041\/move-and-resize-legends-box-in-matplotlib","best_answers_votes":31,"tags":["python","matplotlib","legend"],"question_length":1830,"response_length":793,"tag_count":3} {"question":"Changing marker style in scatter plot according to third variable I am dealing with a multi-column dictionary. I want to plot two columns and subsequently change color and style of the markers according to a third and fourth column. I struggle with changing the marker style in the pylab scatter plot. My approach, which works for color, unfortunately does not work for marker style. \n```\nx=[1,2,3,4,5,6]\ny=[1,3,4,5,6,7]\nm=['k','l','l','k','j','l']\n\nfor i in xrange(len(m)):\n    m[i]=m[i].replace('j','o')\n    m[i]=m[i].replace('k','x')\n    m[i]=m[i].replace('l','+')\n\nplt.scatter(x,y,marker=m)\nplt.show()\n```","response":"The problem is that marker can only be a single value and not a list of markers, unlike the color parameter. You can either group by marker value so you have the x and y lists that have the same marker and plot them: \n```\nxs = [[1, 2, 3], [4, 5, 6]]\nys = [[1, 2, 3], [4, 5, 6]]\nm = ['o', 'x']\nfor i in range(len(xs)):\n    plt.scatter(xs[i], ys[i], marker=m[i])\nplt.show()\n```\nOr you can plot every single dot (which I would not recommend): \n```\nx=[1,2,3,4,5,6]\ny=[1,3,4,5,6,7]\nm=['k','l','l','k','j','l']\n\nmapping = {'j' : 'o', 'k': 'x', 'l': '+'}\n\nfor i in range(len(x)):\n    plt.scatter(x[i], y[i], marker=mapping[m[i]])\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18800944\/changing-marker-style-in-scatter-plot-according-to-third-variable","best_answers_votes":24,"tags":["python","dictionary","matplotlib","marker","scatter"],"question_length":609,"response_length":640,"tag_count":5} {"question":"Dynamically serving a matplotlib image to the web using python This question has been asked in a similar way here but the answer was way over my head (I'm super new to python and web development) so I'm hoping there's a simpler way or it could be explained differently. I'm trying to generate an image using matplotlib and serve it without first writing a file to the server. My code is probably kind of silly, but it goes like this: \n```\nimport cgi\nimport matplotlib.pyplot as pyplot\nimport cStringIO #I think I will need this but not sure how to use\n\n...a bunch of matplotlib stuff happens....\npyplot.savefig('test.png')\n\nprint \"Content-type: text\/html\\n\"\nprint \"\"\"\n...a bunch of text and html here...\n<img src=\"test.png\"><\/img>\n...more text and html...\n<\/body><\/html>\n\"\"\"\n```\nI think that instead of doing pyplot.savefig('test.png'), I am supposed to create a cstringIO object and then do something like this: \n```\nmybuffer=cStringIO.StringIO()\npyplot.savefig(mybuffer, format=\"png\")\n```\nBut I am pretty lost from there. All the examples I've seen (e.g. 
http:\/\/lost-theory.org\/python\/dynamicimg.html) involve doing something like \n```\nprint \"Content-type: image\/png\\n\"\n```\nand I don't get how to integrate that with the HTML I'm already outputting.","response":"You should first write to a cStringIO object, then write the HTTP header, and then write the content of the cStringIO to stdout. Thus, if an error in savefig occurred, you could still return something else, even another header. Some errors won't be recognized earlier, e.g., some problems with texts, too large image dimensions etc. You need to tell savefig where to write the output. You can do: \n```\nimport sys, os\nimport msvcrt # Windows-only; used for the binary-stdout call below\nimport cStringIO\nimport matplotlib.pyplot as pyplot\n\nformat = \"png\"\nsio = cStringIO.StringIO()\npyplot.savefig(sio, format=format)\nprint \"Content-Type: image\/%s\\n\" % format\nmsvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) # Needed this on windows, IIS\nsys.stdout.write(sio.getvalue())\n```\nIf you want to embed the image into HTML: \n```\nprint \"Content-Type: text\/html\\n\"\nprint \"\"\"\n...a bunch of text and html here...\n<img src=\"data:image\/png;base64,%s\"\/>\n...more text and html...\n<\/body><\/html>\"\"\" % sio.getvalue().encode(\"base64\").strip()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14824522\/dynamically-serving-a-matplotlib-image-to-the-web-using-python","best_answers_votes":21,"tags":["python","matplotlib","cgi"],"question_length":1263,"response_length":894,"tag_count":3} {"question":"matplotlib: log transform counts in hist2d Is there a simple way to get log transformed counts when plotting a two dimensional histogram in matplotlib? Unlike the pyplot.hist method, the pyplot.hist2d method does not seem to have a log parameter. Currently I'm doing the following: \n```\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pylab as plt\n\nmatrix, *opt = np.histogram2d(x, y)\nimg = plt.imshow(matrix, norm = mpl.colors.LogNorm(), cmap = mpl.cm.gray, \n                 interpolation=\"None\")\n```\nWhich plots the expected histogram, but the axis labels show the indices of the bins and thus not the expected value.","response":"It's kind of embarrassing, but the answer to my question is actually in the docstring of the corresponding code: \n```\nNotes\n-----\n    Rendering the histogram with a logarithmic color scale is\n    accomplished by passing a :class:`colors.LogNorm` instance to\n    the *norm* keyword argument. Likewise, power-law normalization\n    (similar in effect to gamma correction) can be accomplished with\n    :class:`colors.PowerNorm`.\n```\nSo this works: \n```\nimport matplotlib as mpl\nimport matplotlib.pylab as plt\npar = plt.hist2d(x, y, norm=mpl.colors.LogNorm(), cmap=mpl.cm.gray)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23309272\/matplotlib-log-transform-counts-in-hist2d","best_answers_votes":46,"tags":["python","matplotlib"],"question_length":634,"response_length":576,"tag_count":2} {"question":"Correcting matplotlib colorbar ticks I've placed a color bar alongside a choropleth map. Because the data being plotted are discrete rather than continuous values, I've used a LinearSegmentedColormap (using the recipe from the scipy cookbook), which I've initialised with my max counted value + 1, in order to show a colour for 0. However, I now have two problems: The tick labels are incorrectly spaced (except for 5, more or less) \u2013 they should be located in the middle of the colour they identify; i.e. 0 - 4 should be moved up, and 6 - 10 should be moved down. 
If I initialise the colorbar with drawedges=True, so that I can style its divider properties, I get this: I'm creating my colormap and colorbar like so: \n```\ncbmin, cbmax = min(counts), max(counts)\n# this normalises the counts to a 0,1 interval\ncounts \/= np.max(np.abs(counts), axis=0)\n# density is a discrete number, so we have to use a discrete color ramp\/bar\ncm = cmap_discretize(plt.get_cmap('YlGnBu'), int(cbmax) + 1)\nmappable = plt.cm.ScalarMappable(cmap=cm)\nmappable.set_array(counts)\n# set min and max values for the colour bar ticks\nmappable.set_clim(cbmin, cbmax)\npc = PatchCollection(patches, match_original=True)\n# impose our colour map onto the patch collection\npc.set_facecolor(cm(counts))\nax.add_collection(pc,)\ncb = plt.colorbar(mappable, drawedges=True)\n```\nSo I'm wondering whether my converting the counts to a 0,1 interval is one of the problems. Update: Having tried what Hooked suggested, the 0-value is correct, but subsequent values are set progressively higher, to the point where 9 is where 10 should be: Here's the code I used: \n```\ncb = plt.colorbar(mappable)\nlabels = np.arange(0, int(cbmax) + 1, 1)\nloc = labels + .5\ncb.set_ticks(loc)\ncb.set_ticklabels(labels)\n```\nAnd just to confirm, labels definitely has the correct values: \n```\n>>> np.arange(0, int(cbmax) + 1, 1)\narray([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10])\n```","response":"You are suffering from an off-by-one error. You have 10 ticklabels spread among 11 colors. You might be able to correct the error by using np.linspace instead of np.arange. Using np.linspace the third argument is the number of values desired. This reduces the amount of mental gymnastics needed to avoid the off-by-one error: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib.cm as cm\nimport matplotlib.colors as mcolors\n\ndef colorbar_index(ncolors, cmap):\n    cmap = cmap_discretize(cmap, ncolors)\n    mappable = cm.ScalarMappable(cmap=cmap)\n    mappable.set_array([])\n    mappable.set_clim(-0.5, ncolors+0.5)\n    colorbar = plt.colorbar(mappable)\n    colorbar.set_ticks(np.linspace(0, ncolors, ncolors))\n    colorbar.set_ticklabels(range(ncolors))\n\ndef cmap_discretize(cmap, N):\n    \"\"\"Return a discrete colormap from the continuous colormap cmap.\n\n        cmap: colormap instance, eg. cm.jet. \n        N: number of colors.\n\n    Example\n        x = resize(arange(100), (5,100))\n        djet = cmap_discretize(cm.jet, 5)\n        imshow(x, cmap=djet)\n    \"\"\"\n\n    if type(cmap) == str:\n        cmap = plt.get_cmap(cmap)\n    colors_i = np.concatenate((np.linspace(0, 1., N), (0.,0.,0.,0.)))\n    colors_rgba = cmap(colors_i)\n    indices = np.linspace(0, 1., N+1)\n    cdict = {}\n    for ki,key in enumerate(('red','green','blue')):\n        cdict[key] = [ (indices[i], colors_rgba[i-1,ki], colors_rgba[i,ki])\n                       for i in xrange(N+1) ]\n    # Return colormap object.\n    return mcolors.LinearSegmentedColormap(cmap.name + \"_%d\"%N, cdict, 1024)\n\nfig, ax = plt.subplots()\nA = np.random.random((10,10))*10\ncmap = plt.get_cmap('YlGnBu')\nax.imshow(A, interpolation='nearest', cmap=cmap)\ncolorbar_index(ncolors=11, cmap=cmap) \nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18704353\/correcting-matplotlib-colorbar-ticks","best_answers_votes":26,"tags":["python","matplotlib","color-mapping"],"question_length":1937,"response_length":1776,"tag_count":3} {"question":"Dates in the xaxis for a matplotlib plot with imshow So I am new to programming with matplotlib. 
I have created a color plot using imshow() and an array. At first the axes were just the row and column number of my array. I used extent = (xmin,xmax,ymin,ymax) to get the x- and y-axes in unix time and altitude, respectively. I want to change the x-axis from unix time (982376726,982377321) to UT (02:25:26, 02:35:21). I have created a list of the time range in HH:MM:SS. I am not sure how to replace my current x-axis with these new numbers, without changing the color plot (or making it disappear). I was looking at datetime.time but I got confused with it. Any help would be greatly appreciated!","response":"I have put together some example code which should help you with your problem. The code first generates some randomised data using numpy.random. It then calculates your x-limits and y-limits where the x-limits will be based off of two unix timestamps given in your question and the y-limits are just generic numbers. The code then plots the randomised data and uses pyplot methods to convert the x-axis formatting to nicely represented strings (rather than unix timestamps or array numbers). The code is well commented and should explain everything you need, if not please comment and ask for clarification. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport matplotlib.dates as mdates\n\nimport datetime as dt\n\n# Generate some random data for imshow\nN = 10\narr = np.random.random((N, N))\n\n# Create your x-limits. Using two of your unix timestamps you first\n# create a list of datetime.datetime objects using map.\nx_lims = list(map(dt.datetime.fromtimestamp, [982376726, 982377321]))\n\n# You can then convert these datetime.datetime objects to the correct\n# format for matplotlib to work with.\nx_lims = mdates.date2num(x_lims)\n\n# Set some generic y-limits.\ny_lims = [0, 100]\n\nfig, ax = plt.subplots()\n\n# Using ax.imshow we set two keyword arguments. The first is extent.\n# We give extent the values from x_lims and y_lims above.\n# We also set the aspect to \"auto\" which should set the plot up nicely.\nax.imshow(arr, extent = [x_lims[0], x_lims[1], y_lims[0], y_lims[1]], \n          aspect='auto')\n\n# We tell Matplotlib that the x-axis is filled with datetime data, \n# this converts it from a float (which is the output of date2num) \n# into a nice datetime string.\nax.xaxis_date()\n\n# We can use a DateFormatter to choose how this datetime string will look.\n# I have chosen HH:MM:SS though you could add DD\/MM\/YY if you had data\n# over different days.\ndate_format = mdates.DateFormatter('%H:%M:%S')\n\nax.xaxis.set_major_formatter(date_format)\n\n# This simply sets the x-axis data to diagonal so it fits better.\nfig.autofmt_xdate()\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23139595\/dates-in-the-xaxis-for-a-matplotlib-plot-with-imshow","best_answers_votes":42,"tags":["python","datetime","matplotlib","plot","imshow"],"question_length":689,"response_length":2053,"tag_count":5} {"question":"How to change marker size with pandas.plot() I have a chart created from df.plot(style=\"o\") where the o markers are too big. Would it be possible to size them down? 
\n```\nimport pandas as pd\ndf = pd.DataFrame(range(1, 10))\ndf.plot(style=\"o\")\n```\nHow can I shrink them down?","response":"After investigation, it looks like you can pass the ms argument, short for markersize (which also works), directly to pandas.plot(), like so: \n```\nimport pandas as pd\ndf = pd.DataFrame(range(1, 10))\ndf.plot(style=\"o\", ms=3)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41107232\/how-to-change-marker-size-with-pandas-plot","best_answers_votes":41,"tags":["python","pandas","matplotlib"],"question_length":272,"response_length":230,"tag_count":3} {"question":"matplotlib set shared axis [duplicate] This question already has answers here: How to share x axes of two subplots after they have been created (6 answers) Closed 3 years ago. Using matplotlib, it seems the only time to set the sharex or sharey axis parameters is during sub_plot creation (add_subplot(), subplot(), subplots()). For an axes class there are methods for getting axis sharing (get_shared_x_axes(), get_shared_y_axes()), but no corresponding methods for setting sharing. Maybe this is an API oversight, or perhaps it did not fit architecturally. Is there a way to change the shared axis parameter? For those that ask why: I'm using a matrix of plots dynamically, and can control this using view limits, but it just seems like there could be an easier way, and turning sharing on\/off and using autoscale would be it. Thanks.","response":"Just to mention that a method for sharing axes after their creation does exist by now. For two axes ax1 and ax2 you can use \n```\nax1.get_shared_x_axes().join(ax1, ax2)\n```\nSee How share x axis of two subplots after they are created?.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17329292\/matplotlib-set-shared-axis","best_answers_votes":26,"tags":["python","matplotlib"],"question_length":837,"response_length":233,"tag_count":2} {"question":"Linestyle in matplotlib step function Is it possible to set the linestyle in a matplotlib step function to dashed, dotted, etc.? I've tried: \n```\nstep(x, linestyle='--'), \nstep(x, '--')\n```\nBut it did not help.","response":"As of mpl 1.3.0 this is fixed upstream. You have to come at it a bit sideways as step seems to ignore linestyle. If you look at what step is doing underneath, it is just a thin wrapper for plot. You can do what you want by talking to plot directly: \n```\nimport matplotlib.pyplot as plt\n\nplt.plot(range(5), range(5), linestyle='--', drawstyle='steps')\nplt.plot(range(5), range(5)[::-1], linestyle=':', drawstyle='steps')\nplt.xlim([-1, 5])\nplt.ylim([-1, 5])\n```\n['steps', 'steps-pre', 'steps-mid', 'steps-post'] are the valid values for drawstyle and control where the step is drawn. Pull request resulting from this question; I personally think this is a bug. [edit: this has been pulled into master and should show up in v1.3.0].","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15188005\/linestyle-in-matplotlib-step-function","best_answers_votes":40,"tags":["python","matplotlib","linestyle"],"question_length":210,"response_length":728,"tag_count":3} {"question":"Position 5 subplots in Matplotlib I would like to position 5 subplots such that there are three on top and two at the bottom but next to each other. 
The current code gets close but I would like the final result to look like the following (ignore gray lines): \n```\nimport matplotlib.pyplot as plt\n\nax1 = plt.subplot(231)\nax2 = plt.subplot(232)\nax3 = plt.subplot(233)\nax4 = plt.subplot(234)\nax5 = plt.subplot(236)\n\nplt.show()\n```","response":"You can use colspan when you use subplot2grid instead of subplot. \n```\nimport matplotlib.pyplot as plt\n\nax1 = plt.subplot2grid(shape=(2,6), loc=(0,0), colspan=2)\nax2 = plt.subplot2grid((2,6), (0,2), colspan=2)\nax3 = plt.subplot2grid((2,6), (0,4), colspan=2)\nax4 = plt.subplot2grid((2,6), (1,1), colspan=2)\nax5 = plt.subplot2grid((2,6), (1,3), colspan=2)\n```\nAnd then every subplot needs to be 2 cols wide, so that the subplots in the second row can be shifted by 1 column.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26767281\/position-5-subplots-in-matplotlib","best_answers_votes":25,"tags":["python","matplotlib"],"question_length":427,"response_length":471,"tag_count":2} {"question":"matplotlib: add circle to plot How do I add a small filled circle or point to a contour plot in matplotlib?","response":"Here is an example, using pylab.Circle: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ne = np.e\nX, Y = np.meshgrid(np.linspace(0, 5, 100), np.linspace(0, 5, 100))\nF = X ** Y\nG = Y ** X\n\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\ncirc = plt.Circle((e, e), radius=0.07, color='g')\nplt.contour(X, Y, (F - G), [0])\nax.add_patch(circ)\nplt.show()\n```\nAnd here is another example (though not a contour plot) from the docs. Or, you could just use plot: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ne = np.e\nX, Y = np.meshgrid(np.linspace(0, 5, 100), np.linspace(0, 5, 100))\nF = X ** Y\nG = Y ** X\n\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\nplt.contour(X, Y, (F - G), [0])\nplt.plot([e], [e], 'g.', markersize=20.0)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3439639\/matplotlib-add-circle-to-plot","best_answers_votes":38,"tags":["python","matplotlib"],"question_length":108,"response_length":754,"tag_count":2} {"question":"matplotlib large set of colors for plots I have a lot of graphs I want to plot in one plot. I've just started with matplotlib and can't find a good way to generate a lot of distinguishable colors :( Maybe cycling over HSV with SV at maximum? I'm thinking of something like \n```\nargs=[]\nfor i,(x,y) in enumerate(data):\n    args.extend([x,y,hsv(i)])\nplot(*args)\n```\nAny suggestions? 
:)","response":"I think you have the right idea, except that the colors will be more distinguishable if you pass the colormap hsv numbers which are spread out over the range (0,1): \n```\nhsv = plt.get_cmap('hsv')\nhsv(float(i)\/(len(data)-1))\n```\nor, using NumPy: \n```\ncolors = hsv(np.linspace(0, 1.0, len(kinds)))\n```\nFor example: \n```\nimport datetime as DT\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport scipy.interpolate as interpolate\n\ndates = [DT.date(year, 9, 1) for year in range(2003, 2009)]\nt = list(map(mdates.date2num, dates))\njec = (100, 70, 125, 150, 300, 250)\nplt.plot(dates, jec, 'k.', markersize = 20)\nnew_t = np.linspace(min(t), max(t), 80)\nnew_dates = map(mdates.num2date, new_t)\nkinds = ('cubic', 'quadratic', 'slinear', 'nearest', 'linear', 'zero', 4, 5)\ncmap = plt.get_cmap('jet')\ncolors = cmap(np.linspace(0, 1.0, len(kinds)))\nfor kind, color in zip(kinds, colors):\n    new_jec = interpolate.interp1d(t, jec, kind=kind)(new_t)\n    plt.plot(new_t, new_jec, '-', label=str(kind), color=color)\nplt.legend(loc = 'best')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7513262\/matplotlib-large-set-of-colors-for-plots","best_answers_votes":32,"tags":["python","plot","matplotlib"],"question_length":383,"response_length":1078,"tag_count":3} {"question":"Set size of subplot in matplotlib [duplicate] This question already has answers here: How do I change the figure size with subplots? (6 answers) Closed 2 years ago. I wonder how to set the size of the subplot when a figure contains multiple subplots (5 \u00d7 2 in my case). No matter how big I allow the whole figure to be, the subplots always seem to be small. I would like to have direct control of the size of the subplot in this figure. The simplified version of the code is pasted below. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.random.randn(20)\ny = np.random.randn(20)\n\nfig = plt.figure(figsize=(20, 8))\n\nfor i in range(0,10):\n    ax = fig.add_subplot(5, 2, i+1)\n    plt.plot(x, y, 'o')\n    ax.xaxis.set_visible(False)\n    ax.yaxis.set_visible(False)\n    # x and y axis should be equal length\n    x0,x1 = ax.get_xlim()\n    y0,y1 = ax.get_ylim()\n    ax.set_aspect(abs(x1-x0)\/abs(y1-y0))\n\nplt.show()\nfig.savefig('plot.pdf', bbox_inches='tight')\n```","response":"Just switch figure size width and height from: \n```\nfig = plt.figure(figsize=(20, 8))\n```\nto: \n```\nfig = plt.figure(figsize=(8, 20))\n```\nto use the whole page for your plots. This will change your plot from: to:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41530975\/set-size-of-subplot-in-matplotlib","best_answers_votes":31,"tags":["python","matplotlib","subplot"],"question_length":969,"response_length":211,"tag_count":3} {"question":"Python: Frequency of occurrences I have a list of integers and want to get the frequency of each integer. This was discussed here. The problem is that the approach I'm using gives me the frequency of floating numbers when my data set consists of integers only. Why does that happen, and how can I get the frequency of integers from my data? I'm using pyplot.histogram to plot a histogram with the frequency of occurrences: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy import *\ndata = loadtxt('data.txt',dtype=int,usecols=(4,)) #loading 5th column of csv file into array named data. 
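\n# (usecols is zero-based, so index 4 does pick out the 5th column)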
\nplt.hist(data) #plotting the column as histogram\n```\nI'm getting the histogram, but I've noticed that if I \"print\" hist \n```\nhist=np.histogram(data)\nprint hist\n```\nI get this: \n```\n(array([ 2323, 16338,  1587,   212,    26,    14,     3,     2,     2,     2]), \narray([  1. ,   2.8,   4.6,   6.4,   8.2,  10. ,  11.8,  13.6,  15.4,\n        17.2,  19. ]))\n```\nHere the second array represents values and the first array represents the number of occurrences. In my data set all values are integers; how does it happen that the second array has floating numbers, and how should I get the frequency of integers? UPDATE: This solves the problem, thank you Lev for the reply. \n```\nplt.hist(data, bins=np.arange(data.min(), data.max()+1))\n```\nTo avoid creating a new question: how can I plot columns \"in the middle\" for each integer? Say, I want the column for integer 3 to take the space between 2.5 and 3.5, not between 3 and 4.","response":"If you don't specify what bins to use, np.histogram and pyplot.hist will use a default setting, which is to use 10 equal bins. The left border of the 1st bin is the smallest value and the right border of the last bin is the largest. This is why the bin borders are floating point numbers. You can use the bins keyword argument to enforce another choice of bins, e.g.: \n```\nplt.hist(data, bins=np.arange(data.min(), data.max()+1))\n```\nEdit: the easiest way to shift all bins to the left is probably just to subtract 0.5 from all bin borders: \n```\nplt.hist(data, bins=np.arange(data.min(), data.max()+1)-0.5)\n```\nAnother way to achieve the same effect (not equivalent if non-integers are present): \n```\nplt.hist(data, bins=np.arange(data.min(), data.max()+1), align='left')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22127769\/python-frequency-of-occurrences","best_answers_votes":23,"tags":["python","matplotlib"],"question_length":1474,"response_length":776,"tag_count":2} {"question":"Cannot edit text in chart exported by Matplotlib and opened in Illustrator I am exporting charts from matplotlib and editing them in Illustrator. It's great that I can edit the lines, but the text also comes in as lines, so I cannot change fonts, edit text, etc. I've exported as EPS, PDF, and PS with the same issues. I'm using matplotlib version 1.0.1 with python 2.7.1 on OSX Snow Leopard. I appreciate any insights offered! I tried using pdf2ps as suggested here, but all that did was degrade the quality of the image without making the text render as real text. pdftops looked nicer, but still can't edit the text results.","response":"You can edit the text in Acrobat\/Illustrator if you set pdf.fonttype to 42 (TrueType), and export in pdf. 
You can set this in your ~\/matplotlib\/matplotlibrc: \n```\npdf.fonttype       : 42         # Output Type 3 (Type3) or Type 42 (TrueType)\n```\n...or dynamically: \n```\n>>> import matplotlib as mpl\n>>> mpl.rcParams['pdf.fonttype'] = 42\n```\nApparently it defaults to Type3, which Acrobat\/Illustrator can't deal with.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5956182\/cannot-edit-text-in-chart-exported-by-matplotlib-and-opened-in-illustrator","best_answers_votes":28,"tags":["python","pdf","matplotlib","adobe-illustrator","eps"],"question_length":630,"response_length":399,"tag_count":5} {"question":"Find the area between two curves plotted in matplotlib (fill_between area) I have a list of x and y values for two curves, both having weird shapes, and I don't have a function for any of them. I need to do two things: Plot it and shade the area between the curves like the image below. Find the total area of this shaded region between the curves. I'm able to plot and shade the area between those curves with fill_between and fill_betweenx in matplotlib, but I have no idea on how to calculate the exact area between them, especially because I don't have a function for any of those curves. Any ideas? I looked everywhere and can't find a simple solution for this. I'm quite desperate, so any help is much appreciated. Thank you very much! EDIT: For future reference (in case anyone runs into the same problem), here is how I've solved this: connected the first and last node\/point of each curve together, resulting in a big weird-shaped polygon, then used shapely to calculate the polygon's area automatically, which is the exact area between the curves, no matter which way they go or how nonlinear they are. Works like a charm! :) Here is my code: \n```py\nfrom shapely.geometry import Polygon\n\nx_y_curve1 = [(0.121,0.232),(2.898,4.554),(7.865,9.987)] #these are your points for curve 1 (I just put some random numbers)\nx_y_curve2 = [(1.221,1.232),(3.898,5.554),(8.865,7.987)] #these are your points for curve 2 (I just put some random numbers)\n\npolygon_points = [] #creates an empty list where we will append the points to create the polygon\n\nfor xyvalue in x_y_curve1:\n    polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 1\n\nfor xyvalue in x_y_curve2[::-1]:\n    polygon_points.append([xyvalue[0],xyvalue[1]]) #append all xy points for curve 2 in the reverse order (from last point to first point)\n\nfor xyvalue in x_y_curve1[0:1]:\n    polygon_points.append([xyvalue[0],xyvalue[1]]) #append the first point in curve 1 again, so it \"closes\" the polygon\n\npolygon = Polygon(polygon_points)\narea = polygon.area\nprint(area)\n```\nEDIT 2: Thank you for the answers. Like Kyle explained, this only works for positive values. If your curves go below 0 (which is not my case, as shown in the example chart), then you would have to work with absolute numbers.","response":"The area calculation is straightforward in blocks where the two curves don't intersect: that's the trapezium, as has been pointed out above. If they intersect, then you create two triangles between x[i] and x[i+1], and you should add the area of the two. If you want to do it directly, you should handle the two cases separately. Here's a basic working example to solve your problem. 
First, I will start with some fake data: \n```\n#!\/usr\/bin\/python\nimport numpy as np\n\n# let us generate fake test data\nx = np.arange(10)\ny1 = np.random.rand(10) * 20\ny2 = np.random.rand(10) * 20\n```\nNow, the main code. Based on your plot, it looks like you have y1 and y2 defined at the same X points. Then we define: \n```\nz = y1-y2\ndx = x[1:] - x[:-1]\ncross_test = np.sign(z[:-1] * z[1:])\n```\ncross_test will be negative whenever the two graphs cross. At these points, we want to calculate the x coordinate of the crossover. For simplicity, I will calculate x coordinates of the intersection of all segments of y. For places where the two curves don't intersect, they will be useless values, and we won't use them anywhere. This just keeps the code easier to understand. Suppose you have z1 and z2 at x1 and x2, then we are solving for x0 such that z = 0: \n```\n# (z2 - z1)\/(x2 - x1) = (z0 - z1) \/ (x0 - x1) = -z1\/(x0 - x1)\n# x0 = x1 - (x2 - x1) \/ (z2 - z1) * z1\nx_intersect = x[:-1] - dx \/ (z[1:] - z[:-1]) * z[:-1]\ndx_intersect = - dx \/ (z[1:] - z[:-1]) * z[:-1]\n```\nWhere the curves don't intersect, area is simply given by: \n```\nareas_pos = abs(z[:-1] + z[1:]) * 0.5 * dx # signs of both z are same\n```\nWhere they intersect, we add areas of both triangles: \n```\nareas_neg = 0.5 * dx_intersect * abs(z[:-1]) + 0.5 * (dx - dx_intersect) * abs(z[1:])\n```\nNow, the area in each block x[i] to x[i+1] is to be selected, for which I use np.where: \n```\nareas = np.where(cross_test < 0, areas_neg, areas_pos)\ntotal_area = np.sum(areas)\n```\nThat is your desired answer. As has been pointed out above, this will get more complicated if both the y graphs were defined at different x points. If you want to test this, you can simply plot it (in my test case, the y range will be -20 to 20): \n```\nnegatives = np.where(cross_test < 0)\npositives = np.where(cross_test >= 0)\nplt.plot(x, y1)\nplt.plot(x, y2)\nplt.plot(x, z)\nplt.vlines(x_intersect[negatives], -20, 20)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25439243\/find-the-area-between-two-curves-plotted-in-matplotlib-fill-between-area","best_answers_votes":7,"tags":["python","matplotlib","scipy","area"],"question_length":2279,"response_length":2320,"tag_count":4} {"question":"scatter plot in matplotlib This is my first matplotlib program, so sorry for my ignorance. I've two arrays of strings. Say, A = ['test1','test2'] and B = ['test3','test4']. If any correlation exists between an A and B element, their corr value will be set to 1. \n```\ntest1 | test2\ntest3 | 1 | 0\n\ntest4 | 0 | 1\n```\nNow, I want to draw a scatter diagram where my X axis will be elements of A, Y axis will be elements of B and if the correlation value is 1, it'll be marked in the scattered plot. How to do that?","response":"Maybe something like this: \n```\nimport matplotlib.pyplot\nimport pylab\n\nx = [1,2,3,4]\ny = [3,4,8,6]\n\nmatplotlib.pyplot.scatter(x,y)\n\nmatplotlib.pyplot.show()\n```\nEDIT: Let me see if I understand you correctly now: You have: \n```\ntest1 | test2 | test3\ntest3 | 1 | 0 | 1\n\ntest4 | 0 | 1 | 0\n\ntest5 | 1 | 1 | 0\n```\nNow you want to represent the above values in a scatter plot, such that a value of 1 is represented by a dot. Let's say your results are stored in a 2-D list: \n```\nresults = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]\n```\nWe want to transform them into two variables so we are able to plot them. 
And I believe this code will give you what you are looking for: \n```\nimport matplotlib\nimport pylab\n\n\nresults = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]\n\nx = []\ny = []\n\nfor ind_1, sublist in enumerate(results):\n    for ind_2, ele in enumerate(sublist):\n        if ele == 1:\n            x.append(ind_1)\n            y.append(ind_2) \n\n\nmatplotlib.pyplot.scatter(x,y)\n\nmatplotlib.pyplot.show()\n```\nNotice that I do need to import pylab, and you would have to play around with the axis labels. Also this feels like a workaround, and there might be (probably is) a direct method to do this.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10336614\/scatter-plot-in-matplotlib","best_answers_votes":77,"tags":["python","matplotlib","scatter-plot"],"question_length":513,"response_length":1201,"tag_count":3} {"question":"Why matplotlib does not plot? I started to learn MatPlotLib using this tutorial for beginners. Here is the first example. \n```\nfrom pylab import *\nX = np.linspace(-np.pi, np.pi, 256,endpoint=True)\nC,S = np.cos(X), np.sin(X)\n```\nIf I write these 3 lines into my python file and execute it in the command line (by typing python file_name.py), nothing happens. No error message, no plot. Does anybody know why I do not see the plot? ADDED Of course I need to use show. But even if I add the following 3 lines: \n```\nplot(X,C)\nplot(X,S)\nshow()\n```\nit still does not generate anything. ADDED Here are the lines that I use now: \n```\nimport pylab as p\nC = [1,2,3,4]\nS = [10, 20, 30, 10]\np.plot(C,S)\np.show()\n```\nI still have the same result (nothing).","response":"It could be a problem with the backend. What is the output of python -c 'import matplotlib; import matplotlib.pyplot; print(matplotlib.backends.backend)'? If it is the 'agg' backend, what you see is the expected behaviour as it is a non-interactive backend that does not show anything to the screen, but works with plt.savefig(...). You should switch to, e.g., TkAgg or Qt4Agg to be able to use show. You can do it in the matplotlib.rc file. @shashank: I run matplotlib both on 12.04 and 12.10 without problems. In both cases I use the Qt4Agg backend. If you don't have the matplotlibrc set, the default backend is used. I'm sure that for Precise the matplotlib repo was built with TkAgg. If the Quantal version has been built with e.g. Agg, then that would explain the difference.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14558843\/why-matplotlib-does-not-plot","best_answers_votes":55,"tags":["python","matplotlib"],"question_length":742,"response_length":775,"tag_count":2} {"question":"How to disable xkcd in a matplotlib figure? You turn on xkcd style by: \n```\nimport matplotlib.pyplot as plt\nplt.xkcd()\n```\nBut how to disable it? I try: \n```\nself.fig.clf()\n```\nBut it won't work.","response":"In a nutshell, either use the context manager as @Valentin mentioned, or call plt.rcdefaults() afterwards. What's happening is that the rc parameters are being changed by plt.xkcd() (which is basically how it works). plt.xkcd() saves the current rc params and returns a context manager (so that you can use a with statement) that resets them at the end. If you didn't hold on to the context manager that plt.xkcd() returns, then you can't revert to the exact same rc params that you had before. In other words, let's say you had done something like plt.rc('lines', linewidth=2, color='r') before calling plt.xkcd(). 
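As a minimal sketch of that scenario: \n```\nimport matplotlib.pyplot as plt\n\nplt.rc('lines', linewidth=2, color='r') # custom rc state\nwith plt.xkcd():                        # xkcd style is active inside this block\n    plt.plot(range(5))\n# leaving the block restores the saved rc params, custom linewidth included\nplt.show()\n```\n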
If you didn't do with plt.xkcd(): or manager = plt.xkcd(), then the state of rcParams after calling plt.rc will be lost. However, you can revert back to the default rcParams by calling plt.rcdefaults(). You just won't retain any specific changes you made before calling plt.xkcd().","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22284843\/how-to-disable-xkcd-in-a-matplotlib-figure","best_answers_votes":42,"tags":["python","matplotlib"],"question_length":195,"response_length":893,"tag_count":2} {"question":"Adding units to heatmap annotation in Seaborn I am trying to show a table of percentages as a heatmap in Seaborn: \n```\nsns.heatmap(S, annot=True, fmt=\".1f\", linewidths=1.0, square=1)\n```\nHowever, I'd like the percent sign to appear after each number in the heatmap annotations. The fmt flag only seems to accept number format specifiers. Is there a way to do this either within Seaborn or with some matplotlib tweaking?","response":"You have to iterate over all text values of a heatmap and add a % sign: \n```\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\nsns.set()\nfig, ax0 = plt.subplots(1,1)\ndata = np.array([[10.01,20.20],[-0.23,0.],[23.1049,-100.000000]])\nax = sns.heatmap(data, annot=True, fmt = '.1f', square=1, linewidth=1.)\nfor t in ax.texts: t.set_text(t.get_text() + \" %\")\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/37602885\/adding-units-to-heatmap-annotation-in-seaborn","best_answers_votes":38,"tags":["python","matplotlib","seaborn"],"question_length":419,"response_length":435,"tag_count":3} {"question":"Black and white boxplots in Seaborn I am trying to draw multiple black-and-white boxplots using Python's Seaborn package. By default the plots are using a color palette. I would like to draw them in solid black outline. The best I can come up with is: \n```\n# figure styles\nsns.set_style('white')\nsns.set_context('paper', font_scale=2)\nplt.figure(figsize=(3, 5))\nsns.set_style('ticks', {'axes.edgecolor': '0', \n                        'xtick.color': '0',\n                        'ytick.color': '0'})\n\nax = sns.boxplot(x=\"test1\", y=\"test2\", data=dataset, color='white', width=.5)\nsns.despine(offset=5, trim=True)\nsns.plt.show()\n```\nWhich produces something like: I would like the box outlines to be black without any fill or changes in the color palette.","response":"I was just exploring this and it seems there is another way to do this now. Basically, there are the keywords boxprops, medianprops, whiskerprops and (you guessed it) capprops, all of which are dictionaries that may be passed to the boxplot func. 
I chose to define them above and then unpack them for readability: \n```py\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n_to_plot = pd.DataFrame(\n    {\n        0: np.random.normal(0, 1, 100),\n        1: np.random.normal(0, 2, 100),\n        2: np.random.normal(-1, 1, 100),\n        3: np.random.normal(-2, 2, 100)\n    }\n).melt()\n\nPROPS = {\n    'boxprops':{'facecolor':'none', 'edgecolor':'red'},\n    'medianprops':{'color':'green'},\n    'whiskerprops':{'color':'blue'},\n    'capprops':{'color':'magenta'}\n}\n\nfig, ax = plt.subplots(figsize=(10, 10))\nsns.boxplot(x='variable',y='value',\n            data=_to_plot,\n            showfliers=False,\n            linewidth=1,\n            ax=ax,\n            **PROPS)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43434020\/black-and-white-boxplots-in-seaborn","best_answers_votes":34,"tags":["python","matplotlib","seaborn"],"question_length":753,"response_length":993,"tag_count":3} {"question":"How to change the color of a single bar if condition is True Is it possible to change only the color of a single bar in a graph made by matplotlib? Let's say I have evaluations 1 to 10, and for each one a graph is generated when the user chooses the evaluation. For each evaluation one of these boys will win. So for each graph, I would like to leave the winner's bar in a different color; let's say Jim won evaluation1. Jim's bar would be red, and the others blue. I have a dictionary with the values; what I tried to do was something like this: \n```\nfor value in dictionary.keys(): # keys are the names of the boys\n    if winner == value:\n        facecolor = 'red'\n    else:\n        facecolor = 'blue'\n\nax.bar(ind, num, width, facecolor=facecolor)\n```\nDoes anyone know a way of doing this?","response":"You need to use color instead of facecolor. You can also specify color as a list instead of a scalar value. So for your example, you could have color=['r','b','b','b','b']. For example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nN = 5\nind = np.arange(N)\nwidth = 0.5\nvals = [1,2,3,4,5]\ncolors = ['r','b','b','b','b']\nax.barh(ind, vals, width, color=colors)\n\nplt.show()\n```\nis a full example showing you what you want. To answer your comment: \n```\ncolors = []\nfor value in dictionary.keys(): # keys are the names of the boys\n    if winner == value:\n        colors.append('r')\n    else:\n        colors.append('b')\n\nax.bar(ind, num, width, color=colors)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3832809\/how-to-change-the-color-of-a-single-bar-if-condition-is-true","best_answers_votes":39,"tags":["python","matplotlib","seaborn","bar-chart"],"question_length":778,"response_length":706,"tag_count":4} {"question":"AttributeError: Unknown property legend in seaborn The seaborn stripplot has a function which allows hue. Using the example from https:\/\/stanford.edu\/~mwaskom\/software\/seaborn\/generated\/seaborn.stripplot.html \n```\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\ntips = sns.load_dataset(\"tips\")\nax = sns.stripplot(x=tips[\"total_bill\"])\nax = sns.stripplot(x=\"sex\", y=\"total_bill\", hue=\"day\", data=tips, jitter=True)\n```\nIn this case, the legend is quite small, showing a different hue for each day. However, I would like to remove the legend. Normally, one includes a parameter legend=False. 
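For instance, a sketch of the attempted call: \n```\nax = sns.stripplot(x=\"sex\", y=\"total_bill\", hue=\"day\", data=tips, jitter=True, legend=False)\n```\n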
However, for stripplot, this appears to output an attribute error: \n```\nAttributeError: Unknown property legend\n```\nCan one remove the legend for stripplots? If so, how does one do this?","response":"Use ax.legend_.remove() like here: \n```\nimport seaborn as sns\nimport matplotlib.pylab as plt\nsns.set_style(\"whitegrid\")\ntips = sns.load_dataset(\"tips\")\nax = sns.stripplot(x=\"sex\", y=\"total_bill\", hue=\"day\", data=tips, jitter=True)\n\n# remove legend from axis 'ax'\nax.legend_.remove()\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38131062\/attributeerror-unknown-property-legend-in-seaborn","best_answers_votes":55,"tags":["python","pandas","matplotlib","legend","seaborn"],"question_length":562,"response_length":298,"tag_count":5} {"question":"Change color of specific ticks at plot with matplotlib Using matplotlib, is there an option to change the color of specific tick labels on the axis? I have a simple plot that shows some values by day, and I need to mark some days as 'special' days, so I want to mark these with a different color: not all ticks, just some specific ones.","response":"You can get a list of tick labels using ax.get_xticklabels(). This is actually a list of text objects. As a result, you can use set_color() on an element of that list to change the color: \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(5,4))\nax.plot([1,2,3])\n\nax.get_xticklabels()[3].set_color(\"red\")\n\nplt.show()\n```\nAlternatively, you can get the current axes using plt.gca(). The code below will give the same result: \n```\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(5,4))\nplt.plot([1, 2, 3])\n\nplt.gca().get_xticklabels()[3].set_color(\"red\")\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/49997934\/change-color-of-specific-ticks-at-plot-with-matplotlib","best_answers_votes":47,"tags":["python","matplotlib"],"question_length":331,"response_length":589,"tag_count":2} {"question":"Increase space between rows on FacetGrid plot I have the following code, which creates the plot you can see in the picture: \n```\ng = sns.FacetGrid(data, col=\"Provincia\",col_wrap=6,size=2.5)\ng.map(sns.barplot, \"Anio\", \"Diff\");\ng.set_axis_labels(\"A\u00f1o\", \"Porcentaje de aumento\");\n\nfor ax in g.axes.flat:\n    _ = plt.setp(ax.get_yticklabels(), visible=True)\n    _ = plt.setp(ax.get_xticklabels(), visible=False)\n    _ = plt.setp(ax.get_xticklabels()[0], visible=True) \n    _ = plt.setp(ax.get_xticklabels()[-1], visible=True)\n```\nThe problem, as you can see in the picture, is that the x ticks overlap with the col name below. What is the proper way to increase this space in order to fix this?","response":"tight layout You can use tight_layout to automatically adjust the spacings \n```\ng.fig.tight_layout()\n```\nor, if you have matplotlib.pyplot imported as plt, \n```\nplt.tight_layout()\n```\nsubplots adjust You can use plt.subplots_adjust to manually set the spacings between subplots, \n```\nplt.subplots_adjust(hspace=0.4, wspace=0.4)\n```\nwhere hspace is the space in the height direction, and wspace the space in the width direction. gridspec keyword arguments You could also use gridspec_kws in the FacetGrid initialization, \n```\ng = sns.FacetGrid(data, ... , gridspec_kws={\"wspace\":0.4})\n```\nHowever, this can only be used if col_wrap is not set. 
(So it might not be an option in the particular case from the question).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43669229\/increase-space-between-rows-on-facetgrid-plot","best_answers_votes":46,"tags":["python","matplotlib","seaborn","facet-grid"],"question_length":691,"response_length":697,"tag_count":4} {"question":"Elegant way to match a string to a random color matplotlib I want to translate the labels of some data to colors for graphing with matplotlib. I have a list of names [\"bob\", \"joe\", \"andrew\", \"pete\"]. Is there a built-in way to map these strings to color values in matplotlib? I thought about randomly creating hex values, but I could end up with similar colors or non-visible colors. I've tried a couple different ways of trying to create key values from the below cmap answer: this: \n```\n#names is a list of distinct names\ncmap = plt.get_cmap('cool')\ncolors = cmap(np.linspace(0, 1, len(names)))\nclr = {names[i]: colors[i] for i in range(len(names))}\nax.scatter(x, y, z, c=clr)\n```","response":"Choose a color map, such as viridis: \n```\ncmap = plt.get_cmap('viridis')\n```\nThe colormap, cmap, is a function which can take an array of values from 0 to 1 and map them to RGBA colors. np.linspace(0, 1, len(names)) produces an array of equally spaced numbers from 0 to 1 of length len(names). Thus, \n```\ncolors = cmap(np.linspace(0, 1, len(names)))\n```\nselects equally-spaced colors from the viridis color map. Note that this is not using the value of the string; it only uses the ordinal position of the string in the list to select a color. Note also that these are not random colors; this is just an easy way to generate unique colors from an arbitrary list of strings. So: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ncmap = plt.get_cmap('viridis')\nnames = [\"bob\", \"joe\", \"andrew\", \"pete\"]\ncolors = cmap(np.linspace(0, 1, len(names)))\nprint(colors)\n# [[ 0.267004 0.004874 0.329415 1. ]\n# [ 0.190631 0.407061 0.556089 1. ]\n# [ 0.20803 0.718701 0.472873 1. ]\n# [ 0.993248 0.906157 0.143936 1. ]]\n\nx = np.linspace(0, np.pi*2, 100)\nfor i, (name, color) in enumerate(zip(names, colors), 1):\n    plt.plot(x, np.sin(x)\/i, label=name, c=color)\nplt.legend()\nplt.show()\n```\nThe problem with \n```\nclr = {names[i]: colors[i] for i in range(len(names))}\nax.scatter(x, y, z, c=clr)\n```\nis that the c parameter of ax.scatter expects a sequence of RGB(A) values of the same length as x or a single color. clr is a dict, not a sequence. So if colors is the same length as x then you could use \n```\nax.scatter(x, y, z, c=colors)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31883097\/elegant-way-to-match-a-string-to-a-random-color-matplotlib","best_answers_votes":35,"tags":["python","matplotlib"],"question_length":681,"response_length":1567,"tag_count":2} {"question":"animated subplots using matplotlib I have this code. I want to add a subplot to draw the cosine function. (I do not want to create a class). 
The second plot should be dynamically updated as well \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\n\ndef data_gen():\n t = data_gen.t\n cnt = 0\n while cnt < 1000:\n cnt+=1\n t += 0.05\n yield t, np.sin(2*np.pi*t) * np.exp(-t\/10.)\ndata_gen.t = 0\n\nfig, ax = plt.subplots()\nline, = ax.plot([], [], lw=2)\nax.set_ylim(-1.1, 1.1)\nax.set_xlim(0, 5)\nax.grid()\nxdata, ydata = [], []\ndef run(data):\n # update the data\n t,y = data\n xdata.append(t)\n ydata.append(y)\n xmin, xmax = ax.get_xlim()\n\n if t >= xmax:\n ax.set_xlim(xmin, 2*xmax)\n ax.figure.canvas.draw()\n line.set_data(xdata, ydata)\n\n return line,\n\nani = animation.FuncAnimation(fig, run, data_gen, blit=True, interval=10,\n repeat=False)\nplt.show()\n```","response":"Basically you can use a very similar structure as the one you have in your example. You only need to create an additional axes (subplot) and a second line object: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\n\ndef data_gen():\n t = data_gen.t\n cnt = 0\n while cnt < 1000:\n cnt+=1\n t += 0.05\n y1 = np.sin(2*np.pi*t) * np.exp(-t\/10.)\n y2 = np.cos(2*np.pi*t) * np.exp(-t\/10.)\n # adapted the data generator to yield both sin and cos\n yield t, y1, y2\n\ndata_gen.t = 0\n\n# create a figure with two subplots\nfig, (ax1, ax2) = plt.subplots(2,1)\n\n# initialize two line objects (one in each axes)\nline1, = ax1.plot([], [], lw=2)\nline2, = ax2.plot([], [], lw=2, color='r')\nline = [line1, line2]\n\n# the same axes initializations as before (just now we do it for both of them)\nfor ax in [ax1, ax2]:\n ax.set_ylim(-1.1, 1.1)\n ax.set_xlim(0, 5)\n ax.grid()\n\n# initialize the data arrays \nxdata, y1data, y2data = [], [], []\ndef run(data):\n # update the data\n t, y1, y2 = data\n xdata.append(t)\n y1data.append(y1)\n y2data.append(y2)\n\n # axis limits checking. Same as before, just for both axes\n for ax in [ax1, ax2]:\n xmin, xmax = ax.get_xlim()\n if t >= xmax:\n ax.set_xlim(xmin, 2*xmax)\n ax.figure.canvas.draw()\n\n # update the data of both line objects\n line[0].set_data(xdata, y1data)\n line[1].set_data(xdata, y2data)\n\n return line\n\nani = animation.FuncAnimation(fig, run, data_gen, blit=True, interval=10,\n repeat=False)\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29832055\/animated-subplots-using-matplotlib","best_answers_votes":44,"tags":["python","matplotlib","animation"],"question_length":956,"response_length":1594,"tag_count":3} {"question":"Multiple legends in for-loop The following program executes fine but only one legend is displayed. How can I have all four legends displayed? Kindly see the image attached. \n```\nimport matplotlib.pyplot as plt\ndct = {'list_1' : [1,2,4,3,1],'list_2' : [2,4,5,1,2],'list_3' : [1,1,3,4,6],'list_4' : [1,1,2,2,1]}\n\nxs = [0,1,2,3,4]\n\n\nfor i in [1,2,3,4]:\n plt.plot(xs,dct['list_%s' %i])\n plt.legend(['%s data' %i])\n\nplt.show()\n```","response":"```\nimport matplotlib.pyplot as plt\ndct = {'list_1' : [1,2,4,3,1],'list_2' : [2,4,5,1,2],'list_3' : [1,1,3,4,6],'list_4' : [1,1,2,2,1]}\n\nxs = [0,1,2,3,4]\n\n\nfor i in [1,2,3,4]:\n plt.plot(xs,dct['list_%s' %i], label='%s data' % i)\n\nplt.legend()\n\nplt.show()\n```\nYou are running up against the way that legend works: each time it is called, it destroys the current legend and replaces it with the new one. 
If you only give legend a list of strings it iterates through the artists (the objects that represent the data to be drawn) in the axes until it runs out of labels (hence why your first curve is labeled as the 4th). If you include the kwarg label in the plot command, when you call legend without any arguments, it will iterate through the artists* and generate legend entries for the artists with labels. [*] there are some exceptions on which artists it will pick up","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14826119\/multiple-legends-in-for-loop","best_answers_votes":34,"tags":["python","matplotlib","legend"],"question_length":435,"response_length":873,"tag_count":3} {"question":"Setting the display range of subplot errorbars in matplotlib I'm trying to plot my data using matplotlib. I have 3 separate sets of data I want to plot in 3 subplots (I'm using this as my guidance): \n```\nplt.figure()\n\nfig, axs = plt.subplots(nrows=3, ncols = 1, sharex=False)\nax1 = axs[0]\nax1.errorbar(X,Y,Err,fmt='o')\nax1.set_xscale('log')\nax1.set_yscale('log')\nax1.set_title('epsilon=1.5, kappa = 2')\nplt.show()\n```\nHowever I get the x range from 1 (or 0, I'm not sure) to 100 and I want to reduce it. I tried this, by adding: \n```\nax1.xlim(0.5,13.5)\n```\nBut I get an error: AttributeError: 'AxesSubplot' object has no attribute 'xlim' How can I change the range then?","response":"You might want to use Axes.axis(*v, **kwargs): \n```\nax1.axis(xmin=0.5,xmax=13.5)\n```\nFrom the documentation: Set\/Get the axis properties If len(*v)==0, you can pass in xmin, xmax, ymin, ymax as kwargs selectively to alter just those limits without changing the others.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8224894\/setting-the-display-range-suplot-of-errorbars-in-matplotlib","best_answers_votes":30,"tags":["python","matplotlib"],"question_length":677,"response_length":268,"tag_count":2} {"question":"Plot a Single XGBoost Decision Tree I am using the method on https:\/\/machinelearningmastery.com\/visualize-gradient-boosting-decision-trees-xgboost-python\/ to plot an XGBoost decision tree \n```\nfrom numpy import loadtxt\nfrom xgboost import XGBClassifier\nfrom xgboost import plot_tree\nimport matplotlib.pyplot as plt\n# load data\ndataset = loadtxt('pima-indians-diabetes.csv', delimiter=\",\")\n# split data into X and y\nX = dataset[:,0:8]\ny = dataset[:,8]\n# fit model no training data\nmodel = XGBClassifier()\nmodel.fit(X, y)\n# plot single tree\nplot_tree(model)\nplt.show()\n```\nAs I have 150 features, the plot looks quite small and all the split points are hard to read. How can I draw a clearer one, save it locally, or otherwise display this \u2018tree\u2019 clearly? Any ideas are much appreciated.","response":"I had the same problem recently and the only way I found is by trying different figure sizes (it can still be blurry with a big figure). 
For example, to plot the 4th tree, use: \n```\nfig, ax = plt.subplots(figsize=(30, 30))\nxgb.plot_tree(model, num_trees=4, ax=ax)\nplt.show()\n```\nTo save it, you can do \n```\nplt.savefig(\"temp.pdf\")\n```\nAlso, each tree separates two classes, so you have as many trees as classes.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/51323595\/plot-a-single-xgboost-decision-tree","best_answers_votes":24,"tags":["python","matplotlib","xgboost"],"question_length":758,"response_length":401,"tag_count":3} {"question":"Seaborn configuration hides default matplotlib [duplicate] This question already has answers here: How can I use seaborn without changing the matplotlib defaults? (2 answers) Closed 6 years ago. Seaborn provides a handful of graphics which are very interesting for scientific data representation. Thus I started using these Seaborn graphics interspersed with other customized matplotlib plots. The problem is that once I do: \n```\nimport seaborn as sb\n```\nThis import seems to set the graphic parameters for seaborn globally and then all matplotlib graphics below the import get the seaborn parameters (they get a grey background, linewidth changes, etc, etc). In SO there is an answer explaining how to produce seaborn plots with the matplotlib configuration, but what I want is to keep the matplotlib configuration parameters unaltered when using both libraries together, and at the same time be able to produce, when needed, original seaborn plots.","response":"If you never want to use the seaborn style, but do want some of the seaborn functions, you can import seaborn using the following line (documentation): \n```\nimport seaborn.apionly as sns\n```\nIf you want to produce some plots with the seaborn style and some without, in the same script, you can turn the seaborn style off using the seaborn.reset_orig function. It seems that doing the apionly import essentially sets reset_orig automatically on import, so it's up to you which is most useful in your use case. Here's an example of switching between matplotlib defaults and seaborn: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport numpy as np\n\n# a simple plot function we can reuse (taken from the seaborn tutorial)\ndef sinplot(flip=1):\n x = np.linspace(0, 14, 100)\n for i in range(1, 7):\n plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)\n\nsinplot()\n\n# this will have the matplotlib defaults\nplt.savefig('seaborn-off.png')\nplt.clf()\n\n# now import seaborn\nimport seaborn as sns\n\nsinplot()\n\n# this will have the seaborn style\nplt.savefig('seaborn-on.png')\nplt.clf()\n\n# reset rc params to defaults\nsns.reset_orig()\n\nsinplot()\n\n# this should look the same as the first plot (seaborn-off.png)\nplt.savefig('seaborn-offagain.png')\n```\nwhich produces the following three plots: seaborn-off.png: seaborn-on.png: seaborn-offagain.png:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33099348\/seaborn-configuration-hides-default-matplotlib","best_answers_votes":23,"tags":["python","matplotlib","seaborn"],"question_length":950,"response_length":1349,"tag_count":3} {"question":"Plot 2-dimensional NumPy array using specific columns I have a 2D numpy array that's created like this: \n```\ndata = np.empty((number_of_elements, 7))\n```\nEach row with 7 (or whatever) floats represents an object's properties. 
The first two, for example, are the x and y position of the object; the others are various properties that could even be used to apply color information to the plot. I want to do a scatter plot from data, so that if p = data[i], an object is plotted as a point with p[:2] as its 2D position and with, say, p[2:4] as color information (the length of that vector should determine a color for the point). Other columns should not matter to the plot at all. How should I go about this?","response":"Setting up a basic matplotlib figure is easy: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\n```\nPicking off the columns for x, y and color might look something like this: \n```\nN = 100\ndata = np.random.random((N, 7))\nx = data[:,0]\ny = data[:,1]\npoints = data[:,2:4]\n# color is the length of each vector in `points`\ncolor = np.sqrt((points**2).sum(axis = 1))\/np.sqrt(2.0)\nrgb = plt.get_cmap('jet')(color)\n```\nThe last line retrieves the jet colormap and maps each of the float values (between 0 and 1) in the array color to a 3-tuple RGB value. There is a list of colormaps to choose from here. There is also a way to define custom colormaps. Making a scatter plot is now straightforward: \n```\nax.scatter(x, y, color = rgb)\nplt.show()\n# plt.savefig('\/tmp\/out.png') # to save the figure to a file\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13634349\/plot-2-dimensional-numpy-array-using-specific-columns","best_answers_votes":29,"tags":["python","numpy","matplotlib"],"question_length":665,"response_length":860,"tag_count":3} {"question":"How to set space between the axis and the label My code: \n```\n#importing required libraries\nimport numpy as np\nfrom matplotlib.patches import Polygon\nimport matplotlib.pyplot as plt\n\n\n#plotting data\n\ndef func(x):\n return (x-3)*(x+2)*(3*x+5)+25\na,b=2,9\nx=np.linspace(0,10,30)\ny=func(x)\nfig,ax=plt.subplots(1,1,dpi=135)\nax.plot(x,y,linewidth=2.5,color='c')\nix=np.linspace(a,b)\niy=func(ix)\nverts=[(a,0),*zip(ix,iy),(b,0)]\npoly=Polygon(verts,facecolor='0.7',edgecolor='0.9')\nax.add_patch(poly)\nax.set_ylim(bottom=0)\nax.text(6.5,150,r'$\\int_a^b f(x)$',horizontalalignment='center',fontsize=15)\nax.set_xlabel('X')\nax.set_ylabel('Y',rotation=0)\nax.set_xticks((a,b))\nax.set_xticklabels(('a','b'))\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.xaxis.set_ticks_position('bottom')\nax.set_yticks([])\nplt.show()\n```\nThe above code gives: My Question: As you can see in the above plot, the ylabel 'Y' is touching the y-axis, while there is space between the xlabel 'X' and the x-axis. How can I set space between the axis and the label (like the space between the xlabel and the x-axis)? 
My attempt: In the above code I commented out the set_ylabel() method and tried the text() method \n```\n#ax.set_ylabel('Y',rotation=0)\nfig.text(0.1,0.5,'Y')\n#this code creates a little space between the y label and y-axis, but I want the same amount of space as between the x label and x-axis\n```\nExpected output:","response":"You can set the padding by using the labelpad argument like this \n```\nax.set_ylabel('Y', rotation=0, labelpad=10)\n```\nAlternatively, you can add a space after the 'Y' label in the set_ylabel line, as follows \n```\nax.set_ylabel('Y ',rotation=0)\n```\nNote: As you mentioned, you want the same spacing for both axis labels, so you can set the 'X' label using: \n```\nax.text(max(x)\/2, -(max(y)\/10),'X')\n```\nand the 'Y' label using: \n```\nax.text(-(max(x)\/10), max(y)\/2,'Y')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/67253174\/how-to-set-space-between-the-axis-and-the-label","best_answers_votes":34,"tags":["python","matplotlib","visualization"],"question_length":1399,"response_length":438,"tag_count":3} {"question":"Automatically setting y-axis limits for bar graph using matplotlib Here's an example of graphing large values. \n```py\nimport matplotlib.pyplot as plt\nx = [1,2,3,4,5]\ny = [1000, 1002, 1001, 1003, 1005]\nplt.bar(x,y) \nplt.show()\n```\nThe y-axis starts at 0, so the bars all look equal. I know you can use plt.ylim to manually set the limits, but is there a way for matplotlib to automatically (and smartly) set the limits to reasonable values (like 998-1008), and also perhaps show an axis break?","response":"A little bit of simple algebra will help fix the limits: \n```\nimport matplotlib.pyplot as plt\nimport math\nx = [1,2,3,4,5]\ny = [1000, 1002, 1001, 1003, 1005]\nlow = min(y)\nhigh = max(y)\nplt.ylim([math.ceil(low-0.5*(high-low)), math.ceil(high+0.5*(high-low))])\nplt.bar(x,y) \nplt.show()\n```\nIn this way, you are able to find the difference between your y-values and use it to set the scale along the y-axis. I used math.ceil (as opposed to math.floor) in order to obtain the values you specified and ensure integers. As far as an axis break goes, I'd suggest looking at this example.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11216319\/automatically-setting-y-axis-limits-for-bar-graph-using-matplotlib","best_answers_votes":33,"tags":["python","matplotlib","plot","bar-chart"],"question_length":492,"response_length":581,"tag_count":4} {"question":"Plotting histograms against classes in pandas \/ matplotlib Is there an idiomatic way to plot the histogram of a feature for two classes? In pandas, I basically want \n```\ndf.feature[df.class == 0].hist()\ndf.feature[df.class == 1].hist()\n```\nTo be in the same plot. I could do \n```\ndf.feature.hist(by=df.class)\n```\nbut that gives me two separate plots. This seems to be a common task so I would imagine there to be an idiomatic way to do this. Of course I could manipulate the histograms manually to fit next to each other but usually pandas does that quite nicely. Basically I want this matplotlib example in one line of pandas: http:\/\/matplotlib.org\/examples\/pylab_examples\/barchart_demo.html I thought I was missing something, but maybe it is not possible (yet).","response":"How about df.groupby(\"class\").feature.hist()? To see overlapping distributions you'll probably need to pass alpha=0.4 to hist(). 
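For reference, here is a minimal self-contained sketch of that groupby idea (the toy data and column names are made up for illustration, and the class column is named cls here because class is a reserved word in Python): \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n# toy data: one numeric feature and a two-class label column\nrng = np.random.default_rng(0)\ndf = pd.DataFrame({\"feature\": np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)]),\n \"cls\": [0]*200 + [1]*200})\n\n# overlapping histograms on a single axes, one call per class\nfig, ax = plt.subplots()\nfor label, grp in df.groupby(\"cls\"):\n grp.feature.hist(alpha=0.4, ax=ax, label=str(label))\nax.legend()\nplt.show()\n```\n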
Alternatively, I'd be tempted to use a kernel density estimate instead of a histogram with df.groupby(\"class\").feature.plot(kind='kde'). As an example, I plotted the iris dataset's classes using: \n```\niris.groupby(\"Name\").PetalWidth.plot(kind='kde', ax=axs[1])\niris.groupby(\"Name\").PetalWidth.hist(alpha=0.4, ax=axs[0])\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21548750\/plotting-histograms-against-classes-in-pandas-matplotlib","best_answers_votes":30,"tags":["python","matplotlib","plot","pandas"],"question_length":762,"response_length":452,"tag_count":4} {"question":"Matplotlib does not display hatching when rendering to pdf I am attempting to use the hatching feature in matplotlib, which works fine when displaying to screen. However when I save the figure to pdf format, the hatch marks are not rendered: \n```\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(0,2*np.pi,100)\n\nplt.figure()\nplt.fill(x,np.sin(x),color='blue',alpha=0.5,hatch='\/')\nplt.show()\nplt.savefig('.\/test.pdf',format='pdf')\n```\nI am using matplotlib 1.0.1 in pylab on OS X 10.6.6. This may be a platform-specific issue having to do with the backend renderer, but I'm not sure. Any suggestions would be most appreciated.","response":"TL;DR: use alpha=.99 to render hatches when exporting to PDF It's nearly 2020 and the bug still exists when using plt.bar(). When rendering to PNG, everything is rendered properly. However, PDF export has a glitch when rendering hatches. Hatches are not visible, or only sometimes visible when zooming in\/out (and sometimes not, when tested on different computers); it is not clear where the bug comes from. We realized it's linked with the alpha option. When using alpha=.5, the color is 50% visible, as well as the hatches (50% visible as well). That's a good step: the hatches are almost visible. Therefore, let's just try alpha=.99 so that everything is nearly 100% visible. It works! Hooray! In our workaround, there is no need to duplicate lines like in the previous answer. Keep the color option as it is and just set alpha=.99.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5195466\/matplotlib-does-not-display-hatching-when-rendering-to-pdf","best_answers_votes":21,"tags":["python","matplotlib"],"question_length":662,"response_length":801,"tag_count":2} {"question":"Removing axis margins in 3D plot I spent the last few days trying to find a way to remove the tiny margins from the axes in a 3D plot. I tried ax.margins(0) and ax.autoscale_view('tight') and other approaches, but these small margins are still there. In particular, I don't like that the bar histograms are elevated, i.e., their bottom is not at the zero level -- see example image. In gnuplot, I would use \"set xyplane at 0\". In matplotlib, since there are margins on every axis on both sides, it would be great to be able to control each of them. Edit: HYRY's solution below works well, but the 'X' axis gets a grid line drawn over it at Y=0:","response":"There is no property or method that can modify these margins. You need to patch the source code. 
Here is an example: \n```\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport numpy as np\n###patch start###\nfrom mpl_toolkits.mplot3d.axis3d import Axis\nif not hasattr(Axis, \"_get_coord_info_old\"):\n def _get_coord_info_new(self, renderer):\n mins, maxs, centers, deltas, tc, highs = self._get_coord_info_old(renderer)\n mins += deltas \/ 4\n maxs -= deltas \/ 4\n return mins, maxs, centers, deltas, tc, highs\n Axis._get_coord_info_old = Axis._get_coord_info \n Axis._get_coord_info = _get_coord_info_new\n###patch end###\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nfor c, z in zip(['r', 'g', 'b', 'y'], [30, 20, 10, 0]):\n xs = np.arange(20)\n ys = np.random.rand(20)\n\n # You can provide either a single color or an array. To demonstrate this,\n # the first bar of each set will be colored cyan.\n cs = [c] * len(xs)\n cs[0] = 'c'\n ax.bar(xs, ys, zs=z, zdir='y', color=cs, alpha=0.8)\n\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel('Z')\n\nplt.show()\n```\nThe result is: Edit To change the color of the grid lines: \n```\nfor axis in (ax.xaxis, ax.yaxis, ax.zaxis):\n axis._axinfo['grid']['color'] = 0.7, 1.0, 0.7, 1.0\n```\nEdit2 Set X & Y lim: \n```\nax.set_ylim3d(-1, 31)\nax.set_xlim3d(-1, 21)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16488182\/removing-axis-margins-in-3d-plot","best_answers_votes":18,"tags":["python","matplotlib","axis","margins","matplotlib-3d"],"question_length":667,"response_length":1388,"tag_count":5} {"question":"Multiple figures in a single window I want to create a function which plot on screen a set of figures in a single window. By now I write this code: \n```\nimport pylab as pl\n\ndef plot_figures(figures):\n \"\"\"Plot a dictionary of figures.\n\n Parameters\n ----------\n figures : dictionary\n\n \"\"\"\n for title in figures:\n pl.figure()\n pl.imshow(figures[title])\n pl.gray()\n pl.title(title)\n pl.axis('off')\n```\nIt works perfectly but I would like to have the option for plotting all the figures in single window. And this code doesn't. I read something about subplot but it looks quite tricky.","response":"You can define a function based on the subplots command (note the s at the end, different from the subplot command pointed by urinieto) of matplotlib.pyplot. Below is an example of such a function, based on yours, allowing to plot multiples axes in a figure. You can define the number of rows and columns you want in the figure layout. \n```\ndef plot_figures(figures, nrows = 1, ncols=1):\n \"\"\"Plot a dictionary of figures.\n\n Parameters\n ----------\n figures : dictionary\n ncols : number of columns of subplots wanted in the display\n nrows : number of rows of subplots wanted in the figure\n \"\"\"\n\n fig, axeslist = plt.subplots(ncols=ncols, nrows=nrows)\n for ind,title in enumerate(figures):\n axeslist.ravel()[ind].imshow(figures[title], cmap=plt.gray())\n axeslist.ravel()[ind].set_title(title)\n axeslist.ravel()[ind].set_axis_off()\n plt.tight_layout() # optional\n```\nBasically, the function creates a number of axes in the figures, according to the number of rows (nrows) and columns (ncols) you want, and then iterates over the list of axis to plot your images and adds the title for each of them. Note that if you only have one image in your dictionary, your previous syntax plot_figures(figures) will work since nrows and ncols are set to 1 by default. 
An example of what you can obtain: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# generation of a dictionary of (title, images)\nnumber_of_im = 6\nfigures = {'im'+str(i): np.random.randn(100, 100) for i in range(number_of_im)}\n\n# plot of the images in a figure, with 2 rows and 3 columns\nplot_figures(figures, 2, 3)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11159436\/multiple-figures-in-a-single-window","best_answers_votes":22,"tags":["python","image","matplotlib","subplot"],"question_length":649,"response_length":1648,"tag_count":4} {"question":"Running matplotlib in tkinter I have this beautiful sphere I made in matplotlib. How would I go about putting it in a tkinter frame widget? It'd be nice to be able to integrate it into an existing tkinter GUI. Also is it possible to rid of the menu bar below the display? I have no need to save the output or zoom, so it's useless to me. \n```\nfrom mpl_toolkits.mplot3d import axes3d,Axes3D\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nimport numpy as np\n\nfig = plt.figure()\nax = Axes3D(fig) #<-- Note the difference from your original code..\n\nu = np.linspace(0, 2 * np.pi, 100)\nv = np.linspace(0, np.pi, 100)\n\nx = 10 * np.outer(np.cos(u), np.sin(v))\ny = 10 * np.outer(np.sin(u), np.sin(v))\nz = 10 * np.outer(np.ones(np.size(u)), np.cos(v))\n\nax.plot_surface(x, y, z, rstride=4, cstride=4, color='lightgreen',linewidth=0)\n#,antialiased=False\n#cmap=cm.jet\nplt.show()\n```","response":"Have a look at the examples for embedding plots in a tk GUI, it should be enough to get you started in the right direction. user_interfaces example code: embedding_in_tk.py user_interfaces example code: embedding_in_tk2.py As for removing the toolbar, it's a case of not adding it when you are embedding plots in a GUI. If you are using matplotlib.pyplot the toolbar will be created automatically for every figure. If you are writing your own user interface code, you can add the toolbar as a widget.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3845407\/running-matplotlib-in-tkinter","best_answers_votes":22,"tags":["python","matplotlib","tkinter"],"question_length":882,"response_length":500,"tag_count":3} {"question":"Indicating the statistically significant difference in bar graph I use a bar graph to indicate the data of each group. Some of these bars differ significantly from each other. How can I indicate the significant difference in the bar plot? 
\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nmenMeans = (5, 15, 30, 40)\nmenStd = (2, 3, 4, 5)\nind = np.arange(4) # the x locations for the groups\nwidth=0.35\np1 = plt.bar(ind, menMeans, width=width, color='r', yerr=menStd)\nplt.xticks(ind+width\/2., ('A', 'B', 'C', 'D') )\n```\nI am aiming for","response":"The answer above inspired me to write a small but flexible function myself: \n```\ndef barplot_annotate_brackets(num1, num2, data, center, height, yerr=None, dh=.05, barh=.05, fs=None, maxasterix=None):\n \"\"\" \n Annotate barplot with p-values.\n\n :param num1: number of left bar to put bracket over\n :param num2: number of right bar to put bracket over\n :param data: string to write or number for generating asterixes\n :param center: centers of all bars (like plt.bar() input)\n :param height: heights of all bars (like plt.bar() input)\n :param yerr: yerrs of all bars (like plt.bar() input)\n :param dh: height offset over bar \/ bar + yerr in axes coordinates (0 to 1)\n :param barh: bar height in axes coordinates (0 to 1)\n :param fs: font size\n :param maxasterix: maximum number of asterixes to write (for very small p-values)\n \"\"\"\n\n if type(data) is str:\n text = data\n else:\n # * is p < 0.05\n # ** is p < 0.005\n # *** is p < 0.0005\n # etc.\n text = ''\n p = .05\n\n while data < p:\n text += '*'\n p \/= 10.\n\n if maxasterix and len(text) == maxasterix:\n break\n\n if len(text) == 0:\n text = 'n. s.'\n\n lx, ly = center[num1], height[num1]\n rx, ry = center[num2], height[num2]\n\n if yerr:\n ly += yerr[num1]\n ry += yerr[num2]\n\n ax_y0, ax_y1 = plt.gca().get_ylim()\n dh *= (ax_y1 - ax_y0)\n barh *= (ax_y1 - ax_y0)\n\n y = max(ly, ry) + dh\n\n barx = [lx, lx, rx, rx]\n bary = [y, y+barh, y+barh, y]\n mid = ((lx+rx)\/2, y+barh)\n\n plt.plot(barx, bary, c='black')\n\n kwargs = dict(ha='center', va='bottom')\n if fs is not None:\n kwargs['fontsize'] = fs\n\n plt.text(*mid, text, **kwargs)\n```\nwhich allows me to get some nice annotations relatively simple, e.g.: \n```\nheights = [1.8, 2, 3]\nbars = np.arange(len(heights))\n\nplt.figure()\nplt.bar(bars, heights, align='center')\nplt.ylim(0, 5)\nbarplot_annotate_brackets(0, 1, .1, bars, heights)\nbarplot_annotate_brackets(1, 2, .001, bars, heights)\nbarplot_annotate_brackets(0, 2, 'p < 0.0075', bars, heights, dh=.2)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11517986\/indicating-the-statistically-significant-difference-in-bar-graph","best_answers_votes":33,"tags":["python","matplotlib","bar-chart","plot-annotations"],"question_length":548,"response_length":2160,"tag_count":4} {"question":"Python\/Matplotlib - How to put text in the corner of equal aspect figure I would like to put text in the right bottom corner of equal aspect figure. I set the position relative to the figure by ax.transAxes, but I have to define the relative coordinate value manually depending on height scales of each figures. What would be a good way to know axes height scale and the correct text position within the script? \n```\nax = plt.subplot(2,1,1)\n ax.plot([1,2,3],[1,2,3])\n ax.set_aspect('equal')\n ax.text(1,-0.15, 'text', transform=ax.transAxes, ha='right', fontsize=16)\n print ax.get_position().height\n\n ax = plt.subplot(2,1,2)\n ax.plot([10,20,30],[1,2,3])\n ax.set_aspect('equal')\n ax.text(1,-0.15, 'text', transform=ax.transAxes, ha='right', fontsize=16)\n print ax.get_position().height\n```","response":"Use annotate. In fact, I hardly ever use text. 
Even when I want to place things in data coordinates, I usually want to offset it by some fixed distance in points, which is much easier with annotate. As a quick example: \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=2, subplot_kw=dict(aspect=1))\n\naxes[0].plot(range(1, 4))\naxes[1].plot(range(10, 40, 10), range(1, 4))\n\nfor ax in axes:\n ax.annotate('Test', xy=(1, 0), xycoords='axes fraction', fontsize=16,\n horizontalalignment='right', verticalalignment='bottom')\nplt.show()\n```\nIf you'd like it slightly offset from the corner, you can specify an offset through the xytext kwarg (and textcoords to control how the values of xytext are interpreted). I'm also using the ha and va abbreviations for horizontalalignment and verticalalignment here: \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=2, subplot_kw=dict(aspect=1))\n\naxes[0].plot(range(1, 4))\naxes[1].plot(range(10, 40, 10), range(1, 4))\n\nfor ax in axes:\n ax.annotate('Test', xy=(1, 0), xycoords='axes fraction', fontsize=16,\n xytext=(-5, 5), textcoords='offset points',\n ha='right', va='bottom')\nplt.show()\n```\nIf you're trying to place it below the axes, you can use the offset to place it a set distance below in points: \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=2, subplot_kw=dict(aspect=1))\n\naxes[0].plot(range(1, 4))\naxes[1].plot(range(10, 40, 10), range(1, 4))\n\nfor ax in axes:\n ax.annotate('Test', xy=(1, 0), xycoords='axes fraction', fontsize=16,\n xytext=(0, -15), textcoords='offset points',\n ha='right', va='top')\nplt.show()\n```\nAlso have a look at the Matplotlib annotation guide for more information.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16122362\/python-matplotlib-how-to-put-text-in-the-corner-of-equal-aspect-figure","best_answers_votes":59,"tags":["python","matplotlib"],"question_length":787,"response_length":1774,"tag_count":2} {"question":"Diagonal tick labels I am plotting a bar chart in python using matplotlib.pyplot. The chart will contain a large number of bars, and each bar has its own label. Thus, the labels overlap, and they are no more readable. I would like that the label are displayed diagonally so that they do not overlab, such as in this image. This is my code: \n```\nimport matplotlib.pyplot as plt\nN =100\nmenMeans = range(N)\nind = range(N) \nticks = ind \nfig = plt.figure()\nax = fig.add_subplot(111)\nrects1 = ax.bar(ind, menMeans, align = 'center')\nax.set_xticks(ind)\nax.set_xticklabels( range(N) )\nplt.show()\n```\nHow can the labels be displayed diagonally?","response":"Instead of using set_xticks or set_xticklabels, which are both officially discouraged, you can just use the rotation parameter to xticks: \n```\nplt.xticks(rotation=45, ha=\"right\")\n```\nThis way you can specify the tick labels' rotation while letting matplotlib take care of their frequency\/spacing for you. Note that the use of ha=\"right\" to right-align the label text doesn't matter if your labels are all short (and you may want to remove it in that case), but is important if your labels are long and of variable length - it ensures that the end of the tick label is directly below the tick it's labelling, and prevents labels from being inconsistently spaced or even overlapping. 
A full working example, based on the code in the question: \n```\nimport matplotlib.pyplot as plt\nN =100\nmenMeans = range(N)\nind = range(N) \nticks = ind \nfig = plt.figure()\nax = fig.add_subplot(111)\nrects1 = ax.bar(ind, menMeans, align = 'center')\nplt.xticks(rotation=45, ha=\"right\")\nplt.show()\n```\nOutput:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17827748\/diagonal-tick-labels","best_answers_votes":39,"tags":["python","matplotlib","bar-chart","x-axis"],"question_length":636,"response_length":987,"tag_count":4} {"question":"Combine picture and plot with Python Matplotlib I have a plot which has timestamps on the x-axis and some signal data on the y-axis. As a documentation I want to put timestamped pictures in relation to specific points in the plot. Is it possible to draw a line in a plot to a picture in a sequence of pictures below the plot?","response":"This demo from the matplotlib gallery shows how to insert pictures, draw lines to them, etc. I'll post the image from the gallery, and you can follow the link to see the code. And here's the code (from version 2.1.2): \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom matplotlib.patches import Circle\nfrom matplotlib.offsetbox import (TextArea, DrawingArea, OffsetImage,\n AnnotationBbox)\nfrom matplotlib.cbook import get_sample_data\n\n\nif 1:\n fig, ax = plt.subplots()\n\n # Define a 1st position to annotate (display it with a marker)\n xy = (0.5, 0.7)\n ax.plot(xy[0], xy[1], \".r\")\n\n # Annotate the 1st position with a text box ('Test 1')\n offsetbox = TextArea(\"Test 1\", minimumdescent=False)\n\n ab = AnnotationBbox(offsetbox, xy,\n xybox=(-20, 40),\n xycoords='data',\n boxcoords=\"offset points\",\n arrowprops=dict(arrowstyle=\"->\"))\n ax.add_artist(ab)\n\n # Annotate the 1st position with another text box ('Test')\n offsetbox = TextArea(\"Test\", minimumdescent=False)\n\n ab = AnnotationBbox(offsetbox, xy,\n xybox=(1.02, xy[1]),\n xycoords='data',\n boxcoords=(\"axes fraction\", \"data\"),\n box_alignment=(0., 0.5),\n arrowprops=dict(arrowstyle=\"->\"))\n ax.add_artist(ab)\n\n # Define a 2nd position to annotate (don't display with a marker this time)\n xy = [0.3, 0.55]\n\n # Annotate the 2nd position with a circle patch\n da = DrawingArea(20, 20, 0, 0)\n p = Circle((10, 10), 10)\n da.add_artist(p)\n\n ab = AnnotationBbox(da, xy,\n xybox=(1.02, xy[1]),\n xycoords='data',\n boxcoords=(\"axes fraction\", \"data\"),\n box_alignment=(0., 0.5),\n arrowprops=dict(arrowstyle=\"->\"))\n\n ax.add_artist(ab)\n\n # Annotate the 2nd position with an image (a generated array of pixels)\n arr = np.arange(100).reshape((10, 10))\n im = OffsetImage(arr, zoom=2)\n im.image.axes = ax\n\n ab = AnnotationBbox(im, xy,\n xybox=(-50., 50.),\n xycoords='data',\n boxcoords=\"offset points\",\n pad=0.3,\n arrowprops=dict(arrowstyle=\"->\"))\n\n ax.add_artist(ab)\n\n # Annotate the 2nd position with another image (a Grace Hopper portrait)\n fn = get_sample_data(\"grace_hopper.png\", asfileobj=False)\n arr_img = plt.imread(fn, format='png')\n\n imagebox = OffsetImage(arr_img, zoom=0.2)\n imagebox.image.axes = ax\n\n ab = AnnotationBbox(imagebox, xy,\n xybox=(120., -80.),\n xycoords='data',\n boxcoords=\"offset points\",\n pad=0.5,\n arrowprops=dict(\n arrowstyle=\"->\",\n connectionstyle=\"angle,angleA=0,angleB=90,rad=3\")\n )\n\n ax.add_artist(ab)\n\n # Fix the display limits to see everything\n ax.set_xlim(0, 1)\n ax.set_ylim(0, 1)\n\n 
plt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3765056\/combine-picture-and-plot-with-python-matplotlib","best_answers_votes":27,"tags":["python","matplotlib"],"question_length":325,"response_length":3240,"tag_count":2} {"question":"Remove Seaborn barplot legend title I use seaborn to plot a grouped bar plot as in https:\/\/seaborn.pydata.org\/examples\/factorplot_bars.html Giving me: https:\/\/seaborn.pydata.org\/_images\/factorplot_bars.png There is a title (sex) on the legend which I would like to remove. How could I achieve that?","response":"This may be a hacky solution but it works: if you tell Seaborn to leave it off at the time of plotting and then add it back, it doesn't have the legend title: \n```\ng = sns.factorplot(x='Age Group',y='ED',hue='Became Member',col='Coverage Type',\n col_wrap=3,data=gdf,kind='bar',ci=None,legend=False,palette='muted')\n# ^^^^^^^^^^^^\nplt.suptitle('ED Visit Rate per 1,000 Members per Year',size=16)\nplt.legend(loc='best')\nplt.subplots_adjust(top=.925)\nplt.show()\n```\nExample result:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43151440\/remove-seaborn-barplot-legend-title","best_answers_votes":16,"tags":["python","matplotlib","bar-chart","seaborn"],"question_length":298,"response_length":551,"tag_count":4} {"question":"Draw arrow outside plot in Matplotlib I have the following gridded plot, and would like to draw an arrow (shown in blue using MS paint). How can I do it through matplotlib? I do not know of any command to do it.","response":"```\nimport matplotlib.pyplot as plt\n\nfg = plt.figure(1);\nfg.clf();\nax = fg.add_subplot(1,1,1)\nax.annotate('', xy=(0, -0.1), xycoords='axes fraction', xytext=(1, -0.1), \n arrowprops=dict(arrowstyle=\"<->\", color='b'))\nax.grid(True)\nfg.canvas.draw()\n```\ngives","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23922804\/draw-arrow-outside-plot-in-matplotlib","best_answers_votes":26,"tags":["python","matplotlib"],"question_length":211,"response_length":267,"tag_count":2} {"question":"seaborn boxplot: Change color and shape of mean Simple question that I cannot seem to find the answer to. How do I change the color and shape of the mean indicator in a Seaborn boxplot? It defaults to a green triangle and it is generally difficult to see. I've tried to find the answer in both the seaborn documentation and the matplotlib documentation. There is also a related question on stackoverflow where someone asked how to change around colors related to the seaborn boxplots and was able to change everything except for the mean indicator. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\ndata = [[np.random.rand(100)] for i in range(3)]\n\nsns.boxplot(data=data, showmeans=True)\n\nplt.show()\n```","response":"The keyword argument you are looking for is meanprops. 
It is in the matplotlib boxplot documentation under \"other parameters\": \n```\nimport seaborn as sns\n\ndata = [[np.random.rand(100)] for i in range(3)]\n\nsns.boxplot(data=data, showmeans=True,\n meanprops={\"marker\":\"s\",\"markerfacecolor\":\"white\", \"markeredgecolor\":\"blue\"})\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/54132989\/seaborn-boxplot-change-color-and-shape-of-mean","best_answers_votes":43,"tags":["python-3.x","matplotlib","seaborn","boxplot"],"question_length":737,"response_length":349,"tag_count":4} {"question":"Setting axis labels for histogram pandas I'm fairly new to this, so there might be a very obvious answer to this. My apologies! I'm plotting two histograms via a groupby. I'd like my subplots to each have the same x and y labels and a common title. I understood that sharex=True would do the trick, but apparently not if I set the axis only after the df.hist. I've tried various versions of setting the xlabels and am lost now. \n```\nimport pylab as pl\nfrom pandas import *\n\nhisto_survived = df.groupby('Survived').hist(column='Age', sharex=True, sharey=True)\npl.title(\"Histogram of Ages\")\npl.xlabel(\"Age\")\npl.ylabel(\"Individuals\")\n```\nSo what I end up with is labels only for the subplot. \n```\nOut: \n```\nAny idea on how to solve this? (Have to use pandas\/python.)","response":"Labels are properties of axes objects that need to be set on each of them. Here's an example that worked for me: \n```\nframe = pd.DataFrame([np.random.rand(20), np.sign(np.random.rand(20) - 0.5)]).T\nframe.columns = ['Age', 'Survived']\n\n# Note that you can let the hist function do the groupby\n# the function hist returns the list of axes created\naxarr = frame.hist(column='Age', by = 'Survived', sharex=True, sharey=True, layout = (2, 1))\n\nfor ax in axarr.flatten():\n ax.set_xlabel(\"Age\")\n ax.set_ylabel(\"Individuals\")\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42832675\/setting-axis-labels-for-histogram-pandas","best_answers_votes":40,"tags":["python","pandas","matplotlib"],"question_length":800,"response_length":529,"tag_count":3} {"question":"How do I plot hatched bars using pandas? I am trying to achieve differentiation by hatch pattern instead of by (just) colour. How do I do it using pandas? It's possible in matplotlib, by passing the hatch optional argument as discussed here. I know I can also pass that option to a pandas plot, but I don't know how to tell it to use a different hatch pattern for each DataFrame column. \n```\ndf = pd.DataFrame(rand(10, 4), columns=['a', 'b', 'c', 'd'])\ndf.plot(kind='bar', hatch='\/');\n```\nFor colours, there is the colormap option described here. Is there something similar for hatching? Or can I maybe set it manually by modifying the Axes object returned by plot?","response":"Plot the grouped bars with pandas.DataFrame.plot, and then iterate through the bar patches to add the hatches. 
Tested in python 3.11.4, pandas 2.1.0, matplotlib 3.7.2 \n```py\nimport pandas as pd\n\ndf = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])\n\nax = df.plot(kind='bar', legend=False, figsize=(10, 6), rot=0, width=0.8)\n\nbars = ax.patches\nhatches = ''.join(h*len(df) for h in 'x\/O.')\n\nfor bar, hatch in zip(bars, hatches):\n bar.set_hatch(hatch)\n\nax.legend(loc='lower center', ncol=4, bbox_to_anchor=(0.5, -0.15))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22833404\/how-do-i-plot-hatched-bars-using-pandas","best_answers_votes":31,"tags":["python","matplotlib","plot","pandas"],"question_length":665,"response_length":541,"tag_count":4} {"question":"colored wireframe plot in matplotlib I am trying to color a wireframe plot according to the z-value. I can't find any code examples on the internet. Here is an example of a surface plot that has the colors I want and a wireframe plot where I can't manage to get the colors on the lines: \n```\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nimport matplotlib.pyplot as plt\n\n# some numbers for the data\nP=12000 #W\nQ=1 #kg\/s\nDT=3 #K\ncp=4169.32 #J\/kgK\n\ndDT=np.logspace(-2,0,20,endpoint=True)\ndQ=Q*np.logspace(-3,-1,20,endpoint=True)\n\n# the plotting data\nm1,m2=np.meshgrid(dDT,dQ)\nerr=cp*np.sqrt((m1*Q)**2+(m2*DT)**2)\/P\n\n# the wiremesh plot that i need fixed\nfig=plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.plot_wireframe(m1, m2, err, color=err\/err.max(),cmap='jet')\nax.set_xlabel('dDT')\nax.set_ylabel('DQ')\nax.set_zlabel('relative error')\n\n# the surface plot that has the colors i want\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\nsurf = ax.plot_surface(m1, m2, err,rstride=1, cstride=1, cmap=cm.jet,\n linewidth=0.1, antialiased=False)\n\nfig.colorbar(surf, shrink=0.5, aspect=5)\n\nax.set_xlabel('dDT')\nax.set_ylabel('DQ')\nax.set_zlabel('relative error')\nplt.show()\n```\nThank you for any help!","response":"When you use plot_wireframe, each line can only have one color. Instead, you can use plot_surface. To get plot_surface to set the edgecolors, you need to give it facecolors. Then you can set the alpha of facecolors to zero. \n```\nfrom mpl_toolkits.mplot3d import axes3d\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\n\nX, Y, Z = axes3d.get_test_data(0.2)\n\n# Normalize to [0,1]\nnorm = plt.Normalize(Z.min(), Z.max())\ncolors = cm.viridis(norm(Z))\nrcount, ccount, _ = colors.shape\n\nfig = plt.figure()\nax = fig.add_subplot(projection='3d')\nsurf = ax.plot_surface(X, Y, Z, rcount=rcount, ccount=ccount,\n facecolors=colors, shade=False)\nsurf.set_facecolor((0,0,0,0))\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15134004\/colored-wireframe-plot-in-matplotlib","best_answers_votes":31,"tags":["python","colors","matplotlib","wireframe"],"question_length":1255,"response_length":706,"tag_count":4} {"question":"remove italics in latex subscript in matplotlib I would like to remove the italics font that appears when I use subscripts in labels. For example, the \"Teff\" in the x-label has \"eff\" in italics. I would like latex not render it in such a way. Generally, in latex this can be achieved with the \\rm{} command. However, that does not work in matplotlib. Please help. 
\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(10)\ny = x\n\nplt.plot(x,y,'ro')\nplt.xlabel('Primary T$_{eff}$')\n```","response":"I have encountered this problem many times and it can be solved with this trick \n```\nplt.xlabel(r'Primary T$_{\\rm eff}$')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19671659\/remove-italics-in-latex-subscript-in-matplotlib","best_answers_votes":34,"tags":["python","matplotlib","latex"],"question_length":500,"response_length":125,"tag_count":3} {"question":"Changing matshow xticklabel position from top to bottom of the figure I'm using matshow() in matplotlib and the figures generated have the xticklabels on top by default. Is there a way to put the xticklabels on the bottom instead? Appreciate any help with this. \/M","response":"Try doing the following to adjust the x tick positions: \n```\nax.xaxis.set_ticks_position('bottom')\n```\nSimilarly, for the y ticks: \n```\nax.yaxis.set_ticks_position('left')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17022154\/changing-matshow-xticklabel-position-from-top-to-bottom-of-the-figure","best_answers_votes":31,"tags":["python","numpy","matplotlib","plot"],"question_length":263,"response_length":176,"tag_count":4} {"question":"draw a border around subplots in matplotlib Anyone know how to draw a border around an individual subplot within a figure in matplotlib? I'm using pyplot. eg: \n```\nimport matplotlib.pyplot as plt\nf = plt.figure()\nax1 = f.add_subplot(211)\nax2 = f.add_subplot(212)\n# ax1.set_edgecolor('black')\n```\n..but Axes objects have no 'edgecolor', and I can't seem to find a way to outline the plot from the figure level either. I'm actually wrapping mpl code and adding a wx UI with controls that I would like to have context depending on which subplot is selected. i.e. User clicks on subplot within figure canvas -- subplot is 'selected' (has an outline drawn around it, ideally sawtooth) -- GUI updates to present controls to modify that specific subplot.","response":"You essentially want to draw outside of the axes, right? I adapted this from here. It would need cleaning up, as I used some hard-coded \"fudge-factors\" in there. \n```\n#!\/usr\/bin\/env python\nfrom pylab import *\n\ndef f(t):\n s1 = cos(2*pi*t)\n e1 = exp(-t)\n return multiply(s1,e1)\n\nt1 = arange(0.0, 5.0, 0.1)\nt2 = arange(0.0, 5.0, 0.02)\nt3 = arange(0.0, 2.0, 0.01)\n\nfigure(figsize=(4, 4))\nsub1 = subplot(211)\nl = plot(t1, f(t1), 'bo', t2, f(t2), 'k--', markerfacecolor='green')\ngrid(True)\ntitle('A tale of 2 subplots')\nylabel('Damped oscillation')\n\n## I ADDED THIS\nautoAxis = sub1.axis()\nrec = Rectangle((autoAxis[0]-0.7,autoAxis[2]-0.2),(autoAxis[1]-autoAxis[0])+1,(autoAxis[3]-autoAxis[2])+0.4,fill=False,lw=2)\nrec = sub1.add_patch(rec)\nrec.set_clip_on(False)\n\nsubplot(212)\nplot(t3, cos(2*pi*t3), 'r.')\ngrid(True)\nxlabel('time (s)')\nylabel('Undamped')\n\nsavefig('test.png')\n```\nProduces:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2027592\/draw-a-border-around-subplots-in-matplotlib","best_answers_votes":16,"tags":["python","matplotlib"],"question_length":747,"response_length":888,"tag_count":2} {"question":"matplotlib scatterplot x axis labels When I'm adding the c option to a scatterplot in matplotlib, the x axis labels disappear. 
Here's an example: https:\/\/github.com\/Kornel\/scatterplot-matplotlib\/blob\/master\/Scatter%20plot%20x%20axis%20labels.ipynb Here's the same example as in the notebook: \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n\ntest_df = pd.DataFrame({\n \"X\": [1, 2, 3, 4],\n \"Y\": [5, 4, 2, 1],\n \"C\": [1, 2, 3, 4]\n })\n```\nNow compare the result of: \n```\ntest_df.plot(kind=\"scatter\", x=\"X\", y=\"Y\", s=50);\n```\nTo: \n```\ntest_df.plot(kind=\"scatter\", x=\"X\", y=\"Y\", c=\"C\");\n```\nWhere are the x axis labels? Is this a feature I'm missing? Pandas version: 0.18.1 Matplotlib: 1.5.3 Python: 3.5.2 EDIT: The solution as pointed out by @Kewl is to call plt.subplots and specify the axes: \n```\nfig, ax = plt.subplots()\ntest_df.plot(kind=\"scatter\", x=\"X\", y=\"Y\", s=50, c=\"C\", cmap=\"plasma\", ax=ax);\n```\ngives P.S. It looks like a jupyter issue; the label is fine when called outside a jupyter notebook","response":"That looks like a strange bug with pandas plotting to me! Here's a way around it: \n```\nfig, ax = plt.subplots()\ndf.plot(kind='scatter',x='X', y='Y', c='C', ax=ax)\nax.set_xlabel(\"X\")\nplt.show()\n```\nThis will give you the graph you expect:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43121584\/matplotlib-scatterplot-x-axis-labels","best_answers_votes":25,"tags":["python","pandas","matplotlib"],"question_length":1032,"response_length":237,"tag_count":3} {"question":"Setting the size of the plotting canvas in Matplotlib I would like Matplotlib\/Pyplot to generate plots with a consistent canvas size. That is, the figures can well have different sizes to accommodate the axis descriptions, but the plotting area (the rectangle within which the curves are drawn) should always have the same size. Is there a simple way to achieve that? The option figsize of pyplot.figure() seems to set the overall size of the figure, not that of the canvas, so I get a different canvas size whenever the axis description occupies more or less space.","response":"This is one of my biggest frustrations with Matplotlib. I often work with raster data where, for example, I want to add a colormap, legend and some title. Any simple example from the matplotlib gallery doing so will result in a different resolution and therefore resampled data. Especially when doing image analysis you don't want any (unwanted) resampling. Here is what I usually do, although I would love to know if there are simpler or better ways. Let's start with loading a picture and outputting it just as it is, with the same resolution: \n```\nimport matplotlib.pyplot as plt\nimport urllib2\n\n# load the image\nimg = plt.imread(urllib2.urlopen('http:\/\/upload.wikimedia.org\/wikipedia\/en\/thumb\/5\/56\/Matplotlib_logo.svg\/500px-Matplotlib_logo.svg.png'))\n\n# get the dimensions\nypixels, xpixels, bands = img.shape\n\n# get the size in inches\ndpi = 72.\nxinch = xpixels \/ dpi\nyinch = ypixels \/ dpi\n\n# plot and save in the same size as the original\nfig = plt.figure(figsize=(xinch,yinch))\n\nax = plt.axes([0., 0., 1., 1.], frameon=False, xticks=[],yticks=[])\nax.imshow(img, interpolation='none')\n\nplt.savefig('D:\\\\mpl_logo.png', dpi=dpi, transparent=True)\n```\nNote that I manually defined the axes position so that it spans the entire figure. In a similar way as above you could add some margin around the image to allow for labels or colorbars etc. 
This example adds a 20% margin above the image, which is then used for plotting a title: \n```\nfig = plt.figure(figsize=(xinch,yinch\/.8))\n\nax = plt.axes([0., 0., 1., .8], frameon=False, xticks=[],yticks=[])\nax.imshow(img, interpolation='none')\nax.set_title('Matplotlib is fun!', size=16, weight='bold')\n\nplt.savefig('D:\\\\mpl_logo_with_title.png', dpi=dpi)\n```\nSo the figure y-size (height) is increased and the y-size of the axes is decreased equally. This gives a larger (overall) output image, but the axes area will still be the same size. It might be nice to have a figure or axes property like .set_scale() to force a true 1-on-x output.","response":"Tested in python 3.8.12, matplotlib 3.4.3 \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.lines import Line2D\n\nx = np.linspace(0,10,100)\ny = np.sin(x)*(1+x)\n\nfig, ax = plt.subplots()\nax.plot(x,y,label='a')\n\n# new clear axis overlay with 0-1 limits\nax2 = plt.axes([0,0,1,1], facecolor=(1,1,1,0))\n\nx,y = np.array([[0.05, 0.1, 0.9], [0.05, 0.5, 0.9]])\nline = Line2D(x, y, lw=5., color='r', alpha=0.4)\nax2.add_line(line)\n\nplt.show()\n```\nBut if you want to align with ticks, then why not use plot coordinates?","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5021663\/how-to-draw-a-line-outside-of-an-axis-in-matplotlib-in-figure-coordinates","best_answers_votes":22,"tags":["python","matplotlib","plot"],"question_length":289,"response_length":530,"tag_count":3} {"question":"Scatter plot colorbar - Matplotlib I'm trying to show a color bar for my scatter plot but I keep getting the error: TypeError: You must first set_array for mappable This is what I'm doing to plot: \n```\n# Just plotting the values of data that are nonzero \nx_data = numpy.nonzero(data)[0] # x coordinates\ny_data = numpy.nonzero(data)[1] # y coordinates\n\n# Mapping the values to RGBA colors\ndata = plt.cm.jet(data[x_data, y_data])\n\npts = plt.scatter(x_data, y_data, marker='s', color=data)\n\nplt.colorbar(pts)\n```\nIf I comment out the line plt.colorbar(pts) I get the plot correctly, but I would like to plot the color bar too. Thank you in advance.","response":"You're passing in specific rgb values, so matplotlib can't construct a colormap, because it doesn't know how it relates to your original data. Instead of mapping the values to RGB colors, let scatter handle that for you. Instead of: \n```\n# Mapping the values to RGBA colors\ndata = plt.cm.jet(data[x_data, y_data])\n\npts = plt.scatter(x_data, y_data, marker='s', color=data)\n```\nDo: \n```\npts = plt.scatter(x_data, y_data, marker='s', c=data[x_data, y_data])\n```\n(Just pass in to c what you were originally passing into plt.cm.jet.) Then you'll be able to construct a colormap normally. 
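For reference, here is a minimal runnable sketch of that fix (the array here is made-up stand-in data, not the original): \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# toy 2D array standing in for the original data\ndata = np.random.rand(20, 20)\nx_data, y_data = np.nonzero(data > 0.7)\n\n# let scatter map the raw values to colors via a colormap;\n# the returned mappable can then feed a colorbar\npts = plt.scatter(x_data, y_data, marker='s', c=data[x_data, y_data], cmap='jet')\nplt.colorbar(pts)\nplt.show()\n```\n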
The specific error is telling you that the colors have been manually set, rather than set through set_array (which handles mapping an array of data values to RGB).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24833102\/scatter-plot-colorbar-matplotlib","best_answers_votes":15,"tags":["python","matplotlib","plot","colorbar"],"question_length":642,"response_length":747,"tag_count":4} {"question":"Set minor tick label spacing on a log axis, and change colorbar tick label size I am trying to create a plot, but I just want specific minor tick labels to show on the log-scaled axes, as shown above. I only want the minor tick labels for 50, 500 and 2000 to show. Is there any way to specify which minor tick labels should show? I have been trying to figure this out for a bit but haven't found a good solution. All I can think of is to get the minorticklabels() and set the fontsize to 0. This is shown below the first snippet of code. I was hoping there was a cleaner solution. The other thing is changing the size of the ticklabels in the colorbar, which I haven't figured out. If anyone knows of a way to do this please let me know, because I don't see a method in colorbar that easily does this. First code: \n```\nfig = figure(figto)\nax = fig.add_subplot(111)\nactShape = activationTrace.shape\nsemitones = arange(actShape[1])\nfreqArray = arange(actShape[0])\nX,Y = meshgrid(self.testFreqArray,self.testFreqArray)\nZ = sum(activationTrace[:,:,beg:end],axis=2)\nsurf = ax.contourf(X,Y,Z, 8, cmap=cm.jet)\nax.set_position([0.12,0.15,.8,.8])\nax.set_ylabel('Log Frequency (Hz)')\nax.set_xlabel('Log Frequency (Hz)')\nax.set_xscale('log')\nax.set_yscale('log')\nax.xaxis.set_minor_formatter(FormatStrFormatter('%d'))\nax.yaxis.set_ticks_position('left')\nax.xaxis.set_ticks_position('bottom')\nax.tick_params(axis='both',reset=False,which='both',length=8,width=2)\nself.plotSetAxisLabels(ax,22)\nself.plotSetAxisTickLabels(ax,18)\ncbar = fig.colorbar(surf, shrink=0.5, aspect=20, fraction=.12,pad=.02)\ncbar.set_label('Activation',size=18)\nreturn ax, cbar\n```\nSecond Code: \n```\nfig = figure(figto)\nax = fig.add_subplot(111)\nactShape = activationTrace.shape\nsemitones = arange(actShape[1])\nfreqArray = arange(actShape[0])\nX,Y = meshgrid(self.testFreqArray,self.testFreqArray)\nZ = sum(activationTrace[:,:,beg:end],axis=2)\nsurf = ax.contourf(X,Y,Z, 8, cmap=cm.jet)\nax.set_position([0.12,0.15,.8,.8])\nax.set_ylabel('Log Frequency (Hz)')\nax.set_xlabel('Log Frequency (Hz)')\nax.set_xscale('log')\nax.set_yscale('log')\nax.xaxis.set_minor_formatter(FormatStrFormatter('%d'))\nax.yaxis.set_minor_formatter(FormatStrFormatter('%d'))\nax.yaxis.set_ticks_position('left')\nax.xaxis.set_ticks_position('bottom')\nax.tick_params(axis='both',reset=False,which='both',length=8,width=2)\nself.plotSetAxisLabels(ax,22)\nself.plotSetAxisTickLabels(ax,18)\ncbar = fig.colorbar(surf, shrink=0.5, aspect=20, fraction=.12,pad=.02)\ncbar.set_label('Activation',size=18)\ncount = 0\nfor i in ax.xaxis.get_minorticklabels():\n if (count%4 == 0):\n i.set_fontsize(12)\n else:\n i.set_fontsize(0)\n count+=1\nfor i in ax.yaxis.get_minorticklabels():\n if (count%4 == 0):\n i.set_fontsize(12)\n else:\n i.set_fontsize(0)\n count+=1\nreturn ax, cbar\n```\nFor the colorbar: Another quick question, if you don't mind, because I've been trying to figure it out but am not entirely sure: I want to use scientific notation, which I can get with ScalarFormatter. How do I set the number of decimal places and the multiplier? I'd like it to be like 8x10^8 or .8x10^9 to save space instead of putting all those zeros. I figure there are multiple ways to do this inside the axes object, but what do you reckon is the best way? I can't figure out how to change the notation when changing to the ScalarFormatter. For the chart: Also, my data has points starting at 46 and then at successive points, each the previous multiplied by 2^(1\/12), so 46,49,50,55,58,61...3132. These are all rounded but lie close to the 2^(1\/12) progression. I decided it is better to place major ticks close to these numbers. 
I'd like it to be like 8x10^8 or .8x10^9 to save space instead of putting all those zeros. I figure there are multiple ways to do this inside the axes object, but what do you reckon is the best way? I can't figure out how to change the notation when changing to the ScalarFormatter. For the chart: Also, my data has points starting at 46 and then at successive multiples of that, each multiplied by 2^(1\/12), so 46,49,50,55,58,61...3132. These are all rounded but lie close to the 2^(1\/12). I decided it was better to place major ticks close to these numbers. Is the best way to use the fixed formatter, place a tick every 15 or so entries in the freqArray, and then use a minor tick at every other frequency? Can I do this and still maintain a log axis?","response":"Use FixedLocator to statically define explicit tick locations. Colorbar cbar will have an .ax attribute that will provide access to the usual axis methods including tick formatting. This is not a reference to an axes (e.g. ax1, ax2, etc.). \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nx = np.arange(10,3000,100)\ny = np.arange(10,3000,100)\nX,Y = np.meshgrid(x,y)\nZ = np.random.random(X.shape)*8000000\nsurf = ax.contourf(X,Y,Z, 8, cmap=plt.cm.jet)\nax.set_ylabel('Log Frequency (Hz)')\nax.set_xlabel('Log Frequency (Hz)')\nax.set_xscale('log')\nax.set_yscale('log')\nax.xaxis.set_minor_formatter(plt.FormatStrFormatter('%d'))\n# defining custom minor tick locations:\nax.xaxis.set_minor_locator(plt.FixedLocator([50,500,2000]))\nax.yaxis.set_ticks_position('left')\nax.xaxis.set_ticks_position('bottom')\nax.tick_params(axis='both',reset=False,which='both',length=8,width=2)\ncbar = fig.colorbar(surf, shrink=0.5, aspect=20, fraction=.12,pad=.02)\ncbar.set_label('Activation',size=18)\n# access to cbar tick labels:\ncbar.ax.tick_params(labelsize=5) \nplt.show()\n```\nEdit: If you want the tick marks, but you want to selectively show the labels, I see nothing wrong with your iteration, except I might use set_visible instead of making the fontsize zero. You might enjoy finer control using a FuncFormatter where you can use the value or position of the tick to decide whether it gets shown: \n```\ndef show_only_some(x, pos):\n    s = str(int(x))\n    if s[0] in ('2','5'):\n        return s\n    return ''\n\nax.xaxis.set_minor_formatter(plt.FuncFormatter(show_only_some))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6567724\/set-minor-tick-label-spacing-on-a-log-axis-and-change-colorbar-tick-label-size","best_answers_votes":159,"tags":["python","matplotlib","axis","colorbar"],"question_length":3805,"response_length":1611,"tag_count":4} {"question":"Plotting Pandas DataFrames in to Pie Charts using matplotlib Is it possible to print a DataFrame as a pie chart using matplotlib? The Pandas documentation on chart visualization has instructions for plotting a lot of chart types including bar, histogram, scatter plot etc. 
But pie chart is missing?","response":"To plot a pie chart from a dataframe df you can use pandas' plot.pie: \n```\ndf.plot.pie(y='column_name')\n```\nExample: \n```\nimport pandas as pd\n\ndf = pd.DataFrame({'activity': ['Work', 'Sleep', 'Play'],\n                   'hours': [8, 10, 6]})\ndf.set_index('activity', inplace=True)\nprint(df)\n#           hours\n# activity       \n# Work          8\n# Sleep        10\n# Play          6\nplot = df.plot.pie(y='hours', figsize=(7, 7))\n```\nNote that the labels of the pie chart are the index entries; this is the reason for using set_index to set the index to activity. To style the plot, you can use all those arguments that can be passed to DataFrame.plot(); here is an example showing percentages: \n```\nplot = df.plot.pie(y='hours', title=\"Title\", legend=False, \\\n                   autopct='%1.1f%%', explode=(0, 0, 0.1), \\\n                   shadow=True, startangle=0)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21090316\/plotting-pandas-dataframes-in-to-pie-charts-using-matplotlib","best_answers_votes":36,"tags":["python","pandas","matplotlib","pie-chart"],"question_length":296,"response_length":869,"tag_count":4} {"question":"Color matplotlib bar chart based on value Is there a way to color the bars of a barchart based on the bar's value? For example: \n```\n- values below -0.5: red\n- values between -0.5 and 0: green\n- values between 0 and 0.8: blue\n- etc\n```\nI have found some basic examples of bar coloring but nothing which can cater for value ranges, such as the above examples. UPDATE: Thank you kikocorreoso for your suggestion. This works great when both axes are numbers as per your example. However in my case my original data structure is a pandas dataframe. I then use df.stack() and plot the result. This means that the dataframe's rows\/columns become the x axis of the plot and the dataframe cells are the Y axis (bars). I have tried masking as per your example but it doesn't seem to work when the Y axis are numbers and the X axis are names. eg: \n```\n      col1  col2  col3  col4\nrow1     1     2     3     4\nrow2     5     6     7     8\nrow3     9    10    11    12\nrow4    13    14    15    16\n```\nThe above dataframe needs to be plotted as a barchart with the row\/column combinations forming the x-axis. Each cell value will be a bar. And ultimately, coloring the bars as per the original question. Thanks","response":"You could use masks for your datasets. A basic example could be the following: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(10)\ny = np.arange(10) * 0.1\n\nmask1 = y < 0.5\nmask2 = y >= 0.5\n\nplt.bar(x[mask1], y[mask1], color = 'red')\nplt.bar(x[mask2], y[mask2], color = 'blue')\nplt.show()\n```\nThe result should be: UPDATE: As you updated your question, I updated the code. For your simple case, and if I understood correctly, you could do the following (ugly) hack: \n```\nimport pandas as pd\n\ndf = pd.DataFrame({'col1':[1,2,3], 'col2':[4,5,6]}, \n                  index = ['row1','row2','row3'])\n\ndfstacked = df.stack()\n\nmask = dfstacked <= 3\n\ncolors = np.array(['b']*len(dfstacked))\ncolors[mask.values] = 'r'\n\ndfstacked.plot(kind = 'bar', rot = 45, color = colors)\nplt.show()\n```\nOr use a more OO solution. The code briefly explained: I create a mask for my red columns; I create an array of colors; I change the array of colors in order to use another color for my masked values. As the dfstacked dataframe has a MultiIndex the ticks are not well printed, so I use the rot keyword to rotate them. 
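Since I only mention the more OO solution without spelling it out, here is a minimal sketch of what I mean (reusing the dfstacked and colors variables from the example above): \n```\nfig, ax = plt.subplots()\ndfstacked.plot(kind='bar', rot=45, color=colors, ax=ax)  # draw onto an explicit Axes\nax.set_ylabel('value')\nplt.show()\n```\n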
If you want to automate it in order to get a nice plot you can use plt.tight_layout() before plt.show(). I hope it helps.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33476401\/color-matplotlib-bar-chart-based-on-value","best_answers_votes":40,"tags":["python","matplotlib"],"question_length":1199,"response_length":1235,"tag_count":2} {"question":"How to plot one line in different colors I have two list as below: \n```\nlatt=[42.0,41.978567980875397,41.96622693388357,41.963791391892457,...,41.972407378075879]\nlont=[-66.706920989908909,-66.703116557977069,-66.707351643324543,...-66.718218142021925]\n```\nnow I want to plot this as a line, separate each 10 of those 'latt' and 'lont' records as a period and give it a unique color. what should I do?","response":"There are several different ways to do this. The \"best\" approach will depend mostly on how many line segments you want to plot. If you're just going to be plotting a handful (e.g. 10) line segments, then just do something like: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef uniqueish_color():\n \"\"\"There're better ways to generate unique colors, but this isn't awful.\"\"\"\n return plt.cm.gist_ncar(np.random.random())\n\nxy = (np.random.random((10, 2)) - 0.5).cumsum(axis=0)\n\nfig, ax = plt.subplots()\nfor start, stop in zip(xy[:-1], xy[1:]):\n x, y = zip(start, stop)\n ax.plot(x, y, color=uniqueish_color())\nplt.show()\n```\nIf you're plotting something with a million line segments, though, this will be terribly slow to draw. In that case, use a LineCollection. E.g. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.collections import LineCollection\n\nxy = (np.random.random((1000, 2)) - 0.5).cumsum(axis=0)\n\n# Reshape things so that we have a sequence of:\n# [[(x0,y0),(x1,y1)],[(x0,y0),(x1,y1)],...]\nxy = xy.reshape(-1, 1, 2)\nsegments = np.hstack([xy[:-1], xy[1:]])\n\nfig, ax = plt.subplots()\ncoll = LineCollection(segments, cmap=plt.cm.gist_ncar)\ncoll.set_array(np.random.random(xy.shape[0]))\n\nax.add_collection(coll)\nax.autoscale_view()\n\nplt.show()\n```\nFor both of these cases, we're just drawing random colors from the \"gist_ncar\" coloramp. Have a look at the colormaps here (gist_ncar is about 2\/3 of the way down): http:\/\/matplotlib.org\/examples\/color\/colormaps_reference.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17240694\/how-to-plot-one-line-in-different-colors","best_answers_votes":48,"tags":["python","matplotlib"],"question_length":401,"response_length":1524,"tag_count":2} {"question":"Coloring axis\/tick labels How would one color y-axis label and tick labels in red? So for example the \"y-label\" and values 0 through 40, to be colored in red. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(10)\n\nfig = plt.figure()\nax = plt.subplot(111)\nax.set_ylabel(\"y-label\")\n\nfor i in xrange(5):\n ax.plot(x, i * x, label='$y = %ix$' % i)\n\nax.legend()\n\nplt.show()\n```","response":"The xlabel can be colorized when setting it, \n```\nax.set_xlabel(\"x-label\", color=\"red\")\n```\nFor setting the ticklabels' color, one may either use tick_params, which sets the ticklabels' as well as the ticks' color \n```\nax.tick_params(axis='x', colors='red')\n```\nAlternatively, plt.setp can be used to only set the ticklabels' color, without changing the ticks' color. 
\n```\nplt.setp(ax.get_xticklabels(), color=\"red\")\n```\nNote that for changing the properties on the y-axis, one can replace the x with a y in the above.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14165344\/coloring-axis-tick-labels","best_answers_votes":29,"tags":["matplotlib","colors","label","xticks","text-coloring"],"question_length":395,"response_length":518,"tag_count":5} {"question":"How to store an image in a variable I would like to store the image generated by matplotlib in a variable raw_data to use it as inline image. \n```\nimport os\nimport sys\nos.environ['MPLCONFIGDIR'] = '\/tmp\/'\nimport matplotlib\nmatplotlib.use(\"Agg\")\nimport matplotlib.pyplot as plt\n\nprint \"Content-type: image\/png\\n\"\nplt.plot(range(10, 20))\n\nraw_data = plt.show()\n\nif raw_data:\n uri = 'data:image\/png;base64,' + urllib.quote(base64.b64encode(raw_data))\n print '' % uri\nelse:\n print \"No data\"\n\n#plt.savefig(sys.stdout, format='png')\n```\nNone of the functions suit my use case: plt.savefig(sys.stdout, format='png') - Writes it to stdout. This does help.. as I have to embed the image in a html file. plt.show() \/ plt.draw() does nothing when executed from command line","response":"Have you tried cStringIO or an equivalent? \n```\nimport os\nimport sys\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport StringIO\nimport urllib, base64\n\nplt.plot(range(10, 20))\nfig = plt.gcf()\n\nimgdata = StringIO.StringIO()\nfig.savefig(imgdata, format='png')\nimgdata.seek(0) # rewind the data\n\nprint \"Content-type: image\/png\\n\"\nuri = 'data:image\/png;base64,' + urllib.quote(base64.b64encode(imgdata.buf))\nprint '' % uri\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5314707\/how-to-store-an-image-in-a-variable","best_answers_votes":33,"tags":["python","html","image","matplotlib"],"question_length":788,"response_length":446,"tag_count":4} {"question":"multidimensional confidence intervals [closed] Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Guide the asker to update the question so it focuses on a single, specific problem. Narrowing the question will help others answer the question concisely. You may edit the question if you feel you can improve it yourself. If edited, the question will be reviewed and might be reopened. Closed 4 years ago. Improve this question I have numerous tuples (par1,par2), i.e. points in a 2 dimensional parameter space obtained from repeating an experiment multiple times. I'm looking for a possibility to calculate and visualize confidence ellipses (not sure if thats the correct term for this). Here an example plot that I found in the web to show what I mean: source: blogspot.ch\/2011\/07\/classification-and-discrimination-with.html So in principle one has to fit a multivariate normal distribution to a 2D histogram of data points I guess. Can somebody help me with this?","response":"It sounds like you just want the 2-sigma ellipse of the scatter of points? 
If so, consider something like this (From some code for a paper here: https:\/\/github.com\/joferkington\/oost_paper_code\/blob\/master\/error_ellipse.py): \n```\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse\n\ndef plot_point_cov(points, nstd=2, ax=None, **kwargs):\n \"\"\"\n Plots an `nstd` sigma ellipse based on the mean and covariance of a point\n \"cloud\" (points, an Nx2 array).\n\n Parameters\n ----------\n points : An Nx2 array of the data points.\n nstd : The radius of the ellipse in numbers of standard deviations.\n Defaults to 2 standard deviations.\n ax : The axis that the ellipse will be plotted on. Defaults to the \n current axis.\n Additional keyword arguments are pass on to the ellipse patch.\n\n Returns\n -------\n A matplotlib ellipse artist\n \"\"\"\n pos = points.mean(axis=0)\n cov = np.cov(points, rowvar=False)\n return plot_cov_ellipse(cov, pos, nstd, ax, **kwargs)\n\ndef plot_cov_ellipse(cov, pos, nstd=2, ax=None, **kwargs):\n \"\"\"\n Plots an `nstd` sigma error ellipse based on the specified covariance\n matrix (`cov`). Additional keyword arguments are passed on to the \n ellipse patch artist.\n\n Parameters\n ----------\n cov : The 2x2 covariance matrix to base the ellipse on\n pos : The location of the center of the ellipse. Expects a 2-element\n sequence of [x0, y0].\n nstd : The radius of the ellipse in numbers of standard deviations.\n Defaults to 2 standard deviations.\n ax : The axis that the ellipse will be plotted on. Defaults to the \n current axis.\n Additional keyword arguments are pass on to the ellipse patch.\n\n Returns\n -------\n A matplotlib ellipse artist\n \"\"\"\n def eigsorted(cov):\n vals, vecs = np.linalg.eigh(cov)\n order = vals.argsort()[::-1]\n return vals[order], vecs[:,order]\n\n if ax is None:\n ax = plt.gca()\n\n vals, vecs = eigsorted(cov)\n theta = np.degrees(np.arctan2(*vecs[:,0][::-1]))\n\n # Width and height are \"full\" widths, not radius\n width, height = 2 * nstd * np.sqrt(vals)\n ellip = Ellipse(xy=pos, width=width, height=height, angle=theta, **kwargs)\n\n ax.add_artist(ellip)\n return ellip\n\nif __name__ == '__main__':\n #-- Example usage -----------------------\n # Generate some random, correlated data\n points = np.random.multivariate_normal(\n mean=(1,1), cov=[[0.4, 9],[9, 10]], size=1000\n )\n # Plot the raw points...\n x, y = points.T\n plt.plot(x, y, 'ro')\n\n # Plot a transparent 3 standard deviation covariance ellipse\n plot_point_cov(points, nstd=3, alpha=0.5, color='green')\n\n plt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12301071\/multidimensional-confidence-intervals","best_answers_votes":40,"tags":["python","matplotlib","scipy"],"question_length":1031,"response_length":2828,"tag_count":3} {"question":"How to align axis label to the right or top in matplotlib? By default matplotlib plots the axis label at the center of the axis. I would like to move the label in such way that it is aligned with the end of the axis, both for the horizontal and vertical axis. For example for the horizontal axis I would like to see: \n```\n+--------------------+\n| |\n| |\n| |\n| |\n| |\n+--------------------+\n label\n```\nIs it possibile to do it with the global setting of matplotlib?","response":"UPDATE Current 2023's Matplotlib has a new keyword loc=\u2026 for plt.[xy]label. From xlabel doc string loc : {'left', 'center', 'right'}, The label position. 
This is a high-level alternative for passing parameters *x* and *horizontalalignment*. and you can write simply \n```\nplt.xlabel('x_description', loc='right')\nplt.ylabel('y_description', loc='top')\n```\nMy other answer is still a good one, because the idea of getting an object, modifying it and setting it back is a good idea on its own, but here it is an alternative, cleaner solution: \n```\n...\nplt.xlabel('x_description', horizontalalignment='right', x=1.0)\nplt.ylabel('y_description', horizontalalignment='right', y=1.0)\n...\n```\nas you can see, no more magic numbers, and works both for xlabel and ylabel. Note that in both cases we are going to change the horizontal alignment, for reasons that were eventually clear to me when I first changed the vertical alignment in ylabel...","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38186072\/how-to-align-axis-label-to-the-right-or-top-in-matplotlib","best_answers_votes":35,"tags":["python","matplotlib","axis-labels"],"question_length":573,"response_length":936,"tag_count":3} {"question":"How to change the line color in seaborn linear regression jointplot As described in the seaborn API the following code will produce a linear regression plot. \n```\nimport numpy as np, pandas as pd; np.random.seed(0)\nimport seaborn as sns; sns.set(style=\"white\", color_codes=True)\ntips = sns.load_dataset(\"tips\")\ng = sns.jointplot(x=\"total_bill\", y=\"tip\", data=tips, kind='reg')\nsns.plt.show()\n```\nHowever, with a lot of data points the regression line is not really visible anymore. How can I change its color? I could not find a builtin seaborn command. In case the line is in the background (i.e. behind the dots), I would also like to ask how to bring it to the front.","response":"There are a couple approaches, as mwaskom tactfully pointed out. You can pass arguments to the joint plot, but setting color there affects the whole scatterplot: \n```\nimport numpy as np, pandas as pd; np.random.seed(0)\nimport seaborn as sns#; sns.set(style=\"white\", color_codes=True)\ntips = sns.load_dataset(\"tips\")\ng = sns.jointplot(x=\"total_bill\", y=\"tip\", data=tips, kind='reg',\n joint_kws={'color':'green'}) # Scatter and regression all green\n```\nOr pass a dictionary of line-plotting keywords through that dictionary of scatterplot keywords. I read seaborn\/linearmodels.py to figure out where to do this, which was entertaining and informative in itself. Dict in dict: \n```\ng = sns.jointplot(x=\"total_bill\", y=\"tip\", data=tips, kind='reg',\n joint_kws={'line_kws':{'color':'cyan'}}) # Only regression cyan\n```\nOr you can access the line after it's been plotted and change it directly. This depends on the regression line being the first line plotted, so could break with seaborn updates. It's also aesthetically\/pedagogically different, as you don't recolor the uncertainty spread. It is a good way to get familiar with what the JointGrid object is and how else you might interact with it. (And maybe there are properties you can't set with the function call arguments, although I can't think of any.) 
\n```\ng = sns.jointplot(x=\"total_bill\", y=\"tip\", data=tips, kind='reg')\nregline = g.ax_joint.get_lines()[0]\nregline.set_color('red')\nregline.set_zorder(5)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31568874\/how-to-change-the-line-color-in-seaborn-linear-regression-jointplot","best_answers_votes":46,"tags":["matplotlib","seaborn"],"question_length":670,"response_length":1497,"tag_count":2} {"question":"3d surface from a rectangular array of heights I am trying to plot some HDF data in matplotlib. After importing them using h5py, the data is stored in a form of array, like this: \n```\narray([[151, 176, 178],\n [121, 137, 130],\n [120, 125, 126])\n```\nIn this case, x and y values are just the indexes of the array's fields, while z value is the value of specific field. In the (x,y,z) form it would look like: \n```\n(1,1,151)\n(2,1,176)\n(3,1,178)\n(1,2,121)\n...\n```\nand so on. Is there an easy way to do a surface plot from this kind of data? I know I can change this to (x,y,z) tuples by iterating all over the array, but maybe it is not needed?","response":"If you want a 3-d surface plot, you have to create the meshgrid first. You can try: \n```\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nX = np.arange(1, 10)\nY = np.arange(1, 10)\nX, Y = np.meshgrid(X, Y)\nR = np.sqrt(X**2 + Y**2)\nZ = np.sin(R)\nfig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='hot', linewidth=0, antialiased=False)\nax.set_zlim(-1.01, 1.01)\n\nfig.colorbar(surf, shrink=0.5, aspect=5)\nplt.show()\n```\nwhich will generate, However, if the only relevant information is in the z-values, you can simply use imshow. Here, z-values are represented by their color. You can achieve this by: \n```\nim = plt.imshow(Z, cmap='hot')\nplt.colorbar(im, orientation='horizontal')\nplt.show()\n```\nWhich will give,","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11766536\/3d-surface-from-a-rectangular-array-of-heights","best_answers_votes":46,"tags":["python","matplotlib","hdf5","geometry-surface","matplotlib-3d"],"question_length":652,"response_length":808,"tag_count":5} {"question":"Two different color colormaps in the same imshow matplotlib Let's suppose the example below \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nv1 = -1 + 2*np.random.rand(50,150)\nfig = plt.figure()\nax = fig.add_subplot(111)\np = ax.imshow(v1,interpolation='nearest')\ncb = plt.colorbar(p,shrink=0.5)\nplt.xlabel('Day')\nplt.ylabel('Depth')\ncb.set_label('RWU')\nplt.show()\n```\nI want to show the values below zero in a different colormap than the values above zero","response":"First of all, is it possible that you just want to use a diverging colormap, 'neutral' at zero, and diverging to two distinct colours? 
This is an example: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nv1 = -1+2*np.random.rand(50,150)\nfig,ax = plt.subplots()\np = ax.imshow(v1,interpolation='nearest',cmap=plt.cm.RdBu)\ncb = plt.colorbar(p,shrink=0.5)\nax.set_xlabel('Day')\nax.set_ylabel('Depth')\ncb.set_label('RWU')\nplt.show()\n```\nIf you really want to use two different colormaps, this is a solution with masked arrays: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy.ma import masked_array\n\nv1 = -1+2*np.random.rand(50,150)\nv1a = masked_array(v1,v1<0)\nv1b = masked_array(v1,v1>=0)\nfig,ax = plt.subplots()\npa = ax.imshow(v1a,interpolation='nearest',cmap=plt.cm.Reds)\ncba = plt.colorbar(pa,shrink=0.25)\npb = ax.imshow(v1b,interpolation='nearest',cmap=plt.cm.winter)\ncbb = plt.colorbar(pb,shrink=0.25)\nplt.xlabel('Day')\nplt.ylabel('Depth')\ncba.set_label('positive')\ncbb.set_label('negative')\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22128166\/two-different-color-colormaps-in-the-same-imshow-matplotlib","best_answers_votes":45,"tags":["python","matplotlib","color-mapping","imshow"],"question_length":463,"response_length":1024,"tag_count":4} {"question":"obtain the max y-value of a histogram I am looking for suggestions on how to calculate the maximum y-value of a histogram. \n```\n#simple histogram. how can I obtain the maximum value of, say, x and y?\n\nimport matplotlib.pyplot as plt\nhdata = randn(500)\nx = plt.hist(hdata)\ny = plt.hist(hdata, bins=40)\n```","response":"hist returns a tuple that contains the histogram bin locations and y values. Try this: \n```\ny, x, _ = plt.hist(hdata)\n\nprint x.max()\nprint y.max()\n```\nNote that len(y) = len(x) - 1.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15558136\/obtain-the-max-y-value-of-a-histogram","best_answers_votes":39,"tags":["python","matplotlib"],"question_length":304,"response_length":181,"tag_count":2} {"question":"Line colour of 3D parametric curve in python's matplotlib.pyplot I've been googling quite some time with no success ... maybe my keywords are just lousy. Anyway, suppose I have three 1D numpy.ndarrays of the same length. I'd like to plot them in 3D as a trajectory. Moreover, I'd like to be able to do either of the following things: Change the colour of the line as a function of z Change the colour of the line as a function of time (i.e. the index in the arrays) This demo has an example of making such a curve: \n```\nimport matplotlib as mpl\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\ntheta = np.linspace(-4 * np.pi, 4 * np.pi, 100)\nz = np.linspace(-2, 2, 100)\nr = z**2 + 1\nx = r * np.sin(theta)\ny = r * np.cos(theta)\nax.plot(x, y, z)\n\nplt.show()\n```\nBut how do I achieve 1 or 2? Solutions to only one or the other are welcome! Thanks in advance.","response":"As with normal 2d plots, you cannot have a gradient of color along an ordinary line. 
However, you can do it with scatter: \n```\nimport matplotlib as mpl\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\ntheta = np.linspace(-4 * np.pi, 4 * np.pi, 100)\nz = np.linspace(-2, 2, 100)\nr = z**2 + 1\nx = r * np.sin(theta)\ny = r * np.cos(theta)\n\n#1 colored by value of `z`\nax.scatter(x, y, z, c = plt.cm.jet(z\/max(z))) \n\n#2 colored by index (same in this example since z is a linspace too)\nN = len(z)\nax.scatter(x, y, z, c = plt.cm.jet(np.linspace(0,1,N)))\n\nplt.show()\n```\nI liked @Junuxx's hack so I applied it here: \n```\nfor i in xrange(N-1):\n ax.plot(x[i:i+2], y[i:i+2], z[i:i+2], color=plt.cm.jet(255*i\/N))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15617207\/line-colour-of-3d-parametric-curve-in-pythons-matplotlib-pyplot","best_answers_votes":26,"tags":["python","matplotlib"],"question_length":946,"response_length":798,"tag_count":2} {"question":"Plotting distance arrows in technical drawing I want to indicate a distance in one of my plots. What I have in mind is the way they do it in technical drawings, showing a double headed arrow with the distance as text beside it. Example: \n```\nfrom matplotlib.pyplot import *\n\nhlines(7,0,2, linestyles='dashed')\nhlines(11,0,2, linestyles='dashed')\nhlines(10,0,2, linestyles='dashed')\nhlines(8,0,2, linestyles='dashed')\nplot((1,1),(8,10), 'k',) # arrow line\nplot((1,1),(8,8), 'k', marker='v',) # lower arrowhead\nplot((1,1),(10,10), 'k', marker='^',) # upper arrowhead\ntext(1.1,9,\"D=1\")\n```\nThis results in something like this (two of the hlines are not really needed, they just increase the drawing area...): Is there a quicker way to do this, preferably with arrowheads which end on the exact spot, not below\/above where they should be? Extra points for placing the text automatically as well. Edit: I had been playing with annotate but since the string would have to be sacrificed this solution had lost some appeal to me. Thanks for pointing out the arrowstyle though, it wasn't working when I attempted something similar. I guess there is no way around writing a little function to do it with one call...","response":"```\nimport matplotlib.pyplot as plt\n\nplt.hlines(7, 0, 2, linestyles='dashed')\nplt.hlines(11, 0, 2, linestyles='dashed')\nplt.hlines(10, 0, 2, linestyles='dashed')\nplt.hlines(8, 0, 2, linestyles='dashed')\nplt.annotate(\n '', xy=(1, 10), xycoords='data',\n xytext=(1, 8), textcoords='data',\n arrowprops={'arrowstyle': '<->'})\nplt.annotate(\n 'D = 1', xy=(1, 9), xycoords='data',\n xytext=(5, 0), textcoords='offset points')\n\n# alternatively,\n# plt.text(1.01, 9, 'D = 1')\n\nplt.show()\n```\nyields For more information on the many options available with plt.annotate, see this page. As shown above, the text can be placed with either plt.annotate or plt.text. With plt.annotate you can specify the offset (e.g. (5, 0)) in points, whereas with plt.text you can specify the text location in data coordinates (e.g. 
(1.01, 9)).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14612637\/plotting-distance-arrows-in-technical-drawing","best_answers_votes":27,"tags":["python","matplotlib"],"question_length":1205,"response_length":827,"tag_count":2} {"question":"How to change the range of the x-axis and y-axis I am new to matlibplot and I'm trying to draw a circle with radius 1 but have both my x and y axis go from 0 to 3 with an increment of 0.25. Right now, I have drawn the graph and the circle but my x and y axis only go from 0 to 1, so there is little room left either above, below, to the left, or to the right of the circle. Here's the code so far: \n```\nimport numpy as np\n import matplotlib.pyplot as plt\n import scipy, pylab\n\n plt.axes()\n circle=plt.Circle((0, 0), radius=1, fc='w')\n plt.gca().add_patch(circle)\n plt.yticks(np.arange(0, 3, 0.25))\n plt.xticks(np.arange(0, 3, 0.25))\n plt.axis('scaled')\n plt.show()\n```\nI've looked at the following questions, but found them either to be a little too advanced for what I'm trying to accomplish or just a tad bit off-topic: \n```\nhttp:\/\/stackoverflow.com\/questions\/27170139\/how-to-change-the-range-of-my-x-axis-in-matplotlib\n\n http:\/\/stackoverflow.com\/questions\/22642511\/change-y-range-to-start-from-0-with-matplotlib\n\n http:\/\/stackoverflow.com\/questions\/27456185\/scaling-axis-for-a-scatter-plot-in-matlibplot-in-python\n\n http:\/\/stackoverflow.com\/questions\/22642511\/change-y-range-to-start-from-0-with-matplotlib\n```\nWhat I want to do now is, while keeping my circle in the same place on the graph, increase the range of my x and y axis from 0-1 to 0-3 while keeping the increment of 0.25 on each axis, allowing me to plot points all around the edge of the circle without having to worry about the top, bottom, or either side of the circle touching either of the two axis. I've looked through the matlibplot documentation, but can't seem to find a simple step-by-step explanation of how to change the spacing on my axis. Any insight on this would be brilliant! Thanks in advance!","response":"To change the axes range, you can use \n```\nplt.xlim([-3, 3])\nplt.ylim([-3, 3])\n```\nYou will then have to remove the line plt.axis('scaled') for this to work. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy, pylab\n\nplt.axes()\ncircle=plt.Circle((0, 0), radius=1, fc='w')\nplt.gca().add_patch(circle)\nplt.xlim([-3, 3])\nplt.ylim([-3, 3])\nplt.yticks(np.arange(-3, 3, 0.25))\nplt.xticks(np.arange(-3, 3, 0.25))\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29337123\/how-to-change-the-range-of-the-x-axis-and-y-axis","best_answers_votes":39,"tags":["python","matplotlib"],"question_length":1810,"response_length":438,"tag_count":2} {"question":"How to deal with NaN values when plotting a boxplot I am using matplotlib to plot a box figure but there are some missing values (NaN). Then I found it doesn't display the box figure within the columns having NaN values. Do you know how to solve this problem? Here are the codes. 
\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n#==============================================================================\n# open data\n#==============================================================================\nfilename='C:\\\\Users\\\\liren\\\\OneDrive\\\\Data\\\\DATA in the first field-final\\\\ks.csv'\n\nAllData=np.genfromtxt(filename,delimiter=\";\",skip_header=0,dtype='str')\n\nTreatmentCode = AllData[1:,0]\nRepCode = AllData[1:,1]\nKsData= AllData[1:,2:].astype('float')\nDepthHeader = AllData[0,2:].astype('float')\nTreatmentUnique = np.unique(TreatmentCode)[[3,1,4,2,8,6,9,7,0,5,10],]\nnT = TreatmentUnique.size#nT=number of treatments\n#nD=number of deepth;nR=numbers of replications;nT=number of treatments;iT=iterms of treatments\nnD = 5\nnR = 6\nKsData_3D = np.zeros((nT,nD,nR)) \n\nfor iT in range(nT):\n Treatment = TreatmentUnique[iT]\n\n TreatmentFilter = TreatmentCode == Treatment\n\n KsData_Filtered = KsData[TreatmentFilter,:]\n \n KsData_3D[iT,:,:] = KsData_Filtered.transpose()iD = 4\n \nfig=plt.figure()\nax = fig.add_subplot(111)\nplt.boxplot(KsData_3D[:,iD,:].transpose())\nax.set_xticks(range(1,nT+1))\nax.set_xticklabels(TreatmentUnique)\nax.set_title(DepthHeader[iD])\n```\nHere is the final figure and some of the treatments are missing in the box.","response":"You can remove the NaNs from the data first, then plot the filtered data. To do that, you can first find the NaNs using np.isnan(data), then perform the bitwise inversion of that Boolean array using the ~: bitwise inversion operator. Use that to index the data array, and you filter out the NaNs. \n```\nfiltered_data = data[~np.isnan(data)]\n```\nIn a complete example (adapted from here) Tested in python 3.10, matplotlib 3.5.1, seaborn 0.11.2, numpy 1.21.5, pandas 1.4.2 For 1D data: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# fake up some data\nnp.random.seed(2022) # so the same data is created each time\nspread = np.random.rand(50) * 100\ncenter = np.ones(25) * 50\nflier_high = np.random.rand(10) * 100 + 100\nflier_low = np.random.rand(10) * -100\ndata = np.concatenate((spread, center, flier_high, flier_low), 0)\n\n# Add a NaN\ndata[40] = np.NaN\n\n# Filter data using np.isnan\nfiltered_data = data[~np.isnan(data)]\n\n# basic plot\nplt.boxplot(filtered_data)\n\nplt.show()\n```\nFor 2D data: For 2D data, you can't simply use the mask above, since then each column of the data array would have a different length. Instead, we can create a list, with each item in the list being the filtered data for each column of the data array. A list comprehension can do this in one line: [d[m] for d, m in zip(data.T, mask.T)] \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# fake up some data\nnp.random.seed(2022) # so the same data is created each time\nspread = np.random.rand(50) * 100\ncenter = np.ones(25) * 50\nflier_high = np.random.rand(10) * 100 + 100\nflier_low = np.random.rand(10) * -100\ndata = np.concatenate((spread, center, flier_high, flier_low), 0)\n\ndata = np.column_stack((data, data * 2., data + 20.))\n\n# Add a NaN\ndata[30, 0] = np.NaN\ndata[20, 1] = np.NaN\n\n# Filter data using np.isnan\nmask = ~np.isnan(data)\nfiltered_data = [d[m] for d, m in zip(data.T, mask.T)]\n\n# basic plot\nplt.boxplot(filtered_data)\n\nplt.show()\n```\nI'll leave it as an exercise to the reader to extend this to 3 or more dimensions, but you get the idea. The solution above is how to do this using matplotlib alone. 
Other alternatives (that use matplotlib under the hood) are available that have this behaviour built in, so no need to filter the data yourself. Use seaborn, which is a high-level API for matplotlib. seaborn.boxplot filters NaN under the hood. \n```py\nimport seaborn as sns\n\nsns.boxplot(data=data)\n```\n1D 2D Use pandas. NaN is also ignored if plotting from df.plot(kind='box') for pandas, which uses matplotlib as the default plotting backend. \n```py\nimport pandas as pd\n\ndf = pd.DataFrame(data)\n\ndf.plot(kind='box')\n```\n1D 2D","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44305873\/how-to-deal-with-nan-values-when-plotting-a-boxplot","best_answers_votes":39,"tags":["python","pandas","matplotlib","seaborn","boxplot"],"question_length":1570,"response_length":2636,"tag_count":5} {"question":"Showing a simple matplotlib plot in plotly Dash Is it possible to show a simple matplotlib plot (the kind usually generated by plt.show()) in plotly's Dash framework? Or just plotly-like graphs with plotly's Scatters and Data traces? Specifically I guess I need a different component than Graph (see below) and a way to return the simple plot in the update_figure function. Example: \n```\nimport dash\nimport dash_core_components as dcc\nimport dash_html_components as html\nimport numpy as np\nimport matplotlib.pyplot as plt\n\napp = dash.Dash()\n\napp.layout = html.Div(children=[\n html.H1(children='Hello Dash'),\n\n dcc.Slider(\n id='n_points',\n min=10,\n max=100,\n step=1,\n value=50,\n ),\n\n dcc.Graph(id='example') # or something other than Graph?...\n])\n\n@app.callback(\n dash.dependencies.Output('example', 'figure'),\n [dash.dependencies.Input('n_points', 'value')]\n)\n\ndef update_figure(n_points):\n #create some matplotlib graph\n x = np.random.rand(n_points)\n y = np.random.rand(n_points)\n plt.scatter(x, y)\n # plt.show()\n return None # return what, I don't know exactly, `plt`?\n\nif __name__ == '__main__':\n app.run_server(debug=True)\n```","response":"If you don't want an interactive plot, you can return a static one (found from this help) \n```py\nimport io\nimport base64\n\n...\n \napp.layout = html.Div(children=[\n ...,\n\n html.Img(id='example') # img element\n])\n\n@app.callback(\n dash.dependencies.Output('example', 'src'), # src attribute\n [dash.dependencies.Input('n_points', 'value')]\n)\ndef update_figure(n_points):\n #create some matplotlib graph\n x = np.random.rand(n_points)\n y = np.random.rand(n_points)\n buf = io.BytesIO() # in-memory files\n plt.scatter(x, y)\n plt.savefig(buf, format = \"png\")\n plt.close()\n data = base64.b64encode(buf.getbuffer()).decode(\"utf8\") # encode to html elements\n buf.close()\n return \"data:image\/png;base64,{}\".format(data)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/49851280\/showing-a-simple-matplotlib-plot-in-plotly-dash","best_answers_votes":18,"tags":["python","matplotlib","plotly","plotly-dash"],"question_length":1204,"response_length":752,"tag_count":4} {"question":"Create a color generator from given colormap in matplotlib I have a series of lines that each need to be plotted with a separate colour. Each line is actually made up of several data sets (positive, negative regions etc.) and so I'd like to be able to create a generator that will feed one colour at a time across a spectrum, for example the gist_rainbow map shown here. 
I have found the following works but it seems very complicated and more importantly difficult to remember, \n```\nfrom pylab import *\n\nNUM_COLORS = 22\n\nmp = cm.datad['gist_rainbow']\nget_color = matplotlib.colors.LinearSegmentedColormap.from_list(mp, colors=['r', 'b'], N=NUM_COLORS)\n...\n# Then in a for loop\n    this_color = get_color(float(i)\/NUM_COLORS)\n```\nMoreover, it does not cover the range of colours in the gist_rainbow map; I have to redefine a map. Maybe a generator is not the best way to do this; if so, what is the accepted way?","response":"To index colors from a specific colormap you can use: \n```\nimport pylab\nNUM_COLORS = 22\n\ncm = pylab.get_cmap('gist_rainbow')\nfor i in range(NUM_COLORS):\n    color = cm(1.*i\/NUM_COLORS)  # color will now be an RGBA tuple\n\n# or if you really want a generator:\ncgen = (cm(1.*i\/NUM_COLORS) for i in range(NUM_COLORS))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3016283\/create-a-color-generator-from-given-colormap-in-matplotlib","best_answers_votes":37,"tags":["python","matplotlib","color-mapping"],"question_length":910,"response_length":317,"tag_count":3} {"question":"Repeating x axis labels for all facets using FacetGrid in seaborn I am working with the FacetGrid example presented here that results in the plot below. In my data set, there are quite a lot of plots, and it would be convenient to have the x axis labels repeated for each facet, not only at the bottom. For this example, the values 62, ..., 76 should be repeated for each of the A-J facets.","response":"The answer by Bazingaa works for matplotlib version 2.0.2. For newer versions of matplotlib, using ax.tick_params() and setting labelbottom=True seems to work: \n```\nfor ax in g.axes.flatten():\n    ax.tick_params(labelbottom=True)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/52182322\/repeating-x-axis-labels-for-all-facets-using-facetgrid-in-seaborn","best_answers_votes":26,"tags":["python","matplotlib","plot","seaborn","facet-grid"],"question_length":389,"response_length":233,"tag_count":5} {"question":"Plot feature importance with xgboost When I plot the feature importance, I get this messy plot. I have more than 7000 variables. I understand the built-in function only selects the most important, although the final graph is unreadable. This is the complete code: \n```\nimport numpy as np\nimport pandas as pd\ndf = pd.read_csv('ricerice.csv')\narray=df.values\nX = array[:,0:7803]\nY = array[:,7804]\nfrom xgboost import XGBClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\nseed=0\ntest_size=0.30\nX_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=test_size, random_state=seed)\nfrom xgboost import XGBClassifier\nmodel = XGBClassifier()\nmodel.fit(X, Y)\nimport matplotlib.pyplot as plt\nfrom matplotlib import pyplot\nfrom xgboost import plot_importance\nfig1=plt.gcf()\nplot_importance(model)\nplt.draw()\nfig1.savefig('xgboost.png', figsize=(50, 40), dpi=1000)\n```\nDespite the size of the figure, the graph is illegible.","response":"There are a couple of points: To fit the model, you want to use the training dataset (X_train, y_train), not the entire dataset (X, y). You may use the max_num_features parameter of the plot_importance() function to display only the top max_num_features features (e.g. top 10). 
With the above modifications to your code, with some randomly generated data the code and output are as below: \n```\nimport numpy as np\n\n# generate some random data for demonstration purpose, use your original dataset here\nX = np.random.rand(1000,100) # 1000 x 100 data\ny = np.random.rand(1000).round() # 0, 1 labels\n\nfrom xgboost import XGBClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\nseed=0\ntest_size=0.30\nX_train, X_test, y_train, y_test = train_test_split(X,y,test_size=test_size, random_state=seed)\nfrom xgboost import XGBClassifier\nmodel = XGBClassifier()\nmodel.fit(X_train, y_train)\nimport matplotlib.pylab as plt\nfrom matplotlib import pyplot\nfrom xgboost import plot_importance\nplot_importance(model, max_num_features=10) # top 10 most important features\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/51905524\/plot-feature-importance-with-xgboost","best_answers_votes":26,"tags":["python","matplotlib","machine-learning","xgboost","feature-selection"],"question_length":980,"response_length":1115,"tag_count":5} {"question":"Flier colors in boxplot with matplotlib According to the documentation, the Axes.boxplot function takes a dictionary flierprop as argument to define the properties of the outliers. Unfortunately, I can't find the documentation concerning this dictionary. In particular, I would like to define the color of the border of the marker. By default, empty circles are drawn. One can set the face color, as shown in the example. Nevertheless, the circle border is always a black line. I tried with the keys color and markercolor (the former has no effect, the latter produces an error). What should I do to set a color for the marker line?","response":"To set marker color use property markerfacecolor but for border color - markeredgecolor: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# fake up some data\nspread = np.random.rand(50) * 100\ncenter = np.ones(25) * 50\nflier_high = np.random.rand(10) * 100 + 100\nflier_low = np.random.rand(10) * -100\ndata = np.concatenate((spread, center, flier_high, flier_low), 0)\n\n# plot. Set color of marker edge\nflierprops = dict(marker='o', markerfacecolor='r', markersize=12,\n linestyle='none', markeredgecolor='g')\nplt.boxplot(data, flierprops=flierprops)\n\nplt.show()\n```\nAccording to @Spiros the flierprops dictionary is documented here like other boxplot properties: http:\/\/matplotlib.org\/users\/dflt_style_changes.html?highlight=flierprops#boxplot","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43342564\/flier-colors-in-boxplot-with-matplotlib","best_answers_votes":32,"tags":["python","matplotlib","statistics","boxplot"],"question_length":632,"response_length":765,"tag_count":4} {"question":"Please explain in detail 2D Histogram in Python I am trying to understand what are the values of a 2D histogram. I have 2 numpy arrays of the same length X and Y (float numbers in each one). For example the first 10 values of X: [ 88, 193, 60, 98, 78, 100, 75, 76, 130] and Y: [ 18. , 9. , 36.1, 18.5, 34.3, 32.9, 32.2, 22. , 15. ] When I use: \n```\nimport matplotlib.pyplot as plt\n\nplt.hist2d(X,Y, bins=(10,20))\n```\nI get a 2D histogram. But what does it mean? 1D histogram simply shows me how much of each item I have. Please explain me what does it mean in 2D. 
Thanks in advance!","response":"Suppose you have a 1D array, you plot the position of its values on the x axis, they are so dense that you can't tell the spatial distribution, you use a 1D histogram to show the distribution by count of boxes along the x axis. Problem solved. Then you have two 1D arrays, a list of 2D dots in (x, y) axes. You plot their positions on the x-y plane, again they are so dense and overlap with each other. You want to view the distribution better by count of boxes in the plane, so you try a 2D diagram. Problem solved. Here is an example \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n# prepare 2D random dots centered at (0, 0)\nn = 100000\nx = np.random.randn(n)\ny = x + np.random.randn(n)\n\n# plot data\nfig1 = plt.figure()\nplt.plot(x,y,'.r')\nplt.xlabel('x')\nplt.ylabel('y')\n```\ngives \n```\n# plot 2D histogram using pcolor\nfig2 = plt.figure()\nplt.hist2d(x, y, bins=100)\nplt.xlabel('x')\nplt.ylabel('y')\ncbar = plt.colorbar()\ncbar.ax.set_ylabel('Counts')\n```\ngives","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42262563\/please-explain-in-detail-2d-histogram-in-python","best_answers_votes":32,"tags":["python","numpy","matplotlib","histogram2d"],"question_length":581,"response_length":989,"tag_count":4} {"question":"How to create a polar contour plot Update: I've done a full write-up of the way I found to do this on my blog at http:\/\/blog.rtwilson.com\/producing-polar-contour-plots-with-matplotlib\/ - you may want to check there first. I'm trying to plot a polar contour plot in matplotlib. I've found various resources on the internet, (a) I can't seem to get my code to work and (b) many of the resources appear rather old, and I'm wondering if there is a better way now. For example, http:\/\/www.mail-archive.com\/[email protected]\/msg01953.html suggests that something may be done to improve things soon, and that was in 2006! I'd love to be able to plot proper polar contour plots - like pcolor lets you do for its type of plot (see commented out section below), but I can't seem to find any way to do that, so I'm converting to cartesian co-ordinates first. Anyway, I have the code that follows: \n```\nfrom pylab import *\nimport numpy as np\n\nazimuths = np.arange(0, 360, 10)\nzeniths = np.arange(0, 70, 10)\nvalues = []\n\nfor azimuth in azimuths:\n for zenith in zeniths:\n print \"%i %i\" % (azimuth, zenith)\n # Run some sort of model and get some output\n # We'll just use rand for this example\n values.append(rand())\n\ntheta = np.radians(azimuths)\n\nvalues = np.array(values)\nvalues = values.reshape(len(zeniths), len(azimuths))\n\n# This (from http:\/\/old.nabble.com\/2D-polar-surface-plot-td28896848.html)\n# works fine\n##############\n# Create a polar axes\n# ax = subplot(111, projection='polar')\n# pcolor plot onto it\n# c = ax.pcolor(theta, zeniths, values)\n# show()\n\nr, t = np.meshgrid(zeniths, azimuths)\n\nx = r*np.cos(t)\ny = r*np.sin(t)\n\ncontour(x, y, values)\n```\nWhen I run that I get an error TypeError: Inputs x and y must be 1D or 2D.. I'm not sure why I get this, as both x and y are 2D. Am I doing something wrong? Also, it seems rather clunky to be putting my values returned from my model into a list and then reshaping it. Is there a better way to do this?","response":"You should just be able to use ax.contour or ax.contourf with polar plots just as you normally would... You have a few bugs in your code, though. 
You convert things to radians, but then use the values in degrees when you plot. Also, you're passing in r, theta to contour when it expects theta, r. As a quick example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n#-- Generate Data -----------------------------------------\n# Using linspace so that the endpoint of 360 is included...\nazimuths = np.radians(np.linspace(0, 360, 20))\nzeniths = np.arange(0, 70, 10)\n\nr, theta = np.meshgrid(zeniths, azimuths)\nvalues = np.random.random((azimuths.size, zeniths.size))\n\n#-- Plot... ------------------------------------------------\nfig, ax = plt.subplots(subplot_kw=dict(projection='polar'))\nax.contourf(theta, r, values)\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9071084\/how-to-create-a-polar-contour-plot","best_answers_votes":27,"tags":["python","numpy","matplotlib","contourf"],"question_length":1960,"response_length":839,"tag_count":4} {"question":"Multivariate (polynomial) best fit curve in python? How do you calculate a best fit line in python, and then plot it on a scatterplot in matplotlib? I calculate the linear best-fit line using Ordinary Least Squares Regression as follows: \n```\nfrom sklearn import linear_model\nclf = linear_model.LinearRegression()\nx = [[t.x1,t.x2,t.x3,t.x4,t.x5] for t in self.trainingTexts]\ny = [t.human_rating for t in self.trainingTexts]\nclf.fit(x,y)\nregress_coefs = clf.coef_\nregress_intercept = clf.intercept_\n```\nThis is multivariate (there are many x-values for each case). So, X is a list of lists, and y is a single list. For example: \n```\nx = [[1,2,3,4,5], [2,2,4,4,5], [2,2,4,4,1]] \ny = [1,2,3,4,5]\n```\nBut how do I do this with higher order polynomial functions? For example, not just linear (x to the power of M=1), but binomial (x to the power of M=2), quadratics (x to the power of M=4), and so on. For example, how do I get the best fit curves from the following? Extracted from Christopher Bishop's \"Pattern Recognition and Machine Learning\", p.7:","response":"The accepted answer to this question provides a small multi poly fit library which will do exactly what you need using numpy, and you can plug the result into the plotting as I've outlined below. You would just pass in your arrays of x and y points and the degree (order) of fit you require into multipolyfit. This returns the coefficients which you can then use for plotting using numpy's polyval. Note: The code below has been amended to do multivariate fitting, but the plot image was part of the earlier, non-multivariate answer. \n```\nimport numpy\nimport matplotlib.pyplot as plt\nimport multipolyfit as mpf\n\ndata = [[1,1],[4,3],[8,3],[11,4],[10,7],[15,11],[16,12]]\nx, y = zip(*data)\nx, y = numpy.asarray(x), numpy.asarray(y) # as arrays, so x+1 and x-1 below work elementwise\nplt.plot(x, y, 'kx')\n\ndeg = 3 # degree of the polynomial fit\nstacked_x = numpy.array([x,x+1,x-1])\ncoeffs = mpf(stacked_x, y, deg) \nx2 = numpy.arange(min(x)-1, max(x)+1, .01) #use more points for a smoother plot\ny2 = numpy.polyval(coeffs, x2) #Evaluates the polynomial for each x2 value\nplt.plot(x2, y2, label=\"deg=3\")\n```\nNote: This was part of the answer earlier on, it is still relevant if you don't have multivariate data. Instead of coeffs = mpf(..., use coeffs = numpy.polyfit(x,y,3) For non-multivariate data sets, the easiest way to do this is probably with numpy's polyfit: numpy.polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False) Least squares polynomial fit. Fit a polynomial p(x) = p[0] * x**deg + ... + p[deg] of degree deg to points (x, y). 
Returns a vector of coefficients p that minimises the squared error.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11856206\/multivariate-polynomial-best-fit-curve-in-python","best_answers_votes":30,"tags":["python","matplotlib","machine-learning","regression","scatter-plot"],"question_length":1054,"response_length":1480,"tag_count":5} {"question":"Python - animation with matplotlib.pyplot How can one create animated diagrams using the popular matplotlib library? I am particularly interested in animated GIFs.","response":"The matplotlib docs provide an entire section of examples on animation (see this scipy tutorial also). Most, however, involve using the various GUI widget backends. There is one in there, \"movie demo\", that shows how to produce an AVI of a series of PNGs. To produce animated GIFs, I think your options are pretty limited. Last I checked, PIL didn't support them. You could, however, generate a series of PNGs using pyplot's savefig and then stitch them together using a call to ImageMagick or mencoder.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2546780\/python-animation-with-matplotlib-pyplot","best_answers_votes":23,"tags":["python","matplotlib"],"question_length":159,"response_length":501,"tag_count":2} {"question":"Plotting 3-tuple data points in a surface \/ contour plot using matplotlib I have some surface data that is generated by an external program as XYZ values. I want to create the following graphs, using matplotlib: Surface plot Contour plot Contour plot overlaid with a surface plot I have looked at several examples for plotting surfaces and contours in matplotlib - however, the Z values seem to be a function of X and Y, i.e. Z ~ f(X,Y). I assume that I will somehow need to transform my Z variables, but I have not seen any example yet that shows how to do this. So, my question is this: given a set of (X,Y,Z) points, how may I generate surface and contour plots from that data? BTW, just to clarify, I do NOT want to create scatter plots. 
Also, although I mentioned matplotlib in the title, I am not averse to using rpy(2), if that will allow me to create these charts.","response":"To do a contour plot you need to interpolate your data to a regular grid http:\/\/www.scipy.org\/Cookbook\/Matplotlib\/Gridding_irregularly_spaced_data a quick example: \n```\n>>> xi = linspace(min(X), max(X))\n>>> yi = linspace(min(Y), max(Y))\n>>> zi = griddata(X, Y, Z, xi, yi)\n>>> contour(xi, yi, zi)\n```\nfor the surface http:\/\/matplotlib.sourceforge.net\/examples\/mplot3d\/surface3d_demo.html \n```\n>>> from mpl_toolkits.mplot3d import Axes3D\n>>> fig = figure()\n>>> ax = Axes3D(fig)\n>>> xim, yim = meshgrid(xi, yi)\n>>> ax.plot_surface(xim, yim, zi)\n>>> show()\n>>> help(meshgrid)\n    Return coordinate matrices from two coordinate vectors.\n    [...]\n    Examples\n    --------\n    >>> X, Y = np.meshgrid([1,2,3], [4,5,6,7])\n    >>> X\n    array([[1, 2, 3],\n           [1, 2, 3],\n           [1, 2, 3],\n           [1, 2, 3]])\n    >>> Y\n    array([[4, 4, 4],\n           [5, 5, 5],\n           [6, 6, 6],\n           [7, 7, 7]])\n```\ncontour in 3D http:\/\/matplotlib.sourceforge.net\/examples\/mplot3d\/contour3d_demo.html \n```\n>>> fig = figure()\n>>> ax = Axes3D(fig)\n>>> ax.contour(xi, yi, zi)  # ax.contourf for filled contours\n>>> show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3012783\/plotting-3-tuple-data-points-in-a-surface-contour-plot-using-matplotlib","best_answers_votes":25,"tags":["python","r","matplotlib","rpy2","surface"],"question_length":873,"response_length":1127,"tag_count":5} {"question":"How to use Matplotlib in Django? From some examples from the Internet I made the test code below. It works! ... BUT if I reload the page, the pie will draw itself with the same image. Some parts get darker every time I reload the page. When I restart the development server, it is reset. How do I draw properly with Matplotlib in Django? It looks like it remembers some drawings... Source views.py (let urls.py link to it): \n```\nfrom pylab import figure, axes, pie, title\nfrom matplotlib.backends.backend_agg import FigureCanvasAgg\n\ndef test_matplotlib(request):\n    f = figure(1, figsize=(6,6))\n    ax = axes([0.1, 0.1, 0.8, 0.8])\n    labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'\n    fracs = [15,30,45, 10]\n    explode=(0, 0.05, 0, 0)\n    pie(fracs, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True)\n    title('Raining Hogs and Dogs', bbox={'facecolor':'0.8', 'pad':5})\n\n    canvas = FigureCanvasAgg(f)    \n    response = HttpResponse(content_type='image\/png')\n    canvas.print_png(response)\n    return response\n```\nI am using Django 1.0.1 and Python 2.6.2 (Linux).","response":"You need to remove the num parameter from the figure constructor and close the figure when you're done with it. \n```\nimport matplotlib.pyplot\n\ndef test_matplotlib(request):\n    f = figure(figsize=(6,6))\n    ....\n    matplotlib.pyplot.close(f)\n```\nBy removing the num parameter, you'll avoid using the same figure at the same time. This could happen if 2 browsers request the image at the same time. If this is not an issue, another possible solution is to use the clear method, i.e. 
f.clear().","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1874642\/how-to-use-matplotlib-in-django","best_answers_votes":28,"tags":["python","django","matplotlib"],"question_length":1080,"response_length":493,"tag_count":3} {"question":"MatplotLib 'saveFig()' Fullscreen I used MatplotLib with Cartopy to generate some data images. The problem is that when I set the frame size to fullscreen and use plt.show() the image is perfect and the resolution is fine. However, when I save this figure using 'plt.savefig()' the saved image keeps its original size (not fullscreen). Showing outcome images: My code is the following (inside def plot_tec_cartopy(descfile), after global matrixLon, matrixLat, matrixTec): \n```\nax = plt.axes(projection=cartopy.crs.PlateCarree())\n\nv = np.linspace(0, 80, 46, endpoint=True)\ncp = plt.contourf(matrixLon, matrixLat, matrixTec, v, cmap=plt.cm.rainbow)\nplt.clim(0, 80)\nplt.colorbar(cp)\n\nax.add_feature(cartopy.feature.COASTLINE)\nax.add_feature(cartopy.feature.BORDERS, linestyle=':')\nax.set_extent([-85, -30, -60, 15])\n\n# Setting X and Y labels using LON\/LAT format\nax.set_xticks([-85, -75, -65, -55, -45, -35])\nax.set_yticks([-60, -55, -50, -45, -40, -35, -30, -25, -20, -15, -10, -5, 0, 5, 10, 15])\nlon_formatter = LongitudeFormatter(number_format='.0f',\n degree_symbol='',\n dateline_direction_label=True)\nlat_formatter = LatitudeFormatter(number_format='.0f',\n degree_symbol='')\nax.xaxis.set_major_formatter(lon_formatter)\nax.yaxis.set_major_formatter(lat_formatter)\n\nplt.title('Conte\u00fado Eletr\u00f4nico Total', style='normal', fontsize='12')\n\n# Acquiring Date\nyear, julianday = check_for_zero(descfile.split('.')[2]), descfile.split('.')[3]\nhour, minute = descfile.split('.')[4], descfile.split('.')[5].replace('h','')\ndate = datetime.datetime(int(year), 1, 1, int(hour), int(minute)) + datetime.timedelta(int(julianday)-1)\nmonth = date.month\nday = date.day\n\n# Set common labels\nax.text(1.22, 1.05, 'TEC', style='normal',\n verticalalignment='top', horizontalalignment='right',\n transform=ax.transAxes,\n color='black', fontsize=11)\nax.text(1, 0.005, 'EMBRACE\/INPE', style='italic',\n verticalalignment='bottom', horizontalalignment='right',\n transform=ax.transAxes,\n color='black', fontsize=10)\nax.text(1, 0.995, str(date) + ' UT', style='italic',\n verticalalignment='top', horizontalalignment='right',\n transform=ax.transAxes,\n color='black', fontsize=10)\nax.text(0.5, -0.08, 'Copyright \\N{COPYRIGHT SIGN} 2017 INPE - Instituto Nacional de',\n style='oblique', transform=ax.transAxes,\n verticalalignment='bottom', horizontalalignment='center',\n color='black', fontsize=8)\nax.text(0.5, -0.108, 'Pesquisas Espacias. Todos direitos reservados',\n style='oblique', transform=ax.transAxes,\n verticalalignment='bottom', horizontalalignment='center',\n color='black', fontsize=8)\n\nmanager = plt.get_current_fig_manager()\nmanager.resize(*manager.window.maxsize())\n\nfigName = 'tec.map' + '.' + str(year) + '.' + str(julianday) + '.' + str(hour) + '.' + str(minute) + 'h.png'\n#plt.show()\nplt.savefig(figName, dpi=500)\nplt.clf()\n```\nMaybe I need to set some parameter into savefig() to tell it to save my modified frame? Can someone help me with this issue? Thanks in advance.","response":"Coming from MATLAB, it is not intuitive that your displayed figure does not have to affect the saved one in terms of dimensions, etc.
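(To make the decoupling concrete -- a minimal sketch with made-up sizes, separate from your Cartopy code: the on-screen window and the saved file are sized independently.) \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots() # on-screen size comes from rcParams or the window manager\nax.plot([0, 1], [0, 1])\nfig.set_size_inches(8.5, 11) # change the figure's nominal size in inches\nfig.savefig('decoupled.png', dpi=300) # saved as 2550x3300 px, no matter how big the window is\n```\n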
Each one is handled by a different backend, and you can modify the dpi and size_inches as you choose. Increasing the DPI is definitely going to help you get a large figure, especially with a format like PNG, which does not know about the size in inches. However, it will not help you scale the text relative to the figure itself. To do that, you will have to use the object oriented API, specifically, figure.set_size_inches, which I don't think has an equivalent in plt. Replace \n```\nplt.savefig(figName, dpi=500)\n```\nwith \n```\nfig = plt.gcf()\nfig.set_size_inches((8.5, 11), forward=False)\nfig.savefig(figName, dpi=500)\n```\nThe size 8.5, 11 is the width and height of the standard paper size in the US, respectively. You can set it to whatever you want. For example, you can use your screen size, but in that case be sure to get the DPI right as well.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45515320\/matplotlib-savefig-fullscreen","best_answers_votes":21,"tags":["python","matplotlib","cartopy"],"question_length":3103,"response_length":986,"tag_count":3} {"question":"Is it possible to set the marker edge alpha in Matplotlib? For example, say I want to plot with color='None' but markeredgecolor='mediumseagreen' with the edges having alpha=0.5. Is that possible?","response":"As answered in the comments: Either pass the color as an RGBA tuple or get the RGBA value from colorConverter: \n```\nmatplotlib.colors.colorConverter.to_rgba('mediumseagreen', alpha=.5)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25791973\/is-it-possible-to-set-the-marker-edge-alpha-in-matplotlib","best_answers_votes":21,"tags":["python","matplotlib"],"question_length":196,"response_length":188,"tag_count":2} {"question":"Vertical xtick labels on top, not bottom I want to plot a confusion matrix using Pylab. The class labels along the horizontal axis are long, so I want to plot them rotated vertically. However, I also want to plot them on top of the axis, not below. This command can plot vertical labels on bottom: \n```\npylab.imshow(confusion_matrix)\npylab.xticks(..., rotation='vertical')\n```\nand this command can plot horizontal labels on top without rotation: \n```\npylab.matshow(confusion_matrix)\n```\nbut I cannot find anything that does both. The following command does not work. \n```\npylab.matshow(confusion_matrix)\npylab.xticks(..., rotation='vertical')\n```\nCan you suggest a way to plot a confusion matrix with xticks on top of the axis with vertical rotation? Thank you. EDIT Thank you, Mark, for your help. It got me on the right track by inspecting the tick properties more closely. The only difference with your answer and my desired answer is applying that idea to an AxesImage, not a plot. After investigation, here is the answer: \n```\nim = pylab.matshow(confusion_matrix)\nfor label in im.axes.xaxis.get_ticklabels():\n label.set_rotation(90)\nim.figure.show()\n```\nTo those reading... don't forget about show()! I forgot that I needed to refresh the figure. See output below. Confusion matrix with vertical labels. http:\/\/up.stevetjoa.com\/rotate_ticklabels.png","response":"If I understand you correctly, this will get you close. You might have to 'pad' your labels out with spaces to move them off the xaxis line. 
\n```\nfrom matplotlib import pylab \npylab.plot([0, 6], [0, 6])\npylab.xticks([1,2,3,4,5,6],('one','two','three','four','five','six'),rotation='vertical',verticalalignment='bottom')\n```\nEDIT IN RESPONSE TO COMMENT If you want them rotated vertically on the top x-axis, try this: \n```\npylab.plot([0, 6], [0, 6])\npylab.xticks([1,2,3,4,5,6],('one','two','three','four','five','six'))\nfor tick in pylab.gca().xaxis.iter_ticks():\n tick[0].label2On = True\n tick[0].label1On = False\n tick[0].label2.set_rotation('vertical')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1992640\/vertical-xtick-labels-on-top-not-bottom","best_answers_votes":12,"tags":["matplotlib"],"question_length":1357,"response_length":665,"tag_count":1} {"question":"Interchange location of y and z axis in a 3D plot By default, plotting a set of points (or whatever) in 3D with matplotlib locates the z axis vertically, as seen here (code below): I need to interchange the z and y axis, so that the y axis is shown vertically. I've looked around but found no way to tell matplotlib to do this. Add: I do not want to have to resort to a hack where I interchange the data and labels. This is a simple 3 points 3D plot, but I have to plot much more complicated surfaces. I'm looking for a general solution, not just something that works with scatter plots. A simple way to tell matplotlib to put the y axis vertically instead of the z axis is the clean way to do it. MWE \n```\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\nfig = plt.figure()\nax = Axes3D(fig)\n\nax.scatter([0.2, 0.5, 0.8], [2.3, 0.47, 1.], [2.1, 5.3, 0.7])\n\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\nplt.show()\n```","response":"I don't think that this is currently possible. ax.view_init() would need to accept a third angle too. I opened an issue over at github, https:\/\/github.com\/matplotlib\/matplotlib\/issues\/14453#issue-452397120, let's hope someone commits to implementing this feature. Update The third view angle was recently implemented and has been merged into the matplotlib main branch: https:\/\/github.com\/matplotlib\/matplotlib\/pull\/21426 It should appear in Matplotlib 3.6.0.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34812843\/interchange-location-of-y-and-z-axis-in-a-3d-plot","best_answers_votes":6,"tags":["python","matplotlib","axis","matplotlib-3d"],"question_length":954,"response_length":461,"tag_count":4} {"question":"Implementing an iterator in Julia for an animation with PyPlot I am just trying to reproduce this simple example of an animation in Matplotlib but using PyPlot in Julia. I am having difficulties with the definition of the iterator simData() that is passed to the function funcAnimation, because it seems that PyPlot doesn't recognize the iterator that I defined in Julia (via a Task) as such. Here is my approach to define the same function simData(): \n```\nfunction simData()\n\n t_max = 10.0\n dt = 0.05\n x = 0.0\n t = 0.0\n\n function it()\n while t < t_max\n x = sin(pi*t)\n t = t+dt\n produce(x,t)\n end\n end\n Task(it)\nend\n```\nAs you can check, this kind of iterator yields, in theory, the same values as the python simData() generator of the example (try, for example, collect(simData())).
However, I got this error when I try to do the animation \n```\nLoadError: PyError (:PyObject_Call) \nTypeError('PyCall.jlwrap object is not an iterator',)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/matplotlib\/animation.py\", line 1067, in __init__\n TimedAnimation.__init__(self, fig, **kwargs)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/matplotlib\/animation.py\", line 913, in __init__\n *args, **kwargs)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/matplotlib\/animation.py\", line 591, in __init__\n self._init_draw()\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/matplotlib\/animation.py\", line 1092, in _init_draw\n self._draw_frame(next(self.new_frame_seq()))\n\nwhile loading In[5], in expression starting on line 42\n\n in pyerr_check at \/home\/diegotap\/.julia\/v0.4\/PyCall\/src\/exception.jl:56\n [inlined code] from \/home\/diegotap\/.julia\/v0.4\/PyCall\/src\/exception.jl:81\n in pycall at \/home\/diegotap\/.julia\/v0.4\/PyCall\/src\/PyCall.jl:402\n in call at \/home\/diegotap\/.julia\/v0.4\/PyCall\/src\/PyCall.jl:429\n```\nAs I mentioned, I think the problem is that the Julia iterator is not recognized as such by Python. Do you have any idea about how to fix that? PS: Here is a Jupyter notebook with the full code that I used to do the animation.","response":"In your code, you invoke FuncAnimation() this way: \n```\nani = anim.FuncAnimation(fig, simPoints, simData, blit = false, interval=10, repeat= true)\n```\nIn the original code, simData() was a generator, but in your code it isn't, it returns a generator, so I'd expect your code to invoke it this way: \n```\nani = anim.FuncAnimation(fig, simPoints, simData(), blit = false, interval=10, repeat= true)\n```\nLet's finish the problem -- since we can't get Python to recognize the return value of simData() as an iterator, we'll ignore that feature and have simPoints() call simData() to launch the task and then return a function for Python to animate: \n```\nusing PyCall\nusing PyPlot\npygui(true)\n\n@pyimport matplotlib.animation as animation\n\nfunction simData()\n t_max = 10.0\n dt = 0.05\n x = 0.0\n t = -dt\n\n function it()\n while t < t_max\n x = sin(pi * t)\n t = t + dt\n produce(x, t)\n end\n end\n\n Task(it)\nend\n\nfunction simPoints()\n task = simData()\n\n function points(frame_number)\n x, t = consume(task)\n line[:set_data](t, x)\n return(line, \"\")\n end\n\n points\nend\n\nfigure = plt[:figure]()\naxis = figure[:add_subplot](111)\nline = axis[:plot]([], [], \"bo\", ms = 10)[1]\naxis[:set_ylim](-1, 1)\naxis[:set_xlim](0, 10)\n\nani = animation.FuncAnimation(figure, simPoints(), blit=false, interval=10, frames=200, repeat=false)\n\nplt[:show]()\n```\nThis works for one pass of the bouncing ball across the graph and stops when it hits the right edge (unlike the original Python, which repeats).","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35142199\/implementing-an-iterator-in-julia-for-an-animation-with-pyplot","best_answers_votes":8,"tags":["python","animation","matplotlib","julia"],"question_length":2122,"response_length":1564,"tag_count":4} {"question":"Why does matplotlib require setting log scale before plt.scatter() but not plt.plot()? I found out in this helpful answer that plt.scatter() and plt.plot() behave differently when a logarithmic scale is used on the y axis. With plot, I can change to log any time before I use plt.show(), but log has to be set up-front, before the scatter method is used.
Is this just a historical and irreversible artifact in matplotlib, or is this in the 'unexpected behavior' category? \n```\nimport matplotlib.pyplot as plt\n\nX = [0.997, 2.643, 0.354, 0.075, 1.0, 0.03, 2.39, 0.364, 0.221, 0.437]\nY = [15.487507, 2.320735, 0.085742, 0.303032, 1.0, 0.025435, 4.436435,\n 0.025435, 0.000503, 2.320735]\n\nplt.figure()\n\nplt.subplot(2,2,1)\nplt.scatter(X, Y)\nplt.xscale('log')\nplt.yscale('log')\nplt.title('scatter - scale last') \n\nplt.subplot(2,2,2)\nplt.plot(X, Y)\nplt.xscale('log')\nplt.yscale('log')\nplt.title('plot - scale last') \n\nplt.subplot(2,2,3)\nplt.xscale('log')\nplt.yscale('log')\nplt.scatter(X, Y)\nplt.title('scatter - scale first') \n\n\nplt.subplot(2,2,4)\nplt.xscale('log')\nplt.yscale('log')\nplt.plot(X, Y)\nplt.title('plot - scale first') \n\n\nplt.show()\n```","response":"This somehow has to do with the display area (axes limits) calculated by matplotlib. This behaviour is fixed by manually editing the axes range using the set_xlim and set_ylim methods. \n```\nplt.figure()\nplt.scatter(X, Y)\nplt.yscale('log')\nplt.xscale('log')\naxes = plt.gca()\naxes.set_xlim([min(X),max(X)])\naxes.set_ylim([min(Y),max(Y)])\nplt.show()\n```\nHowever, I have not yet figured out the exact reason for this behavior. Suggestions are welcome. EDIT As mentioned in the comments section, Matplotlib has identified \"Autoscaling has fundamental problems\" as a Release Critical Issue on their official GitHub repo, which should be fixed in upcoming versions. Thanks.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38800189\/why-does-matplotlib-require-setting-log-scale-before-plt-scatter-but-not-plt-p","best_answers_votes":3,"tags":["python","python-2.7","matplotlib"],"question_length":1151,"response_length":674,"tag_count":3} {"question":"Plot Feature Importance with feature names In R there are pre-built functions to plot feature importance of a Random Forest model. But in Python such a method seems to be missing. I searched for a method in matplotlib. model.feature_importances_ gives me the following: \n```\narray([ 2.32421835e-03, 7.21472336e-04, 2.70491223e-03,\n 3.34521084e-03, 4.19443238e-03, 1.50108737e-03,\n 3.29160540e-03, 4.82320256e-01, 3.14117333e-03])\n```\nThen using the following plotting function: \n```\n>> pyplot.bar(range(len(model.feature_importances_)), model.feature_importances_)\n>> pyplot.show()\n```\nI get a barplot, but I would like a barplot with labels, with importances shown horizontally in sorted order. I am also exploring seaborn and was not able to find a method.","response":"Quick answer for data scientists that ain't got no time to waste: Load the feature importances into a pandas series indexed by your column names, then use its plot method. For a classifier model trained using X: \n```\nfeat_importances = pd.Series(model.feature_importances_, index=X.columns)\nfeat_importances.nlargest(20).plot(kind='barh')\n```\nSlightly more detailed answer with a full example: Assuming you trained your model with data contained in a pandas dataframe, this is fairly painless if you load the feature importance into a pandas Series, then you can leverage its indexing to get the variable names displayed easily. The plot argument kind='barh' gives us a horizontal bar chart, but you could easily substitute this argument for kind='bar' for a traditional bar chart with the feature names along the x-axis if you prefer.
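(A quick hedged aside, not part of the original example: nlargest returns values in descending order and barh draws rows bottom-up, so the biggest bar lands at the bottom; re-sorting ascending first puts it on top.) \n```\n(pd.Series(model.feature_importances_, index=X.columns)\n .nlargest(20)\n .sort_values() # ascending, so the largest bar ends up at the top of the chart\n .plot(kind='barh'))\n```\n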
nlargest(n) is a pandas Series method which will return a subset of the series with the largest n values. This is useful if you've got lots of features in your model and you only want to plot the most important. A quick complete example using the classic Kaggle Titanic dataset... \n```\nimport pandas as pd\nfrom sklearn.ensemble import RandomForestClassifier\n# don't forget this if you're using jupyter!\n%matplotlib inline\n\nX = pd.read_csv(\"titanic_train.csv\")\nX = X[['Pclass', 'Age', 'Fare', 'Parch', 'SibSp', 'Survived']].dropna()\ny = X.pop('Survived')\n\nmodel = RandomForestClassifier()\nmodel.fit(X, y)\n\n(pd.Series(model.feature_importances_, index=X.columns)\n .nlargest(4)\n .plot(kind='barh')) # some method chaining, because it's sexy!\n```\nWhich will give you this:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44511636\/plot-feature-importance-with-feature-names","best_answers_votes":72,"tags":["python","matplotlib","random-forest","seaborn"],"question_length":781,"response_length":1627,"tag_count":4} {"question":"Set yaxis of all subplots to the same range - Matplotlib I'm wondering how I can force my subplots to share the y-axis range. This is my code: \n```\nf, axes = plt.subplots(7, 1, sharex='col', sharey='row', figsize=(15, 30))\ndistance = []\n\nfor i in range(simulations):\n delta = numpy.zeros((simulations+samples, simulations+samples))\n data_x = sample_x[i*samples:(i*samples)+samples] + ensamble_x\n data_y = sample_y[i*samples:(i*samples)+samples] + ensamble_y\n for j in range(simulations+samples):\n for k in range(simulations+samples):\n if j <= k:\n dist = similarity_measure((data_x[j].flatten(), data_y[j].flatten()), (data_x[k].flatten(), data_y[k].flatten()))\n delta[j, k] = delta[k, j] = dist\n\n delta = 1-((delta+1)\/2) \n delta \/= numpy.max(delta)\n model = manifold.TSNE(n_components=2, random_state=0, metric='precomputed')\n coords = model.fit_transform(delta)\n\n mds = manifold.MDS(n_components=2, max_iter=3000, eps=1e-9, random_state=0,\n dissimilarity=\"precomputed\", n_jobs=1)\n coords = mds.fit(delta).embedding_\n\n close, far = find_distance(coords[:samples, :], coords[samples+i, :])\n\n distance.append((close, far))\n\n axes[i].scatter(coords[:samples, 0], coords[:samples, 1], marker='x', c=colors[i], s=50, edgecolor='None')\n axes[i].scatter(coords[samples:, 0], coords[samples:, 1], marker='o', c=colors, s=50, edgecolor='None')\n\n axes[i].scatter(coords[close, 0], coords[close, 1], marker='s', facecolor=\"none\", c=colors[i], s=50, edgecolor='None')\n axes[i].scatter(coords[far, 0] , coords[far, 1] , marker='s', facecolor=\"none\", c=colors[i], s=50, edgecolor='None')\n\n axes[i].set_title('Simulation '+str(i+1), fontsize=20)\n\nmarkers = []\nlabels = [str(n+1) for n in range(simulations)]\nfor i in range(simulations):\n markers.append(Line2D([0], [0], linestyle='None', marker=\"o\", markersize=10, markeredgecolor=\"none\", markerfacecolor=colors[i]))\nlgd = plt.legend(markers, labels, numpoints=1, bbox_to_anchor=(1.0, -0.055), ncol=simulations) \nplt.tight_layout()\nplt.ylim(-1, 1)\nplt.axis('equal')\nplt.savefig('Simulations.pdf', bbox_extra_artists=(lgd,), format='pdf', bbox_inches='tight')\n```\nAnd its result: As can be seen, the y axis limits differ from one subplot to another. I'd like to use the max\/min range generated. Thank you.
EDIT: MINIMAL EXAMPLE \n```\n%matplotlib inline\n\nfrom sklearn.preprocessing import normalize\nfrom sklearn import manifold\nfrom matplotlib import pyplot as plt\nfrom matplotlib.lines import Line2D\n\nimport numpy\nimport itertools\n\nf, axes = plt.subplots(7, 1, sharex='col', sharey='row', figsize=(15, 30))\ndistance = []\n\nfor i in range(7):\n delta = numpy.random.randint(0, 100, (100, 100))\n\n axes[i].scatter(delta[:, 0], delta[:, 1], marker='x', c='r', s=50, edgecolor='None')\n\n axes[i].set_title('Simulation '+str(i+1), fontsize=20)\n axes[i].set_ylim(0, 100)\n\nmarkers = []\n\nplt.tight_layout()\nplt.axis('equal')\n```","response":"Your 1st line \n```\nf, axes = plt.subplots(7, 1, sharex='col', sharey='row', figsize=(15, 30))\n```\nhas an inappropriate value for the sharey parameter. Using sharey='row' you're asking that all the subplots in each row share the same y axis, but each of your subplots is in a row by itself, so you see no sharing. If you try sharey=True or sharey='col' you'll get what you want. Addendum The following code \n```\n\n```python\na = np.random.random(21)\n```\n\n```python\nb = a+5\n```\n\n```python\nx = np.arange(21)\n```\n\n```python\nf, (ax, bx) = plt.subplots(2,1,sharey='row') # like yours\n```\n\n```python\nax.plot(x,a)\n#Output\n#[]\n```\n\n```python\nbx.plot(x,b)\n#Output\n#[]\n```\n\n```python\nf, (ax, bx) = plt.subplots(2,1,sharey='col') # like mine\n```\n\n```python\nax.plot(x,a)\n#Output\n#[]\n```\n\n```python\nbx.plot(x,b)\n#Output\n#[]\n```\n\n```python\n```\ngives me the following two plots. Can you spot a single difference?\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36411164\/set-yaxis-of-all-subplots-to-the-same-range-matplotlib","best_answers_votes":36,"tags":["python","matplotlib","range","axis"],"question_length":2989,"response_length":1019,"tag_count":4} {"question":"Python legend attribute error Why am I getting an error here that relates to the plt.plot label? \n```\nfig = plt.figure()\nax = plt.gca()\nbarplt = plt.bar(bins,frq,align='center',label='Dgr')\nnormplt = plt.plot(bins_n,frq_n,'--r', label='Norm');\nax.set_xlim([min(bins)-1, max(bins)+1])\nax.set_ylim([0, max(frq)])\nplt.xlabel('Dgr')\nplt.ylabel('Frequency')\nplt.show()\nplt.legend(handles=[barplt,normplt])\n```\nThis is the error that I get: 'list' object has no attribute 'get_label'","response":"Because plt.plot can plot more than one line at once, it returns a list of line2D objects, even if you only plot one line (i.e. in your case, a list of length 1). When you grab its handle for the legend, you want to only use the first item of this list (the actual line2D object). There are (at least) two ways you can resolve this: 1) add a comma after normplt when you call plt.plot, to only store the first item from the list in normplt \n```\nbarplt = plt.bar(bins,frq,width,align='center',label='Dgr')\nnormplt, = plt.plot(bins_n,frq_n,'--r', label='Norm') # note the comma after normplt\n\nprint normplt\n# Line2D(Norm) <-- This is the line2D object, not a list, so we can use it in legend\n...\nplt.legend(handles=[barplt,normplt])\n```\n2) Use only the first item in the list when you call plt.legend (normplt[0]): \n```\nbarplt = plt.bar(bins,frq,width,align='center',label='Dgr')\nnormplt = plt.plot(bins_n,frq_n,'--r', label='Norm')\n\nprint normplt\n# [] \n# Note, this is a list containing the Line2D object.
We just want the object, \n# so we can use normplt[0] in legend\n...\nplt.legend(handles=[barplt,normplt[0]])\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36329269\/python-legend-attribute-error","best_answers_votes":36,"tags":["python-3.x","matplotlib"],"question_length":477,"response_length":1168,"tag_count":2} {"question":"Use Line2D to plot line in matplotlib I have the data: \n```\nx = [10,24,23,23,3]\ny = [12,2,3,4,2]\n```\nI want to plot it using matplotlib.lines.Line2D(xdata, ydata). I tried: \n```\nimport matplotlib.lines\nmatplotlib.lines.Line2D(x, y)\n```\nBut how do I show the line?","response":"You should add the line to a plot and then show it: \n```\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n```python\nfrom matplotlib.lines import Line2D\n```\n\n```python\nfig = plt.figure()\n```\n\n```python\nax = fig.add_subplot(111)\n```\n\n```python\nx = [10,24,23,23,3]\n```\n\n```python\ny = [12,2,3,4,2]\n```\n\n```python\nline = Line2D(x, y)\n```\n\n```python\nax.add_line(line)\n#Output\n#\n```\n\n```python\nax.set_xlim(min(x), max(x))\n#Output\n#(3, 24)\n```\n\n```python\nax.set_ylim(min(y), max(y))\n#Output\n#(2, 12)\n```\n\n```python\nplt.show()\n```\nThe result:\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28688210\/use-line2d-to-plot-line-in-matplotlib","best_answers_votes":30,"tags":["python","matplotlib"],"question_length":263,"response_length":529,"tag_count":2} {"question":"python matplotlib histogram specify different colours for different bars I want to colour different bars in a histogram based on which bin they belong to. e.g. in the below example, I want the first 3 bars to be blue, the next 2 to be red, and the rest black (the actual bars and colour is determined by other parts of the code). I can change the colour of all the bars using the color option, but I would like to be able to give a list of colours that are used. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.random.rand(1000)\nplt.hist(data,color = 'r')\n```","response":"One way may be similar to the approach in another answer: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\ndata = np.random.rand(1000)\n\nN, bins, patches = ax.hist(data, edgecolor='white', linewidth=1)\n\nfor i in range(0,3):\n patches[i].set_facecolor('b')\nfor i in range(3,5): \n patches[i].set_facecolor('r')\nfor i in range(5, len(patches)):\n patches[i].set_facecolor('black')\n\nplt.show()\n```\nResult:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/49290266\/python-matplotlib-histogram-specify-different-colours-for-different-bars","best_answers_votes":41,"tags":["python","matplotlib","histogram"],"question_length":578,"response_length":437,"tag_count":3} {"question":"bar plot xtick frequency I want to create a simple bar chart for a pandas DataFrame object. However, the xticks on the chart appear to be too granular, whereas if I change the plot to a line chart, the xticks are optimized for better viewing. I was wondering if I can bring the same line chart xtick frequency to the bar chart? Thanks.
\n```\nlocks.plot(kind='bar',y='SUM')\n```\nEDIT Resultant plot:","response":"You can reduce the number of ticks by setting one every n ticks, doing something like: \n```\nn = 10\n\nax = locks.plot(kind='bar', y='SUM')\nticks = ax.xaxis.get_ticklocs()\nticklabels = [l.get_text() for l in ax.xaxis.get_ticklabels()]\nax.xaxis.set_ticks(ticks[::n])\nax.xaxis.set_ticklabels(ticklabels[::n])\n\nax.figure.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19143857\/bar-plot-xtick-frequency","best_answers_votes":41,"tags":["python","matplotlib","plot","pandas"],"question_length":382,"response_length":326,"tag_count":4} {"question":"How to set the default figure size and DPI of all plots drawn by `matplotlib.pyplot` I know how to set figure size and DPI of one plot by fig, ax = plt.figure(figsize=(8,8), dpi=140). But I am wondering whether there is a way to change the figure size or DPI of all plots without specifying these values each time. Could anyone help me? Thanks in advance.","response":"For your specific case, you probably want to set \n```\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = [8.0, 8.0]\nplt.rcParams['figure.dpi'] = 140\n```\nThis will change the default for this one script. However, there are a number of ways to change the defaults permanently. You could modify the matplotlibrc file or create your own style sheet. Please refer to the matplotlib documentation for details: https:\/\/matplotlib.org\/tutorials\/introductory\/customizing.html https:\/\/matplotlib.org\/users\/dflt_style_changes.html","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/56231689\/how-to-set-the-default-figure-size-and-dpi-of-all-plots-drawn-by-matplotlib-pyp","best_answers_votes":36,"tags":["python","matplotlib","data-visualization"],"question_length":353,"response_length":542,"tag_count":3} {"question":"Seaborn Catplot set values over the bars I plotted a catplot in seaborn like this \n```\nimport seaborn as sns\nimport pandas as pd\n\ndata = {'year': [2016, 2013, 2014, 2015, 2016, 2013, 2014, 2015, 2016, 2013, 2014, 2015, 2016, 2013, 2014, 2015, 2016, 2013, 2014, 2015], 'geo_name': ['Michigan', 'Michigan', 'Michigan', 'Michigan', 'Washtenaw County, MI', 'Washtenaw County, MI', 'Washtenaw County, MI', 'Washtenaw County, MI', 'Ann Arbor, MI', 'Ann Arbor, MI', 'Ann Arbor, MI', 'Ann Arbor, MI', 'Philadelphia, PA', 'Philadelphia, PA', 'Philadelphia, PA', 'Philadelphia, PA', 'Ann Arbor, MI Metro Area', 'Ann Arbor, MI Metro Area', 'Ann Arbor, MI Metro Area', 'Ann Arbor, MI Metro Area'], 'geo': ['04000US26', '04000US26', '04000US26', '04000US26', '05000US26161', '05000US26161', '05000US26161', '05000US26161', '16000US2603000', '16000US2603000', '16000US2603000', '16000US2603000', '16000US4260000', '16000US4260000', '16000US4260000', '16000US4260000', '31000US11460', '31000US11460', '31000US11460', '31000US11460'], 'income': [50803.0, 48411.0, 49087.0, 49576.0, 62484.0, 59055.0, 60805.0, 61003.0, 57697.0, 55003.0, 56835.0, 55990.0, 39770.0, 37192.0, 37460.0, 38253.0, 62484.0, 59055.0, 60805.0, 61003.0], 'income_moe': [162.0, 163.0, 192.0, 186.0, 984.0, 985.0, 958.0, 901.0, 2046.0, 1688.0, 1320.0, 1259.0, 567.0, 424.0, 430.0, 511.0, 984.0, 985.0, 958.0, 901.0]}\ndf = pd.DataFrame(data)\n\ng = sns.catplot(x='year', y='income', data=df, kind='bar', hue='geo_name',
legend=True)\ng.fig.set_size_inches(15,8)\ng.fig.subplots_adjust(top=0.81,right=0.86)\n```\nI am getting an output like the one shown below. I want to add the values of each bar at its top in K representation. For example, in 2013 the bar for Michigan is at 48411, so I want to add the value 48.4K on top of that bar. Likewise for all the bars.","response":"Use matplotlib.pyplot.bar_label as of matplotlib v3.4.2 See Adding value labels on a matplotlib bar chart for other plotting options related to the new method. See the matplotlib: Bar Label Demo page for additional formatting options. Use the fmt parameter for simple formats, and the labels parameter for customized string formatting. For horizontal bars, use v.get_width instead of v.get_height. Tested in python 3.11, pandas 1.5.3, matplotlib 3.7.1, seaborn 0.12.2 For single plot only \n```py\ng = sns.catplot(data=df, x='year', y='income', kind='bar', hue='geo_name', legend=True, height=6, aspect=2)\n\n# extract the matplotlib axes_subplot objects from the FacetGrid\nax = g.facet_axis(0, 0) # or ax = g.axes.flat[0]\n\n# iterate through the axes containers\nfor c in ax.containers:\n labels = [f'{(v.get_height() \/ 1000):.1f}K' for v in c]\n ax.bar_label(c, labels=labels, label_type='edge')\n```\nFor single or multiple plots \n```py\ng = sns.catplot(data=df, x='year', y='income', kind='bar', col='geo_name', col_wrap=3)\n\ng.fig.subplots_adjust(top=0.9)\ng.fig.suptitle('Bar Count with Annotations')\n\n# iterate through axes\nfor ax in g.axes.ravel():\n \n # add annotations\n for c in ax.containers:\n labels = [f'{(v.get_height() \/ 1000):.1f}K' for v in c]\n ax.bar_label(c, labels=labels, label_type='edge')\n ax.margins(y=0.2)\n\nplt.show()\n```\nAlso works for grouped bars in Facets (subplots) \n```py\ndf = sns.load_dataset('titanic')\ng = sns.catplot(data=df, kind='count', x='survived', hue='sex', col='class')\n\n# iterate through axes\nfor ax in g.axes.ravel():\n \n # add annotations\n for c in ax.containers:\n # add custom labels with the labels=labels parameter if needed\n # labels = [f'{h}' if (h := v.get_height()) > 0 else '' for v in c]\n ax.bar_label(c, label_type='edge')\n ax.margins(y=0.2)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/55586912\/seaborn-catplot-set-values-over-the-bars","best_answers_votes":28,"tags":["python","matplotlib","seaborn","bar-chart","plot-annotations"],"question_length":1801,"response_length":1852,"tag_count":5} {"question":"Ordering and Formatting Dates on X-Axis in Seaborn Bar Plot This seems so simple, but for the life of me I can't figure it out. I am new to Python and Seaborn, and I am doing all this online at PythonAnywhere. All I am trying to do is create a simple barplot in seaborn, with dates ordered properly (that is, ascending from left to right), on the x-axis.
When I try this: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport datetime\nimport pandas as pd\nimport seaborn as sns\n\nemp = pd.DataFrame([[32, \"5\/31\/2018\"], [3, \"2\/28\/2018\"], [40, \"11\/30\/2017\"], [50, \"8\/31\/2017\"], [51, \"5\/31\/2017\"]], \n columns=[\"jobs\", \"12monthsEnding\"])\n\nfig = plt.figure(figsize = (10,7))\n\nsns.barplot(x = \"12monthsEnding\", y = \"uniqueClientExits\", data = emp, \nestimator = sum, ci = None)\n\nfig.autofmt_xdate()\nplt.show()\n```\nI get this (a nice-looking bar graph, but with the dates ordered descending from left to right). And then when I try to convert the objects to datetime (note: I'm using pd.to_datetime() below in order to try and recreate what happens when I use parse_dates in pd.read_csv(), which is how I'm actually creating the dataframe): \n```\nemp = pd.DataFrame([[32, pd.to_datetime(\"5\/31\/2018\")], [3, pd.to_datetime(\"2\/28\/2018\")], [40, pd.to_datetime(\"11\/30\/2017\")], [50, pd.to_datetime(\"8\/31\/2017\")], [51, pd.to_datetime(\"5\/31\/2017\")]], \n columns=[\"jobs\", \"12monthsEnding\"])\n\nfig = plt.figure(figsize = (10,7))\n\nsns.barplot(x = \"12monthsEnding\", y = \"uniqueClientExits\", data = emp, \nestimator = sum, ci = None)\n\nfig.autofmt_xdate()\n\nplt.show()\n```\nI get this (a bar plot with the dates in the right order, but the WRONG format): the same bar plot, with the dates ordered properly, but in the full, long datetime format, with the time, etc. But all I want is the day\/month\/year. I've scoured stackoverflow for two days now and nothing has worked. I'm starting to wonder if part of the reason is because I'm working on PythonAnywhere. But I also can't find any reason why that would be. This is driving me nuts. Looking forward to any assistance. Thanks.","response":"Using your second approach, simply sort and reformat the datetime values to YYYY-MM-DD and pass values into set_xticklabels. Below demonstrates with random, seeded data: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n# RANDOM DATA\nnp.random.seed(62918)\nemp = pd.DataFrame({'uniqueClientExits': [np.random.randint(15) for _ in range(50)],\n '12monthsEnding': pd.to_datetime(\n np.random.choice(\n pd.date_range('2018-01-01', periods=50), \n 50)\n )\n }, columns = ['uniqueClientExits','12monthsEnding'])\n\n# PLOTTING\nfig, ax = plt.subplots(figsize = (12,6)) \nfig = sns.barplot(x = \"12monthsEnding\", y = \"uniqueClientExits\", data = emp, \n estimator = sum, ci = None, ax=ax)\n\nx_dates = emp['12monthsEnding'].dt.strftime('%Y-%m-%d').sort_values().unique()\nax.set_xticklabels(labels=x_dates, rotation=45, ha='right')\n```\nTo check graph output, run a groupby().sum(): \n```\nprint(emp.groupby('12monthsEnding').sum().head())\n\n# uniqueClientExits\n# 12monthsEnding \n# 2018-01-01 12\n# 2018-01-02 4\n# 2018-01-04 11\n# 2018-01-06 13\n# 2018-01-08 10\n# 2018-01-11 11\n# 2018-01-14 9\n# 2018-01-15 0\n# 2018-01-16 4\n# 2018-01-17 5\n# ...\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/51105648\/ordering-and-formatting-dates-on-x-axis-in-seaborn-bar-plot","best_answers_votes":35,"tags":["python","pandas","matplotlib","seaborn","pythonanywhere"],"question_length":2094,"response_length":1625,"tag_count":5} {"question":"Controlling alpha value on 3D scatter plot using Python and matplotlib I'm plotting a 3D scatter plot using the function scatter and mplot3d.
I'm choosing a single color for all points in the plot, but when drawn by matplotlib the transparency of the points is set relative to the distance from the camera. Is there any way to disable this feature? I've tried setting the alpha kwarg to None\/1 and also set vmin\/vmax to 1 (in an attempt to force the color scaling to be a solid single color) with no luck. I didn't see any other likely options related to this setting in the scatter documentation. Thanks!","response":"For Matplotlib 1.4+, the answer provided below by @fraxel is the best solution: call ax.scatter with the argument depthshade=False. There are no arguments that can control this. Here is a hack method. Disable the set_edgecolors and set_facecolors methods, so that mplot3d can't update the alpha part of the colors: \n```\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\nx = np.random.sample(20)\ny = np.random.sample(20)\nz = np.random.sample(20)\ns = ax.scatter(x, y, z, c=\"r\")\ns.set_edgecolors = s.set_facecolors = lambda *args:None\n\nax.legend()\nax.set_xlim3d(0, 1)\nax.set_ylim3d(0, 1)\nax.set_zlim3d(0, 1)\n\nplt.show()\n```\nIf you want to call the set_edgecolors and set_facecolors methods later, you can back up these two methods before disabling them: \n```\ns._set_facecolors, s._set_edgecolors = s.set_facecolors, s.set_edgecolors\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15533246\/controlling-alpha-value-on-3d-scatter-plot-using-python-and-matplotlib","best_answers_votes":17,"tags":["python","numpy","matplotlib"],"question_length":196,"response_length":188,"tag_count":2} {"question":"How to add a legend to matplotlib pie chart? Using this example http:\/\/matplotlib.org\/examples\/pie_and_polar_charts\/pie_demo_features.html how could I add a legend to this pie chart? My problem is that I have one big slice at 88.4%, the second largest slice is 10.6%, and the other slices are 0.7 and 0.3%. The labels around the pie don't appear (except for the biggest slice) and neither do the percentage values for the smaller slices. So I guess I can add a legend showing the names and the values. But I haven't found out how... \n```\n# -*- coding: UTF-8 -*-\nimport matplotlib.pyplot as plt\n# The slices will be ordered and plotted counter-clockwise.\nlabels = 'Rayos X', 'RMN en soluci\u00f3n', 'Microscop\u00eda electr\u00f3nica', 'Otros'\nsizes = [88.4, 10.6, 0.7, 0.3]\ncolors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral']\nexplode = (0.1, 0, 0, 0)\nplt.pie(sizes, explode=explode, labels=labels, colors=colors, shadow=True, startangle=90)\nplt.legend(title=\"t\u00e9cnica\")\n# Set aspect ratio to be equal so that pie is drawn as a circle.\nplt.axis('equal')\nplt.show()\n```","response":"I checked your code, and the plt.legend() creates a legend, just how you want it to be; maybe set the loc=\"lower left\", so it does not overlap with the relevant pieces of pie. For me, the strings are displayed properly, besides the non-standard chars - which might cause the problem that they are not displayed to you at all. Only the biggest slice and \"Otros\" do not contain special chars. Maybe also try to resize the figure, as they might be pushed out of the canvas. Please refer to how to write accents with matplotlib and try again with proper strings. The percentages are not shown, because you did not set them to be shown.
Please refer to the example posted by you, as you omitted autopct='%1.1f%%' which will plot the percentages. In this special case, I would rather not plot the percentages, as they will overlap just like the labels on the border, as some slices are too small. Maybe add this information to the legend. Putting it all together (besides the special chars - I had some problems activating TeX), try the following code: \n```\n# -*- coding: UTF-8 -*-\nimport matplotlib.pyplot as plt\n# The slices will be ordered and plotted counter-clockwise.\nlabels = [r'Rayos X (88.4 %)', r'RMN en solucion (10.6 %)', \nr'Microscopia electronica (0.7 %)', r'Otros (0.3 %)']\nsizes = [88.4, 10.6, 0.7, 0.3]\ncolors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral']\npatches, texts = plt.pie(sizes, colors=colors, startangle=90)\nplt.legend(patches, labels, loc=\"best\")\n# Set aspect ratio to be equal so that pie is drawn as a circle.\nplt.axis('equal')\nplt.tight_layout()\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19852215\/how-to-add-a-legend-to-matplotlib-pie-chart","best_answers_votes":33,"tags":["matplotlib","legend","pie-chart"],"question_length":1056,"response_length":1595,"tag_count":3} {"question":"Plotting animated quivers in Python I am trying to animate a vector such as wind in Python. I tried to use the quiver function in pylab in combination with matplotlib.animation from matplotlib. However, the result says 'QuiverKey' object is not subscriptable. I think it is because I don't fully understand these two functions, or the two just don't work together. Below is my code; it is actually a combination of the quiver and animation examples from matplotlib.
\n```\ndef update_line(num, data, line):\n line.set_data(data[...,:num])\n return line,\n\nX,Y = np.meshgrid(np.arange(0,2*np.pi,.2),np.arange(0,2*np.pi,.2) ) \nU = np.cos(X)\nV = np.sin(Y)\n\nfig1 = plt.figure()\nQ = quiver( X[::3, ::3], Y[::3, ::3], U[::3, ::3], V[::3, ::3],\n pivot='mid', color='r', units='inches' )\ndata = quiverkey(Q, 0.5, 0.03, 1, r'$1 \\frac{m}{s}$', fontproperties={'weight': 'bold'})\nplt.axis([-1, 7, -1, 7])\ntitle('scales with plot width, not view')\nl, = plt.plot([], [], 'r-') \nplt.xlabel('x')\nplt.ylabel('y')\nplt.title('test')\nline_ani = animation.FuncAnimation(fig1, update_line, 25, fargs=(data, l),\ninterval=50, blit=True)\nplt.show()\n```","response":"Here's an example to get you started: \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom matplotlib import animation\n\nX, Y = np.mgrid[:2*np.pi:10j,:2*np.pi:5j]\nU = np.cos(X)\nV = np.sin(Y)\n\nfig, ax = plt.subplots(1,1)\nQ = ax.quiver(X, Y, U, V, pivot='mid', color='r', units='inches')\n\nax.set_xlim(-1, 7)\nax.set_ylim(-1, 7)\n\ndef update_quiver(num, Q, X, Y):\n \"\"\"updates the horizontal and vertical vector components by a\n fixed increment on each frame\n \"\"\"\n\n U = np.cos(X + num*0.1)\n V = np.sin(Y + num*0.1)\n\n Q.set_UVC(U,V)\n\n return Q,\n\n# you need to set blit=False, or the first set of arrows never gets\n# cleared on subsequent frames\nanim = animation.FuncAnimation(fig, update_quiver, fargs=(Q, X, Y),\n interval=50, blit=False)\nfig.tight_layout()\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19329039\/plotting-animated-quivers-in-python","best_answers_votes":36,"tags":["python","animation","matplotlib"],"question_length":1164,"response_length":828,"tag_count":3} {"question":"Discretized continuous Fourier transform with numpy Consider a function f(t): how do I compute the continuous Fourier transform g(w) and plot it (using numpy and matplotlib)? This or the inverse problem (g(w) given, plot of f(t) unknown) occurs if there exists no analytical solution to the Fourier integral.","response":"You can use the numpy FFT module for that, but have to do some extra work. First let's look at the Fourier integral, g(w) = 1\/sqrt(2*pi) * integral of f(t)*exp(-i*w*t) dt, and discretize it on samples t_k = t0 + k*dt. Here k,m are integers and N is the number of data points for f(t). Using this discretization we get g(w_m) = dt\/sqrt(2*pi) * exp(-i*w_m*t0) * sum_{k=0}^{N-1} f(t_k)*exp(-2*pi*i*m*k\/N), with w_m = 2*pi*m\/(N*dt). The sum in the last expression is exactly the Discrete Fourier Transformation (DFT) numpy uses (see section \"Implementation details\" of the numpy FFT module).
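(For reference -- numpy's forward DFT convention, assuming the default normalization, which is the same sum:) \n```\nA_m = sum_{k=0}^{N-1} a_k * exp(-2j*pi*m*k\/N), m = 0, ..., N-1\n```\n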
With this knowledge we can write the following python script \n```\nimport numpy as np\nimport matplotlib.pyplot as pl\n\n#Consider function f(t)=1\/(t^2+1)\n#We want to compute the Fourier transform g(w)\n\n#Discretize time t\nt0=-100.\ndt=0.001\nt=np.arange(t0,-t0,dt)\n#Define function\nf=1.\/(t**2+1.)\n\n#Compute Fourier transform by numpy's FFT function\ng=np.fft.fft(f)\n#frequency normalization factor is 2*np.pi\/dt\nw = np.fft.fftfreq(f.size)*2*np.pi\/dt\n\n\n#In order to get a discretisation of the continuous Fourier transform\n#we need to multiply g by a phase factor\ng*=dt*np.exp(-complex(0,1)*w*t0)\/(np.sqrt(2*np.pi))\n\n#Plot Result\npl.scatter(w,g,color=\"r\")\n#For comparison we plot the analytical solution\npl.plot(w,np.exp(-np.abs(w))*np.sqrt(np.pi\/2),color=\"g\")\n\npl.gca().set_xlim(-10,10)\npl.show()\npl.close()\n```\nThe resulting plot shows that the script works","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24077913\/discretized-continuous-fourier-transform-with-numpy","best_answers_votes":35,"tags":["math","numpy","matplotlib","fft","continuous-fourier"],"question_length":307,"response_length":1242,"tag_count":5} {"question":"Connecting two points in a 3D scatter plot In the code below, how do I create lines connecting each pair of scatter plots (i.e. linking the green circle to the yellow arrowhead) created by the two lines of code towards the end just before the .show() instruction? \n```\nimport matplotlib.pyplot\nfrom mpl_toolkits.mplot3d import Axes3D\n\ndates = [20020514, 20020515, 20020516, 20020517, 20020520]\nhighs = [1135, 1158, 1152, 1158, 1163]\nlows = [1257, 1253, 1259, 1264, 1252]\nupperLimits = [1125.0, 1125.0, 1093.75, 1125.0, 1125.0]\nlowerLimits = [1250.0, 1250.0, 1156.25, 1250.0, 1250.0]\n\nzaxisvalues0= [0, 0, 0, 0, 0]\nzaxisvalues1= [1, 1, 1, 1, 1]\nzaxisvalues2= [2, 2, 2, 2, 2]\n\nfig = matplotlib.pyplot.figure()\nax = fig.add_subplot(111, projection = '3d')\n\nax.plot(dates, zaxisvalues1, lowerLimits, color = 'b')\nax.plot(dates, zaxisvalues2, upperLimits, color = 'r')\n\nax.scatter(dates, zaxisvalues0, highs, color = 'g', marker = \"o\")\nax.scatter(dates, zaxisvalues0, lows, color = 'y', marker = \"^\")\n\nmatplotlib.pyplot.show()\n```","response":"Draw a line segment between those points: \n```\nimport matplotlib.pyplot\nfrom mpl_toolkits.mplot3d import Axes3D\n\ndates = [20020514, 20020515, 20020516, 20020517, 20020520]\nhighs = [1135, 1158, 1152, 1158, 1163]\nlows = [1257, 1253, 1259, 1264, 1252]\nupperLimits = [1125.0, 1125.0, 1093.75, 1125.0, 1125.0]\nlowerLimits = [1250.0, 1250.0, 1156.25, 1250.0, 1250.0]\n\nzaxisvalues0= [0, 0, 0, 0, 0]\nzaxisvalues1= [1, 1, 1, 1, 1]\nzaxisvalues2= [2, 2, 2, 2, 2]\n\nfig = matplotlib.pyplot.figure()\nax = fig.add_subplot(111, projection = '3d')\n\nax.plot(dates, zaxisvalues1, lowerLimits, color = 'b')\nax.plot(dates, zaxisvalues2, upperLimits, color = 'r')\n\nfor i,j,k,h in zip(dates,zaxisvalues0,lows,highs):\n ax.plot([i,i],[j,j],[k,h],color = 'g')\n\nax.scatter(dates, zaxisvalues0, highs, color = 'g', marker = \"o\")\nax.scatter(dates, zaxisvalues0, lows, color = 'y', marker = \"^\")\n\nmatplotlib.pyplot.show()\n```\nProduces:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10572939\/connecting-two-points-in-a-3d-scatter-plot","best_answers_votes":35,"tags":["python","matplotlib","mplot3d","matplotlib-3d"],"question_length":1045,"response_length":928,"tag_count":4} 
{"question":"Matplotlib imshow offset to match axis? I'm plotting a bunch of UTM coordinates using a matplotlib.pyplot.scatter. I also have a background air photo that I know matches the extent of the figure exactly. When I plot my data and set the axis I can display the scatter correctly. If I plot the air photo using imshow it uses the pixel number as the axis location. I need to shift the image (numpy array) to it's correct UTM position. Any ideas? I'm fairly new to matplotlib and numpy. For example: I know that the top left corner of the image (imshow coordinate: 0,0) has the UTM coordinate (269658.4, 538318.2). How do I tell imshow the same thing? I should also say that I investigated Basemap but it doesn't appear to fully support UTM yet. My study area is very small.","response":"You need to use the extent keyword argument to imshow. As a quick example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Random points between 50000 and 51000\nx, y = 1000 * np.random.random((2, 10)) + 50000\n\n# A 10x10 \"image\"...\nimage = np.arange(100).reshape((10,10))\n\n# In a lot of cases, image data will be \"flipped\" vertically, so you may need \n# use the `origin` kwarg, as well (or just swap the ymin and ymax ordering).\nplt.imshow(image, extent=[x.min(), x.max(), y.min(), y.max()])\nplt.plot(x, y, 'ro')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12324176\/matplotlib-imshow-offset-to-match-axis","best_answers_votes":34,"tags":["python","numpy","matplotlib","gis","coordinate-systems"],"question_length":770,"response_length":538,"tag_count":5} {"question":"How to label and change the scale of Seaborn kdeplot's axes Here's my code \n```\nimport numpy as np\nfrom numpy.random import randn\nimport pandas as pd\nfrom scipy import stats\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n\nfig = sns.kdeplot(treze, shade=True, color=c1,cut =0, clip=(0,2000))\nfig = sns.kdeplot(cjjardim, shade=True, color=c2,cut =0, clip=(0,2000))\n\nfig.figure.suptitle(\"Plot\", fontsize = 24)\n\nplt.xlabel('Purchase amount', fontsize=18)\nplt.ylabel('Distribution', fontsize=16)\n```\n, which results in the following plot: I want to do two things: 1) Change the scale of the y-axis by multiplying its values by 10000 and, if it's possible, add a % sign to the numbers. In other words, I want the y-axis values shown in the above plot to be 0%, 5%, 10%, 15%, 20%, 25%, and 30%. 2) Add more values to the x-axis. I'm particularly interested in showing the data in intervals of 200. In other words, I want the x-axis values shown in the plot to be 0, 200, 400, 600,... and so on.","response":"1) what you are looking for is most probably some combination of get_yticks() and set_yticks: \n```\nplt.yticks(fig.get_yticks(), fig.get_yticks() * 100)\nplt.ylabel('Distribution [%]', fontsize=16)\n```\nNote: as mwaskom is commenting times 10000 and a % sign is mathematically incorrect. 2) you can specify where you want your ticks via the xticks function. Then you have more ticks and data get easier to read. You do not get more data that way. 
\n```\nplt.xticks([0, 200, 400, 600])\nplt.xlabel('Purchase amount', fontsize=18)\n```\nNote: if you wanted to limit the view to your specified x-values you might even have a glimpse at plt.xlim() and reduce the figure to the interesting range.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/30819056\/how-to-label-and-change-the-scale-of-seaborn-kdeplots-axes","best_answers_votes":33,"tags":["python","matplotlib","seaborn"],"question_length":1020,"response_length":683,"tag_count":3} {"question":"in NetworkX cannot save a graph as jpg or png file I have a graph in NetworkX containing some info. After the graph is shown, I want to save it as jpg or png file. I used the matplotlib function savefig but when the image is saved, it does not contain anything. It is just a white image. Here is a sample code I wrote: \n```\nimport networkx as nx\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(figsize=(12,12))\nax = plt.subplot(111)\nax.set_title('Graph - Shapes', fontsize=10)\n\nG = nx.DiGraph()\nG.add_node('shape1', level=1)\nG.add_node('shape2', level=2)\nG.add_node('shape3', level=2)\nG.add_node('shape4', level=3)\nG.add_edge('shape1', 'shape2')\nG.add_edge('shape1', 'shape3')\nG.add_edge('shape3', 'shape4')\npos = nx.spring_layout(G)\nnx.draw(G, pos, node_size=1500, node_color='yellow', font_size=8, font_weight='bold')\n\nplt.tight_layout()\nplt.show()\nplt.savefig(\"Graph.png\", format=\"PNG\")\n```\nWhy is the image saved without anything inside (just white) ? This is the image saved (just blank):","response":"It's related to plt.show method. Help of show method: \n```\ndef show(*args, **kw):\n \"\"\"\n Display a figure.\n\n When running in ipython with its pylab mode, display all\n figures and return to the ipython prompt.\n\n In non-interactive mode, display all figures and block until\n the figures have been closed; in interactive mode it has no\n effect unless figures were created prior to a change from\n non-interactive to interactive mode (not recommended). In\n that case it displays the figures but does not block.\n\n A single experimental keyword argument, *block*, may be\n set to True or False to override the blocking behavior\n described above.\n \"\"\"\n```\nWhen you call plt.show() in your script, it seems something like file object is still open, and plt.savefig method for writing can not read from that stream completely. but there is a block option for plt.show that can change this behavior, so you can use it: \n```\nplt.show(block=False)\nplt.savefig(\"Graph.png\", format=\"PNG\")\n```\nOr just comment it: \n```\n# plt.show()\nplt.savefig(\"Graph.png\", format=\"PNG\")\n```\nOr just save befor show it: \n```\nplt.savefig(\"Graph.png\", format=\"PNG\")\nplt.show()\n```\nDemo: Here","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22635538\/in-networkx-cannot-save-a-graph-as-jpg-or-png-file","best_answers_votes":31,"tags":["python","image","matplotlib","save","networkx"],"question_length":995,"response_length":1194,"tag_count":5} {"question":"Top and bottom line on errorbar with python and seaborn I am trying to plot errorbars with python and seaborn but I am not entirely satisfied with how they look. The default seaborn error bars look like this : But I am looking to add the bottom and top lines on the error bars like this (in order to differentiated between the two error bars, it's the default matplotlib style) : How can I do this in seaborn ? 
Here is the code: \n```\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfig1 = plt.figure(figsize=(20, 12))\n\n\nx_values = [1,2,3,4]\ny_values = [1,2,3,4]\n\ny_error = [1,0.5,0.75,0.25]\n\nplt.errorbar(x_values, y_values, yerr=y_error ,fmt='o', markersize=8)\n\nplt.show()\n```","response":"The capsize parameter should be enough, but for some reason You have to specify the cap.set_markeredgewidth for them to show up too.. Based on: Matplotlib Errorbar Caps Missing. \n```\n(_, caps, _) = plt.errorbar(\n x_values, y_values, yerr=y_error, fmt='o', markersize=8, capsize=20)\n\nfor cap in caps:\n cap.set_markeredgewidth(1)\n```\nreturns:","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35915431\/top-and-bottom-line-on-errorbar-with-python-and-seaborn","best_answers_votes":24,"tags":["python","matplotlib","seaborn"],"question_length":685,"response_length":346,"tag_count":3} {"question":"KeyError when plotting a sliced pandas dataframe with datetimes I get a KeyError when I try to plot a slice of a pandas DataFrame column with datetimes in it. Does anybody know what could cause this? I managed to reproduce the error in a little self contained example (which you can also view here: http:\/\/nbviewer.ipython.org\/3714142\/): \n```\nimport numpy as np\nfrom pandas import DataFrame\nimport datetime\nfrom pylab import *\n\ntest = DataFrame({'x' : [datetime.datetime(2012,9,10) + datetime.timedelta(n) for n in range(10)], \n 'y' : range(10)})\n```\nNow if I plot: \n```\nplot(test['x'][0:5])\n```\nthere is not problem, but when I plot: \n```\nplot(test['x'][5:10])\n```\nI get the KeyError below (and the error message is not very helpfull to me). This only happens with datetime columns, not with other columns (as far as I experienced). E.g. plot(test['y'][5:10]) is not a problem. 
The error message: \n```\n---------------------------------------------------------------------------\nKeyError Traceback (most recent call last)\n in ()\n----> 1 plot(test['x'][5:10])\n\nC:\\Python27\\lib\\site-packages\\matplotlib\\pyplot.pyc in plot(*args, **kwargs)\n 2456 ax.hold(hold)\n 2457 try:\n-> 2458 ret = ax.plot(*args, **kwargs)\n 2459 draw_if_interactive()\n 2460 finally:\n\nC:\\Python27\\lib\\site-packages\\matplotlib\\axes.pyc in plot(self, *args, **kwargs)\n 3846 lines = []\n 3847 \n-> 3848 for line in self._get_lines(*args, **kwargs):\n 3849 self.add_line(line)\n 3850 lines.append(line)\n\nC:\\Python27\\lib\\site-packages\\matplotlib\\axes.pyc in _grab_next_args(self, *args, **kwargs)\n 321 return\n 322 if len(remaining) <= 3:\n--> 323 for seg in self._plot_args(remaining, kwargs):\n 324 yield seg\n 325 return\n\nC:\\Python27\\lib\\site-packages\\matplotlib\\axes.pyc in _plot_args(self, tup, kwargs)\n 298 x = np.arange(y.shape[0], dtype=float)\n 299 \n--> 300 x, y = self._xy_from_xy(x, y)\n 301 \n 302 if self.command == 'plot':\n\nC:\\Python27\\lib\\site-packages\\matplotlib\\axes.pyc in _xy_from_xy(self, x, y)\n 215 if self.axes.xaxis is not None and self.axes.yaxis is not None:\n 216 bx = self.axes.xaxis.update_units(x)\n--> 217 by = self.axes.yaxis.update_units(y)\n 218 \n 219 if self.command!='plot':\n\nC:\\Python27\\lib\\site-packages\\matplotlib\\axis.pyc in update_units(self, data)\n 1277 neednew = self.converter!=converter\n 1278 self.converter = converter\n-> 1279 default = self.converter.default_units(data, self)\n 1280 #print 'update units: default=%s, units=%s'%(default, self.units)\n 1281 if default is not None and self.units is None:\n\nC:\\Python27\\lib\\site-packages\\matplotlib\\dates.pyc in default_units(x, axis)\n 1153 'Return the tzinfo instance of *x* or of its first element, or None'\n 1154 try:\n-> 1155 x = x[0]\n 1156 except (TypeError, IndexError):\n 1157 pass\n\nC:\\Python27\\lib\\site-packages\\pandas\\core\\series.pyc in __getitem__(self, key)\n 374 def __getitem__(self, key):\n 375 try:\n--> 376 return self.index.get_value(self, key)\n 377 except InvalidIndexError:\n 378 pass\n\nC:\\Python27\\lib\\site-packages\\pandas\\core\\index.pyc in get_value(self, series, key)\n 529 \"\"\"\n 530 try:\n--> 531 return self._engine.get_value(series, key)\n 532 except KeyError, e1:\n 533 if len(self) > 0 and self.inferred_type == 'integer':\n\nC:\\Python27\\lib\\site-packages\\pandas\\_engines.pyd in pandas._engines.IndexEngine.get_value (pandas\\src\\engines.c:1479)()\n\nC:\\Python27\\lib\\site-packages\\pandas\\_engines.pyd in pandas._engines.IndexEngine.get_value (pandas\\src\\engines.c:1374)()\n\nC:\\Python27\\lib\\site-packages\\pandas\\_engines.pyd in pandas._engines.DictIndexEngine.get_loc (pandas\\src\\engines.c:2498)()\n\nC:\\Python27\\lib\\site-packages\\pandas\\_engines.pyd in pandas._engines.DictIndexEngine.get_loc (pandas\\src\\engines.c:2460)()\n\nKeyError: 0\n```","response":"HYRY explained why you get the KeyError.
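(A short aside from the editor, based on my own reading of the traceback rather than on HYRY's answer: matplotlib's date handling calls x[0] on the values, and indexing a pandas Series with 0 is a label lookup, so the slice [5:10], whose index labels start at 5, raises KeyError: 0. A minimal sketch to see it: \n```\ns = test['x'][5:10]\ns[0] # KeyError: 0, label-based lookup; label 0 is gone\ns.iloc[0] # positional lookup (modern pandas) still works\n```\n)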
To plot with slices using matplotlib you can do: \n```\n\n```python\nplot(test['x'][5:10].values)\n#Output\n#[]\n```\n\n```python\nplot(test['x'][5:10].reset_index(drop=True))\n#Output\n#[]\n#```\n#x, y plotting in one go with 0.7.3 \n#```\n```\n\n```python\ntest[5:10].set_index('x')['y'].plot()\n#Output\n#\n#```\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12406162\/keyerror-when-plotting-a-sliced-pandas-dataframe-with-datetimes","best_answers_votes":18,"tags":["python","pandas","numpy","matplotlib"],"question_length":4269,"response_length":440,"tag_count":4} {"question":"Pandas missing x tick labels When I add c to a pandas plot, x tick labels disappear. Does anyone know how to add them back? \n```\nimport pandas as pd\n\ndf = pd.DataFrame(\n {'mean': {0: 10,\n 1: 16,\n 2: 18,\n 3: 22,\n 4: 30},\n 'size': {0: 103, 1: 2509, 2: 41939, 3: 145997, 4: 143530},\n 'value': {0: 1.5, 1: 4.5, 2: 7.5, 3: 10.5, 4: 13.5}}\n)\n\nax = df.plot(kind='scatter', x='value', y='mean', s=60, c='size', cmap='RdYlGn')\n```\nTried to manually add x tick labels, but still not working. \n```\nax.set_xticks(df['value'])\nax.set_xticklabels(df['value'])\n```","response":"Okay, I think this is a bug with pandas plot. However, this SO post shows the following workaround. \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame(\n {'mean': {0: 10,\n 1: 16,\n 2: 18,\n 3: 22,\n 4: 30},\n 'size': {0: 103, 1: 2509, 2: 41939, 3: 145997, 4: 143530},\n 'value': {0: 1.5, 1: 4.5, 2: 7.5, 3: 10.5, 4: 13.5}}\n)\nfig, ax = plt.subplots()\ndf.plot(kind='scatter', x='value', y='mean', s=60, c='size', cmap='RdYlGn', ax=ax)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43578976\/pandas-missing-x-tick-labels","best_answers_votes":29,"tags":["python","pandas","matplotlib","plot"],"question_length":612,"response_length":518,"tag_count":4} {"question":"axis limits for scatter plot not holding in matplotlib I am trying to overlay a scatter plot onto a contour plot using matplotlib, which contains \n```\nplt.contourf(X, Y, XYprof.T, self.nLevels, extent=extentYPY, \\\n origin = 'lower')\n if self.doScatter == True and len(xyScatter['y']) != 0:\n plt.scatter(xyScatter['x'], xyScatter['y'], \\\n s=dSize, c=myColor, marker='.', edgecolor='none')\n plt.xlim(-xLimHist, xLimHist)\n plt.ylim(-yLimHist, yLimHist)\n plt.xlabel(r'$x$')\n plt.ylabel(r'$y$')\n```\nWhat ends up happening is the resulting plots extend to include all of the scatter points, which can exceed the limits for the contour plot. Is there any way to get around this?","response":"I used the following example to try and replicate your problem. If left to default, the range for x and y was -3 to 3. I input the xlim and ylim so the range for both was -2 to 2. It worked. 
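(One caution when re-running this example today, an editor's note about versions: pylab's bivariate_normal came from matplotlib.mlab, which removed it around matplotlib 3.1, so on current installs you would build Z yourself, for example with scipy: \n```\nfrom scipy.stats import multivariate_normal\n\n# same surfaces as bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0) and (X, Y, 1.5, 0.5, 1, 1)\nZ1 = multivariate_normal([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]).pdf(np.dstack((X, Y)))\nZ2 = multivariate_normal([1.0, 1.0], [[1.5**2, 0.0], [0.0, 0.5**2]]).pdf(np.dstack((X, Y)))\n```\nThe original example as posted:)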
\n```\nimport numpy as np\n import matplotlib.pyplot as plt\n from pylab import *\n\n # the random data\n x = np.random.randn(1000)\n y = np.random.randn(1000)\n\n fig = plt.figure(1, figsize=(5.5,5.5))\n\n X, Y = meshgrid(x, y)\n Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)\n Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1)\n Z = 10 * (Z1 - Z2)\n\n origin = 'lower'\n CS = contourf(x, y, Z, 10, # [-1, -0.1, 0, 0.1],\n cmap=cm.bone,\n origin=origin)\n\n title('Nonsense')\n xlabel('x-stuff')\n ylabel('y-stuff')\n\n # the scatter plot:\n axScatter = plt.subplot(111)\n axScatter.scatter(x, y)\n\n # set axes range\n plt.xlim(-2, 2)\n plt.ylim(-2, 2)\n\n show()\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10291221\/axis-limits-for-scatter-plot-not-holding-in-matplotlib","best_answers_votes":29,"tags":["python","matplotlib"],"question_length":729,"response_length":898,"tag_count":2} {"question":"How to plot a rectangle on a datetime axis using matplotlib? I tried to plot a rectangle on a graph with a datetime x-axis using the following code: \n```\nfrom datetime import datetime, timedelta\nfrom matplotlib.patches import Rectangle\nimport matplotlib.pyplot as plt\n\n# Create new plot\nfig = plt.figure()\nax = fig.add_subplot(111)\n\n# Create rectangle\nstartTime = datetime.now()\nwidth = timedelta(seconds = 1)\nendTime = startTime + width\nrect = Rectangle((startTime, 0), width, 1, color='yellow')\n\n# Plot rectangle\nax.add_patch(rect) ### ERROR HERE!!! ###\nplt.xlim([startTime, endTime])\nplt.ylim([0, 1])\nplt.show()\n```\nHowever, I get the error: \n```\nTypeError: unsupported operand type(s) for +: 'float' and 'datetime.timedelta'\n```\nWhat's going wrong? (I'm using matplotlib version 1.0.1)","response":"The problem is that matplotlib uses its own representation of dates\/times (floating number of days), so you have to convert them first. Furthermore, you will have to tell the xaxis that it should have date\/time ticks and labels. The code below does that: \n```\nfrom datetime import datetime, timedelta\nfrom matplotlib.patches import Rectangle\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\n# Create new plot\nfig = plt.figure()\nax = fig.add_subplot(111)\n\n# Create rectangle x coordinates\nstartTime = datetime.now()\nendTime = startTime + timedelta(seconds = 1)\n\n# convert to matplotlib date representation\nstart = mdates.date2num(startTime)\nend = mdates.date2num(endTime)\nwidth = end - start\n\n# Plot rectangle\nrect = Rectangle((start, 0), width, 1, color='yellow')\nax.add_patch(rect) \n\n# assign date locator \/ formatter to the x-axis to get proper labels\nlocator = mdates.AutoDateLocator(minticks=3)\nformatter = mdates.AutoDateFormatter(locator)\nax.xaxis.set_major_locator(locator)\nax.xaxis.set_major_formatter(formatter)\n\n# set the limits\nplt.xlim([start-width, end+width])\nplt.ylim([-.5, 1.5])\n\n# go\nplt.show()\n```\nResult: NOTE: Matplotlib 1.0.1 is very old. I can't guarantee that my example will work. You should try to update!","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31162780\/how-to-plot-a-rectangle-on-a-datetime-axis-using-matplotlib","best_answers_votes":28,"tags":["python","python-2.7","matplotlib"],"question_length":791,"response_length":1250,"tag_count":3} {"question":"Are there really only 4 Matplotlib Line Styles? 
I've been looking for new line styles in matplotlib, and the only line styles available are [\"-\", \"--\", \"-.\", \":\",]. (The style options ['', ' ', 'None',] don't count because they just hide the lines.) Are there really only 4 line styles in Matplotlib pyplot? Are there any extensions that add further line styles? Is there a way to customise line styles? How about some three-character line styles like: '--.': dash dash dot '-..': dash dot dot '...': dot dot dot (space) 'xxx': x's in a line '\\\/': Zig zags ie '\\\/\\\/\\\/\\\/' '::': parallel dots, ie ::::: These are just some ideas to expand the range of line styles.","response":"You can use the dashes kwarg to set custom dash styles. From the docs: Set the dash sequence, sequence of dashes with on off ink in points. If seq is empty or if seq = (None, None), the linestyle will be set to solid. Here are some examples based on a few of your suggestions. Obviously there are many more ways you could customise this. \n```\nimport matplotlib.pyplot as plt\n\nfig,ax = plt.subplots(1)\n\n# 3 dots then space\nax.plot(range(10), range(10), dashes=[3,6,3,6,3,18], lw=3,c='b')\n\n# dash dash dot\nax.plot(range(10), range(0,20,2), dashes=[12,6,12,6,3,6], lw=3,c='r')\n\n# dash dot dot\nax.plot(range(10), range(0,30,3), dashes=[12,6,3,6,3,6], lw=3,c='g')\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33936134\/are-there-really-only-4-matplotlib-line-styles","best_answers_votes":21,"tags":["python","matplotlib","plot"],"question_length":663,"response_length":666,"tag_count":3} {"question":"Scale plot size of Matplotlib Plots in Jupyter Notebooks Is there a possibility to scale the plot size of matplotlib plots in jupyter notebooks? You could increase the plot size by changing the default values of figure.figsize, but this does not affect parameters like fontsize, linewidth, markersize etc. What I need is a plot where all the parameters are scaled accordingly. P.S.: To display plots in jupyter notebooks I use %matplotlib inline, see screenshot below. Edit For completeness, here is a code snippet doing exactly what I needed: \n```\ndef scale_plot_size(factor=1.5):\n    import matplotlib as mpl\n    default_dpi = mpl.rcParamsDefault['figure.dpi']\n    mpl.rcParams['figure.dpi'] = default_dpi*factor\n```","response":"You don't want to change the figure size. You want to change the dpi (dots per inch). Also see Relationship between dpi and figure size. \n```\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef plot(dpi):\n    fig, ax=plt.subplots(dpi=dpi)\n    ax.plot([2,4,1,5], label=\"Label\")\n    ax.legend()\n\nfor i in range(1,4):\n    plot(i*72)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44724369\/scale-plot-size-of-matplotlib-plots-in-jupyter-notebooks","best_answers_votes":22,"tags":["python","matplotlib","jupyter-notebook"],"question_length":718,"response_length":337,"tag_count":3} {"question":"How boxen plot is different from box plot? I want to know why there is a boxen plot when we already have the box plot in the seaborn library. I know one thing: the boxen plot is an optimised way to represent data, especially for large data-sets, but I don't know why, and other than this I don't have any good reasons to use the boxen plot.","response":"The box plot shows the median as the centerline (50th percentile), then the 25th and 75th percentiles as the box boundaries.
Then the IQR method is used to calculate outlier boundaries (1.5 * IQR + Q3 for the upper boundary, for example). Q3 is the 3rd quartile, or 75th percentile of the data (75% of the data is below this value). Outliers outside of the outlier whiskers are shown as distinct points. Boxenplots (actually called letter-value plots in the original paper and in the lvplot R package; that's lv as in letter-value) show the distribution differently and are better for bigger datasets. Classic boxplots can have too many outliers and don't show as much information about the distribution. Letter-value plots (boxenplots) start with the median (Q2, 50th percentile) as the centerline. Each successive level outward contains half of the remaining data. So the first two sections out from the centerline contain 50% of the data. After that, the next two sections contain 25% of the data. This continues until we are at the outlier level. Each level out is shaded lighter. There are 4 methods for calculating outliers (described in the paper and available in seaborn). The default is to end up with around 5-8 outliers in each tail. I illustrated with Diziet's nice figure: The original paper (from authors including Hadley Wickham) is linked in the seaborn boxenplot docs. Formula The amount of data outside each level is outside = 2^(-l), where l = 1, 2, 3..., and amount of data inside each level is inside = 1-outside. For first few levels rounded to 3 significant digits: \n```\nl inside outside\n-------------------\n1 50.0% 50.0%\n2 75.0% 25.0%\n3 87.5% 12.5%\n4 93.75% 6.25%\n5 96.875% 3.125%\n6 98.438% 1.562%\n7 99.219% 0.781%\n8 99.609% 0.391%\n9 99.805% 0.195%\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/52403381\/how-boxen-plot-is-different-from-box-plot","best_answers_votes":37,"tags":["python","matplotlib","data-visualization","seaborn"],"question_length":310,"response_length":1816,"tag_count":4} {"question":"How to change size of plot in xgboost.plot_importance? \n```\nxgboost.plot_importance(model, importance_type='gain')\n```\nI am not able to change size of this plot. I want to save this figure with proper size so that I can use it in pdf. I want similar like figize","response":"It looks like plot_importance return an Axes object \n```\nax = xgboost.plot_importance(...)\nfig = ax.figure\nfig.set_size_inches(h, w)\n```\nIt also looks like you can pass an axes in \n```\nfig, ax = plt.subplots(figsize=(h, w))\nxgboost.plot_importance(..., ax=ax)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/40664776\/how-to-change-size-of-plot-in-xgboost-plot-importance","best_answers_votes":52,"tags":["python","python-3.x","matplotlib","machine-learning","xgboost"],"question_length":261,"response_length":263,"tag_count":5} {"question":"How to plot collections.Counter histogram using matplotlib? How to plot histogram of following Counter object?: \n```\nw = collections.Counter()\nl = ['a', 'b', 'b', 'b', 'c']\nfor o in l:\n w[o]+=1\n```","response":"Looking at your data and attempt, I guess you want a bar plot instead of a histogram. Histogram is used to plot a distribution but that is not what you have. You can simply use the keys and values as the arguments of plt.bar. This way, the keys will be automatically taken as the x-axis tick-labels. 
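(An extra thought from the editor, an untested sketch that is not part of the original answer: if you also want the bars ordered by frequency, Counter.most_common() already yields (key, count) pairs in descending order: \n```\nlabels, counts = zip(*w.most_common())\nplt.bar(labels, counts)\n```\nThe plain unsorted version:)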
\n```\nimport collections\nimport matplotlib.pyplot as plt\nl = ['a', 'b', 'b', 'b', 'c']\nw = collections.Counter(l)\nplt.bar(w.keys(), w.values())\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/52572094\/how-to-plot-collections-counter-histogram-using-matplotlib","best_answers_votes":43,"tags":["python","matplotlib","collections"],"question_length":200,"response_length":446,"tag_count":3} {"question":"Seaborn lineplot high cpu; very slow compared to matplotlib I have the following dataframe. \n```\n\n```python\ndfFinal\n#Output\n#module vectime vecvalue\n#1906 client1.tcp [1.1007512, 1.1015024, 1.1022536, 1.1030048, 1... [0.0007512, 0.0007512, 0.0007512, 0.0007512, 0...\n#1912 client2.tcp [1.10079784, 1.10159568, 1.10239352, 1.1031913... [0.00079784, 0.00079784, 0.00079784, 0.0007978...\n#1918 client3.tcp [1.10084448, 1.10168896, 1.10258008, 1.1036111... [0.00084448, 0.00084448, 0.00089112, 0.0010310...\n#```\n```\nI want to plot the timeSeries vecvalue vs vectime for each module. To do so I can do as follows: 1) Matplotlib \n```\nstart = datetime.datetime.now()\n\nfor row in dfFinal.itertuples():\n    t = row.vectime\n    x = row.vecvalue\n    x = runningAvg(x)\n    plot(t,x)\n\ntotal = (datetime.datetime.now() - start).total_seconds()\nprint(\"Total time: \",total)\n```\nDoing so takes 0.07005 seconds to accomplish. 2) Seaborn \n```\nstart = datetime.datetime.now()\n\nfor row in dfFinal.itertuples():\n    t = row.vectime\n    x = row.vecvalue\n    x = runningAvg(x)\n    DF = pd.DataFrame({'x':x, 't':t})\n    sns.lineplot(x='t', y='x', data=DF)\n\ntotal = (datetime.datetime.now() - start).total_seconds()\nprint(\"Total time: \",total)\n```\nDoing so takes 19.157463 seconds to accomplish. Why is there such a huge difference? What is it that I'm doing so wrong that it takes that long to process a rather small DF?","response":"Set ci=None in the call to lineplot; otherwise, confidence intervals will be computed resulting in some expensive (and unnecessary) df.groupby calls. An aside: the snakeviz module is a great tool for quickly finding computational bottlenecks.","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/56170909\/seaborn-lineplot-high-cpu-very-slow-compared-to-matplotlib","best_answers_votes":47,"tags":["python","pandas","performance","matplotlib","seaborn"],"question_length":1522,"response_length":242,"tag_count":5} {"question":"Matplotlib: Color-coded text in legend instead of a line On certain LCD monitors, the color of the horizontal lines in the legend is hard to tell apart. (See the image attached.) So instead of drawing a line in the legend, is it possible to just color code the text itself? So, in other words, have \"y=0x\" in blue, \"y=1x\" in green, etc... \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(10)\n\nfig = plt.figure()\nax = plt.subplot(111)\n\nfor i in xrange(5):\n    ax.plot(x, i * x, label='$y = %ix$' % i)\n\nax.legend()\n\nplt.show()\n```\nP.S. If the line could be made thicker just in the legend, but not in the plot, this would also work.","response":"I was wondering the same thing. Here is what I came up with to change the color of the font in the legend.
I am not totally happy with this method, since it seems a little clumsy, but it seems to get the job done [Edit: see below for a better way]: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(10)\n\nfig = plt.figure()\nax = plt.subplot(111)\n\ncolors = []\nfor i in xrange(5):\n line, = ax.plot(x, i * x, label='$y = %ix$' % i)\n colors.append(plt.getp(line,'color'))\n\nleg = ax.legend()\n\nfor color,text in zip(colors,leg.get_texts()):\n text.set_color(color)\n\nplt.show()\n```\n2016 Edit: Actually, there is a better way. You can simply iterate over the lines in the legend, which avoids needing to keep track of the colors as the lines are plotted. Much less clunky. Now, changing the line colors is basically a one-liner (okay, it's actually a two-liner). Here is the complete example: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.arange(10)\n\nfig = plt.figure()\nax = plt.subplot(111)\n\nfor i in xrange(5):\n ax.plot(x, i*x, label='$y = %ix$'%i)\n\nleg = ax.legend()\n\n# change the font colors to match the line colors:\nfor line,text in zip(leg.get_lines(), leg.get_texts()):\n text.set_color(line.get_color())\n\nplt.show()\n```\n2017 Edit: Lastly, if you really do want the color-coded text instead of a line (as the title suggests), then you can suppress the lines in the legend by using \n```\nleg = ax.legend(handlelength=0)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13828246\/matplotlib-color-coded-text-in-legend-instead-of-a-line","best_answers_votes":27,"tags":["python","numpy","matplotlib","plot","legend"],"question_length":649,"response_length":1472,"tag_count":5} {"question":"Histogram with Boxplot above in Python Hi I wanted to draw a histogram with a boxplot appearing the top of the histogram showing the Q1,Q2 and Q3 as well as the outliers. Example phone is below. (I am using Python and Pandas) I have checked several examples using matplotlib.pyplot but hardly came out with a good example. And I also wanted to have the histogram curve appearing like in the image below. I also tried seaborn and it provided me the shape line along with the histogram but didnt find a way to incorporate with boxpot above it. can anyone help me with this to have this on matplotlib.pyplot or using pyplot","response":"```\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nsns.set(style=\"ticks\")\n\nx = np.random.randn(100)\n\nf, (ax_box, ax_hist) = plt.subplots(2, sharex=True, \n gridspec_kw={\"height_ratios\": (.15, .85)})\n\nsns.boxplot(x, ax=ax_box)\nsns.distplot(x, ax=ax_hist)\n\nax_box.set(yticks=[])\nsns.despine(ax=ax_hist)\nsns.despine(ax=ax_box, left=True)\n```\nFrom seaborn v0.11.2, sns.distplot is deprecated. Use sns.histplot for axes-level plots instead. 
\n```py\nnp.random.seed(2022)\nx = np.random.randn(100)\n\nf, (ax_box, ax_hist) = plt.subplots(2, sharex=True, gridspec_kw={\"height_ratios\": (.15, .85)})\n\nsns.boxplot(x=x, ax=ax_box)\nsns.histplot(x=x, bins=12, kde=True, stat='density', ax=ax_hist)\n\nax_box.set(yticks=[])\nsns.despine(ax=ax_hist)\nsns.despine(ax=ax_box, left=True)\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33381330\/histogram-with-boxplot-above-in-python","best_answers_votes":37,"tags":["python","matplotlib","histogram","boxplot","seaborn"],"question_length":620,"response_length":824,"tag_count":5} {"question":"How can I pass parameters to on_key in fig.canvas.mpl_connect('key_press_event', on_key)? I have a function \n```\ndef on_key(event):\n```\nWhich I call from \n```\nfig.canvas.mpl_connect('key_press_event', on_key)\n```\nI would like to pass the parameters plt1, plt2, plt3 to on_key... how can I do this?","response":"Probably \n```\ndef on_key(event, arg1, arg2, arg3):\n```\nand \n```\nfig.canvas.mpl_connect('key_press_event', lambda event: on_key(event, plt1, plt2, plt3))\n```\nor as list \n```\ndef on_key(event, args_list):\n```\nand \n```\nfig.canvas.mpl_connect('key_press_event', lambda event: on_key(event, [plt1, plt2, plt3]))\n```","best_answers_score":0.8,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24960910\/how-can-i-pass-parameters-to-on-key-in-fig-canvas-mpl-connectkey-press-event","best_answers_votes":44,"tags":["python","matplotlib"],"question_length":297,"response_length":310,"tag_count":2} {"question":"Plotting Ellipsoid with Matplotlib Does anyone have sample code for plotting ellipsoids? There is one for a sphere on the matplotlib site, but nothing for ellipsoids. I am trying to plot \n```\nx**2 + 2*y**2 + 2*z**2 = c\n```\nwhere c is a constant (like 10) that defines an ellipsoid. I tried the meshgrid(x,y) route, reworked the equation so z is on one side, but the sqrt is a problem. The matplotlib sphere example works with angles, u,v, but I am not sure how to work that for an ellipsoid.","response":"Here is how you can do it via spherical coordinates: \n```\n# from mpl_toolkits.mplot3d import Axes3D # Not needed with Matplotlib 3.6.3\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfig = plt.figure(figsize=plt.figaspect(1)) # Square figure\nax = fig.add_subplot(111, projection='3d')\n\ncoefs = (1, 2, 2) # Coefficients in a0\/c x**2 + a1\/c y**2 + a2\/c z**2 = 1 \n# Radii corresponding to the coefficients:\nrx, ry, rz = 1\/np.sqrt(coefs)\n\n# Set of all spherical angles:\nu = np.linspace(0, 2 * np.pi, 100)\nv = np.linspace(0, np.pi, 100)\n\n# Cartesian coordinates that correspond to the spherical angles:\n# (this is the equation of an ellipsoid):\nx = rx * np.outer(np.cos(u), np.sin(v))\ny = ry * np.outer(np.sin(u), np.sin(v))\nz = rz * np.outer(np.ones_like(u), np.cos(v))\n\n# Plot:\nax.plot_surface(x, y, z, rstride=4, cstride=4, color='b')\n\n# Adjustment of the axes, so that they all have the same span:\nmax_radius = max(rx, ry, rz)\nfor axis in 'xyz':\n    getattr(ax, 'set_{}lim'.format(axis))((-max_radius, max_radius))\n\nplt.show()\n```\nThe resulting plot is similar to the sphere example's output; the program above actually produces a nicer looking \"square\" graphic.
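(Editor's addendum, hedged: assuming Matplotlib 3.3 or newer, the manual axis-limit loop at the end can be replaced by a single call that fixes the visual proportions of the box directly: \n```\nax.set_box_aspect((rx, ry, rz)) # one call instead of the set_*lim loop\n```\n)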
This solution is strongly inspired from the example in Matplotlib's gallery.","best_answers_score":0.799,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7819498\/plotting-ellipsoid-with-matplotlib","best_answers_votes":37,"tags":["python","matplotlib"],"question_length":482,"response_length":1218,"tag_count":2} {"question":"Calling pylab.savefig without display in ipython I need to create a figure in a file without displaying it within IPython notebook. I am not clear on the interaction between IPython and matplotlib.pylab in this regard. But, when I call pylab.savefig(\"test.png\") the current figure get's displayed in addition to being saved in test.png. When automating the creation of a large set of plot files, this is often undesirable. Or in the situation that an intermediate file for external processing by another app is desired. Not sure if this is a matplotlib or IPython notebook question.","response":"This is a matplotlib question, and you can get around this by using a backend that doesn't display to the user, e.g. 'Agg': \n```\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\nplt.plot([1,2,3])\nplt.savefig('\/tmp\/test.png')\n```\nEDIT: If you don't want to lose the ability to display plots, turn off Interactive Mode, and only call plt.show() when you are ready to display the plots: \n```\nimport matplotlib.pyplot as plt\n\n# Turn interactive plotting off\nplt.ioff()\n\n# Create a new figure, plot into it, then close it so it never gets displayed\nfig = plt.figure()\nplt.plot([1,2,3])\nplt.savefig('\/tmp\/test0.png')\nplt.close(fig)\n\n# Create a new figure, plot into it, then don't close it so it does get displayed\nplt.figure()\nplt.plot([1,3,2])\nplt.savefig('\/tmp\/test1.png')\n\n# Display all \"open\" (non-closed) figures\nplt.show()\n```","best_answers_score":0.7988,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15713279\/calling-pylab-savefig-without-display-in-ipython","best_answers_votes":267,"tags":["matplotlib","jupyter-notebook"],"question_length":582,"response_length":851,"tag_count":2} {"question":"What is the necessity of plt.figure() in matplotlib? \n```\nplt.figure(figsize=(10,8))\n\nplt.scatter(df['attacker_size'][df['year'] == 298],\n # attacker size in year 298 as the y axis\n df['defender_size'][df['year'] == 298],\n # the marker as\n marker='x',\n # the color\n color='b',\n # the alpha\n alpha=0.7,\n # with size\n s = 124,\n # labelled this\n label='Year 298')\n```\nIn the above snippet of code collected from Scatterplot in Matplotlib, what is the necessity of plt.figure()? link above ais dead , self sustaining example : \n```\nimport matplotlib.pyplot as plt\n\nimport pandas as pd\n\ndata = {\n \"attacker_size\": [420, 380, 390],\n \"defender_size\": [50, 40, 45]\n}\n\ndf = pd.DataFrame(data, index = [\"day1\", \"day2\", \"day3\"])\n\nprint(df) \n\nplt.figure(figsize=(10,8))\n\nplt.scatter(df['attacker_size'],\n # attacker size in year 298 as the y axis\n df['defender_size'],\n # the marker as\n marker='x',\n # the color\n color='b',\n # the alpha\n alpha=0.7,\n # width size\n s = 150,\n # labelled this\n label='Test')\n```","response":"The purpose of using plt.figure() is to create a figure object. The whole figure is regarded as the figure object. It is necessary to explicitly use plt.figure() when we want to tweak the size of the figure and when we want to add multiple Axes objects in a single figure. 
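(Both needs can also be met in one call, a small sketch from the editor rather than from the original answer: \n```\nfig, axs = plt.subplots(2, 2, figsize=(12, 8)) # size and the 2x2 Axes grid together\n```\nTaking them separately:)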
\n```\n# in order to modify the size\nfig = plt.figure(figsize=(12,8))\n# adding multiple Axes objects \nfig, ax_lst = plt.subplots(2, 2) # a figure with a 2x2 grid of Axes\n```\nParts of a Figure","best_answers_score":0.7987,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38666527\/what-is-the-necessity-of-plt-figure-in-matplotlib","best_answers_votes":43,"tags":["python","matplotlib"],"question_length":1166,"response_length":464,"tag_count":2} {"question":"Change xticklabels fontsize of seaborn heatmap Here is my question: I plot 7 variable's coefficient using sns.clustermap() x\/y tickslabel seems really small(In my case, s1,s2,... s9) My attempt label='big ==> no effect plt.tick_params(axis='both', which='minor', labelsize=12) ===> cbar label has changed, but the x\/y axes looks the same. Add My code: \n```\nds = pd.read_csv(\"xxxx.csv\")\n corr = ds.corr().mul(100).astype(int)\n \n cmap = sns.diverging_palette(h_neg=210, h_pos=350, s=90, l=30, as_cmap=True)\n\n sns.clustermap(data=corr_s, annot=True, fmt='d',cmap = \"Blues\",annot_kws={\"size\": 16},)\n```","response":"Consider calling sns.set(font_scale=1.4) before plotting your data. This will scale all fonts in your legend and on the axes. My plot went from this, To this, Of course, adjust the scaling to whatever you feel is a good setting. Code: \n```\nsns.set(font_scale=1.4)\ncmap = sns.diverging_palette(h_neg=210, h_pos=350, s=90, l=30, as_cmap=True)\nsns.clustermap(data=corr, annot=True, fmt='d', cmap=\"Blues\", annot_kws={\"size\": 16})\n```","best_answers_score":0.7982,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34706845\/change-xticklabels-fontsize-of-seaborn-heatmap","best_answers_votes":88,"tags":["python","matplotlib","seaborn"],"question_length":599,"response_length":429,"tag_count":3} {"question":"Invert image displayed by imshow in matplotlib I wanted the imshow() function in matplotlib.pyplot to display images the opposite way, i.e upside down. Is there a simple way to do this?","response":"Specify the keyword argument origin='lower' or origin='upper' in your call to imshow.","best_answers_score":0.7977,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8396101\/invert-image-displayed-by-imshow-in-matplotlib","best_answers_votes":278,"tags":["python","image","matplotlib"],"question_length":185,"response_length":85,"tag_count":3} {"question":"plot multiple pandas dataframes in one graph I have created 6 different dataframes that eliminate the outliers of their own original data frames. Now, I'm trying to plot all of the dataframes that eliminate the outliers on the same graph. 
This is my code that eliminates the outliers in each data frame: \n```\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use(\"ggplot\")\n\n#---Original DataFrame\nx = (g[0].time[:27236])\ny = (g[0].data.f[:27236])\ndf = pd.DataFrame({'Time': x, 'Data': y})\n\n#----Removes the outliers in a given DataFrame and plots a graph\nnewdf = df.copy()\nnewdf = df[~df.groupby('Data').transform( lambda x: abs(x-x.mean()) > 1.96*x.std()).values]\n#newdf.plot('Time', 'Data')\n\n#---Original DataFrame\nx = (q[0].time[:47374])\ny = (q[0].data.f[:47374])\ndf = pd.DataFrame({'Time': x, 'Data': y})\n\n#----Removes the outliers in a given DataFrame and plots a graph\nnewdf = df.copy()\nnewdf2 = df[~df.groupby('Data').transform( lambda x: abs(x-x.mean()) > 1.96*x.std()).values]\n#newdf2.plot('Time', 'Data')\n\n#---Original DataFrame\nx = (w[0].time[:25504])\ny = (w[0].data.f[:25504])\ndf = pd.DataFrame({'Time': x, 'Data': y})\n\n#----Removes the outliers in a given DataFrame and plots a graph\nnewdf = df.copy()\nnewdf3 = df[~df.groupby('Data').transform( lambda x: abs(x-x.mean()) > 1.96*x.std()).values]\n#newdf3.plot('Time', 'Data')\n\n#---Original DataFrame\nx = (e[0].time[:47172])\ny = (e[0].data.f[:47172])\ndf = pd.DataFrame({'Time': x, 'Data': y})\n\n#----Removes the outliers in a given DataFrame and plots a graph\nnewdf = df.copy()\nnewdf4 = df[~df.groupby('Data').transform( lambda x: abs(x-x.mean()) > 1.96*x.std()).values]\n#newdf4.plot('Time', 'Data')\n\n#---Original DataFrame\nx = (r[0].time[:21317])\ny = (r[0].data.f[:21317])\ndf = pd.DataFrame({'Time': x, 'Data': y})\n\n#----Removes the outliers in a given DataFrame and plots a graph\nnewdf = df.copy()\nnewdf5 = df[~df.groupby('Data').transform( lambda x: abs(x-x.mean()) > 1.96*x.std()).values]\n#newdf5.plot('Time', 'Data')\n\n#---Original DataFrame\nx = (t[0].time[:47211])\ny = (t[0].data.f[:47211])\ndf = pd.DataFrame({'Time': x, 'Data': y})\n\n#----Removes the outliers in a given DataFrame and plots a graph\nnewdf = df.copy()\nnewdf6 = df[~df.groupby('Data').transform( lambda x: abs(x-x.mean()) > 1.96*x.std()).values]\n#newdf6.plot('Time', 'Data')\n```\nIf I remove the comment newdf.plot() I will be able to plot all of the graphs separately but I want them all on one graph. And yes, I've already read over http:\/\/matplotlib.org\/examples\/pylab_examples\/subplots_demo.html but that link doesn't have any examples with multiple plots in one chart. I have also read this: http:\/\/pandas-docs.github.io\/pandas-docs-travis\/visualization.html which has some really great information but the examples that have multiple plots in one graph use the same data frame. I have 6 separate dataframes. I've thought of one solution to my problem would be to write all of the dataframes to the same excel file then plot them from excel, but that seems excessive and I don't need this data to be saved to an excel file. My question is this: How can I plot multiple pandas dataframes in the same graph. My graph after following Scott's advice What the graph should more or less look like","response":"You need to use the ax parameter in pandas.dataframe.plot. Use on the first df.plot to grab a handle on that axes: \n```\nax = newdf.plot()\n```\nthen on subsequent plots use the ax parameter. 
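(Editor's sketch, the same idea as the snippet below but looped; the frame names follow the question's: \n```\nax = newdf.plot()\nfor d in (newdf2, newdf3, newdf4, newdf5, newdf6):\n    d.plot(ax=ax)\n```\nSpelled out per frame:)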
\n```\nnewdf2.plot(ax=ax)\n...\nnewdf5.plot(ax=ax)\n```","best_answers_score":0.7968,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45467320\/plot-multiple-pandas-dataframes-in-one-graph","best_answers_votes":47,"tags":["python","pandas","matplotlib","dataframe"],"question_length":3253,"response_length":239,"tag_count":4} {"question":"In matplotlib, how do you draw R-style axis ticks that point outward from the axes? Because they are drawn inside the plot area, axis ticks are obscured by the data in many matplotlib plots. A better approach is to draw the ticks extending from the axes outward, as is the default in ggplot, R's plotting system. In theory, this can be done by redrawing the tick lines with the TICKDOWN and TICKLEFT line-styles for the x-axis and y-axis ticks respectively: \n```py\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as mplticker\nimport matplotlib.lines as mpllines\n\n# Create everything, plot some data stored in `x` and `y`\nfig = plt.figure()\nax = fig.gca()\nplt.plot(x, y)\n\n# Set position and labels of major and minor ticks on the y-axis\n# Ignore the details: the point is that there are both major and minor ticks\nax.yaxis.set_major_locator(mplticker.MultipleLocator(1.0))\nax.yaxis.set_minor_locator(mplticker.MultipleLocator(0.5))\n\nax.xaxis.set_major_locator(mplticker.MultipleLocator(1.0))\nax.xaxis.set_minor_locator(mplticker.MultipleLocator(0.5))\n\n# Try to set the tick markers to extend outward from the axes, R-style\nfor line in ax.get_xticklines():\n line.set_marker(mpllines.TICKDOWN)\n\nfor line in ax.get_yticklines():\n line.set_marker(mpllines.TICKLEFT)\n\n# In real life, we would now move the tick labels farther from the axes so our\n# outward-facing ticks don't cover them up\n\nplt.show()\n```\nBut in practice, that's only half the solution because the get_xticklines and get_yticklines methods return only the major tick lines. The minor ticks remain pointing inward. What's the work-around for the minor ticks?","response":"In your matplotlib config file, matplotlibrc, you can set: \n```\nxtick.direction : out # direction: in or out\nytick.direction : out # direction: in or out\n```\nand this will draw both the major and minor ticks outward by default, like R. For a single program, simply do: \n```\n>> from matplotlib import rcParams\n>> rcParams['xtick.direction'] = 'out'\n>> rcParams['ytick.direction'] = 'out'\n```","best_answers_score":0.7943,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6260055\/in-matplotlib-how-do-you-draw-r-style-axis-ticks-that-point-outward-from-the-ax","best_answers_votes":29,"tags":["python","matplotlib","plot"],"question_length":1633,"response_length":408,"tag_count":3} {"question":"How can I remove the top and right axis? 
Instead of the default \"boxed\" axis style I want to have only the left and bottom axis, i.e.: \n```\n+------+ |\n| | |\n| | ---> |\n| | |\n+------+ +-------\n```\nThis should be easy, but I can't find the necessary options in the docs.","response":"This is the suggested Matplotlib 3 solution from the official website HERE: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 2*np.pi, 100)\ny = np.sin(x)\n\nax = plt.subplot(111)\nax.plot(x, y)\n\n# Hide the right and top spines\nax.spines[['right', 'top']].set_visible(False)\n\nplt.show()\n```","best_answers_score":0.7939,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/925024\/how-can-i-remove-the-top-and-right-axis","best_answers_votes":302,"tags":["python","matplotlib"],"question_length":318,"response_length":312,"tag_count":2} {"question":"Add x and y labels to a pandas plot Suppose I have the following code that plots something very simple using pandas: \n```\nimport pandas as pd\nvalues = [[1, 2], [2, 5]]\ndf2 = pd.DataFrame(values, columns=['Type A', 'Type B'], \n index=['Index 1', 'Index 2'])\ndf2.plot(lw=2, colormap='jet', marker='.', markersize=10, \n title='Video streaming dropout by category')\n```\nHow do I easily set x and y-labels while preserving my ability to use specific colormaps? I noticed that the plot() wrapper for pandas DataFrames doesn't take any parameters specific for that.","response":"The df.plot() function returns a matplotlib.axes.AxesSubplot object. You can set the labels on that object. \n```\nax = df2.plot(lw=2, colormap='jet', marker='.', markersize=10, title='Video streaming dropout by category')\nax.set_xlabel(\"x label\")\nax.set_ylabel(\"y label\")\n```\nOr, more succinctly: ax.set(xlabel=\"x label\", ylabel=\"y label\"). Alternatively, the index x-axis label is automatically set to the Index name, if it has one. so df2.index.name = 'x label' would work too.","best_answers_score":0.7928,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21487329\/add-x-and-y-labels-to-a-pandas-plot","best_answers_votes":446,"tags":["python","pandas","dataframe","matplotlib"],"question_length":584,"response_length":478,"tag_count":4} {"question":"changing default x range in histogram matplotlib I would like to change the default x range for the histogram plot. The range of the data is from 7 to 12. However, by default the histogram starts right at 7 and ends at 13. I want it to start at 6.5 and end at 12.5. However, the ticks should go from 7 to 12.How do I do it? 
\n```\nimport asciitable \nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.mlab as mlab\nimport pylab\nfrom pylab import xticks\n\ndata = asciitable.read(file)\nhmag = data['col8']\nvisits = data['col14']\norigin = data['col13']\n\n\nn, bins, patches = plt.hist(hmag, 30, facecolor='gray', align='mid')\nxticks(range(7,13))\npylab.rc(\"axes\", linewidth=8.0)\npylab.rc(\"lines\", markeredgewidth=2.0) \nplt.xlabel('H mag', fontsize=14)\nplt.ylabel('# of targets', fontsize=14)\npylab.xticks(fontsize=15)\npylab.yticks(fontsize=15)\nplt.grid(True)\nplt.savefig('hmag_histogram.eps', facecolor='w', edgecolor='w', format='eps')\nplt.show()\n```","response":"```\nplt.hist(hmag, 30, range=[6.5, 12.5], facecolor='gray', align='mid')\n```","best_answers_score":0.7928,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12125880\/changing-default-x-range-in-histogram-matplotlib","best_answers_votes":99,"tags":["python","matplotlib","histogram","xrange"],"question_length":959,"response_length":76,"tag_count":4} {"question":"how to save an array as a grayscale image with matplotlib\/numpy? I am trying to save a numpy array of dimensions 128x128 pixels into a grayscale image. I simply thought that the pyplot.imsave function would do the job, but it doesn't: it somehow converts my array into an RGB image. I tried to force the colormap to Gray during conversion, but even though the saved image appears in grayscale, it still has a 128x128x4 dimension. Here is a code sample I wrote to show the behaviour: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mplimg\nfrom matplotlib import cm\n\nx_tot = 10e-3\nnx = 128\n\nx = np.arange(-x_tot\/2, x_tot\/2, x_tot\/nx)\n\n[X, Y] = np.meshgrid(x,x)\nR = np.sqrt(X**2 + Y**2)\n\ndiam = 5e-3\nI = np.exp(-2*(2*R\/diam)**4)\n\nplt.figure()\nplt.imshow(I, extent = [-x_tot\/2, x_tot\/2, -x_tot\/2, x_tot\/2])\n\nprint I.shape\n\nplt.imsave('image.png', I)\nI2 = plt.imread('image.png')\nprint I2.shape\n\nmplimg.imsave('image2.png',np.uint8(I), cmap = cm.gray)\ntestImg = plt.imread('image2.png')\nprint testImg.shape\n```\nIn both cases the results of the \"print\" function are (128,128,4). Can anyone explain why the imsave function is creating those dimensions even though my input array is of a luminance type? And of course, does anyone have a solution to save the array into a standard grayscale format? Thanks!","response":"With PIL it should work like this \n```\nfrom PIL import Image\n\nI8 = (((I - I.min()) \/ (I.max() - I.min())) * 255.9).astype(np.uint8)\n\nimg = Image.fromarray(I8)\nimg.save(\"file.png\")\n```","best_answers_score":0.7927,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26929052\/how-to-save-an-array-as-a-grayscale-image-with-matplotlib-numpy","best_answers_votes":26,"tags":["python","arrays","image","numpy","matplotlib"],"question_length":1328,"response_length":183,"tag_count":5} {"question":"Why set_xticks doesn't set the labels of ticks? \n```\nimport matplotlib.pyplot as plt\n\nx = range(1, 7)\ny = (220, 300, 300, 290, 320, 315)\n\ndef test(axes):\n    axes.bar(x, y)\n    axes.set_xticks(x, [i+100 for i in x])\n\nfig, (ax1, ax2) = plt.subplots(1, 2)\ntest(ax1)\ntest(ax2)\n```\nI am expecting the xlabs as 101, 102 ... However, if I switch to use plt.xticks(x, [i+100 for i in x]) and rewrite the function explicitly, it works.","response":".set_xticks() on the axes will set the locations and set_xticklabels() will set the displayed text.
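(Editor's note, version-dependent: from Matplotlib 3.5 on, the Axes method also accepts the labels directly, which is what the question was reaching for: \n```\naxes.set_xticks(x, labels=[i + 100 for i in x])\n```\nFor older versions, the two-call form:)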
\n```\ndef test(axes):\n axes.bar(x,y)\n axes.set_xticks(x)\n axes.set_xticklabels([i+100 for i in x])\n```","best_answers_score":0.7911,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21910986\/why-set-xticks-doesnt-set-the-labels-of-ticks","best_answers_votes":149,"tags":["python","matplotlib"],"question_length":427,"response_length":210,"tag_count":2} {"question":"Title for matplotlib legend I know it seems fairly redundant to have a title for a legend, but is it possible using matplotlib? Here's a snippet of the code I have: \n```\nimport matplotlib.patches as mpatches\nimport matplotlib.pyplot as plt\n\none = mpatches.Patch(facecolor='#f3f300', label='label1', linewidth = 0.5, edgecolor = 'black')\ntwo = mpatches.Patch(facecolor='#ff9700', label = 'label2', linewidth = 0.5, edgecolor = 'black')\nthree = mpatches.Patch(facecolor='#ff0000', label = 'label3', linewidth = 0.5, edgecolor = 'black')\n\nlegend = plt.legend(handles=[one, two, three], loc = 4, fontsize = 'small', fancybox = True)\n\nframe = legend.get_frame() #sets up for color, edge, and transparency\nframe.set_facecolor('#b4aeae') #color of legend\nframe.set_edgecolor('black') #edge color of legend\nframe.set_alpha(1) #deals with transparency\nplt.show()\n```\nI would want the title of the legend above label1. For reference, this is the output:","response":"Add the title parameter to the this line: \n```\nlegend = plt.legend(handles=[one, two, three], title=\"title\",\n loc=4, fontsize='small', fancybox=True)\n```\nSee also the official docs for the legend constructor.","best_answers_score":0.7909,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44620013\/title-for-matplotlib-legend","best_answers_votes":195,"tags":["python","matplotlib","patch"],"question_length":943,"response_length":227,"tag_count":3} {"question":"Set axis tight only to x or y axis I have a plot look like this: Obviously, the left and right side is a waste of space, so I set \n```\nplt.axis('tight')\n```\nBut this gives me plot like this: The xlim looks right now, but the ylim is too tight for the plot. I'm wondering, if I can only set axis(tight) only to x axis in my case? So the plot may look something like this: It's certainly possible that I can do this manually by \n```\nplt.gca().set_xlim(left=-10, right=360)\n```\nBut I'm afraid this is not a very elegant solution.","response":"You want to use matplotlib's autoscale method from the matplotlib.axes.Axes class. Using the functional API, you apply a tight x axis using \n```\nplt.autoscale(enable=True, axis='x', tight=True)\n```\nor if you are using the object oriented API you would use \n```\nax = plt.gca() # only to illustrate what `ax` is\nax.autoscale(enable=True, axis='x', tight=True)\n```\nFor completeness, the axis kwarg can take 'x', 'y', or 'both', where the default is 'both'.","best_answers_score":0.7901,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/37558329\/set-axis-tight-only-to-x-or-y-axis","best_answers_votes":64,"tags":["python","matplotlib"],"question_length":526,"response_length":454,"tag_count":2} {"question":"pandas plot value counts barplot in descending manner [duplicate] This question already has answers here: Sorted bar charts with pandas\/matplotlib or seaborn (2 answers) Closed 7 years ago. I have a dataframe where i am trying to count the occurrence of each value. I plot it as horizontal bar but cant get it to be sorted. 
\n```\ndf = pd.DataFrame(['A','A','A','B','B','C'],columns = ['letters'])\n\ndf.value_counts()\n\nA 3\nB 2\nC 1\n```\nHow can i get it sorted in a descending manner?","response":"You can do it by changing your plotting line like this \n```\ndf.letters.value_counts().sort_values().plot(kind = 'barh')\n```","best_answers_score":0.7896,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/49059956\/pandas-plot-value-counts-barplot-in-descending-manner","best_answers_votes":53,"tags":["python","pandas","matplotlib","seaborn"],"question_length":479,"response_length":123,"tag_count":4} {"question":"how to use 'extent' in matplotlib.pyplot.imshow I managed to plot my data and would like to add a background image (map) to it. Data is plotted by the long\/lat values and I have the long\/lat values for the image's three corners (top left, top right and bottom left) too. I am trying to figure out how to use 'extent' option with imshow. However, the examples I found don't explain how to assign x and y for each corner ( in my case I have the information for three corners). How can I assign the location of three corners for the image when adding it to the plot? Thanks","response":"Specify, in the coordinates of your current axis, the corners of the rectangle that you want the image to be pasted over Extent defines the left and right limits, and the bottom and top limits. It takes four values like so: extent=[horizontal_min,horizontal_max,vertical_min,vertical_max]. Assuming you have longitude along the horizontal axis, then use extent=[longitude_top_left,longitude_top_right,latitude_bottom_left,latitude_top_left]. longitude_top_left and longitude_bottom_left should be the same, latitude_top_left and latitude_top_right should be the same, and the values within these pairs are interchangeable. If your first element of your image should be plotted in the lower left, then use the origin='lower' imshow option as well, otherwise the 'upper' default is what you want.","best_answers_score":0.788,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6999621\/how-to-use-extent-in-matplotlib-pyplot-imshow","best_answers_votes":55,"tags":["python","plot","matplotlib"],"question_length":570,"response_length":794,"tag_count":3} {"question":"Matplotlib - Tcl_AsyncDelete: async handler deleted by the wrong thread? I'm asking this question because I can't solve one problem in Python\/Django (actually in pure Python it's ok) which leads to RuntimeError: tcl_asyncdelete async handler deleted by the wrong thread. This is somehow related to the way how I render matplotlib plots in Django. The way I do it is: \n```\n...\nimport matplotlib.pyplot as plt\n...\nfig = plt.figure()\n...\nplt.close()\n```\nI extremely minimized my code. But the catch is - even if I have just one line of code: \n```\nfig = plt.figure()\n```\nI see this RuntimeError happening. I hope I could solve the problem, If I knew the correct way of closing\/cleaning\/destroying plots in Python\/Django.","response":"By default matplotlib uses TK gui toolkit, when you're rendering an image without using the toolkit (i.e. into a file or a string), matplotlib still instantiates a window that doesn't get displayed, causing all kinds of problems. In order to avoid that, you should use an Agg backend. 
It can be activated like so -- \n```\nimport matplotlib\nmatplotlib.use('Agg')\nfrom matplotlib import pyplot\n```\nFor more information please refer to matplotlib documentation -- http:\/\/matplotlib.org\/faq\/howto_faq.html#matplotlib-in-a-web-application-server","best_answers_score":0.7869,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27147300\/matplotlib-tcl-asyncdelete-async-handler-deleted-by-the-wrong-thread","best_answers_votes":97,"tags":["python","django","matplotlib"],"question_length":716,"response_length":539,"tag_count":3} {"question":"Extract csv file specific columns to list in Python What I'm trying to do is plot the latitude and longitude values of specific storms on a map using matplotlib,basemap,python, etc. My problem is that I'm trying to extract the latitude, longitude, and name of the storms on map but I keep getting errors between lines 41-44 where I try to extract the columns into the list. Here is what the file looks like: \n```\n1957,AUDREY,HU, 21.6N, 93.3W\n1957,AUDREY,HU,22.0N, 93.4W\n1957,AUDREY,HU,22.6N, 93.5W\n1957,AUDREY,HU,23.2N, 93.6W\n```\nI want the list to look like the following: \n```\nlatitude = [21.6N,22.0N,23.4N]\nlongitude = [93.3W, 93.5W,93.8W]\nname = [\"Audrey\",\"Audrey\"]\n```\nHere's what I have so far: \n```\ndata = np.loadtxt('louisianastormb.csv',dtype=np.str,delimiter=',',skiprows=1)\n'''print data'''\n\ndata = np.loadtxt('louisianastormb.csv',dtype=np.str,delimiter=',',skiprows=0)\n\nf= open('louisianastormb.csv', 'rb')\nreader = csv.reader(f, delimiter=',')\nheader = reader.next()\nzipped = zip(*reader)\n\nlatitude = zipped[3]\nlongitude = zipped[4]\nnames = zipped[1]\nx, y = m(longitude, latitude)\n```\nHere's the last error message\/traceback I received: Traceback (most recent call last): File \"\/home\/darealmzd\/lstorms.py\", line 42, in header = reader.next() _csv.Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?","response":"This looks like a problem with line endings in your code. If you're going to be using all these other scientific packages, you may as well use Pandas for the CSV reading part, which is both more robust and more useful than just the csv module: \n```\nimport pandas\ncolnames = ['year', 'name', 'city', 'latitude', 'longitude']\ndata = pandas.read_csv('test.csv', names=colnames)\n```\nIf you want your lists as in the question, you can now do: \n```\nnames = data.name.tolist()\nlatitude = data.latitude.tolist()\nlongitude = data.longitude.tolist()\n```","best_answers_score":0.7867,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19486369\/extract-csv-file-specific-columns-to-list-in-python","best_answers_votes":76,"tags":["python","csv","numpy","matplotlib"],"question_length":1370,"response_length":543,"tag_count":4} {"question":"Plotting dates on the x-axis I am trying to plot information against dates. I have a list of dates in the format \"01\/02\/1991\". I converted them by doing the following: \n```\nx = parser.parse(date).strftime('%Y%m%d'))\n```\nwhich gives 19910102 Then I tried to use num2date \n```\nimport matplotlib.dates as dates\nnew_x = dates.num2date(x)\n```\nPlotting: \n```\nplt.plot_date(new_x, other_data, fmt=\"bo\", tz=None, xdate=True)\n```\nBut I get an error. It says \"ValueError: year is out of range\". Any solutions?","response":"You can do this more simply using plot() instead of plot_date(). 
First, convert your strings to instances of Python datetime.date: \n```\nimport datetime as dt\n\ndates = ['01\/02\/1991','01\/03\/1991','01\/04\/1991']\nx = [dt.datetime.strptime(d,'%m\/%d\/%Y').date() for d in dates]\ny = range(len(x)) # many thanks to Kyss Tao for setting me straight here\n```\nThen plot: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\nplt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%m\/%d\/%Y'))\nplt.gca().xaxis.set_major_locator(mdates.DayLocator())\nplt.plot(x,y)\nplt.gcf().autofmt_xdate()\n```\nResult:","best_answers_score":0.7863,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9627686\/plotting-dates-on-the-x-axis","best_answers_votes":196,"tags":["python","datetime","matplotlib"],"question_length":499,"response_length":607,"tag_count":3} {"question":"What is the difference between set_xlim and set_xbound? From the help: set_xlim: Set the data limits for the xaxis. set_xbound: Set the lower and upper numerical bounds of the x-axis. That is not very clear, so let\u2019s say that I plot something: \n```py\nimport matplotlib.pylab as plt\nfig, ax = plt.subplots(1, 1)\nax.plot(xrange(10), xrange(10))\n```\nNow, either I do: \n```py\nax.set_xlim(2, 7)\n```\nor: \n```py\nax.set_xbound(2, 7)\n```\nI do not see the difference. I can dragg the plot, all the line is ploted between 0 and 9.","response":"The bound can changes automatically, if you later plot something that is not within the bound. In contrast, limits are fixed and do not change automatically. \n```\nimport pylab as p\n\nt = p.arange(0.0, 2.0, 0.01)\ns = p.sin(2*p.pi*t)\n\nax=p.subplot(111)\nax.plot(t, s, color='r',linewidth=1.0)\nax.set_ylim(-1,1)\nax.plot(t, s+1, color='g',linewidth=1.0, label=\"Graph2\")\np.show()\n\n\nax=p.subplot(111)\nax.plot(t, s, color='r',linewidth=1.0)\nax.set_ybound(-1,1)\nax.plot(t, s+1, color='g',linewidth=1.0, label=\"Graph2\")\np.show()\n```","best_answers_score":0.7842,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11459672\/what-is-the-difference-between-set-xlim-and-set-xbound","best_answers_votes":31,"tags":["python","matplotlib","x-axis"],"question_length":519,"response_length":521,"tag_count":3} {"question":"Plot correlation matrix using pandas I have a data set with huge number of features, so analysing the correlation matrix has become very difficult. I want to plot a correlation matrix which we get using dataframe.corr() function from pandas library. Is there any built-in function provided by the pandas library to plot this matrix?","response":"If your main goal is to visualize the correlation matrix, rather than creating a plot per se, the convenient pandas styling options is a viable built-in solution: \n```\nimport pandas as pd\nimport numpy as np\n\nrs = np.random.RandomState(0)\ndf = pd.DataFrame(rs.rand(10, 10))\ncorr = df.corr()\ncorr.style.background_gradient(cmap='coolwarm')\n# 'RdBu_r', 'BrBG_r', & PuOr_r are other good diverging colormaps\n```\nNote that this needs to be in a backend that supports rendering HTML, such as the JupyterLab Notebook. 
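(If you are not in an HTML-capable frontend at all, an editor's sketch rather than part of the original answer: plt.matshow gives a quick plain-matplotlib view of the same matrix: \n```\nimport matplotlib.pyplot as plt\nplt.matshow(corr)\nplt.colorbar()\nplt.show()\n```\n)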
Styling You can easily limit the digit precision (this is now .format(precision=2) in pandas 2.*): \n```\ncorr.style.background_gradient(cmap='coolwarm').set_precision(2)\n```\nOr get rid of the digits altogether if you prefer the matrix without annotations: \n```\ncorr.style.background_gradient(cmap='coolwarm').set_properties(**{'font-size': '0pt'})\n```\nThe styling documentation also includes instructions for more advanced styles, such as how to change the display of the cell the mouse pointer is hovering over. Time comparison In my testing, style.background_gradient() was 4x faster than plt.matshow() and 120x faster than sns.heatmap() with a 10x10 matrix. Unfortunately it doesn't scale as well as plt.matshow(): the two take about the same time for a 100x100 matrix, and plt.matshow() is 10x faster for a 1000x1000 matrix. Saving There are a few possible ways to save the stylized dataframe: Return the HTML by appending the render() method and then write the output to a file. Save as an .xlsx file with conditional formatting by appending the to_excel() method. Combine with imgkit to save a bitmap. Take a screenshot (like I have done here). Normalize colors across the entire matrix (pandas >= 0.24) By setting axis=None, it is now possible to compute the colors based on the entire matrix rather than per column or per row: \n```\ncorr.style.background_gradient(cmap='coolwarm', axis=None)\n```\nSingle corner heatmap Since many people are reading this answer I thought I would add a tip for how to only show one corner of the correlation matrix. I find this easier to read myself, since it removes the redundant information. \n```\n# Fill diagonal and upper half with NaNs\nmask = np.zeros_like(corr, dtype=bool)\nmask[np.triu_indices_from(mask)] = True\ncorr[mask] = np.nan\n(corr\n .style\n .background_gradient(cmap='coolwarm', axis=None, vmin=-1, vmax=1)\n .highlight_null(color='#f1f1f1') # Color NaNs grey\n .format(precision=2))\n```","best_answers_score":0.7836,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29432629\/plot-correlation-matrix-using-pandas","best_answers_votes":463,"tags":["python","pandas","matplotlib","data-visualization","information-visualization"],"question_length":332,"response_length":2446,"tag_count":5} {"question":"Multiple datasets on the same scatter plot I want to plot multiple data sets on the same scatter plot: \n```\ncases = scatter(x[:4], y[:4], s=10, c='b', marker=\"s\")\ncontrols = scatter(x[4:], y[4:], s=10, c='r', marker=\"o\")\n\nshow()\n```\nThe above only shows the most recent scatter(). I've also tried: \n```\nplt = subplot(111)\nplt.scatter(x[:4], y[:4], s=10, c='b', marker=\"s\")\nplt.scatter(x[4:], y[4:], s=10, c='r', marker=\"o\")\nshow()\n```","response":"You need a reference to an Axes object to keep drawing on the same subplot.
\n```\nimport matplotlib.pyplot as plt\n\nx = range(100)\ny = range(100,200)\nfig = plt.figure()\nax1 = fig.add_subplot(111)\n\nax1.scatter(x[:4], y[:4], s=10, c='b', marker=\"s\", label='first')\nax1.scatter(x[40:],y[40:], s=10, c='r', marker=\"o\", label='second')\nplt.legend(loc='upper left')\nplt.show()\n```","best_answers_score":0.7834,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4270301\/multiple-datasets-on-the-same-scatter-plot","best_answers_votes":174,"tags":["python","matplotlib","scatter-plot"],"question_length":433,"response_length":372,"tag_count":3} {"question":"Plotting pandas timedelta I have a pandas dataframe that has two datetime64 columns and one timedelta64 column that is the difference between the two columns. I'm trying to plot a histogram of the timedelta column to visualize the time differences between the two events. However, just using df['time_delta'] results in: TypeError: ufunc add cannot use operands with types dtype('<m8[ns]') and dtype('<M8[ns]'). Trying df2 = df1['time_delta'].astype(float) results in: TypeError: cannot astype a timedelta from [timedelta64[ns]] to [float64] How would one create a histogram of pandas timedelta data?","response":"Here are ways to convert timedeltas, docs are here \n```\npd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')\n# 0 0 days, 00:00:01\n# 1 1 days, 00:00:01\n# 2 2 days, 00:00:01\n# 3 3 days, 00:00:01\n# 4 4 days, 00:00:01\n# dtype: timedelta64[ns]\n```\nConvert to seconds (is an exact conversion): \n```\n(pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')).astype('timedelta64[s]')\n# 0 1\n# 1 86401\n# 2 172801\n# 3 259201\n# 4 345601\n# dtype: float64\n```\nConvert using astype will round to that unit: \n```\n(pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')).astype('timedelta64[D]')\n# 0 0\n# 1 1\n# 2 2\n# 3 3\n# 4 4\n# dtype: float64\n```\nDivision will give an exact repr: \n```\n(pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')) \/ np.timedelta64(1,'D')\n# 0 0.000012\n# 1 1.000012\n# 2 2.000012\n# 3 3.000012\n# 4 4.000012\n# dtype: float64\n```","best_answers_score":0.7823,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23543909\/plotting-pandas-timedelta","best_answers_votes":58,"tags":["python","matplotlib","pandas"],"question_length":562,"response_length":962,"tag_count":3} {"question":"How to pass rgb color values to eventplot I'm simply trying to plot some tick marks with a specific color using matplotlib's eventplot. I'm running Python 3 in Jupyter notebook with %matplotlib inline. Here's an example code: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt \nspikes = 100*np.random.random(100)\nplt.eventplot(spikes, orientation='horizontal', linelengths=0.9, color=[0.3,0.3,0.5])\n```\nIt outputs the following error: \n```\nValueError: colors and positions are unequal sized sequences\n```\nThe error occurs presumably because I am not providing a list of colors of the same length as the data (but I want them to all just be the same color!). It also gives an error when I use a color string like 'crimson' or 'orchid'. But it works when I use a simple one-letter string like 'r'. Am I really restricted to just using the extremely limited set of one-letter color strings 'r','b','g','k','m','y', etc...
or making a long color list when using this eventplot?","response":"According to the docs: you can pass an (r, g, b) or (r, g, b, a) tuple, where each of r, g, b and a is in the range [0,1]. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nspikes = 100*np.random.random(100)\nplt.eventplot(spikes, orientation='horizontal', linelengths=0.9, color = [(0.3,0.3,0.5)])\n\nplt.show()\n```","best_answers_score":0.7818,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45363146\/how-to-pass-rgb-color-values-to-eventplot","best_answers_votes":19,"tags":["python","matplotlib","colors"],"question_length":982,"response_length":321,"tag_count":3} {"question":"How do I change the font size of ticks of matplotlib.pyplot.colorbar.ColorbarBase? I would like to know how to change the font size of ticks of ColorbarBase of matplotlib. The following lines are a relevant part of my analysis script, in which ColorbarBase is used. \n```\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import LogNorm\nimport matplotlib as mpl\n\naxcb = fig.add_axes([0.9, 0.135, 0.02, 0.73])\n\ncb = mpl.colorbar.ColorbarBase(axcb, norm=LogNorm(vmin=7e-5, vmax=1), cmap=plt.cm.CMRmap)\ncb.set_label(\"Relative Photon Intensity\", labelpad=-1, size=14)\n```\nI am using matplotlib ver 1.4.3 with Python 2.7 on OS X.","response":"You can change the tick size using: \n```\nfont_size = 14 # Adjust as appropriate.\ncb.ax.tick_params(labelsize=font_size)\n```\nSee the docs for ax.tick_params here for more parameters that can be modified.","best_answers_score":0.7806,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29074820\/how-do-i-change-the-font-size-of-ticks-of-matplotlib-pyplot-colorbar-colorbarbas","best_answers_votes":69,"tags":["matplotlib"],"question_length":628,"response_length":202,"tag_count":1} {"question":"How to edit a seaborn legend title and labels for figure-level functions I've created this plot using Seaborn and a pandas dataframe (data): My code: \n```py\nimport seaborn as sns\n\ng = sns.lmplot('credibility', 'percentWatched', data=data, hue='millennial', markers=[\"+\", \".\"])\n```\nYou may notice the plot's legend title is simply the variable name ('millennial') and the legend items are its values (0, 1). How can I edit the legend's title and labels? Ideally, the legend title would be 'Generation' and the labels would be \"Millennial\" and \"Older Generations\".","response":"If legend_out is set to True, then the legend is available through the g._legend property and is part of the figure. A seaborn legend is a standard matplotlib legend object, so you can change the legend texts. Tested in python 3.8.11, matplotlib 3.4.3, seaborn 0.11.2 \n```py\nimport seaborn as sns\n\n# load the tips dataset\ntips = sns.load_dataset(\"tips\")\n\n# plot\ng = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': True})\n\n# title\nnew_title = 'My title'\ng._legend.set_title(new_title)\n# replace labels\nnew_labels = ['label 1', 'label 2']\nfor t, l in zip(g._legend.texts, new_labels):\n t.set_text(l)\n```\nAnother situation arises if legend_out is set to False.
You have to find which axes object has the legend (in the example below it is axis number 0): \n```py\ng = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': False})\n\n# check the axes and find which one has the legend\nleg = g.axes.flat[0].get_legend()\nnew_title = 'My title'\nleg.set_title(new_title)\nnew_labels = ['label 1', 'label 2']\nfor t, l in zip(leg.texts, new_labels):\n t.set_text(l)\n```\nMoreover, you can combine both situations and use this code: \n```py\ng = sns.lmplot(x=\"total_bill\", y=\"tip\", hue=\"smoker\", data=tips, markers=[\"o\", \"x\"], facet_kws={'legend_out': True})\n\n# check the axes and find which one has the legend\nfor ax in g.axes.flat:\n leg = ax.get_legend()\n if leg is not None: break\n# or the legend may be on the figure\nif leg is None: leg = g._legend\n\n# change legend texts\nnew_title = 'My title'\nleg.set_title(new_title)\nnew_labels = ['label 1', 'label 2']\nfor t, l in zip(leg.texts, new_labels):\n t.set_text(l)\n```\nThis code works for any seaborn plot which is based on the Grid class.","best_answers_score":0.7797,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45201514\/how-to-edit-a-seaborn-legend-title-and-labels-for-figure-level-functions","best_answers_votes":143,"tags":["python","matplotlib","seaborn","legend","legend-properties"],"question_length":562,"response_length":1760,"tag_count":5} {"question":"Changing matplotlib subplot size\/position after axes creation Is it possible to set the size\/position of a matplotlib subplot after the axes are created? I know that I can do: \n```\nimport matplotlib.pyplot as plt\n\nax = plt.subplot(111)\nax.change_geometry(3,1,1)\n```\nto put the axes on the top row of three. But I want the axes to span the first two rows. I have tried this: \n```\nimport matplotlib.gridspec as gridspec\n\nax = plt.subplot(111)\ngs = gridspec.GridSpec(3,1)\nax.set_subplotspec(gs[0:2])\n```\nbut the axes still fill the whole window. Update for clarity I want to change the position of an existing axes instance rather than set it when it is created. This is because the extent of the axes will be modified each time I add data (plotting data on a map using cartopy). The map may turn out tall and narrow, or short and wide (or something in between). So the decision on the grid layout will happen after the plotting function.","response":"Thanks to Molly for pointing me in the right direction, I have a solution: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\n\nfig = plt.figure()\n\nax = fig.add_subplot(111)\n\ngs = gridspec.GridSpec(3,1)\nax.set_position(gs[0:2].get_position(fig))\nax.set_subplotspec(gs[0:2]) # only necessary if using tight_layout()\n\nfig.add_subplot(gs[2])\n\nfig.tight_layout() # not strictly part of the question\n\nplt.show()\n```","best_answers_score":0.7797,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22881301\/changing-matplotlib-subplot-size-position-after-axes-creation","best_answers_votes":21,"tags":["python","matplotlib"],"question_length":935,"response_length":465,"tag_count":2} {"question":"Setting axis limits in matplotlib pyplot I have two subplots in a figure. I want to set the axes of the second subplot such that it has the same limits as the first subplot (which changes depending on the values plotted).
\n```\nimport matplotlib.pyplot as plt\n\nplt.figure(1, figsize=(10, 20))\n\n## First subplot: Mean value in each period (mean over replications)\nplt.subplot(211, axisbg='w')\nplt.plot(time,meanVector[0:xMax], color='#340B8C', \n marker='x', ms=4, mec='#87051B', markevery=(asp, 2 * asp))\nplt.xticks(numpy.arange(0, T + 1, jump), rotation=-45)\nplt.axhline(y = Results[0], color='#299967', ls='--')\nplt.ylabel('Mean Value')\nplt.xlabel('Time')\nplt.grid(True)\n\n## Second subplot: moving average for determining warm-up period\n## (Welch method)\n\nplt.subplot(212) \nplt.plot(time[0:len(yBarWvector)],yBarWvector, color='#340B8C')\nplt.xticks(numpy.arange(0, T + 1, jump), rotation=-45)\nplt.ylabel('yBarW')\nplt.xlabel('Time')\nplt.xlim((0, T))\nplt.grid(True)\n```\nIn the second subplot, what should be the arguments for plt.ylim() function? I tried defining \n```\nymin, ymax = plt.ylim()\n```\nin the first subplot and then setting \n```\nplt.ylim((ymin,ymax))\n```\nin the second subplot. But that did not work, because the returned value ymax is the maximum value taken by the y variable (mean value) in the first subplot and not the upper limit of the y-axis.","response":"Your proposed solution should work, especially if the plots are interactive (they will stay in sync if one changes). As an alternative, you can manually set the y-limits of the second axis to match that of the first. Example: \n```\nfrom pylab import *\n\nx = arange(0.0, 2.0, 0.01)\ny1 = 3*sin(2*pi*x)\ny2 = sin(2*pi*x)\n\nfigure()\nax1 = subplot(211)\nplot(x, y1, 'b')\n\nsubplot(212)\nplot(x, y2, 'g')\nylim( ax1.get_ylim() ) # set y-limit to match first axis\n\nshow()\n```","best_answers_score":0.779,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3645787\/setting-axis-limits-in-matplotlib-pyplot","best_answers_votes":15,"tags":["python","matplotlib"],"question_length":1366,"response_length":464,"tag_count":2} {"question":"Python: Creating a 2D histogram from a numpy matrix I'm new to python. I have a numpy matrix, of dimensions 42x42, with values in the range 0-996. I want to create a 2D histogram using this data. I've been looking at tutorials, but they all seem to show how to create 2D histograms from random data and not a numpy matrix. So far, I have imported: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors\n```\nI'm not sure if these are correct imports, I'm just trying to pick up what I can from tutorials I see. I have the numpy matrix M with all of the values in it (as described above). In the end, I want it to look something like this: obviously, my data will be different, so my plot should look different. Can anyone give me a hand?
Edit: For my purposes, Hooked's example below, using matshow, is exactly what I'm looking for.","response":"If you have the raw data from the counts, you could use plt.hexbin to create the plots for you (IMHO this is better than a square lattice): Adapted from the example of hexbin: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nn = 100000\nx = np.random.standard_normal(n)\ny = 2.0 + 3.0 * x + 4.0 * np.random.standard_normal(n)\nplt.hexbin(x,y)\n\nplt.show()\n```\nIf you already have the Z-values in a matrix as you mention, just use plt.imshow or plt.matshow: \n```\nXB = np.linspace(-1,1,20)\nYB = np.linspace(-1,1,20)\nX,Y = np.meshgrid(XB,YB)\nZ = np.exp(-(X**2+Y**2))\nplt.imshow(Z,interpolation='none')\n```","best_answers_score":0.7779,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27156381\/python-creating-a-2d-histogram-from-a-numpy-matrix","best_answers_votes":31,"tags":["python","numpy","matrix","matplotlib","histogram"],"question_length":862,"response_length":606,"tag_count":5} {"question":"Plot pandas dates in matplotlib I have a fixed-width data file containing dates, but when I try to plot the data the dates are not displayed properly on the x-axis. My file looks like \n```\n2014-07-10 11:49:14.377102 45\n2014-07-10 11:50:14.449150 45\n2014-07-10 11:51:14.521168 21\n2014-07-10 11:52:14.574241 8\n2014-07-10 11:53:14.646137 11\n2014-07-10 11:54:14.717688 14\n```\netc., and I use pandas to read in the file \n```\n#! \/usr\/bin\/env python\nimport pandas as pd\nimport matplotlib.pyplot as plt\ndata = pd.read_fwf('myfile.log',header=None,names=['time','amount'],widths=[27,5])\ndata.time = pd.to_datetime(data['time'], format='%Y-%m-%d %H:%M:%S.%f')\nplt.plot(data.time,data.amount)\nplt.show()\n```\nSo I suppose the issue here is conversion from pandas to matplotlib datetime. How would one do the conversion? I also tried with pandas directly: \n```\ndata.time = pd.to_datetime(data['time'], format='%Y-%m-%d %H:%M:%S.%f')\ndata.set_index('time') # Fails!!\ndata.time.plot()\n```\nbut this fails with TypeError: Empty 'Series': no numeric data to plot","response":"If you use a list containing the column name(s) instead of a string, data.set_index will work. The following should show the dates on the x-axis: \n```\n#! \/usr\/bin\/env python\nimport pandas as pd\nimport matplotlib.pyplot as plt\ndata = pd.read_fwf('myfile.log',header=None,names=['time','amount'],widths=[27,5])\ndata.time = pd.to_datetime(data['time'], format='%Y-%m-%d %H:%M:%S.%f')\ndata.set_index(['time'],inplace=True)\ndata.plot()\n\n#OR \nplt.plot(data.index, data.amount)\n```","best_answers_score":0.7774,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25416955\/plot-pandas-dates-in-matplotlib","best_answers_votes":41,"tags":["python","pandas","datetime","dataframe","matplotlib"],"question_length":1060,"response_length":469,"tag_count":5} {"question":"Map values to colors in matplotlib I have a list of numbers as follows: \n```\nlst = [1.9378076554115014, 1.2084586588892861, 1.2133096565896173, \n 1.2427632053442292, 1.1809971732733273, 0.91960143581348919, \n 1.1106310149587162, 1.1106310149587162, 1.1527004351293346, \n 0.87318084435885079, 1.1666132876686799, 1.1666132876686799]\n```\nI want to convert these numbers to colors for display.
I want grayscale, but when I use these numbers as they are, it gives me an error: \n```\nValueError: to_rgba: Invalid rgba arg \"1.35252299785\"\nto_rgb: Invalid rgb arg \"1.35252299785\"\ngray (string) must be in range 0-1\n```\n...which I understand is due to it exceeding 1. I next tried to divide the items in the list by the highest number in the list to give values less than 1. But this gives a very narrow color scale with hardly any difference between values. Is there any way in which I can give some min and max range to colors and convert these values to color? I am using matplotlib.","response":"The matplotlib.colors module is what you are looking for. This provides a number of classes to map from values to colourmap values. \n```\nimport matplotlib\nimport matplotlib.cm as cm\n\nlst = [1.9378076554115014, 1.2084586588892861, 1.2133096565896173, 1.2427632053442292, \n 1.1809971732733273, 0.91960143581348919, 1.1106310149587162, 1.1106310149587162, \n 1.1527004351293346, 0.87318084435885079, 1.1666132876686799, 1.1666132876686799]\n\nminima = min(lst)\nmaxima = max(lst)\n\nnorm = matplotlib.colors.Normalize(vmin=minima, vmax=maxima, clip=True)\nmapper = cm.ScalarMappable(norm=norm, cmap=cm.Greys_r)\n\nfor v in lst:\n print(mapper.to_rgba(v))\n```\nThe general approach is to find the minima and maxima in your data. Use these to create a Normalize instance (other normalisation classes are available, e.g. log scale). Next you create a ScalarMappable using the Normalize instance and your chosen colormap. You can then use mapper.to_rgba(v) to map from an input value v, via your normalised scale, to a target color. \n```\nfor v in sorted(lst):\n print(\"%.4f: %.4f\" % (v, mapper.to_rgba(v)[0]) )\n```\nProduces the output: \n```\n0.8732: 0.0000\n0.9196: 0.0501\n1.1106: 0.2842\n1.1106: 0.2842\n1.1527: 0.3348\n1.1666: 0.3469\n1.1666: 0.3469\n1.1810: 0.3632\n1.2085: 0.3875\n1.2133: 0.3916\n1.2428: 0.4200\n1.9378: 1.0000\n```\nThe matplotlib.colors module documentation has more information if needed.","best_answers_score":0.7772,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28752727\/map-values-to-colors-in-matplotlib","best_answers_votes":40,"tags":["python","matplotlib"],"question_length":999,"response_length":1395,"tag_count":2} {"question":"Partially transparent scatter plot, but with a solid color bar In Python, with Matplotlib, how to simply do a scatter plot with transparency (alpha < 1), but with a color bar that represents their color value, but has alpha = 1? Here is what one gets, with from pylab import *; scatter(range(10), arange(0, 100, 10), c=range(10), alpha=0.2); color_bar = colorbar(): How can the color bar be made non-transparent? PS: I tried color_bar.set_alpha(1); draw(), but this did not do anything\u2026","response":"Alright, I found one way to do it, that looks relatively clean: (using the ColorBar object from the question) \n```\ncolor_bar.set_alpha(1)\ncolor_bar.draw_all()\n# pylab.draw() or pyplot.draw() might be necessary\n```\nIt would be great to get a confirmation that this is the most robust way to proceed, though!
:)","best_answers_score":0.7738,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4478725\/partially-transparent-scatter-plot-but-with-a-solid-color-bar","best_answers_votes":48,"tags":["python","matplotlib","scatter","colorbar"],"question_length":486,"response_length":543,"tag_count":4} {"question":"How to drop connecting lines where the function is discontinuous I'm plotting some functions that have several discontinuities. Each function is given as a list. I want to connect points with lines only where the function is continuous. Here is a simplified example of what plot is doing. \n```\nx=linspace(0,1,100) \ny=zeros(100)\ny[x<0.5] = x[x<0.5]\ny[x>=0.5] = 1 + x[x>=0.5]\nplot(x, y, '-o')\n```\nThere is a discontinuity at x=0.5, but plot connects all points with lines regardless. My functions are different of course. They typically have several discontinuities in different places. The criterion for the discontinuity is simple. Say, if the function jumps by more than 0.5, I assume it is discontinuous at that point. Is there an option in plot to tell it to drop the connecting lines between the points where the function is discontinuous? I recall being able to do that easily with gnuplot.","response":"use nan to break the line into multiple segments: \n```\nimport numpy as np\nfrom pylab import *\nx=linspace(0,1,100) \ny=zeros(100)\ny[x<0.5] = x[x<0.5]\ny[x>=0.5] = 1 + x[x>=0.5]\n\npos = np.where(np.abs(np.diff(y)) >= 0.5)[0]\n\nx[pos] = np.nan\ny[pos] = np.nan\n\nplot(x, y, '-o')\n```\nEdit: to insert nan at discontinuities: \n```\npos = np.where(np.abs(np.diff(y)) >= 0.5)[0]+1\nx = np.insert(x, pos, np.nan)\ny = np.insert(y, pos, np.nan)\n```","best_answers_score":0.7728,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10377593\/how-to-drop-connecting-lines-where-the-function-is-discontinuous","best_answers_votes":34,"tags":["matplotlib"],"question_length":898,"response_length":433,"tag_count":1} {"question":"Adding line to scatter plot using python's matplotlib I am using python's matplotlib and want to create a matplotlib.scatter() with an additional line. The line should proceed from the lower left corner to the upper right corner independent of the scatter's content. A linear regression through the data, like in this post, is not what I am looking for. Also, it should be dynamic and independent of the scatter input.
This should be the final plot: EDIT: Doing this got me the result: \n```\n# Scatter Plot\nx = data_calc_hourly.temp\ny = data_obs_hourly.temp\n\nlineStart = data_calc_hourly.temp.min() \nlineEnd = data_calc_hourly.temp.max() \n\nplt.figure()\nplt.scatter(x, y, color = 'k', alpha=0.5)\nplt.plot([lineStart, lineEnd], [lineStart, lineEnd], 'k-', color = 'r')\nplt.xlim(lineStart, lineEnd)\nplt.ylim(lineStart, lineEnd)\nplt.show()\n```\nIs there any better way?","response":"This draws a diagonal line which is independent of the scatter plot data and which stays rooted to the axes even if you resize the window: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.lines as mlines\n\nx, y = np.random.random((2, 100))*2\nfig, ax = plt.subplots()\nax.scatter(x, y, c='black')\nline = mlines.Line2D([0, 1], [0, 1], color='red')\ntransform = ax.transAxes\nline.set_transform(transform)\nax.add_line(line)\nplt.show()\n```","best_answers_score":0.772,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/40516661\/adding-line-to-scatter-plot-using-pythons-matplotlib","best_answers_votes":36,"tags":["python","matplotlib"],"question_length":865,"response_length":500,"tag_count":2} {"question":"Adding subplots to a subplot I'm trying to create a figure that consists of a 2x2 grid, where in each quadrant there are 2 vertically stacked subplots (i.e. a 2x1 grid). I can't seem to figure out how to achieve this, though. The closest I've gotten is using gridspec and some ugly code (see below), but because gridspec.update(hspace=X) changes the spacing for all of the subplots I'm still not where I'd like to be. Ideally what I want is to, using the picture below as an example, decrease the spacing between the subplots within each quadrant, while increasing the vertical spacing between the top and bottom quadrants (i.e. between 1-3 and 2-4). Is there a way to do this (with or without using gridspec)? What I originally envisioned is generating each of the sub-subplot grids (i.e. each 2x1 grid) and inserting them into the larger 2x2 grid of subplots, but I haven't figured out how to add a subplot to a subplot, if there is even a way. \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\nplt.figure(figsize=(10, 8))\ngs = gridspec.GridSpec(4,2)\ngs.update(hspace=0.4)\nfor i in range(2):\n for j in range(4):\n ax = plt.subplot(gs[j,i])\n ax.spines['top'].set_visible(False)\n ax.spines['right'].set_visible(False)\n plt.tick_params(which='both', top='off', right='off')\n if j % 2 == 0:\n ax.set_title(str(i+j+1))\n ax.plot([1,2,3], [1,2,3])\n ax.spines['bottom'].set_visible(False)\n ax.get_xaxis().set_visible(False) \n else:\n ax.plot([1,2,3], [3,2,1])\n```","response":"You can nest your GridSpec using SubplotSpec. The outer grid will be a 2 x 2 and the inner grids will be 2 x 1. The following code should give you the basic idea.
\n```\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\n\nfig = plt.figure(figsize=(10, 8))\nouter = gridspec.GridSpec(2, 2, wspace=0.2, hspace=0.2)\n\nfor i in range(4):\n inner = gridspec.GridSpecFromSubplotSpec(2, 1,\n subplot_spec=outer[i], wspace=0.1, hspace=0.1)\n\n for j in range(2):\n ax = plt.Subplot(fig, inner[j])\n t = ax.text(0.5,0.5, 'outer=%d, inner=%d' % (i, j))\n t.set_ha('center')\n ax.set_xticks([])\n ax.set_yticks([])\n fig.add_subplot(ax)\n\nfig.show()\n```","best_answers_score":0.7716,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34933905\/adding-subplots-to-a-subplot","best_answers_votes":98,"tags":["python","matplotlib","figure","subplot"],"question_length":1582,"response_length":714,"tag_count":4} {"question":"matplotlib hatched fill_between without edges? I have a region I'd like to hatch which borders on an existing plot line (of the same colour) that is dashed. However, when I use fill_between the region to be hatched has a border drawn around it also. This border seems to share properties with the lines that create the hatching so I cannot set edgecolour to \"none\" or set linestyle as \"--\" as the hatching is similarly affected. \n```\nimport matplotlib.pyplot as plt\nplt.plot([0,1],[0,1],ls=\"--\",c=\"b\")\nplt.fill_between([0,1],[0,1],color=\"none\",hatch=\"X\",edgecolor=\"b\")\nplt.show()\n```\nIn this example I'd want the diagonal line from 0,0 to 1,1 to be dashed. Many thanks in advance.","response":">2.0.1 Update As commented by @CatherineHolloway you need to use facecolor instead of color now: \n```\nimport matplotlib.pyplot as plt\nplt.plot([0,1],[0,1],ls=\"--\",c=\"b\")\nplt.fill_between([0,1],[0,1], facecolor=\"none\", hatch=\"X\", edgecolor=\"b\", linewidth=0.0)\nplt.show()\n```\nFormer answer This seems to do the trick! \n```\nimport matplotlib.pyplot as plt\nplt.plot([0,1],[0,1],ls=\"--\",c=\"b\")\nplt.fill_between([0,1],[0,1], color=\"none\", hatch=\"X\", edgecolor=\"b\", linewidth=0.0)\nplt.show()\n```","best_answers_score":0.7713,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18386106\/matplotlib-hatched-fill-between-without-edges","best_answers_votes":66,"tags":["python","matplotlib"],"question_length":677,"response_length":488,"tag_count":2} {"question":"Confusion between numpy, scipy, matplotlib and pylab Numpy, scipy, matplotlib, and pylab are common terms among those who use Python for scientific computation. I just learned a bit about pylab, and I got confused. Whenever I want to import numpy, I can always do: \n```\nimport numpy as np\n```\nI figured that once I do \n```\nfrom pylab import *\n```\nnumpy will be imported as well (with the np alias). So basically the second one does more things compared to the first one. There are a few things I want to ask: Is it right that pylab is just a wrapper for numpy, scipy and matplotlib? As np is the numpy alias in pylab, what is the scipy and matplotlib alias in pylab? (as far as I know, plt is the alias of matplotlib.pyplot, but I don't know the alias for matplotlib itself)","response":"No, pylab is part of matplotlib (in matplotlib.pylab) and tries to give you a MATLAB-like environment. matplotlib has a number of dependencies, among them numpy which it imports under the common alias np. scipy is not a dependency of matplotlib. If you run ipython --pylab an automatic import will put all symbols from matplotlib.pylab into global scope. Like you wrote, numpy gets imported under the np alias.
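For a plain script, the explicit equivalent of those automatic aliases is the usual convention (a sketch of the convention, not an exact replica of everything --pylab injects): \n```\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n```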
Symbols from matplotlib are available under the mpl alias.","best_answers_score":0.7701,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12987624\/confusion-between-numpy-scipy-matplotlib-and-pylab","best_answers_votes":136,"tags":["python","numpy","matplotlib","scipy"],"question_length":776,"response_length":543,"tag_count":4} {"question":"How to change the color of a single bar in a bar plot Suppose I have the bar chart as below: Any ideas on how to set different colors for each carrier? For example, AK would be red, GA would be green, etc? I am using Pandas and matplotlib in Python \n```\nf = plt.figure()\nax = f.add_subplot(1,1,1)\nax.bar([1,2,3,4], [1,2,3,4])\n# (returns a container of the 4 bar artists)\nax.get_children()\n# [..., <matplotlib.patches.Rectangle object at 0x...>, ...] (12 children in all)\nax.get_children()[2].set_color('r')\n# You can also try to locate the first patches.Rectangle object\n# instead of directly calling the index.\n```\nFor the suggestions above, how exactly could we enumerate ax.get_children() and check whether the object type is Rectangle? If the object is a Rectangle, we would assign it a different random color.","response":"Simple, just use .set_color: \n```\nbarlist = plt.bar([1,2,3,4], [1,2,3,4])\nbarlist[0].set_color('r')\nplt.show()\n```\nFor your new question, not much harder either, you just need to find the bar from your axis, an example: \n```\nf = plt.figure()\nax = f.add_subplot(1,1,1)\nax.bar([1,2,3,4], [1,2,3,4])\nax.get_children()\n# [..., <matplotlib.patches.Rectangle object at 0x...>, ...] (12 children in all)\nax.get_children()[2].set_color('r')\n# You can also try to locate the first patches.Rectangle object\n# instead of directly calling the index.\n```\nIf you have a complex plot and want to identify the bars first, add those: \n```\nimport matplotlib\n\nchildrenLS = ax.get_children()\nbarlist = filter(lambda x: isinstance(x, matplotlib.patches.Rectangle), childrenLS)\n# [<matplotlib.patches.Rectangle object at 0x...>, ...] (5 Rectangle objects)\n```","best_answers_score":0.7694,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18973404\/how-to-change-the-color-of-a-single-bar-in-a-bar-plot","best_answers_votes":197,"tags":["python","matplotlib","pandas","bar-chart"],"question_length":1314,"response_length":1611,"tag_count":4} {"question":"Store mouse click event coordinates with matplotlib I am trying to implement a simple mouse click event in matplotlib. I wish to plot a figure then use the mouse to select the lower and upper limits for integration. So far I am able to print the coordinates to screen but not store them for later use in the program. I would also like to exit the connection to the figure after the second mouse click. Below is the code which currently plots and then prints the coordinates. My Question(s): How can I store coordinates from the figure to a list? i.e. click = [xpos, ypos] Is it possible to get two sets of x coordinates in order to do a simple integration over that section of line?
\n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(-10,10)\ny = x**2\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot(x,y)\n\ndef onclick(event):\n global ix, iy\n ix, iy = event.xdata, event.ydata\n print 'x = %d, y = %d'%(\n ix, iy)\n\n global coords\n coords = [ix, iy]\n\n return coords\n\n\nfor i in xrange(0,1):\n\n cid = fig.canvas.mpl_connect('button_press_event', onclick)\n\n\nplt.show()\n```","response":"mpl_connect needs to be called just once to connect the event to the event handler. It will keep listening for click events until you disconnect. And you can use \n```\nfig.canvas.mpl_disconnect(cid)\n```\nto disconnect the event hook. What you want to do is something like: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(-10,10)\ny = x**2\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot(x,y)\n\ncoords = []\n\ndef onclick(event):\n global ix, iy\n ix, iy = event.xdata, event.ydata\n print (f'x = {ix}, y = {iy}')\n\n global coords\n coords.append((ix, iy))\n \n if len(coords) == 2:\n fig.canvas.mpl_disconnect(cid)\n\n return coords\ncid = fig.canvas.mpl_connect('button_press_event', onclick)\n```","best_answers_score":0.7689,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25521120\/store-mouse-click-event-coordinates-with-matplotlib","best_answers_votes":57,"tags":["python","matplotlib","onclick","mouseevent"],"question_length":1117,"response_length":735,"tag_count":4} {"question":"How to change fonts in matplotlib (python)? It sounds like an easy problem, but I cannot find an effective solution to change the font (not the font size) in a plot made with matplotlib in python. I found a couple of tutorials to change the default font of matplotlib by modifying some files in the folders where matplotlib stores its default font - see this blog post - but I am looking for a less radical solution since I would like to use more than one font in my plot (text, label, axis label, etc).","response":"Say you want Comic Sans for the title and Helvetica for the x label. \n```\ncsfont = {'fontname':'Comic Sans MS'}\nhfont = {'fontname':'Helvetica'}\n\nplt.title('title',**csfont)\nplt.xlabel('xlabel', **hfont)\nplt.show()\n```","best_answers_score":0.7687,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21321670\/how-to-change-fonts-in-matplotlib-python","best_answers_votes":166,"tags":["python","fonts","matplotlib"],"question_length":501,"response_length":218,"tag_count":3} {"question":"Remove the legend on a matplotlib figure To add a legend to a matplotlib plot, one simply runs legend(). How to remove a legend from a plot? (The closest I came to this is to run legend([]) in order to empty the legend from data. But that leaves an ugly white rectangle in the upper right corner.)","response":"As of matplotlib v1.4.0rc4, a remove method has been added to the legend object. Usage: \n```py\nax.get_legend().remove()\n```\nor \n```py\nlegend = ax.legend(...)\n...\nlegend.remove()\n```\nSee here for the commit where this was introduced.","best_answers_score":0.7674,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5735208\/remove-the-legend-on-a-matplotlib-figure","best_answers_votes":471,"tags":["matplotlib","legend"],"question_length":297,"response_length":232,"tag_count":2} {"question":"Setting a relative frequency in a matplotlib histogram I have data as a list of floats and I want to plot it as a histogram.
The hist() function does the job perfectly for plotting the absolute histogram. However, I cannot figure out how to represent it in a relative frequency format - I would like to have it as a fraction or ideally as a percentage on the y-axis. Here is the code: \n```\nfig = plt.figure()\nax = fig.add_subplot(111)\nn, bins, patches = ax.hist(mydata, bins=100, normed=1, cumulative=0)\nax.set_xlabel('Bins', size=20)\nax.set_ylabel('Frequency', size=20)\nax.legend\n\nplt.show()\n```\nI thought the normed=1 argument would do it, but it gives fractions that are too high and sometimes are greater than 1. They also seem to depend on the bin size, as if they are not normalized by the bin size or something. Nevertheless, when I set cumulative=1, it nicely sums up to 1. So, where is the catch? By the way, when I feed the same data into Origin and plot it, it gives me perfectly correct fractions. Thank you!","response":"Because the normed option of hist returns the density of points, i.e. dN\/dx. What you need is something like this: \n```\n# assuming that mydata is a numpy array\nax.hist(mydata, weights=np.zeros_like(mydata) + 1. \/ mydata.size)\n# this will give you fractions\n```","best_answers_score":0.7672,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9767241\/setting-a-relative-frequency-in-a-matplotlib-histogram","best_answers_votes":64,"tags":["python","matplotlib","histogram","frequency"],"question_length":1012,"response_length":257,"tag_count":4} {"question":"Rotate tick labels in subplot I am attempting to rotate the x labels of a subplot (created using GridSpec) by 45 degrees. I have tried using axa.set_xticks() and axa.set_xticklabels, but it does not seem to work. Google wasn't helping either, since most questions concerning labels are about normal plots, and not subplots. See code below: \n```\nwidth = 20 # Width of the figure in centimeters\nheight = 15 # Height of the figure in centimeters\nw = width * 0.393701 # Conversion to inches\nh = height * 0.393701 # Conversion to inches\n\nf1 = plt.figure(figsize=[w,h])\ngs = gridspec.GridSpec(1, 7, width_ratios = [1.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])\n\naxa = plt.subplot(gs[0])\naxa.plot(dts, z,'k', alpha=0.75, lw=0.25)\n#axa.set_title('...')\naxa.set_ylabel('TVDSS ' + '$[m]$', fontsize = '10' )\naxa.set_xlabel('slowness 'r'$[\\mu s\/m]$', fontsize = '10')\naxa.set_ylim(245, 260)\naxa.set_xlim(650, 700)\naxa.tick_params(labelsize=7)\naxa.invert_yaxis()\naxa.grid()\n```\nAny help will be greatly appreciated!","response":"You can do it in multiple ways: Here is one solution making use of tick_params: \n```\nax.tick_params(labelrotation=45)\n```\nHere is another solution making use of set_xticklabels: \n```\nax.set_xticklabels(labels, rotation=45)\n```\nHere is a third solution making use of set_rotation: \n```\nfor tick in ax.get_xticklabels():\n tick.set_rotation(45)\n```","best_answers_score":0.7669,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31186019\/rotate-tick-labels-in-subplot","best_answers_votes":130,"tags":["python","matplotlib","subplot","xticks","yticks"],"question_length":1115,"response_length":348,"tag_count":5} {"question":"How to format axis number format to thousands with a comma How can I change the format of the numbers in the x-axis to be like 10,000 instead of 10000?
Ideally, I would just like to do something like this: \n```\nx = format((10000.21, 22000.32, 10120.54), \"#,###\")\n```\nHere is the code: \n```\nimport matplotlib.pyplot as plt\n\n# create figure instance\nfig1 = plt.figure(1)\nfig1.set_figheight(15)\nfig1.set_figwidth(20)\n\nax = fig1.add_subplot(2,1,1)\n\nx = 10000.21, 22000.32, 10120.54\n\ny = 1, 4, 15\nax.plot(x, y)\n\nax2 = fig1.add_subplot(2,1,2)\n\nx2 = 10434, 24444, 31234\ny2 = 1, 4, 9\nax2.plot(x2, y2)\n\nfig1.show()\n```","response":"Use , as the format specifier: \n```\nformat(10000.21, ',')\n# '10,000.21'\n```\nAlternatively you can also use str.format instead of format: \n```\n'{:,}'.format(10000.21)\n# '10,000.21'\n```\nWith matplotlib.ticker.FuncFormatter: \n```\n...\nax.get_xaxis().set_major_formatter(\n matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))\nax2.get_xaxis().set_major_formatter(\n matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))\nfig1.show()\n```","best_answers_score":0.7665,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25973581\/how-to-format-axis-number-format-to-thousands-with-a-comma","best_answers_votes":124,"tags":["python","matplotlib"],"question_length":609,"response_length":461,"tag_count":2} {"question":"How to view all colormaps available in matplotlib? I was wondering if there is a function call that can give me the name of all colormaps available in matplotlib? It used to be possible by something along the lines of (see here): \n```\nimport matplotlib.pyplot as plt\ncmaps = sorted(m for m in plt.cm.datad if not m.endswith(\"_r\"))\n```\nBut running this in mpl 1.5 does not return the new colormaps, such as viridis, magma and so on. On the new reference page the code actually hardcodes the names (see here) but I was wondering if a similar query to the above is still possible?","response":"plt.colormaps() returns a list of all registered colormaps. From the docs: matplotlib.pyplot.colormaps() Matplotlib provides a number of colormaps, and others can be added using register_cmap(). This function documents the built-in colormaps, and will also return a list of all registered colormaps if called. The list this returns includes viridis, magma, inferno and plasma for me in 1.5.0","best_answers_score":0.7656,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34314356\/how-to-view-all-colormaps-available-in-matplotlib","best_answers_votes":38,"tags":["python","matplotlib"],"question_length":577,"response_length":391,"tag_count":2} {"question":"Cleanest way to hide every nth tick label in matplotlib colorbar? The labels on my horizontal colorbar are too close together and I don't want to reduce text size further: \n```\ncbar = plt.colorbar(shrink=0.8, orientation='horizontal', extend='both', pad=0.02)\ncbar.ax.tick_params(labelsize=8)\n```\nI'd like to preserve all ticks, but remove every other label. Most examples I've found pass a user-specified list of strings to cbar.set_ticklabels(). I'm looking for a general solution. I played around with variations of \n```\ncbar.set_ticklabels(cbar.get_ticklabels()[::2])\n```\nand \n```\ncbar.ax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(nbins=4))\n```\nbut I haven't found the magic combination.
I know there must be a clean way to do this using a locator object.","response":"Loop over the ticklabels and call set_visible(): \n```\nfor label in cbar.ax.xaxis.get_ticklabels()[::2]:\n label.set_visible(False)\n```","best_answers_score":0.7655,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20337664\/cleanest-way-to-hide-every-nth-tick-label-in-matplotlib-colorbar","best_answers_votes":102,"tags":["python","matplotlib"],"question_length":771,"response_length":136,"tag_count":2} {"question":"How to plot 2d math vectors with matplotlib? How can we plot 2D math vectors with matplotlib? Does anyone have an example or suggestion about that? I have a couple of vectors stored as 2D numpy arrays, and I would like to plot them as directed edges. The vectors to be plotted are constructed as below: \n```\nimport numpy as np\n# a list contains 3 vectors;\n# each list is constructed as the tail and the head of the vector\na = np.array([[0, 0, 3, 2], [0, 0, 1, 1], [0, 0, 9, 9]])\n```\nEdit: I just added the plot of the final answer of tcaswell for anyone interested in the output and wanting to plot 2d vectors with matplotlib:","response":"The suggestion in the comments by halex is correct, you want to use quiver (doc), but you need to tweak the properties a bit. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nsoa = np.array([[0, 0, 3, 2], [0, 0, 1, 1], [0, 0, 9, 9]])\nX, Y, U, V = zip(*soa)\nplt.figure()\nax = plt.gca()\nax.quiver(X, Y, U, V, angles='xy', scale_units='xy', scale=1)\nax.set_xlim([-1, 10])\nax.set_ylim([-1, 10])\nplt.draw()\nplt.show()\n```","best_answers_score":0.7652,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12265234\/how-to-plot-2d-math-vectors-with-matplotlib","best_answers_votes":38,"tags":["python","math","matplotlib","visualization"],"question_length":623,"response_length":424,"tag_count":4} {"question":"Rotate tick labels for seaborn barplot I am trying to display a chart with rotated x-axis labels, but the chart is not displaying. \n```\nimport seaborn as sns\n%matplotlib inline\n\nyellow='#FFB11E'\nby_school=sns.barplot(x ='Organization Name',y ='Score',data = combined.sort('Organization Name'),color=yellow,ci=None)\n```\nAt this point I can see the image, but after I set the xticklabel, I don't see the image anymore, only an object reference. (I would post the image, but I don't have enough reputation :() \n```\nby_school.set_xticklabels('Organization Name',rotation=45)\n# [<matplotlib.text.Text object at 0x...>, ...]\n```\nA similar question is posted here: Rotate label text in seaborn factorplot but the solution is not working.","response":"You need a different method call, namely .set_rotation for each ticklabel. Since you already have the ticklabels, just change their rotations: \n```\nfor item in by_school.get_xticklabels():\n item.set_rotation(45)\n```\nbarplot returns a matplotlib.axes object (as of seaborn 0.6.0), therefore you have to rotate the labels this way. In other cases, when the method returns a FacetGrid object, refer to Rotate label text in seaborn factorplot","best_answers_score":0.7647,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31859285\/rotate-tick-labels-for-seaborn-barplot","best_answers_votes":64,"tags":["python","matplotlib","seaborn"],"question_length":731,"response_length":442,"tag_count":3} {"question":"How to draw a rectangle over a specific region in a matplotlib graph I have a graph, computed from some data, drawn in matplotlib.
I want to draw a rectangular region around the global maximum of this graph. I tried plt.axhspan, but the rectangle doesn't seem to appear when I call plt.show(). So, how can a rectangular region be drawn onto a matplotlib graph? Thanks!","response":"The most likely reason is that you used data units for the x arguments when calling axhspan. From the function's docs (my emphasis): y coords are in data units and x coords are in axes (relative 0-1) units. So any rectangle stretching left of 0 or right of 1 is simply drawn off-plot. An easy alternative might be to add a Rectangle to your axis (e.g., via plt.gca and add_patch); Rectangle uses data units for both dimensions. The following would add a grey rectangle of width 4 and height 6 with its lower-left corner at (someX - .5, someY - .5): \n```py\nfrom matplotlib.patches import Rectangle\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nplt.xlim(0, 10)\nplt.ylim(0, 12)\n\nsomeX, someY = 2, 5\ncurrentAxis = plt.gca()\ncurrentAxis.add_patch(Rectangle((someX - .5, someY - .5), 4, 6, facecolor=\"grey\"))\n```\nWithout facecolor \n```py\ncurrentAxis.add_patch(Rectangle((someX - .5, someY - .5), 4, 6, facecolor=\"none\", ec='k', lw=2))\n```","best_answers_score":0.7645,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13013781\/how-to-draw-a-rectangle-over-a-specific-region-in-a-matplotlib-graph","best_answers_votes":54,"tags":["python","matplotlib"],"question_length":1370,"response_length":902,"tag_count":2} {"question":"Rotate theta=0 on matplotlib polar plot I have the following example code: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport random\n\ndata_theta = range(10,171,10)\n\ndata_theta_rad = []\nfor i in data_theta:\n data_theta_rad.append(float(i)*np.pi\/180.0)\n\ndata_r = random.sample(range(70, 90), 17)\n\nprint data_theta\nprint data_r\n\nax = plt.subplot(111, polar=True)\nax.plot(data_theta_rad, data_r, color='r', linewidth=3)\nax.set_rmax(95)\n# ax.set_rmin(70.0)\nax.grid(True)\n\nax.set_title(\"Example\", va='bottom')\nplt.show()\n```\n...which produces something like this: ...but I would like to set theta=0 to the 'West'. So something like: Any ideas how to do this with matplotlib (I made the pic below in PowerPoint)?","response":"Simply use: \n```\nax.set_theta_zero_location(\"W\")\n```\nMore info in the documentation of matplotlib.","best_answers_score":0.7629,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26906510\/rotate-theta-0-on-matplotlib-polar-plot","best_answers_votes":58,"tags":["python","matplotlib","plot"],"question_length":721,"response_length":98,"tag_count":3} {"question":"Plot a black-and-white binary map in matplotlib I'm using python to simulate some automation models, and with the help of matplotlib I'm producing plots like the one shown below. I'm currently plotting with the following command: \n```\nax.imshow(self.g, cmap=map, interpolation='nearest')\n```\nwhere self.g is the binary map (0 -> blue, 1 -> red in my current plots). However, to include this in my report I would like the plot to be with black dots on white background instead of red on blue. How do I accomplish that?","response":"You can change the color map you are using via the cmap keyword. The color map 'Greys' provides the effect you want. You can find a list of available maps on the scipy website.
\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nnp.random.seed(101)\ng = np.floor(np.random.random((100, 100)) + .5)\n\nplt.subplot(211)\nplt.imshow(g)\nplt.subplot(212)\nplt.imshow(g, cmap='Greys', interpolation='nearest')\nplt.savefig('blkwht.png')\n\nplt.show()\n```\nwhich results in:","best_answers_score":0.7628,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9638826\/plot-a-black-and-white-binary-map-in-matplotlib","best_answers_votes":73,"tags":["python","matplotlib"],"question_length":517,"response_length":464,"tag_count":2} {"question":"Python\/Matplotlib - Colorbar Range and Display Values When using matplotlib with a contour plot, I'm having trouble getting the colorbar to display as I want. I've read through numerous similar examples, but have still not been able to get what I want. In the image below, I want two things changed. I want the minimum and maximum values to be displayed on the color bar (the max should be 2.0 and the min -0.1). These two values should be at the very edge of the colorbar. Also, I want the colorbar to display the value at every color transition. For example, in the plot below, between 2.1 and 1.8, there is another color transition where the value isn't displayed. I think I may need to use norm, but it hasn't worked for me so far. Code: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nxi = np.array([0., 0.5, 1.0])\nyi = np.array([0., 0.5, 1.0])\nzi = np.array([[0., 1.0, 2.0],\n [0., 1.0, 2.0],\n [-0.1, 1.0, 2.0]])\n\nplt.contour(xi, yi, zi, 15, linewidths=0.5, colors='k')\nplt.contourf(xi, yi, zi, 15, cmap=plt.cm.jet)\nplt.colorbar()\nplt.show()\n```","response":"If I understand correctly what you want, I think this should do it: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nxi = np.array([0., 0.5, 1.0])\nyi = np.array([0., 0.5, 1.0])\nzi = np.array([[0., 1.0, 2.0],\n [0., 1.0, 2.0],\n [-0.1, 1.0, 2.0]])\n\nv = np.linspace(-.1, 2.0, 15, endpoint=True)\nplt.contour(xi, yi, zi, v, linewidths=0.5, colors='k')\nplt.contourf(xi, yi, zi, v, cmap=plt.cm.jet)\nx = plt.colorbar(ticks=v)\nprint x\nplt.show()\n```","best_answers_score":0.7618,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5826592\/python-matplotlib-colorbar-range-and-display-values","best_answers_votes":46,"tags":["python","matplotlib","contour","colorbar"],"question_length":1091,"response_length":475,"tag_count":4} {"question":"Difference between plt.close() and plt.clf() In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way? I am running a loop where at the end of each iteration I am producing a figure and saving the plot. On the first couple of tries the plot was retaining the old figures in every subsequent plot. I'm looking for individual plots for each iteration without the old figures; does it matter which one I use? The calculation I'm running takes a very long time and it would be very time consuming to test it out.","response":"plt.close() will close the figure window entirely, whereas plt.clf() will just clear the figure - you can still paint another plot onto it.
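A minimal sketch of the difference (my illustration, assuming an interactive backend): \n```\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nplt.plot([1, 2, 3])\nplt.clf() # the window stays open, but the figure is now blank and reusable\nplt.plot([3, 2, 1]) # draws onto the same, cleared figure\nplt.close(fig) # now the window itself is gone\n```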
It sounds like, for your needs, you should be preferring plt.clf(), or better yet keep a handle on the line objects themselves (they are returned in lists by plot calls) and use .set_data on those in subsequent iterations.","best_answers_score":0.7618,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16661790\/difference-between-plt-close-and-plt-clf","best_answers_votes":45,"tags":["python","matplotlib"],"question_length":557,"response_length":360,"tag_count":2} {"question":"Cannot get minor grid lines to appear in matplotlib figure Ok so I have the code below for real-time graphing some data from an embedded device received over serial. It's not meant to be a production tool but rather an internal eng tool, hence it's not terribly user-friendly. The problem is that no matter what I do, I can't get minor grid lines to appear, even though here they are set to True, which=both. I can do anything I want to the major grid lines but the minors won't appear. Any ideas? Here is the code: \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom matplotlib import animation\nimport serial\n\nSERIAL_PORT_NUM=9\n\n...a bunch of constants...\n#windows starts serial port numbers at 1, python starts at 0\nSERIAL_PORT_NUM = SERIAL_PORT_NUM - 1\n\"\"\"\nOpen the serial port\n\"\"\"\nser =serial.Serial(port=SERIAL_PORT_NUM,baudrate=115200,bytesize=8,parity='N',stopbits=1,timeout=None,xonxoff=0,rtscts=0)\n\n# First set up the figure, the axis, and the plot element we want to animate\nraw_adc_fig = plt.figure()\nraw_adc_ax = plt.axes(xlim=(0, 200), ylim=(0, 2047))\nraw_adc_ax.grid(True, which='both')\nraw_adc_fig.suptitle(\"Raw ADC data\")\nplt.ylabel(\"ADC values (hex)\")\nplt.xlabel(\"time (sec)\")\nraw_adc_line, = raw_adc_ax.plot([], [], lw=2)\n\ndef read_serial(serial_port):\n tmp = ''\n same_line = True\n while same_line:\n tmp += serial_port.read(1)\n if tmp != '':\n if tmp[-1] == '*':\n same_line = False\n tmp = tmp.rstrip()\n tmp = tmp.lstrip()\n return tmp\n\ndef process_serial(input_data):\n output_data = 0\n intermediate_data = input_data[A_TYPE_START_POS:A_TYPE_STOP_POS + 1]\n if( intermediate_data != ''):\n output_data = int(intermediate_data , 16 )\n else:\n print \"bad data\"\n output_data = -100\n\n return output_data\n\ndef get_sound_value(serial_port):\n cur_line = ''\n\n get_next_line = True\n # read in the next line until a sound packet of type A is found\n while( get_next_line ):\n cur_line = read_serial(serial_port)\n if( (cur_line != '') and (cur_line[0:3] == ROUTER_SOUND_DATA) and (len(cur_line) == D_TYPE_STOP_POS + 2) ):\n get_next_line = False\n\n sound_value = process_serial(cur_line)\n return sound_value\n\n# initialization function: plot the background of each frame\ndef raw_adc_init():\n raw_adc_line.set_data([], [])\n return raw_adc_line,\n\n# animation function. This is called sequentially\ndef raw_adc_animate(i):\n sound_data_list.append( get_sound_value(ser) )\n y = sound_data_list\n if( len(y) == 190 ):\n del y[0]\n x = np.linspace(0, len(y), len(y))\n raw_adc_line.set_data(x, y)\n return raw_adc_line,\n\n# call the animator. blit=True means only re-draw the parts that have changed.\nraw_adc_anim = animation.FuncAnimation(raw_adc_fig, raw_adc_animate, init_func=raw_adc_init, frames=200, interval=1000, blit=True)\n```\nEdit: fixed a bug in opening the serial port. Changed timeout=0 to timeout=None.","response":"Unfortunately, ax.grid is a bit confusing in this regard. (This is a design bug \/ common gotcha.) 
It turns the minor grid on, but the minor ticks are still turned off. What you need to do is call plt.minorticks_on or ax.minorticks_on in addition to calling ax.grid(True, which='both').","best_answers_score":0.7609,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19940518\/cannot-get-minor-grid-lines-to-appear-in-matplotlib-figure","best_answers_votes":66,"tags":["python","numpy","matplotlib","pyserial","graphing"],"question_length":2981,"response_length":285,"tag_count":5} {"question":"Pandas DataFrame step plot: where=\"post\" I am wondering how I can pass matplotlib's where=\"post\" into a pandas plot. \n```\nimport numpy as np\nimport pandas as pd\n\ndf = pd.DataFrame(np.random.randn(36, 3))\ndf.plot(drawstyle=\"steps\", linewidth=2)\n\n# this doesn't work\ndf.plot(drawstyle=\"steps\", where='post')\n```\nDoes anyone know how to realize this? Thanks in advance!","response":"You just need to specify drawstyle=\"steps-post\": \n```\ndf = pd.DataFrame(np.random.randn(36, 3))\ndf.plot(drawstyle=\"steps\", linewidth=2)\ndf.plot(drawstyle=\"steps-post\", linewidth=2)\n```\nCompare the result:","best_answers_score":0.7595,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35067304\/pandas-dataframe-step-plot-where-post","best_answers_votes":65,"tags":["python","pandas","matplotlib"],"question_length":365,"response_length":204,"tag_count":3} {"question":"Increase distance between title and plot in matplotlib? [duplicate] This question already has answers here: Python Matplotlib figure title overlaps axes label when using twiny (9 answers) Closed 6 years ago. I have a simple plot in matplotlib and I would like to increase the distance between the title and the plot (without using suptitle because it does not work on the version I use on a server). How to do that?","response":"With matplotlib 2.2+, you can use the keyword argument pad: \n```\nax.set_title('Title', pad=20)\n```\nAdjust pad until you're happy with the axis title position. The advantage of this method over using rcParams is that it only changes this one axis title.","best_answers_score":0.7589,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16419670\/increase-distance-between-title-and-plot-in-matplolib","best_answers_votes":165,"tags":["python","matplotlib","plot","title","margins"],"question_length":415,"response_length":252,"tag_count":5} {"question":"Plotting a horizontal line on multiple subplots in python using pyplot I am plotting three subplots on the same page. I want to draw a horizontal line through all the subplots.
Following is my code and the resultant graph. (You can notice I can get the horizontal line on one of the plots, but not all.) \n```\ngs1 = gridspec.GridSpec(8, 2)\ngs1.update(left=0.12, right=.94, wspace=0.12)\nax1 = plt.subplot(gs1[0:2, :])\nax2 = plt.subplot(gs1[3:5, :], sharey=ax1)\nax3 = plt.subplot(gs1[6:8, :], sharey=ax1)\n\nax1.scatter(theta_cord, density, c = 'r', marker= '1')\nax2.scatter(phi_cord, density, c = 'r', marker= '1')\nax3.scatter(r_cord, density, c = 'r', marker= '1')\nax1.set_xlabel('Theta (radians)')\nax1.set_ylabel('Galaxy count')\nax2.set_xlabel('Phi (radians)')\nax2.set_ylabel('Galaxy count')\nax3.set_xlabel('Distance (Mpc)')\nax3.set_ylabel('Galaxy count')\nplt.ylim((0,0.004))\nloc = plticker.MultipleLocator(base=0.001)\nax1.yaxis.set_major_locator(loc)\n\nplt.axhline(y=0.002, xmin=0, xmax=1, hold=None)\n\nplt.show()\n```\nThis generates the following: Again, I want the line I drew on the last subplot to appear on the first two subplots too. How do I do that?","response":"I found a way to do it for anyone who stumbles on this anyway. We need to replace the following line from the OP: \n```\nplt.axhline(y=0.002, xmin=0, xmax=1, hold=None)\n```\nWe replace it with: \n```\nax1.axhline(y=0.002,xmin=0,xmax=3,c=\"blue\",linewidth=0.5,zorder=0)\nax2.axhline(y=0.002,xmin=0,xmax=3,c=\"blue\",linewidth=0.5,zorder=0)\nax3.axhline(y=0.002,xmin=0,xmax=3,c=\"blue\",linewidth=0.5,zorder=0)\n```\nThis produces:","best_answers_score":0.7589,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21129007\/plotting-a-horizontal-line-on-multiple-subplots-in-python-using-pyplot","best_answers_votes":48,"tags":["python","matplotlib","scatter-plot","subplot","line-plot"],"question_length":1152,"response_length":416,"tag_count":5} {"question":"How to have logarithmic bins in a Python histogram As far as I know, the option log=True in the histogram function only refers to the y-axis. \n```\nP.hist(d,bins=50,log=True,alpha=0.5,color='b',histtype='step')\n```\nI need the bins to be equally spaced in log10. Is there something that can do this?","response":"Use np.logspace() to create a geometric sequence and pass it to the bins parameter, then set the scale of the x-axis to a log scale. \n```\nimport pylab as pl\nimport numpy as np\n\ndata = np.random.normal(size=10000)\npl.hist(data, bins=np.logspace(np.log10(0.1),np.log10(1.0), 50))\npl.gca().set_xscale(\"log\")\npl.show()\n```","best_answers_score":0.7579,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6855710\/how-to-have-logarithmic-bins-in-a-python-histogram","best_answers_votes":160,"tags":["python","numpy","matplotlib","histogram"],"question_length":296,"response_length":304,"tag_count":4} {"question":"How to avoid overlapping of labels & autopct in a pie chart My Python code is: \n```\nimport matplotlib.pyplot as plt\n\nvalues = [234, 64, 54, 10, 0, 1, 0, 9, 2, 1, 7, 7]\nmonths = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',\n          'Jul','Aug','Sep','Oct', 'Nov','Dec']\n\ncolors = ['yellowgreen', 'red', 'gold', 'lightskyblue',\n          'white','lightcoral','blue','pink', 'darkgreen',\n          'yellow','grey','violet','magenta','cyan']\n\nplt.pie(values, labels=months, autopct='%1.1f%%', shadow=True,\n        colors=colors, startangle=90, radius=1.2)\n\nplt.show()\n```\nIs it possible to show the labels \"Jan\", \"Feb\", \"Mar\", etc.
and the percentages, either: without overlapping, or using an arrow mark?","response":"Alternatively you can put the legends beside the pie graph: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.char.array(['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct', 'Nov','Dec'])\ny = np.array([234, 64, 54, 10, 0, 1, 0, 9, 2, 1, 7, 7])\ncolors = ['yellowgreen','red','gold','lightskyblue','white','lightcoral','blue','pink', 'darkgreen','yellow','grey','violet','magenta','cyan']\nporcent = 100.*y\/y.sum()\n\npatches, texts = plt.pie(y, colors=colors, startangle=90, radius=1.2)\nlabels = ['{0} - {1:1.2f} %'.format(i,j) for i,j in zip(x, porcent)]\n\nsort_legend = True\nif sort_legend:\n    patches, labels, dummy = zip(*sorted(zip(patches, labels, y),\n                                          key=lambda x: x[2],\n                                          reverse=True))\n\nplt.legend(patches, labels, loc='center left', bbox_to_anchor=(-0.1, 1.),\n           fontsize=8)\n\nplt.savefig('piechart.png', bbox_inches='tight')\n```\nEDIT: if you want to keep the legend in the original order, as you mentioned in the comments, you can set sort_legend=False in the code above, giving:","best_answers_score":0.7579,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23577505\/how-to-avoid-overlapping-of-labels-autopct-in-a-pie-chart","best_answers_votes":82,"tags":["python","numpy","matplotlib","pie-chart","plot-annotations"],"question_length":1091,"response_length":1098,"tag_count":5} {"question":"How to hide ticks label in python but keep the ticks in place? I want to hide my tick labels on a plot I created, but keep the ticks themselves (the little marks on the axis). When I try to use what I've found here, for example, the entire tick is removed, and not just the labels. How can I remove only the labels then?","response":"Here is a slightly simpler answer, using ax.tick_params \n```\nimport matplotlib.pylab as plt\n\nfig, ax = plt.subplots()\nplt.plot([1,2,3],[4,5,6])\n\nax.tick_params(labelbottom=False)\n\nplt.show()\n```\nHere is the resulting output in Matplotlib 3. As commented by @chris, one can similarly hide the labels on any of the other axes using labeltop, labelleft, or labelright in the above example, instead of labelbottom.","best_answers_score":0.7566,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20936658\/how-to-hide-ticks-label-in-python-but-keep-the-ticks-in-place","best_answers_votes":137,"tags":["python","matplotlib","axis","label"],"question_length":316,"response_length":411,"tag_count":4} {"question":"Annotate Time Series plot I have an index array (x) of dates (datetime objects) and an array of actual values (y: bond prices). Doing the following: \n```py\nplot(x,y)\n```\nproduces a perfectly fine time series graph with the x-axis labeled with the dates. No problem so far. But I want to add text on certain dates. For example, on 2009-10-31, I wish to display the text \"Event 1\" with an arrow pointing to the y value at that date. I have read through the Matplotlib documentation on text() and annotate() to no avail.","response":"Matplotlib uses an internal floating point format for dates. You just need to convert your date to that format (using matplotlib.dates.date2num or matplotlib.dates.datestr2num) and then use annotate as usual.
As a somewhat excessively fancy example: \n```\nimport datetime as dt\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\nx = [dt.datetime(2009, 5, 1), dt.datetime(2010, 6, 1),\n     dt.datetime(2011, 4, 1), dt.datetime(2012, 6, 1)]\ny = [1, 3, 2, 5]\n\nfig, ax = plt.subplots()\nax.plot_date(x, y, linestyle='--')\n\nax.annotate('Test', (mdates.date2num(x[1]), y[1]), xytext=(15, 15),\n            textcoords='offset points', arrowprops=dict(arrowstyle='-|>'))\n\nfig.autofmt_xdate()\nplt.show()\n```","best_answers_score":0.7563,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11067368\/annotate-time-series-plot","best_answers_votes":89,"tags":["python","numpy","matplotlib","time-series","annotations"],"question_length":517,"response_length":721,"tag_count":5} {"question":"Drawing lines between two plots in Matplotlib I am drawing two subplots with Matplotlib, essentially following: \n```\nsubplot(211); imshow(a); scatter(..., ...)\nsubplot(212); imshow(b); scatter(..., ...)\n```\nCan I draw lines between those two subplots? How would I do that?","response":"The solutions from the other answers are suboptimal in many cases (as they would only work if no changes are made to the plot after calculating the points). A better solution would use the specially designed ConnectionPatch: \n```\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import ConnectionPatch\nimport numpy as np\n\nfig = plt.figure(figsize=(10,5))\nax1 = fig.add_subplot(121)\nax2 = fig.add_subplot(122)\n\nx,y = np.random.rand(100),np.random.rand(100)\n\nax1.plot(x,y,'ko')\nax2.plot(x,y,'ko')\n\ni = 10\nxy = (x[i],y[i])\ncon = ConnectionPatch(xyA=xy, xyB=xy, coordsA=\"data\", coordsB=\"data\",\n                      axesA=ax2, axesB=ax1, color=\"red\")\nax2.add_artist(con)\n\nax1.plot(x[i],y[i],'ro',markersize=10)\nax2.plot(x[i],y[i],'ro',markersize=10)\n\nplt.show()\n```","best_answers_score":0.7558,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17543359\/drawing-lines-between-two-plots-in-matplotlib","best_answers_votes":66,"tags":["python","matplotlib"],"question_length":273,"response_length":768,"tag_count":2} {"question":"set matplotlib 3d plot aspect ratio \n```\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n```\nSetting the aspect ratio works for 2d plots: \n```\nax = plt.axes()\nax.plot([0,1], [0,10])\nax.set_aspect('equal', 'box')\n```\nBut it does not work for 3d: \n```\nax = plt.axes(projection='3d')\nax.plot([0,1], [0,1], [0,10])\nax.set_aspect('equal', 'box')\n```\nHow do I set the aspect ratio for 3d?","response":"As of matplotlib 3.3.0, Axes3D.set_box_aspect seems to be the recommended approach. \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nxs, ys, zs = ...\nax = plt.axes(projection='3d')\n\nax.set_box_aspect((np.ptp(xs), np.ptp(ys), np.ptp(zs)))  # aspect ratio is 1:1:1 in data space\n\nax.plot(xs, ys, zs)\n```","best_answers_score":0.7551,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8130823\/set-matplotlib-3d-plot-aspect-ratio","best_answers_votes":65,"tags":["python","matplotlib","matplotlib-3d"],"question_length":406,"response_length":311,"tag_count":3} {"question":"How to set x axis ticklabels in a seaborn plot [duplicate] This question already has answers here: Modify tick label text (13 answers) Closed 2 years ago. I am unable to set x axis ticklabels for a seaborn lineplot correctly.
\n```\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame({'a':np.random.rand(8),'b':np.random.rand(8)})\nsns.set(style=\"darkgrid\")\ng = sns.lineplot(data=df)\ng.set_xticklabels(['2011','2012','2013','2014','2015','2016','2017','2018'])\n```\nThe years on the x axis are not aligning properly.","response":"Whenever you set the x-ticklabels manually, you should try to first set the corresponding ticks, and then specify the labels. In your case, therefore, you should do: \n```\ng = sns.lineplot(data=df)\ng.set_xticks(range(len(df)))  # <--- set the ticks first\ng.set_xticklabels(['2011','2012','2013','2014','2015','2016','2017','2018'])\n```","best_answers_score":0.7551,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/56605113\/how-to-set-x-axis-ticklabels-in-a-seaborn-plot","best_answers_votes":46,"tags":["python","matplotlib","seaborn"],"question_length":573,"response_length":331,"tag_count":3} {"question":"How to extract data from matplotlib plot I have a wxPython program which reads from different datasets, performs various types of simple on-the-fly analysis on the data and plots various combinations of the datasets to matplotlib canvas. I would like to have the opportunity to dump currently plotted data to file for more sophisticated analysis later on. The question is: are there any methods in matplotlib that allow access to the data currently plotted in matplotlib.Figure?","response":"Jakub is right about modifying the Python script to write out the data directly from the source from which it was sent into the plot; that's the way I'd prefer to do this. But for reference, if you do need to get data out of a plot, I think this should do it: \n```\ngca().get_lines()[n].get_xydata()\n```\nAlternatively you can get the x and y data sets separately: \n```\nline = gca().get_lines()[n]\nxd = line.get_xdata()\nyd = line.get_ydata()\n```","best_answers_score":0.7546,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8938449\/how-to-extract-data-from-matplotlib-plot","best_answers_votes":43,"tags":["python","matplotlib"],"question_length":478,"response_length":442,"tag_count":2} {"question":"Using a Pandas dataframe index as values for x-axis in matplotlib plot I have time series in a Pandas dataframe with a number of columns which I'd like to plot. Is there a way to set the x-axis to always use the index from a dataframe? When I use the .plot() method from Pandas the x-axis is formatted correctly; however, when I pass my dates and the column(s) I'd like to plot directly to matplotlib, the graph doesn't plot correctly. Thanks in advance.
\n```\nplt.plot(site2.index.values, site2['Cl'])\nplt.show()\n```\nFYI: site2.index.values produces this (I've cut out the middle part for brevity): \n```\narray([\n    '1987-07-25T12:30:00.000000000+0200',\n    '1987-07-25T16:30:00.000000000+0200',\n    '2010-08-13T02:00:00.000000000+0200',\n    '2010-08-31T02:00:00.000000000+0200',\n    '2010-09-15T02:00:00.000000000+0200'\n], \ndtype='datetime64[ns]')\n```","response":"You can use plt.xticks to set the x-axis; try: \n```\nplt.xticks( site2['Cl'], site2.index.values )  # location, labels\nplt.plot( site2['Cl'] )\nplt.show()\n```\nSee the documentation for more details: http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.xticks","best_answers_score":0.7538,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22356881\/using-a-pandas-dataframe-index-as-values-for-x-axis-in-matplotlib-plot","best_answers_votes":12,"tags":["python","matplotlib","plot","pandas"],"question_length":851,"response_length":261,"tag_count":4} {"question":"Plot Pandas DataFrame as Bar and Line on the same chart I am trying to plot a chart with the 1st and 2nd columns of data as bars and then a line overlay for the 3rd column of data. I have tried the following code, but it creates 2 separate charts; I would like this all on one chart. \n```py\nleft_2013 = pd.DataFrame({'month': ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec'],\n                          '2013_val': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 9, 6]})\n\nright_2014 = pd.DataFrame({'month': ['jan', 'feb'], '2014_val': [4, 5]})\n\nright_2014_target = pd.DataFrame({'month': ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', 'dec'],\n                                  '2014_target_val': [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]})\n\n\ndf_13_14 = pd.merge(left_2013, right_2014, how='outer')\ndf_13_14_target = pd.merge(df_13_14, right_2014_target, how='outer')\ndf_13_14_target[['month','2013_val','2014_val','2014_target_val']].head(12)\n\nplt.figure()\ndf_13_14_target[['month','2014_target_val']].plot(x='month',linestyle='-', marker='o')\ndf_13_14_target[['month','2013_val','2014_val']].plot(x='month', kind='bar')\n```\nThis is what I currently get","response":"The DataFrame plotting methods return a matplotlib AxesSubplot or list of AxesSubplots. (See the docs for plot, or boxplot, for instance.)
You can then pass that same Axes to the next plotting method (using ax=ax) to draw on the same axes: \n```\nax = df_13_14_target[['month','2014_target_val']].plot(x='month',linestyle='-', marker='o')\ndf_13_14_target[['month','2013_val','2014_val']].plot(x='month', kind='bar',\n                                                     ax=ax)\n```\nA complete, runnable example: \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nleft_2013 = pd.DataFrame(\n    {'month': ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep',\n               'oct', 'nov', 'dec'],\n     '2013_val': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 9, 6]})\n\nright_2014 = pd.DataFrame({'month': ['jan', 'feb'], '2014_val': [4, 5]})\n\nright_2014_target = pd.DataFrame(\n    {'month': ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep',\n               'oct', 'nov', 'dec'],\n     '2014_target_val': [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]})\n\ndf_13_14 = pd.merge(left_2013, right_2014, how='outer')\ndf_13_14_target = pd.merge(df_13_14, right_2014_target, how='outer')\n\nax = df_13_14_target[['month', '2014_target_val']].plot(\n    x='month', linestyle='-', marker='o')\ndf_13_14_target[['month', '2013_val', '2014_val']].plot(x='month', kind='bar',\n                                                        ax=ax)\n\nplt.show()\n```","best_answers_score":0.7518,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23482201\/plot-pandas-dataframe-as-bar-and-line-on-the-same-one-chart","best_answers_votes":53,"tags":["python","pandas","dataframe","matplotlib","plot"],"question_length":1207,"response_length":1358,"tag_count":5} {"question":"Adding an axes using the same arguments as a previous axes I want to plot data in two different subplots. After plotting, I want to go back to the first subplot and plot an additional dataset in it. However, when I do so I get this warning: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance. warnings.warn(message, mplDeprecation, stacklevel=1) I can reproduce that with a simple piece of code: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Generate random data\ndata = np.random.rand(100)\n\n# Plot in different subplots\nplt.figure()\nplt.subplot(1, 2, 1)\nplt.plot(data)\n\nplt.subplot(1, 2, 2)\nplt.plot(data)\n\nplt.subplot(1, 2, 1) # Warning occurs here\nplt.plot(data + 1)\n```\nAny ideas on how to avoid this warning? I use matplotlib 2.1.0. Looks like the same problem as here","response":"This is a good example that shows the benefit of using matplotlib's object oriented API. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Generate random data\ndata = np.random.rand(100)\n\n# Plot in different subplots\nfig, (ax1, ax2) = plt.subplots(1, 2)\nax1.plot(data)\n\nax2.plot(data)\n\nax1.plot(data+1)\n\nplt.show()\n```\nNote: it is more pythonic to have variable names start with a lower case letter, e.g. data = ... rather than Data = ...; see PEP8","best_answers_score":0.751,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/46933824\/adding-an-axes-using-the-same-arguments-as-a-previous-axes","best_answers_votes":38,"tags":["python","matplotlib"],"question_length":1060,"response_length":455,"tag_count":2} {"question":"Extract matplotlib colormap in hex-format I am trying to extract discrete colors from a matplotlib colormap by manipulating this example.
However, I cannot find the N discrete colors that are extracted from the colormap. In the code below I've used cmap._segmentdata, but I've found that it is the definition of the entire colormap. Given a colormap and an integer N, how do I extract N discrete colors from the colormap and export them in hex-format? \n```\nfrom pylab import *\n\ndelta = 0.01\nx = arange(-3.0, 3.0, delta)\ny = arange(-3.0, 3.0, delta)\nX,Y = meshgrid(x, y)\nZ1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)\nZ2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1)\nZ = Z2 - Z1  # difference of Gaussians\n\ncmap = cm.get_cmap('seismic', 5)  # PiYG\ncmap_colors = cmap._segmentdata\n\ndef print_hex(r,b,g):\n    if not(0 <= r <= 255 and 0 <= b <= 255 and 0 <= g <= 255):\n        raise ValueError('rgb not in range(256)')\n    print('#%02x%02x%02x' % (r, b, g))\n\n\nfor i in range(len(cmap_colors['blue'])):\n    r = int(cmap_colors['red'][i][2]*255)\n    b = int(cmap_colors['blue'][i][2]*255)\n    g = int(cmap_colors['green'][i][2]*255)\n    print_hex(r, g, b)\n\n\nim = imshow(Z, cmap=cmap, interpolation='bilinear',\n            vmax=abs(Z).max(), vmin=-abs(Z).max())\naxis('off')\ncolorbar()\n\nshow()\n```","response":"You can get a tuple of rgba values for the segment with index i by calling cmap(i). There is also already a function that turns rgb values into hex. As Joe Kington wrote in the comments, you can use matplotlib.colors.rgb2hex. Therefore, a possible solution would be: \n```\nfrom pylab import *\n\ncmap = cm.get_cmap('seismic', 5)  # PiYG\n\nfor i in range(cmap.N):\n    rgba = cmap(i)\n    # rgb2hex accepts rgb or rgba\n    print(matplotlib.colors.rgb2hex(rgba))\n```\nThe output is: \n```none\n#00004c\n#0000ff\n#ffffff\n#ff0000\n#7f0000\n```","best_answers_score":0.7507,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33596491\/extract-matplotlib-colormap-in-hex-format","best_answers_votes":121,"tags":["python","matplotlib","hex","color-mapping","colormap"],"question_length":1372,"response_length":528,"tag_count":5} {"question":"How to add a title to each subplot I have one figure which contains many subplots. \n```\nfig = plt.figure(num=None, figsize=(26, 12), dpi=80, facecolor='w', edgecolor='k')\nfig.canvas.set_window_title('Window Title')\n\n# Returns the Axes instance\nax = fig.add_subplot(311)\nax2 = fig.add_subplot(312)\nax3 = fig.add_subplot(313)\n```\nHow do I add titles to the subplots? fig.suptitle adds a title to all graphs and although ax.set_title() exists, the latter does not add any title to my subplots. Thank you for your help. Edit: Corrected typo about set_title(). Thanks Rutger Kassies","response":"ax.title.set_text('My Plot Title') seems to work too.
\n```\nfig = plt.figure()\nax1 = fig.add_subplot(221)\nax2 = fig.add_subplot(222)\nax3 = fig.add_subplot(223)\nax4 = fig.add_subplot(224)\nax1.title.set_text('First Plot')\nax2.title.set_text('Second Plot')\nax3.title.set_text('Third Plot')\nax4.title.set_text('Fourth Plot')\nplt.show()\n```","best_answers_score":0.7506,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25239933\/how-to-add-a-title-to-each-subplot","best_answers_votes":536,"tags":["python","matplotlib","subplot","subtitle"],"question_length":579,"response_length":334,"tag_count":4} {"question":"How to set the labels size on a pie chart I want to have labels with a small size on a pie chart in python to improve visibility. Here is the code: \n```\nimport matplotlib.pyplot as plt\n\nfrac=[1.40 , 10.86 , 19.31 , 4.02 , 1.43 , 2.66 , 4.70 , 0.70 , 0.13 , 1.48, 32.96 , 1.11 , 13.30 , 5.86]\nlabels=['HO0900344', 'HO0900331', 'HO0900332', 'HO0900354',\n        'HO0900358', 'HO0900374', 'HO0900372', 'HO0900373',\n        'HO0900371', 'HO0900370', 'HO0900369', 'HO0900356',\n        'HO0900353', 'HO0900343']\n\nfig = plt.figure(1, figsize=(6,6))\nax = fig.add_subplot(111)\nax.axis('equal')\ncolors=('b', 'g', 'r', 'c', 'm', 'y', 'burlywood', 'w')\nax.pie(frac, colors=colors, labels=labels, autopct='%1.1f%%')\nplt.show()\n```","response":"The simplest way to change the font size on a pie chart is directly via the textprops argument in the pie() function. Using the code above, add it like so: \n```\nax.pie(frac, colors=colors, labels=labels,\n       autopct='%1.1f%%', textprops={'fontsize': 14})\n```\nThat way you can just pass in a dictionary with your desired fontsize (e.g., 14). No messing around with rcParams or return values from the function call.","best_answers_score":0.7503,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7082345\/how-to-set-the-labels-size-on-a-pie-chart","best_answers_votes":166,"tags":["python","matplotlib","label","pie-chart"],"question_length":690,"response_length":416,"tag_count":4} {"question":"Change y range to start from 0 with matplotlib I am using matplotlib to plot data. Here's a code that does something similar: \n```\nimport matplotlib.pyplot as plt\nf, ax = plt.subplots(1)\nxdata = [1, 4, 8]\nydata = [10, 20, 30]\nax.plot(xdata, ydata)\nplt.show(f)\n```\nThis shows a line in a graph with the y axis that goes from 10 to 30. While I am satisfied with the x range, I would like to change the y range to start from 0 and let ymax adjust to show everything. My current solution is to do: \n```\nax.set_ylim(0, max(ydata))\n```\nHowever, I am wondering if there is a way to just say: autoscale, but start from 0.","response":"The range must be set after the plot. \n```\nimport matplotlib.pyplot as plt\nf, ax = plt.subplots(1)\nxdata = [1, 4, 8]\nydata = [10, 20, 30]\nax.plot(xdata, ydata)\nax.set_ylim(ymin=0)\nplt.show(f)\n```\nIf ymin is changed before plotting, this will result in a range of [0, 1].
Edit: the ymin argument has been replaced by bottom: \n```\nax.set_ylim(bottom=0)\n```\nDocumentation: https:\/\/matplotlib.org\/stable\/api\/_as_gen\/matplotlib.axes.Axes.set_ylim.html You can do the same on the x axis with left and right: \n```\nax.set_xlim(left=0)\n```\nDocumentation: https:\/\/matplotlib.org\/stable\/api\/_as_gen\/matplotlib.axes.Axes.set_xlim.html","best_answers_score":0.7502,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22642511\/change-y-range-to-start-from-0-with-matplotlib","best_answers_votes":164,"tags":["python","matplotlib"],"question_length":615,"response_length":622,"tag_count":2} {"question":"Get default line color cycle I noticed when you plot that the first line is blue, then orange, then green, and so on. Is there some way to access this list of colors? I've seen a million posts on how to change the color cycle or access the iterator, but not on how to just get the list of colors that matplotlib cycles through by default.","response":"Often, there is no need to get the default color cycle from anywhere, as it is the default one, so just using it is sufficient. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nt = np.arange(5)\n\nfor i in range(4):\n    line, = ax.plot(t,i*(t+1), linestyle = '-')\n    ax.plot(t,i*(t+1)+.3,color = line.get_color(), linestyle = ':')\n\nplt.show()\n```\nIn case you want to use the default color cycle for something different, there are of course several options. The \"tab10\" colormap: First, it should be mentioned that the \"tab10\" colormap comprises the colors from the default color cycle; you can get it via cmap = plt.get_cmap(\"tab10\"). Equivalent to the above would hence be \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nt = np.arange(5)\ncmap = plt.get_cmap(\"tab10\")\nfor i in range(4):\n    ax.plot(t,i*(t+1), color=cmap(i), linestyle = '-')\n    ax.plot(t,i*(t+1)+.3,color=cmap(i), linestyle = ':')\n\nplt.show()\n```\nColors from the color cycle: You can also use the color cycler directly, cycle = plt.rcParams['axes.prop_cycle'].by_key()['color']. This gives a list with the colors from the cycle, which you can use to iterate over. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nt = np.arange(5)\ncycle = plt.rcParams['axes.prop_cycle'].by_key()['color']\n\nfor i in range(4):\n    ax.plot(t,i*(t+1), color=cycle[i], linestyle = '-')\n    ax.plot(t,i*(t+1)+.3,color=cycle[i], linestyle = ':')\n\nplt.show()\n```\nThe CN notation: Finally, the CN notation allows you to get the Nth color of the color cycle, color=\"C{}\".format(i). This however only works for the first 10 colors (N in [0,1,...9]) \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nt = np.arange(5)\n\nfor i in range(4):\n    ax.plot(t,i*(t+1), color=\"C{}\".format(i), linestyle = '-')\n    ax.plot(t,i*(t+1)+.3,color=\"C{}\".format(i), linestyle = ':')\n\nplt.show()\n```\nAll of the code snippets presented here produce the same plot.","best_answers_score":0.75,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42086276\/get-default-line-color-cycle","best_answers_votes":198,"tags":["python","matplotlib"],"question_length":338,"response_length":2071,"tag_count":2} {"question":"How to change the text color of font in legend? Is there a way to change the font color of the legend in a matplotlib plot?
Especially in cases where the background of the plot is dark, the default black text in the legend is hard or impossible to read.","response":"As of matplotlib version 3.3.0, you can now directly use the keyword argument labelcolor in matplotlib.pyplot.legend(). Example using the same color as the corresponding artist by setting labelcolor='linecolor': \n```py\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nplt.figure(figsize=(4, 3))\nplt.plot(np.arange(10), np.random.rand(10) * 0, '-', label='spam')\nplt.plot(np.arange(10), np.random.rand(10) * 1, ':', label='ham')\nplt.plot(np.arange(10), np.random.rand(10) * 2, 'o', label='eggs')\nplt.legend(labelcolor='linecolor')\n```\nExample changing all text to white by setting labelcolor='w', e.g. for dark backgrounds: \n```py\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nplt.figure(figsize=(4, 3))\nplt.plot(np.arange(10), np.random.rand(10) * 0, '-', label='spam')\nplt.plot(np.arange(10), np.random.rand(10) * 1, ':', label='ham')\nplt.plot(np.arange(10), np.random.rand(10) * 2, 'o', label='eggs')\nplt.legend(facecolor='k', labelcolor='w')\n```","best_answers_score":0.75,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18909696\/how-to-change-the-text-color-of-font-in-legend","best_answers_votes":44,"tags":["python","colors","fonts","matplotlib","legend"],"question_length":256,"response_length":957,"tag_count":5} {"question":"How do I increase the line thickness for sns.lineplot? I have a few seaborn lineplots and I can't figure out how to increase the width of my lines. Here is my code: \n```\n#graph 1\nsns.lineplot(x=\"date\", y=\"nps\", data=df_nps, ax=ax1, label=\"NPS\", color='#0550D0')\nsns.lineplot(x=\"date\", y=\"ema28\", data=df_nps, ax=ax1, label=\"EMA28\", color='#7DF8F3')\nsns.lineplot(x=\"date\", y=\"ema7\", data=df_nps, ax=ax1, label=\"EMA7\", color='orange')\n\n#graph 2\ndfz_nps_lineplot = sns.lineplot(x=\"date\", y=\"nps\", data=dfz_nps, ax=ax2, label=\"NPS\", color='#0550D0')\ndfz_nps_lineplot = sns.lineplot(x=\"date\", y=\"ema28\", data=dfz_nps, ax=ax2, label=\"EMA28\", color='#7DF8F3')\ndfz_nps_lineplot = sns.lineplot(x=\"date\", y=\"ema7\", data=dfz_nps, ax=ax2, label=\"EMA7\", color='orange')\n\n#graph3\ndfp_nps_lineplot = sns.lineplot(x=\"date\", y=\"nps\", data=dfp_nps, ax=ax3, label=\"NPS\", color='#0550D0')\ndfp_nps_lineplot = sns.lineplot(x=\"date\", y=\"ema28\", data=dfp_nps, ax=ax3, label=\"EMA28\", color='#7DF8F3')\ndfp_nps_lineplot = sns.lineplot(x=\"date\", y=\"ema7\", data=dfp_nps, ax=ax3, label=\"EMA7\", color='orange')\n\n# formatting\n\nplt.show()\n```","response":"As you can see from the seaborn.lineplot documentation, the function accepts matplotlib.axes.Axes.plot() arguments, which means you can pass the same arguments that you can pass to matplotlib's plot function (see its documentation). If you want to simply adjust the width of your lineplots, I find this the easiest: pass the argument linewidth = your_desired_line_width_in_float, for example linewidth = 1.5, in your sns.lineplot() calls. You can find additional possible arguments in the documentation linked.
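For instance, a minimal sketch on made-up data (this toy data frame is an assumption for illustration, not the df_nps from the question): \n```\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame({'x': np.arange(20), 'y': np.random.rand(20)})\nsns.lineplot(x='x', y='y', data=df, linewidth=3)  # linewidth is passed through to matplotlib's plot()\nplt.show()\n```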
Example output on random data: seaborn.lineplot() without the linewidth argument provided, and seaborn.lineplot() with linewidth = 3.","best_answers_score":0.7499,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/62667158\/how-do-i-increase-the-line-thickness-for-sns-lineplot","best_answers_votes":50,"tags":["python","matplotlib","seaborn","line-plot"],"question_length":1108,"response_length":615,"tag_count":4} {"question":"How to plot multiple bars grouped How do I plot multiple bars in matplotlib? When I tried to call the bar function multiple times, they overlap, and as seen in the figure below only the highest value (red) can be seen. How can I plot the multiple bars with dates on the x-axis? So far, I tried this: \n```py\nimport matplotlib.pyplot as plt\nimport datetime\n\nx = [\n    datetime.datetime(2011, 1, 4, 0, 0),\n    datetime.datetime(2011, 1, 5, 0, 0),\n    datetime.datetime(2011, 1, 6, 0, 0)\n]\ny = [4, 9, 2]\nz = [1, 2, 3]\nk = [11, 12, 13]\n\nax = plt.subplot(111)\nax.bar(x, y, width=0.5, color='b', align='center')\nax.bar(x, z, width=0.5, color='g', align='center')\nax.bar(x, k, width=0.5, color='r', align='center')\nax.xaxis_date()\n\nplt.show()\n```\nI got this: The result should be something like the figure below, but with the dates on the x-axis and the bars next to each other:","response":"```\nimport matplotlib.pyplot as plt\nfrom matplotlib.dates import date2num\nimport datetime\n\nx = [\n    datetime.datetime(2011, 1, 4, 0, 0),\n    datetime.datetime(2011, 1, 5, 0, 0),\n    datetime.datetime(2011, 1, 6, 0, 0)\n]\nx = date2num(x)\n\ny = [4, 9, 2]\nz = [1, 2, 3]\nk = [11, 12, 13]\n\nax = plt.subplot(111)\nax.bar(x-0.2, y, width=0.2, color='b', align='center')\nax.bar(x, z, width=0.2, color='g', align='center')\nax.bar(x+0.2, k, width=0.2, color='r', align='center')\nax.xaxis_date()\n\nplt.show()\n```\nI don't know what \"y values are also overlapping\" means; does the following code solve your problem? \n```\nax = plt.subplot(111)\nw = 0.3\nax.bar(x-w, y, width=w, color='b', align='center')\nax.bar(x, z, width=w, color='g', align='center')\nax.bar(x+w, k, width=w, color='r', align='center')\nax.xaxis_date()\nax.autoscale(tight=True)\n\nplt.show()\n```","best_answers_score":0.7492,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14270391\/how-to-plot-multiple-bars-grouped","best_answers_votes":159,"tags":["python","pandas","matplotlib","seaborn","grouped-bar-chart"],"question_length":850,"response_length":848,"tag_count":5} {"question":"Specify color of each point in 3d scatter plot I have a 3D plot that I created using matplotlib, and I have a list of rgb values that correspond to each point. I have the X, Y, and Z data, and then I have a \"color list\" of the form: \n```\n[ (r,g,b), (r,g,b), ... , (r,g,b) ]\n```\nto match each (x, y, z) point.
Right now, I have \n```\nfig = plt.figure()\nax = fig.add_subplot(111, projection = '3d')\n\nax.scatter(X, Y, Z)\nplt.show()\n```\nWhat's the appropriate way to incorporate those rgb tuples so that each point will be assigned a specific color?","response":"If you don't want to use a for loop (which can be very slow for large lists), you can use the scatter command as is with an RGB color list, but you need to specify the colors as a vector of RGB (or RGBA) values between 0 and 1: \n```\nX = [0, 1, 2]\nY = [0, 1, 2]\nZ = [0, 1, 2]\nC = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]])\nfig = plt.figure()\nax = fig.add_subplot(111, projection = '3d')\n\nax.scatter(X, Y, Z, c = C\/255.0)\nplt.show()\n```","best_answers_score":0.7487,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33287156\/specify-color-of-each-point-in-3d-scatter-plot","best_answers_votes":53,"tags":["python","matplotlib","colors","scatter-plot","matplotlib-3d"],"question_length":544,"response_length":439,"tag_count":5} {"question":"Plotting a time series? I have a data set which has dates in the first column, and a \"result\" integer which is either 1 or 0. The date column was successfully converted to a time object. I tried to plot the values directly using matplotlib's plot function, but that did not work. Sample: \n```none\nDate        Result\n2017-01-06  0.0\n2017-01-06  1.0\n2017-01-06  0.0\n2017-01-07  0.0\n2017-01-07  0.0\n```\nI tried using df.plot(), but the resulting plot has very undesirable results. What I want at the end of the day is dates on the x axis, and the \"result\" on the y axis. Where am I going wrong?","response":"Please use df.set_index('Date').plot() or df.plot(x='Date', y='Result'). By default, plot uses the index of the df as the x-axis, so you should either set the 'Date' column as the index or specify which column to use as the x-axis. See more at pandas.DataFrame.plot","best_answers_score":0.7482,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43707620\/plotting-a-time-series","best_answers_votes":51,"tags":["python","pandas","matplotlib","datetime","time-series"],"question_length":640,"response_length":263,"tag_count":5} {"question":"Command-line Unix ASCII-based charting \/ plotting tool Is there a good command-line UNIX charting \/ graphing \/ plotting tool out there? I'm looking for something that will plot xy points on an ASCII graph. Just to clarify, I'm looking for something that will output a graph in ASCII (like ascii-art style), so I can use it over an interactive shell session without needing X.","response":"Try gnuplot. It has very powerful graphing possibilities.
It can output to your terminal in the following way: \n```\ngnuplot> set terminal dumb\nTerminal type set to 'dumb'\nOptions are 'feed 79 24'\ngnuplot> plot sin(x)\n\n 1 ++----------------**---------------+----**-----------+--------**-----++\n + *+ * + * * + sin(x) ****** +\n 0.8 ++ * * * * * * ++\n | * * * * * * |\n 0.6 ++ * * * * * * ++\n * * * * * * * |\n 0.4 +* * * * * * * ++\n |* * * * * * * |\n 0.2 +* * * * * * * ++\n | * * * * * * * |\n 0 ++* * * * * * *++\n | * * * * * * *|\n-0.2 ++ * * * * * * *+\n | * * * * * * *|\n-0.4 ++ * * * * * * *+\n | * * * * * * *\n-0.6 ++ * * * * * * ++\n | * * * * * * |\n-0.8 ++ * * * * * * ++\n + * * + * * + * * +\n -1 ++-----**---------+----------**----+---------------**+---------------++\n -10 -5 0 5 10\n```","best_answers_score":0.7481,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/123378\/command-line-unix-ascii-based-charting-plotting-tool","best_answers_votes":166,"tags":["unix","charts","graph","matplotlib"],"question_length":375,"response_length":1916,"tag_count":4} {"question":"How to plot a 2d matrix in python with colorbar? (like imagesc in Matlab) In Matlab I can visualize a matrix data quite easily with \n```\ndata = rand(10,10); % Creates a 10 x 10 random matrix\nimagesc(data);\ncolorbar;\n```\nNow I want to do the same thing in python. I already know how to plot a 2d matrix (numpy array): \n```\nfrom matplotlib.pyplot import imshow\nimport numpy as np\ndata = np.random.random((10,10))\nimshow(data)\n```\nbut I don't know how to add a colorbar to it. Any ideas?","response":"```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.imshow(np.random.random((50,50)))\nplt.colorbar()\nplt.show()\n```","best_answers_score":0.7477,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42116671\/how-to-plot-a-2d-matrix-in-python-with-colorbar-like-imagesc-in-matlab","best_answers_votes":76,"tags":["python","matplotlib"],"question_length":496,"response_length":123,"tag_count":2} {"question":"Adding a matplotlib legend How can one create a legend for a line graph in Matplotlib's PyPlot without creating any extra variables? Please consider the graphing script below: \n```py\nif __name__ == '__main__':\n    PyPlot.plot(length, bubble, 'b-',\n                length, ins, 'r-',\n                length, merge_r, 'g+',\n                length, merge_i, 'p-', )\n    PyPlot.title(\"Combined Statistics\")\n    PyPlot.xlabel(\"Length of list (number)\")\n    PyPlot.ylabel(\"Time taken (seconds)\")\n    PyPlot.show()\n```\nAs you can see, this is a very basic use of matplotlib's PyPlot. This generates the following graph: However, it is unclear which line is which. Thus, I need a legend; however, taking a look at the example below (from the official site): \n```\nax = subplot(1,1,1)\np1, = ax.plot([1,2,3], label=\"line 1\")\np2, = ax.plot([3,2,1], label=\"line 2\")\np3, = ax.plot([2,3,1], label=\"line 3\")\n\nhandles, labels = ax.get_legend_handles_labels()\n\n# reverse the order\nax.legend(handles[::-1], labels[::-1])\n\n# or sort them by labels\nimport operator\nhl = sorted(zip(handles, labels), key=operator.itemgetter(1))\nhandles2, labels2 = zip(*hl)\n\nax.legend(handles2, labels2)\n```\nYou will see that I need to create an extra variable ax. How can I add a legend to my graph without having to create this extra variable and retaining the simplicity of my current script?","response":"Add a label= to each of your plot() calls, and then call legend(loc='upper left').
Consider this sample (tested with Python 3.8.0): \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 20, 1000)\ny1 = np.sin(x)\ny2 = np.cos(x)\n\nplt.plot(x, y1, \"-b\", label=\"sine\")\nplt.plot(x, y2, \"-r\", label=\"cosine\")\nplt.legend(loc=\"upper left\")\nplt.ylim(-1.5, 2.0)\nplt.show()\n```\nSlightly modified from this tutorial: http:\/\/jakevdp.github.io\/mpl_tutorial\/tutorial_pages\/tut1.html","best_answers_score":0.7468,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19125722\/adding-a-matplotlib-legend","best_answers_votes":868,"tags":["python","matplotlib","legend","legend-properties"],"question_length":1366,"response_length":487,"tag_count":4} {"question":"Plot Ellipse with matplotlib.pyplot Sorry if this is a stupid question, but is there an easy way to plot an ellipse with matplotlib.pyplot in Python? I was hoping there would be something similar to matplotlib.pyplot.arrow, but I can't find anything. Is the only way to do it using matplotlib.patches with draw_artist or something similar? I would hope that there is a simpler method, but the documentation doesn't offer much help.","response":"If you do not want to use a patch, you can use the parametric equation of an ellipse: x = u + a cos(t) ; y = v + b sin(t) \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom math import pi\n\nu=1.     #x-position of the center\nv=0.5    #y-position of the center\na=2.     #radius on the x-axis\nb=1.5    #radius on the y-axis\n\nt = np.linspace(0, 2*pi, 100)\nplt.plot( u+a*np.cos(t) , v+b*np.sin(t) )\nplt.grid(color='lightgray',linestyle='--')\nplt.show()\n```\nWhich gives: The ellipse can be rotated thanks to a 2D rotation matrix: \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom math import pi, cos, sin\n\nu=1.        #x-position of the center\nv=0.5       #y-position of the center\na=2.        #radius on the x-axis\nb=1.5       #radius on the y-axis\nt_rot=pi\/4  #rotation angle\n\nt = np.linspace(0, 2*pi, 100)\nEll = np.array([a*np.cos(t) , b*np.sin(t)])\n     #u,v removed to keep the same center location\nR_rot = np.array([[cos(t_rot) , -sin(t_rot)],[sin(t_rot) , cos(t_rot)]])\n     #2-D rotation matrix\n\nEll_rot = np.zeros((2,Ell.shape[1]))\nfor i in range(Ell.shape[1]):\n    Ell_rot[:,i] = np.dot(R_rot,Ell[:,i])\n\nplt.plot( u+Ell[0,:] , v+Ell[1,:] )     #initial ellipse\nplt.plot( u+Ell_rot[0,:] , v+Ell_rot[1,:],'darkorange' )    #rotated ellipse\nplt.grid(color='lightgray',linestyle='--')\nplt.show()\n```\nReturns:","best_answers_score":0.7467,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10952060\/plot-ellipse-with-matplotlib-pyplot","best_answers_votes":41,"tags":["python","matplotlib","ellipse"],"question_length":431,"response_length":1339,"tag_count":3} {"question":"How do you set the absolute position of figure windows with matplotlib? I'm writing a simple Python application that uses matplotlib to display a few figures on screen. The number of figures generated is based on user input and changes throughout the application's life. The user has the ability to issue a \"plot\" command to generate a new figure window with the selected data series. In order to improve the user experience, I would like to provide another command that would programmatically arrange all open figure windows in some convenient arrangement (e.g. tile them across the available screen space).
I believe I have found APIs that allow me to adjust the size of the figure window (in pixels), but haven't had any success in finding a way to set their absolute position on screen. Is there a way to do this without delving into the details of whatever backend is in use? I would like to do this in a backend-agnostic way so I can avoid relying upon implementation details that might change in the future.","response":"Found the solution for the QT backend: \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nmngr = plt.get_current_fig_manager()\n# to put it into the upper left corner for example:\nmngr.window.setGeometry(50,100,640, 545)\n```\nIf one doesn't know the x- and y-width one can read them out first, like so: \n```\n# get the QTCore PyRect object\ngeom = mngr.window.geometry()\nx,y,dx,dy = geom.getRect()\n```\nand then set the new position with the same size: \n```\nmngr.window.setGeometry(newX, newY, dx, dy)\n```","best_answers_score":0.7466,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7449585\/how-do-you-set-the-absolute-position-of-figure-windows-with-matplotlib","best_answers_votes":57,"tags":["python","matplotlib"],"question_length":1015,"response_length":508,"tag_count":2} {"question":"Change title and colorbar text and tick colors I wanted to know how to change the color of the ticks in the colorbar and how to change the font color of the title and colorbar in a figure. For example, things obviously are visible in temp.png but not in temp2.png: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy.random import randn\n\nfig = plt.figure()\ndata = np.clip(randn(250,250),-1,1)\ncax = plt.imshow(data, interpolation='nearest')\nplt.title('my random fig')\nplt.colorbar()\n\n# works fine\nplt.savefig('temp.png')\n# title and colorbar ticks and text hidden\nplt.savefig('temp2.png', facecolor=\"black\", edgecolor=\"none\")\n```\nThanks","response":"Previous answer didn't give what I wanted.
This is how I did it: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy.random import randn\ndata = np.clip(randn(250,250),-1,1)\ndata = np.ma.masked_where(data > 0.5, data)\n\n\nfig, ax1 = plt.subplots(1,1)\n\nim = ax1.imshow(data, interpolation='nearest')\ncb = plt.colorbar(im)\n\nfg_color = 'white'\nbg_color = 'black'\n\n# IMSHOW\n# set title plus title color\nax1.set_title('ax1 title', color=fg_color)\n\n# set figure facecolor\nax1.patch.set_facecolor(bg_color)\n\n# set tick and ticklabel color\nim.axes.tick_params(color=fg_color, labelcolor=fg_color)\n\n# set imshow outline\nfor spine in im.axes.spines.values():\n    spine.set_edgecolor(fg_color)\n\n# COLORBAR\n# set colorbar label plus label color\ncb.set_label('colorbar label', color=fg_color)\n\n# set colorbar tick color\ncb.ax.yaxis.set_tick_params(color=fg_color)\n\n# set colorbar edgecolor\ncb.outline.set_edgecolor(fg_color)\n\n# set colorbar ticklabels\nplt.setp(plt.getp(cb.ax.axes, 'yticklabels'), color=fg_color)\n\nfig.patch.set_facecolor(bg_color)\nplt.tight_layout()\nplt.show()\n#plt.savefig('save\/to\/pic.png', dpi=200, facecolor=bg_color)\n```","best_answers_score":0.7465,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9662995\/change-title-and-colorbar-text-and-tick-colors","best_answers_votes":52,"tags":["python","matplotlib"],"question_length":651,"response_length":1154,"tag_count":2} {"question":"Matplotlib returning a plot object I have a function that wraps pyplot.plt so I can quickly create graphs with oft-used defaults: \n```\ndef plot_signal(time, signal, title='', xlab='', ylab='',\n                line_width=1, alpha=1, color='k',\n                subplots=False, show_grid=True, fig_size=(10, 5)):\n\n    # Skipping a lot of other complexity here\n\n    f, axarr = plt.subplots(figsize=fig_size)\n    axarr.plot(time, signal, linewidth=line_width,\n               alpha=alpha, color=color)\n    axarr.set_xlim(min(time), max(time))\n    axarr.set_xlabel(xlab)\n    axarr.set_ylabel(ylab)\n    axarr.grid(show_grid)\n\n    plt.suptitle(title, size=16)\n    plt.show()\n```\nHowever, there are times where I'd want to be able to return the plot so I can manually add\/edit things for a specific graph. For example, I want to be able to change the axis labels, or add a second line to the plot after calling the function: \n```\nimport numpy as np\n\nx = np.random.rand(100)\ny = np.random.rand(100)\n\nplot = plot_signal(np.arange(len(x)), x)\n\nplot.plt(y, 'r')\nplot.show()\n```\nI've seen a few questions on this (How to return a matplotlib.figure.Figure object from Pandas plot function? and AttributeError: 'Figure' object has no attribute 'plot') and as a result I've tried adding the following to the end of the function: return axarr, return axarr.get_figure(), and return plt.axes(). However, they all return a similar error: AttributeError: 'AxesSubplot' object has no attribute 'plt' What's the correct way to return a plot object so it can be edited later?","response":"I think the error is pretty self-explanatory. There is no such thing as pyplot.plt, or similar. plt is the quasi-standard abbreviated form of pyplot when being imported, i.e., import matplotlib.pyplot as plt. Concerning the problem, the first approach, return axarr, is the most versatile one. You get an axis, or an array of axes, and can plot to it.
The code may look like: \n```\ndef plot_signal(x,y, ..., **kwargs):\n    # Skipping a lot of other complexity here\n    f, ax = plt.subplots(figsize=fig_size)\n    ax.plot(x,y, ...)\n    # further stuff\n    return ax\n\nax = plot_signal(x,y, ...)\nax.plot(x2, y2, ...)\nplt.show()\n```","best_answers_score":0.7462,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43925337\/matplotlib-returning-a-plot-object","best_answers_votes":49,"tags":["python","matplotlib","plot"],"question_length":1542,"response_length":625,"tag_count":3} {"question":"Plotting numerous disconnected line segments with different colors I have a set of data records like this: \n```\n(s1, t1), (u1, v1), color1\n(s2, t2), (u2, v2), color2\n.\n.\n.\n(sN, tN), (uN, vN), colorN\n```\nIn any record, the first two values are the end-points of a line segment, the third value is the color of that line segment. More specifically, (sn, tn) are the x-y coordinates of the first end-point, (un, vn) are the x-y coordinates of the second end-point. Also, color is an rgb with alpha value. In general, any two line segments are disconnected (meaning that their end-points do not necessarily coincide). How can I plot this data using matplotlib with a single plot call (or as few as possible), as there could be potentially thousands of records? Attempts: Preparing the data in one big list and calling plot against it is way too slow. For example, the following code couldn't finish in a reasonable amount of time: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = []\nfor _ in range(60000):\n    data.append((np.random.rand(), np.random.rand()))\n    data.append((np.random.rand(), np.random.rand()))\n    data.append('r')\n\nprint('now plotting...')  # from now on, takes too long\nplt.plot(*data)\nprint('done')\n#plt.show()\n```\nI was able to speed up the plot rendering by using the None insertion trick as follows: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom timeit import timeit\n\nN = 60000\n_s = np.random.rand(N)\n_t = np.random.rand(N)\n_u = np.random.rand(N)\n_v = np.random.rand(N)\nx = []\ny = []\nfor s, t, u, v in zip(_s, _t, _u, _v):\n    x.append(s)\n    x.append(u)\n    x.append(None)\n    y.append(t)\n    y.append(v)\n    y.append(None)\nprint(timeit(lambda: plt.plot(x, y), number=1))\n```\nThis executes in under a second on my machine.
I still have to figure out how to embed the color values (RGB with alpha channel).","response":"Use LineCollection: \n```\nimport numpy as np\nimport pylab as pl\nfrom matplotlib import collections as mc\n\nlines = [[(0, 1), (1, 1)], [(2, 3), (3, 3)], [(1, 2), (1, 3)]]\nc = np.array([(1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)])\n\nlc = mc.LineCollection(lines, colors=c, linewidths=2)\nfig, ax = pl.subplots()\nax.add_collection(lc)\nax.autoscale()\nax.margins(0.1)\n```\nHere is the output:","best_answers_score":0.7461,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21352580\/plotting-numerous-disconnected-line-segments-with-different-colors","best_answers_votes":124,"tags":["python","matplotlib","plot","line"],"question_length":1847,"response_length":381,"tag_count":4} {"question":"Dollar Sign with Thousands Comma Tick Labels Given the following bar chart: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndf = pd.DataFrame({'A': ['A', 'B'], 'B': [1000,2000]})\n\nfig, ax = plt.subplots(1, 1, figsize=(2, 2))\n\ndf.plot(kind='bar', x='A', y='B',\n        align='center', width=.5, edgecolor='none',\n        color='grey', ax=ax)\nplt.xticks(rotation=25)\nplt.show()\n```\nI'd like to display the y-tick labels as thousands of dollars like this: $2,000 I know I can use this to add a dollar sign: \n```\nimport matplotlib.ticker as mtick\nfmt = '$%.0f'\ntick = mtick.FormatStrFormatter(fmt)\nax.yaxis.set_major_formatter(tick)\n```\n...and this to add a comma: \n```\nax.get_yaxis().set_major_formatter(\n    mtick.FuncFormatter(lambda x, p: format(int(x), ',')))\n```\n...but how do I get both?","response":"You can use StrMethodFormatter, which uses the str.format() specification mini-language. \n```\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as mtick\n\ndf = pd.DataFrame({'A': ['A', 'B'], 'B': [1000,2000]})\n\nfig, ax = plt.subplots(1, 1, figsize=(2, 2))\ndf.plot(kind='bar', x='A', y='B',\n        align='center', width=.5, edgecolor='none',\n        color='grey', ax=ax)\n\nfmt = '${x:,.0f}'\ntick = mtick.StrMethodFormatter(fmt)\nax.yaxis.set_major_formatter(tick)\nplt.xticks(rotation=25)\n\nplt.show()\n```","best_answers_score":0.7455,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38152356\/dollar-sign-with-thousands-comma-tick-labels","best_answers_votes":70,"tags":["python","matplotlib","display","number-formatting","dollar-sign"],"question_length":821,"response_length":548,"tag_count":5} {"question":"matplotlib analog of R's `pairs` R has a useful function pairs that provides a nice matrix of plots of pairwise connections between variables in a data set. The resulting plot looks similar to the following figure, copied from this blog post: Is there any ready-to-use function based on python's matplotlib? I have searched its gallery, but couldn't find anything that resembles what I need. Technically, this should be a simple task, but proper handling of all the possible cases, labels, titles, etc. is very tedious. UPDATE: see my answer below with a quick-and-dirty approximation.","response":"Pandas has a built-in function scatter_matrix (source code) which is something like this.
\n```\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])\naxes = pd.plotting.scatter_matrix(df, alpha=0.2) # pd.tools.plotting.scatter_matrix in very old pandas versions\nplt.tight_layout()\nplt.savefig('scatter_matrix.png')\n```\nHowever, it is pandas-specific (but could be used as a starting point). There are some more R-like plots in pandas. Have a look at the docs.","best_answers_score":0.7454,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2682144\/matplotlib-analog-of-rs-pairs","best_answers_votes":49,"tags":["python","r","matplotlib"],"question_length":580,"response_length":490,"tag_count":3} {"question":"Change the relative size of a subplot I have two plots \n```\nimport matplotlib.pyplot as plt\nplt.subplot(121)\nplt.subplot(122)\n```\nI want plt.subplot(122) to be half as wide as plt.subplot(121). Is there a straightforward way to set the height and width parameters for a subplot?","response":"See the grid-spec tutorial: http:\/\/matplotlib.sourceforge.net\/users\/gridspec.html Example code: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\n\nf = plt.figure()\n\ngs = gridspec.GridSpec(1, 2, width_ratios=[2, 1])\n \nax1 = plt.subplot(gs[0])\nax2 = plt.subplot(gs[1])\n\nplt.show()\n```\nYou can also adjust the height ratio using a similar option in GridSpec.","best_answers_score":0.7447,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5083763\/change-the-relative-size-of-a-subplot","best_answers_votes":55,"tags":["python","matplotlib"],"question_length":278,"response_length":391,"tag_count":2} {"question":"Split title of a figure in matplotlib into multiple lines I use matplotlib to create a figure with 4 sub-plots in it. I would like to split the title of one subplot so that each line is centered with respect to that subplot. I tried \n```\nimport matplotlib.pylab as plt\n\nfig = plt.figure(num=0,figsize=(8.27, 11.69), dpi=300)\nax = fig.add_subplot(2, 2, 1)\nax.set_title(r'Normalized occupied \\\\ Neighbors')\n```\nand what I get is that Neighbors is indented to the left side. How could I correct this?","response":"I get the correct alignment when I format the string this way: \n```\nimport matplotlib.pylab as plt\n\nfig = plt.figure()#num=0,figsize=(8.27, 11.69), dpi=300)\nax = fig.add_subplot(2, 2, 1)\nax.set_title('Normalized occupied \\n Neighbors')\n\nplt.show()\n```","best_answers_score":0.7446,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8598163\/split-title-of-a-figure-in-matplotlib-into-multiple-lines","best_answers_votes":127,"tags":["python","matplotlib"],"question_length":513,"response_length":252,"tag_count":2} {"question":"Creating a Confidence Ellipse in a scatterplot using matplotlib How do I create a confidence ellipse in a scatterplot using matplotlib? The following code works up to creating the scatter plot. Is anyone familiar with putting confidence ellipses over the scatter plot? \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nx = [5,7,11,15,16,17,18]\ny = [8, 5, 8, 9, 17, 18, 25]\n\nplt.scatter(x,y)\nplt.show()\n```\nFollowing is the reference for Confidence Ellipses from SAS.
http:\/\/support.sas.com\/documentation\/cdl\/en\/grstatproc\/62603\/HTML\/default\/viewer.htm#a003160800.htm The code in sas is like this: \n```\nproc sgscatter data=sashelp.iris(where=(species=\"Versicolor\"));\n title \"Versicolor Length and Width\";\n compare y=(sepalwidth petalwidth)\n x=(sepallength petallength)\n \/ reg ellipse=(type=mean) spacing=4;\nrun;\n```","response":"After giving the accepted answer a go, I found that it doesn't choose the quadrant correctly when calculating theta, as it relies on np.arccos: Taking a look at the 'possible duplicate' and Joe Kington's solution on github, I watered his code down to this: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse\n\ndef eigsorted(cov):\n vals, vecs = np.linalg.eigh(cov)\n order = vals.argsort()[::-1]\n return vals[order], vecs[:,order]\n\nx = [5,7,11,15,16,17,18]\ny = [25, 18, 17, 9, 8, 5, 8]\n\nnstd = 2\nax = plt.subplot(111)\n\ncov = np.cov(x, y)\nvals, vecs = eigsorted(cov)\ntheta = np.degrees(np.arctan2(*vecs[:,0][::-1]))\nw, h = 2 * nstd * np.sqrt(vals)\nell = Ellipse(xy=(np.mean(x), np.mean(y)),\n width=w, height=h,\n angle=theta, color='black')\nell.set_facecolor('none')\nax.add_artist(ell)\nplt.scatter(x, y)\nplt.show()\n```","best_answers_score":0.7437,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20126061\/creating-a-confidence-ellipse-in-a-scatterplot-using-matplotlib","best_answers_votes":29,"tags":["python","numpy","matplotlib","scipy"],"question_length":842,"response_length":892,"tag_count":4} {"question":"How do I change the range of the x-axis with datetime? I'm trying to plot a graph of dates on the x-axis and values on the y-axis. It works fine, except that I can't get the range of the x-axis to be appropriate. The x-axis range is always Jan 2012 to Jan 2016, despite my dates being from today. I am even specifying that xlim should be the first and last date. I'm writing this for python-django, if that's relevant. \n```\nimport datetime\n import matplotlib.pyplot as plt\n\n x = [datetime.date(2014, 1, 29), datetime.date(2014, 1, 29), datetime.date(2014, 1, 29)] \n y = [2, 4, 1]\n\n fig, ax = plt.subplots()\n ax.plot_date(x, y)\n ax.set_xlim([x[0], x[-1]])\n\n canvas = FigureCanvas(plt.figure(1))\n response = HttpResponse(content_type='image\/png')\n canvas.print_png(response)\n return response\n```\nAnd here is the output:","response":"Edit: Having seen actual data from the OP, all of the values are at the same date\/time. So matplotlib is automatically zooming the x-axis out. 
You can still manually set the x-axis limits with datetime objects. If I do something like this on matplotlib v1.3.1: \n```\nimport datetime\nimport matplotlib.pyplot as plt\n\nx = [datetime.date(2014, 1, 29)] * 3 \ny = [2, 4, 1]\n\nfig, ax = plt.subplots()\nax.plot_date(x, y, markerfacecolor='CornflowerBlue', markeredgecolor='white')\nfig.autofmt_xdate()\nax.set_xlim([datetime.date(2014, 1, 26), datetime.date(2014, 2, 1)])\nax.set_ylim([0, 5])\n```\nI get a plot where the axes limits match the dates that I specified.","best_answers_score":0.7426,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21423158\/how-do-i-change-the-range-of-the-x-axis-with-datetime","best_answers_votes":79,"tags":["python","django","date","matplotlib"],"question_length":817,"response_length":643,"tag_count":4} {"question":"Y-Axis ticks on Right Side of Plot I have a simple line plot and need to move the y-axis ticks from the (default) left side of the plot to the right side. Any thoughts on how to do this?","response":"Use ax.yaxis.tick_right(), for example: \n```\nfrom matplotlib import pyplot as plt\n\nf = plt.figure()\nax = f.add_subplot(111)\nax.yaxis.tick_right()\nplt.plot([2,3,4,5])\nplt.show()\n```","best_answers_score":0.7424,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10354397\/y-axis-ticks-on-right-side-of-plot","best_answers_votes":266,"tags":["python","matplotlib"],"question_length":186,"response_length":179,"tag_count":2} {"question":"Dynamically updating plot in matplotlib I am making an application in Python which collects data from a serial port and plots a graph of the collected data against arrival time. The time of arrival for the data is uncertain. I want the plot to be updated when data is received. I searched on how to do this and found two methods: Clear the plot and re-draw the plot with all the points again. Animate the plot by changing it after a particular interval. I do not prefer the first one as the program runs and collects data for a long time (a day for example), and redrawing the plot will be pretty slow. The second one is also not preferable as time of arrival of data is uncertain and I want the plot to update only when the data is received. Is there a way in which I can update the plot just by adding more points to it only when the data is received?","response":"Is there a way in which I can update the plot just by adding more point[s] to it... There are a number of ways of animating data in matplotlib, depending on the version you have. Have you seen the animation examples in the matplotlib documentation? The animation API defines a function FuncAnimation which animates a function in time. This function could just be the function you use to acquire your data. Each method basically sets the data property of the object being drawn, so doesn't require clearing the screen or figure. The data property can simply be extended, so you can keep the previous points and just keep adding to your line (or image or whatever you are drawing).
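A minimal sketch of that set-the-data pattern with FuncAnimation follows; the names and the toy sine-wave update are illustrative assumptions, not code from the linked examples: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.animation import FuncAnimation\n\nfig, ax = plt.subplots()\nline, = ax.plot([], [])\nax.set_xlim(0, 2 * np.pi)\nax.set_ylim(-1, 1)\n\nx = np.linspace(0, 2 * np.pi, 100)\n\ndef update(frame):\n # just reset the artist's data property; no clearing or redrawing of the figure\n line.set_data(x, np.sin(x + 0.1 * frame))\n return line,\n\nani = FuncAnimation(fig, update, frames=200, interval=50, blit=True)\nplt.show()\n```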
Given that you say that your data arrival time is uncertain, your best bet is probably just to do something like: \n```\nimport matplotlib.pyplot as plt\nimport numpy\n\nhl, = plt.plot([], [])\n\ndef update_line(hl, new_x, new_y):\n # append the incoming point(s) to the line's existing data\n hl.set_xdata(numpy.append(hl.get_xdata(), new_x))\n hl.set_ydata(numpy.append(hl.get_ydata(), new_y))\n plt.draw()\n```\nThen, when you receive data from the serial port, just call update_line with the new x and y values.","best_answers_score":0.7421,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10944621\/dynamically-updating-plot-in-matplotlib","best_answers_votes":175,"tags":["python","matplotlib","tkinter"],"question_length":853,"response_length":1102,"tag_count":3} {"question":"How to draw intersecting planes? I want to use matplotlib to draw more or less the figure I attached below, which includes the two intersecting planes with the right amount of transparency indicating their relative orientations, and the circles and vectors in the two planes projected in 2D. I'm not sure if there is an existing package for doing this, any hints?","response":"```\nfrom mpl_toolkits.mplot3d import axes3d\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\ndim = 10\n\nX, Y = np.meshgrid([-dim, dim], [-dim, dim])\nZ = np.zeros((2, 2))\n\nangle = .5\nX2, Y2 = np.meshgrid([-dim, dim], [0, dim])\nZ2 = Y2 * angle\nX3, Y3 = np.meshgrid([-dim, dim], [-dim, 0])\nZ3 = Y3 * angle\n\nr = 7\nM = 1000\nth = np.linspace(0, 2 * np.pi, M)\n\nx, y, z = r * np.cos(th), r * np.sin(th), angle * r * np.sin(th)\n\nax.plot_surface(X2, Y3, Z3, color='blue', alpha=.5, linewidth=0, zorder=-1)\n\nax.plot(x[y < 0], y[y < 0], z[y < 0], lw=5, linestyle='--', color='green',\n zorder=0)\n\nax.plot_surface(X, Y, Z, color='red', alpha=.5, linewidth=0, zorder=1)\n\nax.plot(r * np.sin(th), r * np.cos(th), np.zeros(M), lw=5, linestyle='--',\n color='k', zorder=2)\n\nax.plot_surface(X2, Y2, Z2, color='blue', alpha=.5, linewidth=0, zorder=3)\n\nax.plot(x[y > 0], y[y > 0], z[y > 0], lw=5, linestyle='--', color='green',\n zorder=4)\n\nplt.axis('off')\nplt.show()\n```\nCaveats: I am running a version very close to the current master, so I am not sure what will work in older versions. The reason for splitting up the plotting is that 'above' and 'below' are determined in a somewhat arcane way (I am not strictly sure the zorder actually does anything), and it is really dependent on the order the artists are drawn in. Thus surfaces cannot intersect (one will be above the other everywhere), so you need to plot the sections on either side of the intersection separately. (You can see this with the black line, which I didn't split: it looks as if it is 'on top of' the upper blue plane.)
The 'proper' ordering of the surfaces also seems to be dependent on the view angle.","best_answers_score":0.7421,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14824893\/how-to-draw-intersecting-planes","best_answers_votes":25,"tags":["python","matplotlib"],"question_length":363,"response_length":1735,"tag_count":2} {"question":"matplotlib not showing first label on x axis for the Bar Plot Here is the code I am working on to create a logarithmic bar plot \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfig = plt.figure(figsize = (12,6))\n\nax = fig.add_subplot(111)\n\nx = ['Blue Whale', 'Killer Whale', 'Bluefin tuna', \\\n 'Bottlenose dolphin', \"Maui's dolphin\", 'Flounder',\\\n 'Starfish', 'Spongebob Squarepants']\n\ny = [190000, 5987, 684, 650, 40, 6.8, 5, 0.02]\n\nax.bar(np.arange(len(x)),y, log=1)\n\nax.set_xticklabels(x, rotation = 45)\n\n\nfig.savefig(filename = \"f:\/plot.png\")\n```\nNow this is creating the bar plot where its not showing the first label, which is Blue Whale. Here is the plot I am getting So how can this be rectified ? Matplotlib version is 2.0.0 and Numpy version is 1.12.1 Thanks","response":"In matplotlib 2.0 there might be unshown tickmarks at the edges of the axes. To be on the safe side, you can set the tick locations in addition to the tick labels, \n```\nax.set_xticks(np.arange(len(x)))\nax.set_xticklabels(x, rotation = 45)\n```\nYou may also want to set the labels to align to their right edge, if they're rotated: \n```\nax.set_xticklabels(x, rotation = 45, ha=\"right\")\n```","best_answers_score":0.7413,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43673659\/matplotlib-not-showing-first-label-on-x-axis-for-the-bar-plot","best_answers_votes":32,"tags":["python","matplotlib"],"question_length":784,"response_length":386,"tag_count":2} {"question":"How do I plot a step function? This should be easy but I have just started toying with matplotlib and python. I can do a line or a scatter plot but i am not sure how to do a simple step function. Any help is much appreciated. \n```\nx = 1,2,3,4\ny = 0.002871972681775004, 0.00514787917410944, 0.00863476098280219, 0.012003316194034325\n```","response":"It seems like you want step. E.g. \n```\nimport matplotlib.pyplot as plt\n\nx = [1,2,3,4] \ny = [0.002871972681775004, 0.00514787917410944, \n 0.00863476098280219, 0.012003316194034325]\n\nplt.step(x, y)\nplt.show()\n```","best_answers_score":0.7406,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8921296\/how-do-i-plot-a-step-function","best_answers_votes":72,"tags":["python","matplotlib"],"question_length":335,"response_length":214,"tag_count":2} {"question":"unique plot marker for each plot I have a loop where i create some plots and I need unique marker for each plot. I think about creating function, which returns random symbol, and use it in my program in this way: \n```\nfor i in xrange(len(y)):\n plt.plot(x, y [i], randomMarker())\n```\nbut I think this way is not good one. I need this just to distinguish plots on legend, because plots must be not connected with lines, they must be just sets of dots.","response":"itertools.cycle will iterate over a list or tuple indefinitely. This is preferable to a function which randomly picks markers for you. 
Python 2.x \n```\nimport itertools\nmarker = itertools.cycle((',', '+', '.', 'o', '*')) \nfor n in y:\n plt.plot(x,n, marker = marker.next(), linestyle='')\n```\nPython 3.x \n```\nimport itertools\nmarker = itertools.cycle((',', '+', '.', 'o', '*')) \nfor n in y:\n plt.plot(x,n, marker = next(marker), linestyle='')\n```\nYou can use that to produce a plot like this (Python 2.x): \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport itertools\n\nx = np.linspace(0,2,10)\ny = np.sin(x)\n\nmarker = itertools.cycle((',', '+', '.', 'o', '*')) \n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nfor q,p in zip(x,y):\n ax.plot(q,p, linestyle = '', marker=marker.next())\n \nplt.show()\n```","best_answers_score":0.7401,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13091649\/unique-plot-marker-for-each-plot","best_answers_votes":95,"tags":["python","matplotlib","plot-annotations"],"question_length":452,"response_length":817,"tag_count":3} {"question":"Why is set_xlim() not setting the x-limits in my figure? I'm plotting some data with matplotlib. I want the plot to focus on a specific range of x-values, so I'm using set_xlim(). Roughly, my code looks like this: \n```\nfig=plt.figure()\nax=fig.add_subplot(111)\nfor ydata in ydatalist:\n ax.plot(x_data,y_data[0],label=ydata[1])\nax.set_xlim(left=0.0,right=1000)\nplt.savefig(filename)\n```\nWhen I look at the plot, the x range ends up being from 0 to 12000. This occurs whether set_xlim() occurs before or after plot(). Why is set_xlim() not working in this situation?","response":"The text of this answer was taken from an answer that was deleted almost immediately after it was posted. set_xlim() limits the data that is displayed on the plot. In order to change the bounds of the axis, use set_xbound(). \n```\nfig=plt.figure()\nax=fig.add_subplot(111)\nax.plot(x_data,y_data)\nax.set_xbound(lower=0.0, upper=1000)\nplt.savefig(filename)\n```","best_answers_score":0.74,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17734587\/why-is-set-xlim-not-setting-the-x-limits-in-my-figure","best_answers_votes":18,"tags":["python","matplotlib"],"question_length":566,"response_length":356,"tag_count":2} {"question":"Display multiple images in subplots How do I use the matplotlib function plt.imshow(image) to display multiple images? For example, my code is as follows: \n```\nfor file in images:\n process(file)\n\ndef process(filename):\n image = mpimg.imread(filename)\n \n plt.imshow(image)\n```\nMy results show that only the last processed image is shown, effectively overwriting the other images.","response":"To display multiple images, use plt.subplots(): \n```\n# subplots(r, c) gives a figure and an r-by-c array of axes\nf, axarr = plt.subplots(4,1)\n\n# use the created array to output your multiple images; in this case 4 images are stacked vertically\naxarr[0].imshow(v_slice[0])\naxarr[1].imshow(v_slice[1])\naxarr[2].imshow(v_slice[2])\naxarr[3].imshow(v_slice[3])\n```","best_answers_score":0.7395,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41210823\/display-multiple-images-in-subplots","best_answers_votes":64,"tags":["python","matplotlib","subplot","imshow"],"question_length":409,"response_length":364,"tag_count":4} {"question":"iPython\/Jupyter Notebook and Pandas, how to plot multiple graphs in a for loop?
Consider the following code running in iPython\/Jupyter Notebook: \n```\nfrom pandas import *\n%matplotlib inline\n\nys = [[0,1,2,3,4],[4,3,2,1,0]]\nx_ax = [0,1,2,3,4]\n\nfor y_ax in ys:\n ts = Series(y_ax,index=x_ax)\n ts.plot(kind='bar', figsize=(15,5))\n```\nI would expect to have 2 separate plots as output, instead, I get the two series merged in one single plot. Why is that? How can I get two separate plots keeping the for loop?","response":"Just add the call to plt.show() after you plot the graph (you might want to import matplotlib.pyplot to do that), like this: \n```\nfrom pandas import Series\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nys = [[0,1,2,3,4],[4,3,2,1,0]]\nx_ax = [0,1,2,3,4]\n\nfor y_ax in ys:\n ts = Series(y_ax,index=x_ax)\n ts.plot(kind='bar', figsize=(15,5))\n plt.show()\n```","best_answers_score":0.739,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29532894\/ipython-jupyter-notebook-and-pandas-how-to-plot-multiple-graphs-in-a-for-loop","best_answers_votes":72,"tags":["python","pandas","matplotlib","plot","jupyter-notebook"],"question_length":510,"response_length":366,"tag_count":5} {"question":"Saving interactive Matplotlib figures Is there a way to save a Matplotlib figure such that it can be re-opened and have typical interaction restored? (Like the .fig format in MATLAB?) I find myself running the same scripts many times to generate these interactive figures. Or I'm sending my colleagues multiple static PNG files to show different aspects of a plot. I'd rather send the figure object and have them interact with it themselves.","response":"I just found out how to do this. The \"experimental pickle support\" mentioned by @pelson works quite well. Try this: \n```\n# Plot something\nimport matplotlib.pyplot as plt\nfig,ax = plt.subplots()\nax.plot([1,2,3],[10,-10,30])\n```\nAfter your interactive tweaking, save the figure object as a binary file: \n```\nimport pickle\npickle.dump(fig, open('FigureObject.fig.pickle', 'wb')) # This is for Python 3 - py2 may need `file` instead of `open`\n```\nLater, open the figure and the tweaks should be saved and GUI interactivity should be present: \n```\nimport pickle\nfigx = pickle.load(open('FigureObject.fig.pickle', 'rb'))\n\nfigx.show() # Show the figure, edit it, etc.!\n```\nYou can even extract the data from the plots: \n```\ndata = figx.axes[0].lines[0].get_data()\n```\n(It works for lines, pcolor & imshow - pcolormesh works with some tricks to reconstruct the flattened data.) 
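For instance, image data can be pulled back out of an unpickled figure in the same spirit. This is a minimal sketch, assuming the figure contains an imshow artist (the file name and data are arbitrary): \n```\nimport numpy as np\nimport pickle\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.imshow(np.arange(12).reshape(3, 4))\npickle.dump(fig, open('im.fig.pickle', 'wb'))\n\nfigx = pickle.load(open('im.fig.pickle', 'rb'))\narr = figx.axes[0].images[0].get_array() # the image data, as a masked array\nprint(arr.shape) # (3, 4)\n```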
I got the excellent tip from Saving Matplotlib Figures Using Pickle.","best_answers_score":0.7383,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4348733\/saving-interactive-matplotlib-figures","best_answers_votes":101,"tags":["python","matplotlib"],"question_length":441,"response_length":938,"tag_count":2} {"question":"multi index plotting I have some data where I've manipulated the dataframe using the following code: \n```\nimport pandas as pd\nimport numpy as np\n\ndata = pd.DataFrame([[0,0,0,3,6,5,6,1],[1,1,1,3,4,5,2,0],[2,1,0,3,6,5,6,1],[3,0,0,2,9,4,2,1],[4,0,1,3,4,8,1,1],[5,1,1,3,3,5,9,1],[6,1,0,3,3,5,6,1],[7,0,1,3,4,8,9,1]], columns=[\"id\", \"sex\", \"split\", \"group0Low\", \"group0High\", \"group1Low\", \"group1High\", \"trim\"])\ndata\n\n#remove all where trim == 0\ntrimmed = data[(data.trim == 1)]\ntrimmed\n\n#create df with columns to be split\ncolumns = ['group0Low', 'group0High', 'group1Low', 'group1High']\nto_split = trimmed[columns]\nto_split\n\nlevel_group = np.where(to_split.columns.str.contains('0'), 0, 1)\n# output: array([0, 0, 1, 1])\nlevel_low_high = np.where(to_split.columns.str.contains('Low'), 'low', 'high')\n# output: array(['low', 'high', 'low', 'high'], dtype=' z, then you will have a problem because there are multiple ways to triangulate that 3d point cloud into a surface. Here's a smooth surface example: \n```\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D \n# Axes3D import has side effects, it enables using projection='3d' in add_subplot\nimport matplotlib.pyplot as plt\nimport random\n\ndef fun(x, y):\n return x**2 + y\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nx = y = np.arange(-3.0, 3.0, 0.05)\nX, Y = np.meshgrid(x, y)\nzs = np.array(fun(np.ravel(X), np.ravel(Y)))\nZ = zs.reshape(X.shape)\n\nax.plot_surface(X, Y, Z)\n\nax.set_xlabel('X Label')\nax.set_ylabel('Y Label')\nax.set_zlabel('Z Label')\n\nplt.show()\n```","best_answers_score":0.7228,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9170838\/surface-plots-in-matplotlib","best_answers_votes":185,"tags":["python","numpy","matplotlib","surface","matplotlib-3d"],"question_length":413,"response_length":878,"tag_count":5} {"question":"Set y axis limit in Pandas histogram I am using Pandas histogram. I would like to set the y-axis range of the plot. Here is the context: \n```\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ninteresting_columns = ['Level', 'Group']\n\nfor column in interesting_columns:\n data['ranking'].hist(by=data[column], normed=True)\n```\nThere is a range argument that can filter x-values, but I am unaware of the y equivalent: \n```\nhist(by=[column], normed=True, range=[0, 1]) #working argument\nhist(by=[column], normed=True, y_range=[0, 1]) #hypothetical argument\n```\nI've read a lot of different methods for changing plot ranges using plt attributes. They do not seem to work in a loop and for subplots. I am struggling to grasp the right way to approach this problem.","response":"If you use \n```\ndata['ranking'].plot.hist(ylim=(0,1))\n```\n(mind the .plot in the syntax!) 
it should work.","best_answers_score":0.722,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38424459\/set-y-axis-limit-in-pandas-histogram","best_answers_votes":45,"tags":["pandas","matplotlib","histogram"],"question_length":763,"response_length":105,"tag_count":3} {"question":"Matplotlib plot with variable line width Is it possible to plot a line with variable line width in matplotlib? For example: \n```\nfrom pylab import *\nx = [1, 2, 3, 4, 5]\ny = [1, 2, 2, 0, 0]\nwidth = [.5, 1, 1.5, .75, .75]\n\nplot(x, y, linewidth=width)\n```\nThis doesn't work because linewidth expects a scalar. Note: I'm aware of *fill_between()* and *fill_betweenx()*. Because these only fill in x or y direction, these do not do justice to cases where you have a slanted line. It is desirable for the fill to always be normal to the line. That is why a variable width line is sought.","response":"Use LineCollections. A way to do it along the lines of this Matplotlib example is \n```\nimport numpy as np\nfrom matplotlib.collections import LineCollection\nimport matplotlib.pyplot as plt\nx = np.linspace(0,4*np.pi,10000)\ny = np.cos(x)\nlwidths=1+x[:-1]\npoints = np.array([x, y]).T.reshape(-1, 1, 2)\nsegments = np.concatenate([points[:-1], points[1:]], axis=1)\nlc = LineCollection(segments, linewidths=lwidths,color='blue')\nfig,a = plt.subplots()\na.add_collection(lc)\na.set_xlim(0,4*np.pi)\na.set_ylim(-1.1,1.1)\nfig.show()\n```","best_answers_score":0.7214,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19390895\/matplotlib-plot-with-variable-line-width","best_answers_votes":94,"tags":["python","matplotlib","plot"],"question_length":581,"response_length":523,"tag_count":3} {"question":"Is it possible to control matplotlib marker orientation? If I have a triangular marker, is it possible to control its orientation? I have a series of facets, with their corresponding vertices, and I would like to plot a basemap of them. I know it is straightforward script when using Mayavi and tvtk.PolyData. But since I'm dealing with maps and not 3D objects, things got a bit complicated. ps: for maps I'm using basemap tool.","response":"You can create custom polygons using the keyword argument marker and passing it a tuple of 3 numbers (number of sides, style, rotation). To create a triangle you would use (3, 0, rotation), an example is shown below. \n```\nimport matplotlib.pyplot as plt\n\nx = [1,2,3]\nfor i in x:\n plt.plot(i, i, marker=(3, 0, i*90), markersize=20, linestyle='None')\n\nplt.xlim([0,4])\nplt.ylim([0,4])\n\nplt.show()\n```","best_answers_score":0.7211,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23345565\/is-it-possible-to-control-matplotlib-marker-orientation","best_answers_votes":49,"tags":["python","matplotlib","matplotlib-basemap"],"question_length":428,"response_length":400,"tag_count":3} {"question":"seaborn scatterplot marker size for ALL markers I can't find out anywhere how to change the marker size on seaborn scatterplots. There is a size option listed in the documentation but it is only for when you want variable size across points. I want the same size for all points but larger than the default! I tried making a new column of integers in my dataframe and set that as the size, but it looks like the actual value doesn't matter, it changes the marker size on a relative basis, so in this case all the markers were still the same size as the default. 
Here's some code: \n```py\nax = sns.scatterplot(x=\"Data Set Description\", y=\"R Squared\", data=mean_df)\nplt.show()\n```\nI just tried something and it worked, not sure if it's the best method though. I added size=[1, 1, 1, 1, 1, 1] and sizes=(500, 500). So essentially I'm setting all sizes to be the same, and the range of sizes to be only at 500.","response":"You can do so by giving a value to the s argument to change the marker size. Example: \n```\nax = sns.scatterplot(x=\"Data Set Description\", y=\"R Squared\", data=mean_df, s=10)\n```","best_answers_score":0.7209,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/52785101\/seaborn-scatterplot-marker-size-for-all-markers","best_answers_votes":202,"tags":["python","matplotlib","seaborn","scatter-plot"],"question_length":904,"response_length":176,"tag_count":4} {"question":"matplotlib imshow(): how to animate? i found this wonderful short tutorial on animation: http:\/\/jakevdp.github.io\/blog\/2012\/08\/18\/matplotlib-animation-tutorial\/ however i cant produce an animated imshow() plot of same fashion. I tried to replace some lines: \n```\n# First set up the figure, the axis, and the plot element we want to animate\nfig = plt.figure()\nax = plt.axes(xlim=(0, 10), ylim=(0, 10))\n#line, = ax.plot([], [], lw=2)\na=np.random.random((5,5))\nim=plt.imshow(a,interpolation='none')\n# initialization function: plot the background of each frame\ndef init():\n im.set_data(np.random.random((5,5)))\n return im\n\n# animation function. This is called sequentially\ndef animate(i):\n a=im.get_array()\n a=a*np.exp(-0.001*i) # exponential decay of the values\n im.set_array(a)\n return im\n```\nbut i run into errors can you help me get this running? thank you in advance. best,","response":"You're very close, but there's one mistake - init and animate should return iterables containing the artists that are being animated. That's why in Jake's version they return line, (which is actually a tuple) rather than line (which is a single line object). Sadly the docs are not clear on this! You can fix your version like this: \n```\n# initialization function: plot the background of each frame\ndef init():\n im.set_data(np.random.random((5,5)))\n return [im]\n\n# animation function. This is called sequentially\ndef animate(i):\n a=im.get_array()\n a=a*np.exp(-0.001*i) # exponential decay of the values\n im.set_array(a)\n return [im]\n```","best_answers_score":0.7197,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17212722\/matplotlib-imshow-how-to-animate","best_answers_votes":38,"tags":["python","image","matplotlib","matplotlib-animation"],"question_length":896,"response_length":658,"tag_count":4} {"question":"Pandas histogram df.hist() group by How to plot a histogram with pandas DataFrame.hist() using group by? I have a data frame with 5 columns: \"A\", \"B\", \"C\", \"D\" and \"Group\" There are two Groups classes: \"yes\" and \"no\" Using: \n```\ndf.hist()\n```\nI get the hist for each of the 4 columns. Now I would like to get the same 4 graphs but with blue bars (group=\"yes\") and red bars (group = \"no\"). I tried this withouth success: \n```\ndf.hist(by = \"group\")\n```","response":"Using Seaborn If you are open to use Seaborn, a plot with multiple subplots and multiple variables within each subplot can easily be made using seaborn.FacetGrid. 
\n```\nimport numpy as np; np.random.seed(1)\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame(np.random.randn(300,4), columns=list(\"ABCD\"))\ndf[\"group\"] = np.random.choice([\"yes\", \"no\"], p=[0.32,0.68],size=300)\n\ndf2 = pd.melt(df, id_vars='group', value_vars=list(\"ABCD\"), value_name='value')\n\nbins=np.linspace(df2.value.min(), df2.value.max(), 10)\ng = sns.FacetGrid(df2, col=\"variable\", hue=\"group\", palette=\"Set1\", col_wrap=2)\ng.map(plt.hist, 'value', bins=bins, ec=\"k\")\n\ng.axes[-1].legend()\nplt.show()\n```","best_answers_score":0.7195,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45883598\/pandas-histogram-df-hist-group-by","best_answers_votes":20,"tags":["pandas","matplotlib","histogram"],"question_length":450,"response_length":711,"tag_count":3} {"question":"How to create a seaborn.heatmap() with frames around the tiles? I rendered a heatmap with seaborn.heatmap() works nicely. However, for a certain purpose I need frames around the plot. matplotlib.rcParams['axes.edgecolor'] = 'black' matplotlib.rcParams['axes.linewidth'] = 1 both don't work.","response":"sns.heatmap(dataset, linewidths=1, linecolor='black')","best_answers_score":0.7191,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/36560829\/how-to-create-a-seaborn-heatmap-with-frames-around-the-tiles","best_answers_votes":24,"tags":["python","matplotlib","seaborn"],"question_length":290,"response_length":53,"tag_count":3} {"question":"How do I extend the margin at the bottom of a figure in Matplotlib? The following screenshot shows my x-axis. I added some labels and rotated them by 90 degrees in order to better read them. However, pyplot truncates the bottom such that I'm not able to completely read the labels. How do I extend the bottom margin in order to see the complete labels?","response":"Two retroactive ways: \n```py\nfig, ax = plt.subplots()\n# ...\nfig.tight_layout()\n```\nOr \n```py\nfig.subplots_adjust(bottom=0.2) # or whatever\n```\nHere's a subplots_adjust example: http:\/\/matplotlib.org\/examples\/pylab_examples\/subplots_adjust.html (but I prefer tight_layout)","best_answers_score":0.7183,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27878217\/how-do-i-extend-the-margin-at-the-bottom-of-a-figure-in-matplotlib","best_answers_votes":97,"tags":["matplotlib","margin"],"question_length":352,"response_length":271,"tag_count":2} {"question":"Top label for matplotlib colorbars By default, matplotlib would position colorbar labels alongside the vertical colorbars. What is the best way to force the label to be on top of a colorbar? Currently my solution needs adjusting labelpad and y values depending on size of the label: \n```py\nimport numpy as np\nimport matplotlib.pylab as plt \n\ndat = np.random.randn(10,10)\nplt.imshow(dat, interpolation='none')\n\nclb = plt.colorbar()\nclb.set_label('label', labelpad=-40, y=1.05, rotation=0)\n\nplt.show()\n```\nIs there a better, more generic way to do this?","response":"You could set the title of the colorbar axis (which appears above the axis), rather than the label (which appears along the long axis). To access the colorbar's Axes, you can use clb.ax. You can then use set_title, in the same way you can for any other Axes instance. 
For example: \n```\nimport numpy as np\nimport matplotlib.pylab as plt \n\ndat = np.random.randn(10,10)\nplt.imshow(dat, interpolation='none')\n\nclb = plt.colorbar()\nclb.ax.set_title('This is a title')\n\nplt.show()\n```","best_answers_score":0.717,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33737427\/top-label-for-matplotlib-colorbars","best_answers_votes":82,"tags":["python","matplotlib","plot","colorbar","imshow"],"question_length":551,"response_length":478,"tag_count":5} {"question":"Giving graphs a subtitle I want to give my graph a title in big 18pt font, then a subtitle below it in smaller 10pt font. How can I do this in matplotlib? It appears the title() function only takes one single string with a single fontsize attribute. There has to be a way to do this, but how?","response":"What I do is use the title() function for the subtitle and the suptitle() for the main title (they can take different font size arguments).","best_answers_score":0.7165,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1388450\/giving-graphs-a-subtitle","best_answers_votes":113,"tags":["python","matplotlib"],"question_length":292,"response_length":139,"tag_count":2} {"question":"yaxis range display using absolute values rather than offset values I have the following range of numpy data (deltas of usec timestamps): \n```\narray([ 4.312, 4.317, 4.316, 4.32 , 4.316, 4.316, 4.319, 4.317,\n 4.317, 4.316, 4.318, 4.316, 4.318, 4.316, 4.318, 4.317,\n 4.317, 4.317, 4.316, 4.317, 4.318, 4.316, 4.318, 4.316,\n 4.318, 4.316, 4.317, 4.317, 4.318, 4.316, 4.317, 4.317,\n 4.317, 4.317, 4.317, 4.316, 4.319, 4.315, 4.319, 4.315,\n 4.319, 4.315, 4.316, 4.319, 4.317, 4.317, 4.317, 4.318,\n 4.315, 4.317, 4.317, 4.317, 4.319, 4.314, 4.32 , 4.315,\n 4.317, 4.318, 4.315, 4.318, 4.317, 4.317, 4.317, 4.316,\n 4.317, 4.318, 4.317, 4.317, 4.317, 4.315, 4.319, 4.317,\n 4.315, 4.319, 4.316, 4.318, 4.318, 4.315, 4.318, 4.317,\n 4.317, 4.321])\n```\nWhen I plot with matplotlib.pyplot: \n```\nimport matplotlib.pyplot as plt\n plt.plot( deltas ) \n plt.show()\n```\nI get the following plot. Why is the Y-axis being scaled that way? How can I get the Y-axis to be labeled as the data, not an offset of the data? Sometimes the plot is the \"+4.nnn\" sometimes it isn't (depending on the data range?). Plotted \"strangely\": Plotted \"correctly\":","response":"set useOffset to False: \n```\nax = plt.gca()\nax.ticklabel_format(useOffset=False)\n```","best_answers_score":0.7158,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9303728\/yaxis-range-display-using-absolute-values-rather-than-offset-values","best_answers_votes":48,"tags":["python","matplotlib","yaxis"],"question_length":1224,"response_length":84,"tag_count":3} {"question":"Grouping boxplots in seaborn when input is a DataFrame I intend to plot multiple columns in a pandas dataframe, all grouped by another column using groupby inside seaborn.boxplot. There is a nice answer here, for a similar problem in matplotlib matplotlib: Group boxplots but given the fact that seaborn.boxplot comes with groupby option I thought it could be much easier to do this in seaborn. 
Here we go with a reproducible example that fails: \n```\nimport seaborn as sns\nimport pandas as pd\ndf = pd.DataFrame([[2, 4, 5, 6, 1], [4, 5, 6, 7, 2], [5, 4, 5, 5, 1],\n [10, 4, 7, 8, 2], [9, 3, 4, 6, 2], [3, 3, 4, 4, 1]],\n columns=['a1', 'a2', 'a3', 'a4', 'b'])\n\n# display(df)\n a1 a2 a3 a4 b\n0 2 4 5 6 1\n1 4 5 6 7 2\n2 5 4 5 5 1\n3 10 4 7 8 2\n4 9 3 4 6 2\n5 3 3 4 4 1\n\n#Plotting by seaborn\nsns.boxplot(df[['a1','a2', 'a3', 'a4']], groupby=df.b)\n```\nWhat I get is something that completely ignores groupby option: Whereas if I do this with one column it works thanks to another SO question Seaborn groupby pandas Series : \n```\nsns.boxplot(df.a1, groupby=df.b)\n```\nSo I would like to get all my columns in one plot (all columns come in a similar scale). EDIT: The above SO question was edited and now includes a 'not clean' answer to this problem, but it would be nice if someone has a better idea for this problem.","response":"As the other answers note, the boxplot function is limited to plotting a single \"layer\" of boxplots, and the groupby parameter only has an effect when the input is a Series and you have a second variable you want to use to bin the observations into each box.. However, you can accomplish what I think you're hoping for with the factorplot function, using kind=\"box\". But, you'll first have to \"melt\" the sample dataframe into what is called long-form or \"tidy\" format where each column is a variable and each row is an observation: \n```\ndf_long = pd.melt(df, \"b\", var_name=\"a\", value_name=\"c\")\n```\nThen it's very simple to plot: \n```\nsns.factorplot(\"a\", hue=\"b\", y=\"c\", data=df_long, kind=\"box\")\n```","best_answers_score":0.7152,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25284859\/grouping-boxplots-in-seaborn-when-input-is-a-dataframe","best_answers_votes":27,"tags":["python","pandas","matplotlib","seaborn","boxplot"],"question_length":1399,"response_length":699,"tag_count":5} {"question":"Plot two histograms on single chart I created a histogram plot using data from a file and no problem. Now I wanted to superpose data from another file in the same histogram, so I do something like this \n```\nn,bins,patchs = ax.hist(mydata1,100)\nn,bins,patchs = ax.hist(mydata2,100)\n```\nbut the problem is that for each interval, only the bar with the highest value appears, and the other is hidden. I wonder how could I plot both histograms at the same time with different colors.","response":"Here you have a working example: \n```\nimport random\nimport numpy\nfrom matplotlib import pyplot\n\nx = [random.gauss(3,1) for _ in range(400)]\ny = [random.gauss(4,2) for _ in range(400)]\n\nbins = numpy.linspace(-10, 10, 100)\n\npyplot.hist(x, bins, alpha=0.5, label='x')\npyplot.hist(y, bins, alpha=0.5, label='y')\npyplot.legend(loc='upper right')\npyplot.show()\n```","best_answers_score":0.7144,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6871201\/plot-two-histograms-on-single-chart","best_answers_votes":614,"tags":["python","matplotlib","histogram"],"question_length":479,"response_length":358,"tag_count":3} {"question":"Why do many examples use `fig, ax = plt.subplots()` I'm learning to use matplotlib by studying examples, and a lot of examples seem to include a line like the following before creating a single plot... \n```\nfig, ax = plt.subplots()\n```\nHere are some examples... 
Modify tick label text http:\/\/matplotlib.org\/examples\/pylab_examples\/boxplot_demo2.html I see this function used a lot, even though the example is only attempting to create a single chart. Is there some other advantage? The official demo for subplots() also uses f, ax = subplots when creating a single chart, and it only ever references ax after that. This is the code they use. \n```\n# Just a figure and one subplot\nf, ax = plt.subplots()\nax.plot(x, y)\nax.set_title('Simple plot')\n```","response":"plt.subplots() is a function that returns a tuple containing a figure and axes object(s). Thus when using fig, ax = plt.subplots() you unpack this tuple into the variables fig and ax. Having fig is useful if you want to change figure-level attributes or save the figure as an image file later (e.g. with fig.savefig('yourfilename.png')). You certainly don't have to use the returned figure object but many people do use it later so it's common to see. Also, all axes objects (the objects that have plotting methods), have a parent figure object anyway, thus: \n```\nfig, ax = plt.subplots()\n```\nis more concise than this: \n```\nfig = plt.figure()\nax = fig.add_subplot(111)\n```","best_answers_score":0.7142,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34162443\/why-do-many-examples-use-fig-ax-plt-subplots","best_answers_votes":567,"tags":["python","matplotlib"],"question_length":747,"response_length":673,"tag_count":2} {"question":"How to retrieve colorbar instance from figure in matplotlib all. I want to update the colorbar of a figure when the imagedata is changed. So something like: \n```\nimg = misc.lena()\nfig = plt.figure()\nax = plt.imshow(im)\nplt.colorbar(ax)\nnewimg = img+10*np.randn(512,512)\n\ndef update_colorbar(fig,ax,newimg):\n cbar = fig.axes[1]\n ax.set_data(newimg)\n cbar.update_normal(ax)\n plt.draw()\n```\nbut it seems that returned results from fig.axes() does not have the colorbar instance like I expected. I can probably just pass the colorbar instance as an argument to the update function, but I thought just passing one fig parameter may be good enough. Can anyone explain a little bit on how to retrieve the colorbar from the figure? Or why 'fig.axes()' doesn't return the AxesImage or Colobar instance but just the Axes or AxesSubplot? I think I just need more understanding of the Axes\/Figure stuff.Thank you!","response":"Sometimes it can be useful to retrieve a colorbar even if it was not held in a variable. In this case, it is possible to retrieve the colorbar from the plot with: \n```\n# Create an example image and colourbar\nimg = np.arange(20).reshape(5,4)\nplt.imshow(img)\nplt.colorbar()\n\n# Get the current axis \nax = plt.gca() \n\n# Get the images on an axis\nim = ax.images \n\n# Assume colorbar was plotted last one plotted last\ncb = im[-1].colorbar \n\n# Do any actions on the colorbar object (e.g. remove it)\ncb.remove()\n```\nEDIT: or, equivalently, the one liner: \n```\nplt.gca().images[-1].colorbar.remove()\n```\nN.B.: see also comments for the use of ax.collections[-1] instead of ax.images[-1]. For me it always worked only the first way, I don't know what depends on, maybe the type of data or plot. Now you can operate on cb as if it were stored using commands described in the colorbar API. For instance you could change xlim or call update as explained in other comments. You could remove it with cb.remove() and recreate it with plt.colorbar(). plt.draw() or show should be called after to update plot. 
As the image is the mappable associated to the colorbar and can be obtained with cb.mappable.","best_answers_score":0.7138,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19816820\/how-to-retrieve-colorbar-instance-from-figure-in-matplotlib","best_answers_votes":61,"tags":["python","matplotlib","colorbar"],"question_length":913,"response_length":1200,"tag_count":3} {"question":"How to plot statsmodels linear regression (OLS) cleanly Problem Statement: I have some nice data in a pandas dataframe. I'd like to run simple linear regression on it: Using statsmodels, I perform my regression. Now, how do I get my plot? I've tried statsmodels' plot_fit method, but the plot is a little funky: I was hoping to get a horizontal line which represents the actual result of the regression. Statsmodels has a variety of methods for plotting regression (a few more details about them here) but none of them seem to be the super simple \"just plot the regression line on top of your data\" -- plot_fit seems to be the closest thing. Questions: The first picture above is from pandas' plot function, which returns a matplotlib.axes._subplots.AxesSubplot. Can I overlay a regression line easily onto that plot? Is there a function in statsmodels I've overlooked? Is there a better way to put together this figure? Two related questions: Plotting Pandas OLS linear regression results Getting the regression line to plot from a Pandas regression Neither seems to have a good answer. Sample data \n```none\nmotifScore expression\n6870 1.401123 0.55\n10456 1.188554 -1.58\n12455 1.476361 -1.75\n18052 1.805736 0.13\n19725 1.110953 2.30\n30401 1.744645 -0.49\n30716 1.098253 -1.59\n30771 1.098253 -2.04\n```\nabline_plot I had tried this, but it doesn't seem to work... not sure why:","response":"As I mentioned in the comments, seaborn is a great choice for statistical data visualization. \n```\nimport seaborn as sns\n\nsns.regplot(x='motifScore', y='expression', data=motif)\n```\nAlternatively, you can use statsmodels.regression.linear_model.OLS and manually plot a regression line. \n```\nimport statsmodels.api as sm\n\n# regress \"expression\" onto \"motifScore\" (plus an intercept)\nmodel = sm.OLS(motif.expression, sm.add_constant(motif.motifScore))\np = model.fit().params\n\n# generate x-values for your regression line (two is sufficient)\nx = np.arange(1, 3)\n\n# scatter-plot data\nax = motif.plot(x='motifScore', y='expression', kind='scatter')\n\n# plot regression line on the same axes, set x-axis limits\nax.plot(x, p.const + p.motifScore * x)\nax.set_xlim([1, 2])\n```\nYet another solution is statsmodels.graphics.regressionplots.abline_plot which takes away some of the boilerplate from the above approach. 
\n```\nimport statsmodels.api as sm\nfrom statsmodels.graphics.regressionplots import abline_plot\n\n# regress \"expression\" onto \"motifScore\" (plus an intercept)\nmodel = sm.OLS(motif.expression, sm.add_constant(motif.motifScore))\n\n# scatter-plot data\nax = motif.plot(x='motifScore', y='expression', kind='scatter')\n\n# plot regression line\nabline_plot(model_results=model.fit(), ax=ax)\n```","best_answers_score":0.7131,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42261976\/how-to-plot-statsmodels-linear-regression-ols-cleanly","best_answers_votes":38,"tags":["python","pandas","matplotlib","linear-regression","statsmodels"],"question_length":1415,"response_length":1289,"tag_count":5} {"question":"How is order of items in matplotlib legend determined? I am having to reorder items in a legend, when I don't think I should have to. I try: \n```\nfrom pylab import *\nclf()\nax=gca()\nht=ax.add_patch(Rectangle((1,1),1,1,color='r',label='Top',alpha=.1))\nh1=ax.bar(1,2,label='Middle')\nhb=ax.add_patch(Rectangle((1,1),1,1,color='k',label='Bottom',alpha=.11))\nlegend()\nshow()\n```\nand end up with Bottom above Middle. How can I get the right order? Is it not determined by creation order? Update: The following can be used to force the order. I think this may be the simplest way to do it, and that seems awkward. The question is what determines the original order? \n```\nhh=[ht,h1,hb]\nlegend([ht,h1.patches[0],hb],[H.get_label() for H in hh])\n```","response":"A slight variation on some other aswers. The list order should have the same length as the number of legend items, and specifies the new order manually. \n```\nhandles, labels = plt.gca().get_legend_handles_labels()\norder = [0,2,1]\nplt.legend([handles[idx] for idx in order],[labels[idx] for idx in order])\n```","best_answers_score":0.7129,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22263807\/how-is-order-of-items-in-matplotlib-legend-determined","best_answers_votes":145,"tags":["python","matplotlib","legend"],"question_length":738,"response_length":308,"tag_count":3} {"question":"How to find the intersection of two graphs Let 0 <= x <= 1. I have two columns f and g of length 5000 respectively. Now I plot: \n```\nplt.plot(x, f, '-')\nplt.plot(x, g, '*')\n```\nI want to find the point 'x' where the curve intersects. I don't want to find the intersection of f and g. I can do it simply with: \n```\nset(f) & set(g)\n```","response":"You can use np.sign in combination with np.diff and np.argwhere to obtain the indices of points where the lines cross (in this case, the points are [ 0, 149, 331, 448, 664, 743]): \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(0, 1000)\nf = np.arange(0, 1000)\ng = np.sin(np.arange(0, 10, 0.01) * 2) * 1000\n\nplt.plot(x, f, '-')\nplt.plot(x, g, '-')\n\nidx = np.argwhere(np.diff(np.sign(f - g))).flatten()\nplt.plot(x[idx], f[idx], 'ro')\nplt.show()\n```\nFirst it calculates f - g and the corresponding signs using np.sign. Applying np.diff reveals all the positions, where the sign changes (e.g. the lines cross). 
Using np.argwhere gives us the exact indices.","best_answers_score":0.7128,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28766692\/how-to-find-the-intersection-of-two-graphs","best_answers_votes":151,"tags":["python","matplotlib","intersection"],"question_length":333,"response_length":675,"tag_count":3} {"question":"Need to add space between SubPlots for X axis label, maybe remove labelling of axis notches Looking to add in vertical space between plotted graphs to allow a X-Axis label to show: Each graph needs to have space to show the day, currently the last 2 graphs are the only one's that show simply because the graphs are overlapping it. Also curious if I could actually remove the notch labels for the X-Axis for the graphs above the one's marked Thursday\/Friday, i.e. the bottom X-axis is the only one that shows. Same for the Y-Axis, but only the graphs on the left having the scale shown. *Unfortunately I can't post an image to show this since I don't have enough rep. Code snippet: \n```\nimport mathlib.pyplot as pyplot\nfig = pyplot.figure()\nax1 = fig.add_subplot(4,2,1)\nax1.set_yscale('log')\nax2 = fig.add_subplot(4,2,2, sharex=ax1, sharey=ax1)\nax3 = fig.add_subplot(4,2,3, sharex=ax2, sharey=ax2)\nax4 = fig.add_subplot(4,2,4, sharex=ax3, sharey=ax3)\nax5 = fig.add_subplot(4,2,5, sharex=ax4, sharey=ax4)\nax6 = fig.add_subplot(4,2,6, sharex=ax5, sharey=ax5)\nax7 = fig.add_subplot(4,2,7, sharex=ax6, sharey=ax6)\nax1.plot(no_dict[\"Saturday\"],'k.-',label='Saturday')\nax1.set_xlabel('Saturday')\nax1.axis([0,24,0,10000])\npyplot.suptitle('Title')\npyplot.xlabel('Hour in 24 Hour Format')\nax2.plot(no_dict[\"Sunday\"],'b.-',label='Sunday')\nax2.set_xlabel('Sunday')\n...\n```","response":"Use subplots_adjust. In your case this looks good: \n```\nfig.subplots_adjust(hspace=.5)\n```\nto remove the tick labels do this: \n```\nax1.set_xticklabels([])\n```\nSimilar for the yticklabels. However, you cannot share the x-axis with the plots that do have tick labels.","best_answers_score":0.7124,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5159065\/need-to-add-space-between-subplots-for-x-axis-label-maybe-remove-labelling-of-a","best_answers_votes":113,"tags":["python","matplotlib"],"question_length":1361,"response_length":265,"tag_count":2} {"question":"Matplotlib scatter plot with unknown error I am attempting to create a scatter plot. I have a list of numbers from 0 - 17 as well as an array with 18 values. I can plot the data as a line plot but when I try to plot as a scatter, I get an error message I do not understand: \n```none\nTypeError: ufunc 'sqrt' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''\n```\nWhat does this error message mean and how can I get the data to plot as a scatter? \n```py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ny = [7316.0, 7453.25, 7518.25, 7711.5, 7448.0, 7210.25, 7416.75, 6960.75, \n 7397.75, 6397.5, 5522.75, 5139.0, 5034.75, 4264.75, 5106.0, 3489.5, \n 4712.0, 4770.0]\nx = np.arange(0,18,1)\n\nplt.rcParams['legend.loc'] = 'best'\nplt.figure(1)\nplt.xlim(0, 20)\nplt.ylim(0, 10000)\nplt.scatter(x, y, 'r')\nplt.show()\n```","response":"Check the scatter documentation. Third argument is for size of points and should be scalar or array_like. 
I assume 'r' is for color so do the following: \n```\nplt.scatter(x, y, c='r')\n```","best_answers_score":0.7112,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35733223\/matplotlib-scatter-plot-with-unknown-error","best_answers_votes":97,"tags":["python","python-3.x","numpy","matplotlib","scatter-plot"],"question_length":908,"response_length":186,"tag_count":5} {"question":"Unable to show legend in seaborn distplot I am new to plotting in python and trying following code to plot distribution in seaborn but unable to see the legend, i.e., test_label1 and test_label1 on the plot. \n```\nimport matplotlib.pylab as plt\nimport seaborn as sns\nimport numpy as np\n\nplt.figure(\"Test Plots\")\nlst1 = list(np.random.rand(10))\nlst2 = list(np.random.rand(10))\nsns.distplot(lst1, label='test_label1', color=\"0.25\")\nsns.distplot(lst2, label='test_label2', color=\"0.25\")\n\nplt.show()\n```","response":"As you have already labelled your plots using label= inside your sns.distplot then all you have to do is show your legend. This is done by adding plt.legend() just before plt.show() More information on matplotlib legends can be found in the documentation","best_answers_score":0.7106,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44968012\/unable-to-show-legend-in-seaborn-distplot","best_answers_votes":79,"tags":["python","matplotlib","legend","seaborn"],"question_length":498,"response_length":254,"tag_count":4} {"question":"How do I let my matplotlib plot go beyond the axes? I have to translate an image plotting script from matlab to matplotlib\/pylab, and I'm trying to achieve the same effect as the matlab image below: As you can see, the z order of the plots seem to be higher than the z order of the grid, so the markers are not hidden by the axes. However, I can't figure out a way to do the same with my matplotlib image: I'm wondering if it is possible to get the same display without having to increase the limits of the y axis.","response":"To get the marker to show beyond the axes you can turn the clipping off. This can be done using the keyword argument in the plot command clip_on=False. 
For example: \n```\nimport matplotlib.pyplot as plt\nplt.plot(range(5), range(5), 'ro', markersize=20, clip_on=False, zorder=100)\nplt.show()\n```","best_answers_score":0.7106,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9912206\/how-do-i-let-my-matplotlib-plot-go-beyond-the-axes","best_answers_votes":65,"tags":["python","matplotlib","z-order","axes"],"question_length":514,"response_length":293,"tag_count":4} {"question":"AtributeError: 'module' object has no attribute 'plt' - Seaborn I'm very new with these libraries and i'm having troubles while plotting this: \n```\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport random\n\ndf5 = pd.read_csv('..\/..\/..\/..\/datos\/tiempos-exacto-variando-n-m0.csv', sep=', ', engine='python')\nprint(df5)\ndf5['n'] = df5['n'].apply(lambda x: x**2)\nsns.jointplot(df5['n'], df5['tiempoTotal'], kind=\"reg\")\nsns.plt.show()\n```\nAnd i'm getting this output: \n```\nn m tiempoTotal\n0 1 0 2274\n1 2 0 3370\n2 3 0 5709\n3 4 0 8959\n4 5 0 13354\n5 6 0 18503\n6 7 0 26329\n7 8 0 33859\n8 9 0 41110\n9 10 0 52710\n10 11 0 64364\n11 12 0 74142\n12 13 0 81072\n13 14 0 69332\n14 15 0 71027\n15 16 0 89721\n16 17 0 85459\n17 18 0 95217\n18 19 0 119210\n19 20 0 136888\n20 21 0 131903\n21 22 0 138395\n22 23 0 151222\n23 24 0 163542\n24 25 0 177236\n25 26 0 192475\n26 27 0 240162\n27 28 0 260701\n28 29 0 235752\n29 30 0 250835\n.. ... .. ...\n580 581 0 88306854\n581 582 0 89276420\n582 583 0 87457875\n583 584 0 90807004\n584 585 0 87790003\n585 586 0 89821530\n586 587 0 89486585\n587 588 0 88496901\n588 589 0 89090661\n589 590 0 89110803\n590 591 0 90397942\n591 592 0 94029839\n592 593 0 92749859\n593 594 0 105991135\n594 595 0 95383921\n595 596 0 105155207\n596 597 0 114193414\n597 598 0 98108892\n598 599 0 97888966\n599 600 0 103802453\n600 601 0 97249346\n601 602 0 101917488\n602 603 0 104943847\n603 604 0 98966140\n604 605 0 97924262\n605 606 0 97379587\n606 607 0 97518808\n607 608 0 99839892\n608 609 0 100046492\n609 610 0 103857464\n\n[610 rows x 3 columns]\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n in ()\n 9 df5['n'] = df5['n'].apply(lambda x: x**2)\n 10 sns.jointplot(df5['n'], df5['tiempoTotal'], kind=\"reg\")\n---> 11 sns.plt.show()\n\nAttributeError: 'module' object has no attribute 'plt'\n```\nI'm running this in my Jupyter Notebook with Python 2.7.12. Any ideas?","response":"sns.plt.show() works fine for me using seaborn 0.7.1. Could be that this is different in other versions. However, if you anyways import matplotlib.pyplot as plt you may as well simply use plt.show(), as sns.plt.show() is only working because pyplot is available inside the seaborn namespace.","best_answers_score":0.7093,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45070959\/atributeerror-module-object-has-no-attribute-plt-seaborn","best_answers_votes":37,"tags":["python","pandas","matplotlib","seaborn"],"question_length":2549,"response_length":291,"tag_count":4} {"question":"Create own colormap using matplotlib and plot color scale I have the following problem, I want to create my own colormap (red-mix-violet-mix-blue) that maps to values between -2 and +2 and want to use it to color points in my plot. The plot should then have the colorscale to the right. That is how I create the map so far. But I am not really sure if it mixes the colors. 
\n```\ncmap = matplotlib.colors.ListedColormap([\"red\",\"violet\",\"blue\"], name='from_list', N=None)\nm = cm.ScalarMappable(norm=norm, cmap=cmap)\n```\nThat way I map the colors to the values. \n```\ncolors = itertools.cycle([m.to_rgba(1.22), ..])\n```\nThen I plot it: \n```\nfor i in range(0, len(array_dg)):\n plt.plot(array_dg[i], markers.next(),alpha=alpha[i], c=colors.next())\n```\nMy problems are: 1. I can't plot the color scale. 2. I am not completely sure if my scale is creating a continuous (smooth) colorscale.","response":"Since the methods used in other answers seem quite complicated for such an easy task, here is a new answer: Instead of a ListedColormap, which produces a discrete colormap, you may use a LinearSegmentedColormap. This can easily be created from a list using the from_list method. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.colors\n\nx,y,c = zip(*np.random.rand(30,3)*4-2)\n\nnorm=plt.Normalize(-2,2)\ncmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", [\"red\",\"violet\",\"blue\"])\n\nplt.scatter(x,y,c=c, cmap=cmap, norm=norm)\nplt.colorbar()\nplt.show()\n```\nMore generally, if you have a list of values (e.g. [-2., -1, 2]) and corresponding colors (e.g. [\"red\",\"violet\",\"blue\"]), such that the nth value should correspond to the nth color, you can normalize the values and supply them as tuples to the from_list method. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.colors\n\nx,y,c = zip(*np.random.rand(30,3)*4-2)\n\ncvals = [-2., -1, 2]\ncolors = [\"red\",\"violet\",\"blue\"]\n\nnorm=plt.Normalize(min(cvals),max(cvals))\ntuples = list(zip(map(norm,cvals), colors))\ncmap = matplotlib.colors.LinearSegmentedColormap.from_list(\"\", tuples)\n\nplt.scatter(x,y,c=c, cmap=cmap, norm=norm)\nplt.colorbar()\nplt.show()\n```","best_answers_score":0.7088,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16834861\/create-own-colormap-using-matplotlib-and-plot-color-scale","best_answers_votes":136,"tags":["python","matplotlib","plot"],"question_length":880,"response_length":1257,"tag_count":3} {"question":"How to get center of set of points using Python I would like to get the center point(x,y) of a figure created by a set of points. How do I do this?","response":"If you mean the centroid, you just take the average of all the points. \n```\nx = [p[0] for p in points]\ny = [p[1] for p in points]\ncentroid = (sum(x) \/ len(points), sum(y) \/ len(points))\n```","best_answers_score":0.7087,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4355894\/how-to-get-center-of-set-of-points-using-python","best_answers_votes":46,"tags":["python","matplotlib","triangulation"],"question_length":147,"response_length":184,"tag_count":3} {"question":"Simple line plots using seaborn I'm trying to plot a ROC curve using seaborn (python). With matplotlib I simply use the function plot: \n```\nplt.plot(one_minus_specificity, sensitivity, 'bs--')\n```\nwhere one_minus_specificity and sensitivity are two lists of paired values. Is there a simple counterpart of the plot function in seaborn? I had a look at the gallery but I didn't find any straightforward method.","response":"Since seaborn also uses matplotlib to do its plotting, you can easily combine the two.
If you only want to adopt the styling of seaborn, the set_style function should get you started: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\n\nsns.set_style(\"darkgrid\")\nplt.plot(np.cumsum(np.random.randn(1000,1)))\nplt.show()\n```\nResult:","best_answers_score":0.7083,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31069191\/simple-line-plots-using-seaborn","best_answers_votes":88,"tags":["python","matplotlib","plot","seaborn","roc"],"question_length":410,"response_length":354,"tag_count":5} {"question":"How can one display an image using cv2 in Python I've been working with code to display frames from a movie. The bare bones of the code is as follows: \n```\nimport cv2\nimport matplotlib.pyplot as plt\n\n# Read single frame avi\ncap = cv2.VideoCapture('singleFrame.avi')\nrval, frame = cap.read()\n\n# Attempt to display using cv2 (doesn't work)\ncv2.namedWindow(\"Input\")\ncv2.imshow(\"Input\", frame)\n\n#Display image using matplotlib (Works)\nb,g,r = cv2.split(frame)\nframe_rgb = cv2.merge((r,g,b))\nplt.imshow(frame_rgb)\nplt.title('Matplotlib') #Give this plot a title, \n #so I know it's from matplotlib and not cv2\nplt.show()\n```\nBecause I can display the image using matplotlib, I know that I'm successfully reading it in. I don't understand why my creation of a window and attempt to show an image using cv2 doesn't work. No cv2 window ever appears. Oddly though, if I create a second cv2 window, the 'input' window appears, but it is only a blank\/white window. What am I missing here?","response":"As far as I can see, you are doing it almost right. There is one thing missing: \n```\ncv2.imshow('image', frame)\ncv2.waitKey(0)\n```\nSo your window probably appears but is closed very quickly.","best_answers_score":0.7082,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/34966541\/how-can-one-display-an-image-using-cv2-in-python","best_answers_votes":134,"tags":["python","opencv","matplotlib"],"question_length":999,"response_length":188,"tag_count":3} {"question":"Annotating a 3D scatter plot I'm trying to generate a 3D scatter plot using Matplotlib. I would like to annotate individual points like the 2D case here: How to put individual tags for a matplotlib scatter plot?. I've tried to use this function and consulted the Matplotlib documentation, but it seems that the library does not support 3D annotation. Does anyone know how to do this?","response":"Maybe easier via ax.text(...): \n```\nfrom matplotlib import pyplot\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom numpy.random import rand\nfrom pylab import figure\n\n \nm=rand(3,3) # m is an array of (x,y,z) coordinate triplets\n \nfig = figure()\nax = fig.add_subplot(projection='3d')\n\nfor i in range(len(m)): #plot each point + its index as text above\n ax.scatter(m[i,0],m[i,1],m[i,2],color='b') \n ax.text(m[i,0],m[i,1],m[i,2], '%s' % (str(i)), size=20, zorder=1, \n color='k') \n\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\npyplot.show()\n```","best_answers_score":0.7081,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10374930\/annotating-a-3d-scatter-plot","best_answers_votes":127,"tags":["python","matplotlib","matplotlib-3d"],"question_length":383,"response_length":559,"tag_count":3} {"question":"Matplotlib: Save figure as file from iPython notebook I am trying to save a Matplotlib figure as a file from an iPython notebook.
\n```\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_axes([1,1,1,1])\nax.plot([1,2])\n\nfig.savefig('test.png')\n```\nThe inline view in the iPython notebook looks good: The file 'test.png' is almost empty though. It looks like the plot is shifted to the top right; you can see the tick labels '1.0' and '0.0' in the corner. How can I produce a file from the iPython notebook that looks like the inline view?","response":"Problem solved: add the bbox_inches='tight' argument to savefig. \n```\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_axes([1,1,1,1])\nplt.plot([1,2])\n\nplt.savefig('test.png', bbox_inches='tight')\n```\nI don't understand what's happening here, but the file looks like the iPython notebook inline file now. Yay.","best_answers_score":0.7081,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19271309\/matplotlib-save-figure-as-file-from-ipython-notebook","best_answers_votes":44,"tags":["python","matplotlib","jupyter-notebook"],"question_length":550,"response_length":318,"tag_count":3} {"question":"Annotate bars with values on Pandas bar plots I was looking for a way to annotate my bars in a Pandas bar plot with the rounded numerical values from my DataFrame. \n```python\ndf=pd.DataFrame({'A':np.random.rand(2),'B':np.random.rand(2)},index=['value1','value2'] )\ndf\n#Output\n# A B\n# value1 0.440922 0.911800\n# value2 0.588242 0.797366\n```\nI would like to get something like this: I tried with this code sample, but the annotations are all centered on the x ticks: \n```python\nax = df.plot(kind='bar')\nfor idx, label in enumerate(list(df.index)):\n for acc in df.columns:\n value = np.round(df.ix[idx][acc],decimals=2)\n ax.annotate(value,\n (idx, value),\n xytext=(0, 15),\n textcoords='offset points')\n```","response":"You get it directly from the axes' patches: \n```\nfor p in ax.patches:\n ax.annotate(str(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.005))\n```\nYou'll want to tweak the string formatting and the offsets to get things centered, maybe use the width from p.get_width(), but that should get you started. It may not work with stacked bar plots unless you track the offsets somewhere.","best_answers_score":0.708,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25447700\/annotate-bars-with-values-on-pandas-bar-plots","best_answers_votes":228,"tags":["python","matplotlib","plot","pandas","dataframe"],"question_length":835,"response_length":390,"tag_count":5} {"question":"matplotlib: Aligning y-axis labels in stacked scatter plots In the plot below I have two scatter plots which have different number scales, so their Y-axis labels are not aligned. Is there any way I can force the horizontal alignment in the y-axis labels? \n```\nimport matplotlib.pylab as plt\nimport random\nimport matplotlib.gridspec as gridspec\n\nrandom.seed(20)\ndata1 = [random.random() for i in range(10)]\ndata2 = [random.random()*1000 for i in range(10)]\n\ngs = gridspec.GridSpec(2,1)\nfig = plt.figure()\n\nax = fig.add_subplot(gs[0])\nax.plot(data1)\nax.set_ylabel(r'Label One', size =16)\n\nax = fig.add_subplot(gs[1])\nax.plot(data2)\nax.set_ylabel(r'Label Two', size =16)\n\nplt.show()\n```","response":"You can use the set_label_coords method.
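It pins the label at a fixed position in axes-fraction coordinates (below, x=-0.1 and y=0.5 for both subplots), so the labels line up no matter how wide each subplot's tick labels are: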
\n```\nimport matplotlib.pylab as plt\nimport random\nimport matplotlib.gridspec as gridspec\n\nrandom.seed(20)\ndata1 = [random.random() for i in range(10)]\ndata2 = [random.random()*1000 for i in range(10)]\n\ngs = gridspec.GridSpec(2,1)\nfig = plt.figure()\n\nax = fig.add_subplot(gs[0])\nax.plot(data1)\nax.set_ylabel(r'Label One', size =16)\nax.get_yaxis().set_label_coords(-0.1,0.5)\n\nax = fig.add_subplot(gs[1])\nax.plot(data2)\nax.set_ylabel(r'Label Two', size =16)\nax.get_yaxis().set_label_coords(-0.1,0.5)\n```","best_answers_score":0.7073,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19277324\/matplotlib-aligning-y-axis-labels-in-stacked-scatter-plots","best_answers_votes":36,"tags":["python","matplotlib","plot","alignment","axis-labels"],"question_length":683,"response_length":541,"tag_count":5} {"question":"How to modify the navigation toolbar easily in a matplotlib figure window? Is it possible to do something like the following to modify the navigation toolbar in matplotlib? Generate a figure window, with: fig = figure() Get a reference of the navigation tool-bar, with: tbar = fig.get_navigation_toolbar(), or better yet, just by: tbar = fig.navtbar Modify the tool-bar through the reference tbar, such as delete\/add\/edit a button with something like this: tbar.add_button(); tbar.remove_button(a reference to a button); tbar.edit_button(a reference to a button); Update the figure with: fig.canvas.draw() Thank you very much.","response":"The way I found to remove unwanted toolbar items is making a subclass, which is instantiated and used in a GTK application. As I manually create Figure, FigureCanvas and NavigationToolbar objects anyway, this was the easiest way. \n```\nclass NavigationToolbar(NavigationToolbar2GTKAgg):\n # only display the buttons we need\n toolitems = [t for t in NavigationToolbar2GTKAgg.toolitems if\n t[0] in ('Home', 'Pan', 'Zoom', 'Save')]\n```\nIf you want to create custom buttons, you should take a look at the definition of NavigationToolbar2 in backend_bases. You can easily add your own entries to the toolitems list and define appropriate callback functions in your toolbar subclass.","best_answers_score":0.7071,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12695678\/how-to-modify-the-navigation-toolbar-easily-in-a-matplotlib-figure-window","best_answers_votes":30,"tags":["python","matplotlib"],"question_length":626,"response_length":697,"tag_count":2} {"question":"Matplotlib DateFormatter for axis label not working I'm trying to adjust the formatting of the date tick labels of the x-axis so that it only shows the Year and Month values. From what I've found online, I have to use mdates.DateFormatter, but it's not taking effect at all with my current code as is. Anyone see where the issue is?
(the dates are the index of the pandas Dataframe) \n```\nimport matplotlib.dates as mdates\nimport matplotlib.pyplot as plt\nimport pandas as pd \n\nfig = plt.figure(figsize = (10,6))\nax = fig.add_subplot(111)\n\nax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))\n\nbasicDF['some_column'].plot(ax=ax, kind='bar', rot=75)\n\nax.xaxis_date()\n```\nReproducible scenario code: \n```\nimport numpy as np\nimport matplotlib.dates as mdates\nimport matplotlib.pyplot as plt\nimport pandas as pd \n\nrng = pd.date_range('1\/1\/2014', periods=20, freq='m')\n\nblah = pd.DataFrame(data = np.random.randn(len(rng)), index=rng)\n\nfig = plt.figure(figsize = (10,6))\nax = fig.add_subplot(111)\n\nax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))\n\nblah.plot(ax=ax, kind='bar')\n\nax.xaxis_date()\n```\nStill can't get just the year and month to show up. If I set the format after .plot, I get an error like this: ValueError: DateFormatter found a value of x=0, which is an illegal date. This usually occurs because you have not informed the axis that it is plotting dates, e.g., with ax.xaxis_date(). It's the same whether I put it before ax.xaxis_date() or after.","response":"pandas just doesn't work well with custom date-time formats. You need to just use raw matplotlib in cases like this. \n```\nimport numpy\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport pandas\n\nN = 20\nnumpy.random.seed(N)\n\ndates = pandas.date_range('1\/1\/2014', periods=N, freq='m')\ndf = pandas.DataFrame(\n data=numpy.random.randn(N), \n index=dates,\n columns=['A']\n)\n\nfig, ax = plt.subplots(figsize=(10, 6))\nax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))\nax.bar(df.index, df['A'], width=25, align='center')\n```\nAnd that gives me:","best_answers_score":0.7061,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33743394\/matplotlib-dateformatter-for-axis-label-not-working","best_answers_votes":40,"tags":["python","date","pandas","matplotlib"],"question_length":1469,"response_length":572,"tag_count":4} {"question":"Improve subplot size\/spacing with many subplots I need to generate a whole bunch of vertically-stacked plots in matplotlib. The result will be saved using savefig and viewed on a webpage, so I don't care how tall the final image is, as long as the subplots are spaced so they don't overlap. No matter how big I allow the figure to be, the subplots always seem to overlap.
My code currently looks like \n```\nimport matplotlib.pyplot as plt\nimport my_other_module\n\ntitles, x_lists, y_lists = my_other_module.get_data()\n\nfig = plt.figure(figsize=(10,60))\nfor i, y_list in enumerate(y_lists):\n plt.subplot(len(titles), 1, i + 1)\n plt.xlabel(\"Some X label\")\n plt.ylabel(\"Some Y label\")\n plt.title(titles[i])\n plt.plot(x_lists[i],y_list)\nfig.savefig('out.png', dpi=100)\n```","response":"Please review matplotlib: Tight Layout guide and try using matplotlib.pyplot.tight_layout, or matplotlib.figure.Figure.tight_layout As a quick example: \n```\nimport matplotlib.pyplot as plt\n\nfig, axes = plt.subplots(nrows=4, ncols=4, figsize=(8, 8))\nfig.tight_layout() # Or equivalently, \"plt.tight_layout()\"\n\nplt.show()\n```\nWithout Tight Layout With Tight Layout","best_answers_score":0.7059,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6541123\/improve-subplot-size-spacing-with-many-subplots","best_answers_votes":783,"tags":["python","pandas","matplotlib","seaborn","subplot"],"question_length":777,"response_length":363,"tag_count":5} {"question":"How to set color in matplotlib histograms I am plotting a histogram using Matplotlib. I would like the color of the histogram to be \"sky blue\". But the data overlaps, and produces a histogram which is nearly black in color. \n```py\nplt.hist(data, color = \"skyblue\")\n```\nBelow is how the histogram looks. As you can see, even though I specified the color to be \"skyblue\", the histogram on the right is nearly black.","response":"The reason the histogram looks black is that the bars' surrounding lines (which are black) take most of the space. Options would be to get rid of the edges by setting the linewidth to zero: \n```\nplt.hist(data, color = \"skyblue\", lw=0)\n```\nand\/or to set the edgecolor to the same color as the bars themselves: \n```\nplt.hist(data, color = \"skyblue\", ec=\"skyblue\")\n```","best_answers_score":0.7055,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42172440\/how-to-set-color-in-matplotlib-histograms","best_answers_votes":34,"tags":["python","matplotlib","colors","histogram"],"question_length":412,"response_length":366,"tag_count":4} {"question":"Change values on matplotlib imshow() graph axis Say I have some input data: \n```py\ndata = np.random.normal(loc=100, scale=10, size=(500,1,32))\nhist = np.ones((32, 20)) # initialise hist\nfor z in range(32):\n hist[z], edges = np.histogram(data[:, 0, z], bins=np.arange(80, 122, 2))\n```\nI can plot it using imshow(): \n```py\nplt.imshow(hist, cmap='Reds')\n```\ngetting: However, the x-axis values do not match the input data (i.e. mean of 100, range from 80 to 122). Therefore, I'd like to change the x-axis to show the values in edges. I have tried: \n```py\nax = plt.gca()\nax.set_xlabel([80,122]) # range of values in edges\n...\n# this shifts the plot so that nothing is visible\n```\nand \n```py\nax.set_xticklabels(edges)\n...\n# this labels the axis but does not centre around the mean:\n```\nAny ideas on how I can change the axis values to reflect the input data I am using?","response":"I would try to avoid changing the xticklabels if possible; otherwise it can get very confusing if you, for example, overplot your histogram with additional data. Defining the range of your grid is probably the best option, and with imshow it can be done by adding the extent keyword. This way the axes get adjusted automatically.
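The extent is given as [left, right, bottom, top] in data coordinates, so the ticks then refer to your actual value range (here 80 to 120 on the x-axis) instead of array indices, as the snippet below shows.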
If you want to change the labels I would use set_xticks with perhaps some formatter. Altering the labels directly should be the last resort. \n```\nfig, ax = plt.subplots(figsize=(6,6))\n\nax.imshow(hist, cmap=plt.cm.Reds, interpolation='none', extent=[80,120,32,0])\nax.set_aspect(2) # you may also use ax.imshow(..., aspect=\"auto\") to restore the aspect ratio\n```","best_answers_score":0.7051,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/18696122\/change-values-on-matplotlib-imshow-graph-axis","best_answers_votes":213,"tags":["python","numpy","matplotlib","plot","imshow"],"question_length":867,"response_length":681,"tag_count":5} {"question":"Python matplotlib decrease size of colorbar labels I need your help! I have a plotting code which is the following: \n```\nfig = plt.figure()\nax1 = fig.add_subplot(111)\n\nimax1 = ax1.imshow(data,interpolation = 'nearest', origin = 'lower',cmap=cm.jet)#plot\ncbar = plt.colorbar(imax1, extend='neither', spacing='proportional',\n orientation='vertical', shrink=0.7, format=\"%.0f\")\ncbar.set_label(r\"ET [mm\/month]\", size=10)\n\ntitlestr = \"Evapotranspiration in mm\/month\"\nplt.title(titlestr)\n#plt.xlabel(\"Longitude\")\n#plt.ylabel(\"Latitude\")\nimax1.set_clim(0,60)\nlabels = [item.get_text() for item in ax1.get_xticklabels()]\nfor ii in range(np.shape(labels)[0]):\n labels[ii] = str(grid_lon[75*ii\/np.shape(labels)[0]])\n\nax1.set_xticklabels(labels, rotation = 45, ha='right', size = 10)\n\nlabels = [item.get_text() for item in ax1.get_yticklabels()]\nfor ii in range(np.shape(labels)[0]):\n labels[ii] = str(grid_lat[75*ii\/np.shape(labels)[0]])\n\nax1.set_yticklabels(labels, size = 10)\n\npngname = \".\/out\/2d_\"+variable+\"_\"+mm+\".png\"\nprint \"save \", pngname\nplt.savefig(pngname, dpi=None, facecolor='w', edgecolor='w',\n orientation='portrait', papertype=None, format=None,\n transparent=False, bbox_inches=None, pad_inches=0.1)\n\n\nprint \"plot finished\"\n```\nI would like to set the label size of the colorbar labels (e.g. 0,10,20,...60) to a size of 10 or smaller. This will probably go into the line \"imax1.set_clim(0,60)\". Any ideas? I'd also be interested in printing information about the imax1 object to the command line. How could I do that? E.g. available attributes and functions of imax1. I deeply appreciate your help!","response":"Aha! Found the answer here: \n```\ncbar.ax.tick_params(labelsize=10)\n```\nP.S. Upvote that answer and give Paul some love!","best_answers_score":0.7038,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15305737\/python-matplotlib-decrease-size-of-colorbar-labels","best_answers_votes":235,"tags":["python","attributes","matplotlib","label","colorbar"],"question_length":1618,"response_length":119,"tag_count":5} {"question":"How can I display a numpy array with imshow The np.array that results from this loop has 4383 lines and 6 columns. I have tried without success to use pylab.imshow() from matplotlib (pylab) to display the array. The objective is to create an image of the array, in which the color gradient represents the magnitude of the array values. Each row of the array represents the variation in depth of a lake temperature in each day (4383 days). Thus the objective is to find differences in lake temperatures in depth and with time.
\n```py\nTempLake = np.zeros((N+1,Nlayers))\nTempLake[0] = T0\n\nQ = np.zeros(N+1)\nQ[0] = 0.0\nfor i in xrange(N):\n Q[i+1]=Qn(HSR[i],TD[i],FW[i],TempLake[i][0])\n TempLake[i+1] = main_loop(Z,z,Areat0,Areat1,TempLake[i],wind[i],Q[i],Q[i+1])\n \npylab.imshow(TempLake)\npylab.show()\n```","response":"You can use imshow as follows: \n```\nimport pylab as plt\nimport numpy as np\n\n\nZ=np.array(((1,2,3,4,5),(4,5,6,7,8),(7,8,9,10,11)))\nim = plt.imshow(Z, cmap='hot')\nplt.colorbar(im, orientation='horizontal')\nplt.show()\n```\nIn your case, can you check the output of TempLake?","best_answers_score":0.7034,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11775354\/how-can-i-display-a-numpy-array-with-imshow","best_answers_votes":29,"tags":["python","numpy","matplotlib","imshow"],"question_length":808,"response_length":268,"tag_count":4} {"question":"Seaborn Bar Plot Ordering I have a pandas dataframe that has two columns. I need the plot ordered by the \"Count\" column. \n```\ndicti=({'37':99943,'25':47228,'36':16933,'40':14996,'35':11791,'34':8030,'24' : 6319 ,'2' :5055 ,'39' :4758 ,'38' :4611 })\npd_df = pd.DataFrame(list(dicti.iteritems()))\npd_df.columns =[\"Dim\",\"Count\"]\nplt.figure(figsize=(12,8))\nax = sns.barplot(x=\"Dim\", y= \"Count\",data=pd_df )\nax.get_yaxis().set_major_formatter(plt.FuncFormatter(lambda x, loc: \"{:,}\".format(int(x))))\nax.set(xlabel=\"Dim\", ylabel='Count')\nfor item in ax.get_xticklabels():\n item.set_rotation(90)\nfor i, v in enumerate(pd_df[\"Count\"].iteritems()): \n ax.text(i ,v[1], \"{:,}\".format(v[1]), color='m', va ='bottom', \n rotation=45)\nplt.tight_layout()\n```\nRight now the plot is getting ordered by the \"Dim\" column; I need it ordered by the \"Count\" column. How can I do this?","response":"You can use the order parameter for this. \n```\nsns.barplot(x='Id', y=\"Speed\", data=df, order=result['Id'])\n```\nCredits to Wayne. See the rest of his code. This link is still working for me. But, for the sake of convenience, I'm pasting the author's code here. \n```\nresult = df.groupby([\"Id\"])['Speed'].aggregate(np.median).reset_index().sort_values('Speed')\nsns.barplot(x='Id', y=\"Speed\", data=df, order=result['Id'])\nplt.show()\n```\ndf \n```\nId Speed\n0 1 30\n1 1 35\n2 1 31\n3 2 20\n4 2 25\n```\nresult \n```\nId Speed\n1 2 22.5\n0 1 31.0\n2 3 80.0\n```","best_answers_score":0.7029,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43770507\/seaborn-bar-plot-ordering","best_answers_votes":59,"tags":["python","pandas","matplotlib","seaborn"],"question_length":879,"response_length":585,"tag_count":4} {"question":"Removing white space around a saved image I need to take an image and save it after some process. The figure looks fine when I display it, but after saving the figure, I get some white space around the saved image. I have tried the 'tight' option for the savefig method; that did not work either. The code: \n```\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\n\nfig = plt.figure(1)\nimg = mpimg.imread(\"image.jpg\")\nplt.imshow(img)\nax = fig.add_subplot(1, 1, 1)\n\nextent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())\nplt.savefig('1.png', bbox_inches=extent)\n\nplt.axis('off') \nplt.show()\n```\nI am trying to draw a basic graph by using NetworkX on a figure and save it.
I realized that without a graph it works, but when a graph is added I get white space around the saved image; \n```\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\nimport networkx as nx\n\nG = nx.Graph()\nG.add_node(1)\nG.add_node(2)\nG.add_node(3)\nG.add_edge(1, 3)\nG.add_edge(1, 2)\npos = {1:[100, 120], 2:[200, 300], 3:[50, 75]}\n\nfig = plt.figure(1)\nimg = mpimg.imread(\"image.jpg\")\nplt.imshow(img)\nax = fig.add_subplot(1, 1, 1)\n\nnx.draw(G, pos=pos)\n\nextent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())\nplt.savefig('1.png', bbox_inches=extent)\n\nplt.axis('off') \nplt.show()\n```","response":"You can remove the white space padding by setting bbox_inches=\"tight\" in savefig: \n```\nplt.savefig(\"test.png\",bbox_inches='tight')\n```\nYou'll have to pass the argument to bbox_inches as a string; perhaps this is why it didn't work earlier for you. Possible duplicates: Matplotlib plots: removing axis, legends and white spaces How to set the margins for a matplotlib figure? Reduce left and right margins in matplotlib plot","best_answers_score":0.7025,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11837979\/removing-white-space-around-a-saved-image","best_answers_votes":400,"tags":["python","matplotlib","imshow","savefig"],"question_length":1303,"response_length":422,"tag_count":4} {"question":"How do I adjust (offset) colorbar title in matplotlib Given the following code: \n```\nimshow(np.arange(16*16).reshape(16,16))\ncb = colorbar()\ncb.set_label(\"Foo\")\ncb.set_ticks([0,255])\n```\nWhich produces: How do I adjust the colorbar text \"Foo\" so that it is offset to the left, between the 0 and 255, closer to the colorbar, reducing the unneeded whitespace?","response":"```\ncb.set_label(\"Foo\", labelpad=-1)\n```\nNegative labelpad values will move the label closer to the bar, positive values away.","best_answers_score":0.7023,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17475619\/how-do-i-adjust-offset-colorbar-title-in-matplotlib","best_answers_votes":83,"tags":["matplotlib","alignment","colorbar"],"question_length":357,"response_length":109,"tag_count":3} {"question":"Plot a histogram from a Dictionary I created a dictionary that counts the occurrences of every key in a list, and I would now like to plot the histogram of its content. This is the content of the dictionary I want to plot: \n```\n{1: 27, 34: 1, 3: 72, 4: 62, 5: 33, 6: 36, 7: 20, 8: 12, 9: 9, 10: 6, 11: 5, 12: 8, 2: 74, 14: 4, 15: 3, 16: 1, 17: 1, 18: 1, 19: 1, 21: 1, 27: 2}\n```\nSo far I wrote this: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\npos = np.arange(len(myDictionary.keys()))\nwidth = 1.0 # gives histogram aspect to the bar diagram\n\nax = plt.axes()\nax.set_xticks(pos + (width \/ 2))\nax.set_xticklabels(myDictionary.keys())\n\nplt.bar(myDictionary.keys(), ******, width, color='g')\n# ^^^^^^ what should I put here?\nplt.show()\n```\nI tried by simply doing \n```\nplt.bar(myDictionary.keys(), myDictionary, width, color='g')\n```\nbut this is the result: and I don't know why the 3 bars are shifted and also I'd like the histogram to be displayed in an ordered fashion.
Can somebody tell me how to do it?","response":"You can use the function for plotting histograms like this: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\na = np.random.random_integers(0,10,20) #example list of values\nplt.hist(a)\nplt.show()\n```\nOr you can use myDictionary just like this: \n```\nplt.bar(myDictionary.keys(), myDictionary.values(), width, color='g')\n```","best_answers_score":0.7022,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21195179\/plot-a-histogram-from-a-dictionary","best_answers_votes":90,"tags":["python","dictionary","matplotlib","histogram"],"question_length":1045,"response_length":277,"tag_count":4} {"question":"savefig loop adds previous plots to figure I am trying to use a for loop to produce figures for each set of data I have, but while the .show() command produces the correct figure, .savefig() keeps adding the previously plotted values to the new figure. In my for loop, this is the relevant sample of the code. \n```\nimport matplotlib.pyplot as plt\nplt.plot(X,Y[:,0],'o-')\nplt.xlabel('x')\nplt.savefig('plot'+str(i)+'.png')\n```\nAs a comparison, here is the savefig plot and here is that shown by show(). As can be seen, the savefig() plot also plotted the previous result.","response":"You have to close the current figure after saving, with plt.close(): http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.close Or you can clear the current figure after saving with plt.clf(): http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.clf","best_answers_score":0.7021,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/37734512\/savefig-loop-adds-previous-plots-to-figure","best_answers_votes":55,"tags":["python","for-loop","matplotlib","plot"],"question_length":565,"response_length":265,"tag_count":4} {"question":"How to change legend fontsize with matplotlib.pyplot Simple question here: I'm trying to get the size of my legend using matplotlib.pyplot to be smaller (i.e., the text to be smaller). The code I'm using goes something like this: \n```\nplot.figure()\nplot.scatter(k, sum_cf, color='black', label='Sum of Cause Fractions')\nplot.scatter(k, data[:, 0], color='b', label='Dis 1: cf = .6, var = .2')\nplot.scatter(k, data[:, 1], color='r', label='Dis 2: cf = .2, var = .1')\nplot.scatter(k, data[:, 2], color='g', label='Dis 3: cf = .1, var = .01')\nplot.legend(loc=2)\n```","response":"You can set an individual font size for the legend by adjusting the prop keyword. \n```\nplot.legend(loc=2, prop={'size': 6})\n```\nThis takes a dictionary of keywords corresponding to matplotlib.font_manager.FontProperties properties. See the documentation for legend: Keyword arguments: \n```\nprop: [ None | FontProperties | dict ]\n A matplotlib.font_manager.FontProperties instance. If prop is a \n dictionary, a new instance will be created with prop. If None, use\n rc settings.\n```\nIt is also possible, as of version 1.2.1, to use the keyword fontsize.","best_answers_score":0.7013,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7125009\/how-to-change-legend-fontsize-with-matplotlib-pyplot","best_answers_votes":837,"tags":["python","matplotlib","legend"],"question_length":566,"response_length":560,"tag_count":3} {"question":"Increasing the space for x axis labels in Matplotlib I'm plotting, but find that I need to increase the area underneath the chart such that I can plot the labels vertically but in a font size that is not so tiny.
At the moment, I have: \n```\nplt.figure(count_fig)\nfig, ax = plt.subplots()\nrects1 = ax.bar(ind, ratio_lst, width, color='r', linewidth=1, alpha=0.8, log=1)\nax.set_ylabel('')\nax.set_title('')\nax.set_xticks(ind_width)\nax.set_xticklabels(labels_lst, rotation='vertical', fontsize=6)\n```\nAt the moment it works, but the labels often run off the edge of the plot.","response":"subplots_adjust will do it. You can play with the bottom keyword to get a good placement of the bottom of the plot. \n```\nfig.subplots_adjust(bottom=0.2)\n```","best_answers_score":0.7013,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/30508850\/increasing-the-space-for-x-axis-labels-in-matplotlib","best_answers_votes":39,"tags":["python","matplotlib"],"question_length":572,"response_length":156,"tag_count":2} {"question":"Creating multiple boxplots on the same graph from a dictionary I am trying to create a graph whereby the x axis is the key of the dictionary and the boxplot is from the information inside the dictionary. My dictionary can have many keys. \n```py\nData = {'ABC': [34.54, 34.345, 34.761], 'DEF': [34.541, 34.748, 34.482]}\n \nfor ID in Data: \n plt.boxplot(Data[ID])\n plt.xlabel(ID)\nplt.savefig('BoxPlot.png')\nplt.clf()\n```\nIt however seems to put the box plots on top of each other. I tried iterating the positions value within boxplot with no luck. I would also like to use the key as the x-axis value for each boxplot if possible.","response":"```\nimport matplotlib.pyplot as plt\n\nmy_dict = {'ABC': [34.54, 34.345, 34.761], 'DEF': [34.541, 34.748, 34.482]}\n\nfig, ax = plt.subplots()\nax.boxplot(my_dict.values())\nax.set_xticklabels(my_dict.keys())\n```","best_answers_score":0.7011,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/52273543\/creating-multiple-boxplots-on-the-same-graph-from-a-dictionary","best_answers_votes":62,"tags":["python","python-3.x","dictionary","matplotlib","boxplot"],"question_length":647,"response_length":173,"tag_count":5} {"question":"Named colors in matplotlib What named colors are available in matplotlib for use in plots? I can find a list on the matplotlib documentation that claims that these are the only names: \n```\nb: blue\ng: green\nr: red\nc: cyan\nm: magenta\ny: yellow\nk: black\nw: white\n```\nHowever, I've found that these colors can also be used, at least in this context: \n```\nscatter(X,Y, color='red')\nscatter(X,Y, color='orange')\nscatter(X,Y, color='darkgreen')\n```\nbut these are not on the above list. Does anyone know an exhaustive list of the named colors that are available?","response":"I constantly forget the names of the colors I want to use and keep coming back to this question =) The previous answers are great, but I find it a bit difficult to get an overview of the available colors from the posted image. I prefer the colors to be grouped with similar colors, so I slightly tweaked the matplotlib answer that was mentioned in a comment above to get a color list sorted in columns. The order is not identical to how I would sort by eye, but I think it gives a good overview. I updated the image and code to reflect that 'rebeccapurple' has been added and the three sage colors have been moved under the 'xkcd:' prefix since I posted this answer originally. I really didn't change much from the matplotlib example, but here is the code for completeness.
\n```\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors as mcolors\n\n\ncolors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)\n\n# Sort colors by hue, saturation, value and name.\nby_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgba(color)[:3])), name)\n for name, color in colors.items())\nsorted_names = [name for hsv, name in by_hsv]\n\nn = len(sorted_names)\nncols = 4\nnrows = n \/\/ ncols\n\nfig, ax = plt.subplots(figsize=(12, 10))\n\n# Get height and width\nX, Y = fig.get_dpi() * fig.get_size_inches()\nh = Y \/ (nrows + 1)\nw = X \/ ncols\n\nfor i, name in enumerate(sorted_names):\n row = i % nrows\n col = i \/\/ nrows\n y = Y - (row * h) - h\n\n xi_line = w * (col + 0.05)\n xf_line = w * (col + 0.25)\n xi_text = w * (col + 0.3)\n\n ax.text(xi_text, y, name, fontsize=(h * 0.8),\n horizontalalignment='left',\n verticalalignment='center')\n\n ax.hlines(y + h * 0.1, xi_line, xf_line,\n color=colors[name], linewidth=(h * 0.8))\n\nax.set_xlim(0, X)\nax.set_ylim(0, Y)\nax.set_axis_off()\n\nfig.subplots_adjust(left=0, right=1,\n top=1, bottom=0,\n hspace=0, wspace=0)\nplt.show()\n```\nAdditional named colors Updated 2017-10-25. I merged my previous updates into this section. xkcd If you would like to use additional named colors when plotting with matplotlib, you can use the xkcd crowdsourced color names, via the 'xkcd:' prefix: \n```\nplt.plot([1,2], lw=4, c='xkcd:baby poop green')\n```\nNow you have access to a plethora of named colors! Tableau The default Tableau colors are available in matplotlib via the 'tab:' prefix: \n```\nplt.plot([1,2], lw=4, c='tab:green')\n```\nThere are ten distinct colors: HTML You can also plot colors by their HTML hex code: \n```\nplt.plot([1,2], lw=4, c='#8f9805')\n```\nThis is more similar to specifying and RGB tuple rather than a named color (apart from the fact that the hex code is passed as a string), and I will not include an image of the 16 million colors you can choose from... For more details, please refer to the matplotlib colors documentation and the source file specifying the available colors, _color_data.py.","best_answers_score":0.7004,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22408237\/named-colors-in-matplotlib","best_answers_votes":496,"tags":["python","matplotlib","colors"],"question_length":554,"response_length":2926,"tag_count":3} {"question":"Displaying Matplotlib Navigation Toolbar in Tkinter via grid I'm developing a small Tkinter GUI to draw matplotlib-plots. (It contains a few Entries and assembles the plot according to their content.) I have designed my plotting widget according to http:\/\/matplotlib.org\/examples\/user_interfaces\/embedding_in_tk.html, only I use grid instead of pack: \n```\ncanvas = FigureCanvasTkAgg(fig, master=root)\ncanvas.get_tk_widget().grid(row=1,column=4,columnspan=3,rowspan=20)\n```\nThat part works. But embedding the NavigationToolbar in the same fashion does not. Tkinter breaks down without error when I include the lines: \n```\ntoolbar = NavigationToolbar2TkAgg( canvas, root )\ncanvas._tkcanvas.grid(row=22,column=4)\n```\nI know this is because NavigationToolbar calls pack internally, and pack and grid don't get along. However, I like grid and would hate to have to redesign my whole GUI just to be able to use the NavigationToolbar. Is there a workaround so I can use NavigationToolbar2TkAgg via grid? (I have found the advice to \"subclass and overload\" here, but don't know how to do that.) 
Any help greatly appreciated!","response":"Here is a code example for what was mentioned in Bryan Oakley's answer (add the toolbar to a frame, place the frame on the grid): \n```\nfrom Tkinter import Frame # Python 3: from tkinter import Frame\nfrom matplotlib.figure import Figure\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg\n\nfig = Figure(figsize=(5, 5), dpi=100)\n\ncanvas = FigureCanvasTkAgg(fig, master=root)\ncanvas.get_tk_widget().grid(row=1,column=4,columnspan=3,rowspan=20)\n# here: plot stuff to your fig\ncanvas.draw()\n\n############### TOOLBAR ###############\ntoolbarFrame = Frame(master=root)\ntoolbarFrame.grid(row=22,column=4)\ntoolbar = NavigationToolbar2TkAgg(canvas, toolbarFrame)\n```","best_answers_score":0.7,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12913854\/displaying-matplotlib-navigation-toolbar-in-tkinter-via-grid","best_answers_votes":15,"tags":["python","matplotlib","tkinter","grid"],"question_length":1116,"response_length":523,"tag_count":4} {"question":"Python saving multiple figures into one PDF file In python (for one figure created in a GUI) I was able to save the figure under .jpg and also .pdf by either using: \n```\nplt.savefig(filename1 + '.pdf')\n```\nor \n```\nplt.savefig(filename1 + '.jpg')\n```\nUsing one file I would like to save multiple figures in either .pdf or .jpg (just like it's done in MATLAB). Can anybody please help with this?","response":"Use PdfPages to solve your problem. Pass your figure object to the savefig method. For example, if you have a whole pile of figure objects open and you want to save them into a multi-page PDF, you might do: \n```\nimport matplotlib.backends.backend_pdf\nfrom matplotlib.pyplot import figure\n\npdf = matplotlib.backends.backend_pdf.PdfPages(\"output.pdf\")\nfor fig in range(1, figure().number): ## will open an empty extra figure :(\n pdf.savefig( fig )\npdf.close()\n```","best_answers_score":0.6998,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17788685\/python-saving-multiple-figures-into-one-pdf-file","best_answers_votes":110,"tags":["python","matplotlib"],"question_length":396,"response_length":427,"tag_count":2} {"question":"'invalid value encountered in double_scalars' warning, possibly numpy As I run my code I get these warnings, always in groups of four, sporadically. I have tried to locate the source by placing debug messages before and after certain statements to pin-point its origin. \n```\nWarning: invalid value encountered in double_scalars\nWarning: invalid value encountered in double_scalars\nWarning: invalid value encountered in double_scalars\nWarning: invalid value encountered in double_scalars\n```\nIs this a Numpy warning, and what is a double scalar? From Numpy I use \n```\nmin(), argmin(), mean() and random.randn()\n```\nI also use Matplotlib","response":"It looks like a floating-point calculation error. Check the numpy.seterr function to get more information about where it happens; for example, np.seterr(all='raise') turns the warning into an exception with a full traceback.","best_answers_score":0.699,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3767409\/invalid-value-encountered-in-double-scalars-warning-possibly-numpy","best_answers_votes":84,"tags":["python","numpy","warnings","matplotlib"],"question_length":638,"response_length":129,"tag_count":4} {"question":"Edit the width of bars using pd.DataFrame.plot() I am making a stacked bar plot using: \n```\nDataFrame.plot(kind='bar',stacked=True)\n```\nI want to control the width of the bars so that the bars are connected to each other like a histogram. I've looked through the documentation but to no avail - any suggestions?
Is it possible to do it this way?","response":"For anyone coming across this question: Since pandas 0.14, plotting with bars has a 'width' argument: https:\/\/github.com\/pydata\/pandas\/pull\/6644 The example above can now be solved simply by using \n```\ndf.plot(kind='bar', stacked=True, width=1)\n```\nSee pandas.DataFrame.plot.bar or pandas.DataFrame.plot with kind='bar'. When changing the width of the bars, it might also be appropriate to change the figure size by specifying the figsize= parameter.","best_answers_score":0.696,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14824456\/edit-the-width-of-bars-using-pd-dataframe-plot","best_answers_votes":109,"tags":["python","pandas","matplotlib","bar-chart"],"question_length":351,"response_length":449,"tag_count":4} {"question":"How to change the color of the axis, ticks and labels I'd like to change the color of the axis, as well as ticks and value-labels for a plot I did using matplotlib and PyQt.","response":"As a quick example (using a slightly cleaner method than the potentially duplicate question): \n```\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nax.plot(range(10))\nax.set_xlabel('X-axis')\nax.set_ylabel('Y-axis')\n\nax.spines['bottom'].set_color('red')\nax.spines['top'].set_color('red')\nax.xaxis.label.set_color('red')\nax.tick_params(axis='x', colors='red')\n\nplt.show()\n```\nAlternatively: \n```py\n[t.set_color('red') for t in ax.xaxis.get_ticklines()]\n[t.set_color('red') for t in ax.xaxis.get_ticklabels()]\n```","best_answers_score":0.6958,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4761623\/how-to-change-the-color-of-the-axis-ticks-and-labels","best_answers_votes":295,"tags":["python","matplotlib","colors","pyqt","seaborn"],"question_length":173,"response_length":539,"tag_count":5} {"question":"Plot negative values on a log scale I am doing some analysis to calculate the value of log_10(x) which is a negative number. I am now trying to plot these values, however, since the range of the answers is very large I would like to use a logarithmic scale for this. If I simply use plt.yscale('log') I get a message telling me UserWarning: Data has no positive values, and therefore cannot be log-scaled. I also cannot supply the values of x to plt.plot as the result of log_10(x) is so large and negative that the answer of x**(log_10(x)) is simply 0. What might be the most straightforward way of plotting this data?","response":"You can use \n```\nplt.yscale('symlog')\n```\nto set the scale to a symmetric log scale. This means that it will scale logarithmically to both sides of 0. Only using the negative part of the symlog scale would work just fine.","best_answers_score":0.6953,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43372499\/plot-negative-values-on-a-log-scale","best_answers_votes":84,"tags":["python","matplotlib","seaborn"],"question_length":619,"response_length":220,"tag_count":3} {"question":"How to set ticks on Fixed Position , matplotlib Can anyone help me set the ticks on a fixed position using matplotlib? I've tried using FixedPosition as this tutorial describes: \n```\nax = pl.gca()\nax.xaxis.set_major_locator(eval(locator))\n```\nhttp:\/\/scipy-lectures.github.io\/intro\/matplotlib\/matplotlib.html#figures-subplots-axes-and-ticks But when I try to run it, it tells me that the set_major_locator method does not exist. A simple example would be very useful.
Thanks.","response":"Just use ax.set_xticks(positions) or ax.set_yticks(positions). For example: \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.set_xticks([0.15, 0.68, 0.97])\nax.set_yticks([0.2, 0.55, 0.76])\nplt.show()\n```","best_answers_score":0.6953,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17129947\/how-to-set-ticks-on-fixed-position-matplotlib","best_answers_votes":61,"tags":["python","matplotlib"],"question_length":467,"response_length":220,"tag_count":2} {"question":"Create automated strictly-designed multi-page PDF report from HTML [closed] What are good Python-based options to create strictly designed PDF reports from HTML? I've attached a draft PDF to illustrate the following points: The design of the report is rather strict. In other words \"Looks matter\". The report contains complex vector graphics (package: Matplotlib). These may slightly differ in size. The report contains images. The report contains a large number of numbers \/ strings filled in dynamically. Optimally, the solution would use open source packages. We create our HTML with Django. The report may span multiple pages. It looks as if there is already a good number of very diverse packages facilitating reporting. Just to name a few, there are xhtml2pdf, weasyprint, django-wkhtmltopdf. In my experience, it\u2019s easy with these tools to create a PDF from your content. The hard part comes when the PDF needs to fall into a highly-defined design structure as in our case. Unfortunately, I was not able to find example PDFs for the different PDF generation packages that have a highly designed structure. What is your experience with this? Which options worked well for you? Are there well done complex examples that I\u2019ve overlooked?","response":"You can see this Python package: weasyprint Web page: http:\/\/weasyprint.org\/ Official doc: http:\/\/weasyprint.readthedocs.io\/en\/latest\/ It's great because you can generate the PDF from a web page or an HTML file, e.g. HTML('report.html').write_pdf('report.pdf'). You may have conflicts with some CSS features (these are specified in the documentation), but it provides what you need.","best_answers_score":0.695,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42091524\/create-automated-strictly-designed-multi-page-pdf-report-from-html","best_answers_votes":10,"tags":["python","html","pdf","matplotlib","pdf-generation"],"question_length":1836,"response_length":323,"tag_count":5} {"question":"Plotting a 2D heatmap Using Matplotlib, I want to plot a 2D heat map. My data is an n-by-n Numpy array, each with a value between 0 and 1. So for the (i, j) element of this array, I want to plot a square at the (i, j) coordinate in my heat map, whose color is proportional to the element's value in the array. How can I do this?","response":"The imshow() function with parameters interpolation='nearest' and cmap='hot' should do what you want.
Please review the interpolation parameter details, and see Interpolations for imshow and Image antialiasing. \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\na = np.random.random((16, 16))\nplt.imshow(a, cmap='hot', interpolation='nearest')\nplt.show()\n```","best_answers_score":0.6946,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33282368\/plotting-a-2d-heatmap","best_answers_votes":351,"tags":["python","matplotlib","seaborn","heatmap"],"question_length":328,"response_length":364,"tag_count":4} {"question":"more than 9 subplots in matplotlib Is it possible to get more than 9 subplots in matplotlib? I am using the subplot command pylab.subplot(449); how can I get 4410 to work? Thank you very much.","response":"It was easier than I expected, I just did: pylab.subplot(4,4,10) and it worked.","best_answers_score":0.6936,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4158367\/more-than-9-subplots-in-matplotlib","best_answers_votes":74,"tags":["python","charts","matplotlib"],"question_length":192,"response_length":79,"tag_count":3} {"question":"How do I plot only a table in Matplotlib? Is it possible to draw only a table with matplotlib? If I uncomment the line \n```\nplt.bar(index, data[row], bar_width, bottom=y_offset, color=colors[row])\n```\nof this example code, the plot is still visible. I want to have a table on top of my (PyQt) window and underneath a plot (with some space in between).","response":"Another option is to write a pandas dataframe directly into a matplotlib table: \n```\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\n\n# hide axes\nfig.patch.set_visible(False)\nax.axis('off')\nax.axis('tight')\n\ndf = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD'))\n\nax.table(cellText=df.values, colLabels=df.columns, loc='center')\n\nfig.tight_layout()\n\nplt.show()\n```","best_answers_score":0.6935,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/32137396\/how-do-i-plot-only-a-table-in-matplotlib","best_answers_votes":90,"tags":["python","matplotlib","pyqt"],"question_length":351,"response_length":427,"tag_count":3} {"question":"How to change size of figure with subplots I'm having some trouble trying to change the figure size when using plt.subplots. With the following code, I just get the standard size graph with all my subplots bunched in (there's ~100) and obviously just an extra empty figure.
I've tried using tight_layout, but to no avail. \n```py\ndef plot(reader):\n channels=[]\n for i in reader:\n channels.append(i)\n\n plt.figure(figsize=(50,100))\n fig, ax = plt.subplots(len(channels), sharex=True)\n \n plot=0 \n for j in reader: \n \n ax[plot].plot(reader[\"%s\" % j])\n plot=plot+1\n\n plt.tight_layout()\n plt.show()\n```","response":"You can remove your initial plt.figure(). When calling plt.subplots() a new figure is created, so your first call doesn't do anything. The subplots command in the background will call plt.figure() for you, and any keywords will be passed along. So just add the figsize keyword to the subplots() command: \n```\ndef plot(reader):\n channels=[]\n for i in reader:\n channels.append(i)\n\n fig, ax = plt.subplots(len(channels), sharex=True, figsize=(50,100))\n\n plot=0 \n for j in reader: \n\n ax[plot].plot(reader[\"%s\" % j])\n plot=plot+1\n\n plt.tight_layout()\n plt.show()\n```","best_answers_score":0.6925,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19932553\/how-to-change-size-of-figure-with-subplots","best_answers_votes":70,"tags":["python","matplotlib","layout","subplot","figsize"],"question_length":658,"response_length":605,"tag_count":5} {"question":"Boxplots in matplotlib: Markers and outliers I have some questions about boxplots in matplotlib: Question A. What do the markers that I highlighted below with Q1, Q2, and Q3 represent? I believe Q1 is the maximum and Q3 are outliers, but what is Q2? Question B How does matplotlib identify outliers? (i.e. how does it know that they are not the true max and min values?)","response":"A picture is worth a thousand words. Note that the outliers (the + markers in your plot) are simply points outside of the wide [(Q1-1.5 IQR), (Q3+1.5 IQR)] margin below. However, the picture is only an example for a normally distributed data set. It is important to understand that matplotlib does not estimate a normal distribution first and calculate the quartiles from the estimated distribution parameters as shown above. Instead, the median and the quartiles are calculated directly from the data. Thus, your boxplot may look different depending on the distribution of your data and the size of the sample, e.g., asymmetric and with more or fewer outliers.","best_answers_score":0.6901,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17725927\/boxplots-in-matplotlib-markers-and-outliers","best_answers_votes":107,"tags":["python","matplotlib","statistics","boxplot"],"question_length":366,"response_length":661,"tag_count":4} {"question":"Pandas and Matplotlib - fill_between() vs datetime64 There is a Pandas DataFrame: \n```\n<class 'pandas.core.frame.DataFrame'>\nInt64Index: 300 entries, 5220 to 5519\nData columns (total 3 columns):\nDate 300 non-null datetime64[ns]\nA 300 non-null float64\nB 300 non-null float64\ndtypes: datetime64[ns](1), float64(2)\nmemory usage: 30.5 KB\n```\nI want to plot the A and B series vs Date.
\n```\nplt.plot_date(data['Date'], data['A'], '-')\nplt.plot_date(data['Date'], data['B'], '-')\n```\nThen I want to apply fill_between() to the area between the A and B series: \n```\nplt.fill_between(data['Date'], data['A'], data['B'],\n where=data['A'] >= data['B'],\n facecolor='green', alpha=0.2, interpolate=True)\n```\nWhich outputs: \n```\nTypeError: ufunc 'isfinite' not supported for the input types, and the inputs\ncould not be safely coerced to any supported types according to the casting \nrule ''safe''\n```\nDoes matplotlib accept a pandas datetime64 object in the fill_between() function? Should I convert it to a different date type?","response":"Pandas registers a converter in matplotlib.units.registry which converts a number of datetime types (such as pandas DatetimeIndex, and numpy arrays of dtype datetime64) to matplotlib datenums, but it does not handle Pandas Series with dtype datetime64. \n```python\nimport pandas.tseries.converter as converter\nc = converter.DatetimeConverter()\ntype(c.convert(df['Date'].values, None, None))\n#Output\n#numpy.ndarray # converted (good)\ntype(c.convert(df['Date'], None, None))\n#Output\n#pandas.core.series.Series # left unchanged\n```\nfill_between checks for and uses a converter to handle the data if it exists. So as a workaround, you could convert the dates to a NumPy array of datetime64's: \n```\nd = data['Date'].values\nplt.fill_between(d, data['A'], data['B'],\n where=data['A'] >= data['B'],\n facecolor='green', alpha=0.2, interpolate=True)\n```\nFor example, \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nN = 300\ndates = pd.date_range('2000-1-1', periods=N, freq='D')\nx = np.linspace(0, 2*np.pi, N)\ndata = pd.DataFrame({'A': np.sin(x), 'B': np.cos(x),\n 'Date': dates})\nplt.plot_date(data['Date'], data['A'], '-')\nplt.plot_date(data['Date'], data['B'], '-')\n\nd = data['Date'].values\nplt.fill_between(d, data['A'], data['B'],\n where=data['A'] >= data['B'],\n facecolor='green', alpha=0.2, interpolate=True)\nplt.xticks(rotation=25)\nplt.show()\n```","best_answers_score":0.6898,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29329725\/pandas-and-matplotlib-fill-between-vs-datetime64","best_answers_votes":39,"tags":["python","pandas","matplotlib"],"question_length":1065,"response_length":1491,"tag_count":3} {"question":"plot pandas dataframe two columns I have a pandas dataframe which has dates as indexes and some columns: I would like to plot a line chart with 2 lines (let's say 'ISP.MI' and 'Ctrv'); on the x axis I need the 'Date' \n```\nTicker ISP.MI Daily returns Ctrv Inv_Am Giac_Media\nDate \n2016-01-01 2.90117 NaN 100.000000 100 100.0 \n2016-01-04 2.80159 -0.034927 196.507301 200 150.0 \n2016-01-05 2.85608 0.019263 300.292610 300 200.0 \n2016-01-06 2.77904 -0.027345 392.081255 400 250.0 \n2016-01-07 2.73206 -0.017050 485.396411 500 300.0 \n2016-01-08 2.72267 -0.003443 583.725246 600 350.0\n```","response":"I think the simplest way is to select the columns by subsetting and then call DataFrame.plot: \n```\ndf[['ISP.MI','Ctrv']].plot()\n```","best_answers_score":0.6898,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41825939\/plot-pandas-dataframe-two-columns","best_answers_votes":56,"tags":["python","pandas","matplotlib"],"question_length":779,"response_length":112,"tag_count":3} {"question":"Plotting transparent histogram with non transparent edge I am plotting a
histogram, and I have three datasets which I want to plot together, each one with different colours and linetype (dashed, dotted, etc). I am also giving some transparency, in order to see the overlapping bars. The point is that I would like the edge of each bar not to become transparent as the inner part does. Here is an example: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.random.random(20)\ny =np.random.random(20)\nz= np.random.random(20)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.hist(x, bins=np.arange(0, 1, 0.1), ls='dashed', alpha = 0.5, lw=3, color= 'b')\nax.hist(y, bins=np.arange(0, 1, 0.1), ls='dotted', alpha = 0.5, lw=3, color= 'r')\nax.hist(z, bins=np.arange(0, 1, 0.1), alpha = 0.5, lw=3, color= 'k')\nax.set_xlim(-0.5, 1.5)\nax.set_ylim(0, 7)\nplt.show()\n```","response":"plt.hist accepts additional keyword arguments that are passed to the constructor for matplotlib.patches.Patch. In particular you can pass an fc= argument which lets you set the patch facecolor using an (R, G, B, A) tuple when you create the histograms. Changing the alpha value of the facecolor does not affect the transparency of the edges: \n```\nax.hist(x, bins=np.arange(0, 1, 0.1), ls='dashed', lw=3, fc=(0, 0, 1, 0.5))\nax.hist(y, bins=np.arange(0, 1, 0.1), ls='dotted', lw=3, fc=(1, 0, 0, 0.5))\nax.hist(z, bins=np.arange(0, 1, 0.1), lw=3, fc=(0, 0, 0, 0.5))\n```","best_answers_score":0.6897,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28398200\/plotting-transparent-histogram-with-non-transparent-edge","best_answers_votes":54,"tags":["python","matplotlib","plot","histogram","alpha"],"question_length":869,"response_length":565,"tag_count":5} {"question":"Inserting a degree symbol into python plot This is a really simple problem but its escaping me. I'm just trying to insert a degree symbol into the titles and legends of my python plot. Code is below. Thanks. \n```\nfrom numpy import *\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ntheta1 = linspace(0,60,610)\ntheta2 = linspace(0,45,460)\ntheta3 = linspace(45,90,460)\n\nCTS = 1\/cos(radians(theta1))\nCTS0 = 1\/cos(radians(60-theta2))\nCTS45 = 1\/cos(radians(105-theta3))\n\nplt.plot(theta1,CTS,label=u'CTS Head at 0',linewidth=2)\nplt.plot(theta2,CTS0,label='CTS Head at 60',linewidth=2)\nplt.plot(theta3,CTS45,label='CTS Head at 105',linewidth=2)\n\nplt.xlabel('Manufactured Ply Angle (degrees)')\nplt.ylabel('Thickness')\n\nplt.legend( loc='lower right', numpoints = 1 )\nplt.ylim([0,2.5])\n\nplt.grid(b=None, which='major', axis='both')\nplt.grid(color='k', linestyle='--', linewidth=0.5)\nplt.axhline(y=1.035, xmin=0, xmax=90,color='k', linestyle='-', linewidth=1)\n\nplt.show()\n```","response":"Use LaTeX Style. For Example: $^\\circ$ Text would produce \u00b0Text See the matplotlib documentation for more information about printing (especially mathematical expression). In your case the code has to be: plt.xlabel('Manufactured Ply Angle $^\\circ$') The TeX part of the expression must be enclosed by dollar signs \"$\".","best_answers_score":0.6894,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/19926246\/inserting-a-degree-symbol-into-python-plot","best_answers_votes":55,"tags":["python","matplotlib","symbols"],"question_length":967,"response_length":318,"tag_count":3} {"question":"How can I plot NaN values as a special color with imshow? I am trying to use imshow in matplotlib to plot data as a heatmap, but some of the values are NaNs. 
I'd like the NaNs to be rendered as a special color not found in the colormap. example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nf = plt.figure()\nax = f.add_subplot(111)\na = np.arange(25).reshape((5,5)).astype(float)\na[3,:] = np.nan\nax.imshow(a, interpolation='nearest')\nf.canvas.draw()\n```\nThe resultant image is unexpectedly all blue (the lowest color in the jet colormap). However, if I do the plotting like this: \n```\nax.imshow(a, interpolation='nearest', vmin=0, vmax=24)\n```\n--then I get something better, but the NaN values are drawn the same color as vmin... Is there a graceful way that I can set NaNs to be drawn with a special color (eg: gray or transparent)?","response":"With newer versions of Matplotlib, it is not necessary to use a masked array anymore. For example, let\u2019s generate an array where every 7th value is a NaN: \n```\narr = np.arange(100, dtype=float).reshape(10, 10)\narr[~(arr % 7).astype(bool)] = np.nan\n```\n.cm.get_cmap() is replaced by .colormaps.get_cmap('viridis') in matplotlib v3.7.0 Set the color with .set_bad. \n```py\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\n\narr = np.arange(100, dtype=float).reshape(10, 10)\narr[~(arr % 7).astype(bool)] = np.nan\n\ncmap = mpl.colormaps.get_cmap('viridis') # viridis is the default colormap for imshow\ncmap.set_bad(color='red')\n\nplt.imshow(arr, cmap=cmap)\n```\n.cm.get_cmap() is deprecated We can modify the current colormap and plot the array with the following lines: \n```\ncurrent_cmap = mpl.cm.get_cmap()\ncurrent_cmap.set_bad(color='red')\nplt.imshow(arr)\n```","best_answers_score":0.6893,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2578752\/how-can-i-plot-nan-values-as-a-special-color-with-imshow","best_answers_votes":84,"tags":["python","matplotlib","nan","colormap","imshow"],"question_length":843,"response_length":881,"tag_count":5} {"question":"ValueError: x and y must be the same size \n```\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as pt\n\ndata1 = pd.read_csv('stage1_labels.csv')\n\nX = data1.iloc[:, :-1].values\ny = data1.iloc[:, 1].values\n\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\nlabel_X = LabelEncoder()\nX[:,0] = label_X.fit_transform(X[:,0])\nencoder = OneHotEncoder(categorical_features = [0])\nX = encoder.fit_transform(X).toarray()\n\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train,y_test = train_test_split(X, y, test_size = 0.4, random_state = 0)\n\n#fitting Simple Regression to training set\n\nfrom sklearn.linear_model import LinearRegression\nregressor = LinearRegression()\nregressor.fit(X_train, y_train)\n\n#predecting the test set results\ny_pred = regressor.predict(X_test)\n\n#Visualization of the training set results\npt.scatter(X_train, y_train, color = 'red')\npt.plot(X_train, regressor.predict(X_train), color = 'green')\npt.title('salary vs yearExp (Training set)')\npt.xlabel('years of experience')\npt.ylabel('salary')\npt.show()\n```\nI need a help understanding the error in while executing the above code. Below is the error: \"raise ValueError(\"x and y must be the same size\")\" I have .csv file with 1398 rows and 2 column. I have taken 40% as y_test set, as it is visible in the above code.","response":"Print X_train shape. What do you see? I'd bet X_train is 2d (matrix with a single column), while y_train 1d (vector). In turn you get different sizes. 
I think using X_train[:,0] for plotting (which is from where the error originates) should solve the problem","best_answers_score":0.6889,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41659535\/valueerror-x-and-y-must-be-the-same-size","best_answers_votes":40,"tags":["python","csv","numpy","matplotlib","machine-learning"],"question_length":1330,"response_length":258,"tag_count":5} {"question":"Python - Plotting colored grid based on values I have been searching here and on the net. I found somehow close questions\/answers to what I want, but still couldn't reach to what I'm looking for. I have an array of for example, 100 values. The values are in the range from 0 to 100. I want to plot this array as a grid, filling the squares according to the values in the array. The solutions I found so far are like the followings: Drawing grid pattern in matplotlib and custom matplotlib plot : chess board like table with colored cells In the examples I mentioned, the ranges of the colors vary and are not fixed. However, what I am wondering about, is whether I can set the ranges for specific values and colors. For example, if the values are between 10 and 20, let the color of the grid square be red. else if the values are between 20 and 30, let the color be blue. etc. How this could be achieved in python?","response":"You can create a ListedColormap for your custom colors and color BoundaryNorms to threshold the values. \n```\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors\nimport numpy as np\n\ndata = np.random.rand(10, 10) * 20\n\n# create discrete colormap\ncmap = colors.ListedColormap(['red', 'blue'])\nbounds = [0,10,20]\nnorm = colors.BoundaryNorm(bounds, cmap.N)\n\nfig, ax = plt.subplots()\nax.imshow(data, cmap=cmap, norm=norm)\n\n# draw gridlines\nax.grid(which='major', axis='both', linestyle='-', color='k', linewidth=2)\nax.set_xticks(np.arange(-.5, 10, 1));\nax.set_yticks(np.arange(-.5, 10, 1));\n\nplt.show()\n```\nResulting in; For more, you can check this matplotlib example.","best_answers_score":0.688,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43971138\/python-plotting-colored-grid-based-on-values","best_answers_votes":63,"tags":["python","matplotlib","plot","colors","grid"],"question_length":914,"response_length":673,"tag_count":5} {"question":"What is y axis in seaborn distplot? I have some geometrically distributed data. When I want to take a look at it, I use \n```\nsns.distplot(data, kde=False, norm_hist=True, bins=100)\n```\nwhich results is a picture: However, bins heights don't add up to 1, which means y axis doesn't show probability, it's something different. If instead we use \n```\nweights = np.ones_like(np.array(data))\/float(len(np.array(data)))\nplt.hist(data, weights=weights, bins = 100)\n```\nthe y axis shall show probability, as bins heights sum up to 1: It can be seen more clearly here: suppose we have a list \n```\nl = [1, 3, 2, 1, 3]\n```\nWe have two 1s, two 3s and one 2, so their respective probabilities are 2\/5, 2\/5 and 1\/5. When we use seaborn histplot with 3 bins: \n```\nsns.distplot(l, kde=False, norm_hist=True, bins=3)\n```\nwe get: As you can see, the 1st and the 3rd bin sum up to 0.6+0.6=1.2 which is already greater than 1, so y axis is not a probability. When we use \n```\nweights = np.ones_like(np.array(l))\/float(len(np.array(l)))\nplt.hist(l, weights=weights, bins = 3)\n```\nwe get: and the y axis is probability, as 0.4+0.4+0.2=1 as expected. 
The amount of bins in these 2 cases are is the same for both methods used in each case: 100 bins for geometrically distributed data, 3 bins for small array l with 3 possible values. So bins amount is not the issue. My question is: in seaborn distplot called with norm_hist=True, what is the meaning of y axis?","response":"The x-axis is the value of the variable just like in a histogram, but what exactly does the y-axis represent? ANS-> The y-axis in a density plot is the probability density function for the kernel density estimation. However, we need to be careful to specify this is a probability density and not a probability. The difference is the probability density is the probability per unit on the x-axis. To convert to an actual probability, we need to find the area under the curve for a specific interval on the x-axis. Somewhat confusingly, because this is a probability density and not a probability, the y-axis can take values greater than one. The only requirement of the density plot is that the total area under the curve integrates to one. I generally tend to think of the y-axis on a density plot as a value only for relative comparisons between different categories. from the reference of https:\/\/towardsdatascience.com\/histograms-and-density-plots-in-python-f6bda88f5ac0","best_answers_score":0.6875,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/51666784\/what-is-y-axis-in-seaborn-distplot","best_answers_votes":26,"tags":["python","matplotlib","seaborn"],"question_length":1437,"response_length":973,"tag_count":3} {"question":"Matplotlib log scale tick label number formatting With matplotlib when a log scale is specified for an axis, the default method of labeling that axis is with numbers that are 10 to a power eg. 10^6. Is there an easy way to change all of these labels to be their full numerical representation? eg. 1, 10, 100, etc. Note that I do not know what the range of powers will be and want to support an arbitrary range (negatives included).","response":"Sure, just change the formatter. For example, if we have this plot: \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nax.axis([1, 10000, 1, 100000])\nax.loglog()\n\nplt.show()\n```\nYou could set the tick labels manually, but then the tick locations and labels would be fixed when you zoom\/pan\/etc. Therefore, it's best to change the formatter. By default, a logarithmic scale uses a LogFormatter, which will format the values in scientific notation. To change the formatter to the default for linear axes (ScalarFormatter) use e.g. \n```\nfrom matplotlib.ticker import ScalarFormatter\nfor axis in [ax.xaxis, ax.yaxis]:\n axis.set_major_formatter(ScalarFormatter())\n```","best_answers_score":0.6872,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21920233\/matplotlib-log-scale-tick-label-number-formatting","best_answers_votes":80,"tags":["python","numpy","matplotlib","graphing"],"question_length":431,"response_length":677,"tag_count":4} {"question":"How do I set matplotlib plot to \"no fill\"? I'd like to superimpose one plot over another (they are polygons, really in some lat\/lon space, using geopandas, but the plot is simply derived from matplotlib) I have: \n```\nfigZ, axZ = plt.subplots(1, figsize=(11,8.5))\nSfig = X.plot(ax=axZ, color='white', edgecolor='black', lw=0.7)\nY.plot(ax=axZ, color='white', edgecolor='black', lw=0.7, alpha=0.3)\n```\nHow do I set Sfig's color to \"no-fill\" instead of white? 
The way it is now it \"blurs\" my Sfig image (X.plot) by the alpha of the Y.plot one. How do I set \"color\" to actually transparent?","response":"I don't expect upvotes, but this is what I found as solution. I'll vote up better ones if they exist: \n```\nSfig = X.plot(ax=axZ, facecolor=\"none\", \n edgecolor='black', lw=0.7)\n```","best_answers_score":0.6871,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47847252\/how-do-i-set-matplotlib-plot-to-no-fill","best_answers_votes":57,"tags":["python","matplotlib"],"question_length":585,"response_length":192,"tag_count":2} {"question":"warning about too many open figures In a script where I create many figures with fix, ax = plt.subplots(...), I get the warning RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory. However, I don't understand why I get this warning, because after saving the figure with fig.savefig(...), I delete it with fig.clear(); del fig. At no point in my code, I have more than one figure open at a time. Still, I get the warning about too many open figures. What does that mean \/ how can I avoid getting the warning?","response":"Use .clf or .cla on your figure object instead of creating a new figure. From @DavidZwicker Assuming you have imported pyplot as \n```\nimport matplotlib.pyplot as plt\n```\nplt.cla() clears an axis, i.e. the currently active axis in the current figure. It leaves the other axes untouched. plt.clf() clears the entire current figure with all its axes, but leaves the window opened, such that it may be reused for other plots. plt.close() closes a window, which will be the current window, if not specified otherwise. plt.close('all') will close all open figures. The reason that del fig does not work is that the pyplot state-machine keeps a reference to the figure around (as it must if it is going to know what the 'current figure' is). This means that even if you delete your ref to the figure, there is at least one live ref, hence it will never be garbage collected. Since I'm polling on the collective wisdom here for this answer, @JoeKington mentions in the comments that plt.close(fig) will remove a specific figure instance from the pylab state machine (plt._pylab_helpers.Gcf) and allow it to be garbage collected.","best_answers_score":0.6863,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21884271\/warning-about-too-many-open-figures","best_answers_votes":324,"tags":["python","python-3.x","matplotlib"],"question_length":651,"response_length":1120,"tag_count":3} {"question":"How can I create stacked line graph? I would like to be able to produce a stacked line graph (similar to the method used here) with Python (preferably using matplotlib, but another library would be fine too). How can I do this? This similar to the stacked bar graph example on their website, except I'd like the top of bar to be connected with a line segment and the area underneath to be filled. 
I might be able to approximate this by decreasing the gaps between bars and using lots of bars (but this seems like a hack, and besides I'm not sure if it is possible).","response":"Newer versions of matplotlib contain the function plt.stackplot(), which allows for several different \"out-of-the-box\" stacked area plots: \n```\nimport numpy as np\nimport pylab as plt\n\nX = np.arange(0, 10, 1) \nY = X + 5 * np.random.random((5, X.size))\n\nbaseline = [\"zero\", \"sym\", \"wiggle\", \"weighted_wiggle\"]\nfor n, v in enumerate(baseline):\n plt.subplot(2 ,2, n + 1)\n plt.stackplot(X, *Y, baseline=v)\n plt.title(v)\n plt.axis('tight')\nplt.show()\n```","best_answers_score":0.6859,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2225995\/how-can-i-create-stacked-line-graph","best_answers_votes":74,"tags":["python","pandas","matplotlib"],"question_length":565,"response_length":460,"tag_count":3} {"question":"How to show minor tick labels on log-scale with Matplotlib Does anyone know how to show the labels of the minor ticks on a logarithmic scale with Python\/Matplotlib?","response":"You can use plt.tick_params(axis='y', which='minor') to set the minor ticks on and format them with the matplotlib.ticker FormatStrFormatter. For example, \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import FormatStrFormatter\nx = np.linspace(0,4,1000)\ny = np.exp(x)\nplt.plot(x, y)\nax = plt.gca()\nax.set_yscale('log')\nplt.tick_params(axis='y', which='minor')\nax.yaxis.set_minor_formatter(FormatStrFormatter(\"%.1f\"))\nplt.show()\n```","best_answers_score":0.6841,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/30887920\/how-to-show-minor-tick-labels-on-log-scale-with-matplotlib","best_answers_votes":26,"tags":["python","matplotlib"],"question_length":164,"response_length":463,"tag_count":2} {"question":"How do I add a title and axis labels to Seaborn Heatmap? I want to add a title to a seaborn heatmap. Using Pandas and iPython Notebook code is below, \n```\na1_p = a1.pivot_table( index='Postcode', columns='Property Type', values='Count', aggfunc=np.mean, fill_value=0)\n\nsns.heatmap(a1_p, cmap=\"YlGnBu\")\n```\nthe data is pretty straight forward: \n```\n\n```python\na1_p\n#Output\n#Property Type Flat Terraced house Unknown\n#Postcode \n#E1 11 0 0\n#E14 12 0 0\n#E1W 6 0 0\n#E2 6 0 0\n#```\n```","response":"heatmap is an axes-level function, so you should be able to use just plt.title or ax.set_title: \n```\n%matplotlib inline\nimport numpy as np\nimport os\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\ndata = np.random.randn(10,12)\n\nax = plt.axes()\nsns.heatmap(data, ax = ax)\n\nax.set_title('lalala')\nplt.show()\n```","best_answers_score":0.6839,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/32723798\/how-do-i-add-a-title-and-axis-labels-to-seaborn-heatmap","best_answers_votes":102,"tags":["python","pandas","matplotlib","seaborn","heatmap"],"question_length":503,"response_length":316,"tag_count":5} {"question":"How to remove frame from a figure To remove frame in figure, I write \n```\nframeon=False\n```\nworks perfect with pyplot.figure, but with matplotlib.Figure it only removes the gray background, the frame stays. Also, I only want the lines to show, and all the rest of figure be transparent. 
with pyplot I can do what I want, I want to do it with matplotlib for some long reason I'd rather not mention to extend my question.","response":"ax.axis('off'), will as Joe Kington pointed out, remove everything except the plotted line. For those wanting to only remove the frame (border), and keep labels, tickers etc, one can do that by accessing the spines object on the axis. Given an axis object ax, the following should remove borders on all four sides: \n```\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\nax.spines['bottom'].set_visible(False)\nax.spines['left'].set_visible(False)\n```\nAnd, in case of removing x and y ticks from the plot: \n```\nax.get_xaxis().set_ticks([])\n ax.get_yaxis().set_ticks([])\n```","best_answers_score":0.6833,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14908576\/how-to-remove-frame-from-a-figure","best_answers_votes":442,"tags":["python","matplotlib"],"question_length":419,"response_length":595,"tag_count":2} {"question":"Matplotlib: Scatter Plot to Foreground on top of a Contour Plot Does anyone know a way to bring a scatter plot to the foreground in matplotlib? I have to display the scatter plotting on top of the contour, but by default it is plotted underneath...","response":"You can manually choose in which order the different plots are to be displayed with the zorder parameter of e.g. the scatter method. To demonstrate, see the code below, where the scatter plot in the left subplot has zorder=1 and in the right subplot it has zorder=-1. The object with the highest zorder is placed on top. This means that the scatter will be placed on top of the contour in the first subplot, while it is placed underneath in the second subplot. \n```\nimport numpy as np\nimport matplotlib.cm as cm\nimport matplotlib.mlab as mlab\nimport matplotlib.pyplot as plt\n\ndelta = 0.025\nx = np.arange(-3.0, 3.0, delta)\ny = np.arange(-2.0, 2.0, delta)\nX, Y = np.meshgrid(x, y)\nZ1 = mlab.bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)\nZ2 = mlab.bivariate_normal(X, Y, 1.5, 0.5, 1, 1)\nZ = 10.0 * (Z2 - Z1)\n\nnorm = cm.colors.Normalize(vmax=abs(Z).max(), vmin=-abs(Z).max())\ncmap = cm.PRGn\n\nlevels = np.arange(-2.0, 1.601, 0.4)\n\nfig, axes = plt.subplots(1,2, sharey=True)\n\nfor ax, zord in zip(axes, [1, -1]):\n ax.contourf(X, Y, Z, levels,\n cmap=cm.get_cmap(cmap, len(levels)-1),\n norm=norm)\n ax.autoscale(False) # To avoid that the scatter changes limits\n ax.scatter(np.random.uniform(-3,3,10),\n np.random.uniform(-2,2,10),\n zorder=zord)\n ax.set_title('Scatter with zorder={0}'.format(zord))\n```","best_answers_score":0.6833,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17431441\/matplotlib-scatter-plot-to-foreground-on-top-of-a-contour-plot","best_answers_votes":103,"tags":["python","matplotlib","scatter-plot"],"question_length":248,"response_length":1359,"tag_count":3} {"question":"How to read image file from S3 bucket directly into memory? I have the following code \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport boto3\ns3 = boto3.resource('s3', region_name='us-east-2')\nbucket = s3.Bucket('sentinel-s2-l1c')\nobject = bucket.Object('tiles\/10\/S\/DG\/2015\/12\/7\/0\/B01.jp2')\nobject.download_file('B01.jp2')\nimg=mpimg.imread('B01.jp2')\nimgplot = plt.imshow(img)\nplt.show(imgplot)\n```\nand it works. But the problem it downloads file into current directory first. 
Is it possible to read file and decode it as image directly in RAM?","response":"I would suggest using io module to read the file directly in to memory, without having to use a temporary file at all. For example: \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport boto3\nimport io\n\ns3 = boto3.resource('s3', region_name='us-east-2')\nbucket = s3.Bucket('sentinel-s2-l1c')\nobject = bucket.Object('tiles\/10\/S\/DG\/2015\/12\/7\/0\/B01.jp2')\n\nfile_stream = io.StringIO()\nobject.download_fileobj(file_stream)\nimg = mpimg.imread(file_stream)\n# whatever you need to do\n```\nYou could also use io.BytesIO if your data is binary.","best_answers_score":0.6833,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44043036\/how-to-read-image-file-from-s3-bucket-directly-into-memory","best_answers_votes":76,"tags":["python","matplotlib","amazon-s3","boto3"],"question_length":590,"response_length":575,"tag_count":4} {"question":"Putting text in top left corner of matplotlib plot How can I put text in the top left (or top right) corner of a matplotlib figure, e.g. where a top left legend would be, or on top of the plot but in the top left corner? E.g. if it's a plt.scatter(), then something that would be within the square of the scatter, put in the top left most corner. I'd like to do this without ideally knowing the scale of the scatterplot being plotted for example, since it will change from dataset to data set. I just want it the text to be roughly in the upper left, or roughly in the upper right. With legend type positioning it should not overlap with any scatter plot points anyway.","response":"You can use text. \n```\nplt.text(x, y, s, fontsize=12)\n```\ntext coordinates can be given relative to the axis, so the position of your text will be independent of the size of the plot: The default transform specifies that text is in data coords, alternatively, you can specify text in axis coords (0,0 is lower-left and 1,1 is upper-right). The example below places text in the center of the axes:: \n```\nplt.text(0.5, 0.5, 'matplotlib',\n horizontalalignment='center',\n verticalalignment='center',\n transform = ax.transAxes)\n```\nTo prevent the text to interfere with any point of your scatter is more difficult afaik. The easier method is to set y_axis (ymax in ylim((ymin,ymax))) to a value a bit higher than the max y-coordinate of your points. In this way you will always have this free space for the text. EDIT: here you have an example: \n```py\nfrom matplotlib import pyplot as plt\n\nf, ax = plt.subplots()\nplt.scatter([3,5,2,6,8],[5,3,2,1,5])\nplt.text(.01, .99, 'matplotlib', ha='left', va='top', transform=ax.transAxes)\nf.tight_layout()\n```\nThe ha and va parameters set the alignment of your text relative to the insertion point. ie. 
ha='left' is a good set to prevent a long text to go out of the left axis when the frame is reduced (made narrower) manually.","best_answers_score":0.6831,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8482588\/putting-text-in-top-left-corner-of-matplotlib-plot","best_answers_votes":242,"tags":["python","matplotlib","plot","seaborn","plot-annotations"],"question_length":669,"response_length":1274,"tag_count":5} {"question":"Save multiple plots in a single PDF file plotting module \n```\ndef plotGraph(X,Y):\n fignum = random.randint(0,sys.maxint)\n plt.figure(fignum)\n ### Plotting arrangements ###\n return fignum\n```\nmain module \n```\nimport matplotlib.pyplot as plt\n### tempDLStats, tempDLlabels are the argument\nplot1 = plotGraph(tempDLstats, tempDLlabels)\nplot2 = plotGraph(tempDLstats_1, tempDLlabels_1)\nplot3 = plotGraph(tempDLstats_2, tempDLlabels_2)\nplt.show()\n```\nI want to save all the graphs plot1, plot2, plot3 to a single PDF file. Is there any way to achieve it? I can't include the plotGraph function in the main module. There's a function named pyplot.savefig but that seems to work only with a single figure. Is there any other way to accomplish it?","response":"If someone ends up here from google, looking to convert a single figure to a .pdf (that was what I was looking for): \n```\nimport matplotlib.pyplot as plt\n\nf = plt.figure()\nplt.plot(range(10), range(10), \"o\")\nplt.show()\n\nf.savefig(\"foo.pdf\", bbox_inches='tight')\n```","best_answers_score":0.6826,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/11328958\/save-multiple-plots-in-a-single-pdf-file","best_answers_votes":282,"tags":["python","matplotlib"],"question_length":750,"response_length":265,"tag_count":2} {"question":"Matplotlib savefig with a legend outside the plot Reading the following article, I managed to put a legend outside plot. How to put the legend out of the plot code: \n```\nimport matplotlib.pyplot as pyplot\n\nx = [0, 1, 2, 3, 4]\ny = [xx*xx for xx in x]\n\nfig = pyplot.figure()\nax = fig.add_subplot(111)\n\nbox = ax.get_position()\nax.set_position([box.x0, box.y0, box.width*0.8, box.height])\n\nax.plot(x, y)\nleg = ax.legend(['abc'], loc = 'center left', bbox_to_anchor = (1.0, 0.5))\n#pyplot.show()\n\nfig.savefig('aaa.png', bbox_inches='tight')\n```\npyplot.show() displays the correct plot with a legend outside it. But when I save it as a file with fig.savefig(), the legend is truncated. Some googling shows me workarounds such as adding bbox_extra_artists=[leg.legendPatch] or bbox_extra_artists=[leg] to savefig(), but neither worked. What is the correct way to do it? Matplotlib version is 0.99.3. Thanks.","response":"The problem is that when you plot dynamically, matplotlib determines the borders automatically to fit all your objects. When you save a file, things are not being done automatically, so you need to specify the size of your figure, and then the bounding box of your axes object. 
Here is how to correct your code: \n```\nimport matplotlib.pyplot as pyplot\n\nx = [0, 1, 2, 3, 4]\ny = [xx*xx for xx in x]\n\nfig = pyplot.figure(figsize=(3,3))\nax = fig.add_subplot(111)\n\n#box = ax.get_position()\n#ax.set_position([0.3, 0.4, box.width*0.3, box.height])\n# you can set the position manually, with setting left,buttom, witdh, hight of the axis\n# object\nax.set_position([0.1,0.1,0.5,0.8])\nax.plot(x, y)\nleg = ax.legend(['abc'], loc = 'center left', bbox_to_anchor = (1.0, 0.5))\n\nfig.savefig('aaa.png')\n```","best_answers_score":0.6826,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8971834\/matplotlib-savefig-with-a-legend-outside-the-plot","best_answers_votes":29,"tags":["python","matplotlib","legend"],"question_length":900,"response_length":790,"tag_count":3} {"question":"Why is my plt.savefig is not working? I have a simple python code as follows: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\"\"\"\nHere are the solutions and the plot.\n\"\"\"\n\n# Create the axis and plot.\nplt.axis([0, 10, 0, 10])\naxis_x = range(1, 11)\ngrd = [1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1, 8.1, 9.1, 10.1]\ngrd2 = [1.2, 2.2, 3.2, 4.2, 5.2, 6.2, 7.2, 8.2, 9.2, 10.2]\nplt.plot(axis_x, grd, '-g', label='BR1')\nplt.plot(axis_x, grd2, '-b', label='BR2')\nplt.legend(loc='upper left')\nplt.grid()\nplt.show()\n\n# Save the results vector to a text file.\nnp.savetxt('test.out', (grd, grd2))\n\n# Save the figure as '.eps' file.\nplt.savefig('expl.pdf', format='pdf', dpi=1200)\n```\nWhen I open the output files expl.pdf and\/or test.out I find them blank and nothing in there. Why? Thanks.","response":"When you close the image displayed by plt.show(), the image is closed and freed from memory. You should call savefig and savetxt before calling show.","best_answers_score":0.6826,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/30765455\/why-is-my-plt-savefig-is-not-working","best_answers_votes":134,"tags":["python","numpy","matplotlib"],"question_length":779,"response_length":149,"tag_count":3} {"question":"How do I set color to Rectangle in Matplotlib? How do I set color to Rectangle for example in matplotlib? I tried using argument color, but had no success. 
I have following code: \n```\nfig=pylab.figure()\nax=fig.add_subplot(111)\n\npylab.xlim([-400, 400]) \npylab.ylim([-400, 400])\npatches = []\npolygon = Rectangle((-400, -400), 10, 10, color='y')\npatches.append(polygon)\n\np = PatchCollection(patches, cmap=matplotlib.cm.jet)\nax.add_collection(p)\nax.xaxis.set_major_locator(MultipleLocator(20)) \nax.yaxis.set_major_locator(MultipleLocator(20)) \n\npylab.show()\n```","response":"I couldn't get your code to work, but hopefully this will help: \n```\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nrect1 = matplotlib.patches.Rectangle((-200,-100), 400, 200, color='yellow')\nrect2 = matplotlib.patches.Rectangle((0,150), 300, 20, color='red')\nrect3 = matplotlib.patches.Rectangle((-300,-50), 40, 200, color='#0099FF')\ncircle1 = matplotlib.patches.Circle((-200,-250), radius=90, color='#EB70AA')\nax.add_patch(rect1)\nax.add_patch(rect2)\nax.add_patch(rect3)\nax.add_patch(circle1)\nplt.xlim([-400, 400])\nplt.ylim([-400, 400])\nplt.show()\n```\nproduces:","best_answers_score":0.6826,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10550477\/how-do-i-set-color-to-rectangle-in-matplotlib","best_answers_votes":48,"tags":["python","matplotlib"],"question_length":566,"response_length":611,"tag_count":2} {"question":"Seaborn plots not showing up I'm sure I'm forgetting something very simple, but I cannot get certain plots to work with Seaborn. If I do: \n```\nimport seaborn as sns\n```\nThen any plots that I create as usual with matplotlib get the Seaborn styling (with the grey grid in the background). However, if I try to do one of the examples, such as: \n```\n\n```python\nimport seaborn as sns\n```\n\n```python\nsns.set()\n```\n\n```python\ndf = sns.load_dataset('iris')\n```\n\n```python\nsns.pairplot(df, hue='species', size=2.5)\n#Output\n#\n#```\n#The pairplot function returns a PairGrid object, but the plot doesn't show up. I'm a little confused because matplotlib seems to be functioning properly, and the Seaborn styles are applied to other matplotlib plots, but the Seaborn functions don't seem to do anything. Does anybody have any idea what might be the problem?\n```","response":"Plots created using seaborn need to be displayed like ordinary matplotlib plots. This can be done using the \n```\nplt.show()\n```\nfunction from matplotlib. Originally I posted the solution to use the already imported matplotlib object from seaborn (sns.plt.show()) however this is considered to be a bad practice. Therefore, simply directly import the _matplotlib.pyplot_ module and show your plots with \n```\nimport matplotlib.pyplot as plt\nplt.show()\n```\nIf the IPython notebook is used the inline backend can be invoked to remove the necessity of calling show after each plot. The respective magic is \n```\n%matplotlib inline\n```","best_answers_score":0.6824,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26597116\/seaborn-plots-not-showing-up","best_answers_votes":591,"tags":["python","matplotlib","seaborn"],"question_length":860,"response_length":628,"tag_count":3} {"question":"matplotlib -- interactively select points or locations? In R, there is a function locator which is like Matlab's ginput where you can click on the figure with a mouse and select any x,y coordinate. 
In addition, there is a function called identify(x,y) where if you give it a set of points x,y that you have plotted and then click on the figure, it will return the index of the x,y point which lies nearest (within an adjustable tolerance) to the location you have selected (or multiple indices, if multiple points are selected). Is there such a functionality in Matplotlib?","response":"You may want to use a pick event : \n```\nfig = figure()\nax1 = fig.add_subplot(111)\nax1.set_title('custom picker for line data')\nline, = ax1.plot(rand(100), rand(100), 'o', picker=line_picker)\nfig.canvas.mpl_connect('pick_event', onpick2)\n```\nTolerance set by picker parameter there: \n```\nline, = ax1.plot(rand(100), 'o', picker=5) # 5 points tolerance\n```","best_answers_score":0.6816,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/7969352\/matplotlib-interactively-select-points-or-locations","best_answers_votes":18,"tags":["matplotlib"],"question_length":573,"response_length":355,"tag_count":1} {"question":"Reverse colormap in matplotlib I would like to know how to simply reverse the color order of a given colormap in order to use it with plot_surface.","response":"The standard colormaps also all have reversed versions. They have the same names with _r tacked on to the end. (Documentation here.)","best_answers_score":0.6815,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3279560\/reverse-colormap-in-matplotlib","best_answers_votes":775,"tags":["python","matplotlib"],"question_length":147,"response_length":132,"tag_count":2} {"question":"Using %matplotlib notebook after %matplotlib inline in Jupyter Notebook doesn't work I am using Jupyter Notebook for plotting piechart figures. In first cell with my code I have a magic command %matplotlib inline and after this magic command I run my code, everything works fine and my figure renders. But in second cell when I set %matplotlib notebook for interactive plotting my figure won't render after running this second cell. I need to restart kernel and run cell with %matplotlib notebook again and cannot run %matplotlib inline command before that. Here is my code for first cell with %matplotlib inline, which renders fine: \n```\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nlabels = \"No\", \"Yes\"\nsizes = [100, 50]\n\nfig, ax = plt.subplots(figsize=(6, 6))\n\n_, texts, autotexts = ax.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%',\n shadow=False, startangle=90)\n\nax.axis('equal')\n```\nAfter that I have second cell with same code, just %matplotlib inline is changed to %matplotlib notebook. Figure won't render after I run this cell and I need to restart kernel and run this cell again. Why?","response":"You just have the wrong order of your commands. A backend should be set before importing pyplot in jupyter. Or in other words, after changing the backend, pyplot needs to be imported again. Therefore call %matplotlib ... prior to importing pyplot. 
In first cell: \n```\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.plot([1,1.6,3])\n```\nIn second cell: \n```\n%matplotlib notebook\n#calling it a second time may prevent some graphics errors\n%matplotlib notebook \nimport matplotlib.pyplot as plt\nplt.plot([1,1.6,3])\n```","best_answers_score":0.6811,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/43545050\/using-matplotlib-notebook-after-matplotlib-inline-in-jupyter-notebook-doesnt","best_answers_votes":54,"tags":["python","matplotlib","jupyter-notebook"],"question_length":1144,"response_length":522,"tag_count":3} {"question":"How to plot normal distribution Given a mean and a variance is there a simple function call which will plot a normal distribution?","response":"```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.stats as stats\nimport math\n\nmu = 0\nvariance = 1\nsigma = math.sqrt(variance)\nx = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)\nplt.plot(x, stats.norm.pdf(x, mu, sigma))\nplt.show()\n```","best_answers_score":0.681,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/10138085\/how-to-plot-normal-distribution","best_answers_votes":330,"tags":["python","matplotlib"],"question_length":130,"response_length":249,"tag_count":2} {"question":"Plotting lines connecting points I know there is another very similar question, but I could not extract the information I need from it. plotting lines in pairs I have 4 points in the (x,y) plane: x=[x1,x2,x3,x4] and y=[y1,y2,y3,y4] \n```\nx=[-1 ,0.5 ,1,-0.5]\ny=[ 0.5, 1, -0.5, -1]\n```\nNow, I can plot the four points by doing: \n```\nimport matplotlib.pyplot as plt\n\nplt.plot(x,y, 'ro')\nplt.axis('equal')\nplt.show()\n```\nBut, apart from the four points, I would like to have 2 lines: 1) one connecting (x1,y1) with (x2,y2) and 2) the second one connecting (x3,y3) with (x4,y4). This is a simple toy example. In the real case I have 2N points in the plane. How can I get the desired output: for points with two connecting lines ? Thank you.","response":"I think you're going to need separate lines for each segment: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx, y = np.random.random(size=(2,10))\n\nfor i in range(0, len(x), 2):\n plt.plot(x[i:i+2], y[i:i+2], 'ro-')\n\nplt.show()\n```\n(The numpy import is just to set up some random 2x10 sample data)","best_answers_score":0.6765,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/35363444\/plotting-lines-connecting-points","best_answers_votes":64,"tags":["matplotlib"],"question_length":735,"response_length":308,"tag_count":1} {"question":"How to dynamically update a plot in a loop in IPython notebook (within one cell) Environment: Python 2.7, Matplotlib 1.3, IPython notebook 1.1, Linux, and Chrome. The code is in one single input cell, using --pylab=inline. I want to use IPython notebook and Pandas to consume a stream and dynamically update a plot every five seconds. When I just use a print statement to print the data in text format, it works perfectly fine: the output cell just keeps printing data and adding new rows. But when I try to plot the data (and then update it in a loop), the plot never shows up in the output cell. But if I remove the loop, and just plot it once, it works fine. 
Then I did some simple test: \n```\ni = pd.date_range('2013-1-1',periods=100,freq='s')\nwhile True:\n plot(pd.Series(data=np.random.randn(100), index=i))\n #pd.Series(data=np.random.randn(100), index=i).plot() also tried this one\n time.sleep(5)\n```\nThe output will not show anything until I manually interrupt the process (Ctrl + M + I). And after I interrupt it, the plot shows correctly as multiple overlapped lines. But what I really want is a plot that shows up and gets updated every five seconds (or whenever the plot() function gets called, just like what print statement outputs I mentioned above, which works well). Only showing the final chart after the cell is completely done is not what I want. I even tried to explicitly add the draw() function after each plot(), etc. None of them works. How can I dynamically update a plot by a for\/while loop within one cell in IPython notebook?","response":"Use the IPython.display module: \n```\n%matplotlib inline\nimport time\nimport pylab as pl\nfrom IPython import display\nfor i in range(10):\n pl.plot(pl.randn(100))\n display.clear_output(wait=True)\n display.display(pl.gcf())\n time.sleep(1.0)\n```","best_answers_score":0.6762,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21360361\/how-to-dynamically-update-a-plot-in-a-loop-in-ipython-notebook-within-one-cell","best_answers_votes":147,"tags":["python","pandas","matplotlib","jupyter-notebook","ipython"],"question_length":1561,"response_length":251,"tag_count":5} {"question":"Plot mean and standard deviation I have several values of a function at different x points. I want to plot the mean and std in python, like the answer of this SO question. I know this must be easy using matplotlib, but I have no idea of the function's name that can do that. Does anyone know it?","response":"plt.errorbar can be used to plot x, y, error data (as opposed to the usual plt.plot) \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.array([1, 2, 3, 4, 5])\ny = np.power(x, 2) # Effectively y = x**2\ne = np.array([1.5, 2.6, 3.7, 4.6, 5.5])\n\nplt.errorbar(x, y, e, linestyle='None', marker='^')\n\nplt.show()\n```\nplt.errorbar accepts the same arguments as plt.plot with additional yerr and xerr which default to None (i.e. if you leave them blank it will act as plt.plot).","best_answers_score":0.6755,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22481854\/plot-mean-and-standard-deviation","best_answers_votes":125,"tags":["python","matplotlib","plot"],"question_length":295,"response_length":482,"tag_count":3} {"question":"Matplotlib remove patches from figure In my case, I want to remove one of the circle when clicking reset button. However, ax.clear() would clear all circles on the current figure. Can someone tell me how to remove only part of the patches? 
\n```\nimport matplotlib.patches as patches\nimport matplotlib.pyplot as plt\nfrom matplotlib.widgets import Button\n\nfig = plt.figure()\nax = fig.add_subplot(111) \n\ncircle1 = patches.Circle((0.3, 0.3), 0.03, fc='r', alpha=0.5)\ncircle2 = patches.Circle((0.4, 0.3), 0.03, fc='r', alpha=0.5)\nbutton = Button(plt.axes([0.8, 0.025, 0.1, 0.04]), 'Reset', color='g', hovercolor='0.975')\nax.add_patch(circle1)\nax.add_patch(circle2)\n\ndef reset(event):\n '''what to do here'''\n ax.clear()\n\nbutton.on_clicked(reset)\nplt.show()\n```","response":"Try this: \n```\ndef reset(event):\n circle1.remove()\n```\nAlso maybe you prefer: \n```\ndef reset(event):\n circle1.set_visible(False)\n```","best_answers_score":0.6751,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21687571\/matplotlib-remove-patches-from-figure","best_answers_votes":33,"tags":["python","matplotlib","interactive"],"question_length":759,"response_length":138,"tag_count":3} {"question":"pandas pie chart plot remove the label text on the wedge the pie chart example on pandas plotting tutorial http:\/\/pandas.pydata.org\/pandas-docs\/version\/0.15.0\/visualization.html generates the following figure: with this code: \n```\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\nimport numpy as np\nnp.random.seed(123456)\n\n\nimport pandas as pd\ndf = pd.DataFrame(3 * np.random.rand(4, 2), index=['a', 'b', 'c', 'd'], columns=['x', 'y'])\n\nf, axes = plt.subplots(1,2, figsize=(10,5))\nfor ax, col in zip(axes, df.columns):\n df[col].plot(kind='pie', autopct='%.2f', labels=df.index, ax=ax, title=col, fontsize=10)\n ax.legend(loc=3)\n\nplt.show()\n```\nI want to remove the text label (a,b,c,d) from both subplots, because for my application those label are long, so I only want to show them in legend. After read this: How to add a legend to matplotlib pie chart?, I figure out an way with matplotlib.pyplot.pie but the figure is not as fancy even if i am still using ggplot. \n```\nf, axes = plt.subplots(1,2, figsize=(10,5))\nfor ax, col in zip(axes, df.columns):\n patches, text, _ = ax.pie(df[col].values, autopct='%.2f')\n ax.legend(patches, labels=df.index, loc='best')\n```\nMy question is, is there a way that can combine the things I want from both side? to be clear, I want the fanciness from pandas, but remove the text from the wedges. Thank you","response":"You can turn off the labels in the chart, and then define them within the call to legend: \n```\ndf[col].plot(kind='pie', autopct='%.2f', labels=['','','',''], ax=ax, title=col, fontsize=10)\nax.legend(loc=3, labels=df.index)\n```\nor \n```\n... labels=None ...\n```","best_answers_score":0.6737,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/30059862\/pandas-pie-chart-plot-remove-the-label-text-on-the-wedge","best_answers_votes":63,"tags":["pandas","matplotlib","legend","pie-chart"],"question_length":1361,"response_length":259,"tag_count":4} {"question":"How to create a heat map in python that ranges from green to red? I'm trying to plot log ratios from the range -3 to 3 and want negative ratios to be green and positive to be red, with a log ratio of 0 (center) to be white in color. None of the pre-existing color schemes in matplotlib provide this option, and I haven't been able to figure out how to output a nice gradient manually.","response":"Using matplotlib.colors.LinearSegmentedColormap's from_list method seems more intuitive than some of the other answers here. 
\n```\nfrom matplotlib.colors import LinearSegmentedColormap\ncmap=LinearSegmentedColormap.from_list('rg',[\"r\", \"w\", \"g\"], N=256)\n```\nOr for more sophisticated tuning: \n```\nfrom matplotlib.colors import LinearSegmentedColormap\nc = [\"darkred\",\"red\",\"lightcoral\",\"white\", \"palegreen\",\"green\",\"darkgreen\"]\nv = [0,.15,.4,.5,0.6,.9,1.]\nl = list(zip(v,c))\ncmap=LinearSegmentedColormap.from_list('rg',l, N=256)\n```","best_answers_score":0.6735,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38246559\/how-to-create-a-heat-map-in-python-that-ranges-from-green-to-red","best_answers_votes":56,"tags":["python","matplotlib","heatmap"],"question_length":384,"response_length":531,"tag_count":3} {"question":"cumulative distribution plots python I am doing a project using python where I have two arrays of data. Let's call them pc and pnc. I am required to plot a cumulative distribution of both of these on the same graph. For pc it is supposed to be a less than plot i.e. at (x,y), y points in pc must have value less than x. For pnc it is to be a more than plot i.e. at (x,y), y points in pnc must have value more than x. I have tried using histogram function - pyplot.hist. Is there a better and easier way to do what i want? Also, it has to be plotted on a logarithmic scale on the x-axis.","response":"You were close. You should not use plt.hist as numpy.histogram, that gives you both the values and the bins, than you can plot the cumulative with ease: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# some fake data\ndata = np.random.randn(1000)\n# evaluate the histogram\nvalues, base = np.histogram(data, bins=40)\n#evaluate the cumulative\ncumulative = np.cumsum(values)\n# plot the cumulative function\nplt.plot(base[:-1], cumulative, c='blue')\n#plot the survival function\nplt.plot(base[:-1], len(data)-cumulative, c='green')\n\nplt.show()\n```","best_answers_score":0.6733,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15408371\/cumulative-distribution-plots-python","best_answers_votes":50,"tags":["python","python-3.x","matplotlib"],"question_length":586,"response_length":549,"tag_count":3} {"question":"Matplotlib coord. sys origin to top left How can I flip the origin of a matplotlib plot to be in the upper-left corner - as opposed to the default lower-left? I'm using matplotlib.pylab.plot to produce the plot (though if there is another plotting routine that is more flexible, please let me know). I'm looking for the equivalent of the matlab command: axis ij; Also, I've spent a couple hours surfing matplotlib help and google but haven't come up with an answer. Some info on where I could have looked up the answer would be helpful as well.","response":"The easiest way is to use: \n```\nplt.gca().invert_yaxis()\n```\nAfter you plotted the image. Origin works only for imshow.","best_answers_score":0.6711,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1349230\/matplotlib-coord-sys-origin-to-top-left","best_answers_votes":37,"tags":["python","matplotlib","axes"],"question_length":544,"response_length":119,"tag_count":3} {"question":"Matplotlib vline label parameter not showing I want to label my vertical lines with matplotlib's .vline command, but for some reason the label parameter doesn't do anything\/show anything on the final plot. Does anyone know how to get the label to show? 
\n```\nplt.vlines(x=pah, ymin=0, ymax=0.6, colors='0.75', linestyles='dashed', label='PAHs')\n```\nEverything works apart from the label. Many thanks, L","response":"The label keyword is displayed in the legend. You need create the legend explicitly to see the label in the plot: \n```\nplt.vlines([1,2,3], 0, 1, label='test')\nplt.legend()\n```","best_answers_score":0.6708,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27154793\/matplotlib-vline-label-parameter-not-showing","best_answers_votes":31,"tags":["python","matplotlib","enthought"],"question_length":401,"response_length":175,"tag_count":3} {"question":"Keep plotting window open in Matplotlib When writing scripts that use matplotlib, I temporally get an interactive graphing window when I run the script, which immediately goes away before I can view the plot. If I execute the same code interactively inside iPython, the graphing window stays open. How can I get matplotlib to keep a plot open once it is produces a graph when I run a script? For example, I can save this plot, but I cannot display it with show(): \n```\nfrom matplotlib import pyplot as plt\nimport scipy as sp\n\nx = sp.arange(10)\ny = sp.arange(10)\n\nplt.plot(x,y)\nplt.show()\n```","response":"According to the documentation, there's an experimental block parameter you can pass to plt.show(). Of course, if your version of matplotlib isn't new enough, it won't have this. If you have this feature, you should be able to replace plt.show() with plt.show(block=True) to get your desired behavior.","best_answers_score":0.6704,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12358312\/keep-plotting-window-open-in-matplotlib","best_answers_votes":52,"tags":["python","matplotlib"],"question_length":593,"response_length":301,"tag_count":2} {"question":"How do I plot Shapely polygons and objects using Matplotlib? I want to use Shapely for my computational geometry project. I need to be able to visualize and display polygons, lines, and other geometric objects for this. I've tried to use Matplotlib for this but I am having trouble with it. \n```py\nfrom shapely.geometry import Polygon\nimport matplotlib.pyplot as plt\n\npolygon1 = Polygon([(0,5),\n (1,1),\n (3,0),\n ])\n\nplt.plot(polygon1)\nplt.show()\n```\nI would like to be able to display this polygon in a plot. How would I change my code to do this?","response":"Use: \n```\nimport matplotlib.pyplot as plt\n\nx,y = polygon1.exterior.xy\nplt.plot(x,y)\n```\nOr, more succinctly: \n```\nplt.plot(*polygon1.exterior.xy)\n```","best_answers_score":0.6703,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/55522395\/how-do-i-plot-shapely-polygons-and-objects-using-matplotlib","best_answers_votes":144,"tags":["python","matplotlib","shapely"],"question_length":604,"response_length":149,"tag_count":3} {"question":"How to plot gamma distribution with alpha and beta parameters in python I want to plot a gamma distribution with alpha = 29 (the scale) and beta = 3 (the size). In other words, I want to plot the pdf for Gamma(29,3). How do I do this if according to the documentation, the python gamma function only has parameters a and x and the size parameter doesn't exist? I thought loc was beta, but I think it's actually offset, so the code below is wrong... 
\n```\nimport numpy as np\nimport scipy.stats as stats \nfrom matplotlib import pyplot as plt\n\nx = np.linspace (0, 100, 200) \ny1 = stats.gamma.pdf(x, a=29, loc=3) #a is alpha, loc is beta???\nplt.plot(x, y1, \"y-\", label=(r'$\\alpha=29, \\beta=3$')) \n\n\nplt.ylim([0,0.08])\nplt.xlim([0,150])\nplt.show()\n```","response":"According to the documentation, you want to use the scale parameter (theta), but since you are defining beta, which is the inverse of theta, then you pass scale with the value of 1\/beta, which in your example would be 1\/3 or 0.33333. Therefore, try: \n```\ny1 = stats.gamma.pdf(x, a=29, scale=0.33333)\n```","best_answers_score":0.6699,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42150965\/how-to-plot-gamma-distribution-with-alpha-and-beta-parameters-in-python","best_answers_votes":27,"tags":["python","numpy","matplotlib","scipy","gamma-distribution"],"question_length":745,"response_length":303,"tag_count":5} {"question":"matplotlib bar graph black - how do I remove bar borders I'm using pyplot.bar but I'm plotting so many points that the color of the bars is always black. This is because the borders of the bars are black and there are so many of them that they are all squished together so that all you see is the borders (black). Is there a way to remove the bar borders so that I can see the intended color?","response":"Set the edgecolor to \"none\": bar(..., edgecolor = \"none\")","best_answers_score":0.6698,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15904042\/matplotlib-bar-graph-black-how-do-i-remove-bar-borders","best_answers_votes":141,"tags":["python","graph","matplotlib","border"],"question_length":392,"response_length":57,"tag_count":4} {"question":"How to create a density plot In R I can create the desired output by doing: \n```\ndata = c(rep(1.5, 7), rep(2.5, 2), rep(3.5, 8),\n rep(4.5, 3), rep(5.5, 1), rep(6.5, 8))\nplot(density(data, bw=0.5))\n```\nIn python (with matplotlib) the closest I got was with a simple histogram: \n```\nimport matplotlib.pyplot as plt\ndata = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8\nplt.hist(data, bins=6)\nplt.show()\n```\nI also tried the normed=True parameter but couldn't get anything other than trying to fit a gaussian to the histogram. My latest attempts were around scipy.stats and gaussian_kde, following examples on the web, but I've been unsuccessful so far.","response":"Five years later, when I Google \"how to create a kernel density plot using python\", this thread still shows up at the top! Today, a much easier way to do this is to use seaborn, a package that provides many convenient plotting functions and good style management. \n```\nimport numpy as np\nimport seaborn as sns\ndata = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8\nsns.set_style('whitegrid')\nsns.kdeplot(np.array(data), bw=0.5)\n```","best_answers_score":0.6693,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/4150171\/how-to-create-a-density-plot","best_answers_votes":205,"tags":["python","numpy","matplotlib","scipy","kernel-density"],"question_length":669,"response_length":441,"tag_count":5} {"question":"matplotlib - increase resolution to see details I have a big process that is composed of tasks (about 600), and I created a figure to watch the order they are launched with and the time they take. To do this, I used matplotlib and a barh. 
The figure is ok (my 1st matplotlib success !), but: I would like to see the details and zoom on the picture when exported (as PNG, for instance), as the zoom option allows when matplotlib displays the result with the show() command the legends of the Y axis are too close and unreadable I tried to increase the resolution as said in this other SO post, this is better but details are not precise enough. Here are my results so far: full result zoom with matplotlib Do you know how I could improve readability ? Thanks a lot (else, all my efforts would be useless, I'm afraid...) PS: I'm using matplotlib 1.1.1 and python 2.7.","response":"I managed to do so, on Evert's advice, by using a very big resolution with a very small font. Here are the most important steps: \n```\nimport pylab as pl\npl.figure(figsize=(70, 70)) # This increases resolution\npl.savefig('test.eps', format='eps', dpi=900) # This does, too\npl.annotate(..., fontsize='xx-small', ...)\n```","best_answers_score":0.669,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12192661\/matplotlib-increase-resolution-to-see-details","best_answers_votes":43,"tags":["python","matplotlib"],"question_length":865,"response_length":318,"tag_count":2} {"question":"vertical & horizontal lines in matplotlib I do not quite understand why I am unable to create horizontal and vertical lines at specified limits. I would like to bound the data by this box. However, the sides do not seem to comply with my instructions. Why is this? \n```\n# CREATING A BOUNDING BOX\n# BOTTOM HORIZONTAL\nplt.axhline(y=.4, xmin=0.25, xmax=0.402, linewidth=2, color = 'k')\n# RIGHT VERTICAL\nplt.axvline(x=0.402, ymin=0.4, ymax = 0.615, linewidth=2, color='k')\n# LEFT VERTICAL\nplt.axvline(x=0.1, ymin=0.58, ymax = 0.79, linewidth=2, color='k')\nplt.show()\n```","response":"The pyplot functions you are calling, axhline() and axvline() draw lines that span a portion of the axis range, regardless of coordinates. The parameters xmin or ymin use value 0.0 as the minimum of the axis and 1.0 as the maximum of the axis. Instead, use plt.plot((x1, x2), (y1, y2), 'k-') to draw a line from the point (x1, y1) to the point (x2, y2) in color k. See pyplot.plot.","best_answers_score":0.6678,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16930328\/vertical-horizontal-lines-in-matplotlib","best_answers_votes":182,"tags":["matplotlib"],"question_length":566,"response_length":381,"tag_count":1} {"question":"Get data points from Seaborn distplot I use \n```\nsns.distplot\n```\nto plot a univariate distribution of observations. Still, I need not only the chart, but also the data points. How do I get the data points from matplotlib Axes (returned by distplot)?","response":"You can use the matplotlib.patches API. For instance, to get the first line: \n```\nsns.distplot(x).get_lines()[0].get_data()\n```\nThis returns two numpy arrays containing the x and y values for the line. 
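If you want to re-use the extracted curve, the two arrays can be unpacked and re-plotted directly - a rough, untested sketch, where x stands in for your data: \n```\nimport matplotlib.pyplot as plt\n\n# grab the KDE line drawn by distplot and redraw it on a fresh figure\nxs, ys = sns.distplot(x).get_lines()[0].get_data()\nplt.figure()\nplt.plot(xs, ys)\n```\n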
For the bars, information is stored in: \n```\nsns.distplot(x).patches\n```\nYou can access the bar's height via the function patches.get_height(): \n```\n[h.get_height() for h in sns.distplot(x).patches]\n```","best_answers_score":0.6677,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/37374983\/get-data-points-from-seaborn-distplot","best_answers_votes":31,"tags":["python","matplotlib","seaborn"],"question_length":250,"response_length":404,"tag_count":3} {"question":"How to remove gaps between bars in a bar chart I'm making a bar chart in Matplotlib with a call like this: \n```\nxs.bar(bar_lefts, bar_heights, facecolor='black', edgecolor='black')\n```\nI get a barchart that looks like this: What I'd like is one with no white gap between consecutive bars, e.g. more like this: Is there a way to achieve this in Matplotlib using the bar() function?","response":"Add width=1.0 as a keyword argument to bar(). E.g. xs.bar(bar_lefts, bar_heights, width=1.0, facecolor='black', edgecolor='black'). This will fill the bars gaps vertically.","best_answers_score":0.6674,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/20454120\/how-to-remove-gaps-between-bars-in-a-bar-chart","best_answers_votes":63,"tags":["python","matplotlib","bar-chart"],"question_length":380,"response_length":172,"tag_count":3} {"question":"adding extra axis ticks using matplotlib I have a simple plot code as \n```\nplt.plot(x,y)\nplt.show()\n```\nI want to add some extra ticks on the x-axis in addition to the current ones, let's say at \n```\nextraticks=[2.1, 3, 7.6]\n```\nAs you see I do not have a pattern for ticks so I do not want to increase the tick frequency for the whole axis; just keep the original ones and add those extras... Is it possible, at all? Regards","response":"Yes, you can try something like: \n```\nplt.xticks(list(plt.xticks()[0]) + extraticks)\n```\nThe function to use is xticks(). When called without arguments, it returns the current ticks. Calling it with arguments, you can set the tick positions and, optionally, labels.","best_answers_score":0.6671,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14716660\/adding-extra-axis-ticks-using-matplotlib","best_answers_votes":108,"tags":["python","matplotlib","plot"],"question_length":425,"response_length":265,"tag_count":3} {"question":"Is it possible to add a string as a legend item I am producing some plots in matplotlib and would like to add explanatory text for some of the data. I want to have a string inside my legend as a separate legend item above the '0-10' item. Does anyone know if there is a possible way to do this? This is the code for my legend: ax.legend(['0-10','10-100','100-500','500+'],loc='best')","response":"Alternative solution, kind of dirty but pretty quick. 
\n```\nimport pylab as plt\n\nX = range(50)\nY = range(50)\nplt.plot(X, Y, label=\"Very straight line\")\n\n# Create empty plot with blank marker containing the extra label\nplt.plot([], [], ' ', label=\"Extra label on the legend\")\n\nplt.legend()\nplt.show()\n```","best_answers_score":0.6667,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/16826711\/is-it-possible-to-add-a-string-as-a-legend-item","best_answers_votes":124,"tags":["python","matplotlib","pandas","legend","legend-properties"],"question_length":383,"response_length":302,"tag_count":5} {"question":"How to skip empty dates (weekends) in a financial plot \n```\nax.plot_date((dates, dates), (highs, lows), '-')\n```\nI'm currently using this command to plot financial highs and lows using Matplotlib. It works great, but how do I remove the blank spaces in the x-axis left by days without market data, such as weekends and holidays? I have lists of dates, highs, lows, closes and opens. I can't find any examples of creating a graph with an x-axis that show dates but doesn't enforce a constant scale.","response":"Up to date answer (2018) with Matplotlib 2.1.2, Python 2.7.12 The function equidate_ax handles everything you need for a simple date x-axis with equidistant spacing of data points. Realised with ticker.FuncFormatter based on this example. \n```\nfrom __future__ import division\nfrom matplotlib import pyplot as plt\nfrom matplotlib.ticker import FuncFormatter\nimport numpy as np\nimport datetime\n\n\ndef equidate_ax(fig, ax, dates, fmt=\"%Y-%m-%d\", label=\"Date\"):\n \"\"\"\n Sets all relevant parameters for an equidistant date-x-axis.\n Tick Locators are not affected (set automatically)\n\n Args:\n fig: pyplot.figure instance\n ax: pyplot.axis instance (target axis)\n dates: iterable of datetime.date or datetime.datetime instances\n fmt: Display format of dates\n label: x-axis label\n Returns:\n None\n\n \"\"\" \n N = len(dates)\n def format_date(index, pos):\n index = np.clip(int(index + 0.5), 0, N - 1)\n return dates[index].strftime(fmt)\n ax.xaxis.set_major_formatter(FuncFormatter(format_date))\n ax.set_xlabel(label)\n fig.autofmt_xdate()\n\n\n#\n# Some test data (with python dates)\n#\ndates = [datetime.datetime(year, month, day) for year, month, day in [\n (2018,2,1), (2018,2,2), (2018,2,5), (2018,2,6), (2018,2,7), (2018,2,28)\n]]\ny = np.arange(6)\n\n\n# Create plots. Left plot is default with a gap\nfig, [ax1, ax2] = plt.subplots(1, 2)\nax1.plot(dates, y, 'o-')\nax1.set_title(\"Default\")\nax1.set_xlabel(\"Date\")\n\n\n# Right plot will show equidistant series\n# x-axis must be the indices of your dates-list\nx = np.arange(len(dates))\nax2.plot(x, y, 'o-')\nax2.set_title(\"Equidistant Placement\")\nequidate_ax(fig, ax2, dates)\n```","best_answers_score":0.6667,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/1273472\/how-to-skip-empty-dates-weekends-in-a-financial-plot","best_answers_votes":7,"tags":["python","matplotlib","graph","time-series","finance"],"question_length":497,"response_length":1691,"tag_count":5} {"question":"Second y-axis label getting cut off I'm trying to plot two sets of data in a bar graph with matplotlib, so I'm using two axes with the twinx() method. However, the second y-axis label gets cut off. I've tried a few different methods with no success (tight_layout(), setting the major_pads in rcParams, etc...). I feel like the solution is simple, but I haven't come across it yet. 
Here's a MWE: \n```\n#!\/usr\/bin\/env python\nimport numpy as np\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\nmatplotlib.rcParams.update({'font.size': 21})\nax = plt.gca()\nplt.ylabel('Data1') #Left side\nax2 = ax.twinx()\nfor i in range(10):\n    if(i%2==0):\n        ax.bar(i,np.random.randint(10))\n    else:\n        ax2.bar(i,np.random.randint(1000),color='k')\n\n\nplt.ylabel('Data2') #Right side\nplt.savefig(\"test.png\")\n```","response":"I just figured it out: the trick is to use bbox_inches='tight' in savefig. E.g. plt.savefig(\"test.png\", bbox_inches='tight')","best_answers_score":0.6658,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/21288062\/second-y-axis-label-getting-cut-off","best_answers_votes":146,"tags":["python","graph","matplotlib"],"question_length":811,"response_length":123,"tag_count":3}
{"question":"Plot a horizontal line on a given plot How do I add a horizontal line to an existing plot?","response":"Use axhline (a horizontal axis line). For example, this plots a horizontal line at y = 0.5: \n```\nimport matplotlib.pyplot as plt\nplt.axhline(y=0.5, color='r', linestyle='-')\nplt.show()\n```","best_answers_score":0.6646,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33382619\/plot-a-horizontal-line-on-a-given-plot","best_answers_votes":919,"tags":["python","pandas","matplotlib","seaborn"],"question_length":90,"response_length":188,"tag_count":4}
{"question":"Specify format of floats for tick labels I am trying to set the format to two decimal numbers in a matplotlib subplot environment. Unfortunately, I do not have any idea how to solve this task. To prevent using scientific notation on the y-axis I used ScalarFormatter(useOffset=False) as you can see in my snippet below. I think my task should be solved by passing further options\/arguments to the used formatter. However, I could not find any hint in matplotlib's documentation. How can I set two decimal digits or none (both cases are needed)? I am not able to provide sample data, unfortunately. 
-- SNIPPET -- \n```\nf, axarr = plt.subplots(3, sharex=True)\n\ndata = conv_air\nx = range(0, len(data))\n\naxarr[0].scatter(x, data)\naxarr[0].set_ylabel('$T_\\mathrm{air,2,2}$', size=FONT_SIZE)\naxarr[0].yaxis.set_major_locator(MaxNLocator(5))\naxarr[0].yaxis.set_major_formatter(ScalarFormatter(useOffset=False))\naxarr[0].tick_params(direction='out', labelsize=FONT_SIZE)\naxarr[0].grid(which='major', alpha=0.5)\naxarr[0].grid(which='minor', alpha=0.2)\n\ndata = conv_dryer\nx = range(0, len(data))\n\naxarr[1].scatter(x, data)\naxarr[1].set_ylabel('$T_\\mathrm{dryer,2,2}$', size=FONT_SIZE)\naxarr[1].yaxis.set_major_locator(MaxNLocator(5))\naxarr[1].yaxis.set_major_formatter(ScalarFormatter(useOffset=False))\naxarr[1].tick_params(direction='out', labelsize=FONT_SIZE)\naxarr[1].grid(which='major', alpha=0.5)\naxarr[1].grid(which='minor', alpha=0.2)\n\ndata = conv_lambda\nx = range(0, len(data))\n\naxarr[2].scatter(x, data)\naxarr[2].set_xlabel('Iterationsschritte', size=FONT_SIZE)\naxarr[2].xaxis.set_major_locator(MaxNLocator(integer=True))\naxarr[2].set_ylabel('$\\lambda$', size=FONT_SIZE)\naxarr[2].yaxis.set_major_formatter(ScalarFormatter(useOffset=False))\naxarr[2].yaxis.set_major_locator(MaxNLocator(5))\naxarr[2].tick_params(direction='out', labelsize=FONT_SIZE)\naxarr[2].grid(which='major', alpha=0.5)\naxarr[2].grid(which='minor', alpha=0.2)\n```","response":"See the relevant documentation in general and specifically \n```\nfrom matplotlib.ticker import FormatStrFormatter\n\nfig, ax = plt.subplots()\n\nax.yaxis.set_major_formatter(FormatStrFormatter('%.2f'))\n```","best_answers_score":0.6644,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29188757\/specify-format-of-floats-for-tick-labels","best_answers_votes":222,"tags":["python","matplotlib"],"question_length":1929,"response_length":200,"tag_count":2} {"question":"Labeling boxplot in seaborn with median value How can I label each boxplot in a seaborn plot with the median value? E.g. \n```\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\ntips = sns.load_dataset(\"tips\")\nax = sns.boxplot(x=\"day\", y=\"total_bill\", data=tips)\n```\nHow do I label each boxplot with the median or average value?","response":"I love when people include sample datasets! \n```\nimport seaborn as sns\n\nsns.set_style(\"whitegrid\")\ntips = sns.load_dataset(\"tips\")\nbox_plot = sns.boxplot(x=\"day\",y=\"total_bill\",data=tips)\n\nmedians = tips.groupby(['day'])['total_bill'].median()\nvertical_offset = tips['total_bill'].median() * 0.05 # offset from median for display\n\nfor xtick in box_plot.get_xticks():\n box_plot.text(xtick,medians[xtick] + vertical_offset,medians[xtick], \n horizontalalignment='center',size='x-small',color='w',weight='semibold')\n```","best_answers_score":0.6644,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38649501\/labeling-boxplot-in-seaborn-with-median-value","best_answers_votes":84,"tags":["python","matplotlib","seaborn"],"question_length":325,"response_length":529,"tag_count":3} {"question":"Editing the date formatting of x-axis tick labels I am looking to edit the formatting of the dates on the x-axis. The picture below shows how they appear on my bar graph by default. I would like to remove the repetition of 'Dec' and '2012' and just have the actual date numbers along the x-axis. 
Any suggestions as to how I can do this?","response":"In short: \n```\nimport matplotlib.dates as mdates\nmyFmt = mdates.DateFormatter('%d')\nax.xaxis.set_major_formatter(myFmt)\n```\nMany examples on the matplotlib website. The one I most commonly use is here","best_answers_score":0.6642,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14946371\/editing-the-date-formatting-of-x-axis-tick-labels","best_answers_votes":161,"tags":["python","matplotlib","xticks"],"question_length":336,"response_length":200,"tag_count":3} {"question":"Draw curved arrow that looks just like pyplot.arrow I use pyplot.arrow do draw some straight arrows, e.g., \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nv={}\nfor i in range (1,4):\n v[i]=np.array([np.cos(-2*np.pi\/3*i),np.sin(-2*np.pi\/3*i)])\n\nplt.arrow(.85*(.05*v[2]+.95*v[1])[0],.85*(.05*v[2]+.95*v[1])[1],.85*.9*(v[2]-v[1])[0],.85*.9*(v[2]-v[1])[1],width=0,head_width=.03,head_length=.045,length_includes_head=True,color=\"black\")\nplt.arrow(.85*(.05*v[3]+.95*v[2])[0],.85*(.05*v[3]+.95*v[2])[1],.85*.9*(v[3]-v[2])[0],.85*.9*(v[3]-v[2])[1],width=0,head_width=.03,head_length=.045,length_includes_head=True,color=\"black\")\nplt.arrow(.85*(.05*v[1]+.95*v[3])[0],.85*(.05*v[1]+.95*v[3])[1],.85*.9*(v[1]-v[3])[0],.85*.9*(v[1]-v[3])[1],width=0,head_width=.03,head_length=.045,length_includes_head=True,color=\"black\")\n\nplt.axes().set_xlim(-.5,1)\nplt.axes().set_ylim(-np.sqrt(3)\/2,np.sqrt(3)\/2)\nplt.axes().set_aspect(1)\nplt.show()\n```\nNow I want to also draw some arrows that have circular curvature instead of being straight. I see that I can achieve this with pyplot.annotate() or patches.FancyArrowPatch with connectionstyle=\"arc3,rad=.5\" or so. But these arrows look completely different from the pyplot.arrows and do not fit with the rest of my figures. And I don't know how I could pass something like connectionstyle to pyplot.arrow. Is there a way to draw curved arrows that look exactly like those that I get from pyplot.arrow?","response":"You cannot plot curved arrows with pyplot.arrow. However, patches.FancyArrowPatch should offer all the options to get any arrow style you want, so the idea would be to use a FancyArrowPatch for the straight arrows as well, such that you can use the same style for all arrows. \n```\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\n\nplt.axes().set_xlim(-.5,0.5)\nplt.axes().set_ylim(-0.9,0.7)\nplt.axes().set_aspect(1)\n\nstyle = \"Simple, tail_width=0.5, head_width=4, head_length=8\"\nkw = dict(arrowstyle=style, color=\"k\")\n\na1 = patches.FancyArrowPatch((-0.4, -0.6), (0, 0.6), **kw)\na2 = patches.FancyArrowPatch((0, 0.6), (0.4, -0.6), **kw)\na3 = patches.FancyArrowPatch((-0.4, -0.6), (0.4, -0.6),\n connectionstyle=\"arc3,rad=.5\", **kw)\n\nfor a in [a1, a2, a3]:\n plt.gca().add_patch(a)\nplt.show()\n```","best_answers_score":0.6639,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/44526103\/draw-curved-arrow-that-looks-just-like-pyplot-arrow","best_answers_votes":47,"tags":["python","matplotlib"],"question_length":1439,"response_length":842,"tag_count":2} {"question":"How to create a matplotlib bar chart with a threshold line? I'd like to know how to create a matplotlib bar chart with a threshold line, the part of bars above threshold line should have red color, and the parts below the threshold line should be green. 
Please provide me a simple example, I couldn't find anything on the web.","response":"You can simply use axhline like this. See this documentation \n```\n# For your case\nplt.axhline(y=threshold,linewidth=1, color='k')\n\n# Another example - You can also define xmin and xmax\nplt.axhline(y=5, xmin=0.5, xmax=3.5)\n```","best_answers_score":0.6626,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/28129606\/how-to-create-a-matplotlib-bar-chart-with-a-threshold-line","best_answers_votes":39,"tags":["python","matplotlib","charts"],"question_length":326,"response_length":225,"tag_count":3} {"question":"Pandas Plotting with Multi-Index After performing a groupby.sum() on a DataFrame I'm having some trouble trying to create my intended plot. \n```py\nimport pandas as pd\nimport numpy as np\n\nnp.random.seed(365)\nrows = 100\ndata = {'Month': np.random.choice(['2014-01', '2014-02', '2014-03', '2014-04'], size=rows),\n 'Code': np.random.choice(['A', 'B', 'C'], size=rows),\n 'ColA': np.random.randint(5, 125, size=rows),\n 'ColB': np.random.randint(0, 51, size=rows),}\ndf = pd.DataFrame(data)\n\n Month Code ColA ColB\n0 2014-03 C 59 47\n1 2014-01 A 24 9\n2 2014-02 C 77 50\n\ndfg = df.groupby(['Code', 'Month']).sum()\n\n ColA ColB\nCode Month \nA 2014-01 124 102\n 2014-02 398 282\n 2014-03 474 198\n 2014-04 830 237\nB 2014-01 477 300\n 2014-02 591 167\n 2014-03 522 192\n 2014-04 367 169\nC 2014-01 412 180\n 2014-02 275 205\n 2014-03 795 291\n 2014-04 901 309\n```\nHow can I create a subplot (kind='bar') for each Code, where the x-axis is the Month and the bars are ColA and ColB?","response":"I found the unstack(level) method to work perfectly, which has the added benefit of not needing a priori knowledge about how many Codes there are. \n```py\nax = dfg.unstack(level=0).plot(kind='bar', subplots=True, rot=0, figsize=(9, 7), layout=(2, 3))\nplt.tight_layout()\n```","best_answers_score":0.6624,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25386870\/pandas-plotting-with-multi-index","best_answers_votes":143,"tags":["python","pandas","matplotlib","seaborn","bar-chart"],"question_length":1131,"response_length":272,"tag_count":5} {"question":"Scatterplot without linear fit in seaborn I am wondering if there is a way to turn off the linear fit in seaborn's lmplot or if there is an equivalent function that just produces the scatterplot. Sure, I could also use matplotlib, however, I find the syntax and aesthetics in seaborn quite appealing. E.g,. 
I want to plot the following plot \n```\nimport seaborn as sns\nsns.set(style=\"ticks\")\n\ndf = sns.load_dataset(\"anscombe\")\nsns.lmplot(\"x\", \"y\", data=df, hue='dataset')\n```\nWithout the linear fit like so: \n```\nfrom itertools import cycle\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n\ncolor_gen = cycle(('blue', 'lightgreen', 'red', 'purple', 'gray', 'cyan'))\n\nfor lab in np.unique(df['dataset']):\n plt.scatter(df.loc[df['dataset'] == lab, 'x'], \n df.loc[df['dataset'] == lab, 'y'], \n c=next(color_gen),\n label=lab)\n\nplt.legend(loc='best')\n```","response":"set fit_reg argument to False: \n```\nsns.lmplot(\"x\", \"y\", data=df, hue='dataset', fit_reg=False)\n```","best_answers_score":0.6613,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29637150\/scatterplot-without-linear-fit-in-seaborn","best_answers_votes":48,"tags":["python","matplotlib","seaborn"],"question_length":899,"response_length":99,"tag_count":3} {"question":"How to draw axis in the middle of the figure? I want to draw a figure in matplotib where the axis are displayed within the plot itself not on the side I have tried the following code from here: \n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef sigmoid(x):\n a = []\n for item in x:\n a.append(1\/(1+math.exp(-item)))\n return a\n\nx = np.arange(-10., 10., 0.2)\nsig = sigmoid(x)\n\nplt.plot(x,sig)\nplt.show()\n```\nThe above code displays the figure like this: What I would like to draw is something as follows (image from Wikipedia) This question describes a similar problem, but it draws a reference line in the middle but no axis.","response":"One way to do it is using spines: \n```\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef sigmoid(x):\n a = []\n for item in x:\n a.append(1\/(1+math.exp(-item)))\n return a\n\n \nx = np.arange(-10., 10., 0.2)\nsig = sigmoid(x)\n\nfig = plt.figure()\nax = fig.add_subplot(1, 1, 1)\n\n# Move left y-axis and bottom x-axis to centre, passing through (0,0)\nax.spines['left'].set_position('center')\nax.spines['bottom'].set_position('center')\n\n# Eliminate upper and right axes\nax.spines['right'].set_color('none')\nax.spines['top'].set_color('none')\n\n# Show ticks in the left and lower axes only\nax.xaxis.set_ticks_position('bottom')\nax.yaxis.set_ticks_position('left')\n\nplt.plot(x,sig)\nplt.show()\n```\nshows:","best_answers_score":0.6604,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/31556446\/how-to-draw-axis-in-the-middle-of-the-figure","best_answers_votes":74,"tags":["python","matplotlib","axis"],"question_length":660,"response_length":724,"tag_count":3} {"question":"Matplotlib legends in subplot I would like to put legends inside each one of the subplots below. I've tried with plt.legend but it didn't work. \n```\nf, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=True)\nax1.plot(xtr, color='r', label='Blue stars')\nax2.plot(ytr, color='g')\nax3.plot(ztr, color='b')\nax1.set_title('2012\/09\/15')\nplt.legend([ax1, ax2, ax3],[\"HHZ 1\", \"HHN\", \"HHE\"])\nplt.show()\n```\nWith the suggestion from atomh33ls: \n```\nax1.legend(\"HHZ 1\",loc=\"upper right\")\nax2.legend(\"HHN\",loc=\"upper right\")\nax3.legend(\"HHE\",loc=\"upper right\")\n```\nThe legend position is fixed, however it seems to have a problem with the strings, because each letter is placed in a new line. 
Does anyone know how to fix it?","response":"This should work: \n```\nax1.plot(xtr, color='r', label='HHZ 1')\nax1.legend(loc=\"upper right\")\nax2.plot(xtr, color='r', label='HHN')\nax2.legend(loc=\"upper right\")\nax3.plot(xtr, color='r', label='HHE')\nax3.legend(loc=\"upper right\")\n```","best_answers_score":0.6601,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27016904\/matplotlib-legends-in-subplot","best_answers_votes":128,"tags":["python","matplotlib"],"question_length":717,"response_length":232,"tag_count":2}
{"question":"Image is not displaying in Google Colab while using imshow() I am working on a project which requires functions from OpenCV to plot images. I am trying to display an image using the below code in Google Colab. But nothing shows up in the output. Can anybody help me with this? \n```\n%pylab notebook\nimport cv2\n\ntestim = imread('butterfly.jpg')\nfigure()\nimshow(testim)\nplt.show()\n```\nScreenshot: Link to my Colab Notebook","response":"Google Colab crashes if you try to display an image with cv2.imshow(). Instead, import cv2_imshow from google.colab.patches and display the image with cv2_imshow()","best_answers_score":0.6597,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/55288657\/image-is-not-displaying-in-google-colab-while-using-imshow","best_answers_votes":102,"tags":["python","opencv","matplotlib","google-colaboratory","imshow"],"question_length":416,"response_length":157,"tag_count":5}
{"question":"Hide ticks but show tick labels I can remove the ticks with \n```\nax.set_xticks([]) \nax.set_yticks([])\n```\nbut this removes the labels as well. Any way I can plot the tick labels but not the ticks and the spine?","response":"You can set the tick length to 0 using tick_params (http:\/\/matplotlib.org\/api\/axes_api.html#matplotlib.axes.Axes.tick_params): \n```\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot([1],[1])\nax.tick_params(axis=u'both', which=u'both',length=0)\nplt.show()\n```","best_answers_score":0.659,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29988241\/hide-ticks-but-show-tick-labels","best_answers_votes":155,"tags":["python","matplotlib"],"question_length":209,"response_length":261,"tag_count":2}
{"question":"Is there a list of line styles in matplotlib? I'm writing a script that will do some plotting. I want it to plot several data series, each with its unique line style (not color). I can easily iterate through a list, but is there such a list already available in python?","response":"According to the doc you could find them by doing this: \n```\nfrom matplotlib import lines\nlines.lineStyles.keys()\n# Output: ['', ' ', 'None', '--', '-.', '-', ':']\n```\nYou can do the same with markers. EDIT: In the latest versions, there are still the same styles, but you can vary the space between dots\/lines.","best_answers_score":0.659,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13359951\/is-there-a-list-of-line-styles-in-matplotlib","best_answers_votes":112,"tags":["python","matplotlib","linestyle"],"question_length":269,"response_length":305,"tag_count":3}
{"question":"Setting Yaxis in Matplotlib using Pandas Using Pandas to plot in IPython Notebook, I have several plots and because Matplotlib decides the Y axis it is setting them differently and we need to compare that data using the same range. 
From the Matplotlib doc it seems that I need to set ylim, but can't figure the syntax to do so. I have tried several variants on: \n```\ndf2250.plot(); plt.ylim((100000,500000))\ndf2260.plot()\ndf5.plot()\n```\nI assume I'll need to apply the limits to each plot, but since I can't get one working...","response":"DataFrame.plot() exposes a ylim parameter that sets the y axis limits: \n```\ndf.plot(ylim=(0, 200))\n```\nI'm guessing this feature was added after Rutger's answer was accepted in 2013.","best_answers_score":0.6577,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/17787366\/setting-yaxis-in-matplotlib-using-pandas","best_answers_votes":105,"tags":["python","pandas","matplotlib"],"question_length":526,"response_length":182,"tag_count":3} {"question":"Label data points on plot If you want to label your plot points using python matplotlib, I used the following code. \n```py\nfrom matplotlib import pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nA = anyarray\nB = anyotherarray\n\nplt.plot(A,B)\nfor i,j in zip(A,B):\n ax.annotate('%s)' %j, xy=(i,j), xytext=(30,0), textcoords='offset points')\n ax.annotate('(%s,' %i, xy=(i,j))\n\nplt.grid()\nplt.show()\n```\nI know that xytext=(30,0) goes along with the textcoords and you use those 30,0 values to position the data label point, so it's on the y=0 and x=30 on its own little area. You need both the lines plotting i and j otherwise you only plot x or y data label. You get something like this out (note the labels only): It's not ideal, there is still some overlap.","response":"How about print (x, y) at once. \n```\nfrom matplotlib import pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nA = -0.75, -0.25, 0, 0.25, 0.5, 0.75, 1.0\nB = 0.73, 0.97, 1.0, 0.97, 0.88, 0.73, 0.54\n\nax.plot(A,B)\nfor xy in zip(A, B): # <--\n ax.annotate('(%s, %s)' % xy, xy=xy, textcoords='data') # <--\n\nax.grid()\nplt.show()\n```","best_answers_score":0.6575,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/22272081\/label-data-points-on-plot","best_answers_votes":123,"tags":["python","matplotlib","plot","label","annotate"],"question_length":774,"response_length":376,"tag_count":5} {"question":"Plotting multiple line graph using pandas and matplotlib I have the following data in a pandas dataframe \n```\ndate template score\n0 20140605 0 0.138786\n1 20140605 1 0.846441\n2 20140605 2 0.766636\n3 20140605 3 0.259632\n4 20140605 4 0.497366\n5 20140606 0 0.138139\n6 20140606 1 0.845320\n7 20140606 2 0.762876\n8 20140606 3 0.261035\n9 20140606 4 0.498010\n```\nFor every day there will be 5 templates and each template will have a score. I want to plot the date in the x axis and score in the y axis and a separate line graph for each template in the same figure. Is it possible to do this using matplotlib?","response":"You can use the groupby method: \n```\ndata.groupby(\"template\").plot(x=\"date\", y=\"score\")\n```","best_answers_score":0.6575,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24080275\/plotting-multiple-line-graph-using-pandas-and-matplotlib","best_answers_votes":56,"tags":["python","matplotlib","plot","pandas"],"question_length":705,"response_length":91,"tag_count":4} {"question":"Making a chart bigger in size I'm trying to get a bigger chart. However, the figure method from matplotlib does not seem to be working properly. 
I get a message, which is not an error: \n```none\n\n```\nMy code: \n```py\nimport pandas.io.data as web\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n...\nplt.figure(figsize=(20,10))\ndf2['media']= df2['SPY']*.6 + df2['TLT']*.4\ndf2.plot()\nplt.show()\n```\nWhat's wrong with my code?","response":"You can skip the first plt.figure() and just use the argument figsize: \n```\ndf2.plot(figsize=(20,10))\n```\nSee docs.","best_answers_score":0.6571,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38294852\/making-a-chart-bigger-in-size","best_answers_votes":107,"tags":["python","pandas","matplotlib","plot"],"question_length":483,"response_length":115,"tag_count":4} {"question":"Plot a histogram such that bar heights sum to 1 (probability) I'd like to plot a normalized histogram from a vector using matplotlib. I tried the following: \n```\nplt.hist(myarray, normed=True)\n```\nas well as: \n```\nplt.hist(myarray, normed=1)\n```\nbut neither option produces a y-axis from [0, 1] such that the bar heights of the histogram sum to 1.","response":"If you want the sum of all bars to be equal unity, weight each bin by the total number of values: \n```\nweights = np.ones_like(myarray) \/ len(myarray)\nplt.hist(myarray, weights=weights)\n```\nNote for Python 2.x: add casting to float() for one of the operators of the division as otherwise you would end up with zeros due to integer division","best_answers_score":0.6556,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/3866520\/plot-a-histogram-such-that-bar-heights-sum-to-1-probability","best_answers_votes":235,"tags":["python","matplotlib","seaborn","histogram"],"question_length":347,"response_length":338,"tag_count":4} {"question":"Show only certain items in legend I currently am plotting a stacked bar graph of a large amount of taxonomic data, and only wish to show significant species in the legend (out of ~500 I wish to show ~25). Is there a simple way to do this? Below is the code I have: \n```\nlabels=['0','20','40','60','80','100','120']\nax1=subj1df.plot(kind='barh', stacked=True,legend=True,cmap='Paired', grid=False)\nlegend(ncol=2,loc=2, bbox_to_anchor=(1.05, 1), borderaxespad=0.)\nlabel1=['Baseline','8h','24h','48h','96h','120h']\nax1.set_yticklabels(label1, fontdict=None, minor=False)\nplt.title('Subject 1 Phyla',fontweight='bold')\nplt.savefig('Subject1Phyla.eps', format='eps', dpi=1000)\nax1.set_xticklabels(labels)\n```\nEdit: tried adding this to show only one legend entry, however only returns an empty legend: \n```\nh, l = ax1.get_legend_handles_labels()\nlegend(l[4],h[4],ncol=2,loc=2, bbox_to_anchor=(1.05, 1), borderaxespad=0.)\n```","response":"This works: \n```\nplt.plot([0, 4], [3,4])\nplt.plot([0, 4], [2,3],label='_nolegend_') # element missing from legend\nplt.plot([0, 4], [1,2])\nplt.legend(['first', 'third'])\n```","best_answers_score":0.6554,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24680981\/show-only-certain-items-in-legend","best_answers_votes":309,"tags":["python","matplotlib","legend"],"question_length":919,"response_length":172,"tag_count":3} {"question":"Relocating legend from GeoPandas plot I'm plotting a map with legends using the GeoPandas plotting function. When I plot, my legends appear in the upper right corner of the figure. Here is how it looks like: I wanted to move the legends to the lower part of the graph. 
I would normally have done something like this for a normal matplotlib plot: \n```\nfig, ax = plt.subplots(1, figsize=(4.5,10))\nlima_bank_num.plot(ax=ax, column='quant_cuts', cmap='Blues', alpha=1, legend=True)\nax.legend(loc='lower left')\n```\nHowever, this modification is not taken into account.","response":"This could be done using the legend_kwds argument: \n```\ndf.plot(column='values', legend=True, legend_kwds={'loc': 'lower right'});\n```","best_answers_score":0.6545,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/39816790\/relocating-legend-from-geopandas-plot","best_answers_votes":31,"tags":["matplotlib","geopandas"],"question_length":569,"response_length":134,"tag_count":2}
{"question":"Hide legend from seaborn pairplot I would like to hide the Seaborn pairplot legend. The official docs don't mention a keyword legend. Everything I tried using plt.legend didn't work. Please suggest the best way forward. Thanks! \n```\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\n%matplotlib inline\n\ntest = pd.DataFrame({\n 'id': ['1','2','1','2','2','6','7','7','6','6'],\n 'x': [123,22,356,412,54,634,72,812,129,110],\n 'y':[120,12,35,41,45,63,17,91,112,151]})\nsns.pairplot(x_vars='x', y_vars=\"y\", \n data=test,\n hue = 'id', \n height = 3)\n```","response":"Since the _legend.remove() method won't work on some other seaborn plots, what about: \n```\nplt.legend([],[], frameon=False)\n```","best_answers_score":0.6544,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/54781243\/hide-legend-from-seaborn-pairplot","best_answers_votes":64,"tags":["python-3.x","matplotlib","seaborn"],"question_length":604,"response_length":123,"tag_count":3}
{"question":"Show decimal places and scientific notation on the axis I am plotting some big numbers with matplotlib in a pyqt program using python 2.7. I have a y-axis that ranges from 1e+18 to 3e+18 (usually). I'd like to see each tick mark show values in scientific notation and with 2 decimal places. For example 2.35e+18 instead of just 2e+18 because values between 2e+18 and 3e+18 still read just 2e+18 for a few tickmarks. Here is an example of that problem. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfig = plt.figure()\nax = fig.add_subplot(111)\nx = np.linspace(0, 300, 20)\ny = np.linspace(0,300, 20)\ny = y*1e16\nax.plot(x,y) \nax.get_xaxis().set_major_formatter(plt.LogFormatter(10, labelOnlyBase=False))\nax.get_yaxis().set_major_formatter(plt.LogFormatter(10, labelOnlyBase=False))\nplt.show()\n```","response":"This is really easy to do if you use the matplotlib.ticker.FormatStrFormatter as opposed to the LogFormatter. The following code will label everything with the format '%.2e': \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as mtick\n\nfig = plt.figure()\n\nax = fig.add_subplot(111)\n\nx = np.linspace(0, 300, 20)\n\ny = np.linspace(0,300, 20)\ny = y*1e16\n\nax.plot(x,y)\n\nax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.2e'))\n\nplt.show()\n```","best_answers_score":0.6544,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/25750170\/show-decimal-places-and-scientific-notation-on-the-axis","best_answers_votes":68,"tags":["python","matplotlib","axis-labels","significant-digits"],"question_length":806,"response_length":473,"tag_count":4}
{"question":"How to change the datetime tick label frequency Below shows a plot of simulated data, which contains the xticks that I want to modify. 
By default, the pd.df.plot chooses dates that are approximately 3 months apart as ticks. But what I want is each month being a tick. What is the best way to do this? What about seasonal ticks? Thank you in advance.","response":"First of all you have to convert pandas date objects to python date objects. This conversion is needed because of matplotlib internal date conversion functions. Then use functions from matplotlib.dates to set desired formatter and tick positions like here: \n```\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pylab as plt\nimport matplotlib.dates as mdates\n\n# convert date objects from pandas format to python datetime\nindex = pd.date_range(start = \"2015-07-01\", end = \"2017-01-01\", freq = \"D\")\nindex = [pd.to_datetime(date, format='%Y-%m-%d').date() for date in index]\ndata = np.random.randint(1,100, size=len(index))\ndf = pd.DataFrame(data=data,index=index, columns=['data'])\nprint (df.head())\n\nax = df.plot()\n# set monthly locator\nax.xaxis.set_major_locator(mdates.MonthLocator(interval=1))\n# set formatter\nax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))\n# set font and rotation for date tick labels\nplt.gcf().autofmt_xdate()\n\nplt.show()\n```\nFor season labels you have to construct it by yourself and then set it with plt.setp function (for month 02 set label winter, 04 - spring etc.): plt.setp(new_labels, rotation=90, fontsize=9). head of df: \n```\ndata\n2015-07-01 26\n2015-07-02 33\n2015-07-03 46\n2015-07-04 69\n2015-07-05 17\n```","best_answers_score":0.6539,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/45704366\/how-to-change-the-datetime-tick-label-frequency","best_answers_votes":29,"tags":["python","pandas","matplotlib","datetime","xticks"],"question_length":349,"response_length":1271,"tag_count":5} {"question":"matplotlib y-axis label on right side Is there a simple way to put the y-axis label on the right-hand side of the plot? I know that this can be done for the tick labels using ax.yaxis.tick_right(), but I would like to know if it can be done for the axis label as well. One idea which came to mind was to use \n```\nax.yaxis.tick_right()\nax2 = ax.twinx()\nax2.set_ylabel('foo')\n```\nHowever, this doesn't have the desired effect of placing all labels (tick and axis labels) on the right-hand side, while preserving the extent of the y-axis. In short, I would like a way to move all the y-axis labels from the left to the right.","response":"It looks like you can do it with: \n```\nax.yaxis.set_label_position(\"right\")\nax.yaxis.tick_right()\n```\nSee here for an example.","best_answers_score":0.652,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/13369888\/matplotlib-y-axis-label-on-right-side","best_answers_votes":259,"tags":["python","matplotlib","label"],"question_length":622,"response_length":126,"tag_count":3} {"question":"how to turn on minor ticks only on y axis How can I turn the minor ticks only on y axis on a linear vs linear plot? When I use the function minor_ticks_on to turn minor ticks on, they appear on both x and y axis.","response":"Nevermind, I figured it out. 
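The key is tick_params, which can target a single axis and only the minor ticks, so the x-axis minors can be switched off while the y-axis minors stay on: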
\n```\nax.tick_params(axis='x', which='minor', bottom=False)\n```","best_answers_score":0.6515,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/12711202\/how-to-turn-on-minor-ticks-only-on-y-axis","best_answers_votes":89,"tags":["python","matplotlib","axis","yticks"],"question_length":212,"response_length":91,"tag_count":4}
{"question":"matplotlib values under cursor [duplicate] This question already has answers here: Interactive pixel information of an image (7 answers) Closed 10 years ago. I'm using matplotlib.imshow to get an interactive display of a 2D array. The x\/y coordinate under the cursor is displayed at the bottom left of the window. Is it possible to also get the value of the array under the cursor as well?","response":"You simply need to re-assign ax.format_coord. See this example from the documentation. (code lifted directly from example) \n```\n\"\"\"\nShow how to modify the coordinate formatter to report the image \"z\"\nvalue of the nearest pixel given x and y\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n\nX = 10*np.random.rand(5,3)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.imshow(X, cmap=cm.jet, interpolation='nearest')\n\nnumrows, numcols = X.shape\ndef format_coord(x, y):\n    col = int(x+0.5)\n    row = int(y+0.5)\n    if col>=0 and col<numcols and row>=0 and row<numrows:\n        z = X[row,col]\n        return 'x=%1.4f, y=%1.4f, z=%1.4f'%(x, y, z)\n    else:\n        return 'x=%1.4f, y=%1.4f'%(x, y)\n\nax.format_coord = format_coord\nplt.show()\n```\nException in Tkinter callback) (For many extra points, can anyone outline how matplotlib's packer \/ spacer works, along the lines of chapter 17 \"the packer\" in the Tcl\/Tk book?)","response":"The link posted by Jose has been updated and pylab now has a tight_layout() function that does this automatically (in matplotlib version 1.1.0). http:\/\/matplotlib.org\/api\/pyplot_api.html#matplotlib.pyplot.tight_layout http:\/\/matplotlib.org\/users\/tight_layout_guide.html#plotting-guide-tight-layout","best_answers_score":0.6392,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2418125\/matplotlib-subplots-adjust-hspace-so-titles-and-xlabels-dont-overlap","best_answers_votes":47,"tags":["python","matplotlib"],"question_length":1186,"response_length":297,"tag_count":2}
{"question":"Bar plot with groupby My categorical variable case_status takes on four unique values. I have data from 2014 to 2016. I would like to plot the distribution of case_status grouped by year. I try the following: \n```py\ndf.groupby('year').case_status.value_counts().plot.barh()\n```\nAnd I get the following plot: However, I want the following plot:","response":"I think you need to add unstack for the DataFrame: \n```\ndf.groupby('year').case_status.value_counts().unstack().plot.barh()\n```\nIt is also possible to change the level: \n```\ndf.groupby('year').case_status.value_counts().unstack(0).plot.barh()\n```","best_answers_score":0.6389,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/48238305\/bar-plot-with-groupby","best_answers_votes":44,"tags":["python-3.x","pandas","matplotlib","seaborn","bar-chart"],"question_length":343,"response_length":229,"tag_count":5}
{"question":"Efficient method of calculating density of irregularly spaced points I am attempting to generate map overlay images that would assist in identifying hot-spots, that is areas on the map that have high density of data points. None of the approaches that I've tried are fast enough for my needs. 
Note: I forgot to mention that the algorithm should work well under both low and high zoom scenarios (or low and high data point density). I looked through numpy, pyplot and scipy libraries, and the closest I could find was numpy.histogram2d. As you can see in the image below, the histogram2d output is rather crude. (Each image includes points overlaying the heatmap for better understanding) My second attempt was to iterate over all the data points, and then calculate the hot-spot value as a function of distance. This produced a better looking image, however it is too slow to use in my application. Since it's O(n), it works ok with 100 points, but blows out when I use my actual dataset of 30000 points. My final attempt was to store the data in an KDTree, and use the nearest 5 points to calculate the hot-spot value. This algorithm is O(1), so much faster with large dataset. It's still not fast enough, it takes about 20 seconds to generate a 256x256 bitmap, and I would like this to happen in around 1 second time. Edit The boxsum smoothing solution provided by 6502 works well at all zoom levels and is much faster than my original methods. The gaussian filter solution suggested by Luke and Neil G is the fastest. You can see all four approaches below, using 1000 data points in total, at 3x zoom there are around 60 points visible. Complete code that generates my original 3 attempts, the boxsum smoothing solution provided by 6502 and gaussian filter suggested by Luke (improved to handle edges better and allow zooming in) is here: \n```\nimport matplotlib\nimport numpy as np\nfrom matplotlib.mlab import griddata\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport math\nfrom scipy.spatial import KDTree\nimport time\nimport scipy.ndimage as ndi\n\n\ndef grid_density_kdtree(xl, yl, xi, yi, dfactor):\n zz = np.empty([len(xi),len(yi)], dtype=np.uint8)\n zipped = zip(xl, yl)\n kdtree = KDTree(zipped)\n for xci in range(0, len(xi)):\n xc = xi[xci]\n for yci in range(0, len(yi)):\n yc = yi[yci]\n density = 0.\n retvalset = kdtree.query((xc,yc), k=5)\n for dist in retvalset[0]:\n density = density + math.exp(-dfactor * pow(dist, 2)) \/ 5\n zz[yci][xci] = min(density, 1.0) * 255\n return zz\n\ndef grid_density(xl, yl, xi, yi):\n ximin, ximax = min(xi), max(xi)\n yimin, yimax = min(yi), max(yi)\n xxi,yyi = np.meshgrid(xi,yi)\n #zz = np.empty_like(xxi)\n zz = np.empty([len(xi),len(yi)])\n for xci in range(0, len(xi)):\n xc = xi[xci]\n for yci in range(0, len(yi)):\n yc = yi[yci]\n density = 0.\n for i in range(0,len(xl)):\n xd = math.fabs(xl[i] - xc)\n yd = math.fabs(yl[i] - yc)\n if xd < 1 and yd < 1:\n dist = math.sqrt(math.pow(xd, 2) + math.pow(yd, 2))\n density = density + math.exp(-5.0 * pow(dist, 2))\n zz[yci][xci] = density\n return zz\n\ndef boxsum(img, w, h, r):\n st = [0] * (w+1) * (h+1)\n for x in xrange(w):\n st[x+1] = st[x] + img[x]\n for y in xrange(h):\n st[(y+1)*(w+1)] = st[y*(w+1)] + img[y*w]\n for x in xrange(w):\n st[(y+1)*(w+1)+(x+1)] = st[(y+1)*(w+1)+x] + st[y*(w+1)+(x+1)] - st[y*(w+1)+x] + img[y*w+x]\n for y in xrange(h):\n y0 = max(0, y - r)\n y1 = min(h, y + r + 1)\n for x in xrange(w):\n x0 = max(0, x - r)\n x1 = min(w, x + r + 1)\n img[y*w+x] = st[y0*(w+1)+x0] + st[y1*(w+1)+x1] - st[y1*(w+1)+x0] - st[y0*(w+1)+x1]\n\ndef grid_density_boxsum(x0, y0, x1, y1, w, h, data):\n kx = (w - 1) \/ (x1 - x0)\n ky = (h - 1) \/ (y1 - y0)\n r = 15\n border = r * 2\n imgw = (w + 2 * border)\n imgh = (h + 2 * border)\n img = [0] * (imgw * imgh)\n for x, y in data:\n ix = int((x - x0) 
* kx) + border\n iy = int((y - y0) * ky) + border\n if 0 <= ix < imgw and 0 <= iy < imgh:\n img[iy * imgw + ix] += 1\n for p in xrange(4):\n boxsum(img, imgw, imgh, r)\n a = np.array(img).reshape(imgh,imgw)\n b = a[border:(border+h),border:(border+w)]\n return b\n\ndef grid_density_gaussian_filter(x0, y0, x1, y1, w, h, data):\n kx = (w - 1) \/ (x1 - x0)\n ky = (h - 1) \/ (y1 - y0)\n r = 20\n border = r\n imgw = (w + 2 * border)\n imgh = (h + 2 * border)\n img = np.zeros((imgh,imgw))\n for x, y in data:\n ix = int((x - x0) * kx) + border\n iy = int((y - y0) * ky) + border\n if 0 <= ix < imgw and 0 <= iy < imgh:\n img[iy][ix] += 1\n return ndi.gaussian_filter(img, (r,r)) ## gaussian convolution\n\ndef generate_graph(): \n n = 1000\n # data points range\n data_ymin = -2.\n data_ymax = 2.\n data_xmin = -2.\n data_xmax = 2.\n # view area range\n view_ymin = -.5\n view_ymax = .5\n view_xmin = -.5\n view_xmax = .5\n # generate data\n xl = np.random.uniform(data_xmin, data_xmax, n) \n yl = np.random.uniform(data_ymin, data_ymax, n)\n zl = np.random.uniform(0, 1, n)\n\n # get visible data points\n xlvis = []\n ylvis = []\n for i in range(0,len(xl)):\n if view_xmin < xl[i] < view_xmax and view_ymin < yl[i] < view_ymax:\n xlvis.append(xl[i])\n ylvis.append(yl[i])\n\n fig = plt.figure()\n\n\n # plot histogram\n plt1 = fig.add_subplot(221)\n plt1.set_axis_off()\n t0 = time.clock()\n zd, xe, ye = np.histogram2d(yl, xl, bins=10, range=[[view_ymin, view_ymax],[view_xmin, view_xmax]], normed=True)\n plt.title('numpy.histogram2d - '+str(time.clock()-t0)+\"sec\")\n plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])\n plt.scatter(xlvis, ylvis)\n\n\n # plot density calculated with kdtree\n plt2 = fig.add_subplot(222)\n plt2.set_axis_off()\n xi = np.linspace(view_xmin, view_xmax, 256)\n yi = np.linspace(view_ymin, view_ymax, 256)\n t0 = time.clock()\n zd = grid_density_kdtree(xl, yl, xi, yi, 70)\n plt.title('function of 5 nearest using kdtree\\n'+str(time.clock()-t0)+\"sec\")\n cmap=cm.jet\n A = (cmap(zd\/256.0)*255).astype(np.uint8)\n #A[:,:,3] = zd \n plt.imshow(A , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])\n plt.scatter(xlvis, ylvis)\n\n # gaussian filter\n plt3 = fig.add_subplot(223)\n plt3.set_axis_off()\n t0 = time.clock()\n zd = grid_density_gaussian_filter(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl))\n plt.title('ndi.gaussian_filter - '+str(time.clock()-t0)+\"sec\")\n plt.imshow(zd , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])\n plt.scatter(xlvis, ylvis)\n\n # boxsum smoothing\n plt3 = fig.add_subplot(224)\n plt3.set_axis_off()\n t0 = time.clock()\n zd = grid_density_boxsum(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl))\n plt.title('boxsum smoothing - '+str(time.clock()-t0)+\"sec\")\n plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])\n plt.scatter(xlvis, ylvis)\n\nif __name__=='__main__':\n generate_graph()\n plt.show()\n```","response":"This approach is along the lines of some previous answers: increment a pixel for each spot, then smooth the image with a gaussian filter. A 256x256 image runs in about 350ms on my 6-year-old laptop. 
\n```\nimport numpy as np\nimport scipy.ndimage as ndi\n\ndata = np.random.rand(30000,2) ## create random dataset\ninds = (data * 255).astype('uint') ## convert to indices\n\nimg = np.zeros((256,256)) ## blank image\nfor i in xrange(data.shape[0]): ## draw pixels\n img[inds[i,0], inds[i,1]] += 1\n\nimg = ndi.gaussian_filter(img, (10,10))\n```","best_answers_score":0.6385,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6652671\/efficient-method-of-calculating-density-of-irregularly-spaced-points","best_answers_votes":31,"tags":["python","numpy","scipy","matplotlib"],"question_length":7391,"response_length":573,"tag_count":4} {"question":"How to get boxplot data for matplotlib boxplots I need to get the statistical data which were generated to draw a box plot in Pandas(using dataframe to create boxplots). i.e. Quartile1,Quartile2,Quartile3, lower whisker value, upper whisker value and outliers. I tried the following query to draw the boxplot. \n```\nimport pandas as pd\ndf = pd.DataFrame(np.random.rand(100, 5), columns=['A', 'B', 'C', 'D', 'E'])\npd.DataFrame.boxplot(df,return_type = 'both')\n```\nIs there a way to do it instead of manually calculating the values?","response":"One option is to use the y data from the plots - probably most useful for the outliers (fliers) \n```\n_, bp = pd.DataFrame.boxplot(df, return_type='both')\n\noutliers = [flier.get_ydata() for flier in bp[\"fliers\"]]\nboxes = [box.get_ydata() for box in bp[\"boxes\"]]\nmedians = [median.get_ydata() for median in bp[\"medians\"]]\nwhiskers = [whiskers.get_ydata() for whiskers in bp[\"whiskers\"]]\n```\nBut it's probably more straightforward to get the other values (including IQR) using either \n```\nquantiles = df.quantile([0.01, 0.25, 0.5, 0.75, 0.99])\n```\nor, as suggested by WoodChopper \n```\nstats = df.describe()\n```","best_answers_score":0.6384,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/33518472\/how-to-get-boxplot-data-for-matplotlib-boxplots","best_answers_votes":34,"tags":["python","pandas","matplotlib"],"question_length":529,"response_length":607,"tag_count":3} {"question":"Obtaining values used in boxplot, using python and matplotlib I can draw a boxplot from data: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.random.rand(100)\nplt.boxplot(data)\n```\nThen, the box will range from the 25th-percentile to 75th-percentile, and the whisker will range from the smallest value to the largest value between (25th-percentile - 1.5*IQR, 75th-percentile + 1.5*IQR), where the IQR denotes the inter-quartile range. (Of course, the value 1.5 is customizable). Now I want to know the values used in the boxplot, i.e. the median, upper and lower quartile, the upper whisker end point and the lower whisker end point. While the former three are easy to obtain by using np.median() and np.percentile(), the end point of the whiskers will require some verbose coding: \n```\nmedian = np.median(data)\nupper_quartile = np.percentile(data, 75)\nlower_quartile = np.percentile(data, 25)\n\niqr = upper_quartile - lower_quartile\nupper_whisker = data[data<=upper_quartile+1.5*iqr].max()\nlower_whisker = data[data>=lower_quartile-1.5*iqr].min()\n```\nI was wondering, while this is acceptable, would there be a neater way to do this? It seems that the values should be ready to pull-out from the boxplot, as it's already drawn.","response":"Why do you want to do so? what you are doing is already pretty direct. 
Yeah, if you want to fetch them for the plot, when the plot is already made, simply use the get_ydata() method. \n```\nB = plt.boxplot(data)\n[item.get_ydata() for item in B['whiskers']]\n```\nIt returns an array of the shape (2,) for each whiskers, the second element is the value we want: \n```\n[item.get_ydata()[1] for item in B['whiskers']]\n```","best_answers_score":0.6378,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/23461713\/obtaining-values-used-in-boxplot-using-python-and-matplotlib","best_answers_votes":39,"tags":["python","numpy","matplotlib","scipy"],"question_length":1246,"response_length":413,"tag_count":4} {"question":"Move X-Axis label downwards, but not X-Axis Ticks in matplotlib I'm using Matplotlib to plot a histogram. Using tips from my previous question: Matplotlib - label each bin, I've more or less got the kinks worked out. There's one final issue - previously - the x-axis label (\"Time (in milliseconds)\") was being rendered underneath the x-axis tickmarks (0.00, 0.04, 0.08, 0.12 etc.) Using the advice from Joe Kingston (see question above), I tried using: \n```py\nax.tick_params(axis='x', pad=30)\n```\nHowever, this moves both the x-axis tickmarks (0.00, 0.04, 0.08, 0.12 etc.), as well as the x-axis label (\"Time (in milliseconds)\"): Is there any way to move only the x-axis label to underneath the three rows of figures? NB: You may need to open the PNGs below directly - Right Click on the image, then View Image (in FF), or Open image in new tab (Chrome). The image resize done by SO has rendered them nigh unreadable","response":"use labelpad parameter: \n```\npl.xlabel(\"...\", labelpad=20)\n```\nor set it after: \n```\nax.xaxis.labelpad = 20\n```","best_answers_score":0.6372,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6406368\/move-x-axis-label-downwards-but-not-x-axis-ticks-in-matplotlib","best_answers_votes":276,"tags":["python","matplotlib","plot","position","axis-labels"],"question_length":916,"response_length":111,"tag_count":5} {"question":"Show tick labels when sharing an axis I'm running the following function: \n```\ndef plot_variance_analysis(indices, stat_frames, legend_labels, shape):\n x = np.linspace(1, 5, 500)\n fig, axes = plt.subplots(shape[0], shape[1], sharex=True sharey=True)\n questions_and_axes = zip(indices, axes.ravel())\n frames_and_labels = zip(stat_frames, legend_labels)\n for qa in questions_and_axes:\n q = qa[0]\n ax = qa[1]\n for fl in frames_and_labels:\n frame = fl[0]\n label = fl[1]\n ax.plot(x, stats.norm.pdf(x, frame['mean'][q], frame['std'][q]), label=label)\n ax.set_xlabel(q)\n ax.legend(loc='best')\n plt.xticks([1,2,3,4,5])\n return fig, axes\n```\nHere's what I get with some of my own sample data: I'm trying to maintain the shared state between axes, but at the same time display the tick labels for the x axis on all subplots (including the top two). I can't find any means to turn this off in the documentation. Any suggestions? Or should I just set the x tick labels axis by axis? 
I'm running matplotlib 1.4.0, if that's important.","response":"In Matplotlib 2.2 and above the tick labels can be turned back on using: \n```\nax.xaxis.set_tick_params(labelbottom=True)\n```","best_answers_score":0.6372,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/29266966\/show-tick-labels-when-sharing-an-axis","best_answers_votes":73,"tags":["python","matplotlib"],"question_length":1118,"response_length":124,"tag_count":2}
{"question":"Save matplotlib plot in grayscale I have some color plots that I need to be saved in grayscale. Is there an easy way to do this without changing the plotting formats?","response":"There is an easy solution: \n```\nplt.imsave(filename, image, cmap='gray')\n```","best_answers_score":0.637,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/6019473\/save-matplotlib-plot-in-grayscale","best_answers_votes":14,"tags":["python","matplotlib"],"question_length":166,"response_length":76,"tag_count":2}
{"question":"How can I create a frontend for matplotlib? I feel that I should start this question by mentioning that I am a Chemical Engineer and not a trained programmer (but I am willing to learn). I process a lot of spectra, which are essentially ASCII files with x,y coordinates produced by spectrometer-specific software. A standard piece of software used for analysis is Origin from OriginLab, but I find it is not user friendly, it has a lot more functions than I need and it is not free. So I was hoping that there is a way to develop a frontend for matplotlib that allows me to use matplotlib with a GUI. I want to be able to read in and plot data from ASCII files, zoom in and out in the plot, and call scripts for things such as peak-detection and fitting functions where function-specific parameters can be entered in the GUI. I am simply puzzled where to start, so the question really is: What is the easiest way to develop a frontend\/GUI for matplotlib that allows me to program the things that I mentioned above? Also, ultimately, a cross-platform application would be nice because many in my group work in Windows.","response":"Also check out the good examples on Eli Bendersky's website: matplotlib with PyQt GUIs More PyQt plotting demos matplotlib with wxPython GUIs","best_answers_score":0.6354,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5285912\/how-can-i-create-a-frontend-for-matplotlib","best_answers_votes":15,"tags":["user-interface","matplotlib","originlab"],"question_length":1100,"response_length":139,"tag_count":3}
{"question":"matplotlib axis label format I am having an issue with the format of the tick labels of an axis. I disabled the offset from the y_axis: \n```\nax1.ticklabel_format(style = 'sci', useOffset=False)\n```\nand tried to put it in a scientific format, but all I get is: \n```\n0.00355872\n```\nbut I expected something like: \n```\n3.55872...E-3\n```\nor similar. What I really want is something like: \n```\n3.55872... (on the tick label)\nx 10^-3 (or something similar - on the axis label)\n```\nI could try to set the labels as static, but in the end I will have a few tens or hundreds of plots with different values, so it needs to be set dynamically.
An alternative would be to place the y-axis offset as the label, but I also have no clue how to do this.","response":"You should also specify axis and threshold limits: \n```\nax1.ticklabel_format(axis='y', style='sci', scilimits=(-2,2))\n```\nThis would use sci format on the y axis when figures are out of the [0.01, 99] bounds.","best_answers_score":0.6352,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14775040\/matplotlib-axis-label-format","best_answers_votes":36,"tags":["python","matplotlib"],"question_length":737,"response_length":204,"tag_count":2}
{"question":"How to add trendline to a scatter plot How could I add a trendline to a dot graph drawn using matplotlib.scatter?","response":"As explained here, with help from numpy one can calculate, for example, a linear fit. \n```\nimport numpy\nimport pylab\n\n# plot the data itself\npylab.plot(x,y,'o')\n\n# calc the trendline\nz = numpy.polyfit(x, y, 1)\np = numpy.poly1d(z)\npylab.plot(x,p(x),\"r--\")\n# the line equation:\nprint(\"y=%.6fx+(%.6f)\" % (z[0], z[1]))\n```","best_answers_score":0.6351,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/26447191\/how-to-add-trendline-to-a-scatter-plot","best_answers_votes":110,"tags":["python","matplotlib","linear-regression","scatter-plot","curve-fitting"],"question_length":113,"response_length":288,"tag_count":5}
{"question":"Histogram outlined by added edgecolor I have plotted a histogram and was expecting to see the outlines of my bars, but this is not the case. I'm using the following code: \n```py\nimport matplotlib.pyplot as plt\nfrom numpy.random import normal\ngaussian_numbers = normal(size=1000)\nplt.hist(gaussian_numbers)\nplt.title(\"Gaussian Histogram\")\nplt.xlabel(\"Value\")\nplt.ylabel(\"Frequency\")\nplt.show()\n```\nHow do I show the outline of the bars?","response":"It looks like either your linewidth was set to zero or your edgecolor was set to 'none'. Matplotlib changed the defaults for these in 2.0. Try using: \n```\nplt.hist(gaussian_numbers, edgecolor='black', linewidth=1.2)\n```","best_answers_score":0.6343,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/42741687\/histogram-outlined-by-added-edgecolor","best_answers_votes":184,"tags":["python","matplotlib","histogram"],"question_length":434,"response_length":219,"tag_count":3}
{"question":"how to show values on top of bar plot Python newbie here. I want to show values above each bin in the following graph: This is my code: \n```\nx=[i for i in range(1,11)]\ny=[0.95,\n0.95,\n0.89,\n0.8,\n0.74,\n0.65,\n0.59,\n0.51,\n0.5,\n0.48]\n\nplt.bar(x, height=y)\nxlocs, xlabs = plt.xticks()\nxlocs=[i+1 for i in range(0,10)]\nxlabs=[i\/2 for i in range(0,10)]\nplt.xlabel('Max Sigma')\nplt.ylabel('Test Accuracy')\nplt.xticks(xlocs, xlabs)\nplt.show()\n```\nThis is the graph that I want:","response":"Simply add \n```\nfor i, v in enumerate(y):\n    plt.text(xlocs[i] - 0.25, v + 0.01, str(v))\n```\nbefore plt.show().
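For context, here is the loop spliced into the asker's script as one self-contained, runnable sketch (my own assembly of the pieces above, not part of the original answer): \n```python\nimport matplotlib.pyplot as plt\n\nx = [i for i in range(1, 11)]\ny = [0.95, 0.95, 0.89, 0.8, 0.74, 0.65, 0.59, 0.51, 0.5, 0.48]\n\nplt.bar(x, height=y)\nxlocs = [i + 1 for i in range(0, 10)]\nxlabs = [i \/ 2 for i in range(0, 10)]\nplt.xlabel('Max Sigma')\nplt.ylabel('Test Accuracy')\nplt.xticks(xlocs, xlabs)\nfor i, v in enumerate(y):\n    plt.text(xlocs[i] - 0.25, v + 0.01, str(v))  # place each value slightly above its bar\nplt.show()\n```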
You can adjust the horizontal centering or the height of the text by changing the (-0.25) and (0.01) values, respectively.","best_answers_score":0.6343,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/53066633\/how-to-show-values-on-top-of-bar-plot","best_answers_votes":44,"tags":["python","matplotlib"],"question_length":591,"response_length":225,"tag_count":2}
{"question":"How to highlight specific x-value ranges I'm making a visualization of historical stock data for a project, and I'd like to highlight regions of drops. For instance, when the stock is experiencing significant drawdown, I would like to highlight it with a red region. Can I do this automatically, or will I have to draw a rectangle or something?","response":"Have a look at axvspan (and axhspan for highlighting a region of the y-axis). \n```\nimport matplotlib.pyplot as plt\n\nplt.plot(range(10))\nplt.axvspan(3, 6, color='red', alpha=0.5)\nplt.show()\n```\nIf you're using dates, then you'll need to convert your min and max x values to matplotlib dates. Use matplotlib.dates.date2num for datetime objects or matplotlib.dates.datestr2num for various string timestamps. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nimport datetime as dt\n\nt = mdates.drange(dt.datetime(2011, 10, 15), dt.datetime(2011, 11, 27),\n                  dt.timedelta(hours=2))\ny = np.sin(t)\n\nfig, ax = plt.subplots()\nax.plot_date(t, y, 'b-')\nax.axvspan(*mdates.datestr2num(['10\/27\/2011', '11\/2\/2011']), color='red', alpha=0.5)\nfig.autofmt_xdate()\nplt.show()\n```","best_answers_score":0.6322,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8270981\/how-to-highlight-specific-x-value-ranges","best_answers_votes":180,"tags":["python","statistics","matplotlib"],"question_length":344,"response_length":796,"tag_count":3}
{"question":"How to show labels on matplotlib plots When I execute the following code, it doesn't produce a plot with a label. \n```py\nimport matplotlib.pyplot as plt\nimport numpy as np\nx = np.arange(1, 5)\nplt.plot(x, x*1.5, label='Normal')\n```\nNumpy version is '1.6.2' Matplotlib version is '1.3.x' Why is this happening?","response":"You forgot to display the legend: \n```\n...\nplt.legend(loc='best')\nplt.show()\n```","best_answers_score":0.632,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/14657169\/how-to-show-labels-on-matplotlib-plots","best_answers_votes":138,"tags":["python","matplotlib","plot"],"question_length":308,"response_length":80,"tag_count":3}
{"question":"How to highlight one point in a matplotlib plot Suppose I have the following two lists that correspond to x- and y-coordinates. \n```py\nx = [1,2,3,4,5,6]\ny = [3,4,5,6,7,8]\n```\nI want the first pair (1,3) to be in a different color or shape. How can this be done using Python?","response":"One of the simplest possible answers. \n```\nimport matplotlib.pyplot as plt\n\nx = [1,2,3,4,5,6]\ny = [3,4,5,6,7,8]\n\nplt.plot(x[1:], y[1:], 'ro')\nplt.plot(x[0], y[0], 'g*')\n\nplt.show()\n```","best_answers_score":0.6309,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/41489543\/how-to-highlight-one-point-in-a-matplotlib-plot","best_answers_votes":31,"tags":["python","matplotlib","plot","label","scatter-plot"],"question_length":275,"response_length":184,"tag_count":5}
{"question":"Equivalent function for xticks for an AxesSubplot object So I am trying to use Axes objects to control my matplotlib figure.
I am not using plt (aka import matplotlib.pyplot as plt) because I am embedding the figure in my tkinter GUI per this. However, I am also using subplots in the figure, so something like: \n```py\na = f.add_subplot(121)\na2 = f.add_subplot(122)\na.plot(fn2,mag)\na2.bar(range(0,10), magBin, width)\n```\nThis is all well and good; I can use the axes properties to control things (i.e. a.axesMethod()), but I want string labels for my bar plots, per this, see code. My dilemma is that I cannot use \n```py\nplt.xticks(ind+width, ('G1', 'G2', 'G3', 'G4', 'G5') )\n```\nas in the example, because I cannot use plt if I want to embed it into my Tkinter GUI. I am limited to what I can do with Axes objects. I am trying to use a2.set_xticks, but this does not allow for the string-as-ticks functionality I need for my bar chart.","response":"You can use instead: \n```\naxes.set_xticks(ticks, minor=False)\n```\nand \n```\naxes.set_xticklabels(labels, fontdict=None, minor=False)\n```","best_answers_score":0.6307,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/8384120\/equivalent-function-for-xticks-for-an-axessubplot-object","best_answers_votes":124,"tags":["python","matplotlib","tkinter","plot","axis-labels"],"question_length":936,"response_length":135,"tag_count":5}
{"question":"Matplotlib text dimensions Is it possible to determine the dimensions of a matplotlib text object? How can I find the width and height in pixels? Thanks Edit: I think I figured out a way to do this. I've included an example below. \n```\nimport matplotlib as plt\n\nf = plt.figure()\nr = f.canvas.get_renderer()\nt = plt.text(0.5, 0.5, 'test')\n\nbb = t.get_window_extent(renderer=r)\nwidth = bb.width\nheight = bb.height\n```","response":"```\nfrom matplotlib import pyplot as plt\n\nf = plt.figure()\nr = f.canvas.get_renderer()\nt = plt.text(0.5, 0.5, 'test')\n\nbb = t.get_window_extent(renderer=r)\nwidth = bb.width\nheight = bb.height\n```","best_answers_score":0.6301,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/5320205\/matplotlib-text-dimensions","best_answers_votes":32,"tags":["python","matplotlib"],"question_length":415,"response_length":195,"tag_count":2}
{"question":"Setting matplotlib colorbar range I would like to set the matplotlib colorbar range. Here's what I have so far: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nx = np.arange(20)\ny = np.arange(20)\ndata = x[:-1,None]+y[None,:-1]\n\nfig = plt.gcf()\nax = fig.add_subplot(111)\n\nX,Y = np.meshgrid(x,y)\nquadmesh = ax.pcolormesh(X,Y,data)\nplt.colorbar(quadmesh)\n\n#RuntimeError: You must first define an image, eg with imshow\n#plt.clim(vmin=0,vmax=15) \n\n#AttributeError: 'AxesSubplot' object has no attribute 'clim'\n#ax.clim(vmin=0,vmax=15) \n\n#AttributeError: 'AxesSubplot' object has no attribute 'set_clim'\n#ax.set_clim(vmin=0,vmax=15) \n\nplt.show()\n```\nHow do I set the colorbar limits here?","response":"Arg. It's always the last thing you try: \n```\nquadmesh.set_clim(vmin=0, vmax=15)\n```\nworks.","best_answers_score":0.6275,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/15282189\/setting-matplotlib-colorbar-range","best_answers_votes":40,"tags":["python","matplotlib"],"question_length":691,"response_length":91,"tag_count":2}
{"question":"How to type-hint a matplotlib.axes._subplots.AxesSubplots object in python3 I was wondering what the \"best\" way is to type-hint the axis-object of matplotlib-subplots.
Running \n```\nfrom matplotlib import pyplot as plt\n\nf, ax = plt.subplots()\nprint(type(ax))\n```\nreturns \n```\n<class 'matplotlib.axes._subplots.AxesSubplot'>\n```\nand running \n```\nfrom matplotlib import axes\nprint(type(axes._subplots))\nprint(type(axes._subplots.AxesSubplot))\n```\nyields \n```\n<class 'module'>\nAttributeError: module 'matplotlib.axes._subplots' has no attribute 'AxesSubplots'\n```\nSo far, a solution for type-hinting that works is as follows: \n```\ndef multi_rocker(\n        axy: type(plt.subplots()[1]),\n        y_trues: np.ndarray,\n        y_preds: np.ndarray,\n        ):\n    \"\"\"\n    One-Vs-All ROC-curve:\n    \"\"\"\n    fpr = dict()\n    tpr = dict()\n    roc_auc = dict()\n    n_classes = y_trues.shape[1]\n    wanted = list(range(n_classes))\n    for i,x in enumerate(wanted):\n        fpr[i], tpr[i], _ = roc_curve(y_trues[:, i], y_preds[:, i])\n        roc_auc[i] = round(auc(fpr[i], tpr[i]),2)\n    extra = 0\n    for i in range(n_classes):\n        axy.plot(fpr[i], tpr[i],)\n    return\n```\nAnd the problem with it is that it isn't clear enough for code-sharing.","response":"As described in Type hints for context manager: \n```\nimport matplotlib.pyplot as plt\n\ndef plot_func(ax: plt.Axes):\n    ...\n```","best_answers_score":0.6259,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/63783154\/how-to-type-hint-a-matplotlib-axes-subplots-axessubplots-object-in-python3","best_answers_votes":34,"tags":["python","matplotlib"],"question_length":1233,"response_length":127,"tag_count":2}
{"question":"How to draw vertical lines on a given plot Given a plot of a signal in time representation, how can I draw lines marking the corresponding time index? Specifically, given a signal plot with a time index ranging from 0 to 2.6 (seconds), I want to draw vertical red lines indicating the corresponding time index for the list [0.22058956, 0.33088437, 2.20589566]. How can I do it?","response":"The standard way to add vertical lines that will cover your entire plot window without you having to specify their actual height is plt.axvline \n```\nimport matplotlib.pyplot as plt\n\nplt.axvline(x=0.22058956)\nplt.axvline(x=0.33088437)\nplt.axvline(x=2.20589566)\n```\nOR \n```\nxcoords = [0.22058956, 0.33088437, 2.20589566]\nfor xc in xcoords:\n    plt.axvline(x=xc)\n```\nYou can use many of the keywords available for other plot commands (e.g. color, linestyle, linewidth ...). You can pass in keyword arguments ymin and ymax if you like, in axes coordinates (e.g. ymin=0.25, ymax=0.75 will cover the middle half of the plot). There are corresponding functions for horizontal lines (axhline) and rectangles (axvspan).","best_answers_score":0.625,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/24988448\/how-to-draw-vertical-lines-on-a-given-plot","best_answers_votes":749,"tags":["python","pandas","matplotlib","seaborn"],"question_length":377,"response_length":709,"tag_count":4}
{"question":"How to plot one single data point?
I have the following code to plot a line and a point: \n```\ndf = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 4, 6]})\npoint = pd.DataFrame({'x': [2], 'y': [5]})\nax = df.plot(x='x', y='y', label='line')\nax = point.plot(x='x', y='y', ax=ax, style='r-', label='point')\n```\nHow do I get the single data point to show up?","response":"To plot a single point you can do something like this: \n```\nplt.plot([x], [y], marker='o', markersize=3, color=\"red\")\n```","best_answers_score":0.6233,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27779845\/how-to-plot-one-single-data-point","best_answers_votes":116,"tags":["python","pandas","matplotlib","plot"],"question_length":344,"response_length":121,"tag_count":4}
{"question":"Is it possible to draw a boxplot given the percentile values instead of the original inputs? From what I can see, the boxplot() method expects a sequence of raw values (numbers) as input, from which it then computes percentiles to draw the boxplot(s). I would like to have a method by which I could pass in the percentiles and get the corresponding boxplot. For example: Assume that I have run several benchmarks and for each benchmark I've measured latencies (floating point values). Now additionally, I have precomputed the percentiles for these values. Hence for each benchmark, I have the 25th, 50th, 75th percentile along with the min and max. Now given these data, I would like to draw the box plots for the benchmarks.","response":"As of 2020, there is a better method than the one in the accepted answer. The matplotlib.axes.Axes class provides a bxp method, which can be used to draw the boxes and whiskers based on the percentile values. Raw data is only needed for the outliers, and that is optional. Example: \n```\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nboxes = [\n    {\n        'label' : \"Male height\",\n        'whislo': 162.6,    # Bottom whisker position\n        'q1'    : 170.2,    # First quartile (25th percentile)\n        'med'   : 175.7,    # Median         (50th percentile)\n        'q3'    : 180.4,    # Third quartile (75th percentile)\n        'whishi': 187.8,    # Top whisker position\n        'fliers': []        # Outliers\n    }\n]\nax.bxp(boxes, showfliers=False)\nax.set_ylabel(\"cm\")\nplt.savefig(\"boxplot.png\")\nplt.close()\n```\nThis produces the following image:","best_answers_score":0.6231,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/27214537\/is-it-possible-to-draw-a-boxplot-given-the-percentile-values-instead-of-the-orig","best_answers_votes":54,"tags":["python","matplotlib","boxplot","percentile"],"question_length":723,"response_length":864,"tag_count":4}
{"question":"matplotlib.pyplot, preserve aspect ratio of the plot Assuming we have polygon coordinates as polygon = [(x1, y1), (x2, y2), ...], the following code displays the polygon: \n```\nimport matplotlib.pyplot as plt\nplt.fill(*zip(*polygon))\nplt.show()\n```\nBy default it is trying to adjust the aspect ratio so that the polygon (or whatever other diagram) fits inside the window, and automatically changing it so that it fits even after resizing. Which is great in many cases, except when you are trying to estimate visually if the image is distorted. How to fix the aspect ratio to be strictly 1:1? (Not sure if \"aspect ratio\" is the right term here, so in case it is not - I need both X and Y axes to have 1:1 scale, so that (0, 1) on both X and Y takes an exact same amount of screen space.
And I need to keep it 1:1 no matter how I resize the window.)","response":"Does it help to use: \n```\nplt.axis('equal')\n```","best_answers_score":0.6229,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2934878\/matplotlib-pyplot-preserve-aspect-ratio-of-the-plot","best_answers_votes":113,"tags":["python","matplotlib"],"question_length":1100,"response_length":47,"tag_count":2}
{"question":"Set the legend location of a pandas plot I know how to set the legend location of a matplotlib plot with plt.legend(loc='lower left'); however, I am plotting with the pandas method df.plot() and need to set the legend location to 'lower left'. Does anyone know how to do it? Edited: I am actually looking for a way to do it through pandas' df.plot(), not via plt.legend(loc='lower left')","response":"Well, simply chain it. \n```\ndframe.rank(ascending=False).plot(kind='bar').legend(loc='best')\n```\nAssuming 'dframe' is a DataFrame.","best_answers_score":0.6213,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/59791884\/set-the-legend-location-of-a-pandas-plot","best_answers_votes":52,"tags":["python","pandas","matplotlib","plot","legend-properties"],"question_length":381,"response_length":131,"tag_count":5}
{"question":"Hide matplotlib descriptions in jupyter notebook I am not sure what the correct term for this is, but here is what I see when I plot something: The plot is actually what I want to see, but jupyter notebook also outputs some text, which I am trying to get rid of. After some searching, the only thing I was able to find is plt.ioff(), which didn't help me. Is there a way to get rid of the text?","response":"You can finish the corresponding (matplotlib) line with a semicolon ;","best_answers_score":0.6208,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/38968404\/hide-matplotlib-descriptions-in-jupyter-notebook","best_answers_votes":76,"tags":["matplotlib","jupyter-notebook"],"question_length":522,"response_length":69,"tag_count":2}
{"question":"Python scatter plot. Size and style of the marker I have a set of data that I want to show as a scatter plot. I want each point to be plotted as a square of size dx. \n```\nx = [0.5,0.1,0.3]\ny = [0.2,0.7,0.8]\nz = [10.,15.,12.]\ndx = [0.05,0.2,0.1]\n\nscatter(x,y,c=z,s=dx,marker='s')\n```\nThe problem is that the size s that the scatter function reads is in points^2. What I'd like is to have each point represented by a square of area dx^2, where this area is in 'real' units, the plot units. I hope you can get this point. I also have another question. The scatter function plots the markers with a black border; how can I drop this option and have no border at all?","response":"Translate from the user data coordinate system to the display coordinate system, and use edgecolors='none' to plot faces with no outlines.
\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\ndx_in_points = np.diff(ax.transData.transform(list(zip([0]*len(dx), dx))))\nplt.scatter(x, y, c=z, s=dx_in_points**2, marker='s', edgecolors='none')\n```","best_answers_score":0.6183,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/9081553\/python-scatter-plot-size-and-style-of-the-marker","best_answers_votes":43,"tags":["python","plot","matplotlib","scatter"],"question_length":700,"response_length":573,"tag_count":4}
{"question":"Fixing color in scatter plots in matplotlib I want to fix the color range on multiple scatter plots and add in a colorbar to each plot (which will be the same in each figure). Essentially, I'm fixing all aspects of the axes and colorspace etc. so that the plots are directly comparable by eye. For the life of me, I can't seem to figure out all the various ways of fixing the color-range. I've tried vmin, vmax, but it doesn't seem to do anything; I've also tried clim(x,y) and that doesn't seem to work either. This must come up here and there; I can't be the only one that wants to compare various subsets of data amongst plots... so, how do you fix the colors so that each datum keeps its color between plots and doesn't get remapped to a different color due to the change in max\/min of the subset -v- the whole set?","response":"Setting vmin and vmax should do this. Here's an example: \n```\nimport matplotlib.pyplot as plt\n\nxyc = range(20)\n\nplt.subplot(121)\nplt.scatter(xyc[:13], xyc[:13], c=xyc[:13], s=35, vmin=0, vmax=20)\nplt.colorbar()\nplt.xlim(0, 20)\nplt.ylim(0, 20)\n\nplt.subplot(122)\nplt.scatter(xyc[8:20], xyc[8:20], c=xyc[8:20], s=35, vmin=0, vmax=20) \nplt.colorbar()\nplt.xlim(0, 20)\nplt.ylim(0, 20)\n\nplt.show()\n```\nAnd the plot this produces:","best_answers_score":0.616,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/2925806\/fixing-color-in-scatter-plots-in-matplotlib","best_answers_votes":55,"tags":["python","colors","matplotlib","scatter-plot"],"question_length":819,"response_length":424,"tag_count":4}
{"question":"What names can be used in plt.cm.get_cmap? I have this code: \n```\nplt.scatter(data_projected[:,0],data_projected[:,1],c=digits.target\n ,edgecolors='none',alpha=0.5,cmap=plt.cm.get_cmap('nipy_spectral',10));\n```\nMy confusion comes from plt.cm.get_cmap('nipy_spectral',10). Sometimes there will be plt.cm.get_cmap('RdYlBu') instead. Are 'RdYlBu' and 'nipy_spectral' the names of colors? And are there any other names to use instead? Is there a list of all colors available? I have read the documentation but it does not seem to help, or I do not understand it.","response":"The first argument to plt.cm.get_cmap('nipy_spectral',10) is the name of a colormap. You can get a list of all available colormaps via \n```\nimport matplotlib.cm\nprint(matplotlib.cm.cmap_d.keys())\n```\nor you may have a look at the colormap reference.","best_answers_score":0.6116,"library_name":"matplotlib","question_url":"https:\/\/stackoverflow.com\/questions\/47302343\/what-names-can-be-used-in-plt-cm-get-cmap","best_answers_votes":23,"tags":["python","python-3.x","matplotlib","data-analysis","matplotlib-basemap"],"question_length":558,"response_length":249,"tag_count":5}
{"question":"How do I get indices of N maximum values in a NumPy array? NumPy provides a way to get the index of the maximum value of an array via np.argmax. I would like a similar thing, but returning the indexes of the N maximum values.
For instance, if I have an array, [1, 3, 2, 4, 5], then nargmax(array, n=3) would return the indices [4, 3, 1], which correspond to the elements [5, 4, 3].","response":"Newer NumPy versions (1.8 and up) have a function called argpartition for this. To get the indices of the four largest elements, do \n```python\na = np.array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0])\n```\n\n```python\na\n#Output\n#array([9, 4, 4, 3, 3, 9, 0, 4, 6, 0])\n```\n\n```python\nind = np.argpartition(a, -4)[-4:]\n```\n\n```python\nind\n#Output\n#array([1, 5, 8, 0])\n```\n\n```python\ntop4 = a[ind]\n```\n\n```python\ntop4\n#Output\n#array([4, 9, 6, 9])\n```\nUnlike argsort, this function runs in linear time in the worst case, but the returned indices are not sorted, as can be seen from the result of evaluating a[ind]. If you need that too, sort them afterwards: \n```python\nind[np.argsort(a[ind])]\n#Output\n#array([1, 8, 5, 0])\n```\nTo get the top-k elements in sorted order in this way takes O(n + k log k) time.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6910641\/how-do-i-get-indices-of-n-maximum-values-in-a-numpy-array","best_answers_votes":1025,"tags":["python","numpy","max","numpy-ndarray"],"question_length":380,"response_length":693,"tag_count":4}
{"question":"How can the Euclidean distance be calculated with NumPy? I have two points in 3D space: \n```none\na = (ax, ay, az)\nb = (bx, by, bz)\n```\nI want to calculate the distance between them: \n```none\ndist = sqrt((ax-bx)^2 + (ay-by)^2 + (az-bz)^2)\n```\nHow do I do this with NumPy? I have: \n```py\nimport numpy\na = numpy.array((ax, ay, az))\nb = numpy.array((bx, by, bz))\n```","response":"Use numpy.linalg.norm: \n```\ndist = numpy.linalg.norm(a-b)\n```\nThis works because the Euclidean distance is the l2 norm, and the default value of the ord parameter in numpy.linalg.norm is 2. For more theory, see Introduction to Data Mining:","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1401712\/how-can-the-euclidean-distance-be-calculated-with-numpy","best_answers_votes":1352,"tags":["python","numpy","euclidean-distance"],"question_length":362,"response_length":239,"tag_count":3}
{"question":"How do I count the occurrence of a certain item in an ndarray? How do I count the number of 0s and 1s in the following array? \n```\ny = np.array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1])\n```\ny.count(0) gives: numpy.ndarray object has no attribute count","response":"Using numpy.unique: \n```\nimport numpy\na = numpy.array([0, 3, 0, 1, 0, 1, 2, 1, 0, 0, 0, 0, 1, 3, 4])\nunique, counts = numpy.unique(a, return_counts=True)\n```\n\n```python\ndict(zip(unique, counts))\n#Output\n#{0: 7, 1: 4, 2: 1, 3: 2, 4: 1}\n```\nNon-numpy method using collections.Counter: \n```\nimport collections, numpy\na = numpy.array([0, 3, 0, 1, 0, 1, 2, 1, 0, 0, 0, 0, 1, 3, 4])\ncounter = collections.Counter(a)\n```\n\n```python\ncounter\n#Output\n#Counter({0: 7, 1: 4, 3: 2, 2: 1, 4: 1})\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28663856\/how-do-i-count-the-occurrence-of-a-certain-item-in-an-ndarray","best_answers_votes":1113,"tags":["python","numpy","multidimensional-array","count"],"question_length":247,"response_length":447,"tag_count":4}
{"question":"How do I create a new column where the values are selected based on an existing column? 
How do I add a color column to the following dataframe so that color='green' if Set == 'Z', and color='red' otherwise? \n```\nType Set\n1 A Z\n2 B Z \n3 B X\n4 C Y\n```","response":"If you only have two choices to select from then use np.where: \n```\ndf['color'] = np.where(df['Set']=='Z', 'green', 'red')\n```\nFor example, \n```\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')})\ndf['color'] = np.where(df['Set']=='Z', 'green', 'red')\nprint(df)\n```\nyields \n```\nSet Type color\n0 Z A green\n1 Z B green\n2 X B red\n3 Y C red\n```\nIf you have more than two conditions then use np.select. For example, if you want color to be yellow when (df['Set'] == 'Z') & (df['Type'] == 'A') otherwise blue when (df['Set'] == 'Z') & (df['Type'] == 'B') otherwise purple when (df['Type'] == 'B') otherwise black, then use \n```\ndf = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')})\nconditions = [\n (df['Set'] == 'Z') & (df['Type'] == 'A'),\n (df['Set'] == 'Z') & (df['Type'] == 'B'),\n (df['Type'] == 'B')]\nchoices = ['yellow', 'blue', 'purple']\ndf['color'] = np.select(conditions, choices, default='black')\nprint(df)\n```\nwhich yields \n```\nSet Type color\n0 Z A yellow\n1 Z B blue\n2 X B purple\n3 Y C black\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19913659\/how-do-i-create-a-new-column-where-the-values-are-selected-based-on-an-existing","best_answers_votes":1060,"tags":["python","pandas","dataframe","numpy"],"question_length":288,"response_length":1122,"tag_count":4} {"question":"Pretty-print a NumPy array without scientific notation and with given precision How do I print formatted NumPy arrays in a way similar to this: \n```\nx = 1.23456\nprint('%.3f' % x)\n```\nIf I want to print the numpy.ndarray of floats, it prints several decimals, often in 'scientific' format, which is rather hard to read even for low-dimensional arrays. However, numpy.ndarray apparently has to be printed as a string, i.e., with %s. Is there a solution for this?","response":"Use numpy.set_printoptions to set the precision of the output: \n```\nimport numpy as np\nx = np.random.random(10)\nprint(x)\n# [ 0.07837821 0.48002108 0.41274116 0.82993414 0.77610352 0.1023732\n# 0.51303098 0.4617183 0.33487207 0.71162095]\n\nnp.set_printoptions(precision=3)\nprint(x)\n# [ 0.078 0.48 0.413 0.83 0.776 0.102 0.513 0.462 0.335 0.712]\n```\nAnd suppress suppresses the use of scientific notation for small numbers: \n```\ny = np.array([1.5e-10, 1.5, 1500])\nprint(y)\n# [ 1.500e-10 1.500e+00 1.500e+03]\n\nnp.set_printoptions(suppress=True)\nprint(y)\n# [ 0. 1.5 1500. ]\n```\nTo apply print options locally, using NumPy 1.15.0 or later, you could use the numpy.printoptions context manager. For example, inside the with-suite precision=3 and suppress=True are set: \n```\nx = np.random.random(10)\nwith np.printoptions(precision=3, suppress=True):\n print(x)\n # [ 0.073 0.461 0.689 0.754 0.624 0.901 0.049 0.582 0.557 0.348]\n```\nBut outside the with-suite the print options are back to default settings: \n```\nprint(x) \n# [ 0.07334334 0.46132615 0.68935231 0.75379645 0.62424021 0.90115836\n# 0.04879837 0.58207504 0.55694118 0.34768638]\n```\nIf you are using an earlier version of NumPy, you can create the context manager yourself. 
For example, \n```\nimport numpy as np\nimport contextlib\n\n@contextlib.contextmanager\ndef printoptions(*args, **kwargs):\n    original = np.get_printoptions()\n    np.set_printoptions(*args, **kwargs)\n    try:\n        yield\n    finally:\n        np.set_printoptions(**original)\n\nx = np.random.random(10)\nwith printoptions(precision=3, suppress=True):\n    print(x)\n    # [ 0.073 0.461 0.689 0.754 0.624 0.901 0.049 0.582 0.557 0.348]\n```\nTo prevent zeros from being stripped from the end of floats: np.set_printoptions now has a formatter parameter which allows you to specify a format function for each type. \n```\nnp.set_printoptions(formatter={'float': '{: 0.3f}'.format})\nprint(x)\n```\nwhich prints \n```\n[ 0.078 0.480 0.413 0.830 0.776 0.102 0.513 0.462 0.335 0.712]\n```\ninstead of \n```\n[ 0.078 0.48 0.413 0.83 0.776 0.102 0.513 0.462 0.335 0.712]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2891790\/pretty-print-a-numpy-array-without-scientific-notation-and-with-given-precision","best_answers_votes":796,"tags":["python","numpy","pretty-print"],"question_length":460,"response_length":2163,"tag_count":3}
{"question":"How do I add an extra column to a NumPy array? Given the following 2D array: \n```\na = np.array([\n    [1, 2, 3],\n    [2, 3, 4],\n])\n```\nI want to add a column of zeros along the second axis to get: \n```\nb = np.array([\n    [1, 2, 3, 0],\n    [2, 3, 4, 0],\n])\n```","response":"np.r_[...] (docs) and np.c_[...] (docs) are useful alternatives to np.vstack and np.hstack. Note that they use square brackets [] instead of parentheses (). Some examples: \n```\n: import numpy as np\n: N = 3\n: A = np.eye(N)\n\n: np.c_[ A, np.ones(N) ] # add a column\narray([[ 1., 0., 0., 1.],\n       [ 0., 1., 0., 1.],\n       [ 0., 0., 1., 1.]])\n\n: np.c_[ np.ones(N), A, np.ones(N) ] # or two\narray([[ 1., 1., 0., 0., 1.],\n       [ 1., 0., 1., 0., 1.],\n       [ 1., 0., 0., 1., 1.]])\n\n: np.r_[ A, [A[1]] ] # add a row\narray([[ 1., 0., 0.],\n       [ 0., 1., 0.],\n       [ 0., 0., 1.],\n       [ 0., 1., 0.]])\n: # not np.r_[ A, A[1] ]\n\n: np.r_[ A[0], 1, 2, 3, A[1] ] # mix vecs and scalars\n array([ 1., 0., 0., 1., 2., 3., 0., 1., 0.])\n\n: np.r_[ A[0], [1, 2, 3], A[1] ] # lists\n array([ 1., 0., 0., 1., 2., 3., 0., 1., 0.])\n\n: np.r_[ A[0], (1, 2, 3), A[1] ] # tuples\n array([ 1., 0., 0., 1., 2., 3., 0., 1., 0.])\n\n: np.r_[ A[0], 1:4, A[1] ] # same, 1:4 == arange(1,4) == 1,2,3\n array([ 1., 0., 0., 1., 2., 3., 0., 1., 0.])\n```\nThe reason for square brackets [] instead of round () is that Python converts 1:4 to slice objects in square brackets.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8486294\/how-do-i-add-an-extra-column-to-a-numpy-array","best_answers_votes":510,"tags":["python","numpy"],"question_length":258,"response_length":1242,"tag_count":2}
{"question":"Comparing two NumPy arrays for equality, element-wise What is the simplest way to compare two NumPy arrays for equality (where equality is defined as: A = B iff for all indices i: A[i] == B[i])? Simply using == gives me a boolean array: \n```python\nnumpy.array([1,1,1]) == numpy.array([1,1,1])\n#Output\n#array([ True, True, True], dtype=bool)\n```\nDo I have to and the elements of this array to determine if the arrays are equal, or is there a simpler way to compare?","response":"```\n(A==B).all()\n```\ntests if all values of the array (A==B) are True.
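A quick illustrative run (my own example, not part of the original answer): \n```python\nimport numpy\n\nA = numpy.array([1, 2, 3])\nB = numpy.array([1, 2, 3])\nprint((A == B).all())                      # True: every element-wise comparison is True\nprint((A == numpy.array([1, 2, 4])).all())  # False: the last elements differ\n```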
Note: maybe you also want to test the A and B shapes, such as A.shape == B.shape. Special cases and alternatives (from dbaupp's answer and yoavram's comment): It should be noted that this solution can have strange behavior in a particular case: if either A or B is empty and the other one contains a single element, then it returns True. For some reason, the comparison A==B returns an empty array, for which the all operator returns True. Another risk is if A and B don't have the same shape and aren't broadcastable; then this approach will raise an error. In conclusion, if you have any doubt about the shapes of A and B, or simply want to be safe, use one of the specialized functions: \n```\nnp.array_equal(A,B) # test if same shape, same elements values\nnp.array_equiv(A,B) # test if broadcastable shape, same elements values\nnp.allclose(A,B,...) # test if same shape, elements have close enough values\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10580676\/comparing-two-numpy-arrays-for-equality-element-wise","best_answers_votes":681,"tags":["python","arrays","numpy","elementwise-operations"],"question_length":456,"response_length":977,"tag_count":4}
{"question":"What is the difference between flatten and ravel functions in numpy? \n```\nimport numpy as np\ny = np.array(((1,2,3),(4,5,6),(7,8,9)))\nOUTPUT:\nprint(y.flatten())\n[1 2 3 4 5 6 7 8 9]\nprint(y.ravel())\n[1 2 3 4 5 6 7 8 9]\n```\nBoth functions return the same list. Then what is the need for two different functions performing the same job?","response":"The current API is: flatten always returns a copy. ravel returns a contiguous view of the original array whenever possible. This isn't visible in the printed output, but if you modify the array returned by ravel, it may modify the entries in the original array. If you modify the entries in an array returned from flatten, this will never happen. ravel will often be faster since no memory is copied, but you have to be more careful about modifying the array it returns. reshape((-1,)) gets a view whenever the strides of the array allow it, even if that means you don't always get a contiguous array.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28930465\/what-is-the-difference-between-flatten-and-ravel-functions-in-numpy","best_answers_votes":567,"tags":["python","numpy","multidimensional-array","flatten","numpy-ndarray"],"question_length":358,"response_length":604,"tag_count":5}
{"question":"Convert NumPy array to Python list How do I convert a NumPy array into a Python List?","response":"Use tolist(): \n```python\nimport numpy as np\nnp.array([[1,2,3],[4,5,6]]).tolist()\n#Output\n#[[1, 2, 3], [4, 5, 6]]\n```\nNote that this converts the values from whatever numpy type they may have (e.g. np.int32 or np.float32) to the \"nearest compatible Python type\" (in a list). If you want to preserve the numpy data types, you could call list() on your array instead, and you'll end up with a list of numpy scalars. (Thanks to Mr_and_Mrs_D for pointing that out in a comment.)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1966207\/convert-numpy-array-to-python-list","best_answers_votes":617,"tags":["python","list","numpy","numpy-ndarray"],"question_length":85,"response_length":466,"tag_count":4}
{"question":"ValueError: The truth value of an array with more than one element is ambiguous.
Use a.any() or a.all() Let x be a NumPy array. The following: \n```\n(x > 1) and (x < 3)\n```\nGives the error message: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() How do I fix this?","response":"If a and b are Boolean NumPy arrays, the & operation returns the elementwise-and of them: \n```\na & b\n```\nThat returns a Boolean array. To reduce this to a single Boolean value, use either \n```\n(a & b).any()\n```\nor \n```\n(a & b).all()\n```\nNote: if a and b are non-Boolean arrays, consider (a - b).any() or (a - b).all() instead. Rationale: The NumPy developers felt there was no one commonly understood way to evaluate an array in Boolean context: it could mean True if any element is True, or it could mean True if all elements are True, or True if the array has non-zero length, just to name three possibilities. Since different users might have different needs and different assumptions, the NumPy developers refused to guess and instead decided to raise a ValueError whenever one tries to evaluate an array in Boolean context. Applying and to two numpy arrays causes the two arrays to be evaluated in Boolean context (by calling __bool__ in Python3 or __nonzero__ in Python2).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10062954\/valueerror-the-truth-value-of-an-array-with-more-than-one-element-is-ambiguous","best_answers_votes":323,"tags":["python","numpy"],"question_length":319,"response_length":977,"tag_count":2}
{"question":"NumPy array initialization (fill with identical values) I need to create a NumPy array of length n, each element of which is v. Is there anything better than: \n```\na = empty(n)\nfor i in range(n):\n    a[i] = v\n```\nI know zeros and ones would work for v = 0, 1. I could use v * ones(n), but it won't work when v is None, and also would be much slower.","response":"NumPy 1.8 introduced np.full(), which is a more direct method than empty() followed by fill() for creating an array filled with a certain value: \n```python\nnp.full((3, 5), 7)\n#Output\n#array([[ 7., 7., 7., 7., 7.],\n#       [ 7., 7., 7., 7., 7.],\n#       [ 7., 7., 7., 7., 7.]])\n```\n\n```python\nnp.full((3, 5), 7, dtype=int)\n#Output\n#array([[7, 7, 7, 7, 7],\n#       [7, 7, 7, 7, 7],\n#       [7, 7, 7, 7, 7]])\n```\nThis is arguably the way of creating an array filled with certain values, because it explicitly describes what is being achieved (and it can in principle be very efficient since it performs a very specific task).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5891410\/numpy-array-initialization-fill-with-identical-values","best_answers_votes":571,"tags":["python","arrays","numpy"],"question_length":470,"response_length":600,"tag_count":3}
{"question":"Frequency counts for unique values in a NumPy array How do I efficiently obtain the frequency count for each unique value in a NumPy array?
\n```python\nx = np.array([1,1,1,2,2,2,5,25,1,1])\nfreq_count(x)\n#Output\n#[(1, 5), (2, 3), (5, 1), (25, 1)]\n```","response":"Use numpy.unique with return_counts=True (for NumPy 1.9+): \n```\nimport numpy as np\n\nx = np.array([1,1,1,2,2,2,5,25,1,1])\nunique, counts = np.unique(x, return_counts=True)\n\n>>> print(np.asarray((unique, counts)).T)\n [[ 1 5]\n [ 2 3]\n [ 5 1]\n [25 1]]\n```\nIn comparison with scipy.stats.itemfreq: \n```python\nx = np.random.random_integers(0,100,1e6)\n```\n\n```python\n%timeit unique, counts = np.unique(x, return_counts=True)\n10 loops, best of 3: 31.5 ms per loop\n```\n\n```python\n%timeit scipy.stats.itemfreq(x)\n10 loops, best of 3: 170 ms per loop\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10741346\/frequency-counts-for-unique-values-in-a-numpy-array","best_answers_votes":788,"tags":["python","arrays","performance","numpy"],"question_length":241,"response_length":540,"tag_count":4}
{"question":"Dropping infinite values from dataframes in pandas? How do I drop nan, inf, and -inf values from a DataFrame without resetting mode.use_inf_as_null? Can I tell dropna to include inf in its definition of missing values so that the following works? \n```\ndf.dropna(subset=[\"col1\", \"col2\"], how=\"all\")\n```","response":"First replace() infs with NaN: \n```\ndf.replace([np.inf, -np.inf], np.nan, inplace=True)\n```\nand then drop NaNs via dropna(): \n```\ndf.dropna(subset=[\"col1\", \"col2\"], how=\"all\", inplace=True)\n```\nFor example: \n```python\ndf = pd.DataFrame({\"col1\": [1, np.inf, -np.inf], \"col2\": [2, 3, np.nan]})\n```\n\n```python\ndf\n#Output\n#   col1 col2\n#0   1.0  2.0\n#1   inf  3.0\n#2  -inf  NaN\n```\n\n```python\ndf.replace([np.inf, -np.inf], np.nan, inplace=True)\n```\n\n```python\ndf\n#Output\n#   col1 col2\n#0   1.0  2.0\n#1   NaN  3.0\n#2   NaN  NaN\n```\n\n```python\ndf.dropna(subset=[\"col1\", \"col2\"], how=\"all\", inplace=True)\n```\n\n```python\ndf\n#Output\n#   col1 col2\n#0   1.0  2.0\n#1   NaN  3.0\n```\nThe same method also works for Series.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17477979\/dropping-infinite-values-from-dataframes-in-pandas","best_answers_votes":715,"tags":["python","pandas","numpy"],"question_length":301,"response_length":629,"tag_count":3}
{"question":"What is the difference between ndarray and array in NumPy? What is the difference between ndarray and array in NumPy? Where is their implementation in the NumPy source code?","response":"numpy.array is just a convenience function to create an ndarray; it is not a class itself. You can also create an array using numpy.ndarray, but it is not the recommended way. From the docstring of numpy.ndarray: Arrays should be constructed using array, zeros or empty ... The parameters given here refer to a low-level method (ndarray(...)) for instantiating an array.
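To make the distinction concrete, here is a small sketch of my own (not from the original answer); note that the low-level np.ndarray constructor allocates memory without initializing it: \n```python\nimport numpy as np\n\na = np.array([1, 2, 3])           # convenience function: builds and fills the array\nb = np.ndarray((3,), dtype=int)   # low-level constructor: contents are arbitrary until assigned\nb[:] = [1, 2, 3]                  # must be filled in explicitly\n```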
Most of the meat of the implementation is in C code, here in multiarray, but you can start looking at the ndarray interfaces here: https:\/\/github.com\/numpy\/numpy\/blob\/master\/numpy\/core\/numeric.py","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15879315\/what-is-the-difference-between-ndarray-and-array-in-numpy","best_answers_votes":361,"tags":["python","arrays","numpy","multidimensional-array","numpy-ndarray"],"question_length":173,"response_length":566,"tag_count":5}
{"question":"How do I remove NaN values from a NumPy array? How do I remove NaN values from a NumPy array? \n```\n[1, 2, NaN, 4, NaN, 8] \u27f6 [1, 2, 4, 8]\n```","response":"To remove NaN values from a NumPy array x: \n```\nx = x[~numpy.isnan(x)]\n```\nExplanation: The inner function numpy.isnan returns a boolean\/logical array which has the value True everywhere that x is not-a-number. Since we want the opposite, we use the logical-not operator ~ to get an array with Trues everywhere that x is a valid number. Lastly, we use this logical array to index into the original array x, in order to retrieve just the non-NaN values.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11620914\/how-do-i-remove-nan-values-from-a-numpy-array","best_answers_votes":629,"tags":["python","numpy","nan"],"question_length":144,"response_length":451,"tag_count":3}
{"question":"Most efficient way to reverse a numpy array Believe it or not, after profiling my current code, the repetitive operation of numpy array reversal ate a giant chunk of the running time. What I have right now is the common view-based method: \n```\nreversed_arr = arr[::-1]\n```\nIs there any other way to do it more efficiently, or is it just an illusion from my obsession with unrealistic numpy performance?","response":"```\nreversed_arr = arr[::-1]\n```\ngives a reversed view into the original array arr. Any changes made to the original array arr will also be immediately visible in reversed_arr. The underlying data buffers for arr and reversed_arr are shared, so creating this view is always instantaneous, and does not require any additional memory allocation or copying for the array contents. See also this discussion on NumPy views: How do I create a view onto a NumPy array? Possible solutions to performance problems regarding views: Are you re-creating the view more often than you need to? You should be able to do something like this: \n```\narr = np.array(some_sequence)\nreversed_arr = arr[::-1]\n\ndo_something(arr)\nlook_at(reversed_arr)\ndo_something_else(arr)\nlook_at(reversed_arr)\n```\nI'm not a numpy expert, but this seems like it would be the fastest way to do things in numpy. If this is what you are already doing, I don't think you can improve on it.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6771428\/most-efficient-way-to-reverse-a-numpy-array","best_answers_votes":342,"tags":["python","numpy"],"question_length":403,"response_length":946,"tag_count":2}
{"question":"How to remove specific elements in a numpy array How can I remove some specific elements from a numpy array? Say I have \n```\nimport numpy as np\n\na = np.array([1,2,3,4,5,6,7,8,9])\n```\nI then want to remove 3,4,7 from a. All I know is the index of the values (index=[2,3,6]).","response":"Use numpy.delete(), which returns a new array with sub-arrays along an axis deleted.
\n```\nnumpy.delete(a, index)\n```\nFor your specific question: \n```\nimport numpy as np\n\na = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])\nindex = [2, 3, 6]\n\nnew_a = np.delete(a, index)\n\nprint(new_a)\n# Output: [1, 2, 5, 6, 8, 9]\n```\nNote that numpy.delete() returns a new array; it does not modify the original array in place. To quote the delete() docs: \"A copy of arr with the elements specified by obj removed. Note that delete does not occur in-place...\" If the code I post has output, it is the result of running the code.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10996140\/how-to-remove-specific-elements-in-a-numpy-array","best_answers_votes":464,"tags":["python","arrays","numpy"],"question_length":273,"response_length":684,"tag_count":3}
{"question":"Convert array of indices to one-hot encoded array in NumPy Given a 1D array of indices: \n```\na = array([1, 0, 3])\n```\nI want to one-hot encode this as a 2D array: \n```\nb = array([[0,1,0,0], [1,0,0,0], [0,0,0,1]])\n```","response":"Create a zeroed array b with enough columns, i.e. a.max() + 1. Then, for each row i, set the a[i]th column to 1. \n```python\na = np.array([1, 0, 3])\n```\n\n```python\nb = np.zeros((a.size, a.max() + 1))\n```\n\n```python\nb[np.arange(a.size), a] = 1\n```\n\n```python\nb\n#Output\n#array([[ 0., 1., 0., 0.],\n#       [ 1., 0., 0., 0.],\n#       [ 0., 0., 0., 1.]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29831489\/convert-array-of-indices-to-one-hot-encoded-array-in-numpy","best_answers_votes":547,"tags":["python","numpy","machine-learning","numpy-ndarray","one-hot-encoding"],"question_length":216,"response_length":316,"tag_count":5}
{"question":"How to take column-slices of dataframe in pandas I load some machine learning data from a CSV file. The first 2 columns are observations and the remaining columns are features. Currently, I do the following: \n```\ndata = pandas.read_csv('mydata.csv')\n```\nwhich gives something like: \n```\ndata = pandas.DataFrame(np.random.rand(10,5), columns = list('abcde'))\n```\nI'd like to slice this dataframe into two dataframes: one containing the columns a and b and one containing the columns c, d and e. It is not possible to write something like \n```\nobservations = data[:'c']\nfeatures = data['c':]\n```\nI'm not sure what the best method is. Do I need a pd.Panel? By the way, I find dataframe indexing pretty inconsistent: data['a'] is permitted, but data[0] is not. On the other hand, data['a':] is not permitted but data[0:] is. Is there a practical reason for this? This is really confusing if columns are indexed by Int, given that data[0] != data[0:1]","response":"2017 Answer - pandas 0.20: .ix is deprecated. Use .loc. See the deprecation in the docs. .loc uses label-based indexing to select both rows and columns. The labels being the values of the index or the columns. Slicing with .loc includes the last element. Let's assume we have a DataFrame with the following columns: foo, bar, quz, ant, cat, sat, dat. \n```\n# selects all rows and all columns beginning at 'foo' up to and including 'sat'\ndf.loc[:, 'foo':'sat']\n# foo bar quz ant cat sat\n```\n.loc accepts the same slice notation that Python lists do, for both rows and columns.
Slice notation is start:stop:step. \n```\n# slice from 'foo' to 'cat' by every 2nd column\ndf.loc[:, 'foo':'cat':2]\n# foo quz cat\n\n# slice from the beginning to 'bar'\ndf.loc[:, :'bar']\n# foo bar\n\n# slice from 'quz' to the end by 3\ndf.loc[:, 'quz'::3]\n# quz sat\n\n# attempt from 'sat' to 'bar'\ndf.loc[:, 'sat':'bar']\n# no columns returned\n\n# slice from 'sat' to 'bar'\ndf.loc[:, 'sat':'bar':-1]\n# sat cat ant quz bar\n\n# slice notation is syntactic sugar for the slice function\n# slice from 'quz' to the end by 2 with slice function\ndf.loc[:, slice('quz',None, 2)]\n# quz cat dat\n\n# select specific columns with a list\n# select columns foo, bar and dat\ndf.loc[:, ['foo','bar','dat']]\n# foo bar dat\n```\nYou can slice by rows and columns. For instance, if you have 5 rows with labels v, w, x, y, z \n```\n# slice from 'w' to 'y' and 'foo' to 'ant' by 3\ndf.loc['w':'y', 'foo':'ant':3]\n# foo ant\n# w\n# x\n# y\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10665889\/how-to-take-column-slices-of-dataframe-in-pandas","best_answers_votes":324,"tags":["python","pandas","numpy","dataframe","slice"],"question_length":944,"response_length":1470,"tag_count":5}
{"question":"Create numpy matrix filled with NaNs I have the following code: \n```\nr = numpy.zeros(shape = (width, height, 9))\n```\nIt creates a width x height x 9 matrix filled with zeros. Instead, I'd like to know if there's a function or way to initialize it to NaNs instead in an easy way.","response":"You rarely need loops for vector operations in numpy. You can create an uninitialized array and assign to all entries at once: \n```python\na = numpy.empty((3,3,))\n```\n\n```python\na[:] = numpy.nan\n```\n\n```python\na\n#Output\n#array([[ NaN, NaN, NaN],\n#       [ NaN, NaN, NaN],\n#       [ NaN, NaN, NaN]])\n```\nI have timed the alternatives a[:] = numpy.nan here and a.fill(numpy.nan) as posted by Blaenk: \n```\n$ python -mtimeit \"import numpy as np; a = np.empty((100,100));\" \"a.fill(np.nan)\"\n10000 loops, best of 3: 54.3 usec per loop\n$ python -mtimeit \"import numpy as np; a = np.empty((100,100));\" \"a[:] = np.nan\" \n10000 loops, best of 3: 88.8 usec per loop\n```\nThe timings show a preference for ndarray.fill(..) as the faster alternative. OTOH, I like numpy's convenience implementation where you can assign values to whole slices at a time; the code's intention is very clear. Note that ndarray.fill performs its operation in-place, so numpy.empty((3,3,)).fill(numpy.nan) will instead return None.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1704823\/create-numpy-matrix-filled-with-nans","best_answers_votes":415,"tags":["python","numpy"],"question_length":280,"response_length":966,"tag_count":2}
{"question":"How do I create a numpy array of all True or all False? In Python, how do I create a numpy array of arbitrary shape filled with all True or all False?","response":"The answer: \n```py\nnumpy.full((2, 2), True)\n```\nExplanation: numpy creates arrays of all ones or all zeros very easily: e.g.
numpy.ones((2, 2)) or numpy.zeros((2, 2)) Since True and False are represented in Python as 1 and 0, respectively, we only have to specify that this array should be boolean using the optional dtype parameter and we are done: \n```py\nnumpy.ones((2, 2), dtype=bool)\n```\nreturns: \n```\narray([[ True, True],\n       [ True, True]], dtype=bool)\n```\nUPDATE: 30 October 2013 Since numpy version 1.8, we can use full to achieve the same result with syntax that more clearly shows our intent (as fmonegaglia points out): \n```\nnumpy.full((2, 2), True, dtype=bool)\n```\nUPDATE: 16 January 2017 Since at least numpy version 1.12, full automatically casts to the dtype of the second parameter, so we can just write: \n```\nnumpy.full((2, 2), True)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21174961\/how-do-i-create-a-numpy-array-of-all-true-or-all-false","best_answers_votes":438,"tags":["python","arrays","numpy","boolean","numpy-ndarray"],"question_length":150,"response_length":855,"tag_count":5}
{"question":"Take multiple lists into dataframe How do I take multiple lists and put them as different columns in a python dataframe? I tried this solution but had some trouble. Attempt 1: have three lists, zip them together and use that: res = zip(lst1,lst2,lst3). This yields just one column. Attempt 2: \n```\npercentile_list = pd.DataFrame({'lst1Tite' : [lst1],\n                                'lst2Tite' : [lst2],\n                                'lst3Tite' : [lst3] }, \n                               columns=['lst1Tite','lst1Tite', 'lst1Tite'])\n```\nyields either one row by 3 columns (the way above) or, if I transpose it, 3 rows and 1 column. How do I get a 100 row (length of each independent list) by 3 column (three lists) pandas dataframe?","response":"I think you're almost there; try removing the extra square brackets around the lst's (also, you don't need to specify the column names when you're creating a dataframe from a dict like this): \n```\nimport pandas as pd\nlst1 = range(100)\nlst2 = range(100)\nlst3 = range(100)\npercentile_list = pd.DataFrame(\n    {'lst1Title': lst1,\n     'lst2Title': lst2,\n     'lst3Title': lst3\n    })\n\npercentile_list\n    lst1Title  lst2Title  lst3Title\n0           0          0          0\n1           1          1          1\n2           2          2          2\n3           3          3          3\n4           4          4          4\n5           5          5          5\n6           6          6          6\n...\n```\nIf you need a more performant solution you can use np.column_stack rather than zip as in your first attempt; this has around a 2x speedup on the example here, but it comes at a bit of a cost in readability, in my opinion: \n```\nimport numpy as np\npercentile_list = pd.DataFrame(np.column_stack([lst1, lst2, lst3]), \n                               columns=['lst1Title', 'lst2Title', 'lst3Title'])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30522724\/take-multiple-lists-into-dataframe","best_answers_votes":513,"tags":["python","numpy","pandas"],"question_length":732,"response_length":1068,"tag_count":3}
{"question":"How to convert a NumPy array to PIL image applying matplotlib colormap I want to take a NumPy 2D array which represents a grayscale image, and convert it to an RGB PIL image while applying some of the matplotlib colormaps.
I can get a reasonable PNG output by using the pyplot.figure.figimage command: \n```\ndpi = 100.0\nw, h = myarray.shape[1]\/dpi, myarray.shape[0]\/dpi\nfig = plt.figure(figsize=(w,h), dpi=dpi)\nfig.figimage(sub, cmap=cm.gist_earth)\nplt.savefig('out.png')\n```\nAlthough I could adapt this to get what I want (probably using StringIO to get the PIL image), I wonder if there is not a simpler way to do that, since it seems to be a very natural problem of image visualization. Let's say, something like this: \n```\ncolored_PIL_image = magic_function(array, cmap)\n```","response":"Quite a busy one-liner, but here it is: First ensure your NumPy array, myarray, is normalised with the max value at 1.0. Apply the colormap directly to myarray. Rescale to the 0-255 range. Convert to integers, using np.uint8(). Use Image.fromarray(). And you're done: \n```\nfrom PIL import Image\nfrom matplotlib import cm\nim = Image.fromarray(np.uint8(cm.gist_earth(myarray)*255))\n```\n(Output images omitted here: one rendered with plt.savefig(), one with im.save().)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10965417\/how-to-convert-a-numpy-array-to-pil-image-applying-matplotlib-colormap","best_answers_votes":400,"tags":["python","numpy","matplotlib","python-imaging-library","color-mapping"],"question_length":777,"response_length":419,"tag_count":5} {"question":"numpy matrix vector multiplication [duplicate] This question already has answers here: how does multiplication differ for NumPy Matrix vs Array classes? (8 answers) Closed 11 years ago. When I multiply two numpy arrays of sizes (n x n)*(n x 1), I get a matrix of size (n x n). Following normal matrix multiplication rules, an (n x 1) vector is expected, but I simply cannot find any information about how this is done in Python's Numpy module. The thing is that I don't want to implement it manually to preserve the speed of the program. Example code is shown below: \n```\na = np.array([[5, 1, 3], [1, 1, 1], [1, 2, 1]])\nb = np.array([1, 2, 3])\n\nprint a*b\n >>\n [[5 2 9]\n [1 2 3]\n [1 4 3]]\n```\nWhat I want is: \n```\nprint a*b\n >>\n [16 6 8]\n```","response":"Simplest solution Use numpy.dot or a.dot(b). See the documentation here. \n```\n\n```python\na = np.array([[ 5, 1 ,3], \n#Output\n# [ 1, 1 ,1], \n# [ 1, 2 ,1]])\n```\n\n```python\nb = np.array([1, 2, 3])\n```\n\n```python\nprint a.dot(b)\n#Output\n#array([16, 6, 8])\n#```\n#This occurs because numpy arrays are not matrices, and the standard operations *, +, -, \/ work element-wise on arrays. Note that while you can use numpy.matrix (as of early 2021) where * will be treated like standard matrix multiplication, numpy.matrix is deprecated and may be removed in future releases. See the note in its documentation (reproduced below): It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future. Thanks @HopeKing. Other Solutions Also know that there are other options: As noted below, if using python3.5+ and numpy v1.10+, the @ operator works as you'd expect: \n#```\n```\n\n```python\nprint(a @ b)\n#Output\n#array([16, 6, 8])\n#```\n#If you want overkill, you can use numpy.einsum. The documentation will give you a flavor for how it works, but honestly, I didn't fully understand how to use it until reading this answer and just playing around with it on my own. 
\n#```\n```\n\n```python\nnp.einsum('ji,i->j', a, b)\n#Output\n#array([16, 6, 8])\n#```\n#As of mid 2016 (numpy 1.10.1), you can try the experimental numpy.matmul, which works like numpy.dot with two major exceptions: no scalar multiplication but it works with stacks of matrices. \n#```\n```\n\n```python\nnp.matmul(a, b)\n#Output\n#array([16, 6, 8])\n#```\n#numpy.inner functions the same way as numpy.dot for matrix-vector multiplication but behaves differently for matrix-matrix and tensor multiplication (see Wikipedia regarding the differences between the inner product and dot product in general or see this SO answer regarding numpy's implementations). \n#```\n```\n\n```python\nnp.inner(a, b)\n#Output\n#array([16, 6, 8])\n```\n\n# Beware using for matrix-matrix multiplication though!\n\n```python\nb = a.T\n```\n\n```python\nnp.dot(a, b)\n#Output\n#array([[35, 9, 10],\n# [ 9, 3, 4],\n# [10, 4, 6]])\n```\n\n```python\nnp.inner(a, b) \n#Output\n#array([[29, 12, 19],\n# [ 7, 4, 5],\n# [ 8, 5, 6]])\n#```\n#If you have multiple 2D arrays to dot together, you may consider the np.linalg.multi_dot function, which simplifies the syntax of many nested np.dots. Note that this only works with 2D arrays (i.e. not for matrix-vector multiplication). \n#```\n```\n\n```python\nnp.dot(np.dot(a, a.T), a).dot(a.T)\n#Output\n# array([[1406, 382, 446],\n# [ 382, 106, 126],\n# [ 446, 126, 152]])\n# >>> np.linalg.multi_dot((a, a.T, a, a.T))\n# array([[1406, 382, 446],\n# [ 382, 106, 126],\n# [ 446, 126, 152]])\n#```\n#Rarer options for edge cases If you have tensors (arrays of dimension greater than or equal to one), you can use numpy.tensordot with the optional argument axes=1: \n#```\n```\n\n```python\nnp.tensordot(a, b, axes=1)\n#Output\n#array([16, 6, 8])\n#```\n#Don't use numpy.vdot if you have a matrix of complex numbers, as the matrix will be flattened to a 1D array, then it will try to find the complex conjugate dot product between your flattened matrix and vector (which will fail due to a size mismatch n*m vs n).\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21562986\/numpy-matrix-vector-multiplication","best_answers_votes":471,"tags":["python","arrays","numpy","vector","matrix"],"question_length":752,"response_length":3041,"tag_count":5} {"question":"Transposing a 1D NumPy array I use Python and NumPy and have some problems with \"transpose\": \n```\nimport numpy as np\na = np.array([5,4])\nprint(a)\nprint(a.T)\n```\nInvoking a.T is not transposing the array. If a is for example [[],[]] then it transposes correctly, but I need the transpose of [...,...,...].","response":"It's working exactly as it's supposed to. The transpose of a 1D array is still a 1D array! (If you're used to matlab, it fundamentally doesn't have a concept of a 1D array. Matlab's \"1D\" arrays are 2D.) If you want to turn your 1D vector into a 2D array and then transpose it, just slice it with np.newaxis (or None, they're the same, newaxis is just more readable). \n```\nimport numpy as np\na = np.array([5,4])[np.newaxis]\nprint(a)\nprint(a.T)\n```\nGenerally speaking though, you don't ever need to worry about this. Adding the extra dimension is usually not what you want, if you're just doing it out of habit. Numpy will automatically broadcast a 1D array when doing various calculations. There's usually no need to distinguish between a row vector and a column vector (neither of which are vectors. They're both 2D!) 
when you just want a vector.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5954603\/transposing-a-1d-numpy-array","best_answers_votes":318,"tags":["python","numpy","transpose"],"question_length":304,"response_length":846,"tag_count":3} {"question":"Relationship between SciPy and NumPy SciPy appears to provide most (but not all [1]) of NumPy's functions in its own namespace. In other words, if there's a function named numpy.foo, there's almost certainly a scipy.foo. Most of the time, the two appear to be exactly the same, oftentimes even pointing to the same function object. Sometimes, they're different. To give an example that came up recently: numpy.log10 is a ufunc that returns NaNs for negative arguments; scipy.log10 returns complex values for negative arguments and doesn't appear to be a ufunc. The same can be said about log, log2 and logn, but not about log1p [2]. On the other hand, numpy.exp and scipy.exp appear to be different names for the same ufunc. This is also true of scipy.log1p and numpy.log1p. Another example is numpy.linalg.solve vs scipy.linalg.solve. They're similar, but the latter offers some additional features over the former. Why the apparent duplication? If this is meant to be a wholesale import of numpy into the scipy namespace, why the subtle differences in behaviour and the missing functions? Is there some overarching logic that would help clear up the confusion? [1] numpy.min, numpy.max, numpy.abs and a few others have no counterparts in the scipy namespace. [2] Tested using NumPy 1.5.1 and SciPy 0.9.0rc2.","response":"Last time I checked it, the scipy __init__ method executes a \n```\nfrom numpy import *\n```\nso that the whole numpy namespace is included into scipy when the scipy module is imported. The log10 behavior you are describing is interesting, because both versions are coming from numpy. One is a ufunc, the other is a numpy.lib function. Why scipy is preferring the library function over the ufunc, I don't know off the top of my head. EDIT: In fact, I can answer the log10 question. Looking in the scipy __init__ method I see this: \n```\n# Import numpy symbols to scipy name space\nimport numpy as _num\nfrom numpy import oldnumeric\nfrom numpy import *\nfrom numpy.random import rand, randn\nfrom numpy.fft import fft, ifft\nfrom numpy.lib.scimath import *\n```\nThe log10 function you get in scipy comes from numpy.lib.scimath. Looking at that code, it says: \n```\n\"\"\"\nWrapper functions to more user-friendly calling of certain math functions\nwhose output data-type is different than the input data-type in certain\ndomains of the input.\n\nFor example, for functions like log() with branch cuts, the versions in this\nmodule provide the mathematically valid answers in the complex plane:\n\n```python\nimport math\n```\n\n```python\nfrom numpy.lib import scimath\n```\n\n```python\nscimath.log(-math.exp(1)) == (1+1j*math.pi)\n#Output\n#True\n```\n\nSimilarly, sqrt(), other base logarithms, power() and trig functions are\ncorrectly handled. See their respective docstrings for specific examples.\n\"\"\"\n```\nIt seems that module overlays the base numpy ufuncs for sqrt, log, log2, logn, log10, power, arccos, arcsin, and arctanh. That explains the behavior you are seeing. 
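To see that overlay concretely, here is a minimal sketch of my own (not taken from the scipy source), contrasting the plain ufunc with the scimath wrapper: 
```
import numpy as np
from numpy.lib import scimath

print(np.log10(-100.0))       # nan, with a RuntimeWarning -- the plain ufunc
print(scimath.log10(-100.0))  # (2+1.3643763538418412j) -- the wrapped version
```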
The underlying design reason why it is done like that is probably buried in a mailing list post somewhere.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6200910\/relationship-between-scipy-and-numpy","best_answers_votes":156,"tags":["python","numpy","scipy"],"question_length":1309,"response_length":1704,"tag_count":3} {"question":"Counting unique values in a column in pandas dataframe like in Qlik? If I have a table like this: \n```\ndf = pd.DataFrame({\n 'hID': [101, 102, 103, 101, 102, 104, 105, 101],\n 'dID': [10, 11, 12, 10, 11, 10, 12, 10],\n 'uID': ['James', 'Henry', 'Abe', 'James', 'Henry', 'Brian', 'Claude', 'James'],\n 'mID': ['A', 'B', 'A', 'B', 'A', 'A', 'A', 'C']\n})\n```\nI can do count(distinct hID) in Qlik to come up with count of 5 for unique hID. How do I do that in python using a pandas dataframe? Or maybe a numpy array? Similarly, if were to do count(hID) I will get 8 in Qlik. What is the equivalent way to do it in pandas?","response":"Count distinct values, use nunique: \n```\ndf['hID'].nunique()\n5\n```\nCount only non-null values, use count: \n```\ndf['hID'].count()\n8\n```\nCount total values including null values, use the size attribute: \n```\ndf['hID'].size\n8\n```\nEdit to add condition Use boolean indexing: \n```\ndf.loc[df['mID']=='A','hID'].agg(['nunique','count','size'])\n```\nOR using query: \n```\ndf.query('mID == \"A\"')['hID'].agg(['nunique','count','size'])\n```\nOutput: \n```\nnunique 5\ncount 5\nsize 5\nName: hID, dtype: int64\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45759966\/counting-unique-values-in-a-column-in-pandas-dataframe-like-in-qlik","best_answers_votes":419,"tags":["python","pandas","numpy","qlikview","qliksense"],"question_length":645,"response_length":507,"tag_count":5} {"question":"np.mean() vs np.average() in Python NumPy? I notice that \n```\n\n```python\nnp.mean([1, 2, 3])\n#Output\n#2.0\n```\n\n```python\nnp.average([1, 2, 3])\n#Output\n#2.0\n#```\n#However, there should be some differences, since after all they are two different functions. What are the differences between them?\n```","response":"np.average takes an optional weight parameter. If it is not supplied they are equivalent. Take a look at the source code: Mean, Average np.mean: \n```\ntry:\n mean = a.mean\nexcept AttributeError:\n return _wrapit(a, 'mean', axis, dtype, out)\nreturn mean(axis, dtype, out)\n```\nnp.average: \n```\n...\nif weights is None :\n avg = a.mean(axis)\n scl = avg.dtype.type(a.size\/avg.size)\nelse:\n #code that does weighted mean here\n\nif returned: #returned is another optional argument\n scl = np.multiply(avg, 0) + scl\n return avg, scl\nelse:\n return avg\n...\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20054243\/np-mean-vs-np-average-in-python-numpy","best_answers_votes":240,"tags":["python","numpy","statistics","average","mean"],"question_length":283,"response_length":567,"tag_count":5} {"question":"How to count the number of true elements in a NumPy bool array I have a NumPy array 'boolarr' of boolean type. I want to count the number of elements whose values are True. Is there a NumPy or Python routine dedicated for this task? Or, do I need to iterate over the elements in my script?","response":"You have multiple options. Two options are the following. 
\n```\nboolarr.sum()\nnumpy.count_nonzero(boolarr)\n```\nHere's an example: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nboolarr = np.array([[0, 0, 1], [1, 0, 1], [1, 0, 1]], dtype=np.bool)\n```\n\n```python\nboolarr\n#Output\n#array([[False, False, True],\n# [ True, False, True],\n# [ True, False, True]], dtype=bool)\n```\n\n```python\nboolarr.sum()\n#Output\n#5\n#```\n#Of course, that is a bool-specific answer. More generally, you can use numpy.count_nonzero. \n#```\n```\n\n```python\nnp.count_nonzero(boolarr)\n#Output\n#5\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8364674\/how-to-count-the-number-of-true-elements-in-a-numpy-bool-array","best_answers_votes":365,"tags":["python","arrays","numpy","count","boolean"],"question_length":289,"response_length":502,"tag_count":5} {"question":"How to split data into 3 sets (train, validation and test)? I have a pandas dataframe and I wish to divide it to 3 separate sets. I know that using train_test_split from sklearn.cross_validation, one can divide the data in two sets (train and test). However, I couldn't find any solution about splitting the data into three sets. Preferably, I'd like to have the indices of the original data. I know that a workaround would be to use train_test_split two times and somehow adjust the indices. But is there a more standard \/ built-in way to split the data into 3 sets instead of 2?","response":"Numpy solution. We will shuffle the whole dataset first (df.sample(frac=1, random_state=42)) and then split our data set into the following parts: 60% - train set, 20% - validation set, 20% - test set \n```\n\n```python\ntrain, validate, test = \\\n np.split(df.sample(frac=1, random_state=42), \n [int(.6*len(df)), int(.8*len(df))])\n```\n\n```python\ntrain\n#Output\n#A B C D E\n#0 0.046919 0.792216 0.206294 0.440346 0.038960\n#2 0.301010 0.625697 0.604724 0.936968 0.870064\n#1 0.642237 0.690403 0.813658 0.525379 0.396053\n#9 0.488484 0.389640 0.599637 0.122919 0.106505\n#8 0.842717 0.793315 0.554084 0.100361 0.367465\n#7 0.185214 0.603661 0.217677 0.281780 0.938540\n```\n\n```python\nvalidate\n#Output\n#A B C D E\n#5 0.806176 0.008896 0.362878 0.058903 0.026328\n#6 0.145777 0.485765 0.589272 0.806329 0.703479\n```\n\n```python\ntest\n#Output\n#A B C D E\n#4 0.521640 0.332210 0.370177 0.859169 0.401087\n#3 0.333348 0.964011 0.083498 0.670386 0.169619\n#```\n#[int(.6*len(df)), int(.8*len(df))] - is an indices_or_sections array for numpy.split(). Here is a small demo for np.split() usage - let's split 20-elements array into the following parts: 80%, 10%, 10%: \n#```\n```\n\n```python\na = np.arange(1, 21)\n```\n\n```python\na\n#Output\n#array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])\n```\n\n```python\nnp.split(a, [int(.8 * len(a)), int(.9 * len(a))])\n#Output\n#[array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]),\n# array([17, 18]),\n# array([19, 20])]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38250710\/how-to-split-data-into-3-sets-train-validation-and-test","best_answers_votes":295,"tags":["pandas","numpy","dataframe","machine-learning","scikit-learn"],"question_length":580,"response_length":1653,"tag_count":5} {"question":"How to flatten only some dimensions of a numpy array Is there a quick way to \"sub-flatten\" or flatten only some of the first dimensions in a numpy array? 
For example, given a numpy array of dimensions (50,100,25), the resultant dimensions would be (5000,25)","response":"Take a look at numpy.reshape . \n```\n\n```python\narr = numpy.zeros((50,100,25))\n```\n\n```python\narr.shape\n#Output\n## (50, 100, 25)\n```\n\n```python\nnew_arr = arr.reshape(5000,25)\n```\n\n```python\nnew_arr.shape \n#Output\n## (5000, 25)\n```\n\n# One shape dimension can be -1. \n# In this case, the value is inferred from \n# the length of the array and remaining dimensions.\n\n```python\nanother_arr = arr.reshape(-1, arr.shape[-1])\n```\n\n```python\nanother_arr.shape\n#Output\n## (5000, 25)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18757742\/how-to-flatten-only-some-dimensions-of-a-numpy-array","best_answers_votes":222,"tags":["python","numpy","flatten"],"question_length":257,"response_length":389,"tag_count":3} {"question":"What is the inverse function of zip in python? [duplicate] This question already has answers here: Transpose\/Unzip Function (inverse of zip)? (14 answers) Closed 2 years ago. I've used the zip function from the Numpy library to sort tuples and now I have a list containing all the tuples. I had since modified that list and now I would like to restore the tuples so I can use my data. How can I do this?","response":"```\nlst1, lst2 = zip(*zipped_list)\n```\nshould give you the unzipped list. *zipped_list unpacks the zipped_list object. it then passes all the tuples from the zipped_list object to zip, which just packs them back up as they were when you passed them in. so if: \n```\na = [1,2,3]\nb = [4,5,6]\n```\nthen zipped_list = zip(a,b) gives you: \n```\n[(1,4), (2,5), (3,6)]\n```\nand *zipped_list gives you back \n```\n(1,4), (2,5), (3,6)\n```\nzipping that with zip(*zipped_list) gives you back the two collections: \n```\n[(1, 2, 3), (4, 5, 6)]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13635032\/what-is-the-inverse-function-of-zip-in-python","best_answers_votes":496,"tags":["python","list","numpy","tuples"],"question_length":403,"response_length":527,"tag_count":4} {"question":"Concatenate a NumPy array to another NumPy array I have a numpy_array. Something like [ a b c ]. And then I want to concatenate it with another NumPy array (just like we create a list of lists). How do we create a NumPy array containing NumPy arrays? 
I tried to do the following without any luck \n```\n\n```python\nM = np.array([])\n```\n\n```python\nM\n#Output\n#array([], dtype=float64)\n```\n\n```python\nM.append(a,axis=0)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#AttributeError: 'numpy.ndarray' object has no attribute 'append'\n```\n\n```python\na\n#Output\n#array([1, 2, 3])\n#```\n```","response":"```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.array([[1, 2, 3], [4, 5, 6]])\n```\n\n```python\nb = np.array([[9, 8, 7], [6, 5, 4]])\n```\n\n```python\nnp.concatenate((a, b))\n#Output\n#array([[1, 2, 3],\n# [4, 5, 6],\n# [9, 8, 7],\n# [6, 5, 4]])\n#```\n#or this: \n#```\n```\n\n```python\na = np.array([1, 2, 3])\n```\n\n```python\nb = np.array([4, 5, 6])\n```\n\n```python\nnp.vstack((a, b))\n#Output\n#array([[1, 2, 3],\n# [4, 5, 6]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9775297\/concatenate-a-numpy-array-to-another-numpy-array","best_answers_votes":294,"tags":["python","numpy"],"question_length":539,"response_length":396,"tag_count":2} {"question":"Extracting specific columns in numpy array This is an easy question but say I have an MxN matrix. All I want to do is extract specific columns and store them in another numpy array but I get invalid syntax errors. Here is the code: \n```\nextractedData = data[[:,1],[:,9]].\n```\nIt seems like the above line should suffice but I guess not. I looked around but couldn't find anything syntax wise regarding this specific scenario.","response":"I assume you wanted columns 1 and 9? To select multiple columns at once, use \n```\nX = data[:, [1, 9]]\n```\nTo select one at a time, use \n```\nx, y = data[:, 1], data[:, 9]\n```\nWith names: \n```\ndata[:, ['Column Name1','Column Name2']]\n```\nYou can get the names from data.dtype.names\u2026","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8386675\/extracting-specific-columns-in-numpy-array","best_answers_votes":398,"tags":["python","syntax","numpy"],"question_length":425,"response_length":280,"tag_count":3} {"question":"Numpy first occurrence of value greater than existing value I have a 1D array in numpy and I want to find the position of the index where a value exceeds the value in numpy array. E.g. \n```\naa = range(-10,10)\n```\nFind position in aa where, the value 5 gets exceeded.","response":"This is a little faster (and looks nicer) \n```\nnp.argmax(aa>5)\n```\nSince argmax will stop at the first True (\"In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.\") and doesn't save another list. 
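One caveat worth knowing, with a toy guard as a sketch of my own (not part of the original answer): if no element satisfies the condition, the boolean mask is all False and argmax returns 0, which looks exactly like a match at index 0, so check the result before trusting it: 
```
import numpy as np

aa = np.arange(-10, 10)
print(np.argmax(aa > 5))    # 16 -- the first index where aa exceeds 5
print(np.argmax(aa > 99))   # 0  -- but nothing in aa exceeds 99!

idx = np.argmax(aa > 99)
if not aa[idx] > 99:
    idx = None              # signal that no element qualified
```
Timing comparison: 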
\n```\n\n```python\nN = 10000\n```\n\n```python\naa = np.arange(-N,N)\n```\n\n```python\ntimeit np.argmax(aa>N\/2)\n100000 loops, best of 3: 52.3 us per loop\n```\n\n```python\ntimeit np.where(aa>N\/2)[0][0]\n10000 loops, best of 3: 141 us per loop\n```\n\n```python\ntimeit np.nonzero(aa>N\/2)[0][0]\n10000 loops, best of 3: 142 us per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16243955\/numpy-first-occurrence-of-value-greater-than-existing-value","best_answers_votes":315,"tags":["python","numpy"],"question_length":266,"response_length":554,"tag_count":2} {"question":"How to add a new row to an empty numpy array Using standard Python arrays, I can do the following: \n```\narr = []\narr.append([1,2,3])\narr.append([4,5,6])\n# arr is now [[1,2,3],[4,5,6]]\n```\nHowever, I cannot do the same thing in numpy. For example: \n```\narr = np.array([])\narr = np.append(arr, np.array([1,2,3]))\narr = np.append(arr, np.array([4,5,6]))\n# arr is now [1,2,3,4,5,6]\n```\nI also looked into vstack, but when I use vstack on an empty array, I get: \n```\nValueError: all the input array dimensions except for the concatenation axis must match exactly\n```\nSo how do I do append a new row to an empty array in numpy?","response":"The way to \"start\" the array that you want is: \n```\narr = np.empty((0,3), int)\n```\nWhich is an empty array but it has the proper dimensionality. \n```\n>>> arr\narray([], shape=(0, 3), dtype=int64)\n```\nThen be sure to append along axis 0: \n```\narr = np.append(arr, np.array([[1,2,3]]), axis=0)\narr = np.append(arr, np.array([[4,5,6]]), axis=0)\n```\nBut, @jonrsharpe is right. In fact, if you're going to be appending in a loop, it would be much faster to append to a list as in your first example, then convert to a numpy array at the end, since you're really not using numpy as intended during the loop: \n```\n\n```python\n%%timeit\n .....: l = []\n .....: for i in xrange(1000):\n .....: l.append([3*i+1,3*i+2,3*i+3])\n .....: l = np.asarray(l)\n .....: \n1000 loops, best of 3: 1.18 ms per loop\n```\n\n```python\n%%timeit\n .....: a = np.empty((0,3), int)\n .....: for i in xrange(1000):\n .....: a = np.append(a, 3*i+np.array([[1,2,3]]), 0)\n .....: \n100 loops, best of 3: 18.5 ms per loop\n```\n\n```python\nnp.allclose(a, l)\n#Output\n#True\n#```\n#The numpythonic way to do it depends on your application, but it would be more like: \n#```\n```\n\n```python\ntimeit n = np.arange(1,3001).reshape(1000,3)\n100000 loops, best of 3: 5.93 \u00b5s per loop\n```\n\n```python\nnp.allclose(a, n)\n#Output\n#True\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22392497\/how-to-add-a-new-row-to-an-empty-numpy-array","best_answers_votes":343,"tags":["python","numpy","scipy"],"question_length":621,"response_length":1277,"tag_count":3} {"question":"Numpy where function multiple conditions I have an array of distances called dists. I want to select dists which are within a range. \n```\ndists[(np.where(dists >= r)) and (np.where(dists <= r + dr))]\n```\nHowever, this selects only for the condition \n```\n(np.where(dists <= r + dr))\n```\nIf I do the commands sequentially by using a temporary variable it works fine. Why does the above code not work, and how do I get it to work?","response":"The best way in your particular case would just be to change your two criteria to one criterion: \n```\ndists[abs(dists - r - dr\/2.) 
<= dr\/2.]\n```\nIt only creates one boolean array, and in my opinion is easier to read because it says, is dist within a dr or r? (Though I'd redefine r to be the center of your region of interest instead of the beginning, so r = r + dr\/2.) But that doesn't answer your question. The answer to your question: You don't actually need where if you're just trying to filter out the elements of dists that don't fit your criteria: \n```\ndists[(dists >= r) & (dists <= r+dr)]\n```\nBecause the & will give you an elementwise and (the parentheses are necessary). Or, if you do want to use where for some reason, you can do: \n```\ndists[(np.where((dists >= r) & (dists <= r + dr)))]\n```\nWhy: The reason it doesn't work is because np.where returns a list of indices, not a boolean array. You're trying to get and between two lists of numbers, which of course doesn't have the True\/False values that you expect. If a and b are both True values, then a and b returns b. So saying something like [0,1,2] and [2,3,4] will just give you [2,3,4]. Here it is in action: \n```\n\n```python\ndists = np.arange(0,10,.5)\n```\n\n```python\nr = 5\n```\n\n```python\ndr = 1\n```\n\n```python\nnp.where(dists >= r)\n#Output\n#(array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]),)\n```\n\n```python\nnp.where(dists <= r+dr)\n#Output\n#(array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]),)\n```\n\n```python\nnp.where(dists >= r) and np.where(dists <= r+dr)\n#Output\n#(array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]),)\n#```\n#What you were expecting to compare was simply the boolean array, for example \n#```\n```\n\n```python\ndists >= r\n#Output\n#array([False, False, False, False, False, False, False, False, False,\n# False, True, True, True, True, True, True, True, True,\n# True, True], dtype=bool)\n```\n\n```python\ndists <= r + dr\n#Output\n#array([ True, True, True, True, True, True, True, True, True,\n# True, True, True, True, False, False, False, False, False,\n# False, False], dtype=bool)\n```\n\n```python\n(dists >= r) & (dists <= r + dr)\n#Output\n#array([False, False, False, False, False, False, False, False, False,\n# False, True, True, True, False, False, False, False, False,\n# False, False], dtype=bool)\n#```\n#Now you can call np.where on the combined boolean array: \n#```\n```\n\n```python\nnp.where((dists >= r) & (dists <= r + dr))\n#Output\n#(array([10, 11, 12]),)\n```\n\n```python\ndists[np.where((dists >= r) & (dists <= r + dr))]\n#Output\n#array([ 5. , 5.5, 6. ])\n#```\n#Or simply index the original array with the boolean array using fancy indexing \n#```\n```\n\n```python\ndists[(dists >= r) & (dists <= r + dr)]\n#Output\n#array([ 5. , 5.5, 6. ])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16343752\/numpy-where-function-multiple-conditions","best_answers_votes":313,"tags":["python","numpy"],"question_length":427,"response_length":2748,"tag_count":2} {"question":"How to normalize a NumPy array to within a certain range? After doing some processing on an audio or image array, it needs to be normalized within a range before it can be written back to a file. This can be done like so: \n```\n# Normalize audio channels to between -1.0 and +1.0\naudio[:,0] = audio[:,0]\/abs(audio[:,0]).max()\naudio[:,1] = audio[:,1]\/abs(audio[:,1]).max()\n\n# Normalize image to between 0 and 255\nimage = image\/(image.max()\/255.0)\n```\nIs there a less verbose, convenience function way to do this? 
matplotlib.colors.Normalize() doesn't seem to be related.","response":"```\n# Normalize audio channels to between -1.0 and +1.0\naudio \/= np.max(np.abs(audio),axis=0)\n# Normalize image to between 0 and 255\nimage *= (255.0\/image.max())\n```\nUsing \/= and *= allows you to eliminate an intermediate temporary array, thus saving some memory. Multiplication is less expensive than division, so \n```\nimage *= 255.0\/image.max() # Uses 1 division and image.size multiplications\n```\nis marginally faster than \n```\nimage \/= image.max()\/255.0 # Uses 1+image.size divisions\n```\nSince we are using basic numpy methods here, I think this is about as efficient a solution in numpy as can be. In-place operations do not change the dtype of the container array. Since the desired normalized values are floats, the audio and image arrays need to have floating-point dtype before the in-place operations are performed. If they are not already of floating-point dtype, you'll need to convert them using astype. For example, \n```\nimage = image.astype('float64')\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1735025\/how-to-normalize-a-numpy-array-to-within-a-certain-range","best_answers_votes":218,"tags":["python","arrays","numpy","scipy","convenience-methods"],"question_length":568,"response_length":982,"tag_count":5} {"question":"From ND to 1D arrays Say I have an array a: \n```\na = np.array([[1,2,3], [4,5,6]])\n\narray([[1, 2, 3],\n [4, 5, 6]])\n```\nI would like to convert it to a 1D array (i.e. a column vector): \n```\nb = np.reshape(a, (1,np.product(a.shape)))\n```\nbut this returns \n```\narray([[1, 2, 3, 4, 5, 6]])\n```\nwhich is not the same as: \n```\narray([1, 2, 3, 4, 5, 6])\n```\nI can take the first element of this array to manually convert it to a 1D array: \n```\nb = np.reshape(a, (1,np.product(a.shape)))[0]\n```\nbut this requires me to know how many dimensions the original array has (and concatenate [0]'s when working with higher dimensions) Is there a dimensions-independent way of getting a column\/row vector from an arbitrary ndarray?","response":"Use np.ravel (for a 1D view) or np.ndarray.flatten (for a 1D copy) or np.ndarray.flat (for a 1D iterator): \n```\n\n```python\na = np.array([[1,2,3], [4,5,6]])\n```\n\n```python\nb = a.ravel()\n```\n\n```python\nb\n#Output\n#array([1, 2, 3, 4, 5, 6])\n#```\n#Note that ravel() returns a view of a when possible. So modifying b also modifies a. ravel() returns a view when the 1D elements are contiguous in memory, but would return a copy if, for example, a were made from slicing another array using a non-unit step size (e.g. a = x[::2]). 
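Here is a quick demonstration of that view-vs-copy behaviour (a small sketch of my own, with made-up values): 
```
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
b = a.ravel()
b[0] = 99
print(a[0, 0])  # 99 -- the ravelled view writes through to a

x = np.arange(12)
z = x[::2].ravel()  # non-contiguous input, so ravel() must copy
z[0] = 99
print(x[0])     # still 0 -- the original is untouched
```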
If you want a copy rather than a view, use \n#```\n```\n\n```python\nc = a.flatten()\n```\nIf you just want an iterator, use np.ndarray.flat: \n```\n```\n\n```python\nd = a.flat\n```\n\n```python\nd\n#Output\n#\n```\n\n```python\nlist(d)\n#Output\n#[1, 2, 3, 4, 5, 6]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13730468\/from-nd-to-1d-arrays","best_answers_votes":396,"tags":["python","numpy"],"question_length":719,"response_length":771,"tag_count":2} {"question":"Filtering a list based on a list of booleans I have a list of values which I need to filter given the values in a list of booleans: \n```\nlist_a = [1, 2, 4, 6]\nfilter = [True, False, True, False]\n```\nI generate a new filtered list with the following line: \n```\nfiltered_list = [i for indx,i in enumerate(list_a) if filter[indx] == True]\n```\nwhich results in: \n```\nprint filtered_list\n[1,4]\n```\nThe line works but looks (to me) a bit overkill and I was wondering if there was a simpler way to achieve the same. Advices Summary of two good advices given in the answers below: 1- Don't name a list filter like I did because it is a built-in function. 2- Don't compare things to True like I did with if filter[idx]==True.. since it's unnecessary. Just using if filter[idx] is enough.","response":"You're looking for itertools.compress: \n```\n\n```python\nfrom itertools import compress\n```\n\n```python\nlist_a = [1, 2, 4, 6]\n```\n\n```python\nfil = [True, False, True, False]\n```\n\n```python\nlist(compress(list_a, fil))\n#Output\n#[1, 4]\n#```\n#Timing comparisons(py3.x): \n#```\n```\n\n```python\nlist_a = [1, 2, 4, 6]\n```\n\n```python\nfil = [True, False, True, False]\n```\n\n```python\n%timeit list(compress(list_a, fil))\n#Output\n#100000 loops, best of 3: 2.58 us per loop\n```\n\n```python\n%timeit [i for (i, v) in zip(list_a, fil) if v] #winner\n#Output\n#100000 loops, best of 3: 1.98 us per loop\n```\n\n```python\nlist_a = [1, 2, 4, 6]*100\n```\n\n```python\nfil = [True, False, True, False]*100\n```\n\n```python\n%timeit list(compress(list_a, fil)) #winner\n#Output\n#10000 loops, best of 3: 24.3 us per loop\n```\n\n```python\n%timeit [i for (i, v) in zip(list_a, fil) if v]\n#Output\n#10000 loops, best of 3: 82 us per loop\n```\n\n```python\nlist_a = [1, 2, 4, 6]*10000\n```\n\n```python\nfil = [True, False, True, False]*10000\n```\n\n```python\n%timeit list(compress(list_a, fil)) #winner\n#Output\n#1000 loops, best of 3: 1.66 ms per loop\n```\n\n```python\n%timeit [i for (i, v) in zip(list_a, fil) if v] \n#Output\n#100 loops, best of 3: 7.65 ms per loop\n#```\n#Don't use filter as a variable name, it is a built-in function.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18665873\/filtering-a-list-based-on-a-list-of-booleans","best_answers_votes":287,"tags":["python","list","numpy"],"question_length":778,"response_length":1066,"tag_count":3} {"question":"How to copy a 2D array into a 3rd dimension, N times? I'd like to copy a numpy 2D array into a third dimension. For example, given the 2D numpy array: \n```\nimport numpy as np\n\narr = np.array([[1, 2], [1, 2]])\n# arr.shape = (2, 2)\n```\nconvert it into a 3D matrix with N such copies in a new dimension. 
Acting on arr with N=3, the output should be: \n```py\nnew_arr[:,:,0]\n# array([[1, 2], [1, 2]])\n\nnew_arr[:,:,1]\n# array([[1, 2], [1, 2]])\n\nnew_arr[:,:,2]\n# array([[1, 2], [1, 2]])\n\n# new_arr.shape = (2, 2, 3)\n```","response":"Probably the cleanest way is to use np.repeat: \n```\na = np.array([[1, 2], [1, 2]])\nprint(a.shape)\n# (2, 2)\n\n# indexing with np.newaxis inserts a new 3rd dimension, which we then repeat the\n# array along, (you can achieve the same effect by indexing with None, see below)\nb = np.repeat(a[:, :, np.newaxis], 3, axis=2)\n\nprint(b.shape)\n# (2, 2, 3)\n\nprint(b[:, :, 0])\n# [[1 2]\n# [1 2]]\n\nprint(b[:, :, 1])\n# [[1 2]\n# [1 2]]\n\nprint(b[:, :, 2])\n# [[1 2]\n# [1 2]]\n```\nHaving said that, you can often avoid repeating your arrays altogether by using broadcasting. For example, let's say I wanted to add a (3,) vector: \n```\nc = np.array([1, 2, 3])\n```\nto a. I could copy the contents of a 3 times in the third dimension, then copy the contents of c twice in both the first and second dimensions, so that both of my arrays were (2, 2, 3), then compute their sum. However, it's much simpler and quicker to do this: \n```\nd = a[..., None] + c[None, None, :]\n```\nHere, a[..., None] has shape (2, 2, 1) and c[None, None, :] has shape (1, 1, 3)*. When I compute the sum, the result gets 'broadcast' out along the dimensions of size 1, giving me a result of shape (2, 2, 3): \n```\nprint(d.shape)\n# (2, 2, 3)\n\nprint(d[..., 0]) # a + c[0]\n# [[2 3]\n# [2 3]]\n\nprint(d[..., 1]) # a + c[1]\n# [[3 4]\n# [3 4]]\n\nprint(d[..., 2]) # a + c[2]\n# [[4 5]\n# [4 5]]\n```\nBroadcasting is a very powerful technique because it avoids the additional overhead involved in creating repeated copies of your input arrays in memory. * Although I included them for clarity, the None indices into c aren't actually necessary - you could also do a[..., None] + c, i.e. broadcast a (2, 2, 1) array against a (3,) array. This is because if one of the arrays has fewer dimensions than the other then only the trailing dimensions of the two arrays need to be compatible. To give a more complicated example: \n```\na = np.ones((6, 1, 4, 3, 1)) # 6 x 1 x 4 x 3 x 1\nb = np.ones((5, 1, 3, 2)) # 5 x 1 x 3 x 2\nresult = a + b # 6 x 5 x 4 x 3 x 2\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32171917\/how-to-copy-a-2d-array-into-a-3rd-dimension-n-times","best_answers_votes":272,"tags":["python","arrays","numpy"],"question_length":511,"response_length":2028,"tag_count":3} {"question":"Efficiently sorting a numpy array in descending order? I am surprised this specific question hasn't been asked before, but I really didn't find it on SO nor on the documentation of np.sort. Say I have a random numpy array holding integers, e.g: \n```\n> temp = np.random.randint(1,10, 10) \n> temp\narray([2, 4, 7, 4, 2, 2, 7, 6, 4, 4])\n```\nIf I sort it, I get ascending order by default: \n```\n> np.sort(temp)\narray([2, 2, 2, 4, 4, 4, 4, 6, 7, 7])\n```\nbut I want the solution to be sorted in descending order. Now, I know I can always do: \n```\nreverse_order = np.sort(temp)[::-1]\n```\nbut is this last statement efficient? Doesn't it create a copy in ascending order, and then reverses this copy to get the result in reversed order? If this is indeed the case, is there an efficient alternative? 
It doesn't look like np.sort accepts parameters to change the sign of the comparisons in the sort operation to get things in reverse order.","response":"temp[::-1].sort() sorts the array in place, whereas np.sort(temp)[::-1] creates a new array. \n```\n\n```python\ntemp = np.random.randint(1,10, 10)\n```\n\n```python\ntemp\n#Output\n#array([5, 2, 7, 4, 4, 2, 8, 6, 4, 4])\n```\n\n```python\nid(temp)\n#Output\n#139962713524944\n```\n\n```python\ntemp[::-1].sort()\n```\n\n```python\ntemp\n#Output\n#array([8, 7, 6, 5, 4, 4, 4, 4, 2, 2])\n```\n\n```python\nid(temp)\n#Output\n#139962713524944\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26984414\/efficiently-sorting-a-numpy-array-in-descending-order","best_answers_votes":225,"tags":["python","arrays","sorting","numpy"],"question_length":933,"response_length":385,"tag_count":4} {"question":"How to convert list of numpy arrays into single numpy array? Suppose I have ; \n```\nLIST = [[array([1, 2, 3, 4, 5]), array([1, 2, 3, 4, 5],[1,2,3,4,5])] # inner lists are numpy arrays\n```\nI try to convert; \n```\narray([[1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5],\n [1, 2, 3, 4, 5])\n```\nI am solving it by iteration on vstack right now but it is really slow for especially large LIST What do you suggest for the best efficient way?","response":"In general you can concatenate a whole sequence of arrays along any axis: \n```\nnumpy.concatenate( LIST, axis=0 )\n```\nbut you do have to worry about the shape and dimensionality of each array in the list (for a 2-dimensional 3x5 output, you need to ensure that they are all 2-dimensional n-by-5 arrays already). If you want to concatenate 1-dimensional arrays as the rows of a 2-dimensional output, you need to expand their dimensionality. As Jorge's answer points out, there is also the function stack, introduced in numpy 1.10: \n```\nnumpy.stack( LIST, axis=0 )\n```\nThis takes the complementary approach: it creates a new view of each input array and adds an extra dimension (in this case, on the left, so each n-element 1D array becomes a 1-by-n 2D array) before concatenating. It will only work if all the input arrays have the same shape. vstack (or equivalently row_stack) is often an easier-to-use solution because it will take a sequence of 1- and\/or 2-dimensional arrays and expand the dimensionality automatically where necessary and only where necessary, before concatenating the whole list together. Where a new dimension is required, it is added on the left. Again, you can concatenate a whole list at once without needing to iterate: \n```\nnumpy.vstack( LIST )\n```\nThis flexible behavior is also exhibited by the syntactic shortcut numpy.r_[ array1, ...., arrayN ] (note the square brackets). This is good for concatenating a few explicitly-named arrays but it becomes less readable in your situation because [] subscripting will not accept a list. You would need to convert your sequence to a tuple: numpy.r_[tuple(LIST)]. It's more readable to simply use vstack(). There is also an analogous function column_stack and shortcut c_[...], for horizontal (column-wise) stacking, as well as an almost-analogous function hstack\u2014although for some reason the latter is less flexible (it is stricter about input arrays' dimensionality, and tries to concatenate 1-D arrays end-to-end instead of treating them as columns). 
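To make that last difference concrete, a minimal sketch of my own: 
```
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(np.hstack([a, b]))         # array([1, 2, 3, 4, 5, 6]) -- 1-D inputs joined end-to-end
print(np.column_stack([a, b]))   # array([[1, 4],
                                 #        [2, 5],
                                 #        [3, 6]]) -- 1-D inputs treated as columns
```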
Finally, in the specific case of vertical stacking of 1-D arrays, the following also works: \n```\nnumpy.array( LIST )\n```\n...because arrays can be constructed out of a sequence of other arrays, adding a new dimension to the beginning.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27516849\/how-to-convert-list-of-numpy-arrays-into-single-numpy-array","best_answers_votes":268,"tags":["python","list","numpy"],"question_length":430,"response_length":2258,"tag_count":3} {"question":"What are the differences between Pandas and NumPy+SciPy in Python? [closed] Closed. This question is opinion-based. It is not currently accepting answers. Want to improve this question? Because this question may lead to opinionated discussion, debate, and answers, it has been closed. You may edit the question if you feel you can improve it so that it requires answers that include facts and citations or a detailed explanation of the proposed solution. If edited, the question will be reviewed and might be reopened. Closed 10 years ago. Improve this question They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.","response":"pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has become very popular in recent years in financial applications. I will have a chapter dedicated to financial data analysis using pandas in my upcoming book.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11077023\/what-are-the-differences-between-pandas-and-numpyscipy-in-python","best_answers_votes":329,"tags":["python","numpy","scipy","pandas"],"question_length":686,"response_length":470,"tag_count":4} {"question":"Numpy matrix to array I am using numpy. I have a matrix with 1 column and N rows and I want to get an array from with N elements. For example, if i have M = matrix([[1], [2], [3], [4]]), I want to get A = array([1,2,3,4]). To achieve it, I use A = np.array(M.T)[0]. Does anyone know a more elegant way to get the same result? Thanks!","response":"If you'd like something a bit more readable, you can do this: \n```\nA = np.squeeze(np.asarray(M))\n```\nEquivalently, you could also do: A = np.asarray(M).reshape(-1), but that's a bit less easy to read.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/3337301\/numpy-matrix-to-array","best_answers_votes":244,"tags":["python","arrays","matrix","numpy"],"question_length":333,"response_length":200,"tag_count":4} {"question":"initialize a numpy array [duplicate] This question already has answers here: Create numpy matrix filled with NaNs (11 answers) Closed 1 year ago. Is there way to initialize a numpy array of a shape and add to it? I will explain what I need with a list example. If I want to create a list of objects generated in a loop, I can do: \n```\na = []\nfor i in range(5):\n a.append(i)\n```\nI want to do something similar with a numpy array. I know about vstack, concatenate etc. However, it seems these require two numpy arrays as inputs. What I need is: \n```\nbig_array # Initially empty. 
This is where I don't know what to specify\nfor i in range(5):\n array i of shape = (2,4) created.\n add to big_array\n```\nThe big_array should have a shape (10,4). How to do this? EDIT: I want to add the following clarification. I am aware that I can define big_array = numpy.zeros((10,4)) and then fill it up. However, this requires specifying the size of big_array in advance. I know the size in this case, but what if I do not? When we use the .append function for extending the list in python, we don't need to know its final size in advance. I am wondering if something similar exists for creating a bigger array from smaller arrays, starting with an empty array.","response":"numpy.zeros Return a new array of given shape and type, filled with zeros. or numpy.ones Return a new array of given shape and type, filled with ones. or numpy.empty Return a new array of given shape and type, without initializing entries. However, the mentality in which we construct an array by appending elements to a list is not much used in numpy, because it's less efficient (numpy datatypes are much closer to the underlying C arrays). Instead, you should preallocate the array to the size that you need it to be, and then fill in the rows. You can use numpy.append if you must, though.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4535374\/initialize-a-numpy-array","best_answers_votes":226,"tags":["python","arrays","numpy"],"question_length":1251,"response_length":593,"tag_count":3} {"question":"How to create a DataFrame of random integers with Pandas? I know that if I use randn, the following code gives me what I am looking for, but with elements from a normal distribution. But what if I just wanted random integers? \n```py\nimport pandas as pd\nimport numpy as np\ndf = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))\n```\nrandint works by providing a range, but not an array like randn. So how do I do this with random integers between some range?","response":"numpy.random.randint accepts a third argument (size) , in which you can specify the size of the output array. You can use this to create your DataFrame - \n```\ndf = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))\n```\nHere - np.random.randint(0,100,size=(100, 4)) - creates an output array of size (100,4) with random integer elements between [0,100) . Demo - \n```python\nimport numpy as np\nimport pandas as pd\ndf = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))\n```\nwhich produces: \n```none\nA B C D\n0 45 88 44 92\n1 62 34 2 86\n2 85 65 11 31\n3 74 43 42 56\n4 90 38 34 93\n5 0 94 45 10\n6 58 23 23 60\n.. .. .. .. ..\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32752292\/how-to-create-a-dataframe-of-random-integers-with-pandas","best_answers_votes":296,"tags":["python","pandas","dataframe","numpy","random"],"question_length":466,"response_length":713,"tag_count":5} {"question":"Pytorch tensor to numpy array I have a pytorch Tensor of shape [4, 3, 966, 1296]. I want to convert it to numpy array using the following code: \n```\nimgs = imgs.numpy()[:, ::-1, :, :]\n```\nHow does that code work?","response":"I believe you also have to use .detach(). I had to convert my Tensor to a numpy array on Colab which uses CUDA and GPU. 
I did it like the following: \n```\n# this is just my embedding matrix which is a Torch tensor object\nembedding = learn.model.u_weight\n\nembedding_list = list(range(0, 64382))\n\ninput = torch.cuda.LongTensor(embedding_list)\ntensor_array = embedding(input)\n# the output of the line below is a numpy array\ntensor_array.cpu().detach().numpy()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/49768306\/pytorch-tensor-to-numpy-array","best_answers_votes":163,"tags":["python","numpy","pytorch"],"question_length":212,"response_length":459,"tag_count":3} {"question":"How to return 0 with divide by zero I'm trying to perform an element wise divide in python, but if a zero is encountered, I need the quotient to just be zero. For example: \n```\narray1 = np.array([0, 1, 2])\narray2 = np.array([0, 1, 1])\n\narray1 \/ array2 # should be np.array([0, 1, 2])\n```\nI could always just use a for-loop through my data, but to really utilize numpy's optimizations, I need the divide function to return 0 upon divide by zero errors instead of ignoring the error. Unless I'm missing something, it doesn't seem numpy.seterr() can return values upon errors. Does anyone have any other suggestions on how I could get the best out of numpy while setting my own divide by zero error handling?","response":"In numpy v1.7+, you can take advantage of the \"where\" option for ufuncs. You can do things in one line and you don't have to deal with the errstate context manager. \n```\n\n```python\na = np.array([-1, 0, 1, 2, 3], dtype=float)\n```\n\n```python\nb = np.array([ 0, 0, 0, 2, 2], dtype=float)\n```\n\n# If you don't pass `out` the indices where (b == 0) will be uninitialized!\n\n```python\nc = np.divide(a, b, out=np.zeros_like(a), where=b!=0)\n```\n\n```python\nprint(c)\n#Output\n#[ 0. 0. 0. 1. 1.5]\n#```\n#In this case, it does the divide calculation anywhere 'where' b does not equal zero. When b does equal zero, then it remains unchanged from whatever value you originally gave it in the 'out' argument.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26248654\/how-to-return-0-with-divide-by-zero","best_answers_votes":363,"tags":["python","arrays","numpy","error-handling","divide-by-zero"],"question_length":705,"response_length":645,"tag_count":5} {"question":"Find the most frequent number in a NumPy array Suppose I have the following NumPy array: \n```\na = np.array([1,2,3,1,2,1,1,1,3,2,2,1])\n```\nHow can I find the most frequent number in this array?","response":"If your list contains all non-negative ints, you should take a look at numpy.bincount: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.bincount.html and then probably use np.argmax: \n```\na = np.array([1,2,3,1,2,1,1,1,3,2,2,1])\ncounts = np.bincount(a)\nprint(np.argmax(counts))\n```\nFor a more complicated list (that perhaps contains negative numbers or non-integer values), you can use np.histogram in a similar way. Alternatively, if you just want to work in python without using numpy, collections.Counter is a good way of handling this sort of data. 
\n```\nfrom collections import Counter\na = [1,2,3,1,2,1,1,1,3,2,2,1]\nb = Counter(a)\nprint(b.most_common(1))\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6252280\/find-the-most-frequent-number-in-a-numpy-array","best_answers_votes":253,"tags":["python","numpy"],"question_length":192,"response_length":713,"tag_count":2} {"question":"How do I use numpy.where()? What should I pass, and what does the result mean? [closed] Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Guide the asker to update the question so it focuses on a single, specific problem. Narrowing the question will help others answer the question concisely. You may edit the question if you feel you can improve it yourself. If edited, the question will be reviewed and might be reopened. Closed 9 years ago. Improve this question I tried reading the documentation for numpy.where(), but I'm still confused. What should I pass for the condition, x and y values? When I pass only condition, what does the result mean and how can I use it? What about when I pass all three? I found How does python numpy.where() work? but it didn't answer my question because it seems to be about the implementation rather than about how to use it. Numpy where() on a 2D matrix also didn't explain things for me; I'm looking for a step-by-step explanation, rather than a how-to guide for a specific case. Please include examples with both 1D and 2D source data.","response":"After fiddling around for a while, I figured things out, and am posting them here hoping it will help others. Intuitively, np.where is like asking \"tell me where in this array, entries satisfy a given condition\". \n```\n\n```python\na = np.arange(5,10)\n```\n\n```python\nnp.where(a < 8) # tell me where in a, entries are < 8\n#Output\n#(array([0, 1, 2]),) # answer: entries indexed by 0, 1, 2\n#```\n#It can also be used to get entries in array that satisfy the condition: \n#```\n```\n\n```python\na[np.where(a < 8)] \n#Output\n#array([5, 6, 7]) # selects from a entries 0, 1, 2\n#```\n#When a is a 2d array, np.where() returns an array of row idx's, and an array of col idx's: \n#```\n```\n\n```python\na = np.arange(4,10).reshape(2,3)\n#Output\n#array([[4, 5, 6],\n# [7, 8, 9]])\n```\n\n```python\nnp.where(a > 8)\n#Output\n#(array(1), array(2))\n#```\n#As in the 1d case, we can use np.where() to get entries in the 2d array that satisfy the condition: \n#```\n```\n\n```python\na[np.where(a > 8)] # selects the entries of a where a > 8\n#Output\n#```\n#array([9]) Note, when a is 1d, np.where() still returns an array of row idx's and an array of col idx's, but columns are of length 1, so the latter is an empty array.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/34667282\/how-do-i-use-numpy-where-what-should-i-pass-and-what-does-the-result-mean","best_answers_votes":304,"tags":["python","numpy","scipy"],"question_length":1145,"response_length":1076,"tag_count":3} {"question":"How to get element-wise matrix multiplication (Hadamard product) in numpy? I have two matrices \n```py\na = np.matrix([[1,2], [3,4]])\nb = np.matrix([[5,6], [7,8]])\n```\nand I want to get the element-wise product, [[1*5,2*6], [3*7,4*8]], which equals \n```\nmatrix([[5, 12], [21, 32]])\n```\nI have tried np.dot(a,b) and a*b but both give the result matrix([[19, 22], [43, 50]]) which is the matrix product, not the element-wise product. 
How can I get the element-wise product (aka Hadamard product) using built-in functions?","response":"For elementwise multiplication of matrix objects, you can use numpy.multiply: \n```\nimport numpy as np\na = np.array([[1,2],[3,4]])\nb = np.array([[5,6],[7,8]])\nnp.multiply(a,b)\n```\nResult \n```\narray([[ 5, 12],\n [21, 32]])\n```\nHowever, you should really use array instead of matrix. matrix objects have all sorts of horrible incompatibilities with regular ndarrays. With ndarrays, you can just use * for elementwise multiplication: \n```\na * b\n```\nIf you're on Python 3.5+, you don't even lose the ability to perform matrix multiplication with an operator, because @ does matrix multiplication now: \n```\na @ b # matrix multiplication\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40034993\/how-to-get-element-wise-matrix-multiplication-hadamard-product-in-numpy","best_answers_votes":260,"tags":["python","numpy","matrix","matrix-multiplication","elementwise-operations"],"question_length":428,"response_length":640,"tag_count":5} {"question":"best way to preserve numpy arrays on disk I am looking for a fast way to preserve large numpy arrays. I want to save them to the disk in a binary format, then read them back into memory relatively fast. cPickle is not fast enough, unfortunately. I found numpy.savez and numpy.load. But the weird thing is, numpy.load loads an npy file into \"memory-map\". That means regular manipulation of arrays is really slow. For example, something like this would be really slow: \n```\n#!\/usr\/bin\/python\nimport numpy as np;\nimport time; \nfrom tempfile import TemporaryFile\n\nn = 10000000;\n\na = np.arange(n)\nb = np.arange(n) * 10\nc = np.arange(n) * -0.5\n\nfile = TemporaryFile()\nnp.savez(file,a = a, b = b, c = c);\n\nfile.seek(0)\nt = time.time()\nz = np.load(file)\nprint \"loading time = \", time.time() - t\n\nt = time.time()\naa = z['a']\nbb = z['b']\ncc = z['c']\nprint \"assigning time = \", time.time() - t;\n```\nmore precisely, the first line will be really fast, but the remaining lines that assign the arrays to obj are ridiculously slow: \n```\nloading time = 0.000220775604248\nassigning time = 2.72940087318\n```\nIs there any better way of preserving numpy arrays? Ideally, I want to be able to store multiple arrays in one file.","response":"I've compared performance (space and time) for a number of ways to store numpy arrays. Few of them support multiple arrays per file, but perhaps it's useful anyway. Npy and binary files are both really fast and small for dense data. If the data is sparse or very structured, you might want to use npz with compression, which'll save a lot of space but cost some load time. If portability is an issue, binary is better than npy. If human readability is important, then you'll have to sacrifice a lot of performance, but it can be achieved fairly well using csv (which is also very portable of course). More details and the code are available at the github repo.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9619199\/best-way-to-preserve-numpy-arrays-on-disk","best_answers_votes":317,"tags":["python","numpy","pickle","binary-data","preserve"],"question_length":1205,"response_length":660,"tag_count":5} {"question":"How to save and load numpy.array() data properly? I wonder, how to save and load numpy.array data properly. Currently I'm using the numpy.savetxt() method. 
For example, I have an array markers, which I try to save using: \n```\nnumpy.savetxt('markers.txt', markers)\n```\nIn another script I try to open the previously saved file: \n```\nmarkers = np.fromfile(\"markers.txt\")\n```\nAnd that's what I get... Saved data first looks like this: \n```\n0.000000000000000000e+00\n0.000000000000000000e+00\n0.000000000000000000e+00\n0.000000000000000000e+00\n0.000000000000000000e+00\n0.000000000000000000e+00\n0.000000000000000000e+00\n0.000000000000000000e+00\n0.000000000000000000e+00\n0.000000000000000000e+00\n```\nBut when I save the just-loaded data using the same method, i.e. numpy.savetxt(), it looks like this: \n```\n1.398043286095131769e-76\n1.398043286095288860e-76\n1.396426376485745879e-76\n1.398043286055061908e-76\n1.398043286095288860e-76\n1.182950697433698368e-76\n1.398043275797188953e-76\n1.398043286095288860e-76\n1.210894289234927752e-99\n1.398040649781712473e-76\n```\nWhat am I doing wrong? PS: there are no other \"backstage\" operations which I perform. Just saving and loading, and that's what I get. Thank you in advance.","response":"The most reliable way I have found to do this is to use np.savetxt with np.loadtxt and not np.fromfile, which is better suited to binary files written with tofile. The np.fromfile and np.tofile methods write and read binary files whereas np.savetxt writes a text file. So, for example: \n```\na = np.array([1, 2, 3, 4])\nnp.savetxt('test1.txt', a, fmt='%d')\nb = np.loadtxt('test1.txt', dtype=int)\na == b\n# array([ True, True, True, True], dtype=bool)\n```\nOr: \n```\na.tofile('test2.dat')\nc = np.fromfile('test2.dat', dtype=int)\nc == a\n# array([ True, True, True, True], dtype=bool)\n```\nI use the former method even if it is slower and creates bigger files (sometimes): the binary format can be platform dependent (for example, the file format depends on the endianness of your system). There is a platform independent format for NumPy arrays, which can be saved and read with np.save and np.load: \n```\nnp.save('test3.npy', a) # .npy extension is added if not given\nd = np.load('test3.npy')\na == d\n# array([ True, True, True, True], dtype=bool)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28439701\/how-to-save-and-load-numpy-array-data-properly","best_answers_votes":262,"tags":["python","arrays","numpy"],"question_length":1231,"response_length":1053,"tag_count":3} {"question":"Finding local maxima\/minima with Numpy in a 1D numpy array Can you suggest a module function from numpy\/scipy that can find local maxima\/minima in a 1D numpy array? Obviously the simplest approach ever is to have a look at the nearest neighbours, but I would like to have an accepted solution that is part of the numpy distro.","response":"In SciPy >= 0.11 \n```\nimport numpy as np\nfrom scipy.signal import argrelextrema\n\nx = np.random.random(12)\n\n# for local maxima\nargrelextrema(x, np.greater)\n\n# for local minima\nargrelextrema(x, np.less)\n```\nProduces \n```\n\n```python\nx\n#Output\n#array([ 0.56660112, 0.76309473, 0.69597908, 0.38260156, 0.24346445,\n# 0.56021785, 0.24109326, 0.41884061, 0.35461957, 0.54398472,\n# 0.59572658, 0.92377974])\n```\n\n```python\nargrelextrema(x, np.greater)\n#Output\n#(array([1, 5, 7]),)\n```\n\n```python\nargrelextrema(x, np.less)\n#Output\n#(array([4, 6, 8]),)\n#```\n#Note, these are the indices of x that are local max\/min. 
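With a small deterministic array the behaviour is easier to see; for a quick illustrative check (reusing the imports above): \n```\nargrelextrema(np.array([1, 3, 2, 4, 1]), np.greater) # (array([1, 3]),)\n```\n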
To get the values, try: \n```\nx[argrelextrema(x, np.greater)[0]]\n```\nscipy.signal also provides argrelmax and argrelmin for finding maxima and minima respectively.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4624970\/finding-local-maxima-minima-with-numpy-in-a-1d-numpy-array","best_answers_votes":287,"tags":["python","numpy"],"question_length":326,"response_length":725,"tag_count":2} {"question":"Detect if a NumPy array contains at least one non-numeric value? I need to write a function which will detect if the input contains at least one value which is non-numeric. If a non-numeric value is found I will raise an error (because the calculation should only return a numeric value). The number of dimensions of the input array is not known in advance - the function should give the correct value regardless of ndim. As an extra complication the input could be a single float or numpy.float64 or even something oddball like a zero-dimensional array. The obvious way to solve this is to write a recursive function which iterates over every iterable object in the array until it finds a non-iterable. It will apply the numpy.isnan() function over every non-iterable object. If at least one non-numeric value is found then the function will return False immediately. Otherwise if all the values in the iterable are numeric it will eventually return True. That works just fine, but it's pretty slow and I expect that NumPy has a much better way to do it. What is an alternative that is faster and more numpyish? Here's my mockup: \n```\ndef contains_nan( myarray ):\n \"\"\"\n @param myarray : An n-dimensional array or a single float\n @type myarray : numpy.ndarray, numpy.array, float\n @returns: bool\n Returns true if myarray is numeric or only contains numeric values.\n Returns false if at least one non-numeric value exists\n Not-A-Number is given by the numpy.isnan() function.\n \"\"\"\n return True\n```","response":"This should be faster than iterating and will work regardless of shape. \n```\nnumpy.isnan(myarray).any()\n```\nEdit: 30x faster: \n```\nimport timeit\ns = 'import numpy;a = numpy.arange(10000.).reshape((100,100));a[10,10]=numpy.nan'\nms = [\n 'numpy.isnan(a).any()',\n 'any(numpy.isnan(x) for x in a.flatten())']\nfor m in ms:\n print \" %.2f s\" % timeit.Timer(m, s).timeit(1000), m\n```\nResults: \n```\n0.11 s numpy.isnan(a).any()\n 3.75 s any(numpy.isnan(x) for x in a.flatten())\n```\nBonus: it works fine for non-array NumPy types: \n```\n\n```python\na = numpy.float64(42.)\n```\n\n```python\nnumpy.isnan(a).any()\n#Output\n#False\n```\n\n```python\na = numpy.float64(numpy.nan)\n```\n\n```python\nnumpy.isnan(a).any()\n#Output\n#True\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/911871\/detect-if-a-numpy-array-contains-at-least-one-non-numeric-value","best_answers_votes":297,"tags":["python","numpy"],"question_length":1522,"response_length":658,"tag_count":2} {"question":"Generate random array of floats between a range I haven't been able to find a function to generate an array of random floats of a given length within a certain range. I've looked at Random sampling but no function seems to do what I need. random.uniform comes close but it only returns a single element, not a specific number. 
This is what I'm after: \n```\nran_floats = some_function(low=0.5, high=13.3, size=50)\n```\nwhich would return an array of 50 random non-unique floats (i.e. repetitions are allowed) uniformly distributed in the range [0.5, 13.3]. Is there such a function?","response":"np.random.uniform fits your use case: \n```\nsampl = np.random.uniform(low=0.5, high=13.3, size=(50,))\n```\nUpdate Oct 2019: While the syntax is still supported, it looks like the API changed with NumPy 1.17 to support greater control over the random number generator. Going forward you should look at https:\/\/docs.scipy.org\/doc\/numpy\/reference\/random\/generated\/numpy.random.Generator.uniform.html The enhancement proposal is here: https:\/\/numpy.org\/neps\/nep-0019-rng-policy.html","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22071987\/generate-random-array-of-floats-between-a-range","best_answers_votes":262,"tags":["python","arrays","random","numpy"],"question_length":579,"response_length":500,"tag_count":4} {"question":"How do I find the length (or dimensions, size) of a numpy matrix in python? [duplicate] This question already has answers here: Numpy array dimensions (10 answers) Closed 12 years ago. For a numpy matrix in python \n```\nfrom numpy import matrix\nA = matrix([[1,2],[3,4]])\n```\nHow can I find the length of a row (or column) of this matrix? Equivalently, how can I know the number of rows or columns? So far, the only solution I've found is: \n```\nlen(A)\nlen(A[:,1])\nlen(A[1,:])\n```\nWhich returns 2, 2, and 1, respectively. From this I've gathered that len() will return the number of rows, so I can always use the transpose, len(A.T), for the number of columns. However, this feels unsatisfying and arbitrary, as when reading the line len(A), it isn't immediately obvious that this should return the number of rows. It actually works differently than len([1,2]) would for a 2D python array, as this would return 2. So, is there a more intuitive way to find the size of a matrix, or is this the best I have?","response":"shape is a property of both numpy ndarray's and matrices. \n```\nA.shape\n```\nwill return a tuple (m, n), where m is the number of rows, and n is the number of columns. In fact, the numpy matrix object is built on top of the ndarray object, one of numpy's two fundamental objects (along with a universal function object), so it inherits from ndarray.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14847457\/how-do-i-find-the-length-or-dimensions-size-of-a-numpy-matrix-in-python","best_answers_votes":286,"tags":["python","matrix","numpy"],"question_length":1001,"response_length":346,"tag_count":3} {"question":"ValueError when checking if variable is None or numpy.array I'd like to check if a variable is None or a numpy.array. I've implemented a check_a function to do this. \n```\ndef check_a(a):\n if not a:\n print \"please initialize a\"\n\na = None\ncheck_a(a)\na = np.array([1,2])\ncheck_a(a)\n```\nBut, this code raises ValueError. What is the straightforward way? \n```\nValueError Traceback (most recent call last)\n in ()\n 6 check_a(a)\n 7 a = np.array([1,2])\n----> 8 check_a(a)\n\n in check_a(a)\n 1 def check_a(a):\n----> 2 if not a:\n 3 print \"please initialize a\"\n 4 \n 5 a = None\n\nValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all()\n```","response":"Using not a to test whether a is None assumes that the other possible values of a have a truth value of True. However, most NumPy arrays don't have a truth value at all, and not cannot be applied to them. If you want to test whether an object is None, the most general, reliable way is to literally use an is check against None: \n```\nif a is None:\n ...\nelse:\n ...\n```\nThis doesn't depend on objects having a truth value, so it works with NumPy arrays. Note that the test has to be is, not ==. is is an object identity test. == is whatever the arguments say it is, and NumPy arrays say it's a broadcasted elementwise equality comparison, producing a boolean array: \n```\n\n```python\na = numpy.arange(5)\n```\n\n```python\na == None\n#Output\n#array([False, False, False, False, False])\n```\n\n```python\nif a == None:\n pass\n#Output\n#...\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#ValueError: The truth value of an array with more than one element is ambiguous.\n# Use a.any() or a.all()\n#```\n#On the other side of things, if you want to test whether an object is a NumPy array, you can test its type: \n#```\n## Careful - the type is np.ndarray, not np.array. np.array is a factory function.\n#if type(a) is np.ndarray:\n# ...\n#else:\n# ...\n#```\n#You can also use isinstance, which will also return True for subclasses of that type (if that is what you want). Considering how terrible and incompatible np.matrix is, you may not actually want this: \n#```\n## Again, ndarray, not array, because array is a factory function.\n#if isinstance(a, np.ndarray):\n# ...\n#else:\n# ...\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36783921\/valueerror-when-checking-if-variable-is-none-or-numpy-array","best_answers_votes":287,"tags":["python","numpy","is-empty"],"question_length":819,"response_length":1547,"tag_count":3} {"question":"Test if numpy array contains only zeros We initialize a numpy array with zeros as below: \n```\nnp.zeros((N,N+1))\n```\nBut how do we check whether all elements in a given n*n numpy array are zero. The method just needs to return True if all the values are indeed zero.","response":"The other answers posted here will work, but the clearest and most efficient function to use is numpy.any(): \n```\n\n```python\nall_zeros = not np.any(a)\n#Output\n#```\n#or \n#```\n```\n\n```python\nall_zeros = not a.any()\n#Output\n#```\n#This is preferred over numpy.all(a==0) because it uses less RAM. (It does not require the temporary array created by the a==0 term.) Also, it is faster than numpy.count_nonzero(a) because it can return immediately when the first nonzero element has been found. Edit: As @Rachel pointed out in the comments, np.any() no longer uses \"short-circuit\" logic, so you won't see a speed benefit for small arrays.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18395725\/test-if-numpy-array-contains-only-zeros","best_answers_votes":251,"tags":["python","numpy"],"question_length":273,"response_length":592,"tag_count":2} {"question":"NumPy selecting specific column index per row by using a list of indexes I'm struggling to select the specific columns per row of a NumPy matrix. 
Suppose I have the following matrix which I would call X: \n```\n[1, 2, 3]\n[4, 5, 6]\n[7, 8, 9]\n```\nI also have a list of column indexes, one per row, which I would call Y: \n```\n[1, 0, 2]\n```\nI need to get the values: \n```\n[2]\n[4]\n[9]\n```\nInstead of a list with indexes Y, I can also produce a matrix with the same shape as X where every entry is a bool \/ 0-1 int, indicating whether this is the required column. \n```\n[0, 1, 0]\n[1, 0, 0]\n[0, 0, 1]\n```\nI know this can be done by iterating over the array and selecting the column values I need. However, this will be executed frequently on big arrays of data and that's why it has to run as fast as it can. I was thus wondering if there is a better solution?","response":"If you've got a boolean array you can do direct selection based on that like so: \n```\n\n```python\na = np.array([True, True, True, False, False])\n```\n\n```python\nb = np.array([1,2,3,4,5])\n```\n\n```python\nb[a]\n#Output\n#array([1, 2, 3])\n#```\n#To go along with your initial example you could do the following: \n#```\n```\n\n```python\na = np.array([[1,2,3], [4,5,6], [7,8,9]])\n```\n\n```python\nb = np.array([[False,True,False],[True,False,False],[False,False,True]])\n```\n\n```python\na[b]\n#Output\n#array([2, 4, 9])\n#```\n#You can also add in an arange and do direct selection on that, though depending on how you're generating your boolean array and what your code looks like YMMV. \n#```\n```\n\n```python\na = np.array([[1,2,3], [4,5,6], [7,8,9]])\n```\n\n```python\na[np.arange(len(a)), [1,0,2]]\n#Output\n#array([2, 4, 9])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23435782\/numpy-selecting-specific-column-index-per-row-by-using-a-list-of-indexes","best_answers_votes":166,"tags":["python","numpy"],"question_length":872,"response_length":686,"tag_count":2} {"question":"Most efficient way to find mode in numpy array I have a 2D array containing integers (positive or negative). Each row represents the values over time for a particular spatial site, whereas each column represents values for various spatial sites for a given time. So if the array is like: \n```\n1 3 4 2 2 7\n5 2 2 1 4 1\n3 3 2 2 1 1\n```\nThe result should be \n```\n1 3 2 2 2 1\n```\nNote that when there are multiple values for mode, any one (selected randomly) may be set as mode. I can iterate over the columns finding mode one at a time but I was hoping numpy might have some in-built function to do that, or a trick to find it efficiently without looping.","response":"Check scipy.stats.mode() (inspired by @tom10's comment): \n```\nimport numpy as np\nfrom scipy import stats\n\na = np.array([[1, 3, 4, 2, 2, 7],\n [5, 2, 2, 1, 4, 1],\n [3, 3, 2, 2, 1, 1]])\n\nm = stats.mode(a)\nprint(m)\n```\nOutput: \n```\nModeResult(mode=array([[1, 3, 2, 2, 1, 1]]), count=array([[1, 2, 2, 2, 1, 2]]))\n```\nAs you can see, it returns both the mode as well as the counts. 
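If scipy is unavailable, a rough pure-numpy fallback is possible; this is just a sketch that loops column by column with np.unique, not the scipy implementation: \n```\nmodes = [vals[np.argmax(counts)] for vals, counts in\n (np.unique(col, return_counts=True) for col in a.T)]\n# [1, 3, 2, 2, 1, 1]\n```\nBack to the scipy result: 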
You can select the modes directly via m[0]: \n```\nprint(m[0])\n```\nOutput: \n```\n[[1 3 2 2 1 1]]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16330831\/most-efficient-way-to-find-mode-in-numpy-array","best_answers_votes":205,"tags":["python","numpy","2d","mode"],"question_length":670,"response_length":499,"tag_count":4} {"question":"Iterating over a numpy array Is there a less verbose alternative to this: \n```\nfor x in xrange(array.shape[0]):\n for y in xrange(array.shape[1]):\n do_stuff(x, y)\n```\nI came up with this: \n```\nfor x, y in itertools.product(map(xrange, array.shape)):\n do_stuff(x, y)\n```\nWhich saves one level of indentation, but is still pretty ugly. I'm hoping for something that looks like this pseudocode: \n```\nfor x, y in array.indices:\n do_stuff(x, y)\n```\nDoes anything like that exist?","response":"I think you're looking for ndenumerate. \n```\n\n```python\na = numpy.array([[1,2],[3,4],[5,6]])\n```\n\n```python\nfor (x,y), value in numpy.ndenumerate(a):\n print x,y\n\n#Output\n#0 0\n#0 1\n#1 0\n#1 1\n#2 0\n#2 1\n#```\n#Regarding performance: it is a bit slower than a list comprehension. \n#```\n#X = np.zeros((100, 100, 100))\n```\n\n%timeit list([((i,j,k), X[i,j,k]) for i in range(X.shape[0]) for j in range(X.shape[1]) for k in range(X.shape[2])])\n1 loop, best of 3: 376 ms per loop\n\n%timeit list(np.ndenumerate(X))\n1 loop, best of 3: 570 ms per loop\n```\nIf you are worried about the performance you could optimise a bit further by looking at the implementation of ndenumerate, which does two things: converting to an array and looping. If you know you have an array, you can call the .coords attribute of the flat iterator. \n```\na = X.flat\n%timeit list([(a.coords, x) for x in a.flat])\n1 loop, best of 3: 305 ms per loop\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6967463\/iterating-over-a-numpy-array","best_answers_votes":220,"tags":["python","numpy"],"question_length":480,"response_length":884,"tag_count":2} {"question":"python numpy machine epsilon I am trying to understand what machine epsilon is. According to Wikipedia, it can be calculated as follows: \n```\ndef machineEpsilon(func=float):\n machine_epsilon = func(1)\n while func(1)+func(machine_epsilon) != func(1):\n machine_epsilon_last = machine_epsilon\n machine_epsilon = func(machine_epsilon) \/ func(2)\n return machine_epsilon_last\n```\nHowever, it is suitable only for double precision numbers. I am interested in modifying it to also support single-precision numbers. I read that numpy can be used, particularly the numpy.float32 class. Can anybody help with modifying the function?","response":"An easier way to get the machine epsilon for a given float type is to use np.finfo(): \n```\nprint(np.finfo(float).eps)\n# 2.22044604925e-16\n\nprint(np.finfo(np.float32).eps)\n# 1.19209e-07\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19141432\/python-numpy-machine-epsilon","best_answers_votes":290,"tags":["python","numpy","epsilon"],"question_length":644,"response_length":188,"tag_count":3} {"question":"What is the difference between NaN and None? I am reading two columns of a csv file using pandas read_csv() and then assigning the values to a dictionary. The columns contain strings of numbers and letters. Occasionally there are cases where a cell is empty. 
In my opinion, the value read into that dictionary entry should be None but instead nan is assigned. Surely None is more descriptive of an empty cell as it has a null value, whereas nan just says that the value read is not a number. Is my understanding correct? What IS the difference between None and nan? Why is nan assigned instead of None? Also, my dictionary check for any empty cells has been using numpy.isnan(): \n```\nfor k, v in my_dict.iteritems():\n if np.isnan(v):\n```\nBut this gives me an error saying that I cannot use this check for v. I guess it is because an integer or float variable, not a string, is meant to be used. If this is true, how can I check v for an \"empty cell\"\/nan case?","response":"NaN is used as a placeholder for missing data consistently in pandas; consistency is good. I usually read\/translate NaN as \"missing\". Also see the 'working with missing data' section in the docs. Wes writes in the docs 'choice of NA-representation': After years of production use [NaN] has proven, at least in my opinion, to be the best decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used everywhere as the NA value, and there are API functions isna and notna which can be used across the dtypes to detect NA values. ... Thus, I have chosen the Pythonic \u201cpracticality beats purity\u201d approach and traded integer NA capability for a much simpler approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when NAs must be introduced. Note: the \"gotcha\" that integer Series containing missing data are upcast to floats. In my opinion the main reason to use NaN (over None) is that it can be stored with numpy's float64 dtype, rather than the less efficient object dtype, see NA type promotions. \n```\n# without forcing dtype it changes None to NaN!\ns_bad = pd.Series([1, None], dtype=object)\ns_good = pd.Series([1, np.nan])\n\n```python\ns_bad.dtype\n#Output\n#dtype('O')\n```\n\n```python\ns_good.dtype\n#Output\n#dtype('float64')\n#```\n#Jeff comments (below) on this: np.nan allows for vectorized operations; it's a float value, while None, by definition, forces object type, which basically disables all efficiency in numpy. So repeat 3 times fast: object==bad, float==good. Saying that, many operations may still work just as well with None vs NaN (but perhaps are not supported, i.e. they may sometimes give surprising results): \n#```\n```\n\n```python\ns_bad.sum()\n#Output\n#1\n```\n\n```python\ns_good.sum()\n#Output\n#1.0\n#```\n#To answer the second question: You should be using isna and notna to test for missing data (NaN).\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17534106\/what-is-the-difference-between-nan-and-none","best_answers_votes":182,"tags":["python","numpy","pandas","nan"],"question_length":958,"response_length":1899,"tag_count":4} {"question":"How do I compute derivative using Numpy? How do I calculate the derivative of a function, for example y = x**2 + 1, using numpy? Let's say, I want the value of the derivative at x = 5...","response":"You have four options: finite differences, automatic derivatives, symbolic differentiation, or computing derivatives by hand. Finite differences require no external tools but are prone to numerical error and, if you're in a multivariate situation, can take a while. Symbolic differentiation is ideal if your problem is simple enough. 
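To give a feel for the finite-difference route before moving on, here is a minimal central-difference sketch (the step size h is an arbitrary choice, not a tuned value): \n```\ndef derivative(f, x, h=1e-6):\n # central-difference approximation of f'(x)\n return (f(x + h) - f(x - h)) \/ (2 * h)\n\nderivative(lambda x: x**2 + 1, 5.0) # ~10.0\n```\n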
Symbolic methods are getting quite robust these days. SymPy is an excellent project for this that integrates well with NumPy. Look at the autowrap or lambdify functions or check out Jensen's blogpost about a similar question. Automatic derivatives are very cool, aren't prone to numeric errors, but do require some additional libraries (google for this, there are a few good options). This is the most robust but also the most sophisticated\/difficult choice to set up. If you're fine restricting yourself to numpy syntax then Theano might be a good choice. Here is an example using SymPy \n```\n\n```python\nfrom sympy import *\n```\n\n```python\nimport numpy as np\n```\n\n```python\nx = Symbol('x')\n```\n\n```python\ny = x**2 + 1\n```\n\n```python\nyprime = y.diff(x)\n```\n\n```python\nyprime\n#Output\n#2\u22c5x\n```\n\n```python\nf = lambdify(x, yprime, 'numpy')\n```\n\n```python\nf(np.ones(5))\n#Output\n#[ 2. 2. 2. 2. 2.]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9876290\/how-do-i-compute-derivative-using-numpy","best_answers_votes":208,"tags":["python","math","numpy"],"question_length":177,"response_length":1169,"tag_count":3} {"question":"Selecting specific rows and columns from NumPy array I've been going crazy trying to figure out what stupid thing I'm doing wrong here. I'm using NumPy, and I have specific row indices and specific column indices that I want to select from. Here's the gist of my problem: \n```\nimport numpy as np\n\na = np.arange(20).reshape((5,4))\n# array([[ 0, 1, 2, 3],\n# [ 4, 5, 6, 7],\n# [ 8, 9, 10, 11],\n# [12, 13, 14, 15],\n# [16, 17, 18, 19]])\n\n# If I select certain rows, it works\nprint a[[0, 1, 3], :]\n# array([[ 0, 1, 2, 3],\n# [ 4, 5, 6, 7],\n# [12, 13, 14, 15]])\n\n# If I select certain rows and a single column, it works\nprint a[[0, 1, 3], 2]\n# array([ 2, 6, 14])\n\n# But if I select certain rows AND certain columns, it fails\nprint a[[0,1,3], [0,2]]\n# Traceback (most recent call last):\n# File \"\", line 1, in \n# ValueError: shape mismatch: objects cannot be broadcast to a single shape\n```\nWhy is this happening? Surely I should be able to select the 1st, 2nd, and 4th rows, and 1st and 3rd columns? The result I'm expecting is: \n```\na[[0,1,3], [0,2]] => [[0, 2],\n [4, 6],\n [12, 14]]\n```","response":"As Toan suggests, a simple hack would be to just select the rows first, and then select the columns over that. \n```\n\n```python\na[[0,1,3], :] # Returns the rows you want\n#Output\n#array([[ 0, 1, 2, 3],\n# [ 4, 5, 6, 7],\n# [12, 13, 14, 15]])\n```\n\n```python\na[[0,1,3], :][:, [0,2]] # Selects the columns you want as well\n#Output\n#array([[ 0, 2],\n# [ 4, 6],\n# [12, 14]])\n#```\n#[Edit] The built-in method: np.ix_ I recently discovered that numpy gives you an in-built one-liner for doing exactly what @Jaime suggested, but without having to use broadcasting syntax (which suffers from lack of readability). From the docs: Using ix_ one can quickly construct index arrays that will index the cross product. a[np.ix_([1,3],[2,5])] returns the array [[a[1,2] a[1,5]], [a[3,2] a[3,5]]]. 
So you use it like this: \n```\n\n```python\na = np.arange(20).reshape((5,4))\n```\n\n```python\na[np.ix_([0,1,3], [0,2])]\n#Output\n#array([[ 0, 2],\n# [ 4, 6],\n# [12, 14]])\n#```\n#And the way it works is that it takes care of aligning arrays the way Jaime suggested, so that broadcasting happens properly: \n#```\n```\n\n```python\nnp.ix_([0,1,3], [0,2])\n#Output\n#(array([[0],\n# [1],\n# [3]]), array([[0, 2]]))\n#```\n#Also, as MikeC says in a comment, np.ix_ has the advantage of returning a view, which my first (pre-edit) answer did not. This means you can now assign to the indexed array: \n#```\n```\n\n```python\na[np.ix_([0,1,3], [0,2])] = -1\n```\n\n```python\na \n#Output\n#array([[-1, 1, -1, 3],\n# [-1, 5, -1, 7],\n# [ 8, 9, 10, 11],\n# [-1, 13, -1, 15],\n# [16, 17, 18, 19]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22927181\/selecting-specific-rows-and-columns-from-numpy-array","best_answers_votes":148,"tags":["python","arrays","numpy","multidimensional-array","numpy-slicing"],"question_length":1194,"response_length":1503,"tag_count":5} {"question":"Automatically import modules when entering the python or ipython interpreter I find myself typing import numpy as np almost every single time I fire up the python interpreter. How do I set up the python or ipython interpreter so that numpy is automatically imported?","response":"For ipython, there are two ways to achieve this. Both involve ipython's configuration directory which is located in ~\/.ipython. Create a custom ipython profile. Or you can add a startup file to ~\/.ipython\/profile_default\/startup\/. For simplicity, I'd use option 2. All you have to do is place a .py or .ipy file in the ~\/.ipython\/profile_default\/startup directory and it will automatically be executed. So you could simply place import numpy as np in a simple file and you'll have np in the namespace of your ipython prompt. Option 2 will actually work with a custom profile, but using a custom profile will allow you to change the startup requirements and other configuration based on a particular case. However, if you'd always like np to be available to you then by all means put it in the startup directory. For more information on ipython configuration, the docs have a much more complete explanation.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11124578\/automatically-import-modules-when-entering-the-python-or-ipython-interpreter","best_answers_votes":97,"tags":["python","numpy","ipython"],"question_length":266,"response_length":905,"tag_count":3} {"question":"binning data in python with scipy\/numpy Is there a more efficient way to take an average of an array in prespecified bins? For example, I have an array of numbers and an array corresponding to bin start and end positions in that array, and I want to just take the mean in those bins. I have code that does it below but I am wondering how it can be cut down and improved. Thanks. 
\n```\nfrom scipy import *\nfrom numpy import *\n\ndef get_bin_mean(a, b_start, b_end):\n ind_upper = nonzero(a >= b_start)[0]\n a_upper = a[ind_upper]\n a_range = a_upper[nonzero(a_upper < b_end)[0]]\n mean_val = mean(a_range)\n return mean_val\n\n\ndata = rand(100)\nbins = linspace(0, 1, 10)\nbinned_data = []\n\nn = 0\nfor n in range(0, len(bins)-1):\n b_start = bins[n]\n b_end = bins[n+1]\n binned_data.append(get_bin_mean(data, b_start, b_end))\n\nprint binned_data\n```","response":"It's probably faster and easier to use numpy.digitize(): \n```\nimport numpy\ndata = numpy.random.random(100)\nbins = numpy.linspace(0, 1, 10)\ndigitized = numpy.digitize(data, bins)\nbin_means = [data[digitized == i].mean() for i in range(1, len(bins))]\n```\nAn alternative to this is to use numpy.histogram(): \n```\nbin_means = (numpy.histogram(data, bins, weights=data)[0] \/\n numpy.histogram(data, bins)[0])\n```\nTry for yourself which one is faster... :)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6163334\/binning-data-in-python-with-scipy-numpy","best_answers_votes":230,"tags":["python","numpy","scipy","scientific-computing"],"question_length":856,"response_length":461,"tag_count":4} {"question":"Efficiently checking if arbitrary object is NaN in Python \/ numpy \/ pandas? My numpy arrays use np.nan to designate missing values. As I iterate over the data set, I need to detect such missing values and handle them in special ways. Naively I used numpy.isnan(val), which works well as long as val is among the subset of types supported by numpy.isnan(). For example, missing data can occur in string fields, in which case I get: \n```\n\n```python\nnp.isnan('some_string')\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#TypeError: Not implemented for this type\n#```\n#Other than writing an expensive wrapper that catches the exception and returns False, is there a way to handle this elegantly and efficiently?\n```","response":"pandas.isnull() (also pd.isna(), in newer versions) checks for missing values in both numeric and string\/object arrays. From the documentation, it checks for: NaN in numeric arrays, None\/NaN in object arrays Quick example: \n```\nimport pandas as pd\nimport numpy as np\ns = pd.Series(['apple', np.nan, 'banana'])\npd.isnull(s)\nOut[9]: \n0 False\n1 True\n2 False\ndtype: bool\n```\nThe idea of using numpy.nan to represent missing values is something that pandas introduced, which is why pandas has the tools to deal with it. Datetimes work too (if you use pd.NaT you won't need to specify the dtype): \n```\n\n```python\ns = Series([Timestamp('20130101'),np.nan,Timestamp('20130102 9:30')],dtype='M8[ns]')\n```\n\n```python\ns\n#Output\n#0 2013-01-01 00:00:00\n#1 NaT\n#2 2013-01-02 09:30:00\n#dtype: datetime64[ns]\n```\n\n```python\npd.isnull(s)\n#Output\n#0 False\n#1 True\n#2 False\n#dtype: bool\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18689512\/efficiently-checking-if-arbitrary-object-is-nan-in-python-numpy-pandas","best_answers_votes":231,"tags":["python","numpy","pandas"],"question_length":723,"response_length":893,"tag_count":3} {"question":"Working with TIFFs (import, export) in Python using numpy I need a python method to open and import TIFF images into numpy arrays so I can analyze and modify the pixel data and then save them as TIFFs again. 
(They are basically light intensity maps in greyscale, representing the respective values per pixel.) I couldn't find any documentation on PIL methods concerning TIFF. I tried to figure it out, but only got \"bad mode\" or \"file type not supported\" errors. What do I need to use here?","response":"First, I downloaded a test TIFF image from this page called a_image.tif. Then I opened it with PIL like this: \n```\n\n```python\nfrom PIL import Image\n```\n\n```python\nim = Image.open('a_image.tif')\n```\n\n```python\nim.show()\n#Output\n#```\n#This showed the rainbow image. To convert to a numpy array, it's as simple as: \n#```\n```\n\n```python\nimport numpy\n```\n\n```python\nimarray = numpy.array(im)\n#Output\n#```\n#We can see that the size of the image and the shape of the array match up: \n#```\n```\n\n```python\nimarray.shape\n#Output\n#(44, 330)\n```\n\n```python\nim.size\n#Output\n#(330, 44)\n#```\n#And the array contains uint8 values: \n#```\n```\n\n```python\nimarray\n#Output\n#array([[ 0, 1, 2, ..., 244, 245, 246],\n# [ 0, 1, 2, ..., 244, 245, 246],\n# [ 0, 1, 2, ..., 244, 245, 246],\n# ..., \n# [ 0, 1, 2, ..., 244, 245, 246],\n# [ 0, 1, 2, ..., 244, 245, 246],\n# [ 0, 1, 2, ..., 244, 245, 246]], dtype=uint8)\n#```\n#Once you're done modifying the array, you can turn it back into a PIL image like this: \n#```\n```\n\n```python\nImage.fromarray(imarray)\n#Output\n#\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7569553\/working-with-tiffs-import-export-in-python-using-numpy","best_answers_votes":159,"tags":["python","numpy","python-imaging-library","tiff"],"question_length":489,"response_length":985,"tag_count":4} {"question":"Normalize data in pandas Suppose I have a pandas data frame df. I want to calculate the column-wise mean of the data frame. This is easy: \n```\ndf.apply(average)\n```\nthen the column-wise range max(col) - min(col). This is easy again: \n```\ndf.apply(max) - df.apply(min)\n```\nNow for each element I want to subtract its column's mean and divide by its column's range. I am not sure how to do that. Any help\/pointers are much appreciated.","response":"```\n\n```python\ndf\n#Output\n#a b c d\n#A -0.488816 0.863769 4.325608 -4.721202\n#B -11.937097 2.993993 -12.916784 -1.086236\n#C -5.569493 4.672679 -2.168464 -9.315900\n#D 8.892368 0.932785 4.535396 0.598124\n```\n\n```python\ndf_norm = (df - df.mean()) \/ (df.max() - df.min())\n```\n\n```python\ndf_norm\n#Output\n#a b c d\n#A 0.085789 -0.394348 0.337016 -0.109935\n#B -0.463830 0.164926 -0.650963 0.256714\n#C -0.158129 0.605652 -0.035090 -0.573389\n#D 0.536170 -0.376229 0.349037 0.426611\n```\n\n```python\ndf_norm.mean()\n#Output\n#a -2.081668e-17\n#b 4.857226e-17\n#c 1.734723e-17\n#d -1.040834e-17\n```\n\n```python\ndf_norm.max() - df_norm.min()\n#Output\n#a 1\n#b 1\n#c 1\n#d 1\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12525722\/normalize-data-in-pandas","best_answers_votes":234,"tags":["python","pandas","numpy"],"question_length":430,"response_length":729,"tag_count":3} {"question":"How do I calculate r-squared using Python and Numpy? I'm using Python and Numpy to calculate a best fit polynomial of arbitrary degree. I pass a list of x values, y values, and the degree of the polynomial I want to fit (linear, quadratic, etc.). This much works, but I also want to calculate r (coefficient of correlation) and r-squared (coefficient of determination). 
I am comparing my results with Excel's best-fit trendline capability, and the r-squared value it calculates. Using this, I know I am calculating r-squared correctly for linear best-fit (degree equals 1). However, my function does not work for polynomials with degree greater than 1. Excel is able to do this. How do I calculate r-squared for higher-order polynomials using Numpy? Here's my function: \n```\nimport numpy\n\n# Polynomial Regression\ndef polyfit(x, y, degree):\n results = {}\n\n coeffs = numpy.polyfit(x, y, degree)\n # Polynomial Coefficients\n results['polynomial'] = coeffs.tolist()\n\n correlation = numpy.corrcoef(x, y)[0,1]\n\n # r\n results['correlation'] = correlation\n # r-squared\n results['determination'] = correlation**2\n\n return results\n```","response":"A very late reply, but just in case someone needs a ready function for this: scipy.stats.linregress i.e. \n```\nslope, intercept, r_value, p_value, std_err = scipy.stats.linregress(x, y)\n```\nas in @Adam Marples's answer.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/893657\/how-do-i-calculate-r-squared-using-python-and-numpy","best_answers_votes":209,"tags":["python","math","statistics","numpy","curve-fitting"],"question_length":1155,"response_length":218,"tag_count":5} {"question":"What does \"unsqueeze\" do in Pytorch? The PyTorch documentation says: Returns a new tensor with a dimension of size one inserted at the specified position. [...] \n```\n\n```python\nx = torch.tensor([1, 2, 3, 4])\n```\n\n```python\ntorch.unsqueeze(x, 0)\n#Output\n#tensor([[ 1, 2, 3, 4]])\n```\n\n```python\ntorch.unsqueeze(x, 1)\n#Output\n#tensor([[ 1],\n# [ 2],\n# [ 3],\n# [ 4]])\n#```\n```","response":"unsqueeze turns an n.d. tensor into an (n+1).d. one by adding an extra dimension of depth 1. However, since it is ambiguous which axis the new dimension should lie across (i.e. in which direction it should be \"unsqueezed\"), this needs to be specified by the dim argument. e.g. unsqueeze can be applied to a 2d tensor in three different ways (dim=0, 1 or 2). The resulting unsqueezed tensors have the same information, but the indices used to access them are different.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/57237352\/what-does-unsqueeze-do-in-pytorch","best_answers_votes":201,"tags":["python","numpy","machine-learning","deep-learning","pytorch"],"question_length":340,"response_length":449,"tag_count":5} {"question":"How does numpy.histogram() work? While reading up on numpy, I encountered the function numpy.histogram(). What is it for and how does it work? In the docs they mention bins: What are they? Some googling led me to the definition of Histograms in general. I get that. But unfortunately I can't link this knowledge to the examples given in the docs.","response":"A bin is a range that represents the width of a single bar of the histogram along the X-axis. You could also call this the interval. (Wikipedia defines them more formally as \"disjoint categories\".) The Numpy histogram function doesn't draw the histogram, but it computes the occurrences of input data that fall within each bin, which in turn determines the area (not necessarily the height if the bins aren't of equal width) of each bar. In this example: \n```\nnp.histogram([1, 2, 1], bins=[0, 1, 2, 3])\n```\nThere are 3 bins, for values ranging from 0 to 1 (excl. 1), 1 to 2 (excl. 2) and 2 to 3 (incl. 3), respectively. 
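Note the edge convention: every bin is half-open except the last, which also includes its right edge; a quick check of that detail: \n```\nnp.histogram([1, 2, 3], bins=[0, 1, 2, 3])\n# (array([0, 1, 2]), array([0, 1, 2, 3])), i.e. the 3 still lands in the last bin\n```\n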
The way Numpy defines these bins is by giving a list of delimiters ([0, 1, 2, 3] in this example), although it also returns the bins in the results, since it can choose them automatically from the input, if none are specified. If bins=5, for example, it will use 5 bins of equal width spread between the minimum input value and the maximum input value. The input values are 1, 2 and 1. Therefore, bin \"1 to 2\" contains two occurrences (the two 1 values), and bin \"2 to 3\" contains one occurrence (the 2). These results are in the first item in the returned tuple: array([0, 2, 1]). Since the bins here are of equal width, you can use the number of occurrences for the height of each bar. When drawn, you would have: a bar of height 0 for range\/bin [0,1] on the X-axis, a bar of height 2 for range\/bin [1,2], a bar of height 1 for range\/bin [2,3]. You can plot this directly with Matplotlib (its hist function also returns the bins and the values): \n```\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n```python\nplt.hist([1, 2, 1], bins=[0, 1, 2, 3])\n#Output\n#(array([0, 2, 1]), array([0, 1, 2, 3]), )\n```\n\n```python\nplt.show()\n#Output\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9141732\/how-does-numpy-histogram-work","best_answers_votes":201,"tags":["python","numpy","histogram"],"question_length":346,"response_length":1738,"tag_count":3} {"question":"Counting the number of non-NaN elements in a numpy ndarray in Python I need to calculate the number of non-NaN elements in a numpy ndarray matrix. How would one efficiently do this in Python? Here is my simple code for achieving this: \n```\nimport numpy as np\n\ndef numberOfNonNans(data):\n count = 0\n for i in data:\n if not np.isnan(i):\n count += 1\n return count\n```\nIs there a built-in function for this in numpy? Efficiency is important because I'm doing Big Data analysis. Thanks for any help!","response":"```\nnp.count_nonzero(~np.isnan(data))\n```\n~ inverts the boolean matrix returned from np.isnan. np.count_nonzero counts values that are not 0\/false. .sum should give the same result, but count_nonzero may read more clearly. Testing speed: \n```\n\n```python\ndata = np.random.random((10000,10000))\n```\n\n```python\ndata[[np.random.random_integers(0,10000, 100)],:][:, [np.random.random_integers(0,99, 100)]] = np.nan\n```\n\n```python\n%timeit data.size - np.count_nonzero(np.isnan(data))\n1 loops, best of 3: 309 ms per loop\n```\n\n```python\n%timeit np.count_nonzero(~np.isnan(data))\n1 loops, best of 3: 345 ms per loop\n```\n\n```python\n%timeit data.size - np.isnan(data).sum()\n1 loops, best of 3: 339 ms per loop\n```\ndata.size - np.count_nonzero(np.isnan(data)) seems to barely be the fastest here. 
Other data might give different relative speed results.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21778118\/counting-the-number-of-non-nan-elements-in-a-numpy-ndarray-in-python","best_answers_votes":257,"tags":["python","numpy","matrix","nan"],"question_length":519,"response_length":820,"tag_count":4} {"question":"LogisticRegression: Unknown label type: 'continuous' using sklearn in python I have the following code to test some of the most popular ML algorithms of the sklearn python library: \n```\nimport numpy as np\nfrom sklearn import metrics, svm\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import SVC\n\ntrainingData = np.array([ [2.3, 4.3, 2.5], [1.3, 5.2, 5.2], [3.3, 2.9, 0.8], [3.1, 4.3, 4.0] ])\ntrainingScores = np.array( [3.4, 7.5, 4.5, 1.6] )\npredictionData = np.array([ [2.5, 2.4, 2.7], [2.7, 3.2, 1.2] ])\n\nclf = LinearRegression()\nclf.fit(trainingData, trainingScores)\nprint(\"LinearRegression\")\nprint(clf.predict(predictionData))\n\nclf = svm.SVR()\nclf.fit(trainingData, trainingScores)\nprint(\"SVR\")\nprint(clf.predict(predictionData))\n\nclf = LogisticRegression()\nclf.fit(trainingData, trainingScores)\nprint(\"LogisticRegression\")\nprint(clf.predict(predictionData))\n\nclf = DecisionTreeClassifier()\nclf.fit(trainingData, trainingScores)\nprint(\"DecisionTreeClassifier\")\nprint(clf.predict(predictionData))\n\nclf = KNeighborsClassifier()\nclf.fit(trainingData, trainingScores)\nprint(\"KNeighborsClassifier\")\nprint(clf.predict(predictionData))\n\nclf = LinearDiscriminantAnalysis()\nclf.fit(trainingData, trainingScores)\nprint(\"LinearDiscriminantAnalysis\")\nprint(clf.predict(predictionData))\n\nclf = GaussianNB()\nclf.fit(trainingData, trainingScores)\nprint(\"GaussianNB\")\nprint(clf.predict(predictionData))\n\nclf = SVC()\nclf.fit(trainingData, trainingScores)\nprint(\"SVC\")\nprint(clf.predict(predictionData))\n```\nThe first two work OK, but I get the following error in the LogisticRegression call: \n```\nroot@ubupc1:\/home\/ouhma# python stack.py \nLinearRegression\n[ 15.72023529 6.46666667]\nSVR\n[ 3.95570063 4.23426243]\nTraceback (most recent call last):\n File \"stack.py\", line 28, in \n clf.fit(trainingData, trainingScores)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/linear_model\/logistic.py\", line 1174, in fit\n check_classification_targets(y)\n File \"\/usr\/local\/lib\/python2.7\/dist-packages\/sklearn\/utils\/multiclass.py\", line 172, in check_classification_targets\n raise ValueError(\"Unknown label type: %r\" % y_type)\nValueError: Unknown label type: 'continuous'\n```\nThe input data is the same as in the previous calls, so what is going on here? And by the way, why is there a huge difference in the first prediction of the LinearRegression() and SVR() algorithms (15.72 vs 3.95)?","response":"You are passing floats to a classifier which expects categorical values as the target vector. If you convert it to int it will be accepted as input (although it will be questionable if that's the right way to do it). It would be better to convert your training scores by using scikit's LabelEncoder function. The same is true for your DecisionTree and KNeighbors classifiers. 
\n```\nfrom sklearn import preprocessing\nfrom sklearn import utils\n\nlab_enc = preprocessing.LabelEncoder()\nencoded = lab_enc.fit_transform(trainingScores)\n# array([1, 3, 2, 0], dtype=int64)\n\nprint(utils.multiclass.type_of_target(trainingScores))\n# continuous\n\nprint(utils.multiclass.type_of_target(trainingScores.astype('int')))\n# multiclass\n\nprint(utils.multiclass.type_of_target(encoded))\n# multiclass\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41925157\/logisticregression-unknown-label-type-continuous-using-sklearn-in-python","best_answers_votes":134,"tags":["python","numpy","scikit-learn"],"question_length":2774,"response_length":787,"tag_count":3} {"question":"How to save a list as numpy array in python? Is it possible to construct a NumPy array from a python list?","response":"First of all, I'd recommend going through NumPy's Quickstart tutorial, which will probably help with these basic questions. You can directly create an array from a list as: \n```\nimport numpy as np\na = np.array( [2,3,4] )\n```\nOr from a nested list in the same way: \n```\nimport numpy as np\na = np.array( [[2,3,4], [3,4,5]] )\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5951135\/how-to-save-a-list-as-numpy-array-in-python","best_answers_votes":173,"tags":["python","list","numpy"],"question_length":103,"response_length":337,"tag_count":3} {"question":"sort eigenvalues and associated eigenvectors after using numpy.linalg.eig in python I'm using numpy.linalg.eig to obtain a list of eigenvalues and eigenvectors: \n```\nA = someMatrixArray\nfrom numpy.linalg import eig as eigenValuesAndVectors\n\nsolution = eigenValuesAndVectors(A)\n\neigenValues = solution[0]\neigenVectors = solution[1]\n```\nI would like to sort my eigenvalues (e.g. from lowest to highest), in a way where I know the associated eigenvector after the sorting. I'm not finding any way of doing that with python functions. Is there any simple way or do I have to code my own sort version?","response":"Use numpy.argsort. It returns the indices one would use to sort the array. \n```\nimport numpy as np\nimport numpy.linalg as linalg\n\nA = np.random.random((3,3))\neigenValues, eigenVectors = linalg.eig(A)\n\nidx = eigenValues.argsort()[::-1] \neigenValues = eigenValues[idx]\neigenVectors = eigenVectors[:,idx]\n```\nIf the eigenvalues are complex, the sort order is lexicographic (that is, complex numbers are sorted according to their real part first, with ties broken by their imaginary part).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8092920\/sort-eigenvalues-and-associated-eigenvectors-after-using-numpy-linalg-eig-in-pyt","best_answers_votes":216,"tags":["python","sorting","numpy"],"question_length":594,"response_length":487,"tag_count":3} {"question":"Numpy array assignment with copy For example, if we have a numpy array A, and we want a numpy array B with the same elements. What is the difference between the following (see below) methods? When is additional memory allocated, and when is it not? B = A B[:] = A (same as B[:]=A[:]?) numpy.copy(B, A)","response":"All three versions do different things: B = A This binds a new name B to the existing object already named A. 
Afterwards they refer to the same object, so if you modify one in place, you'll see the change through the other one too. B[:] = A (same as B[:]=A[:]?) This copies the values from A into an existing array B. The two arrays must have the same shape for this to work. B[:] = A[:] does the same thing (but B = A[:] would do something more like 1). numpy.copy(B, A) This is not legal syntax. You probably meant B = numpy.copy(A). This is almost the same as 2, but it creates a new array, rather than reusing the B array. If there were no other references to the previous B value, the end result would be the same as 2, but it will use more memory temporarily during the copy. Or maybe you meant numpy.copyto(B, A), which is legal, and is equivalent to 2?","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19676538\/numpy-array-assignment-with-copy","best_answers_votes":162,"tags":["python","arrays","numpy"],"question_length":301,"response_length":860,"tag_count":3} {"question":"Numpy isnan() fails on an array of floats (from pandas dataframe apply) I have an array of floats (some normal numbers, some nans) that is coming out of an apply on a pandas dataframe. For some reason, numpy.isnan is failing on this array; however, as shown below, each element is a float, numpy.isnan runs correctly on each element, and the type of the variable is definitely a numpy array. What's going on?! \n```\nset([type(x) for x in tester])\nOut[59]: {float}\n\ntester\nOut[60]: \narray([-0.7000000000000001, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n nan, nan], dtype=object)\n\nset([type(x) for x in tester])\nOut[61]: {float}\n\nnp.isnan(tester)\nTraceback (most recent call last):\n\nFile \"\", line 1, in \nnp.isnan(tester)\n\nTypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''\n\nset([np.isnan(x) for x in tester])\nOut[65]: {False, True}\n\ntype(tester)\nOut[66]: numpy.ndarray\n```","response":"np.isnan can be applied to NumPy arrays of native dtype (such as np.float64): \n```\n\n```python\nnp.isnan(np.array([np.nan, 0], dtype=np.float64))\n#Output\n#array([ True, False], dtype=bool)\n#```\n#but raises TypeError when applied to object arrays: \n#```\n```\n\n```python\nnp.isnan(np.array([np.nan, 0], dtype=object))\nTypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''\n```\nSince you have Pandas, you could use pd.isnull instead -- it can accept NumPy arrays of object or native dtypes: \n```\n```\n\n```python\npd.isnull(np.array([np.nan, 0], dtype=float))\n#Output\n#array([ True, False], dtype=bool)\n```\n\n```python\npd.isnull(np.array([np.nan, 0], dtype=object))\n#Output\n#array([ True, False], dtype=bool)\n#```\n#Note that None is also considered a null value in object arrays.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36000993\/numpy-isnan-fails-on-an-array-of-floats-from-pandas-dataframe-apply","best_answers_votes":206,"tags":["python","arrays","numpy","pandas"],"question_length":1226,"response_length":854,"tag_count":4} {"question":"What is the difference between i = i + 1 and i += 1 in a 
'for' loop? [duplicate] This question already has answers here: When is \"i += x\" different from \"i = i + x\" in Python? (3 answers) Closed 8 years ago. I found out a curious thing today and was wondering if somebody could shed some light on what the difference is here? \n```\nimport numpy as np\n\nA = np.arange(12).reshape(4,3)\nfor a in A:\n a = a + 1\n\nB = np.arange(12).reshape(4,3)\nfor b in B:\n b += 1\n```\nAfter running each for loop, A has not changed, but B has had one added to each element. I actually use the B version to write to an initialized NumPy array within a for loop.","response":"The difference is that one modifies the data structure itself (the in-place operation b += 1), while the other just reassigns the variable (a = a + 1). Just for completeness: x += y is not always doing an in-place operation; there are (at least) three exceptions: If x doesn't implement an __iadd__ method then the x += y statement is just a shorthand for x = x + y. This would be the case if x was something like an int. If __iadd__ returns NotImplemented, Python falls back to x = x + y. The __iadd__ method could theoretically be implemented to not work in place. It'd be really weird to do that, though. As it happens your bs are numpy.ndarrays which implement __iadd__ and return themselves, so your second loop modifies the original array in-place. You can read more on this in the Python documentation of \"Emulating Numeric Types\". These [__i*__] methods are called to implement the augmented arithmetic assignments (+=, -=, *=, @=, \/=, \/\/=, %=, **=, <<=, >>=, &=, ^=, |=). These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, the augmented assignment falls back to the normal methods. For instance, if x is an instance of a class with an __iadd__() method, x += y is equivalent to x = x.__iadd__(y). Otherwise, x.__add__(y) and y.__radd__(x) are considered, as with the evaluation of x + y. In certain situations, augmented assignment can result in unexpected errors (see Why does a_tuple[i] += [\"item\"] raise an exception when the addition works?), but this behavior is in fact part of the data model.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41446833\/what-is-the-difference-between-i-i-1-and-i-1-in-a-for-loop","best_answers_votes":134,"tags":["python","loops","numpy","operators"],"question_length":643,"response_length":1624,"tag_count":4} {"question":"Numpy: find index of the elements within range I have a numpy array of numbers, for example, \n```\na = np.array([1, 3, 5, 6, 9, 10, 14, 15, 56])\n```\nI would like to find all the indexes of the elements within a specific range. For instance, if the range is (6, 10), the answer should be (3, 4, 5). 
Is there a built-in function to do this?","response":"You can use np.where to get indices and np.logical_and to set two conditions: \n```\nimport numpy as np\na = np.array([1, 3, 5, 6, 9, 10, 14, 15, 56])\n\nnp.where(np.logical_and(a>=6, a<=10))\n# returns (array([3, 4, 5]),)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13869173\/numpy-find-index-of-the-elements-within-range","best_answers_votes":209,"tags":["python","numpy"],"question_length":337,"response_length":220,"tag_count":2} {"question":"NumPy: Logarithm with base n From the numpy documentation on logarithms, I have found functions to take the logarithm with base e, 2, and 10: \n```py\nimport numpy as np\nnp.log(np.e**3) #3.0\nnp.log2(2**3) #3.0\nnp.log10(10**3) #3.0\n```\nHowever, how do I take the logarithm with base n (e.g. 42) in numpy?","response":"If you have numpy 1.23 or later, you can use np.emath.logn: \n```\nimport numpy as np\narray = np.array([74088, 3111696]) # = [42^3, 42^4]\nbase = 42\nexponent = np.emath.logn(base, array) # = [3, 4]\n```\nIf your version of numpy is older: To get the logarithm with a custom base using math.log: \n```\nimport math\nnumber = 74088 # = 42^3\nbase = 42\nexponent = math.log(number, base) # = 3\n```\nTo get the logarithm with a custom base using numpy.log: \n```\nimport numpy as np\narray = np.array([74088, 3111696]) # = [42^3, 42^4]\nbase = 42\nexponent = np.log(array) \/ np.log(base) # = [3, 4]\n```\nWhich uses the logarithm change-of-base rule: log_b(x) = log(x) \/ log(b).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25169297\/numpy-logarithm-with-base-n","best_answers_votes":241,"tags":["python","math","numpy","logarithm"],"question_length":303,"response_length":631,"tag_count":4} {"question":"Index of element in NumPy array [duplicate] This question already has answers here: Is there a NumPy function to return the first index of something in an array? (24 answers) Closed 4 years ago. In Python we can get the index of a value in a list by using .index(). But with a NumPy array, when I try to do: \n```\ndecoding.index(i)\n```\nI get: AttributeError: 'numpy.ndarray' object has no attribute 'index' How could I do this on a NumPy array?","response":"Use np.where to get the indices where a given condition is True. Examples: For a 2D np.ndarray called a: \n```\ni, j = np.where(a == value) # when comparing arrays of integers\n\ni, j = np.where(np.isclose(a, value)) # when comparing floating-point arrays\n```\nFor a 1D array: \n```\ni, = np.where(a == value) # integers\n\ni, = np.where(np.isclose(a, value)) # floating-point\n```\nNote that this also works for conditions like >=, <=, != and so forth... 
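A quick self-contained check: \n```\nimport numpy as np\na = np.array([1, 2, 3, 4, 4, 4])\nnp.where(a == 4) # (array([3, 4, 5]),)\ni, = np.where(a == 4) # unpack the 1-tuple to get the plain index array\n```\n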
You can also create a subclass of np.ndarray with an index() method: \n```\nclass myarray(np.ndarray):\n def __new__(cls, *args, **kwargs):\n return np.array(*args, **kwargs).view(myarray)\n def index(self, value):\n return np.where(self == value)\n```\nTesting: \n```\na = myarray([1,2,3,4,4,4,5,6,4,4,4])\na.index(4)\n#(array([ 3, 4, 5, 8, 9, 10]),)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18079029\/index-of-element-in-numpy-array","best_answers_votes":197,"tags":["python","arrays","numpy","indexing","indexof"],"question_length":445,"response_length":812,"tag_count":5} {"question":"Find the index of the k smallest values of a numpy array In order to find the index of the smallest value, we can use argmin: \n```\nimport numpy as np\nA = np.array([1, 7, 9, 2, 0.1, 17, 17, 1.5])\nprint(A.argmin()) # 4 because A[4] = 0.1\n```\nBut how can I find the indices of the k-smallest values? I'm looking for something like: \n```\nA.argmin(numberofvalues=3) \n# [4, 0, 7] because A[4] <= A[0] <= A[7] <= all other A[i]\n```\nNote: in my use case A has between ~ 10 000 and 100 000 values, and I'm interested for only the indices of the k=10 smallest values. k will never be > 10.","response":"Use np.argpartition. It does not sort the entire array. It only guarantees that the kth element is in sorted position and all smaller elements will be moved before it. Thus the first k elements will be the k-smallest elements. \n```\nimport numpy as np\n\nA = np.array([1, 7, 9, 2, 0.1, 17, 17, 1.5])\nk = 3\n\nidx = np.argpartition(A, k)\nprint(idx)\n# [4 0 7 3 1 2 6 5]\n```\nThis returns the k-smallest values. Note that these may not be in sorted order. \n```\nprint(A[idx[:k]])\n# [ 0.1 1. 1.5]\n```\nTo obtain the k-largest values use \n```\nidx = np.argpartition(A, -k)\n# [4 0 7 3 1 2 6 5]\n\nA[idx[-k:]]\n# [ 9. 17. 17.]\n```\nWARNING: Do not (re)use idx = np.argpartition(A, k); A[idx[-k:]] to obtain the k-largest. That won't always work. For example, these are NOT the 3 largest values in x: \n```\nx = np.array([100, 90, 80, 70, 60, 50, 40, 30, 20, 10, 0])\nidx = np.argpartition(x, 3)\nx[idx[-3:]]\narray([ 70, 80, 100])\n```\nHere is a comparison against np.argsort, which also works but just sorts the entire array to get the result. \n```\n\n```python\nx = np.random.randn(100000)\n```\n\n```python\n%timeit idx0 = np.argsort(x)[:100]\n100 loops, best of 3: 8.26 ms per loop\n```\n\n```python\n%timeit idx1 = np.argpartition(x, 100)[:100]\n1000 loops, best of 3: 721 \u00b5s per loop\n```\n\n```python\nnp.alltrue(np.sort(np.argsort(x)[:100]) == np.sort(np.argpartition(x, 100)[:100]))\n#Output\n#True\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/34226400\/find-the-index-of-the-k-smallest-values-of-a-numpy-array","best_answers_votes":204,"tags":["python","numpy"],"question_length":586,"response_length":1351,"tag_count":2} {"question":"how is axis indexed in numpy's array? [duplicate] This question already has answers here: In Python NumPy what is a dimension and axis? (7 answers) Closed 1 year ago. From Numpy's tutorial, axis can be indexed with integers, like 0 is for column, 1 is for row, but I don't grasp why they are indexed this way? And How do I figure out each axis' index when coping with multidimensional array?","response":"By definition, the axis number of the dimension is the index of that dimension within the array's shape. 
It is also the position used to access that dimension during indexing. For example, if a 2D array a has shape (5,6), then you can access a[0,0] up to a[4,5]. Axis 0 is thus the first dimension (the \"rows\"), and axis 1 is the second dimension (the \"columns\"). In higher dimensions, where \"row\" and \"column\" stop really making sense, try to think of the axes in terms of the shapes and indices involved. If you do .sum(axis=n), for example, then dimension n is collapsed and deleted, with each value in the new matrix equal to the sum of the corresponding collapsed values. For example, if b has shape (5,6,7,8), and you do c = b.sum(axis=2), then axis 2 (dimension with size 7) is collapsed, and the result has shape (5,6,8). Furthermore, c[x,y,z] is equal to the sum of all elements b[x,y,:,z].","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17079279\/how-is-axis-indexed-in-numpys-array","best_answers_votes":198,"tags":["python","numpy"],"question_length":391,"response_length":899,"tag_count":2} {"question":"Histogram Matplotlib So I have a little problem. I have a data set in scipy that is already in the histogram format, so I have the center of the bins and the number of events per bin. How can I now plot it as a histogram? I tried just doing \n```\nbins, n=hist()\n```\nbut it didn't like that. Any recommendations?","response":"```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nmu, sigma = 100, 15\nx = mu + sigma * np.random.randn(10000)\nhist, bins = np.histogram(x, bins=50)\nwidth = 0.7 * (bins[1] - bins[0])\ncenter = (bins[:-1] + bins[1:]) \/ 2\nplt.bar(center, hist, align='center', width=width)\nplt.show()\n```\nThe object-oriented interface is also straightforward: \n```\nfig, ax = plt.subplots()\nax.bar(center, hist, align='center', width=width)\nfig.savefig(\"1.png\")\n```\nIf you are using custom (non-constant) bins, you can compute the widths using np.diff, pass the widths to ax.bar and use ax.set_xticks to label the bin edges: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nmu, sigma = 100, 15\nx = mu + sigma * np.random.randn(10000)\nbins = [0, 40, 60, 75, 90, 110, 125, 140, 160, 200]\nhist, bins = np.histogram(x, bins=bins)\nwidth = np.diff(bins)\ncenter = (bins[:-1] + bins[1:]) \/ 2\n\nfig, ax = plt.subplots(figsize=(8,3))\nax.bar(center, hist, align='center', width=width)\nax.set_xticks(bins)\nfig.savefig(\"\/tmp\/out.png\")\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5328556\/histogram-matplotlib","best_answers_votes":268,"tags":["python","numpy","matplotlib","scipy","histogram"],"question_length":310,"response_length":1033,"tag_count":5} {"question":"Linear regression with matplotlib \/ numpy I'm trying to generate a linear regression on a scatter plot I have generated, however my data is in list format, and all of the examples I can find of using polyfit require using arange. arange doesn't accept lists though. I have searched high and low about how to convert a list to an array and nothing seems clear. Am I missing something? Following on, how best can I use my list of integers as inputs to the polyfit?
Here is the polyfit example I am following: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(data)\ny = np.arange(data)\n\nm, b = np.polyfit(x, y, 1)\n\nplt.plot(x, y, 'yo', x, m*x+b, '--k')\nplt.show()\n```","response":"arange generates lists (well, numpy arrays); type help(np.arange) for the details. You don't need to call it on existing lists. \n```py\n\n```python\nx = [1,2,3,4]\n```\n\n```python\ny = [3,5,7,9] \n```\n\n```python\n\n```\n\n```python\nm,b = np.polyfit(x, y, 1)\n```\n\n```python\nm\n#Output\n#2.0000000000000009\n```\n\n```python\nb\n#Output\n#0.99999999999999833\n#```\n#I should add that I tend to use poly1d here rather than write out \"m*x+b\" and the higher-order equivalents, so my version of your code would look something like this: \n#```py\n#import numpy as np\n#import matplotlib.pyplot as plt\n```\n\nx = [1,2,3,4]\ny = [3,5,7,10] # 10, not 9, so the fit isn't perfect\n\ncoef = np.polyfit(x,y,1)\npoly1d_fn = np.poly1d(coef) \n# poly1d_fn is now a function which takes in x and returns an estimate for y\n\nplt.plot(x,y, 'yo', x, poly1d_fn(x), '--k') #'--k'=black dashed line, 'yo' = yellow circle marker\n\nplt.xlim(0, 5)\nplt.ylim(0, 12)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6148207\/linear-regression-with-matplotlib-numpy","best_answers_votes":246,"tags":["python","numpy","matplotlib","linear-regression","curve-fitting"],"question_length":685,"response_length":821,"tag_count":5} {"question":"How does NumPy's transpose() method permute the axes of an array? \n```\n\n```python\narr = np.arange(16).reshape((2, 2, 4))\n```\n\n```python\narr\n#Output\n#array([[[ 0, 1, 2, 3],\n# [ 4, 5, 6, 7]],\n#\n# [[ 8, 9, 10, 11],\n# [12, 13, 14, 15]]])\n```\n\n```python\narr.transpose((1, 0, 2))\n#Output\n#array([[[ 0, 1, 2, 3],\n# [ 8, 9, 10, 11]],\n#\n# [[ 4, 5, 6, 7],\n# [12, 13, 14, 15]]])\n#```\n#When we pass a tuple of integers to the transpose() function, what happens? To be specific, this is a 3D array: how does NumPy transform the array when I pass the tuple of axes (1, 0 ,2)? Can you explain which row or column these integers refer to? And what are axis numbers in the context of NumPy?\n```","response":"To transpose an array, NumPy just swaps the shape and stride information for each axis. Here are the strides: \n```\n\n```python\narr.strides\n#Output\n#(64, 32, 8)\n```\n\n```python\narr.transpose(1, 0, 2).strides\n#Output\n#(32, 64, 8)\n#```\n#Notice that the transpose operation swapped the strides for axis 0 and axis 1. The lengths of these axes were also swapped (both lengths are 2 in this example). No data needs to be copied for this to happen; NumPy can simply change how it looks at the underlying memory to construct the new array. Visualising strides The stride value represents the number of bytes that must be travelled in memory in order to reach the next value of an axis of an array. Now, our 3D array arr looks this (with labelled axes): This array is stored in a contiguous block of memory; essentially it is one-dimensional. To interpret it as a 3D object, NumPy must jump over a certain constant number of bytes in order to move along one of the three axes: Since each integer takes up 8 bytes of memory (we're using the int64 dtype), the stride value for each dimension is 8 times the number of values that we need to jump. For instance, to move along axis 1, four values (32 bytes) are jumped, and to move along axis 0, eight values (64 bytes) need to be jumped. 
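As a quick sanity check (a sketch, not part of the original answer; the dtype is pinned to int64 so the 8-byte itemsize holds on any platform), the stride values can be reproduced directly from the shape and the itemsize: \n```\nimport numpy as np\n\narr = np.arange(16, dtype=np.int64).reshape(2, 2, 4)\n\n# the last axis moves one item (8 bytes); each earlier axis\n# multiplies by the length of the axis after it: (2*4*8, 4*8, 8)\nprint(arr.itemsize) # 8\nprint(arr.strides) # (64, 32, 8)\nprint(arr.transpose(1, 0, 2).strides) # (32, 64, 8)\n```\n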
When we write arr.transpose(1, 0, 2) we are swapping axes 0 and 1. The transposed array looks like this: All that NumPy needs to do is to swap the stride information for axis 0 and axis 1 (axis 2 is unchanged). Now we must jump further to move along axis 1 than axis 0: This basic concept works for any permutation of an array's axes. The actual code that handles the transpose is written in C and can be found here.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32034237\/how-does-numpys-transpose-method-permute-the-axes-of-an-array","best_answers_votes":290,"tags":["python","arrays","numpy","transpose"],"question_length":708,"response_length":1652,"tag_count":4} {"question":"Maximum allowed value for a numpy data type I am working with numpy arrays of a range of data types (uint8, uint16, int16, etc.). I would like to be able to check whether a number can be represented within the limits of an array for a given datatype. I am imagining something that looks like: \n```\n\n```python\nim.dtype\n#Output\n#dtype('uint16')\n```\n\n```python\ndtype_max(im.dtype)\n#Output\n#65535\n```\n\n```python\ndtype_min(im.dtype)\n#Output\n#0\n#```\n#Does something like this exist? By the way, I feel like this has to have been asked before, but my search came up empty, and all of the \"similar questions\" appear to be unrelated. Edit: Of course, now that I've asked, one of the \"related\" questions does have the answer. Oops.\n```","response":"```py\nmin_value = np.iinfo(im.dtype).min\nmax_value = np.iinfo(im.dtype).max\n```\ndocs: np.iinfo (machine limits for integer types) np.finfo (machine limits for floating point types)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23189506\/maximum-allowed-value-for-a-numpy-data-type","best_answers_votes":181,"tags":["python","numpy"],"question_length":663,"response_length":180,"tag_count":2} {"question":"Convert 2d numpy array into list of lists [duplicate] This question already has answers here: Convert NumPy array to Python list (7 answers) Closed 10 years ago. I use an external module (libsvm), which does not support numpy arrays, only tuples, lists and dicts. But my data is in a 2d numpy array. How can I convert it the pythonic way, aka without loops. 
\n```\n\n```python\nimport numpy\n```\n\n```python\narray = numpy.ones((2,4))\n```\n\n```python\ndata_list = list(array)\n```\n\n```python\ndata_list\n#Output\n#[array([ 1., 1., 1., 1.]), array([ 1., 1., 1., 1.])]\n```\n\n```python\ntype(data_list[0])\n#Output\n# # <= what I don't want\n```\n\n# non pythonic way using for loop\n\n```python\nnewdata=list()\n```\n\n```python\nfor line in data_list:\n line = list(line)\n newdata.append(line)\n```\n\n```python\ntype(newdata[0])\n#Output\n# # <= what I want\n#```\n```","response":"You can simply cast the matrix to list with matrix.tolist(), proof: \n```\n\n```python\nimport numpy\n```\n\n```python\na = numpy.ones((2,4))\n```\n\n```python\na\n#Output\n#array([[ 1., 1., 1., 1.],\n# [ 1., 1., 1., 1.]])\n```\n\n```python\na.tolist()\n#Output\n#[[1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]\n```\n\n```python\ntype(a.tolist())\n#Output\n#\n```\n\n```python\ntype(a.tolist()[0])\n#Output\n#\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9721884\/convert-2d-numpy-array-into-list-of-lists","best_answers_votes":182,"tags":["python","arrays","list","numpy","multidimensional-array"],"question_length":774,"response_length":317,"tag_count":5} {"question":"Pass percentiles to pandas agg function I want to pass the numpy percentile() function through pandas' agg() function as I do below with various other numpy statistics functions. Right now I have a dataframe that looks like this: \n```\nAGGREGATE MY_COLUMN\nA 10\nA 12\nB 5\nB 9\nA 84\nB 22\n```\nAnd my code looks like this: \n```\ngrouped = dataframe.groupby('AGGREGATE')\ncolumn = grouped['MY_COLUMN']\ncolumn.agg([np.sum, np.mean, np.std, np.median, np.var, np.min, np.max])\n```\nThe above code works, but I want to do something like \n```\ncolumn.agg([np.sum, np.mean, np.percentile(50), np.percentile(95)])\n```\nI.e., specify various percentiles to return from agg(). 
How should this be done?","response":"Perhaps not super efficient, but one way would be to create a function yourself: \n```\ndef percentile(n):\n def percentile_(x):\n return x.quantile(n)\n percentile_.__name__ = 'percentile_{:02.0f}'.format(n*100)\n return percentile_\n```\nThen include this in your agg: \n```\n\n```python\ncolumn.agg([np.sum, np.mean, np.std, np.median,\n np.var, np.min, np.max, percentile(50), percentile(95)])\n#Output\n#sum mean std median var amin amax percentile_50 percentile_95\n#AGGREGATE\n#A 106 35.333333 42.158431 12 1777.333333 10 84 12 76.8\n#B 36 12.000000 8.888194 9 79.000000 5 22 12 76.8\n#```\n#Not sure this is how it should be done though...\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17578115\/pass-percentiles-to-pandas-agg-function","best_answers_votes":165,"tags":["python","pandas","numpy","aggregate"],"question_length":742,"response_length":791,"tag_count":4} {"question":"Add numpy array as column to Pandas data frame I have a Pandas data frame object of shape (X,Y) that looks like this: \n```\n[[1, 2, 3],\n[4, 5, 6],\n[7, 8, 9]]\n```\nand a numpy sparse matrix (CSC) of shape (X,Z) that looks something like this \n```\n[[0, 1, 0],\n[0, 0, 1],\n[1, 0, 0]]\n```\nHow can I add the content from the matrix to the data frame in a new named column such that the data frame will end up like this: \n```\n[[1, 2, 3, [0, 1, 0]],\n[4, 5, 6, [0, 0, 1]],\n[7, 8, 9, [1, 0, 0]]]\n```\nNotice the data frame now has shape (X, Y+1) and rows from the matrix are elements in the data frame.","response":"```\nimport numpy as np\nimport pandas as pd\nimport scipy.sparse as sparse\n\ndf = pd.DataFrame(np.arange(1,10).reshape(3,3))\narr = sparse.coo_matrix(([1,1,1], ([0,1,2], [1,2,0])), shape=(3,3))\ndf['newcol'] = arr.toarray().tolist()\nprint(df)\n```\nyields \n```\n0 1 2 newcol\n0 1 2 3 [0, 1, 0]\n1 4 5 6 [0, 0, 1]\n2 7 8 9 [1, 0, 0]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18646076\/add-numpy-array-as-column-to-pandas-data-frame","best_answers_votes":107,"tags":["python","numpy","pandas"],"question_length":589,"response_length":342,"tag_count":3} {"question":"ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224) I have a list, say temp_list, with the following properties: \n```\nlen(temp_list) = 9260 \ntemp_list[0].shape = (224,224,3)\n```\nNow, when I am converting it into a numpy array, \n```\nx = np.array(temp_list)\n```\nI am getting the error: \n```\nValueError: could not broadcast input array from shape (224,224,3) into shape (224,224)\n```\nCan someone help me here?","response":"At least one item in your list is either not three dimensional, or its second or third dimension does not match the other elements. If only the first dimension does not match, the arrays are still matched, but as individual objects, no attempt is made to reconcile them into a new (four dimensional) array. Some examples are below: That is, the offending element's shape != (?, 224, 3), or ndim != 3 (with ? being a non-negative integer). That is what is giving you the error. You'll need to fix that, to be able to turn your list into a four (or three) dimensional array. Without context, it is impossible to say if you want to lose a dimension from the 3D items or add one to the 2D items (in the first case), or change the second or third dimension (in the second case).
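To locate the offending item in a long list before deciding on a fix, a small diagnostic sketch like the following can help (find_mismatched is a hypothetical helper, not part of the question's code): \n```\nimport numpy as np\n\n# report the indices of items whose shape differs from the expected one\ndef find_mismatched(items, expected=(224, 224, 3)):\n    return [i for i, a in enumerate(items) if np.shape(a) != expected]\n\nitems = [np.zeros((224, 224, 3)), np.zeros((224, 224, 3)), np.zeros((224, 224))]\nprint(find_mismatched(items)) # [2]\n```\n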
Here's an example of the error: \n```\n\n```python\na = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,224))]\n```\n\n```python\nnp.array(a)\n#Output\n#ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224)\n#```\n#or, different type of input, but the same error: \n#```\n```\n\n```python\na = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,224,13))]\n```\n\n```python\nnp.array(a)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224)\n#```\n#Alternatively, similar but with a different error message: \n#```\n```\n\n```python\na = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((224,100,3))]\n```\n\n```python\nnp.array(a)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#ValueError: could not broadcast input array from shape (224,224,3) into shape (224)\n#```\n#But the following will work, albeit with different results than (presumably) intended: \n#```\n```\n\n```python\na = [np.zeros((224,224,3)), np.zeros((224,224,3)), np.zeros((10,224,3))]\n```\n\n```python\nnp.array(a)\n#Output\n## long output omitted\n```\n\n```python\nnewa = np.array(a)\n```\n\n```python\nnewa.shape\n#Output\n#3 # oops\n```\n\n```python\nnewa.dtype\n#Output\n#dtype('O')\n```\n\n```python\nnewa[0].shape\n#Output\n#(224, 224, 3)\n```\n\n```python\nnewa[1].shape\n#Output\n#(224, 224, 3)\n```\n\n```python\nnewa[2].shape\n#Output\n#(10, 224, 3)\n#>>>\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43977463\/valueerror-could-not-broadcast-input-array-from-shape-224-224-3-into-shape-2","best_answers_votes":119,"tags":["python","numpy"],"question_length":435,"response_length":2013,"tag_count":2} {"question":"Difference between np.random.seed() and np.random.RandomState() I know that to seed the randomness of numpy.random, and be able to reproduce it, I should us: \n```\nimport numpy as np\nnp.random.seed(1234)\n```\nbut what does np.random.RandomState() do?","response":"If you want to set the seed that calls to np.random... 
will use, use np.random.seed: \n```\nnp.random.seed(1234)\nnp.random.uniform(0, 10, 5)\n#array([ 1.9151945 , 6.22108771, 4.37727739, 7.85358584, 7.79975808])\nnp.random.rand(2,3)\n#array([[ 0.27259261, 0.27646426, 0.80187218],\n# [ 0.95813935, 0.87593263, 0.35781727]])\n```\nUse the class to avoid impacting the global numpy state: \n```\nr = np.random.RandomState(1234)\nr.uniform(0, 10, 5)\n#array([ 1.9151945 , 6.22108771, 4.37727739, 7.85358584, 7.79975808])\n```\nAnd it maintains the state just as before: \n```\nr.rand(2,3)\n#array([[ 0.27259261, 0.27646426, 0.80187218],\n# [ 0.95813935, 0.87593263, 0.35781727]])\n```\nYou can see the state of the sort of 'global' class with: \n```\nnp.random.get_state()\n```\nand of your own class instance with: \n```\nr.get_state()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22994423\/difference-between-np-random-seed-and-np-random-randomstate","best_answers_votes":117,"tags":["python","numpy","random"],"question_length":248,"response_length":839,"tag_count":3} {"question":"Extract upper or lower triangular part of a numpy matrix I have a matrix A and I want 2 matrices U and L such that U contains the upper triangular elements of A (all elements above and not including diagonal) and similarly for L(all elements below and not including diagonal). Is there a numpy method to do this? e.g \n```\nA = array([[ 4., 9., -3.],\n [ 2., 4., -2.],\n [-2., -3., 7.]])\n\nU = array([[ 0., 9., -3.],\n [ 0., 0., -2.],\n [ 0., 0., 0.]])\n\nL = array([[ 0., 0., 0.],\n [ 2., 0., 0.],\n [-2., -3., 0.]])\n```","response":"To extract the upper triangle values to a flat vector, you can do something like the following: \n```\nimport numpy as np\n\na = np.array([[1,2,3],[4,5,6],[7,8,9]])\nprint(a)\n\n#array([[1, 2, 3],\n# [4, 5, 6],\n# [7, 8, 9]])\n\na[np.triu_indices(3)]\n#or\nlist(a[np.triu_indices(3)])\n\n#array([1, 2, 3, 5, 6, 9])\n```\nSimilarly, for the lower triangle, use np.tril. IMPORTANT If you want to extract the values that are above the diagonal (or below) then use the k argument. This is usually used when the matrix is symmetric. \n```\nimport numpy as np\n\na = np.array([[1,2,3],[4,5,6],[7,8,9]])\n\n#array([[1, 2, 3],\n# [4, 5, 6],\n# [7, 8, 9]])\n\na[np.triu_indices(3, k = 1)]\n\n# this returns the following\narray([2, 3, 6])\n```\nEDIT (on 11.11.2019): To put back the extracted vector into a 2D symmetric array, one can follow my answer here: https:\/\/stackoverflow.com\/a\/58806626\/5025009","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8905501\/extract-upper-or-lower-triangular-part-of-a-numpy-matrix","best_answers_votes":134,"tags":["python","numpy","machine-learning"],"question_length":582,"response_length":885,"tag_count":3} {"question":"What is the equivalent of MATLAB's repmat in NumPy I would like to execute the equivalent of the following MATLAB code using NumPy: repmat([1; 1], [1 1 1]). How would I accomplish this?","response":"Here is a much better (official) NumPy for Matlab Users link - I'm afraid the mathesaurus one is quite out of date. The numpy equivalent of repmat(a, m, n) is tile(a, (m, n)). This works with multiple dimensions and gives a similar result to matlab. (Numpy gives a 3d output array as you would expect - matlab for some reason gives 2d output - but the content is the same). 
Matlab: \n```\n>> repmat([1;1],[1,1,1])\n\nans =\n 1\n 1\n```\nPython: \n```\n\n```python\na = np.array([[1],[1]])\n```\n\n```python\nnp.tile(a, [1,1,1])\n#Output\n#array([[[1],\n# [1]]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1721802\/what-is-the-equivalent-of-matlabs-repmat-in-numpy","best_answers_votes":122,"tags":["python","matlab","numpy"],"question_length":185,"response_length":553,"tag_count":3} {"question":"Mean Squared Error in Numpy? Is there a method in numpy for calculating the Mean Squared Error between two matrices? I've tried searching but found none. Is it under a different name? If there isn't, how do you overcome this? Do you write it yourself or use a different lib?","response":"You can use: \n```\nmse = ((A - B)**2).mean(axis=ax)\n```\nOr \n```\nmse = (np.square(A - B)).mean(axis=ax)\n```\nWith ax=0 the average is performed along the rows, for each column, returning an array. With ax=1 the average is performed along the columns, for each row, returning an array. Omitting the ax parameter (or setting it to ax=None), the average is performed element-wise over the whole array, returning a scalar value.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16774849\/mean-squared-error-in-numpy","best_answers_votes":158,"tags":["python","arrays","numpy","mean","mean-square-error"],"question_length":274,"response_length":415,"tag_count":5} {"question":"inverting image in Python with OpenCV I want to load a color image, convert it to grayscale, and then invert the data in the file. What I need: to iterate over the array in OpenCV and change every single value with this formula (it might be wrong, but it seems reasonable to me): \n```\nimg[x,y] = abs(img[x,y] - 255)\n```\nbut I don't understand why it doesn't work: \n```\ndef inverte(imagem, name):\n imagem = abs(imagem - 255)\n cv2.imwrite(name, imagem)\n\n\ndef inverte2(imagem, name):\n for x in np.nditer(imagem, op_flags=['readwrite']):\n x = abs(x - 255)\n cv2.imwrite(name, imagem)\n\n\nif __name__ == '__main__':\n nome = str(sys.argv[1])\n image = cv2.imread(nome)\n gs_imagem = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)\n inverte(gs_imagem, \"invertida.png\")\n inverte2(gs_imagem, \"invertida2.png\")\n```\nI don't want to do an explicit loop (I am trying to be more pythonic). I can see that in one image with a white background the background turned black, but apart from that the other colors don't seem to change much (if at all).","response":"You almost did it. You were tricked by the fact that abs(imagem-255) will give a wrong result since your dtype is an unsigned integer. You have to do (255-imagem) in order to keep the integers unsigned: \n```\ndef inverte(imagem, name):\n imagem = (255-imagem)\n cv2.imwrite(name, imagem)\n```\nAs recommended in the answer by Eric Olmon, you can also invert the image using the bitwise_not function of OpenCV: \n```\nimagem = cv2.bitwise_not(imagem)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19580102\/inverting-image-in-python-with-opencv","best_answers_votes":227,"tags":["python","arrays","opencv","numpy","image-processing"],"question_length":1056,"response_length":452,"tag_count":5} {"question":"Consistently create same random numpy array I am waiting for another developer to finish a piece of code that will return an np array of shape (100,2000) with values of either -1, 0, or 1.
In the meantime, I want to randomly create an array of the same characteristics so I can get a head start on my development and testing. The thing is that I want this randomly created array to be the same each time, so that I'm not testing against an array that keeps changing its value each time I re-run my process. I can create my array like this, but is there a way to create it so that it's the same each time? I can pickle the object and unpickle it, but I am wondering if there's another way. \n```\nr = np.random.randint(3, size=(100, 2000)) - 1\n```","response":"Create your own instance of numpy.random.RandomState() with your chosen seed. Do not use numpy.random.seed() except to work around inflexible libraries that do not let you pass around your own RandomState instance. \n```\n[~]\n|1> from numpy.random import RandomState\n\n[~]\n|2> prng = RandomState(1234567890)\n\n[~]\n|3> prng.randint(-1, 2, size=10)\narray([ 1, 1, -1, 0, 0, -1, 1, 0, -1, -1])\n\n[~]\n|4> prng2 = RandomState(1234567890)\n\n[~]\n|5> prng2.randint(-1, 2, size=10)\narray([ 1, 1, -1, 0, 0, -1, 1, 0, -1, -1])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5836335\/consistently-create-same-random-numpy-array","best_answers_votes":221,"tags":["python","random","numpy"],"question_length":738,"response_length":522,"tag_count":3} {"question":"How to delete columns in numpy.array I would like to delete selected columns in a numpy.array. This is what I do: \n```\nIn [397]: a = array([[ NaN, 2., 3., NaN],\n .....: [ 1., 2., 3., 9]])\n\n```python\nprint a\n[[ NaN 2. 3. NaN]\n [ 1. 2. 3. 9.]]\n```\n\n```python\nz = any(isnan(a), axis=0)\n```\n\n```python\nprint z\n[ True False False True]\n```\n\n```python\ndelete(a, z, axis = 1)\n#Output\n#array([[ 3., NaN],\n# [ 3., 9.]])\n#```\n#In this example my goal is to delete all the columns that contain NaN's. I expect the last command to result in: \n#```\n#array([[2., 3.],\n# [2., 3.]])\n#```\n#How can I do that?\n```","response":"Given its name, I think the standard way should be delete: \n```\nimport numpy as np\n\nA = np.delete(A, 1, 0) # delete second row of A\nB = np.delete(B, 2, 0) # delete third row of B\nC = np.delete(C, 1, 1) # delete second column of C\n```\nAccording to numpy's documentation page, the parameters for numpy.delete are as follows: numpy.delete(arr, obj, axis=None) arr refers to the input array, obj refers to which sub-arrays (e.g. column\/row no. or slice of the array) and axis refers to either column wise (axis = 1) or row-wise (axis = 0) delete operation.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1642730\/how-to-delete-columns-in-numpy-array","best_answers_votes":180,"tags":["python","numpy","scipy"],"question_length":621,"response_length":554,"tag_count":3} {"question":"Elegant way to create empty pandas DataFrame with NaN of type float I want to create a Pandas DataFrame filled with NaNs. During my research I found an answer: \n```\nimport pandas as pd\n\ndf = pd.DataFrame(index=range(0,4),columns=['A'])\n```\nThis code results in a DataFrame filled with NaNs of type \"object\". So they cannot be used later on for example with the interpolate() method.
Therefore, I created the DataFrame with this complicated code (inspired by this answer): \n```\nimport pandas as pd\nimport numpy as np\n\ndummyarray = np.empty((4,1))\ndummyarray[:] = np.nan\n\ndf = pd.DataFrame(dummyarray)\n```\nThis results in a DataFrame filled with NaN of type \"float\", so it can be used later on with interpolate(). Is there a more elegant way to create the same result?","response":"Simply pass the desired value as first argument, like 0, math.inf or, here, np.nan. The constructor then initializes and fills the value array to the size specified by arguments index and columns: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nimport pandas as pd\n```\n\n```python\ndf = pd.DataFrame(np.nan, index=[0, 1, 2, 3], columns=['A', 'B'])\n```\n\n```python\ndf\n#Output\n# A B\n#0 NaN NaN\n#1 NaN NaN\n#2 NaN NaN\n#3 NaN NaN\n```\n\n```python\ndf.dtypes\n#Output\n#A float64\n#B float64\n#dtype: object\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30053329\/elegant-way-to-create-empty-pandas-dataframe-with-nan-of-type-float","best_answers_votes":149,"tags":["python","pandas","numpy","dataframe","nan"],"question_length":766,"response_length":435,"tag_count":5} {"question":"mean, nanmean and warning: Mean of empty slice Say I construct three numpy arrays: \n```\na = np.array([1, 2, 3])\nb = np.array([np.NaN, np.NaN, 3])\nc = np.array([np.NaN, np.NaN, np.NaN])\n```\nNow I find that np.mean returns nan for both b and c: \n```\n\n```python\nnp.mean(a)\n#Output\n#2.0\n```\n\n```python\nnp.mean(b)\n#Output\n#nan\n```\n\n```python\nnp.mean(c)\n#Output\n#nan\n#```\n#Since numpy 1.8 (released in 2013), we've been blessed with nanmean, which ignores nan values: \n#```\n```\n\n```python\nnp.nanmean(a)\n#Output\n#2.0\n```\n\n```python\nnp.nanmean(b)\n#Output\n#3.0\n```\n\n```python\nnp.nanmean(c)\n#Output\n#nan\n#C:\\python-3.4.3\\lib\\site-packages\\numpy\\lib\\nanfunctions.py:598: RuntimeWarning: Mean of empty slice\n# warnings.warn(\"Mean of empty slice\", RuntimeWarning)\n#```\n#So, nanmean is great, but it has the odd and undesirable behaviour of raising a warning when the array has nothing but nan values. How can I get the behaviour of nanmean without that warning? I don't like warnings, and I don't like suppressing them manually.","response":"I really can't see any good reason not to just suppress the warning. The safest way would be to use the warnings.catch_warnings context manager to suppress the warning only where you anticipate it occurring - that way you won't miss any additional RuntimeWarnings that might be unexpectedly raised in some other part of your code: \n```\nimport numpy as np\nimport warnings\n\nx = np.ones((1000, 1000)) * np.nan\n\n# I expect to see RuntimeWarnings in this block\nwith warnings.catch_warnings():\n warnings.simplefilter(\"ignore\", category=RuntimeWarning)\n foo = np.nanmean(x, axis=1)\n```\n@dawg's solution would also work, but ultimately any additional steps that you have to take in order to avoid computing np.nanmean on an array of all NaNs are going to incur some extra overhead that you could avoid by just suppressing the warning.
Also your intent will be much more clearly reflected in the code.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29688168\/mean-nanmean-and-warning-mean-of-empty-slice","best_answers_votes":106,"tags":["python","numpy","warnings"],"question_length":899,"response_length":898,"tag_count":3} {"question":"Two-sample Kolmogorov-Smirnov Test in Python Scipy I can't figure out how to do a Two-sample KS test in Scipy. After reading the documentation of scipy kstest, I can see how to test whether a distribution is identical to standard normal distribution \n```py\nfrom scipy.stats import kstest\nimport numpy as np\n\nx = np.random.normal(0,1,1000)\ntest_stat = kstest(x, 'norm')\n#>>> test_stat\n#(0.021080234718821145, 0.76584491300591395)\n```\nWhich means that at p-value of 0.76 we cannot reject the null hypothesis that the two distributions are identical. However, I want to compare two distributions and see if I can reject the null hypothesis that they are identical, something like: \n```py\nfrom scipy.stats import kstest\nimport numpy as np\n\nx = np.random.normal(0,1,1000)\nz = np.random.normal(1.1,0.9, 1000)\n```\nand test whether x and z are identical. I tried the naive: \n```py\ntest_stat = kstest(x, z)\n```\nand got the following error: \n```none\nTypeError: 'numpy.ndarray' object is not callable\n```\nIs there a way to do a two-sample KS test in Python? If so, how should I do it?","response":"You are using the one-sample KS test. You probably want the two-sample test ks_2samp: \n```\n\n```python\nfrom scipy.stats import ks_2samp\n```\n\n```python\nimport numpy as np\n```\n\n```python\n\n```\n\n```python\nnp.random.seed(12345678)\n```\n\n```python\nx = np.random.normal(0, 1, 1000)\n```\n\n```python\ny = np.random.normal(0, 1, 1000)\n```\n\n```python\nz = np.random.normal(1.1, 0.9, 1000)\n```\n\n```python\n\n```\n\n```python\nks_2samp(x, y)\n#Output\n#Ks_2sampResult(statistic=0.022999999999999909, pvalue=0.95189016804849647)\n```\n\n```python\nks_2samp(x, z)\n#Output\n#Ks_2sampResult(statistic=0.41800000000000004, pvalue=3.7081494119242173e-77)\n#```\n#Results can be interpreted as following: You can either compare the statistic value given by python to the KS-test critical value table according to your sample size. When statistic value is higher than the critical value, the two distributions are different. Or you can compare the p-value to a level of significance a, usually a=0.05 or 0.01 (you decide, the lower a is, the more significant). 
If p-value is lower than a, then it is very probable that the two distributions are different.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10884668\/two-sample-kolmogorov-smirnov-test-in-python-scipy","best_answers_votes":157,"tags":["python","numpy","scipy","statistics","distribution"],"question_length":1073,"response_length":989,"tag_count":5} {"question":"numpy division with RuntimeWarning: invalid value encountered in double_scalars I wrote the following script: \n```py\nimport numpy\n\nd = numpy.array([[1089, 1093]])\ne = numpy.array([[1000, 4443]])\nanswer = numpy.exp(-3 * d)\nanswer1 = numpy.exp(-3 * e)\nres = answer.sum()\/answer1.sum()\n```\nBut I got this result, with an error: \n```none\nnan\nC:\\Users\\Desktop\\test.py:16: RuntimeWarning: invalid value encountered in double_scalars\n res = answer.sum()\/answer1.sum()\n```\nIt seems that the input elements were so small that Python rounded them to zero, even though the division itself has a well-defined result. How can I solve this kind of problem?","response":"You can't solve it. Simply answer1.sum()==0, and you can't perform a division by zero. This happens because answer1 is the exponential of 2 very large, negative numbers, so that the result is rounded to zero. nan is returned in this case because of the division by zero. Now to solve your problem you could: go for a library for high-precision mathematics, like mpmath (but that's less fun); as an alternative to a bigger weapon, do some math manipulation, as detailed below; or go for a tailored scipy\/numpy function that does exactly what you want! Check out @Warren Weckesser's answer. Here I explain how to do some math manipulation that helps on this problem. We have that for the numerator: \n```\nexp(-x)+exp(-y) = exp(log(exp(-x)+exp(-y)))\n = exp(log(exp(-x)*[1+exp(-y+x)]))\n = exp(log(exp(-x)) + log(1+exp(-y+x)))\n = exp(-x + log(1+exp(-y+x)))\n```\nwhere above x = 3*1089 and y = 3*1093. Now, the argument of this exponential is -x + log(1+exp(-y+x)) = -x + 6.1441934777474324e-06. For the denominator you could proceed similarly, but obtain that log(1+exp(-k+z)) (with z = 3*1000 and k = 3*4443) is already rounded to 0, so that the argument of the exponential function at the denominator is simply rounded to -z=-3000. You then have that your result is \n```\nexp(-x + log(1+exp(-y+x)))\/exp(-z) = exp(-x+z+log(1+exp(-y+x)))\n = exp(-266.99999385580668)\n```\nwhich is already extremely close to the result that you would get if you were to keep only the 2 leading terms (i.e. the first number 1089 in the numerator and the first number 1000 at the denominator): \n```\nexp(-3*(1089-1000)) = exp(-267)\n```\nFor the sake of it, let's see how close we are from the solution of Wolfram alpha (link): \n```\nLog[(exp[-3*1089]+exp[-3*1093])\/(exp[-3*1000]+exp[-3*4443])] -> -266.999993855806522267194565420933791813296828742310997510523\n```\nThe difference between this number and the exponent above is +1.7053025658242404e-13, so the approximation we made at the denominator was fine. The final result is \n```\nexp(-266.99999385580668) = 1.1050349147204485e-116\n```\nFrom Wolfram alpha (link): \n```\n1.105034914720621496..
\u00d7 10^-116 # Wolfram alpha.\n```\nand again, it is safe to use numpy here too.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27784528\/numpy-division-with-runtimewarning-invalid-value-encountered-in-double-scalars","best_answers_votes":115,"tags":["python","arrays","numpy","warnings"],"question_length":643,"response_length":2222,"tag_count":4} {"question":"pandas select from Dataframe using startswith This works (using Pandas 12 dev) \n```\ntable2=table[table['SUBDIVISION'] =='INVERNESS']\n```\nThen I realized I needed to select the field using \"starts with\" Since I was missing a bunch. So per the Pandas doc as near as I could follow I tried \n```\ncriteria = table['SUBDIVISION'].map(lambda x: x.startswith('INVERNESS'))\ntable2 = table[criteria]\n```\nAnd got AttributeError: 'float' object has no attribute 'startswith' So I tried an alternate syntax with the same result \n```\ntable[[x.startswith('INVERNESS') for x in table['SUBDIVISION']]]\n```\nReference http:\/\/pandas.pydata.org\/pandas-docs\/stable\/indexing.html#boolean-indexing Section 4: List comprehensions and map method of Series can also be used to produce more complex criteria: What am I missing?","response":"You can use the str.startswith DataFrame method to give more consistent results: \n```\n\n```python\ns = pd.Series(['a', 'ab', 'c', 11, np.nan])\n```\n\n```python\ns\n#Output\n#0 a\n#1 ab\n#2 c\n#3 11\n#4 NaN\n#dtype: object\n```\n\n```python\ns.str.startswith('a', na=False)\n#Output\n#0 True\n#1 True\n#2 False\n#3 False\n#4 False\n#dtype: bool\n#```\n#and the boolean indexing will work just fine (I prefer to use loc, but it works just the same without): \n#```\n```\n\n```python\ns.loc[s.str.startswith('a', na=False)]\n#Output\n#0 a\n#1 ab\n#dtype: object\n#```\n#. It looks least one of your elements in the Series\/column is a float, which doesn't have a startswith method hence the AttributeError, the list comprehension should raise the same error...\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17957890\/pandas-select-from-dataframe-using-startswith","best_answers_votes":137,"tags":["python","numpy","pandas"],"question_length":799,"response_length":730,"tag_count":3} {"question":"Should I use np.absolute or np.abs? Numpy provides both np.absolute and the alias np.abs defined via \n```\nfrom .numeric import absolute as abs\n```\nwhich seems to be in obvious violation of the zen of python: There should be one-- and preferably only one --obvious way to do it. So I'm guessing that there is a good reason for this. I have personally been using np.abs in almost all of my code and looking at e.g. the number of search results for np.abs vs np.absolute on Stack Overflow it seems like an overwhelming majority does the same (2130 vs 244 hits). Is there any reason i should preferentially use np.absolute over np.abs in my code, or should I simply go for the more \"standard\" np.abs?","response":"It's likely because there a built-in functions with the same name, abs. The same is true for np.amax, np.amin and np.round_. The aliases for the NumPy functions abs, min, max and round are only defined in the top-level package. So np.abs and np.absolute are completely identical. It doesn't matter which one you use. There are several advantages to the short names: They are shorter and they are known to Python programmers because the names are identical to the built-in Python functions. 
So end-users have it easier (less to type, less to remember). But there are reasons to have different names too: NumPy (or more generally 3rd party packages) sometimes need the Python functions abs, min, etc. So inside the package they define functions with a different name so you can still access the Python functions - and just in the top-level of the package you expose the \"shortcuts\". Note: Different names are not the only available option in that case: One could work around that with the Python module builtins to access the built-in functions if one shadowed a built-in name. It might also be the case (but that's pure speculation on my part) that they originally only included the long-named functions absolute (and so on) and only added the short aliases later. Being a large and well-used library the NumPy developers don't remove or deprecate stuff lightly. So they may just keep the long names around because it could break old code\/scripts if they would remove them.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45413909\/should-i-use-np-absolute-or-np-abs","best_answers_votes":106,"tags":["python","numpy","absolute-value"],"question_length":696,"response_length":1472,"tag_count":3} {"question":"AttributeError: 'Tensor' object has no attribute 'numpy' I downloaded this code from GitHub. \n```\npredicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()\n```\nBut I get an error that says: \n```\nAttributeError: 'Tensor' object has no attribute 'numpy'\n```\nWhat is wrong, and how do I fix it?","response":"Since the accepted answer did not solve the problem for me so I thought it might be helpful for some people who face the problem and that already have tensorflow version >= 2.2.0 and eager execution enabled. The issue seems to be that for certain functions during the fitting model.fit() the @tf.function decorator prohibits the execution of functions like tensor.numpy() for performance reasons. The solution for me was to pass the flag run_eagerly=True to the model.compile() like this: \n```\nmodel.compile(..., run_eagerly=True)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/52357542\/attributeerror-tensor-object-has-no-attribute-numpy","best_answers_votes":104,"tags":["python","numpy","tensorflow","attributeerror","tensor"],"question_length":313,"response_length":534,"tag_count":5} {"question":"data type not understood I'm trying to use a matrix to compute stuff. The code is this \n```\nimport numpy as np\n# some code\nmmatrix = np.zeros(nrows, ncols)\nprint mmatrix[0, 0]\n```\nbut I get 'data type not understood', and it works if I do it from terminal.","response":"Try: \n```\nmmatrix = np.zeros((nrows, ncols))\n```\nSince the shape parameter has to be an int or sequence of ints http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.zeros.html Otherwise you are passing ncols to np.zeros as the dtype.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5446522\/data-type-not-understood","best_answers_votes":163,"tags":["python","matrix","numpy"],"question_length":256,"response_length":238,"tag_count":3} {"question":"TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced I am trying to convert a CSV into numpy array. In the numpy array, I am replacing few elements with NaN. 
Then, I wanted to find the indices of the NaN elements in the numpy array. The code is: \n```\nimport pandas as pd\nimport matplotlib.pyplot as plyt\nimport numpy as np\n\nfilename = 'wether.csv'\n\ndf = pd.read_csv(filename,header = None )\n\nlist = df.values.tolist()\nlabels = list[0]\nwether_list = list[1:]\n\nyear = []\nmonth = []\nday = []\nmax_temp = []\n\nfor i in wether_list:\n year.append(i[1])\n month.append(i[2])\n day.append(i[3])\n max_temp.append(i[5])\n\nmid = len(max_temp) \/\/ 2\ntemps = np.array(max_temp[mid:])\ntemps[np.where(np.array(temps) == -99.9)] = np.nan\nplyt.plot(temps,marker = '.',color = 'black',linestyle = 'none')\n# plyt.show()\n\nprint(np.where(np.isnan(temps))[0])\n# print(len(pd.isnull(np.array(temps))))\n```\nWhen I execute this, I am getting a warning and an error. The warning is: \n```\nwether.py:26: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison\n temps[np.where(np.array(temps) == -99.9)] = np.nan\n```\nThe error is: \n```\nTraceback (most recent call last):\n File \"wether.py\", line 30, in \n print(np.where(np.isnan(temps))[0])\nTypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''\n```\nThis is a part of the dataset which I am using: \n```\n83168,2014,9,7,0.00000,89.00000,78.00000, 83.50000\n83168,2014,9,22,1.62000,90.00000,72.00000, 81.00000\n83168,2014,9,23,0.50000,87.00000,74.00000, 80.50000\n83168,2014,9,24,0.35000,82.00000,73.00000, 77.50000\n83168,2014,9,25,0.60000,85.00000,75.00000, 80.00000\n83168,2014,9,26,0.76000,89.00000,77.00000, 83.00000\n83168,2014,9,27,0.00000,89.00000,79.00000, 84.00000\n83168,2014,9,28,0.00000,90.00000,81.00000, 85.50000\n83168,2014,9,29,0.00000,90.00000,79.00000, 84.50000\n83168,2014,9,30,0.50000,89.00000,75.00000, 82.00000\n83168,2014,10,1,0.02000,91.00000,75.00000, 83.00000\n83168,2014,10,2,0.03000,93.00000,77.00000, 85.00000\n83168,2014,10,3,1.40000,93.00000,75.00000, 84.00000\n83168,2014,10,4,0.06000,89.00000,75.00000, 82.00000\n83168,2014,10,5,0.22000,91.00000,68.00000, 79.50000\n83168,2014,10,6,0.00000,84.00000,68.00000, 76.00000\n83168,2014,10,7,0.17000,85.00000,73.00000, 79.00000\n83168,2014,10,8,0.06000,84.00000,73.00000, 78.50000\n83168,2014,10,9,0.00000,87.00000,73.00000, 80.00000\n83168,2014,10,10,0.00000,88.00000,80.00000, 84.00000\n83168,2014,10,11,0.00000,87.00000,80.00000, 83.50000\n83168,2014,10,12,0.00000,88.00000,80.00000, 84.00000\n83168,2014,10,13,0.00000,88.00000,81.00000, 84.50000\n83168,2014,10,14,0.04000,88.00000,77.00000, 82.50000\n83168,2014,10,15,0.00000,88.00000,77.00000, 82.50000\n83168,2014,10,16,0.09000,89.00000,72.00000, 80.50000\n83168,2014,10,17,0.00000,85.00000,67.00000, 76.00000\n83168,2014,10,18,0.00000,84.00000,65.00000, 74.50000\n83168,2014,10,19,0.00000,84.00000,65.00000, 74.50000\n83168,2014,10,20,0.00000,85.00000,69.00000, 77.00000\n83168,2014,10,21,0.77000,87.00000,76.00000, 81.50000\n83168,2014,10,22,0.69000,81.00000,71.00000, 76.00000\n83168,2014,10,23,0.31000,82.00000,72.00000, 77.00000\n83168,2014,10,24,0.71000,79.00000,73.00000, 76.00000\n83168,2014,10,25,0.00000,81.00000,68.00000, 74.50000\n83168,2014,10,26,0.00000,82.00000,67.00000, 74.50000\n83168,2014,10,27,0.00000,83.00000,64.00000, 73.50000\n83168,2014,10,28,0.00000,83.00000,66.00000, 74.50000\n83168,2014,10,29,0.03000,86.00000,76.00000, 81.00000\n83168,2014,10,30,0.00000,85.00000,69.00000, 
77.00000\n83168,2014,10,31,0.00000,85.00000,69.00000, 77.00000\n83168,2014,11,1,0.00000,86.00000,59.00000, 72.50000\n83168,2014,11,2,0.00000,77.00000,52.00000, 64.50000\n83168,2014,11,3,0.00000,70.00000,52.00000, 61.00000\n83168,2014,11,4,0.00000,77.00000,59.00000, 68.00000\n83168,2014,11,5,0.02000,79.00000,73.00000, 76.00000\n83168,2014,11,6,0.02000,82.00000,75.00000, 78.50000\n83168,2014,11,7,0.00000,83.00000,66.00000, 74.50000\n83168,2014,11,8,0.00000,84.00000,65.00000, 74.50000\n83168,2014,11,9,0.00000,84.00000,65.00000, 74.50000\n83168,2014,11,10,1.20000,72.00000,65.00000, 68.50000\n83168,2014,11,11,0.08000,77.00000,61.00000, 69.00000\n83168,2014,11,12,0.00000,80.00000,61.00000, 70.50000\n83168,2014,11,13,0.00000,83.00000,63.00000, 73.00000\n83168,2014,11,14,0.00000,83.00000,65.00000, 74.00000\n83168,2014,11,15,0.00000,82.00000,64.00000, 73.00000\n83168,2014,11,16,0.00000,83.00000,64.00000, 73.50000\n83168,2014,11,17,0.07000,84.00000,64.00000, 74.00000\n83168,2014,11,18,0.00000,86.00000,71.00000, 78.50000\n83168,2014,11,19,0.57000,78.00000,55.00000, 66.50000\n83168,2014,11,20,0.05000,72.00000,56.00000, 64.00000\n83168,2014,11,21,0.05000,77.00000,63.00000, 70.00000\n83168,2014,11,22,0.22000,77.00000,69.00000, 73.00000\n83168,2014,11,23,0.06000,79.00000,76.00000, 77.50000\n83168,2014,11,24,0.02000,84.00000,78.00000, 81.00000\n83168,2014,11,25,0.00000,86.00000,78.00000, 82.00000\n83168,2014,11,26,0.07000,85.00000,77.00000, 81.00000\n83168,2014,11,27,0.21000,82.00000,55.00000, 68.50000\n83168,2014,11,28,0.00000,73.00000,53.00000, 63.00000\n83168,2015,1,8,0.00000,80.00000,57.00000,\n83168,2015,1,9,0.05000,72.00000,56.00000,\n83168,2015,1,10,0.00000,72.00000,57.00000,\n83168,2015,1,11,0.00000,80.00000,57.00000,\n83168,2015,1,12,0.05000,80.00000,59.00000,\n83168,2015,1,13,0.85000,81.00000,69.00000,\n83168,2015,1,14,0.05000,81.00000,68.00000,\n83168,2015,1,15,0.00000,81.00000,64.00000,\n83168,2015,1,16,0.00000,78.00000,63.00000,\n83168,2015,1,17,0.00000,73.00000,55.00000,\n83168,2015,1,18,0.00000,76.00000,55.00000,\n83168,2015,1,19,0.00000,78.00000,55.00000,\n83168,2015,1,20,0.00000,75.00000,56.00000,\n83168,2015,1,21,0.02000,73.00000,65.00000,\n83168,2015,1,22,0.00000,80.00000,64.00000,\n83168,2015,1,23,0.00000,80.00000,71.00000,\n83168,2015,1,24,0.00000,79.00000,72.00000,\n83168,2015,1,25,0.00000,79.00000,49.00000,\n83168,2015,1,26,0.00000,79.00000,49.00000,\n83168,2015,1,27,0.10000,75.00000,53.00000,\n83168,2015,1,28,0.00000,68.00000,53.00000,\n83168,2015,1,29,0.00000,69.00000,53.00000,\n83168,2015,1,30,0.00000,72.00000,60.00000,\n83168,2015,1,31,0.00000,76.00000,58.00000,\n83168,2015,2,1,0.00000,76.00000,58.00000,\n83168,2015,2,2,0.05000,77.00000,58.00000,\n83168,2015,2,3,0.00000,84.00000,56.00000,\n83168,2015,2,4,0.00000,76.00000,56.00000,\n```\nI am unable to rectify the error. How to overcome the warning in the 26th line? How can one solve this error? Update : when I try the same thing in different way like reading dataset from file instead of converting to dataframes, I am not getting the error. What would be the reason for that? 
The code is: \n```\nweather_filename = 'wether.csv'\nweather_file = open(weather_filename)\nweather_data = weather_file.read()\nweather_file.close()\n\n# Break the weather records into lines\nlines = weather_data.split('\\n')\nlabels = lines[0]\nvalues = lines[1:]\nn_values = len(values)\n\n# Break the list of comma-separated value strings\n# into lists of values.\nyear = []\nmonth = []\nday = []\nmax_temp = []\nj_year = 1\nj_month = 2\nj_day = 3\nj_max_temp = 5\n\nfor i_row in range(n_values):\n split_values = values[i_row].split(',')\n if len(split_values) >= j_max_temp:\n year.append(int(split_values[j_year]))\n month.append(int(split_values[j_month]))\n day.append(int(split_values[j_day]))\n max_temp.append(float(split_values[j_max_temp]))\n\n# Isolate the recent data.\ni_mid = len(max_temp) \/\/ 2\ntemps = np.array(max_temp[i_mid:])\nyear = year[i_mid:]\nmonth = month[i_mid:]\nday = day[i_mid:]\ntemps[np.where(temps == -99.9)] = np.nan\n\n# Remove all the nans.\n# Trim both ends and fill nans in the middle.\n# Find the first non-nan.\ni_start = np.where(np.logical_not(np.isnan(temps)))[0][0]\ntemps = temps[i_start:]\nyear = year[i_start:]\nmonth = month[i_start:]\nday = day[i_start:]\ni_nans = np.where(np.isnan(temps))[0]\nprint(i_nans)\n```\nWhat is wrong in the first code and why the second doesn't even give a warning?","response":"Posting as it might help future users. As correctly pointed out by others, np.isnan won't work for object or string dtypes. If you're using pandas, as mentioned here you can directly use pd.isnull, which should work in your case. \n```\nimport pandas as pd\nimport numpy as np\nvar1 = ''\nvar2 = np.nan\n\n```python\ntype(var1)\n#Output\n#\n```\n\n```python\ntype(var2)\n#Output\n#\n```\n\n```python\npd.isnull(var1)\n#Output\n#False\n```\n\n```python\npd.isnull(var2)\n#Output\n#True\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/52657223\/typeerror-ufunc-isnan-not-supported-for-the-input-types-and-the-inputs-could","best_answers_votes":104,"tags":["python","numpy","nan","missing-data","numpy-ufunc"],"question_length":8022,"response_length":412,"tag_count":5} {"question":"Numpy: Should I use newaxis or None? In numpy one can use the 'newaxis' object in the slicing syntax to create an axis of length one, e.g.: \n```\nimport numpy as np\nprint np.zeros((3,5))[:,np.newaxis,:].shape\n# shape will be (3,1,5)\n```\nThe documentation states that one can also use None instead of newaxis, the effect is exactly the same. Is there any reason to choose one over the other? Is there any general preference or style guide? My impression is that newaxis is more popular, probably because it is more explicit. So is there any reason why None is allowed?","response":"None is allowed because numpy.newaxis is merely an alias for None. \n```\n\n```python\nimport numpy\n```\n\n```python\nnumpy.newaxis is None\n#Output\n#True\n#```\n#The authors probably chose it because they needed a convenient constant, and None was available. As for why you should prefer newaxis over None: mainly it's because it's more explicit, and partly because someday the numpy authors might change it to something other than None. 
(They're not planning to, and probably won't, but there's no good reason to prefer None.)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/944863\/numpy-should-i-use-newaxis-or-none","best_answers_votes":122,"tags":["python","numpy"],"question_length":566,"response_length":506,"tag_count":2} {"question":"Fastest way to grow a numpy numeric array Requirements: I need to grow an array arbitrarily large from data. I can guess the size (roughly 100-200) with no guarantees that the array will fit every time Once it is grown to its final size, I need to perform numeric computations on it, so I'd prefer to eventually get to a 2-D numpy array. Speed is critical. As an example, for one of 300 files, the update() method is called 45 million times (takes 150s or so) and the finalize() method is called 500k times (takes total of 106s) ... taking a total of 250s or so. Here is my code: \n```\ndef __init__(self):\n self.data = []\n\ndef update(self, row):\n self.data.append(row)\n\ndef finalize(self):\n dx = np.array(self.data)\n```\nOther things I tried include the following code ... but this is waaaaay slower. \n```\ndef class A:\n def __init__(self):\n self.data = np.array([])\n\n def update(self, row):\n np.append(self.data, row)\n\n def finalize(self):\n dx = np.reshape(self.data, size=(self.data.shape[0]\/5, 5))\n```\nHere is a schematic of how this is called: \n```\nfor i in range(500000):\n ax = A()\n for j in range(200):\n ax.update([1,2,3,4,5])\n ax.finalize()\n # some processing on ax\n```","response":"I tried a few different things, with timing. \n```\nimport numpy as np\n```\nThe method you mention as slow: (32.094 seconds) \n```\nclass A:\n\n def __init__(self):\n self.data = np.array([])\n\n def update(self, row):\n self.data = np.append(self.data, row)\n\n def finalize(self):\n return np.reshape(self.data, newshape=(self.data.shape[0]\/5, 5))\n```\nRegular ol Python list: (0.308 seconds) \n```\nclass B:\n\n def __init__(self):\n self.data = []\n\n def update(self, row):\n for r in row:\n self.data.append(r)\n\n def finalize(self):\n return np.reshape(self.data, newshape=(len(self.data)\/5, 5))\n```\nTrying to implement an arraylist in numpy: (0.362 seconds) \n```\nclass C:\n\n def __init__(self):\n self.data = np.zeros((100,))\n self.capacity = 100\n self.size = 0\n\n def update(self, row):\n for r in row:\n self.add(r)\n\n def add(self, x):\n if self.size == self.capacity:\n self.capacity *= 4\n newdata = np.zeros((self.capacity,))\n newdata[:self.size] = self.data\n self.data = newdata\n\n self.data[self.size] = x\n self.size += 1\n\n def finalize(self):\n data = self.data[:self.size]\n return np.reshape(data, newshape=(len(data)\/5, 5))\n```\nAnd this is how I timed it: \n```\nx = C()\nfor i in xrange(100000):\n x.update([i])\n```\nSo it looks like regular old Python lists are pretty good ;)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7133885\/fastest-way-to-grow-a-numpy-numeric-array","best_answers_votes":121,"tags":["python","performance","numpy"],"question_length":1232,"response_length":1459,"tag_count":3} {"question":"Optimize finding index of nearest point in 2d arrays 2d NumPy array x_array contains positional information in x-direction, y_array positions in y-direction. I then have a list of x,y points. 
For each point in the list I find the array index of the location closest to that point, based on this code: \n```\nimport time\nimport numpy\n\ndef find_index_of_nearest_xy(y_array, x_array, y_point, x_point):\n distance = (y_array-y_point)**2 + (x_array-x_point)**2\n idy,idx = numpy.where(distance==distance.min())\n return idy[0],idx[0]\n\ndef do_all(y_array, x_array, points):\n store = []\n for i in xrange(points.shape[1]):\n store.append(find_index_of_nearest_xy(y_array,x_array,points[0,i],points[1,i]))\n return store\n\n\n# Create some dummy data\ny_array = numpy.random.random(10000).reshape(100,100)\nx_array = numpy.random.random(10000).reshape(100,100)\n\npoints = numpy.random.random(10000).reshape(2,5000)\n\n# Time how long it takes to run\nstart = time.time()\nresults = do_all(y_array, x_array, points)\nend = time.time()\nprint 'Completed in: ',end-start\n```\nI want to speed it up.","response":"Here is a scipy.spatial.KDTree example: \n```python\nfrom scipy import spatial\n```\n\n```python\nimport numpy as np\n```\n\n```python\nA = np.random.random((10,2))*100\n```\n\n```python\nA\n#Output\n#array([[ 68.83402637, 38.07632221],\n# [ 76.84704074, 24.9395109 ],\n# [ 16.26715795, 98.52763827],\n# [ 70.99411985, 67.31740151],\n# [ 71.72452181, 24.13516764],\n# [ 17.22707611, 20.65425362],\n# [ 43.85122458, 21.50624882],\n# [ 76.71987125, 44.95031274],\n# [ 63.77341073, 78.87417774],\n# [ 8.45828909, 30.18426696]])\n```\n\n```python\npt = [6, 30] # <-- the point to find\n```\n\n```python\nA[spatial.KDTree(A).query(pt)[1]] # <-- the nearest point\n#Output\n#array([ 8.45828909, 30.18426696])\n\n# how it works!\n```\n\n```python\ndistance,index = spatial.KDTree(A).query(pt)\n```\n\n```python\ndistance # <-- The distances to the nearest neighbors\n#Output\n#2.4651855048258393\n```\n\n```python\nindex # <-- The locations of the neighbors\n#Output\n#9\n\n# then\n```\n\n```python\nA[index]\n#Output\n#array([ 8.45828909, 30.18426696])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10818546\/optimize-finding-index-of-nearest-point-in-2d-arrays","best_answers_votes":100,"tags":["python","algorithm","numpy"],"question_length":1092,"response_length":992,"tag_count":3} {"question":"Why does numpy std() give a different result to matlab std()? I am trying to convert MATLAB code to numpy and figured out that numpy's std function gives a different result. In MATLAB: \n```\nstd([1,3,4,6])\nans = 2.0817\n```\nIn numpy: \n```\nnp.std([1,3,4,6])\n1.8027756377319946\n```\nIs this normal? And how should I handle this?","response":"The NumPy function np.std takes an optional parameter ddof: \"Delta Degrees of Freedom\". By default, this is 0. Set it to 1 to get the MATLAB result: \n```python\nnp.std([1,3,4,6], ddof=1)\n#Output\n#2.0816659994661326\n```\nTo add a little more context, in the calculation of the variance (of which the standard deviation is the square root) we typically divide by the number of values we have. But if we select a random sample of N elements from a larger distribution and calculate the variance, division by N can lead to an underestimate of the actual variance. To fix this, we can lower the number we divide by (the degrees of freedom) to a number less than N (usually N-1). The ddof parameter allows us to change the divisor by the amount we specify. Unless told otherwise, NumPy will calculate the biased estimator for the variance (ddof=0, dividing by N). 
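A quick side-by-side of the two divisors, using the sample from the question (my own illustrative sketch, not part of the original answer): \n```python\nimport numpy as np\n\nnp.std([1,3,4,6]) # ddof=0: the default, divides by N=4\n#Output\n#1.8027756377319946\n\nnp.std([1,3,4,6], ddof=1) # divides by N-1=3, matching MATLAB\n#Output\n#2.0816659994661326\n```\nThe first call shows the default, biased behaviour. 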
This is what you want if you are working with the entire distribution (and not a subset of values which have been randomly picked from a larger distribution). If the ddof parameter is given, NumPy divides by N - ddof instead. The default behaviour of MATLAB's std is to correct the bias for sample variance by dividing by N-1. This gets rid of some (but probably not all) of the bias in the standard deviation. This is likely to be what you want if you're using the function on a random sample of a larger distribution. The nice answer by @hbaderts gives further mathematical details.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27600207\/why-does-numpy-std-give-a-different-result-to-matlab-std","best_answers_votes":175,"tags":["python","matlab","numpy","standard-deviation"],"question_length":321,"response_length":1432,"tag_count":4} {"question":"How do you find the IQR in Numpy? Is there a baked-in Numpy\/Scipy function to find the interquartile range? I can do it pretty easily myself, but mean() exists which is basically sum\/len... \n```\ndef IQR(dist):\n return np.percentile(dist, 75) - np.percentile(dist, 25)\n```","response":"np.percentile takes multiple percentile arguments, and you are slightly better off doing: \n```\nq75, q25 = np.percentile(x, [75, 25])\niqr = q75 - q25\n```\nor \n```\niqr = np.subtract(*np.percentile(x, [75, 25]))\n```\nthan making two calls to percentile: \n```python\nx = np.random.rand(1e6)\n```\n\n```python\n%timeit q75, q25 = np.percentile(x, [75, 25]); iqr = q75 - q25\n10 loops, best of 3: 24.2 ms per loop\n```\n\n```python\n%timeit iqr = np.subtract(*np.percentile(x, [75, 25]))\n10 loops, best of 3: 24.2 ms per loop\n```\n\n```python\n%timeit iqr = np.percentile(x, 75) - np.percentile(x, 25)\n10 loops, best of 3: 33.7 ms per loop\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23228244\/how-do-you-find-the-iqr-in-numpy","best_answers_votes":160,"tags":["python","numpy","scipy"],"question_length":274,"response_length":608,"tag_count":3} {"question":"Find out if a matrix is positive definite with NumPy How can I find out if a matrix is positive definite? My matrix is a NumPy matrix. I was expecting to find a related method in the NumPy library, but I didn't have any success.","response":"You can also check if all the eigenvalues of the matrix are positive. If so, the matrix is positive definite: \n```\nimport numpy as np\n\ndef is_pos_def(x):\n return np.all(np.linalg.eigvals(x) > 0)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16266720\/find-out-if-a-matrix-is-positive-definite-with-numpy","best_answers_votes":114,"tags":["python","matrix","numpy","scipy"],"question_length":230,"response_length":197,"tag_count":4} {"question":"Using a pre-trained word embedding (word2vec or Glove) in TensorFlow I've recently reviewed an interesting implementation for convolutional text classification. 
However all TensorFlow code I've reviewed uses a random (not pre-trained) embedding vectors like the following: \n```\nwith tf.device('\/cpu:0'), tf.name_scope(\"embedding\"):\n W = tf.Variable(\n tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),\n name=\"W\")\n self.embedded_chars = tf.nn.embedding_lookup(W, self.input_x)\n self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)\n```\nDoes anybody know how to use the results of Word2vec or a GloVe pre-trained word embedding instead of a random one?","response":"There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called embedding, with vocab_size rows and embedding_dim columns and you want to create a tensor W that can be used in a call to tf.nn.embedding_lookup(). Simply create W as a tf.constant() that takes embedding as its value: \n```\nW = tf.constant(embedding, name=\"W\")\n```\nThis is the easiest approach, but it is not memory efficient because the value of a tf.constant() is stored multiple times in memory. Since embedding can be very large, you should only use this approach for toy examples. Create W as a tf.Variable and initialize it from the NumPy array via a tf.placeholder(): \n```\nW = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]),\n trainable=False, name=\"W\")\n\nembedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim])\nembedding_init = W.assign(embedding_placeholder)\n\n# ...\nsess = tf.Session()\n\nsess.run(embedding_init, feed_dict={embedding_placeholder: embedding})\n```\nThis avoid storing a copy of embedding in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the tf.Variable). Note that I've assumed that you want to hold the embedding matrix constant during training, so W is created with trainable=False. If the embedding was trained as part of another TensorFlow model, you can use a tf.train.Saver to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create W as in option 2, then do the following: \n```\nW = tf.Variable(...)\n\nembedding_saver = tf.train.Saver({\"name_of_variable_in_other_model\": W})\n\n# ...\nsess = tf.Session()\nembedding_saver.restore(sess, \"checkpoint_filename.ckpt\")\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35687678\/using-a-pre-trained-word-embedding-word2vec-or-glove-in-tensorflow","best_answers_votes":132,"tags":["python","numpy","tensorflow","deep-learning"],"question_length":699,"response_length":1843,"tag_count":4} {"question":"TypeError: Object of type 'float32' is not JSON serializable [duplicate] This question already has answers here: Convert numpy type to python (7 answers) Closed 6 years ago. I'm working with numpy.float32 numbers and they don't go into JSON. What's the right approach to overcome this issue? \n```\nimport numpy as np\nimport json\n\na = np.float32(1)\njson.dumps(a)\n\nTypeError: Object of type 'float32' is not JSON serializable\n```","response":"It has to be a string, so you can have: \n```\njson.dumps(str(a))\n```\nEDIT: JSON is a format for serialising object data. It doesn't really care or know about Python types, the json package tries to translate whatever object you pass json.dumps() into a string form via a conversion table that only supports some types (see doc below). 
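A quick check of that table (an illustrative sketch of mine, not part of the original answer): the built-in float type is in the table, so converting the NumPy scalar first also works. \n```python\nimport json\nimport numpy as np\n\na = np.float32(1)\njson.dumps(float(a)) # float(a) is a plain Python float, which the default encoder accepts\n#Output\n#'1.0'\n```\n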
This is the reason why I think it's a good idea to just pass a string to avoid this issue: numpy.float32 just isn't in the table. Because some have commented that explicitly passing a string to dumps \"sounds wrong\" I'll just add the doc here json.dumps(obj, *, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, cls=None, indent=None, separators=None, default=None, sort_keys=False, **kw) Serialize obj to a JSON formatted str using this conversion table. The arguments have the same meaning as in dump(). Note Keys in key\/value pairs of JSON are always of the type str. When a dictionary is converted into JSON, all the keys of the dictionary are coerced to strings. As a result of this, if a dictionary is converted into JSON and then back into a dictionary, the dictionary may not equal the original one. That is, loads(dumps(x)) != x if x has non-string keys. taken from the official docs here: https:\/\/docs.python.org\/3\/library\/json.html","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/53082708\/typeerror-object-of-type-float32-is-not-json-serializable","best_answers_votes":69,"tags":["python","json","numpy"],"question_length":426,"response_length":1297,"tag_count":3} {"question":"How to apply numpy.linalg.norm to each row of a matrix? I have a 2D matrix and I want to take norm of each row. But when I use numpy.linalg.norm(X) directly, it takes the norm of the whole matrix. I can take norm of each row by using a for loop and then taking norm of each X[i], but it takes a huge time since I have 30k rows. Any suggestions to find a quicker way? Or is it possible to apply np.linalg.norm to each row of a matrix?","response":"For numpy 1.9+ Note that, as perimosocordiae shows, as of NumPy version 1.9, np.linalg.norm(x, axis=1) is the fastest way to compute the L2-norm. For numpy < 1.9 If you are computing an L2-norm, you could compute it directly (using the axis=-1 argument to sum along rows): \n```\nnp.sum(np.abs(x)**2,axis=-1)**(1.\/2)\n```\nLp-norms can be computed similarly of course. It is considerably faster than np.apply_along_axis, though perhaps not as convenient: \n```\n\n```python\n%timeit np.apply_along_axis(np.linalg.norm, 1, x)\n1000 loops, best of 3: 208 us per loop\n```\n\n```python\n%timeit np.sum(np.abs(x)**2,axis=-1)**(1.\/2)\n100000 loops, best of 3: 18.3 us per loop\n```\nOther ord forms of norm can be computed directly too (with similar speedups): \n```\n```\n\n```python\n%timeit np.apply_along_axis(lambda row:np.linalg.norm(row,ord=1), 1, x)\n1000 loops, best of 3: 203 us per loop\n```\n\n```python\n%timeit np.sum(abs(x), axis=-1)\n100000 loops, best of 3: 10.9 us per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7741878\/how-to-apply-numpy-linalg-norm-to-each-row-of-a-matrix","best_answers_votes":103,"tags":["python","numpy"],"question_length":433,"response_length":945,"tag_count":2} {"question":"How to convert list of model objects to pandas dataframe? 
I have an array of objects of this class \n```\nclass CancerDataEntity(Model):\n\n age = columns.Text(primary_key=True)\n gender = columns.Text(primary_key=True)\n cancer = columns.Text(primary_key=True)\n deaths = columns.Integer()\n ...\n```\nWhen printed, array looks like this \n```\n[CancerDataEntity(age=u'80-85+', gender=u'Female', cancer=u'All cancers (C00-97,B21)', deaths=15306), CancerDataEntity(...\n```\nI want to convert this to a data frame so I can play with it in a more suitable way to me - to aggregate, count, sum and similar. How I wish this data frame to look, would be something like this: \n```\nage gender cancer deaths\n0 80-85+ Female ... 15306\n1 ...\n```\nIs there a way to achieve this using numpy\/pandas easily, without manually processing the input array?","response":"A much cleaner way to to this is to define a to_dict method on your class and then use pandas.DataFrame.from_records \n```\nclass Signal(object):\n def __init__(self, x, y):\n self.x = x\n self.y = y\n\n def to_dict(self):\n return {\n 'x': self.x,\n 'y': self.y,\n }\n```\ne.g. \n```\n\n```python\nsignals = [Signal(3, 9), Signal(4, 16)]\n```\n\n```python\npandas.DataFrame.from_records([s.to_dict() for s in signals])\n#Output\n#x y\n#0 3 9\n#1 4 16\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/34997174\/how-to-convert-list-of-model-objects-to-pandas-dataframe","best_answers_votes":123,"tags":["python","numpy","pandas"],"question_length":870,"response_length":487,"tag_count":3} {"question":"What is the most efficient way to check if a value exists in a NumPy array? I have a very large NumPy array \n```\n1 40 3\n4 50 4\n5 60 7\n5 49 6\n6 70 8\n8 80 9\n8 72 1\n9 90 7\n....\n```\nI want to check to see if a value exists in the 1st column of the array. I've got a bunch of homegrown ways (e.g. iterating through each row and checking), but given the size of the array I'd like to find the most efficient method. Thanks!","response":"How about \n```\nif value in my_array[:, col_num]:\n do_whatever\n```\nEdit: I think __contains__ is implemented in such a way that this is the same as @detly's version","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7088625\/what-is-the-most-efficient-way-to-check-if-a-value-exists-in-a-numpy-array","best_answers_votes":111,"tags":["python","performance","numpy"],"question_length":417,"response_length":166,"tag_count":3} {"question":"Get the position of the largest value in a multi-dimensional NumPy array How can I get get the position (indices) of the largest value in a multi-dimensional NumPy array?","response":"The argmax() method should help. Update (After reading comment) I believe the argmax() method would work for multi dimensional arrays as well. 
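(Note that for a multi-dimensional array, argmax() by default returns a single index into the flattened array; the unravel_index update further below shows how to convert it back to an ND index.) 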
The linked documentation gives an example of this: \n```\n\n```python\na = array([[10,50,30],[60,20,40]])\n```\n\n```python\nmaxindex = a.argmax()\n```\n\n```python\nmaxindex\n#Output\n#3\n#```\n#Update 2 (Thanks to KennyTM's comment) You can use unravel_index(a.argmax(), a.shape) to get the index as a tuple: \n#```\n```\n\n```python\nfrom numpy import unravel_index\n```\n\n```python\nunravel_index(a.argmax(), a.shape)\n#Output\n#(1, 0)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/3584243\/get-the-position-of-the-largest-value-in-a-multi-dimensional-numpy-array","best_answers_votes":203,"tags":["python","arrays","indexing","numpy"],"question_length":170,"response_length":488,"tag_count":4} {"question":"Importing data from a MySQL database into a Pandas data frame including column names [duplicate] This question already has answers here: How to convert SQL Query result to PANDAS Data Structure? (18 answers) Closed 9 years ago. I am importing data from a MySQL database into a Pandas data frame. The following excerpt is the code that I am using: \n```\nimport mysql.connector as sql\nimport pandas as pd\n\ndb_connection = sql.connect(host='hostname', database='db_name', user='username', password='password')\ndb_cursor = db_connection.cursor()\ndb_cursor.execute('SELECT * FROM table_name')\n\ntable_rows = db_cursor.fetchall()\n\ndf = pd.DataFrame(table_rows)\n```\nWhen I print the data frame it does properly represent the data but my question is, is it possible to also keep the column names? Here is an example output: \n```\n0 1 2 3 4 5 6 7 8\n0 :ID[giA0CqQcx+(9kbuSKV== NaN NaN None None None None None None\n1 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None \n2 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None \n3 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None \n4 lXB+jIS)DN!CXmj>0(P8^]== NaN NaN None None None None None None\n```\nWhat I would like to do is keep the column name, which would replace the pandas column indexes. For example, instead of having 0, the column name would be: \"First_column\" as in the MySQL table. Is there a good way to go about this? or is there a more efficient approach of importing data from MySQL into a Pandas data frame than mine?","response":"IMO it would be much more efficient to use pandas for reading data from your MySQL server: \n```\nfrom sqlalchemy import create_engine\nimport pandas as pd\n\ndb_connection_str = 'mysql+pymysql:\/\/mysql_user:mysql_password@mysql_host\/mysql_db'\ndb_connection = create_engine(db_connection_str)\n\ndf = pd.read_sql('SELECT * FROM table_name', con=db_connection)\n```\nthis should also take care of column names...","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37730243\/importing-data-from-a-mysql-database-into-a-pandas-data-frame-including-column-n","best_answers_votes":203,"tags":["python","mysql","sql","pandas","numpy"],"question_length":1572,"response_length":401,"tag_count":5} {"question":"Save \/ load scipy sparse csr_matrix in portable data format How do you save\/load a scipy sparse csr_matrix in a portable format? The scipy sparse matrix is created on Python 3 (Windows 64-bit) to run on Python 2 (Linux 64-bit). 
Initially, I used pickle (with protocol=2 and fix_imports=True) but this didn't work going from Python 3.2.2 (Windows 64-bit) to Python 2.7.2 (Windows 32-bit) and got the error: \n```\nTypeError: ('data type not understood', , (, (0,), '[98]')).\n```\nNext, tried numpy.save and numpy.load as well as scipy.io.mmwrite() and scipy.io.mmread() and none of these methods worked either.","response":"edit: scipy 0.19 now has scipy.sparse.save_npz and scipy.sparse.load_npz. \n```\nfrom scipy import sparse\n\nsparse.save_npz(\"yourmatrix.npz\", your_matrix)\nyour_matrix_back = sparse.load_npz(\"yourmatrix.npz\")\n```\nFor both functions, the file argument may also be a file-like object (i.e. the result of open) instead of a filename. Got an answer from the Scipy user group: A csr_matrix has 3 data attributes that matter: .data, .indices, and .indptr. All are simple ndarrays, so numpy.save will work on them. Save the three arrays with numpy.save or numpy.savez, load them back with numpy.load, and then recreate the sparse matrix object with: \n```\nnew_csr = csr_matrix((data, indices, indptr), shape=(M, N))\n```\nSo for example: \n```\ndef save_sparse_csr(filename, array):\n np.savez(filename, data=array.data, indices=array.indices,\n indptr=array.indptr, shape=array.shape)\n\ndef load_sparse_csr(filename):\n loader = np.load(filename)\n return csr_matrix((loader['data'], loader['indices'], loader['indptr']),\n shape=loader['shape'])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8955448\/save-load-scipy-sparse-csr-matrix-in-portable-data-format","best_answers_votes":146,"tags":["python","numpy","scipy"],"question_length":660,"response_length":1071,"tag_count":3} {"question":"Get year, month or day from numpy datetime64 I have an array of datetime64 type: \n```\ndates = np.datetime64(['2010-10-17', '2011-05-13', \"2012-01-15\"])\n```\nIs there a better way than looping through each element just to get np.array of years: \n```\nyears = f(dates)\n#output:\narray([2010, 2011, 2012], dtype=int8) #or dtype = string\n```\nI'm using stable numpy version 1.6.2.","response":"I find the following tricks give between 2x and 4x speed increase versus the pandas method described in this answer (i.e. pd.DatetimeIndex(dates).year etc.). The speed of [dt.year for dt in dates.astype(object)] I find to be similar to the pandas method. Also these tricks can be applied directly to ndarrays of any shape (2D, 3D etc.) \n```\ndates = np.arange(np.datetime64('2000-01-01'), np.datetime64('2010-01-01'))\nyears = dates.astype('datetime64[Y]').astype(int) + 1970\nmonths = dates.astype('datetime64[M]').astype(int) % 12 + 1\ndays = dates - dates.astype('datetime64[M]') + 1\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13648774\/get-year-month-or-day-from-numpy-datetime64","best_answers_votes":95,"tags":["python","datetime","numpy"],"question_length":372,"response_length":586,"tag_count":3} {"question":"Get index of a row of a pandas dataframe as an integer Assume an easy dataframe, for example \n```\nA B\n0 1 0.810743\n1 2 0.595866\n2 3 0.154888\n3 4 0.472721\n4 5 0.894525\n5 6 0.978174\n6 7 0.859449\n7 8 0.541247\n8 9 0.232302\n9 10 0.276566\n```\nHow can I retrieve an index value of a row, given a condition? For example: dfb = df[df['A']==5].index.values.astype(int) returns [4], but what I would like to get is just 4. This is causing me troubles later in the code. 
Based on some conditions, I want to have a record of the indexes where that condition is fulfilled, and then select rows between. I tried \n```\ndfb = df[df['A']==5].index.values.astype(int)\ndfbb = df[df['A']==8].index.values.astype(int)\ndf.loc[dfb:dfbb,'B']\n```\nfor a desired output \n```\nA B\n4 5 0.894525\n5 6 0.978174\n6 7 0.859449\n```\nbut I get TypeError: '[4]' is an invalid key","response":"The easier is add [0] - select first value of list with one element: \n```\ndfb = df[df['A']==5].index.values.astype(int)[0]\ndfbb = df[df['A']==8].index.values.astype(int)[0]\n```\n```\ndfb = int(df[df['A']==5].index[0])\ndfbb = int(df[df['A']==8].index[0])\n```\nBut if possible some values not match, error is raised, because first value not exist. Solution is use next with iter for get default parameetr if values not matched: \n```\ndfb = next(iter(df[df['A']==5].index), 'no match')\nprint (dfb)\n4\n\ndfb = next(iter(df[df['A']==50].index), 'no match')\nprint (dfb)\nno match\n```\nThen it seems need substract 1: \n```\nprint (df.loc[dfb:dfbb-1,'B'])\n4 0.894525\n5 0.978174\n6 0.859449\nName: B, dtype: float64\n```\nAnother solution with boolean indexing or query: \n```\nprint (df[(df['A'] >= 5) & (df['A'] < 8)])\n A B\n4 5 0.894525\n5 6 0.978174\n6 7 0.859449\n\nprint (df.loc[(df['A'] >= 5) & (df['A'] < 8), 'B'])\n4 0.894525\n5 0.978174\n6 0.859449\nName: B, dtype: float64\n```\n```\nprint (df.query('A >= 5 and A < 8'))\n A B\n4 5 0.894525\n5 6 0.978174\n6 7 0.859449\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41217310\/get-index-of-a-row-of-a-pandas-dataframe-as-an-integer","best_answers_votes":109,"tags":["python","pandas","numpy"],"question_length":891,"response_length":1093,"tag_count":3} {"question":"Numpy: Checking if a value is NaT \n```\nnat = np.datetime64('NaT')\nnat == nat\n>> FutureWarning: In the future, 'NAT == x' and 'x == NAT' will always be False.\n\nnp.isnan(nat)\n>> TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''\n```\nHow can I check if a datetime64 is NaT? I can't seem to dig anything out of the docs. 
I know Pandas can do it, but I'd rather not add a dependency for something so basic.","response":"NumPy has an isnat function as of version 1.13.0: \n```\nimport numpy as np\n\nnp.isnat(np.datetime64(\"NaT\"))\n```\npandas can check for NaT with pandas.isnull: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nimport pandas as pd\n```\n\n```python\npd.isnull(np.datetime64('NaT'))\n#Output\n#True\n#```\n#If you don't want to use pandas you can also define your own function (parts are taken from the pandas source): \n#```\n#nat_as_integer = np.datetime64('NAT').view('i8')\n```\n\ndef isnat(your_datetime):\n dtype_string = str(your_datetime.dtype)\n if 'datetime64' in dtype_string or 'timedelta64' in dtype_string:\n return your_datetime.view('i8') == nat_as_integer\n return False # it can't be a NaT if it's not a dateime\n```\nThis correctly identifies NaT values: \n```\n\n```python\nisnat(np.datetime64('NAT'))\n#Output\n#True\n```\n\n```python\nisnat(np.timedelta64('NAT'))\n#Output\n#True\n#```\n#And realizes if it's not a datetime or timedelta: \n#```\n```\n\n```python\nisnat(np.timedelta64('NAT').view('i8'))\n#Output\n#False\n#```\n#In the future there might be an isnat-function in the numpy code, at least they have a (currently open) pull request about it: Link to the PR (NumPy github)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38509538\/numpy-checking-if-a-value-is-nat","best_answers_votes":119,"tags":["python","numpy"],"question_length":515,"response_length":1069,"tag_count":2} {"question":"Determining duplicate values in an array Suppose I have an array \n```\na = np.array([1, 2, 1, 3, 3, 3, 0])\n```\nHow can I (efficiently, Pythonically) find which elements of a are duplicates (i.e., non-unique values)? In this case the result would be array([1, 3, 3]) or possibly array([1, 3]) if efficient. I've come up with a few methods that appear to work: Masking \n```\nm = np.zeros_like(a, dtype=bool)\nm[np.unique(a, return_index=True)[1]] = True\na[~m]\n```\nSet operations \n```\na[~np.in1d(np.arange(len(a)), np.unique(a, return_index=True)[1], assume_unique=True)]\n```\nThis one is cute but probably illegal (as a isn't actually unique): \n```\nnp.setxor1d(a, np.unique(a), assume_unique=True)\n```\nHistograms \n```\nu, i = np.unique(a, return_inverse=True)\nu[np.bincount(i) > 1]\n```\nSorting \n```\ns = np.sort(a, axis=None)\ns[:-1][s[1:] == s[:-1]]\n```\nPandas \n```\ns = pd.Series(a)\ns[s.duplicated()]\n```\nIs there anything I've missed? I'm not necessarily looking for a numpy-only solution, but it has to work with numpy data types and be efficient on medium-sized data sets (up to 10 million in size). Conclusions Testing with a 10 million size data set (on a 2.8GHz Xeon): \n```\na = np.random.randint(10**7, size=10**7)\n```\nThe fastest is sorting, at 1.1s. The dubious xor1d is second at 2.6s, followed by masking and Pandas Series.duplicated at 3.1s, bincount at 5.6s, and in1d and senderle's setdiff1d both at 7.3s. Steven's Counter is only a little slower, at 10.5s; trailing behind are Burhan's Counter.most_common at 110s and DSM's Counter subtraction at 360s. I'm going to use sorting for performance, but I'm accepting Steven's answer because the performance is acceptable and it feels clearer and more Pythonic. Edit: discovered the Pandas solution. 
If Pandas is available it's clear and performs well.","response":"As of numpy version 1.9.0, np.unique has an argument return_counts which greatly simplifies your task: \n```\nu, c = np.unique(a, return_counts=True)\ndup = u[c > 1]\n```\nThis is similar to using Counter, except you get a pair of arrays instead of a mapping. I'd be curious to see how they perform relative to each other. It's probably worth mentioning that even though np.unique is quite fast in practice due to its numpyness, it has worse algorithmic complexity than the Counter solution. np.unique is sort-based, so runs asymptotically in O(n log n) time. Counter is hash-based, so has O(n) complexity. This will not matter much for anything but the largest datasets.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11528078\/determining-duplicate-values-in-an-array","best_answers_votes":94,"tags":["python","numpy","duplicates","unique"],"question_length":1803,"response_length":666,"tag_count":4} {"question":"Why are NumPy arrays so fast? I just changed a program I am writing to hold my data as numpy arrays as I was having performance issues, and the difference was incredible. It originally took 30 minutes to run and now takes 2.5 seconds! I was wondering how it does it. I assume it is that the because it removes the need for for loops but beyond that I am stumped.","response":"Numpy arrays are densely packed arrays of homogeneous type. Python lists, by contrast, are arrays of pointers to objects, even when all of them are of the same type. So, you get the benefits of locality of reference. Also, many Numpy operations are implemented in C, avoiding the general cost of loops in Python, pointer indirection and per-element dynamic type checking. The speed boost depends on which operations you're performing, but a few orders of magnitude isn't uncommon in number crunching programs.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8385602\/why-are-numpy-arrays-so-fast","best_answers_votes":135,"tags":["python","arrays","numpy"],"question_length":362,"response_length":509,"tag_count":3} {"question":"Python RuntimeWarning: overflow encountered in long scalars I am new to programming. In my latest Python 2.7 project I encountered the following: RuntimeWarning: overflow encountered in long_scalars Could someone please elaborate what this means and what I could do to fix that? The code runs through, but I'm not sure if it is a good idea to just ignore the warning. It happens during an append process like: \n```\nSomeList.append(VeryLongFormula)\n```","response":"Here's an example which issues the same warning: \n```\nimport numpy as np\nnp.seterr(all='warn')\nA = np.array([10])\na=A[-1]\na**a\n```\nyields \n```\nRuntimeWarning: overflow encountered in long_scalars\n```\nIn the example above it happens because a is of dtype int32, and the maximim value storable in an int32 is 2**31-1. Since 10**10 > 2**32-1, the exponentiation results in a number that is bigger than that which can be stored in an int32. Note that you can not rely on np.seterr(all='warn') to catch all overflow errors in numpy. For example, on 32-bit NumPy \n```\n>>> np.multiply.reduce(np.arange(21)+1)\n-1195114496\n```\nwhile on 64-bit NumPy: \n```\n>>> np.multiply.reduce(np.arange(21)+1)\n-4249290049419214848\n```\nBoth fail without any warning, although it is also due to an overflow error. The correct answer is that 21! 
equals \n```python\nimport math\n```\n\n```python\nmath.factorial(21)\n#Output\n#51090942171709440000L\n```\nAccording to numpy developer Robert Kern: Unlike true floating point errors (where the hardware FPU sets a flag whenever it does an atomic operation that overflows), we need to implement the integer overflow detection ourselves. We do it on the scalars, but not arrays because it would be too slow to implement for every atomic operation on arrays. So the burden is on you to choose appropriate dtypes so that no operation overflows.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7559595\/python-runtimewarning-overflow-encountered-in-long-scalars","best_answers_votes":95,"tags":["python","numpy","append","overflow","scalar"],"question_length":451,"response_length":1350,"tag_count":5} {"question":"What is an intuitive explanation of np.unravel_index? I have read the documentation for np.unravel_index and played around with the function, but I can't figure out what it is doing.","response":"Computer memory is addressed linearly. Each memory cell corresponds to a number. A block of memory can be addressed in terms of a base, which is the memory address of its first element, and the item index. For example, assuming the base address is 10,000: \n```\nitem index 0 1 2 3\nmemory address 10,000 10,001 10,002 10,003\n```\nTo store multi-dimensional blocks, their geometry must somehow be made to fit into linear memory. In C and NumPy, this is done row-by-row. A 2D example would be: \n```\n| 0 1 2 3\n--+------------------------\n0 | 0 1 2 3\n1 | 4 5 6 7\n2 | 8 9 10 11\n```\nSo, for example, in this 3-by-4 block the 2D index (1, 2) would correspond to the linear index 6, which is 1 x 4 + 2. unravel_index does the inverse. Given a linear index, it computes the corresponding ND index. Since this depends on the block dimensions, these also have to be passed. So, in our example, we can get the original 2D index (1, 2) back from the linear index 6: \n```python\nnp.unravel_index(6, (3, 4))\n#Output\n#(1, 2)\n```\nNote: The above glosses over a few details. 1) Translating the item index to memory address also has to account for item size. For example, an integer typically has 4 or 8 bytes. So, in the latter case, the memory address for item i would be base + 8 x i. 2) NumPy is a bit more flexible than suggested. It can organize ND data column-by-column if desired. It can even handle data that are not contiguous in memory but for example leave gaps, etc. Bonus reading: internal memory layout of an ndarray","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/48135736\/what-is-an-intuitive-explanation-of-np-unravel-index","best_answers_votes":131,"tags":["python","numpy","indexing","numpy-ndarray","tensor"],"question_length":182,"response_length":1582,"tag_count":5} {"question":"TypeError: only length-1 arrays can be converted to Python scalars while plot showing I have such Python code: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef f(x):\n return np.int(x)\n\nx = np.arange(1, 15.1, 0.1)\nplt.plot(x, f(x))\nplt.show()\n```\nAnd such error: \n```\nTypeError: only length-1 arrays can be converted to Python scalars\n```\nHow can I fix it?","response":"The error \"only length-1 arrays can be converted to Python scalars\" is raised when the function expects a single value but you pass an array instead. 
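You can reproduce it outside of plotting entirely (a minimal sketch based on the question's own code; the exact wording of the message varies between NumPy versions): \n```python\nimport numpy as np\n\nint(np.arange(1, 15.1, 0.1)) # a whole array where a single value is expected\n#Output\n#TypeError: only length-1 arrays can be converted to Python scalars\n```\n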
np.int was an alias for the built-in int, which is deprecated in numpy v1.20. The argument for int should be a scalar and it does not accept array-like objects. In general, if you want to apply a function to each element of the array, you can use np.vectorize: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef f(x):\n return int(x)\nf2 = np.vectorize(f)\nx = np.arange(1, 15.1, 0.1)\nplt.plot(x, f2(x))\nplt.show()\n```\nYou can skip the definition of f(x) and just pass the function int to the vectorize function: f2 = np.vectorize(int). Note that np.vectorize is just a convenience function and basically a for loop. That will be inefficient over large arrays. Whenever you have the possibility, use truly vectorized functions or methods (like astype(int) as @FFT suggests).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36680402\/typeerror-only-length-1-arrays-can-be-converted-to-python-scalars-while-plot-sh","best_answers_votes":104,"tags":["python","numpy"],"question_length":370,"response_length":934,"tag_count":2} {"question":"How do you Unit Test Python DataFrames How do I unit test Python dataframes? I have functions that have an input and output as dataframes. Almost every function I have does this. Now if I want to unit test this what is the best method of doing it? It seems a bit of an effort to create a new dataframe (with values populated) for every function? Are there any materials you can refer me to? Should you write unit tests for these functions?","response":"While Pandas' test functions are primarily used for internal testing, NumPy includes a very useful set of testing functions that are documented here: NumPy Test Support. These functions compare NumPy arrays, but you can get the array that underlies a Pandas DataFrame using the values property. You can define a simple DataFrame and compare what your function returns to what you expect. One technique you can use is to define one set of test data for a number of functions. That way, you can use Pytest Fixtures to define that DataFrame once, and use it in multiple tests. In terms of resources, I found this article on Testing with NumPy and Pandas to be very useful. I also did a short presentation about data analysis testing at PyCon Canada 2016: Automate Your Data Analysis Testing.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41852686\/how-do-you-unit-test-python-dataframes","best_answers_votes":58,"tags":["python","pandas","dataframe","numpy","unit-testing"],"question_length":439,"response_length":788,"tag_count":5} {"question":"How to perform element-wise Boolean operations on NumPy arrays [duplicate] This question already has answers here: Logical operators for Boolean indexing in Pandas (4 answers) Closed 6 years ago. The community reviewed whether to reopen this question 3 years ago and left it closed: Original close reason(s) were not resolved For example, I would like to create a mask that masks elements with value between 40 and 60: \n```\nfoo = np.asanyarray(range(100))\nmask = (foo < 40).__or__(foo > 60)\n```\nWhich just looks ugly. I can't write \n```\n(foo < 40) or (foo > 60)\n```\nbecause I end up with: \n```\nValueError Traceback (most recent call last)\n ...\n ----> 1 (foo < 40) or (foo > 60)\n ValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all()\n```\nIs there a canonical way of doing element-wise Boolean operations on NumPy arrays with good looking code?","response":"Try this: \n```\nmask = (foo < 40) | (foo > 60)\n```\nNote: the __or__ method in an object overloads the bitwise or operator (|), not the Boolean or operator.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8632033\/how-to-perform-element-wise-boolean-operations-on-numpy-arrays","best_answers_votes":127,"tags":["python","numpy","boolean-operations"],"question_length":895,"response_length":154,"tag_count":3} {"question":"What does \".T\" mean for a Numpy array? I saw this example in the SciPy documentation: \n```\nx, y = np.random.multivariate_normal(mean, cov, 5000).T\n```\nWhat does the final .T actually do here?","response":"The .T accesses the attribute T of the object, which happens to be a NumPy array. The T attribute is the transpose of the array, see the documentation. Apparently you are creating random coordinates in the plane. The output of multivariate_normal() might look like this: \n```\n\n```python\nnp.random.multivariate_normal([0, 0], [[1, 0], [0, 1]], 5) \n#Output\n#array([[ 0.59589335, 0.97741328],\n# [-0.58597307, 0.56733234],\n# [-0.69164572, 0.17840394],\n# [-0.24992978, -2.57494471],\n# [ 0.38896689, 0.82221377]])\n#```\n#The transpose of this matrix is: \n#```\n#array([[ 0.59589335, -0.58597307, -0.69164572, -0.24992978, 0.38896689],\n# [ 0.97741328, 0.56733234, 0.17840394, -2.57494471, 0.82221377]])\n#```\n#which can be conveniently separated in x and y parts by sequence unpacking.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5741372\/what-does-t-mean-for-a-numpy-array","best_answers_votes":98,"tags":["python","numpy"],"question_length":191,"response_length":787,"tag_count":2} {"question":"pandas equivalent of np.where np.where has the semantics of a vectorized if\/else (similar to Apache Spark's when\/otherwise DataFrame method). I know that I can use np.where on pandas.Series, but pandas often defines its own API to use instead of raw numpy functions, which is usually more convenient with pd.Series\/pd.DataFrame. Sure enough, I found pandas.DataFrame.where. However, at first glance, it has completely different semantics. I could not find a way to rewrite the most basic example of np.where using pandas where: \n```\n# df is pd.DataFrame\n# how to write this using df.where?\ndf['C'] = np.where((df['A']<0) | (df['B']>0), df['A']+df['B'], df['A']\/df['B'])\n```\nAm I missing something obvious? Or is pandas' where intended for a completely different use case, despite same name as np.where?","response":"Try: \n```\n(df['A'] + df['B']).where((df['A'] < 0) | (df['B'] > 0), df['A'] \/ df['B'])\n```\nThe difference between the numpy where and DataFrame where is that the default values are supplied by the DataFrame that the where method is being called on (docs). I.e. 
\n```\nnp.where(m, A, B)\n```\nis roughly equivalent to \n```\nA.where(m, B)\n```\nIf you wanted a similar call signature using pandas, you could take advantage of the way method calls work in Python: \n```\npd.DataFrame.where(cond=(df['A'] < 0) | (df['B'] > 0), self=df['A'] + df['B'], other=df['A'] \/ df['B'])\n```\nor without kwargs (Note: that the positional order of arguments is different from the numpy where argument order): \n```\npd.DataFrame.where(df['A'] + df['B'], (df['A'] < 0) | (df['B'] > 0), df['A'] \/ df['B'])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38579532\/pandas-equivalent-of-np-where","best_answers_votes":81,"tags":["python","pandas","numpy","where-clause"],"question_length":802,"response_length":777,"tag_count":4} {"question":"SQL-like window functions in PANDAS: Row Numbering in Python Pandas Dataframe I come from a sql background and I use the following data processing step frequently: Partition the table of data by one or more fields For each partition, add a rownumber to each of its rows that ranks the row by one or more other fields, where the analyst specifies ascending or descending EX: \n```\ndf = pd.DataFrame({'key1' : ['a','a','a','b','a'],\n 'data1' : [1,2,2,3,3],\n 'data2' : [1,10,2,3,30]})\ndf\n data1 data2 key1 \n0 1 1 a \n1 2 10 a \n2 2 2 a \n3 3 3 b \n4 3 30 a\n```\nI'm looking for how to do the PANDAS equivalent to this sql window function: \n```\nRN = ROW_NUMBER() OVER (PARTITION BY Key1 ORDER BY Data1 ASC, Data2 DESC)\n\n\n data1 data2 key1 RN\n0 1 1 a 1 \n1 2 10 a 2 \n2 2 2 a 3\n3 3 3 b 1\n4 3 30 a 4\n```\nI've tried the following which I've gotten to work where there are no 'partitions': \n```\ndef row_number(frame,orderby_columns, orderby_direction,name):\n frame.sort_index(by = orderby_columns, ascending = orderby_direction, inplace = True)\n frame[name] = list(xrange(len(frame.index)))\n```\nI tried to extend this idea to work with partitions (groups in pandas) but the following didn't work: \n```\ndf1 = df.groupby('key1').apply(lambda t: t.sort_index(by=['data1', 'data2'], ascending=[True, False], inplace = True)).reset_index()\n\ndef nf(x):\n x['rn'] = list(xrange(len(x.index)))\n\ndf1['rn1'] = df1.groupby('key1').apply(nf)\n```\nBut I just got a lot of NaNs when I do this. Ideally, there'd be a succinct way to replicate the window function capability of sql (i've figured out the window based aggregates...that's a one liner in pandas)...can someone share with me the most idiomatic way to number rows like this in PANDAS?","response":"you can also use sort_values(), groupby() and finally cumcount() + 1: \n```\ndf['RN'] = df.sort_values(['data1','data2'], ascending=[True,False]) \\\n .groupby(['key1']) \\\n .cumcount() + 1\nprint(df)\n```\nyields: \n```\ndata1 data2 key1 RN\n0 1 1 a 1\n1 2 10 a 2\n2 2 2 a 3\n3 3 3 b 1\n4 3 30 a 4\n```\nPS tested with pandas 0.18","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17775935\/sql-like-window-functions-in-pandas-row-numbering-in-python-pandas-dataframe","best_answers_votes":123,"tags":["python","pandas","numpy","dataframe"],"question_length":2054,"response_length":413,"tag_count":4} {"question":"In Python NumPy what is a dimension and axis? I am coding with Pythons NumPy module. If coordinates of a point in 3D space are described as [1, 2, 1], wouldn't that be three dimensions, three axis, a rank of three? 
Or if that is one dimension then shouldn't it be points (plural), not point? Here is the documentation: In Numpy dimensions are called axes. The number of axes is rank. For example, the coordinates of a point in 3D space [1, 2, 1] is an array of rank 1, because it has one axis. That axis has a length of 3. Source: Link","response":"In numpy arrays, dimensionality refers to the number of axes needed to index it, not the dimensionality of any geometrical space. For example, you can describe the locations of points in 3D space with a 2D array: \n```py\narray([[0, 0, 0],\n [1, 2, 3],\n [2, 2, 2],\n [9, 9, 9]])\n```\nWhich has shape of (4, 3) and dimension 2. But it can describe 3D space because the length of each row (axis 1) is three, so each row can be the x, y, and z component of a point's location. The length of axis 0 indicates the number of points (here, 4). However, that is more of an application to the math that the code is describing, not an attribute of the array itself. In mathematics, the dimension of a vector would be its length (e.g., x, y, and z components of a 3d vector), but in numpy, any \"vector\" is really just considered a 1d array of varying length. The array doesn't care what the dimension of the space (if any) being described is. You can play around with this, and see the number of dimensions and shape of an array like so: \n```py\n\n```python\na = np.arange(9)\n```\n\n```python\na\n#Output\n#array([0, 1, 2, 3, 4, 5, 6, 7, 8])\n```\n\n```python\na.ndim # number of dimensions\n#Output\n#1\n```\n\n```python\na.shape\n#Output\n#(9,)\n```\n\n```python\nb = np.array([[0,0,0],[1,2,3],[2,2,2],[9,9,9]])\n```\n\n```python\nb\n#Output\n#array([[0, 0, 0],\n# [1, 2, 3],\n# [2, 2, 2],\n# [9, 9, 9]])\n```\n\n```python\nb.ndim\n#Output\n#2\n```\n\n```python\nb.shape\n#Output\n#(4, 3)\n#```\n#Arrays can have many dimensions, but they become hard to visualize above two or three: \n#```py\n```\n\n```python\nc = np.random.rand(2,2,3,4)\n```\n\n```python\nc\n#Output\n#array([[[[ 0.33018579, 0.98074944, 0.25744133, 0.62154557],\n# [ 0.70959511, 0.01784769, 0.01955593, 0.30062579],\n# [ 0.83634557, 0.94636324, 0.88823617, 0.8997527 ]],\n#\n# [[ 0.4020885 , 0.94229555, 0.309992 , 0.7237458 ],\n# [ 0.45036185, 0.51943908, 0.23432001, 0.05226692],\n# [ 0.03170345, 0.91317231, 0.11720796, 0.31895275]]],\n#\n#\n# [[[ 0.47801989, 0.02922993, 0.12118226, 0.94488471],\n# [ 0.65439109, 0.77199972, 0.67024853, 0.27761443],\n# [ 0.31602327, 0.42678546, 0.98878701, 0.46164756]],\n#\n# [[ 0.31585844, 0.80167337, 0.17401188, 0.61161196],\n# [ 0.74908902, 0.45300247, 0.68023488, 0.79672751],\n# [ 0.23597218, 0.78416727, 0.56036792, 0.55973686]]]])\n```\n\n```python\nc.ndim\n#Output\n#4\n```\n\n```python\nc.shape\n#Output\n#(2, 2, 3, 4)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19389910\/in-python-numpy-what-is-a-dimension-and-axis","best_answers_votes":109,"tags":["python","numpy"],"question_length":535,"response_length":2446,"tag_count":2} {"question":"Histogram values of a Pandas Series I have some values in a Python Pandas Series (type: pandas.core.series.Series) \n```\n\n```python\nseries = 
pd.Series([0.0,950.0,-70.0,812.0,0.0,-90.0,0.0,0.0,-90.0,0.0,-64.0,208.0,0.0,-90.0,0.0,-80.0,0.0,0.0,-80.0,-48.0,840.0,-100.0,190.0,130.0,-100.0,-100.0,0.0,-50.0,0.0,-100.0,-100.0,0.0,-90.0,0.0,-90.0,-90.0,63.0,-90.0,0.0,0.0,-90.0,-80.0,0.0,])\n```\n\n```python\nseries.min()\n#Output\n#-100.0\n```\n\n```python\nseries.max()\n#Output\n#950.0\n```\nI would like to get the values of a histogram (not necessarily plotting the histogram)... I just need to get the frequency for each interval. Let's say that my intervals are going from [-200; -150] to [950; 1000] so lower bounds are \n```\nlwb = range(-200,1000,50)\n```\nand upper bounds are \n```\nupb = range(-150,1050,50)\n```\nI don't know how to get the frequency (the number of values that are inside each interval) now... I'm sure that defining lwb and upb is not necessary... but I don't know what function I should use to perform this! (after diving into the Pandas doc, I think the cut function can help me because it's a discretization problem... but I don't understand how to use it) After being able to do this, I will have a look at the way to display the histogram (but that's another problem)","response":"You just need to use the histogram function of NumPy: \n```\nimport numpy as np\ncount, division = np.histogram(series)\n```\nwhere division is the automatically calculated border for your bins and count is the population inside each bin. If you need to fix a certain number of bins, you can use the argument bins and specify a number of bins, or give it directly the boundaries between each bin. \n```\ncount, division = np.histogram(series, bins = [-201,-149,949,1001])\n```\nTo plot the results you can use the matplotlib function hist, but if you are working in pandas each Series has its own handle to the hist function, and you can give it the chosen binning: \n```\nseries.hist(bins=division)\n```\nEdit: As mentioned by another poster, Pandas is built on top of NumPy. Since OP is explicitly using Pandas, we can do away with the additional import by accessing NumPy through Pandas: \n```py\ncount, division = pd.np.histogram(series)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13129618\/histogram-values-of-a-pandas-series","best_answers_votes":122,"tags":["python","pandas","numpy","matplotlib"],"question_length":1232,"response_length":930,"tag_count":4} {"question":"Is there special significance to 16331239353195370.0? Using import numpy as np I've noticed that \n```\nnp.tan(np.pi\/2)\n```\ngives the number in the title and not np.inf \n```\n16331239353195370.0\n```\nI'm curious about this number. Is it related to some system machine precision parameter? Could I have calculated it from something? (I'm thinking along the lines of something similar to sys.float_info) EDIT: The same result is indeed reproducible in other environments such as Java, Octave, MATLAB... The suggested dupe does not explain why, though.","response":"pi isn't exactly representable as a Python float (same as the platform C's double type). The closest representable approximation is used. 
Here's the exact approximation in use on my box (probably the same as on your box): \n```\n\n```python\nimport math\n```\n\n```python\n(math.pi \/ 2).as_integer_ratio()\n#Output\n#(884279719003555, 562949953421312)\n#```\n#To find the tangent of that ratio, I'm going to switch to wxMaxima now: \n#```\n#(%i1) fpprec: 32;\n#(%o1) 32\n#(%i2) tan(bfloat(884279719003555) \/ 562949953421312);\n#(%o2) 1.6331239353195369755967737041529b16\n#```\n#So essentially identical to what you got. The binary approximation to pi\/2 used is a little bit less than the mathematical (\"infinite precision\") value of pi\/2. So you get a very large tangent instead of infinity. The computed tan() is appropriate for the actual input! For exactly the same kinds of reasons, e.g., \n#```\n```\n\n```python\nmath.sin(math.pi)\n#Output\n#1.2246467991473532e-16\n#```\n#doesn't return 0. The approximation math.pi is a little bit less than pi, and the displayed result is correct given that truth. OTHER WAYS OF SEEING math.pi There are several ways to see the exact approximation in use: \n#```\n```\n\n```python\nimport math\n```\n\n```python\nmath.pi.as_integer_ratio()\n#Output\n#(884279719003555, 281474976710656)\n#```\n#math.pi is exactly equal to the mathematical (\"infinite precision\") value of that ratio. Or as an exact float in hex notation: \n#```\n```\n\n```python\nmath.pi.hex()\n#Output\n#'0x1.921fb54442d18p+1'\n#```\n#Or in a way most easily understood by just about everyone: \n#```\n```\n\n```python\nimport decimal\n```\n\n```python\ndecimal.Decimal(math.pi)\n#Output\n#Decimal('3.141592653589793115997963468544185161590576171875')\n#```\n#While it may not be immediately obvious, every finite binary float is exactly representable as a finite decimal float (the reverse is not true; e.g. the decimal 0.1 is not exactly representable as a finite binary float), and the Decimal(some_float) constructor produces the exact equivalent. Here's the true value of pi followed by the exact decimal value of math.pi, and a caret on the third line points to the first digit where they differ: \n#```\n#true 3.14159265358979323846264338327950288419716939937510...\n#math.pi 3.141592653589793115997963468544185161590576171875\n# ^\n#```\n#math.pi is the same across \"almost all\" boxes now, because almost all boxes now use the same binary floating-point format (IEEE 754 double precision). You can use any of the ways above to confirm that on your box, or to find the precise approximation in use if your box is an exception.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38295501\/is-there-special-significance-to-16331239353195370-0","best_answers_votes":122,"tags":["python","numpy","numerical-methods"],"question_length":545,"response_length":2444,"tag_count":3} {"question":"Resampling a numpy array representing an image I am looking for how to resample a numpy array representing image data at a new size, preferably having a choice of the interpolation method (nearest, bilinear, etc.). I know there is \n```\nscipy.misc.imresize\n```\nwhich does exactly this by wrapping PIL's resize function. The only problem is that since it uses PIL, the numpy array has to conform to image formats, giving me a maximum of 4 \"color\" channels. I want to be able to resize arbitrary images, with any number of \"color\" channels. I was wondering if there is a simple way to do this in scipy\/numpy, or if I need to roll my own. 
I have two ideas for how to concoct one myself: a function that runs scipy.misc.imresize on every channel separately create my own using scipy.ndimage.interpolation.affine_transform The first one would probably be slow for large data, and the second one does not seem to offer any other interpolation method except splines.","response":"Based on your description, you want scipy.ndimage.zoom. Bilinear interpolation would be order=1, nearest is order=0, and cubic is the default (order=3). zoom is specifically for regularly-gridded data that you want to resample to a new resolution. As a quick example: \n```\nimport numpy as np\nimport scipy.ndimage\n\nx = np.arange(9).reshape(3,3)\n\nprint 'Original array:'\nprint x\n\nprint 'Resampled by a factor of 2 with nearest interpolation:'\nprint scipy.ndimage.zoom(x, 2, order=0)\n\n\nprint 'Resampled by a factor of 2 with bilinear interpolation:'\nprint scipy.ndimage.zoom(x, 2, order=1)\n\n\nprint 'Resampled by a factor of 2 with cubic interpolation:'\nprint scipy.ndimage.zoom(x, 2, order=3)\n```\nAnd the result: \n```\nOriginal array:\n[[0 1 2]\n [3 4 5]\n [6 7 8]]\nResampled by a factor of 2 with nearest interpolation:\n[[0 0 1 1 2 2]\n [0 0 1 1 2 2]\n [3 3 4 4 5 5]\n [3 3 4 4 5 5]\n [6 6 7 7 8 8]\n [6 6 7 7 8 8]]\nResampled by a factor of 2 with bilinear interpolation:\n[[0 0 1 1 2 2]\n [1 2 2 2 3 3]\n [2 3 3 4 4 4]\n [4 4 4 5 5 6]\n [5 5 6 6 6 7]\n [6 6 7 7 8 8]]\nResampled by a factor of 2 with cubic interpolation:\n[[0 0 1 1 2 2]\n [1 1 1 2 2 3]\n [2 2 3 3 4 4]\n [4 4 5 5 6 6]\n [5 6 6 7 7 7]\n [6 6 7 7 8 8]]\n```\nEdit: As Matt S. pointed out, there are a couple of caveats for zooming multi-band images. I'm copying the portion below almost verbatim from one of my earlier answers: Zooming also works for 3D (and nD) arrays. However, be aware that if you zoom by 2x, for example, you'll zoom along all axes. \n```\ndata = np.arange(27).reshape(3,3,3)\nprint 'Original:\\n', data\nprint 'Zoomed by 2x gives an array of shape:', ndimage.zoom(data, 2).shape\n```\nThis yields: \n```\nOriginal:\n[[[ 0 1 2]\n [ 3 4 5]\n [ 6 7 8]]\n\n [[ 9 10 11]\n [12 13 14]\n [15 16 17]]\n\n [[18 19 20]\n [21 22 23]\n [24 25 26]]]\nZoomed by 2x gives an array of shape: (6, 6, 6)\n```\nIn the case of multi-band images, you usually don't want to interpolate along the \"z\" axis, creating new bands. If you have something like a 3-band, RGB image that you'd like to zoom, you can do this by specifying a sequence of tuples as the zoom factor: \n```\nprint 'Zoomed by 2x along the last two axes:'\nprint ndimage.zoom(data, (1, 2, 2))\n```\nThis yields: \n```\nZoomed by 2x along the last two axes:\n[[[ 0 0 1 1 2 2]\n [ 1 1 1 2 2 3]\n [ 2 2 3 3 4 4]\n [ 4 4 5 5 6 6]\n [ 5 6 6 7 7 7]\n [ 6 6 7 7 8 8]]\n\n [[ 9 9 10 10 11 11]\n [10 10 10 11 11 12]\n [11 11 12 12 13 13]\n [13 13 14 14 15 15]\n [14 15 15 16 16 16]\n [15 15 16 16 17 17]]\n\n [[18 18 19 19 20 20]\n [19 19 19 20 20 21]\n [20 20 21 21 22 22]\n [22 22 23 23 24 24]\n [23 24 24 25 25 25]\n [24 24 25 25 26 26]]]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13242382\/resampling-a-numpy-array-representing-an-image","best_answers_votes":134,"tags":["python","image-processing","numpy","scipy","python-imaging-library"],"question_length":958,"response_length":2651,"tag_count":5} {"question":"Fitting a Normal distribution to 1D data I have a 1 dimensional array. 
I can compute the \"mean\" and \"standard deviation\" of this sample and plot the \"Normal distribution\" but I have a problem: I want to plot the data and Normal distribution in the same figure. I dont know how to plot both the data and the normal distribution. Any Idea about \"Gaussian probability density function in scipy.stats\"? \n```\ns = np.std(array)\nm = np.mean(array)\nplt.plot(norm.pdf(array,m,s))\n```","response":"You can use matplotlib to plot the histogram and the PDF (as in the link in @MrE's answer). For fitting and for computing the PDF, you can use scipy.stats.norm, as follows. \n```\nimport numpy as np\nfrom scipy.stats import norm\nimport matplotlib.pyplot as plt\n\n\n# Generate some data for this demonstration.\ndata = norm.rvs(10.0, 2.5, size=500)\n\n# Fit a normal distribution to the data:\nmu, std = norm.fit(data)\n\n# Plot the histogram.\nplt.hist(data, bins=25, density=True, alpha=0.6, color='g')\n\n# Plot the PDF.\nxmin, xmax = plt.xlim()\nx = np.linspace(xmin, xmax, 100)\np = norm.pdf(x, mu, std)\nplt.plot(x, p, 'k', linewidth=2)\ntitle = \"Fit results: mu = %.2f, std = %.2f\" % (mu, std)\nplt.title(title)\n\nplt.show()\n```\nHere's the plot generated by the script:","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20011122\/fitting-a-normal-distribution-to-1d-data","best_answers_votes":191,"tags":["python","numpy","matplotlib","scipy"],"question_length":474,"response_length":755,"tag_count":4} {"question":"Concatenate two NumPy arrays vertically I tried the following: \n```\n\n```python\na = np.array([1,2,3])\n```\n\n```python\nb = np.array([4,5,6])\n```\n\n```python\nnp.concatenate((a,b), axis=0)\n#Output\n#array([1, 2, 3, 4, 5, 6])\n```\n\n```python\nnp.concatenate((a,b), axis=1)\n#Output\n#array([1, 2, 3, 4, 5, 6])\n#```\n#However, I'd expect at least that one result looks like this \n#```\n#array([[1, 2, 3],\n# [4, 5, 6]])\n#```\n#Why is it not concatenated vertically?\n```","response":"Because both a and b have only one axis, as their shape is (3), and the axis parameter specifically refers to the axis of the elements to concatenate. this example should clarify what concatenate is doing with axis. 
Take two vectors with two axis, with shape (2,3): \n```\na = np.array([[1,5,9], [2,6,10]])\nb = np.array([[3,7,11], [4,8,12]])\n```\nconcatenates along the 1st axis (rows of the 1st, then rows of the 2nd): \n```\nnp.concatenate((a,b), axis=0)\narray([[ 1, 5, 9],\n [ 2, 6, 10],\n [ 3, 7, 11],\n [ 4, 8, 12]])\n```\nconcatenates along the 2nd axis (columns of the 1st, then columns of the 2nd): \n```\nnp.concatenate((a, b), axis=1)\narray([[ 1, 5, 9, 3, 7, 11],\n [ 2, 6, 10, 4, 8, 12]])\n```\nto obtain the output you presented, you can use vstack \n```\na = np.array([1,2,3])\nb = np.array([4,5,6])\nnp.vstack((a, b))\narray([[1, 2, 3],\n [4, 5, 6]])\n```\nYou can still do it with concatenate, but you need to reshape them first: \n```\nnp.concatenate((a.reshape(1,3), b.reshape(1,3)))\narray([[1, 2, 3],\n [4, 5, 6]])\n```\nFinally, as proposed in the comments, one way to reshape them is to use newaxis: \n```\nnp.concatenate((a[np.newaxis,:], b[np.newaxis,:]))\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21887754\/concatenate-two-numpy-arrays-vertically","best_answers_votes":115,"tags":["python","arrays","numpy","concatenation"],"question_length":389,"response_length":1199,"tag_count":4} {"question":"Ignore divide by 0 warning in NumPy I have a function for statistic issues: \n```\nimport numpy as np\nfrom scipy.special import gamma as Gamma\n\ndef Foo(xdata):\n ...\n return x1 * (\n ( #R is a numpy vector\n ( ((R - x2)\/beta) ** (x3 -1) ) * \n ( np.exp( - ((R - x2) \/ x4) ) ) \/\n ( x4 * Gamma(x3))\n ).real\n )\n```\nSometimes I get from the shell the following warning: \n```\nRuntimeWarning: divide by zero encountered in...\n```\nI use the numpy isinf function to correct the results of the function in other files, so I do not need this warning. Is there a way to ignore the message? In other words, I do not want the shell to print this message. I do not want to disable all python warnings, just this one.","response":"You can disable the warning with numpy.seterr. Put this before the possible division by zero: \n```\nnp.seterr(divide='ignore')\n```\nThat'll disable zero division warnings globally. If you just want to disable them for a little bit, you can use numpy.errstate in a with clause: \n```\nwith np.errstate(divide='ignore'):\n # some code here\n```\nFor a zero by zero division (undetermined, results in a NaN), the error behaviour has changed with numpy version 1.8.0: this is now considered \"invalid\", while previously it was \"divide\". Thus, if there is a chance you your numerator could be zero as well, use \n```\nnp.seterr(divide='ignore', invalid='ignore')\n```\nor \n```\nwith np.errstate(divide='ignore', invalid='ignore'):\n # some code here\n```\nSee the \"Compatibility\" section in the release notes, last paragraph before the \"New Features\" section: Comparing NaN floating point numbers now raises the invalid runtime warning. If a NaN is expected the warning can be ignored using np.errstate.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29950557\/ignore-divide-by-0-warning-in-numpy","best_answers_votes":209,"tags":["python","numpy","suppress-warnings","divide-by-zero"],"question_length":800,"response_length":988,"tag_count":4} {"question":"Inverse of a matrix using numpy I'd like to use numpy to calculate the inverse. 
But I'm getting an error: \n```\n'numpy.ndarry' object has no attribute I\n```\nTo calculate inverse of a matrix in numpy, say matrix M, it should be simply: print M.I Here's the code: \n```\nx = numpy.empty((3,3), dtype=int)\nfor comb in combinations_with_replacement(range(10), 9):\n x.flat[:] = comb\n print x.I\n```\nI'm presuming, this error occurs because x is now flat, thus 'I' command is not compatible. Is there a work around for this? My goal is to print the INVERSE MATRIX of every possible numerical matrix combination.","response":"The I attribute only exists on matrix objects, not ndarrays. You can use numpy.linalg.inv to invert arrays: \n```\ninverse = numpy.linalg.inv(x)\n```\nNote that the way you're generating matrices, not all of them will be invertible. You will either need to change the way you're generating matrices, or skip the ones that aren't invertible. \n```\ntry:\n inverse = numpy.linalg.inv(x)\nexcept numpy.linalg.LinAlgError:\n # Not invertible. Skip this one.\n pass\nelse:\n # continue with what you were doing\n```\nAlso, if you want to go through all 3x3 matrices with elements drawn from [0, 10), you want the following: \n```\nfor comb in itertools.product(range(10), repeat=9):\n```\nrather than combinations_with_replacement, or you'll skip matrices like \n```\nnumpy.array([[0, 1, 0],\n [0, 0, 0],\n [0, 0, 0]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21638895\/inverse-of-a-matrix-using-numpy","best_answers_votes":114,"tags":["python","numpy","matrix"],"question_length":605,"response_length":831,"tag_count":3} {"question":"How to turn a boolean array into index array in numpy Is there an efficient Numpy mechanism to retrieve the integer indexes of locations in an array based on a condition is true as opposed to the Boolean mask array? For example: \n```\nx=np.array([range(100,1,-1)])\n#generate a mask to find all values that are a power of 2\nmask=x&(x-1)==0\n#This will tell me those values\nprint x[mask]\n```\nIn this case, I'd like to know the indexes i of mask where mask[i]==True. Is it possible to generate these without looping?","response":"Another option: \n```\n\n```python\nnumpy.where(mask)\n#Output\n#(array([36, 68, 84, 92, 96, 98]),)\n#```\n#which is the same thing as numpy.where(mask==True).\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8218032\/how-to-turn-a-boolean-array-into-index-array-in-numpy","best_answers_votes":111,"tags":["python","arrays","numpy"],"question_length":511,"response_length":147,"tag_count":3} {"question":"Mesh grid functions in Python (meshgrid mgrid ogrid ndgrid) I'm looking for a clear comparison of meshgrid-like functions. Unfortunately I don't find it! Numpy http:\/\/docs.scipy.org\/doc\/numpy\/reference\/ provides mgrid ogrid meshgrid Scitools http:\/\/hplgit.github.io\/scitools\/doc\/api\/html\/index.html provides ndgrid boxgrid Ideally a table summarizing all this would be perfect!","response":"numpy.meshgrid is modelled after Matlab's meshgrid command. It is used to vectorise functions of two variables, so that you can write \n```\nx = numpy.array([1, 2, 3])\ny = numpy.array([10, 20, 30]) \nXX, YY = numpy.meshgrid(x, y)\nZZ = XX + YY\n\nZZ => array([[11, 12, 13],\n [21, 22, 23],\n [31, 32, 33]])\n```\nSo ZZ contains all the combinations of x and y put into the function. When you think about it, meshgrid is a bit superfluous for numpy arrays, as they broadcast. 
This means you can do \n```\nXX, YY = numpy.atleast_2d(x, y)\nYY = YY.T # transpose to allow broadcasting\nZZ = XX + YY\n```\nand get the same result. mgrid and ogrid are helper classes which use index notation so that you can create XX and YY in the previous examples directly, without having to use something like linspace. The order in which the output are generated is reversed. \n```\nYY, XX = numpy.mgrid[10:40:10, 1:4]\nZZ = XX + YY # These are equivalent to the output of meshgrid\n\nYY, XX = numpy.ogrid[10:40:10, 1:4]\nZZ = XX + YY # These are equivalent to the atleast_2d example\n```\nI am not familiar with the scitools stuff, but ndgrid seems equivalent to meshgrid, while BoxGrid is actually a whole class to help with this kind of generation.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12402045\/mesh-grid-functions-in-python-meshgrid-mgrid-ogrid-ndgrid","best_answers_votes":93,"tags":["python","numpy","scipy"],"question_length":377,"response_length":1233,"tag_count":3} {"question":"How to generate audio from a numpy array? I want to create \"heart rate monitor\" effect from a 2D array in numpy and want the tone to reflect the values in the array.","response":"You can use the write function from scipy.io.wavfile to create a wav file which you can then play however you wish. Note that the array must be integers, so if you have floats, you might want to scale them appropriately: \n```\nimport numpy as np\nfrom scipy.io.wavfile import write\n\nrate = 44100\ndata = np.random.uniform(-1, 1, rate) # 1 second worth of random samples between -1 and 1\nscaled = np.int16(data \/ np.max(np.abs(data)) * 32767)\nwrite('test.wav', rate, scaled)\n```\nIf you want Python to actually play audio, then this page provides an overview of some of the packages\/modules.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10357992\/how-to-generate-audio-from-a-numpy-array","best_answers_votes":112,"tags":["python","audio","numpy","matplotlib"],"question_length":165,"response_length":586,"tag_count":4} {"question":"How can numpy be so much faster than my Fortran routine? I get a 512^3 array representing a Temperature distribution from a simulation (written in Fortran). The array is stored in a binary file that's about 1\/2G in size. I need to know the minimum, maximum and mean of this array and as I will soon need to understand Fortran code anyway, I decided to give it a go and came up with the following very easy routine. \n```\ninteger gridsize,unit,j\n real mini,maxi\n double precision mean\n\n gridsize=512\n unit=40\n open(unit=unit,file='T.out',status='old',access='stream',&\n form='unformatted',action='read')\n read(unit=unit) tmp\n mini=tmp\n maxi=tmp\n mean=tmp\n do j=2,gridsize**3\n read(unit=unit) tmp\n if(tmp>maxi)then\n maxi=tmp\n elseif(tmp in \n----> 1 foo()\n\n in foo()\n 1 def foo():\n 2 print('one')\n----> 3 x = np.array([[1],[1,2]])\n 4 return x\n 5 \n\nVisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n```\nThe error gives a traceback telling me where the warning was raised. There may be ways of refining the warning filter to catch just this one, and not others of the same category. I haven't used this mechanism much. 
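For what it's worth, here is an untested sketch of a narrower filter (the category is a real NumPy class; treat the exact placement in your code as an assumption on my part): \n```\nimport warnings\nimport numpy as np\n\n# Turn only this warning category into an error, leaving other warnings alone\nwarnings.filterwarnings('error', category=np.VisibleDeprecationWarning)\n```\nThat way other deprecation warnings keep their default behaviour while this one still produces a traceback. 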
Read np.warnings.filterwarnings docs for more details.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/63097829\/debugging-numpy-visibledeprecationwarning-ndarray-from-ragged-nested-sequences","best_answers_votes":88,"tags":["python-3.x","numpy","debugging"],"question_length":609,"response_length":2263,"tag_count":3} {"question":"Iterating over arbitrary dimension of numpy.array Is there function to get an iterator over an arbitrary dimension of a numpy array? Iterating over the first dimension is easy... \n```\n\n```python\nc = numpy.arange(24).reshape(2,3,4)\n```\n\n```python\nfor r in c :\n ....: print r\n ....: \n[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n[[12 13 14 15]\n [16 17 18 19]\n [20 21 22 23]]\n```\nBut iterating over other dimensions is harder. For example, the last dimension: \n```\n```\n\n```python\nfor r in c.swapaxes(2,0).swapaxes(1,2) :\n ....: print r\n ....: \n[[ 0 4 8]\n [12 16 20]]\n[[ 1 5 9]\n [13 17 21]]\n[[ 2 6 10]\n [14 18 22]]\n[[ 3 7 11]\n [15 19 23]]\n```\nI'm making a generator to do this myself, but I'm surprised there isn't a function named something like numpy.ndarray.iterdim(axis=0) to do this automatically.\n```","response":"What you propose is quite fast, but the legibility can be improved with the clearer forms: \n```\nfor i in range(c.shape[-1]):\n print c[:,:,i]\n```\nor, better (faster, more general and more explicit): \n```\nfor i in range(c.shape[-1]):\n print c[...,i]\n```\nHowever, the first approach above appears to be about twice as slow as the swapaxes() approach: \n```\npython -m timeit -s 'import numpy; c = numpy.arange(24).reshape(2,3,4)' \\\n 'for r in c.swapaxes(2,0).swapaxes(1,2): u = r'\n100000 loops, best of 3: 3.69 usec per loop\n\npython -m timeit -s 'import numpy; c = numpy.arange(24).reshape(2,3,4)' \\\n 'for i in range(c.shape[-1]): u = c[:,:,i]'\n100000 loops, best of 3: 6.08 usec per loop\n\npython -m timeit -s 'import numpy; c = numpy.arange(24).reshape(2,3,4)' \\\n 'for r in numpy.rollaxis(c, 2): u = r'\n100000 loops, best of 3: 6.46 usec per loop\n```\nI would guess that this is because swapaxes() does not copy any data, and because the handling of c[:,:,i] might be done through general code (that handles the case where : is replaced by a more complicated slice). Note however that the more explicit second solution c[...,i] is both quite legible and quite fast: \n```\npython -m timeit -s 'import numpy; c = numpy.arange(24).reshape(2,3,4)' \\\n 'for i in range(c.shape[-1]): u = c[...,i]'\n100000 loops, best of 3: 4.74 usec per loop\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1589706\/iterating-over-arbitrary-dimension-of-numpy-array","best_answers_votes":75,"tags":["python","numpy","loops"],"question_length":808,"response_length":1350,"tag_count":3} {"question":"Numpy \u2018smart\u2019 symmetric matrix Is there a smart and space-efficient symmetric matrix in numpy which automatically (and transparently) fills the position at [j][i] when [i][j] is written to? 
\n```\nimport numpy\na = numpy.symmetric((3, 3))\na[0][1] = 1\na[1][0] == a[0][1]\n# True\nprint(a)\n# [[0 1 0], [1 0 0], [0 0 0]]\n\nassert numpy.all(a == a.T) # for any symmetric matrix\n```\nAn automatic Hermitian would also be nice, although I won\u2019t need that at the time of writing.","response":"If you can afford to symmetrize the matrix just before doing calculations, the following should be reasonably fast: \n```\ndef symmetrize(a):\n \"\"\"\n Return a symmetrized version of NumPy array a.\n\n Values 0 are replaced by the array value at the symmetric\n position (with respect to the diagonal), i.e. if a_ij = 0,\n then the returned array a' is such that a'_ij = a_ji.\n\n Diagonal values are left untouched.\n\n a -- square NumPy array, such that a_ij = 0 or a_ji = 0, \n for i != j.\n \"\"\"\n return a + a.T - numpy.diag(a.diagonal())\n```\nThis works under reasonable assumptions (such as not doing both a[0, 1] = 42 and the contradictory a[1, 0] = 123 before running symmetrize). If you really need a transparent symmetrization, you might consider subclassing numpy.ndarray and simply redefining __setitem__: \n```\nclass SymNDArray(numpy.ndarray):\n \"\"\"\n NumPy array subclass for symmetric matrices.\n\n A SymNDArray arr is such that doing arr[i,j] = value\n automatically does arr[j,i] = value, so that array\n updates remain symmetrical.\n \"\"\"\n\n def __setitem__(self, (i, j), value):\n super(SymNDArray, self).__setitem__((i, j), value) \n super(SymNDArray, self).__setitem__((j, i), value) \n\ndef symarray(input_array):\n \"\"\"\n Return a symmetrized version of the array-like input_array.\n\n The returned array has class SymNDArray. Further assignments to the array\n are thus automatically symmetrized.\n \"\"\"\n return symmetrize(numpy.asarray(input_array)).view(SymNDArray)\n\n# Example:\na = symarray(numpy.zeros((3, 3)))\na[0, 1] = 42\nprint a # a[1, 0] == 42 too!\n```\n(or the equivalent with matrices instead of arrays, depending on your needs). This approach even handles more complicated assignments, like a[:, 1] = -1, which correctly sets a[1, :] elements. Note that Python 3 removed the possibility of writing def \u2026(\u2026, (i, j),\u2026), so the code has to be slightly adapted before running with Python 3: def __setitem__(self, indexes, value): (i, j) = indexes\u2026","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2572916\/numpy-smart-symmetric-matrix","best_answers_votes":97,"tags":["python","matrix","numpy"],"question_length":465,"response_length":2059,"tag_count":3} {"question":"NumPy array slice using None This had me scratching my head for a while. I was unintentionally slicing an array with None and getting something other than an error (I expected an error). Instead, it returns an array with an extra dimension. \n```\n\n```python\nimport numpy\n```\n\n```python\na = numpy.arange(4).reshape(2,2)\n```\n\n```python\na\n#Output\n#array([[0, 1],\n# [2, 3]])\n```\n\n```python\na[None]\n#Output\n#array([[[0, 1],\n# [2, 3]]])\n#```\n#Is this behavior intentional or a side-effect? If intentional, is there some rationale for it?\n```","response":"Using None is equivalent to using numpy.newaxis, so yes, it's intentional. In fact, they're the same thing, but, of course, newaxis spells it out better. The docs: The newaxis object can be used in all slicing operations to create an axis of length one. newaxis is an alias for \u2018None\u2019, and \u2018None\u2019 can be used in place of this with the same result. 
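A quick illustration with the same array as above (just a sketch; the point is that the new axis goes wherever None appears in the index): \n```\nimport numpy\na = numpy.arange(4).reshape(2, 2)\nprint(a[None].shape)      # (1, 2, 2)\nprint(a[:, None].shape)   # (2, 1, 2)\nprint(a[..., None].shape) # (2, 2, 1)\n```\n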
A related SO question.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1408311\/numpy-array-slice-using-none","best_answers_votes":85,"tags":["python","arrays","numpy"],"question_length":481,"response_length":370,"tag_count":3} {"question":"How to overplot a line on a scatter plot in python? I have two vectors of data and I've put them into pyplot.scatter(). Now I'd like to over plot a linear fit to these data. How would I do this? I've tried using scikitlearn and np.polyfit().","response":"```\nimport numpy as np\nfrom numpy.polynomial.polynomial import polyfit\nimport matplotlib.pyplot as plt\n\n# Sample data\nx = np.arange(10)\ny = 5 * x + 10\n\n# Fit with polyfit\nb, m = polyfit(x, y, 1)\n\nplt.plot(x, y, '.')\nplt.plot(x, b + m * x, '-')\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19068862\/how-to-overplot-a-line-on-a-scatter-plot-in-python","best_answers_votes":152,"tags":["python","numpy","matplotlib","linear-regression","scatter-plot"],"question_length":241,"response_length":258,"tag_count":5} {"question":"Average values in two Numpy arrays Given two ndarrays \n```\nold_set = [[0, 1], [4, 5]]\nnew_set = [[2, 7], [0, 1]]\n```\nI'm looking to get the mean of the respective values between the two arrays so that the data ends up something like: \n```\nend_data = [[1, 4], [2, 3]]\n```\nbasically it would apply something like \n```\nfor i in len(old_set):\n end_data[i] = (old_set[i]+new_set[i])\/2\n```\nBut I'm unsure what syntax to use.. Thanks for the help in advance!","response":"You can create a 3D array containing your 2D arrays to be averaged, then average along axis=0 using np.mean or np.average (the latter allows for weighted averages): \n```\nnp.mean( np.array([ old_set, new_set ]), axis=0 )\n```\nThis averaging scheme can be applied to any (n)-dimensional array, because the created (n+1)-dimensional array will always contain the original arrays to be averaged along its axis=0.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18461623\/average-values-in-two-numpy-arrays","best_answers_votes":183,"tags":["python","arrays","numpy"],"question_length":454,"response_length":407,"tag_count":3} {"question":"python pandas flatten a dataframe to a list I have a df like so: \n```\nimport pandas\na=[['1\/2\/2014', 'a', '6', 'z1'], \n ['1\/2\/2014', 'a', '3', 'z1'], \n ['1\/3\/2014', 'c', '1', 'x3'],\n ]\ndf = pandas.DataFrame.from_records(a[1:],columns=a[0])\n```\nI want to flatten the df so it is one continuous list like so: ['1\/2\/2014', 'a', '6', 'z1', '1\/2\/2014', 'a', '3', 'z1','1\/3\/2014', 'c', '1', 'x3'] I can loop through the rows and extend to a list, but is a much easier way to do it?","response":"You can use .flatten() on the DataFrame converted to a NumPy array: \n```\ndf.to_numpy().flatten()\n```\nand you can also add .tolist() if you want the result to be a Python list. 
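For example, with a couple of the rows from the question (a quick sketch): \n```\nimport pandas\na = [['1\/2\/2014', 'a', '6', 'z1'], ['1\/3\/2014', 'c', '1', 'x3']]\ndf = pandas.DataFrame(a)\nprint(df.to_numpy().flatten().tolist())\n# ['1\/2\/2014', 'a', '6', 'z1', '1\/3\/2014', 'c', '1', 'x3']\n```\n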
Edit In previous versions of Pandas, the values attribute was used instead of the .to_numpy() method, as mentioned in the comments below.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25440008\/python-pandas-flatten-a-dataframe-to-a-list","best_answers_votes":138,"tags":["python","list","numpy","pandas","dataframe"],"question_length":480,"response_length":314,"tag_count":5} {"question":"TypeError: unhashable type: 'numpy.ndarray' From an array with three columns, I want to be able to just take a slice of data from all three columns where the values in the first column are equal to the values defined in above. \n```py\nabove = {1, 5, 10}\ndata = np.arange(9).reshape(-1, 3)\nenergies = np.hsplit(data, 3)[0]\nslice = set(energies) & above\n```\nThe above comes back with: \n```none\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n    slice = set(energies) & above\nTypeError: unhashable type: 'numpy.ndarray'\n```\nHow do I resolve this error?","response":"Your variable energies probably has the wrong shape: \n```python\nfrom numpy import array\nset([1,2,3]) & set(range(2, 10))\n#Output\n#set([2, 3])\nset(array([1,2,3])) & set(range(2,10))\n#Output\n#set([2, 3])\nset(array([[1,2,3],])) & set(range(2,10))\n#Output\n#Traceback (most recent call last):\n#  File \"<stdin>\", line 1, in <module>\n#TypeError: unhashable type: 'numpy.ndarray'\n```\nAnd that's what happens if you read columnar data using your approach: \n```python\ndata\n#Output\n#array([[ 1., 2., 3.],\n#       [ 3., 4., 5.],\n#       [ 5., 6., 7.],\n#       [ 8., 9., 10.]])\nhsplit(data,3)[0]\n#Output\n#array([[ 1.],\n#       [ 3.],\n#       [ 5.],\n#       [ 8.]])\n```\nProbably you can simply use \n```python\ndata[:,0]\n#Output\n#array([ 1., 3., 5., 8.])\n```\ninstead. (P.S. Your code looks like it's undecided about whether it's data or elementdata. I've assumed it's simply a typo.)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9022656\/typeerror-unhashable-type-numpy-ndarray","best_answers_votes":58,"tags":["python","arrays","numpy","typeerror","unique"],"question_length":579,"response_length":837,"tag_count":5} {"question":"Rearrange columns of numpy 2D array Is there a way to change the order of the columns in a numpy 2D array to a new and arbitrary order? For example, I have an array \n```\narray([[10, 20, 30, 40, 50],\n       [ 6,  7,  8,  9, 10]])\n```\nand I want to change it into, say \n```\narray([[10, 30, 50, 40, 20],\n       [ 6,  8, 10,  9,  7]])\n```\nby applying the permutation \n```\n0 -> 0\n1 -> 4\n2 -> 1\n3 -> 3\n4 -> 2\n```\non the columns. In the new matrix, I therefore want the first column of the original to stay in place, the second to move to the last column and so on. Is there a numpy function to do it? 
I have a fairly large matrix and expect to get even larger ones, so I need a solution that does this quickly and in place if possible (permutation matrices are a no-go) Thank you.","response":"This is possible in O(n) time and O(n) space using fancy indexing: \n```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.array([[10, 20, 30, 40, 50],\n [ 6, 7, 8, 9, 10]])\n```\n\n```python\npermutation = [0, 4, 1, 3, 2]\n```\n\n```python\nidx = np.empty_like(permutation)\n```\n\n```python\nidx[permutation] = np.arange(len(permutation))\n```\n\n```python\na[:, idx] # return a rearranged copy\n#Output\n#array([[10, 30, 50, 40, 20],\n# [ 6, 8, 10, 9, 7]])\n```\n\n```python\na[:] = a[:, idx] # in-place modification of a\n#Output\n#```\n#Note that a[:, idx] is returning a copy, not a view. An O(1)-space solution is not possible in the general case, due to how numpy arrays are strided in memory.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20265229\/rearrange-columns-of-numpy-2d-array","best_answers_votes":104,"tags":["python","arrays","numpy","edit-in-place"],"question_length":774,"response_length":613,"tag_count":4} {"question":"Easy way to test if each element in an numpy array lies between two values? I was wondering if there was a syntactically simple way of checking if each element in a numpy array lies between two numbers. In other words, just as numpy.array([1,2,3,4,5]) < 5 will return array([True, True, True, True, False]), I was wondering if it was possible to do something akin to this: \n```\n1 < numpy.array([1,2,3,4,5]) < 5\n```\n... to obtain ... \n```\narray([False, True, True, True, False])\n```\nI understand that I can obtain this through logical chaining of boolean tests, but I'm working through some rather complex code and I was looking for a syntactically clean solution. Any tips?","response":"One solution would be: \n```py\nimport numpy as np\na = np.array([1, 2, 3, 4, 5])\n(a > 1) & (a < 5) # 1 < element < 5?\n# array([False, True, True, True, False])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10542240\/easy-way-to-test-if-each-element-in-an-numpy-array-lies-between-two-values","best_answers_votes":115,"tags":["python","numpy"],"question_length":673,"response_length":165,"tag_count":2} {"question":"Does matplotlib have a function for drawing diagonal lines in axis coordinates? Matplotlib Axes have the functions axhline and axvline for drawing horizontal or vertical lines at a given y or x coordinate (respectively) independently of the data scale on an Axes. Is there a similar function for plotting a constant diagonal? For example, if I have a scatterplot of variables with a similar domain, it is often useful to know whether they fall above or below the line of y = x: \n```\nmean, cov = [0, 0], [(1, .6), (.6, 1)]\nx, y = np.random.multivariate_normal(mean, cov, 100).T\ny += x + 1\nf, ax = plt.subplots(figsize=(6, 6))\nax.scatter(x, y, c=\".3\")\nax.plot([-3, 3], [-3, 3], ls=\"--\", c=\".3\")\nax.set(xlim=(-3, 3), ylim=(-3, 3))\n```\nThis can of course be done programmatically by grabbing the axis limits, (ax.get_xlim(), etc.), but that a) takes a few extra steps and b) is brittle in cases where more data might end up on the plot and shift the limits. (Actually in some cases just adding the constant line itself stretches the axes). 
It would be preferable to just do, e.g., ax.axdline(ls=\"--\", c=\".3\"), but it's not clear if something like this exists in the matplotlib codebase. All you would need to do would be modify the axhline code to plot from [0, 1] in axes coordinates for both x and y, I think.","response":"Drawing a diagonal from the lower left to the upper right corners of your plot would be accomplished by the following ax.plot([0, 1], [0, 1], transform=ax.transAxes) Using transform=ax.transAxes, the supplied x and y coordinates are interpreted as axes coordinates instead of data coordinates. This, as @fqq pointed out, is only the identity line when your x and y limits are equal. To draw the line y=x such that it always extends to the limits of your plot, an approach similar to the one given by @Ffisegydd would work, and can be written as the following function. \n```\ndef add_identity(axes, *line_args, **line_kwargs):\n identity, = axes.plot([], [], *line_args, **line_kwargs)\n def callback(axes):\n low_x, high_x = axes.get_xlim()\n low_y, high_y = axes.get_ylim()\n low = max(low_x, low_y)\n high = min(high_x, high_y)\n identity.set_data([low, high], [low, high])\n callback(axes)\n axes.callbacks.connect('xlim_changed', callback)\n axes.callbacks.connect('ylim_changed', callback)\n return axes\n```\nExample usage: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nmean, cov = [0, 0], [(1, .6), (.6, 1)]\nx, y = np.random.multivariate_normal(mean, cov, 100).T\ny += x + 1\n\nf, ax = plt.subplots(figsize=(6, 6))\nax.scatter(x, y, c=\".3\")\nadd_identity(ax, color='r', ls='--')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22104256\/does-matplotlib-have-a-function-for-drawing-diagonal-lines-in-axis-coordinates","best_answers_votes":68,"tags":["python","numpy","matplotlib"],"question_length":1307,"response_length":1346,"tag_count":3} {"question":"Numpy shuffle multidimensional array by row only, keep column order unchanged How can I shuffle a multidimensional array by row only in Python (so do not shuffle the columns). I am looking for the most efficient solution, because my matrix is very huge. Is it also possible to do this highly efficient on the original array (to save memory)? Example: \n```\nimport numpy as np\nX = np.random.random((6, 2))\nprint(X)\nY = ???shuffle by row only not colls???\nprint(Y)\n```\nWhat I expect now is original matrix: \n```\n[[ 0.48252164 0.12013048]\n [ 0.77254355 0.74382174]\n [ 0.45174186 0.8782033 ]\n [ 0.75623083 0.71763107]\n [ 0.26809253 0.75144034]\n [ 0.23442518 0.39031414]]\n```\nOutput shuffle the rows not cols e.g.: \n```\n[[ 0.45174186 0.8782033 ]\n [ 0.48252164 0.12013048]\n [ 0.77254355 0.74382174]\n [ 0.75623083 0.71763107]\n [ 0.23442518 0.39031414]\n [ 0.26809253 0.75144034]]\n```","response":"You can use numpy.random.shuffle(). This function only shuffles the array along the first axis of a multi-dimensional array. The order of sub-arrays is changed but their contents remains the same. 
\n```\n\n```python\nimport numpy as np\n```\n\n```python\n\n```\n\n```python\nX = np.random.random((6, 2))\n```\n\n```python\nX\n#Output\n#array([[0.71935047, 0.25796155],\n# [0.4621708 , 0.55140423],\n# [0.22605866, 0.61581771],\n# [0.47264172, 0.79307633],\n# [0.22701656, 0.11927993],\n# [0.20117207, 0.2754544 ]])\n```\n\n```python\nnp.random.shuffle(X)\n```\n\n```python\nX\n#Output\n#array([[0.71935047, 0.25796155],\n# [0.47264172, 0.79307633],\n# [0.4621708 , 0.55140423],\n# [0.22701656, 0.11927993],\n# [0.20117207, 0.2754544 ],\n# [0.22605866, 0.61581771]])\n#```\n#For other functionalities you can also check out the following functions: random.Generator.shuffle random.Generator.permutation random.Generator.permuted The function random.Generator.permuted is introduced in Numpy's 1.20.0 Release. The new function differs from shuffle and permutation in that the subarrays indexed by an axis are permuted rather than the axis being treated as a separate 1-D array for every combination of the other indexes. For example, it is now possible to permute the rows or columns of a 2-D array.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35646908\/numpy-shuffle-multidimensional-array-by-row-only-keep-column-order-unchanged","best_answers_votes":88,"tags":["python","arrays","numpy","shuffle"],"question_length":886,"response_length":2380,"tag_count":4} {"question":"How do I plot list of tuples? I have the following data set. I would like to use Python or Gnuplot to plot the data. The tuples are of the form (x, y). The Y-axis should be a log axis, that is, log(y). A scatter plot or line plot would be ideal. How can this be done? \n```\n[(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), \n (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), \n (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)]\n```","response":"If I get your question correctly, you could do something like this. 
\n```\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n```python\ntestList =[(0, 6.0705199999997801e-08), (1, 2.1015700100300739e-08), \n#Output\n# (2, 7.6280656623374823e-09), (3, 5.7348209304555086e-09), \n# (4, 3.6812203579604238e-09), (5, 4.1572516753310418e-09)]\n```\n\n```python\nfrom math import log\n```\n\n```python\ntestList2 = [(elem1, log(elem2)) for elem1, elem2 in testList]\n```\n\n```python\ntestList2\n#Output\n#[(0, -16.617236475334405), (1, -17.67799605473062), (2, -18.691431541177973), (3, -18.9767093108359), (4, -19.420021520728017), (5, -19.298411635970396)]\n```\n\n```python\nzip(*testList2)\n#Output\n#[(0, 1, 2, 3, 4, 5), (-16.617236475334405, -17.67799605473062, -18.691431541177973, -18.9767093108359, -19.420021520728017, -19.298411635970396)]\n```\n\n```python\nplt.scatter(*zip(*testList2))\n```\n\n```python\nplt.show()\n#Output\n#```\n#which would give you something like Or as a line plot, \n#```\n```\n\n```python\nplt.plot(*zip(*testList2))\n```\n\n```python\nplt.show()\n#Output\n#```\n#EDIT - If you want to add a title and labels for the axis, you could do something like \n#```\n```\n\n```python\nplt.scatter(*zip(*testList2))\n```\n\n```python\nplt.title('Random Figure')\n```\n\n```python\nplt.xlabel('X-Axis')\n```\n\n```python\nplt.ylabel('Y-Axis')\n```\n\n```python\nplt.show()\n#Output\n#```\n#which would give you\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18458734\/how-do-i-plot-list-of-tuples","best_answers_votes":119,"tags":["python","numpy","matplotlib","scipy","gnuplot"],"question_length":455,"response_length":1137,"tag_count":5} {"question":"1D numpy concatenate: TypeError: only integer scalar arrays can be converted to a scalar index [duplicate] This question already has answers here: Concatenating two one-dimensional NumPy arrays (7 answers) Closed 7 years ago. I want to store numpy array into to another numpy array I am using np.concatenate This is my code \n```\nx=np.concatenate(x,s_x)\n```\nThese are the type and the shape of x and s_x \n```\nType of s_x: , Shape of s_x: (173,)\nType of x: (0,), Shape of x: (0,)\n```\nThis is the error being displayed \n```\nTypeError: only integer scalar arrays can be converted to a scalar index\n```","response":"You need to pass the arrays as an iterable (a tuple or list), thus the correct syntax is \n```\nx=np.concatenate((x, s_x))\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47388013\/1d-numpy-concatenate-typeerror-only-integer-scalar-arrays-can-be-converted-to","best_answers_votes":173,"tags":["python","arrays","numpy"],"question_length":644,"response_length":124,"tag_count":3} {"question":"Create a two-dimensional array with two one-dimensional arrays My function (name CovexHull(point)) accepts the argument as a two-dimensional array: \n```\nhull = ConvexHull(points)\n```\nSession \n```\n\n```python\npoints.ndim\n#Output\n#2\n```\n\n```python\npoints.shape\n#Output\n#(10, 2)\n```\n\n```python\npoints\n#Output\n#array([[ 0. , 0. ],\n# [ 1. , 0.8],\n# [ 0.9, 0.8],\n# [ 0.9, 0.7],\n# [ 0.9, 0.6],\n# [ 0.8, 0.5],\n# [ 0.8, 0.5],\n# [ 0.7, 0.5],\n# [ 0.1, 0. ],\n# [ 0. , 0. ]])\n#```\n#points is a NumPy array with ndim 2. I have two different NumPy arrays (tp and fp) like below: \n#```\n```\n\n```python\nfp.ndim\n#Output\n#1\n```\n\n```python\nfp.shape\n#Output\n#(10,)\n```\n\n```python\nfp\n#Output\n#array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.4,\n# 0.5, 0.6, 0.9, 1. 
])\n#```\n#How can I create a two-dimensional NumPy array effectively (like points mentioned above) with tp and fp?\n```","response":"If you wish to combine two 10 element one-dimensional arrays into a two-dimensional array, np.vstack((tp, fp)).T will do it. np.vstack((tp, fp)) will return an array of shape (2, 10), and the T attribute returns the transposed array with shape (10, 2) (i.e., with the two one-dimensional arrays forming columns rather than rows). \n```\n\n```python\ntp = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n```\n\n```python\ntp.ndim\n#Output\n#1\n```\n\n```python\ntp.shape\n#Output\n#(10,)\n```\n\n```python\nfp = np.array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])\n```\n\n```python\nfp.ndim\n#Output\n#1\n```\n\n```python\nfp.shape\n#Output\n#(10,)\n```\n\n```python\ncombined = np.vstack((tp, fp)).T\n```\n\n```python\ncombined\n#Output\n#array([[ 0, 10],\n# [ 1, 11],\n# [ 2, 12],\n# [ 3, 13],\n# [ 4, 14],\n# [ 5, 15],\n# [ 6, 16],\n# [ 7, 17],\n# [ 8, 18],\n# [ 9, 19]])\n```\n\n```python\ncombined.ndim\n#Output\n#2\n```\n\n```python\ncombined.shape\n#Output\n#(10, 2)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17710672\/create-a-two-dimensional-array-with-two-one-dimensional-arrays","best_answers_votes":105,"tags":["python","arrays","numpy"],"question_length":859,"response_length":785,"tag_count":3} {"question":"only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices I am implementing fft and when I shuffle the data elements using bit reversal, I get the following error: \n```none\nIndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis \n(`None`) and integer or boolean arrays are valid indices.\n```\nMy code is: \n```py\ndef shuffle_bit_reversed_order(data: np.ndarray) -> np.ndarray:\n x = data.size\n n = x \/ 2\n y = n * np.mod(x, 2)\n data[x], data[y] = data[y], data[x]\n return data\n```\nI think the problem is my data is of type 'float64' and I may have used it as an integer but I don't know how I can solve it.","response":"I believe your problem is this: in your while loop, n is divided by 2, but never cast as an integer again, so it becomes a float at some point. It is then added onto y, which is then a float too, and that gives you the warning.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/34952651\/only-integers-slices-ellipsis-numpy-newaxis-none-and-intege","best_answers_votes":67,"tags":["python","python-3.x","numpy","index-error"],"question_length":694,"response_length":227,"tag_count":4} {"question":"Zero pad numpy array What's the more pythonic way to pad an array with zeros at the end? \n```\ndef pad(A, length):\n ...\n\nA = np.array([1,2,3,4,5])\npad(A, 8) # expected : [1,2,3,4,5,0,0,0]\n```\nIn my real use case, in fact I want to pad an array to the closest multiple of 1024. 
Ex: 1342 => 2048, 3000 => 3072","response":"numpy.pad with constant mode does what you need, where we can pass a tuple as second argument to tell how many zeros to pad on each size, a (2, 3) for instance will pad 2 zeros on the left side and 3 zeros on the right side: Given A as: \n```\nA = np.array([1,2,3,4,5])\n\nnp.pad(A, (2, 3), 'constant')\n# array([0, 0, 1, 2, 3, 4, 5, 0, 0, 0])\n```\nIt's also possible to pad a 2D numpy arrays by passing a tuple of tuples as padding width, which takes the format of ((top, bottom), (left, right)): \n```\nA = np.array([[1,2],[3,4]])\n\nnp.pad(A, ((1,2),(2,1)), 'constant')\n\n#array([[0, 0, 0, 0, 0], # 1 zero padded to the top\n# [0, 0, 1, 2, 0], # 2 zeros padded to the bottom\n# [0, 0, 3, 4, 0], # 2 zeros padded to the left\n# [0, 0, 0, 0, 0], # 1 zero padded to the right\n# [0, 0, 0, 0, 0]])\n```\nFor your case, you specify the left side to be zero and right side pad calculated from a modular division: \n```\nB = np.pad(A, (0, 1024 - len(A)%1024), 'constant')\nB\n# array([1, 2, 3, ..., 0, 0, 0])\nlen(B)\n# 1024\n```\nFor a larger A: \n```\nA = np.ones(3000)\nB = np.pad(A, (0, 1024 - len(A)%1024), 'constant')\nB\n# array([ 1., 1., 1., ..., 0., 0., 0.])\n\nlen(B)\n# 3072\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38191855\/zero-pad-numpy-array","best_answers_votes":126,"tags":["python","numpy","numpy-ndarray","zero-padding"],"question_length":312,"response_length":1221,"tag_count":4} {"question":"'list' object has no attribute 'shape' how to create an array to numpy array? \n```\ndef test(X, N):\n [n,T] = X.shape\n print \"n : \", n\n print \"T : \", T\n\n\n\nif __name__==\"__main__\":\n\n X = [[[-9.035250067710876], [7.453250169754028], [33.34074878692627]], [[-6.63700008392334], [5.132999956607819], [31.66075038909912]], [[-5.1272499561309814], [8.251499891281128], [30.925999641418457]]]\n N = 200\n test(X, N)\n```\nI am getting error as \n```\nAttributeError: 'list' object has no attribute 'shape'\n```\nSo, I think I need to convert my X to numpy array?","response":"Use numpy.array to use shape attribute. \n```\n\n```python\nimport numpy as np\n```\n\n```python\nX = np.array([\n [[-9.035250067710876], [7.453250169754028], [33.34074878692627]],\n [[-6.63700008392334], [5.132999956607819], [31.66075038909912]],\n [[-5.1272499561309814], [8.251499891281128], [30.925999641418457]]\n])\n```\n\n```python\nX.shape\n#Output\n#(3L, 3L, 1L)\n#```\n#NOTE X.shape returns 3-items tuple for the given array; [n, T] = X.shape raises ValueError.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21015674\/list-object-has-no-attribute-shape","best_answers_votes":94,"tags":["python","list","numpy"],"question_length":563,"response_length":436,"tag_count":3} {"question":"How do I tell if a column in a pandas dataframe is of type datetime? How do I tell if a column is numerical? I am trying to filter the columns in a pandas dataframe based on whether they are of type date or not. I can figure out which ones are, but then would have to parse that output or manually select columns. I want to select date columns automatically. Here's what I have so far as an example - I'd want to only select the 'date_col' column in this case. 
\n```\nimport pandas as pd\ndf = pd.DataFrame([['Feb-2017', 1, 2],\n ['Mar-2017', 1, 2],\n ['Apr-2017', 1, 2],\n ['May-2017', 1, 2]], \n columns=['date_str', 'col1', 'col2'])\ndf['date_col'] = pd.to_datetime(df['date_str'])\ndf.dtypes\n```\nOut: \n```\ndate_str object\ncol1 int64\ncol2 int64\ndate_col datetime64[ns]\ndtype: object\n```","response":"I just encountered this issue and found that @charlie-haley's answer isn't quite general enough for my use case. In particular np.datetime64 doesn't seem to match datetime64[ns, UTC]. \n```py\ndf['date_col'] = pd.to_datetime(df['date_str'], utc=True)\nprint(df.date_str.dtype) # datetime64[ns, UTC]\n```\nYou could also extend the list of dtypes to include other types, but that doesn't seem like a good solution for future compatability, so I ended up using the is_datetime64_any_dtype function from the pandas api instead. In: \n```py\nfrom pandas.api.types import is_datetime64_any_dtype as is_datetime\n\ndf[[column for column in df.columns if is_datetime(df[column])]]\n```\nOut: \n```py\ndate_col\n0 2017-02-01 00:00:00+00:00\n1 2017-03-01 00:00:00+00:00\n2 2017-04-01 00:00:00+00:00\n3 2017-05-01 00:00:00+00:00\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43214204\/how-do-i-tell-if-a-column-in-a-pandas-dataframe-is-of-type-datetime-how-do-i-te","best_answers_votes":87,"tags":["python","pandas","numpy","dataframe"],"question_length":897,"response_length":806,"tag_count":4} {"question":"testing whether a Numpy array contains a given row Is there a Pythonic and efficient way to check whether a Numpy array contains at least one instance of a given row? By \"efficient\" I mean it terminates upon finding the first matching row rather than iterating over the entire array even if a result has already been found. With Python arrays this can be accomplished very cleanly with if row in array:, but this does not work as I would expect for Numpy arrays, as illustrated below. With Python arrays: \n```\n\n```python\na = [[1,2],[10,20],[100,200]]\n```\n\n```python\n[1,2] in a\n#Output\n#True\n```\n\n```python\n[1,20] in a\n#Output\n#False\n#```\n#but Numpy arrays give different and rather odd-looking results. (The __contains__ method of ndarray seems to be undocumented.) 
\n#```\n```\n\n```python\na = np.array([[1,2],[10,20],[100,200]])\n```\n\n```python\nnp.array([1,2]) in a\n#Output\n#True\n```\n\n```python\nnp.array([1,20]) in a\n#Output\n#True\n```\n\n```python\nnp.array([1,42]) in a\n#Output\n#True\n```\n\n```python\nnp.array([42,1]) in a\n#Output\n#False\n#```\n```","response":"You can use .tolist() \n```\n\n```python\na = np.array([[1,2],[10,20],[100,200]])\n```\n\n```python\n[1,2] in a.tolist()\n#Output\n#True\n```\n\n```python\n[1,20] in a.tolist()\n#Output\n#False\n```\n\n```python\n[1,20] in a.tolist()\n#Output\n#False\n```\n\n```python\n[1,42] in a.tolist()\n#Output\n#False\n```\n\n```python\n[42,1] in a.tolist()\n#Output\n#False\n#```\n#Or use a view: \n#```\n```\n\n```python\nany((a[:]==[1,2]).all(1))\n#Output\n#True\n```\n\n```python\nany((a[:]==[1,20]).all(1))\n#Output\n#False\n#```\n#Or generate over the numpy list (potentially VERY SLOW): \n#```\n#any(([1,2] == x).all() for x in a) # stops on first occurrence\n#```\n#Or use numpy logic functions: \n#```\n#any(np.equal(a,[1,2]).all(1))\n#```\n#If you time these: \n#```\n#import numpy as np\n#import time\n```\n\nn=300000\na=np.arange(n*3).reshape(n,3)\nb=a.tolist()\n\nt1,t2,t3=a[n\/\/100][0],a[n\/\/2][0],a[-10][0]\n\ntests=[ ('early hit',[t1, t1+1, t1+2]),\n ('middle hit',[t2,t2+1,t2+2]),\n ('late hit', [t3,t3+1,t3+2]),\n ('miss',[0,2,0])]\n\nfmt='\\t{:20}{:.5f} seconds and is {}' \n\nfor test, tgt in tests:\n print('\\n{}: {} in {:,} elements:'.format(test,tgt,n))\n\n name='view'\n t1=time.time()\n result=(a[...]==tgt).all(1).any()\n t2=time.time()\n print(fmt.format(name,t2-t1,result))\n\n name='python list'\n t1=time.time()\n result = True if tgt in b else False\n t2=time.time()\n print(fmt.format(name,t2-t1,result))\n\n name='gen over numpy'\n t1=time.time()\n result=any((tgt == x).all() for x in a)\n t2=time.time()\n print(fmt.format(name,t2-t1,result))\n\n name='logic equal'\n t1=time.time()\n np.equal(a,tgt).all(1).any()\n t2=time.time()\n print(fmt.format(name,t2-t1,result))\n```\nYou can see that hit or miss, the numpy routines are the same speed to search the array. The Python in operator is potentially a lot faster for an early hit, and the generator is just bad news if you have to go all the way through the array. 
Here are the results for 300,000 x 3 element array: \n```\nearly hit: [9000, 9001, 9002] in 300,000 elements:\n view 0.01002 seconds and is True\n python list 0.00305 seconds and is True\n gen over numpy 0.06470 seconds and is True\n logic equal 0.00909 seconds and is True\n\nmiddle hit: [450000, 450001, 450002] in 300,000 elements:\n view 0.00915 seconds and is True\n python list 0.15458 seconds and is True\n gen over numpy 3.24386 seconds and is True\n logic equal 0.00937 seconds and is True\n\nlate hit: [899970, 899971, 899972] in 300,000 elements:\n view 0.00936 seconds and is True\n python list 0.30604 seconds and is True\n gen over numpy 6.47660 seconds and is True\n logic equal 0.00965 seconds and is True\n\nmiss: [0, 2, 0] in 300,000 elements:\n view 0.00936 seconds and is False\n python list 0.01287 seconds and is False\n gen over numpy 6.49190 seconds and is False\n logic equal 0.00965 seconds and is False\n```\nAnd for 3,000,000 x 3 array: \n```\nearly hit: [90000, 90001, 90002] in 3,000,000 elements:\n view 0.10128 seconds and is True\n python list 0.02982 seconds and is True\n gen over numpy 0.66057 seconds and is True\n logic equal 0.09128 seconds and is True\n\nmiddle hit: [4500000, 4500001, 4500002] in 3,000,000 elements:\n view 0.09331 seconds and is True\n python list 1.48180 seconds and is True\n gen over numpy 32.69874 seconds and is True\n logic equal 0.09438 seconds and is True\n\nlate hit: [8999970, 8999971, 8999972] in 3,000,000 elements:\n view 0.09868 seconds and is True\n python list 3.01236 seconds and is True\n gen over numpy 65.15087 seconds and is True\n logic equal 0.09591 seconds and is True\n\nmiss: [0, 2, 0] in 3,000,000 elements:\n view 0.09588 seconds and is False\n python list 0.12904 seconds and is False\n gen over numpy 64.46789 seconds and is False\n logic equal 0.09671 seconds and is False\n```\nWhich seems to indicate that np.equal is the fastest pure numpy way to do this...","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14766194\/testing-whether-a-numpy-array-contains-a-given-row","best_answers_votes":72,"tags":["python","numpy"],"question_length":893,"response_length":4042,"tag_count":2} {"question":"What is the difference between numpy.fft and scipy.fftpack? Is the later just a synonym of the former, or are they two different implementations of FFT? Which one is better?","response":"SciPy does more: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/routines.fft.html http:\/\/docs.scipy.org\/doc\/scipy\/reference\/fftpack.html# In addition, SciPy exports some of the NumPy features through its own interface, for example if you execute scipy.fftpack.helper.fftfreq and numpy.fft.helper.fftfreq you're actually running the same code. However, SciPy has its own implementations of much functionality. The source has performance benchmarks that compare the original NumPy and new SciPy versions. 
My archaic laptop shows something like this: \n```\nFast Fourier Transform\n=================================================\n | real input | complex input \n-------------------------------------------------\n size | scipy | numpy | scipy | numpy \n-------------------------------------------------\n 100 | 0.07 | 0.06 | 0.06 | 0.07 (secs for 7000 calls)\n 1000 | 0.06 | 0.09 | 0.09 | 0.09 (secs for 2000 calls)\n 256 | 0.11 | 0.11 | 0.12 | 0.11 (secs for 10000 calls)\n 512 | 0.16 | 0.21 | 0.20 | 0.21 (secs for 10000 calls)\n 1024 | 0.03 | 0.04 | 0.04 | 0.04 (secs for 1000 calls)\n 2048 | 0.05 | 0.09 | 0.08 | 0.08 (secs for 1000 calls)\n 4096 | 0.05 | 0.08 | 0.07 | 0.09 (secs for 500 calls)\n 8192 | 0.10 | 0.20 | 0.19 | 0.21 (secs for 500 calls)\n```\nIt does seem that SciPy runs significantly faster as the array increases in size, though these are just contrived examples and it would be worth experimenting with both for your particular project. It's worth checking out the source code http:\/\/www.scipy.org\/Download#head-312ad78cdf85a9ca6fa17a266752069d23f785d1 . Yes those .f files really are Fortran! :-D","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6363154\/what-is-the-difference-between-numpy-fft-and-scipy-fftpack","best_answers_votes":47,"tags":["python","numpy","scipy","fft"],"question_length":173,"response_length":1729,"tag_count":4} {"question":"Store numpy.array in cells of a Pandas.DataFrame I have a dataframe in which I would like to store 'raw' numpy.array: \n```\ndf['COL_ARRAY'] = df.apply(lambda r: np.array(do_something_with_r), axis=1)\n```\nbut it seems that pandas tries to 'unpack' the numpy.array. Is there a workaround? Other than using a wrapper (see edit below)? I tried reduce=False with no success. EDIT This works, but I have to use the 'dummy' Data class to wrap around the array, which is unsatisfactory and not very elegant. \n```\nclass Data:\n def __init__(self, v):\n self.v = v\n\nmeas = pd.read_excel(DATA_FILE)\nmeas['DATA'] = meas.apply(\n lambda r: Data(np.array(pd.read_csv(r['filename'])))),\n axis=1\n)\n```","response":"Use a wrapper around the numpy array i.e. pass the numpy array as list \n```\na = np.array([5, 6, 7, 8])\ndf = pd.DataFrame({\"a\": [a]})\n```\nOutput: \n```\na\n0 [5, 6, 7, 8]\n```\nOr you can use apply(np.array) by creating the tuples i.e. if you have a dataframe \n```\ndf = pd.DataFrame({'id': [1, 2, 3, 4],\n 'a': ['on', 'on', 'off', 'off'],\n 'b': ['on', 'off', 'on', 'off']})\n\ndf['new'] = df.apply(lambda r: tuple(r), axis=1).apply(np.array)\n```\nOutput : \n```\na b id new\n0 on on 1 [on, on, 1]\n1 on off 2 [on, off, 2]\n2 off on 3 [off, on, 3]\n3 off off 4 [off, off, 4]\n```\n```\ndf['new'][0]\n```\nOutput : \n```\narray(['on', 'on', '1'], dtype='O(1e6) data values. If you really need to downsample I'd set \n```\nx = np.sort(a)[::down_sampling_step]\n```\nEdit to respond to comment\/edit on why I use endpoint=False or the y as defined above. The following are some technical details. The empirical CDF is usually formally defined as \n```\nCDF(x) = \"number of samples <= x\"\/\"number of samples\"\n```\nin order to exactly match this formal definition you would need to use y = np.arange(1,len(x)+1)\/float(len(x)) so that we get y = [1\/N, 2\/N ... 1]. This estimator is an unbiased estimator that will converge to the true CDF in the limit of infinite samples Wikipedia ref.. I tend to use y = [0, 1\/N, 2\/N ... 
(N-1)\/N] since: (a) it is easier to code\/more idiomatic, (b) it is still formally justified since one can always exchange CDF(x) with 1-CDF(x) in the convergence proof, and (c) it works with the (easy) downsampling method described above. In some particular cases, it is useful to define \n```\ny = (arange(len(x))+0.5)\/len(x)\n```\nwhich is intermediate between these two conventions and, in effect, says \"there is a 1\/(2N) chance of a value less than the lowest one I've seen in my sample, and a 1\/(2N) chance of a value greater than the largest one I've seen so far\". Note that the selection of this convention interacts with the where parameter used in plt.step if it seems more useful to display the CDF as a piecewise constant function. In order to exactly match the formal definition mentioned above, one would need to use where=pre with the suggested y=[0,1\/N..., 1-1\/N] convention, or where=post with the y=[1\/N, 2\/N ... 1] convention, but not the other way around. However, for large samples, and reasonable distributions, the convention given in the main body of the answer is easy to write, is an unbiased estimator of the true CDF, and works with the downsampling methodology.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/3209362\/how-to-plot-empirical-cdf-ecdf","best_answers_votes":134,"tags":["python","numpy","matplotlib","seaborn","ecdf"],"question_length":333,"response_length":2143,"tag_count":5} {"question":"How to add items into a numpy array I need to accomplish the following task: from: \n```\na = array([[1,3,4],[1,2,3]...[1,2,1]])\n```\n(add one element to each row) to: \n```\na = array([[1,3,4,x],[1,2,3,x]...[1,2,1,x]])\n```\nI have tried doing stuff like a[n] = array([1,3,4,x]) but numpy complained of shape mismatch. I tried iterating through a and appending element x to each item, but the changes are not reflected. Any ideas on how I can accomplish this?","response":"Appending data to an existing array is a natural thing to want to do for anyone with python experience. However, if you find yourself regularly appending to large arrays, you'll quickly discover that NumPy doesn't easily or efficiently do this the way a python list will. You'll find that every \"append\" action requires re-allocation of the array memory and short-term doubling of memory requirements. So, the more general solution to the problem is to try to allocate arrays to be as large as the final output of your algorithm. Then perform all your operations on sub-sets (slices) of that array. Array creation and destruction should ideally be minimized.
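For instance, rather than growing an array row by row, preallocate once and fill slices in place (a minimal sketch; the sizes are made up): \n```\nimport numpy as np\n\nn_rows, n_cols = 1000, 3  # known, or an upper bound on, the final size\nout = np.empty((n_rows, n_cols))\nfor i in range(n_rows):\n    out[i] = i  # assign into a row slice; no reallocation happens\n```\n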
That said, it's often unavoidable, and the functions that do this are: \n- for 2-D arrays: np.hstack, np.vstack, np.column_stack, np.row_stack\n- for 3-D arrays (the above, plus): np.dstack\n- for N-D arrays: np.concatenate","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5064822\/how-to-add-items-into-a-numpy-array","best_answers_votes":134,"tags":["python","numpy"],"question_length":453,"response_length":867,"tag_count":2} {"question":"check how many elements are equal in two numpy arrays python I have two numpy arrays with numbers (same length), and I want to count how many elements are equal between those two arrays (equal = same value and position in array) \n```\nA = [1, 2, 3, 4]\nB = [1, 2, 4, 3]\n```\nthen I want the return value to be 2 (just 1 & 2 are equal in position and value)","response":"Using numpy.sum: \n```\nimport numpy as np\n\na = np.array([1, 2, 3, 4])\nb = np.array([1, 2, 4, 3])\n\nnp.sum(a == b) # 2\n(a == b).sum() # 2\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25490641\/check-how-many-elements-are-equal-in-two-numpy-arrays-python","best_answers_votes":124,"tags":["python","arrays","numpy"],"question_length":349,"response_length":152,"tag_count":3} {"question":"How can I solve error \"module 'numpy' has no attribute 'float'\" in Python? I am using NumPy 1.24.0. On running this sample code line, \n```py\nimport numpy as np\nnum = np.float(3)\n```\nI am getting this error: \n```none\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"\/home\/ubuntu\/.local\/lib\/python3.8\/site-packages\/numpy\/__init__.py\", line 284, in __getattr__\n raise AttributeError(\"module {!r} has no attribute \"\nAttributeError: module 'numpy' has no attribute 'float'\n```\nHow can I fix it?","response":"The answer is already provided in the comments by @mattdmo and @tdelaney: \n- NumPy 1.20 (release notes) deprecated numpy.float, numpy.int, and similar aliases, causing them to issue a deprecation warning\n- NumPy 1.24 (release notes) removed these aliases altogether, causing an error when they are used\nIn many cases you can simply replace the deprecated NumPy types by the equivalent Python built-in type, e.g., numpy.float becomes a \"plain\" Python float. For detailed guidelines on how to deal with various deprecated types, have a closer look at the table and guideline in the release notes for 1.20: ... To give a clear guideline for the vast majority of cases, for the types bool, object, str (and unicode) using the plain version is shorter and clear, and generally a good replacement. For float and complex you can use float64 and complex128 if you wish to be more explicit about the precision. For np.int a direct replacement with np.int_ or int is also good and will not change behavior, but the precision will continue to depend on the computer and operating system. If you want to be more explicit and review the current use, you have the following alternatives: \n- np.int64 or np.int32 to specify the precision exactly. This ensures that results cannot depend on the computer or operating system.\n- np.int_ or int (the default), but be aware that it depends on the computer and operating system.\n- The C types: np.cint (int), np.int_ (long), np.longlong.\n- np.intp, which is 32bit on 32bit machines and 64bit on 64bit machines. This can be the best type to use for indexing.\n... 
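To make the guideline concrete, the failing line from the question can be rewritten with any non-deprecated type; a small sketch (the two variants differ only in how explicit they are about precision): \n```\nimport numpy as np\n\nnum = float(3) # plain Python float: the usual replacement\nnum64 = np.float64(3) # explicit 64-bit precision\n```\n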
If you have dependencies that use the deprecated types, a quick workaround would be to roll back your NumPy version to below 1.24 (i.e., 1.23.x or earlier, as suggested in some of the other answers), while waiting for the dependency to catch up. Alternatively, you could create a patch yourself and open a pull request, or monkey patch the dependency in your own code.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/74844262\/how-can-i-solve-error-module-numpy-has-no-attribute-float-in-python","best_answers_votes":65,"tags":["python","numpy"],"question_length":524,"response_length":1917,"tag_count":2} {"question":"list memory usage in ipython and jupyter I have a few (almost ten) Gb of memory taken by the ipython kernel. I think this is coming from large objects (matrices, lists, numpy arrays, ...) that I might have produced during some operation and now I do not need anymore. I would like to list all of the objects I have defined and sort them by their memory footprint. Is there a simple way to do that? For certain types there is the nbytes attribute, but not for all, so I am looking for a general way to list all objects I have made and their memory occupation.","response":"Assuming that you are using ipython or jupyter, you will need to do a little bit of work to get a list of all of the objects you have defined. That means taking everything available in globals() and filtering out objects that are modules, builtins, ipython objects, etc. Once you are sure you have those objects, then you can proceed to grabbing their sizes with sys.getsizeof. This can be summed up as follows: \n```\nimport sys\n\n# These are the usual ipython objects, including this one you are creating\nipython_vars = ['In', 'Out', 'exit', 'quit', 'get_ipython', 'ipython_vars']\n\n# Get a sorted list of the objects and their sizes\nsorted([(x, sys.getsizeof(globals().get(x))) for x in dir() if not x.startswith('_') and x not in sys.modules and x not in ipython_vars], key=lambda x: x[1], reverse=True)\n```\nPlease keep in mind that for python objects (those created with python's builtin functions), sys.getsizeof will be very accurate. But it can be a bit inaccurate on objects created using third-party libraries. Furthermore, please be mindful that sys.getsizeof adds an additional garbage collector overhead if the object is managed by the garbage collector. So, some things may look a bit heavier than they actually are. As a side note, numpy's .nbytes attribute can be somewhat misleading in that it does not include memory consumed by non-element attributes of the array object.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40993626\/list-memory-usage-in-ipython-and-jupyter","best_answers_votes":116,"tags":["python","numpy","memory","ipython","jupyter-notebook"],"question_length":554,"response_length":1380,"tag_count":5} {"question":"Iterating over Numpy matrix rows to apply a function to each? I want to be able to iterate over the matrix to apply a function to each row. How can I do it for a Numpy matrix?","response":"You can use numpy.apply_along_axis(). 
Assuming that your array is 2D, you can use it like: \n```\nimport numpy as np\n\nmyarray = np.array([[11, 12, 13],\n [21, 22, 23],\n [31, 32, 33]])\ndef myfunction(x):\n return x[0] + x[1]**2 + x[2]**3\n\nprint(np.apply_along_axis(myfunction, axis=1, arr=myarray))\n#[ 2352 12672 36992]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16468717\/iterating-over-numpy-matrix-rows-to-apply-a-function-each","best_answers_votes":93,"tags":["python","matrix","numpy"],"question_length":173,"response_length":359,"tag_count":3} {"question":"string representation of a numpy array with commas separating its elements I have a numpy array, for example: \n```\npoints = np.array([[-468.927, -11.299, 76.271, -536.723],\n [-429.379, -694.915, -214.689, 745.763],\n [ 0., 0., 0., 0. ]])\n```\nif I print it or turn it into a string with str() I get: \n```\nprint w_points\n[[-468.927 -11.299 76.271 -536.723]\n [-429.379 -694.915 -214.689 745.763]\n [ 0. 0. 0. 0. ]]\n```\nI need to turn it into a string that prints with separating commas while keeping the 2D array structure, that is: \n```\n[[-468.927, -11.299, 76.271, -536.723],\n [-429.379, -694.915, -214.689, 745.763],\n [ 0., 0., 0., 0. ]]\n```\nDoes anybody know an easy way of turning a numpy array into that form of string? I know that .tolist() adds the commas but the result loses the 2D structure.","response":"Try using repr: \n```\nimport numpy as np\n\npoints = np.array([[-468.927, -11.299, 76.271, -536.723],\n [-429.379, -694.915, -214.689, 745.763],\n [ 0., 0., 0., 0. ]])\n\nprint(repr(points))\n# array([[-468.927, -11.299, 76.271, -536.723],\n# [-429.379, -694.915, -214.689, 745.763],\n# [ 0. , 0. , 0. , 0. ]])\n```\nIf you plan on using large numpy arrays, set np.set_printoptions(threshold=np.nan) first. Without it, the array representation will be truncated after about 1000 entries (by default): \n```\narr = np.arange(1001)\nprint(repr(arr))\n# array([ 0, 1, 2, ..., 998, 999, 1000])\n```\nOf course, if you have arrays that large, this starts to become less useful and you should probably analyze the data some way other than just looking at it, and there are better ways of persisting a numpy array than saving its repr to a file...","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16423774\/string-representation-of-a-numpy-array-with-commas-separating-its-elements","best_answers_votes":104,"tags":["python","numpy"],"question_length":909,"response_length":948,"tag_count":2} {"question":"convert a grayscale image to a 3-channel image [duplicate] This question already has answers here: how to copy numpy array value into higher dimensions (7 answers) Closed 8 years ago. I want to convert a gray-scale image with shape (height,width) to a 3-channel image with shape (height,width,nchannels). The work is done with a for-loop, but there must be a neat way. Here is a piece of code from the program; can someone give a hint? Please advise. 
\n```\nif img.shape == (height,width): # if img is grayscale, expand\n print \"convert 1-channel image to \", nchannels, \" image.\"\n new_img = np.zeros((height,width,nchannels))\n for ch in range(nchannels):\n for xx in range(height):\n for yy in range(width):\n new_img[xx,yy,ch] = img[xx,yy]\n img = new_img\n```","response":"You can use np.stack to accomplish this much more concisely: \n```\nimg = np.array([[1, 2], [3, 4]])\nstacked_img = np.stack((img,)*3, axis=-1)\nprint(stacked_img)\n # array([[[1, 1, 1],\n # [2, 2, 2]],\n # [[3, 3, 3],\n # [4, 4, 4]]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40119743\/convert-a-grayscale-image-to-a-3-channel-image","best_answers_votes":138,"tags":["python","numpy"],"question_length":885,"response_length":254,"tag_count":2} {"question":"Why do \"Not a Number\" values equal True when cast as boolean in Python\/Numpy? When casting a NumPy Not-a-Number value as a boolean, it becomes True, e.g. as follows. \n```\nimport numpy as np\n\nbool(np.nan) # True\n```\nThis is the exact opposite to what I would intuitively expect. Is there a sound principle underlying this behaviour? (I suspect there might be, as the same behaviour seems to occur in Octave.)","response":"This is in no way NumPy-specific, but is consistent with how Python treats NaNs: \n```\nbool(float('nan')) # True\n```\nThe rules are spelled out in the documentation. I think it could be reasonably argued that the truth value of NaN should be False. However, this is not how the language works right now.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15686318\/why-do-not-a-number-values-equal-true-when-cast-as-boolean-in-python-numpy","best_answers_votes":47,"tags":["python","math","numpy"],"question_length":411,"response_length":315,"tag_count":3} {"question":"Calculate weighted average using a pandas\/dataframe I have the following table. I want to calculate a weighted average grouped by each date based on the formula below. I can do this using some standard conventional code, but assuming that this data is in a pandas dataframe, is there any easier way to achieve this than through iteration? 
\n```\nDate ID wt value w_avg\n01\/01\/2012 100 0.50 60 0.791666667\n01\/01\/2012 101 0.75 80\n01\/01\/2012 102 1.00 100\n01\/02\/2012 201 0.50 100 0.722222222\n01\/02\/2012 202 1.00 80\n```\n01\/01\/2012 w_avg = 0.5 * ( 60\/ sum(60,80,100)) + .75 * (80\/ sum(60,80,100)) + 1.0 * (100\/sum(60,80,100)) \n01\/02\/2012 w_avg = 0.5 * ( 100\/ sum(100,80)) + 1.0 * ( 80\/ sum(100,80))","response":"Let's first create the example pandas dataframe: \n```\nimport numpy as np\nimport pandas as pd\n\nindex = pd.Index(['01\/01\/2012','01\/01\/2012','01\/01\/2012','01\/02\/2012','01\/02\/2012'], name='Date')\ndf = pd.DataFrame({'ID':[100,101,102,201,202],'wt':[.5,.75,1,.5,1],'value':[60,80,100,100,80]},index=index)\n```\nThen, the average of 'wt' weighted by 'value' and grouped by the index is obtained as: \n```\ndf.groupby(df.index).apply(lambda x: np.average(x.wt, weights=x.value))\n# Date\n# 01\/01\/2012 0.791667\n# 01\/02\/2012 0.722222\n# dtype: float64\n```\nAlternatively, one can also define a function: \n```\ndef grouped_weighted_avg(values, weights, by):\n return (values * weights).groupby(by).sum() \/ weights.groupby(by).sum()\n\ngrouped_weighted_avg(values=df.wt, weights=df.value, by=df.index)\n# Date\n# 01\/01\/2012 0.791667\n# 01\/02\/2012 0.722222\n# dtype: float64\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26205922\/calculate-weighted-average-using-a-pandas-dataframe","best_answers_votes":68,"tags":["python","numpy","pandas"],"question_length":763,"response_length":933,"tag_count":3} {"question":"numpy divide row by row sum How can I divide a numpy array row by the sum of all values in this row? This is one example. But I'm pretty sure there is a fancy and much more efficient way of doing this: \n```\nimport numpy as np\ne = np.array([[0., 1.],[2., 4.],[1., 5.]])\nfor row in xrange(e.shape[0]):\n e[row] \/= np.sum(e[row])\n```\nResult: \n```\narray([[ 0. , 1. ],\n [ 0.33333333, 0.66666667],\n [ 0.16666667, 0.83333333]])\n```","response":"Method #1: use None (or np.newaxis) to add an extra dimension so that broadcasting will behave: \n```\ne\n# array([[ 0., 1.],\n# [ 2., 4.],\n# [ 1., 5.]])\n\ne\/e.sum(axis=1)[:,None]\n# array([[ 0. , 1. ],\n# [ 0.33333333, 0.66666667],\n# [ 0.16666667, 0.83333333]])\n```\nMethod #2: go transpose-happy: \n```\n(e.T\/e.sum(axis=1)).T\n# array([[ 0. , 1. ],\n# [ 0.33333333, 0.66666667],\n# [ 0.16666667, 0.83333333]])\n```\n(You can drop the axis= part for conciseness, if you want.) Method #3 (promoted from Jaime's comment): use the keepdims argument on sum to preserve the dimension: \n```\ne\/e.sum(axis=1, keepdims=True)\n# array([[ 0. , 1. ],\n# [ 0.33333333, 0.66666667],\n# [ 0.16666667, 0.83333333]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16202348\/numpy-divide-row-by-row-sum","best_answers_votes":133,"tags":["python","multidimensional-array","numpy"],"question_length":455,"response_length":787,"tag_count":3} {"question":"Get the mean across multiple Pandas DataFrames I'm generating a number of dataframes with the same shape, and I want to compare them to one another. I want to be able to get the mean and median across the dataframes. 
\n```\nSource.0 Source.1 Source.2 Source.3\ncluster \n0 0.001182 0.184535 0.814230 0.000054\n1 0.000001 0.160490 0.839508 0.000001\n2 0.000001 0.173829 0.826114 0.000055\n3 0.000432 0.180065 0.819502 0.000001\n4 0.000152 0.157041 0.842694 0.000113\n5 0.000183 0.174142 0.825674 0.000001\n6 0.000001 0.151556 0.848405 0.000038\n7 0.000771 0.177583 0.821645 0.000001\n8 0.000001 0.202059 0.797939 0.000001\n9 0.000025 0.189537 0.810410 0.000028\n10 0.006142 0.003041 0.493912 0.496905\n11 0.003739 0.002367 0.514216 0.479678\n12 0.002334 0.001517 0.529041 0.467108\n13 0.003458 0.000001 0.532265 0.464276\n14 0.000405 0.005655 0.527576 0.466364\n15 0.002557 0.003233 0.507954 0.486256\n16 0.004161 0.000001 0.491271 0.504568\n17 0.001364 0.001330 0.528311 0.468996\n18 0.002886 0.000001 0.506392 0.490721\n19 0.001823 0.002498 0.509620 0.486059\n\n Source.0 Source.1 Source.2 Source.3\ncluster \n0 0.000001 0.197108 0.802495 0.000396\n1 0.000001 0.157860 0.842076 0.000063\n2 0.094956 0.203057 0.701662 0.000325\n3 0.000001 0.181948 0.817841 0.000210\n4 0.000003 0.169680 0.830316 0.000001\n5 0.000362 0.177194 0.822443 0.000001\n6 0.000001 0.146807 0.852924 0.000268\n7 0.001087 0.178994 0.819564 0.000354\n8 0.000001 0.202182 0.797333 0.000485\n9 0.000348 0.181399 0.818252 0.000001\n10 0.003050 0.000247 0.506777 0.489926\n11 0.004420 0.000001 0.513927 0.481652\n12 0.006488 0.001396 0.527197 0.464919\n13 0.001510 0.000001 0.525987 0.472502\n14 0.000001 0.000001 0.520737 0.479261\n15 0.000001 0.001765 0.515658 0.482575\n16 0.000001 0.000001 0.492550 0.507448\n17 0.002855 0.000199 0.526535 0.470411\n18 0.000001 0.001952 0.498303 0.499744\n19 0.001232 0.000001 0.506612 0.492155\n```\nThen I want to get the mean of these two dataframes. What is the easiest way to do this? Just to clarify I want to get the mean for each particular cell when the indexes and columns of all the dataframes are exactly the same. 
So in the example I gave, the average for [0,Source.0] would be (0.001182 + 0.000001) \/ 2 = 0.0005915.","response":"Assuming the two dataframes have the same columns, you could just concatenate them and compute your summary stats on the concatenated frames: \n```\nimport numpy as np\nimport pandas as pd\n\n# some random data frames\ndf1 = pd.DataFrame(dict(x=np.random.randn(100), y=np.random.randint(0, 5, 100)))\ndf2 = pd.DataFrame(dict(x=np.random.randn(100), y=np.random.randint(0, 5, 100)))\n\n# concatenate them\ndf_concat = pd.concat((df1, df2))\n\nprint df_concat.mean()\n# x -0.163044\n# y 2.120000\n# dtype: float64\n\nprint df_concat.median()\n# x -0.192037\n# y 2.000000\n# dtype: float64\n```\nUpdate If you want to compute stats across each set of rows with the same index in the two datasets, you can use .groupby() to group the data by row index, then apply the mean, median etc.: \n```\nby_row_index = df_concat.groupby(df_concat.index)\ndf_means = by_row_index.mean()\n\nprint df_means.head()\n# x y\n# 0 -0.850794 1.5\n# 1 0.159038 1.5\n# 2 0.083278 1.0\n# 3 -0.540336 0.5\n# 4 0.390954 3.5\n```\nThis method will work even when your dataframes have unequal numbers of rows - if a particular row index is missing in one of the two dataframes, the mean\/median will be computed on the single existing row.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25057835\/get-the-mean-across-multiple-pandas-dataframes","best_answers_votes":66,"tags":["python","r","numpy","pandas"],"question_length":2658,"response_length":1204,"tag_count":4} {"question":"Numpy integer nan [duplicate] This question already has answers here: NumPy or Pandas: Keeping array type as integer while having a NaN value (10 answers) Closed 11 years ago. Is there a way to store NaN in a Numpy array of integers? I get: \n```\na=np.array([1],dtype=long)\na[0]=np.nan\n\nTraceback (most recent call last):\n File \"\", line 1, in \nValueError: cannot convert float NaN to integer\n```","response":"No, you can't, at least with current version of NumPy. A nan is a special value for float arrays only. There are talks about introducing a special bit that would allow non-float arrays to store what in practice would correspond to a nan, but so far (2012\/10), it's only talks. In the meantime, you may want to consider the numpy.ma package: instead of picking an invalid integer like -99999, you could use the special numpy.ma.masked value to represent an invalid value. \n```\na = np.ma.array([1,2,3,4,5], dtype=int)\na[1] = np.ma.masked\nmasked_array(data = [1 -- 3 4 5],\n mask = [False True False False False],\n fill_value = 999999)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12708807\/numpy-integer-nan","best_answers_votes":65,"tags":["python","numpy","integer","nan"],"question_length":410,"response_length":654,"tag_count":4} {"question":"Numpy remove a dimension from np array I have some images I want to work with, the problem is that there are two kinds of images both are 106 x 106 pixels, some are in color and some are black and white. one with only two (2) dimensions: (106,106) and one with three (3) (106,106,3) Is there a way I can strip this last dimension? I tried np.delete, but it did not seem to work. 
\n```\nnp.shape(np.delete(Xtrain[0], [2] , 2))\nOut[67]: (106, 106, 2)\n```","response":"You could use numpy's fancy indexing (an extension to Python's built-in slice notation): \n```\nx = np.zeros( (106, 106, 3) )\nresult = x[:, :, 0]\nprint(result.shape)\n```\nprints \n```\n(106, 106)\n```\nA shape of (106, 106, 3) means you have 3 sets of things that have shape (106, 106). So in order to \"strip\" the last dimension, you just have to pick one of these (that's what the fancy indexing does). You can keep any slice you want. I arbitrarily choose to keep the 0th, since you didn't specify what you wanted. So, result = x[:, :, 1] and result = x[:, :, 2] would give the desired shape as well: it all just depends on which slice you need to keep.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37152031\/numpy-remove-a-dimension-from-np-array","best_answers_votes":126,"tags":["python","arrays","numpy"],"question_length":450,"response_length":648,"tag_count":3} {"question":"recover dict from 0-d numpy array What happened is that I (by mistake) saved a dictionary with the command numpy.save() (no error messages shown) and now I need to recover the data in the dictionary. When I load it with numpy.load() it has type (numpy.ndarray) and is 0-d, so it is not a dictionary any more and I can't access the data in it, 0-d arrays are not index-able so doing something like \n```\nmydict = numpy.load('mydict')\nmydict[0]['some_key']\n```\ndoesn't work. I also tried \n```\nrecdict = dict(mydict)\n```\nbut that didn't work either. Why numpy didn't warn me when I saved the dictionary with numpy.save()? Is there a way to recover the data? Thanks in advance!","response":"Use mydict.item() to obtain the array element as a Python scalar. \n```\n\n```python\nimport numpy as np\n```\n\n```python\nnp.save('\/tmp\/data.npy',{'a':'Hi Mom!'})\n```\n\n```python\nx=np.load('\/tmp\/data.npy')\n```\n\n```python\nx.item()\n#Output\n#{'a': 'Hi Mom!'}\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8361561\/recover-dict-from-0-d-numpy-array","best_answers_votes":106,"tags":["python","dictionary","load","numpy","save"],"question_length":672,"response_length":203,"tag_count":5} {"question":"Store different datatypes in one NumPy array? I have two different arrays, one with strings and another with ints. I want to concatenate them, into one array where each column has the original datatype. My current solution for doing this (see below) converts the entire array into dtype = string, which seems very memory inefficient. combined_array = np.concatenate((A, B), axis = 1) Is it possible to mutiple dtypes in combined_array when A.dtype = string and B.dtype = int?","response":"One approach might be to use a record array. 
The \"columns\" won't be like the columns of standard numpy arrays, but for most use cases, this is sufficient: \n```\n\n```python\na = numpy.array(['a', 'b', 'c', 'd', 'e'])\n```\n\n```python\nb = numpy.arange(5)\n```\n\n```python\nrecords = numpy.rec.fromarrays((a, b), names=('keys', 'data'))\n```\n\n```python\nrecords\n#Output\n#rec.array([('a', 0), ('b', 1), ('c', 2), ('d', 3), ('e', 4)], \n# dtype=[('keys', '|S1'), ('data', '\", line 1, in \n#AttributeError: 'numpy.ndarray' object has no attribute 'keys'\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11309739\/store-different-datatypes-in-one-numpy-array","best_answers_votes":57,"tags":["python","arrays","types","numpy"],"question_length":475,"response_length":1253,"tag_count":4} {"question":"numpy.unique with order preserved \n```\n['b','b','b','a','a','c','c']\n```\nnumpy.unique gives \n```\n['a','b','c']\n```\nHow can I get the original order preserved \n```\n['b','a','c']\n```\nGreat answers. Bonus question. Why do none of these methods work with this dataset? http:\/\/www.uploadmb.com\/dw.php?id=1364341573 Here's the question numpy sort wierd behavior","response":"Numpy np.unique() is slow, O(Nlog(N)), but you can do this by following code: \n```\nimport numpy as np\na = np.array(['b','b','b','a','a','c','c'])\n_, idx = np.unique(a, return_index=True)\nprint(a[np.sort(idx)])\n```\nOutput: \n```\n['b' 'a' 'c']\n```\nPandas pd.unique() is much faster for big array O(N): \n```\nimport pandas as pd\n\na = np.random.randint(0, 1000, 10000)\n%timeit np.unique(a)\n%timeit pd.unique(a)\n\n1000 loops, best of 3: 644 us per loop\n10000 loops, best of 3: 144 us per loop\n```\nNote: Pandas pd.unique() has the further benefit of preserving order by default: pandas.unique(values) Return unique values based on a hash table. Uniques are returned in order of appearance. This does NOT sort. Significantly faster than numpy.unique for long enough sequences. Includes NA values.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15637336\/numpy-unique-with-order-preserved","best_answers_votes":120,"tags":["python","numpy"],"question_length":355,"response_length":786,"tag_count":2} {"question":"check if numpy array is multidimensional or not I want to check if a numpy array is multidimensional or not? \n```\nV = [[ -7.94627203e+01 -1.81562235e+02 -3.05418070e+02 -2.38451033e+02][ 9.43740653e+01 1.69312771e+02 1.68545575e+01 -1.44450299e+02][ 5.61599000e+00 8.76135909e+01 1.18959245e+02 -1.44049237e+02]]\n```\nHow can I do that in numpy?","response":"Use the .ndim property of the ndarray: \n```\n\n```python\na = np.array([[ -7.94627203e+01, -1.81562235e+02, -3.05418070e+02, -2.38451033e+02],[ 9.43740653e+01, 1.69312771e+02, 1.68545575e+01, -1.44450299e+02],[ 5.61599000e+00, 8.76135909e+01, 1.18959245e+02, -1.44049237e+02]])\n```\n\n```python\na.ndim\n#Output\n#2\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21299798\/check-if-numpy-array-is-multidimensional-or-not","best_answers_votes":126,"tags":["python","numpy"],"question_length":359,"response_length":299,"tag_count":2} {"question":"How to shade region under the curve in matplotlib I want to use matplotlib to illustrate the definite integral between two regions: x_0, and x_1. 
How can I shade a region under a curve in matplotlib from x=-1, to x=1 given the following plot \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\ndef f(t):\n return t * t\n\nt = np.arange(-4,4,1\/40.)\nplt.plot(t,f(t))\n```","response":"The final answer I came up with is to use fill_between. I thought there would have been a simple shade between type method, but this does exactly what I want. \n```\nsection = np.arange(-1, 1, 1\/20.)\nplt.fill_between(section,f(section))\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10046262\/how-to-shade-region-under-the-curve-in-matplotlib","best_answers_votes":84,"tags":["python","numpy","matplotlib"],"question_length":377,"response_length":238,"tag_count":3} {"question":"Rolling window for 1D arrays in Numpy? Is there a way to efficiently implement a rolling window for 1D arrays in Numpy? For example, I have this pure Python code snippet to calculate the rolling standard deviations for a 1D list, where observations is the 1D list of values, and n is the window length for the standard deviation: \n```\nstdev = []\nfor i, data in enumerate(observations[n-1:]):\n strip = observations[i:i+n]\n mean = sum(strip) \/ n\n stdev.append(sqrt(250*sum([(s-mean)**2 for s in strip])\/(n-1)))\n```\nIs there a way to do this completely within Numpy, i.e., without any Python loops? The standard deviation is trivial with numpy.std, but the rolling window part completely stumps me. I found this blog post regarding a rolling window in Numpy, but it doesn't seem to be for 1D arrays.","response":"Just use the blog code, but apply your function to the result. i.e. \n```\nnumpy.std(rolling_window(observations, n), 1)\n```\nwhere you have (from the blog): \n```\ndef rolling_window(a, window):\n shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)\n strides = a.strides + (a.strides[-1],)\n return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6811183\/rolling-window-for-1d-arrays-in-numpy","best_answers_votes":85,"tags":["python","python-3.x","numpy","window"],"question_length":805,"response_length":375,"tag_count":4} {"question":"Variance Inflation Factor in Python I'm trying to calculate the variance inflation factor (VIF) for each column in a simple dataset in python: \n```\na b c d\n1 2 4 4\n1 2 6 3\n2 3 7 4\n3 2 8 5\n4 1 9 4\n```\nI have already done this in R using the vif function from the usdm library which gives the following results: \n```\na <- c(1, 1, 2, 3, 4)\nb <- c(2, 2, 3, 2, 1)\nc <- c(4, 6, 7, 8, 9)\nd <- c(4, 3, 4, 5, 4)\n\ndf <- data.frame(a, b, c, d)\nvif_df <- vif(df)\nprint(vif_df)\n\nVariables VIF\n a 22.95\n b 3.00\n c 12.95\n d 3.00\n```\nHowever, when I do the same in python using the statsmodel vif function, my results are: \n```\na = [1, 1, 2, 3, 4]\nb = [2, 2, 3, 2, 1]\nc = [4, 6, 7, 8, 9]\nd = [4, 3, 4, 5, 4]\n\nck = np.column_stack([a, b, c, d])\n\nvif = [variance_inflation_factor(ck, i) for i in range(ck.shape[1])]\nprint(vif)\n\nVariables VIF\n a 47.136986301369774\n b 28.931506849315081\n c 80.31506849315096\n d 40.438356164383549\n```\nThe results are vastly different, even though the inputs are the same. In general, results from the statsmodel VIF function seem to be wrong, but I'm not sure if this is because of the way I am calling it or if it is an issue with the function itself. 
I was hoping someone could help me figure out whether I was incorrectly calling the statsmodel function or explain the discrepancies in the results. If it's an issue with the function, then are there any VIF alternatives in python?","response":"As mentioned by others and in this post by Josef Perktold, the function's author, variance_inflation_factor expects the presence of a constant in the matrix of explanatory variables. One can use add_constant from statsmodels to add the required constant to the dataframe before passing its values to the function. \n```\nfrom statsmodels.stats.outliers_influence import variance_inflation_factor\nfrom statsmodels.tools.tools import add_constant\n\ndf = pd.DataFrame(\n {'a': [1, 1, 2, 3, 4],\n 'b': [2, 2, 3, 2, 1],\n 'c': [4, 6, 7, 8, 9],\n 'd': [4, 3, 4, 5, 4]}\n)\n\nX = add_constant(df)\n\npd.Series([variance_inflation_factor(X.values, i)\n for i in range(X.shape[1])],\n index=X.columns)\n# const 136.875\n# a 22.950\n# b 3.000\n# c 12.950\n# d 3.000\n# dtype: float64\n```\nI believe you could also add the constant to the right most column of the dataframe using assign: \n```\nX = df.assign(const=1)\n\npd.Series([variance_inflation_factor(X.values, i)\n for i in range(X.shape[1])],\n index=X.columns)\n# a 22.950\n# b 3.000\n# c 12.950\n# d 3.000\n# const 136.875\n# dtype: float64\n```\nThe source code itself is rather concise: \n```\ndef variance_inflation_factor(exog, exog_idx):\n \"\"\"\n exog : ndarray, (nobs, k_vars)\n design matrix with all explanatory variables, as for example used in\n regression\n exog_idx : int\n index of the exogenous variable in the columns of exog\n \"\"\"\n k_vars = exog.shape[1]\n x_i = exog[:, exog_idx]\n mask = np.arange(k_vars) != exog_idx\n x_noti = exog[:, mask]\n r_squared_i = OLS(x_i, x_noti).fit().rsquared\n vif = 1. \/ (1. - r_squared_i)\n return vif\n```\nIt is also rather simple to modify the code to return all of the VIFs as a series: \n```\nfrom statsmodels.regression.linear_model import OLS\nfrom statsmodels.tools.tools import add_constant\n\ndef variance_inflation_factors(exog_df):\n '''\n Parameters\n ----------\n exog_df : dataframe, (nobs, k_vars)\n design matrix with all explanatory variables, as for example used in\n regression.\n\n Returns\n -------\n vif : Series\n variance inflation factors\n '''\n exog_df = add_constant(exog_df)\n vifs = pd.Series(\n [1 \/ (1. - OLS(exog_df[col].values,\n exog_df.loc[:, exog_df.columns != col].values).fit().rsquared)\n for col in exog_df],\n index=exog_df.columns,\n name='VIF'\n )\n return vifs\n\nvariance_inflation_factors(df)\n# const 136.875\n# a 22.950\n# b 3.000\n# c 12.950\n# Name: VIF, dtype: float64\n```\nPer the solution of @T_T, one can also simply do the following: \n```\nvifs = pd.Series(np.linalg.inv(df.corr().to_numpy()).diagonal(),\n index=df.columns,\n name='VIF')\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/42658379\/variance-inflation-factor-in-python","best_answers_votes":79,"tags":["python","r","numpy","statistics","statsmodels"],"question_length":1473,"response_length":2860,"tag_count":5} {"question":"Fast replacement of values in a numpy array I have a very large numpy array (containing up to a million elements) like the one below: \n```\n[0,1,6,5,1,2,7,6,2,3,8,7,3,4,9,8,5,6,11,10,6,7,12,11,7,\n8,13,12,8,9,14,13,10,11,16,15,11,12,17,16,12,13,18,17,13,\n14,19,18,15,16,21,20,16,17,22,21,17,18,23,22,18,19,24,23]\n```\nand a small dictionary map for replacing some of the elements in the above array \n```\n{4: 0, 9: 5, 14: 10, 19: 15, 20: 0, 21: 1, 22: 2, 23: 3, 24: 0}\n```\nI would like to replace some of the elements according to the map above. The numpy array is really large, and only a small subset of the elements (occurring as keys in the dictionary) will be replaced with the corresponding values. What is the fastest way to do this?","response":"I believe there's an even more efficient method, but for now, try \n```\nfrom numpy import copy\n\nnewArray = copy(theArray)\nfor k, v in d.iteritems(): newArray[theArray==k] = v\n```\nMicrobenchmark and test for correctness: \n```\n#!\/usr\/bin\/env python2.7\n\nfrom numpy import copy, random, arange\n\nrandom.seed(0)\ndata = random.randint(30, size=10**5)\n\nd = {4: 0, 9: 5, 14: 10, 19: 15, 20: 0, 21: 1, 22: 2, 23: 3, 24: 0}\ndk = d.keys()\ndv = d.values()\n\ndef f1(a, d):\n b = copy(a)\n for k, v in d.iteritems():\n b[a==k] = v\n return b\n\ndef f2(a, d):\n for i in xrange(len(a)):\n a[i] = d.get(a[i], a[i])\n return a\n\ndef f3(a, dk, dv):\n mp = arange(0, max(a)+1)\n mp[dk] = dv\n return mp[a]\n\n\na = copy(data)\nres = f2(a, d)\n\nassert (f1(data, d) == res).all()\nassert (f3(data, dk, dv) == res).all()\n```\nResult: \n```\n$ python2.7 -m timeit -s 'from w import f1,f3,data,d,dk,dv' 'f1(data,d)'\n100 loops, best of 3: 6.15 msec per loop\n\n$ python2.7 -m timeit -s 'from w import f1,f3,data,d,dk,dv' 'f3(data,dk,dv)'\n100 loops, best of 3: 19.6 msec per loop\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/3403973\/fast-replacement-of-values-in-a-numpy-array","best_answers_votes":49,"tags":["python","replace","numpy"],"question_length":736,"response_length":1065,"tag_count":3} {"question":"What are the advantages of using numpy.identity over numpy.eye? Having looked over the man pages for numpy's eye and identity, I'd assumed that identity was a special case of eye, since it has fewer options (e.g. eye can fill shifted diagonals, identity cannot), but could plausibly run more quickly. 
However, this isn't the case on either small or large arrays: \n```\nnp.identity(3)\n# array([[ 1., 0., 0.],\n# [ 0., 1., 0.],\n# [ 0., 0., 1.]])\n\nnp.eye(3)\n# array([[ 1., 0., 0.],\n# [ 0., 1., 0.],\n# [ 0., 0., 1.]])\n\ntimeit.timeit(\"import numpy; numpy.identity(3)\", number = 10000)\n# 0.05699801445007324\ntimeit.timeit(\"import numpy; numpy.eye(3)\", number = 10000)\n# 0.03787708282470703\ntimeit.timeit(\"import numpy\", number = 10000)\n# 0.00960087776184082\ntimeit.timeit(\"import numpy; numpy.identity(1000)\", number = 10000)\n# 11.379066944122314\ntimeit.timeit(\"import numpy; numpy.eye(1000)\", number = 10000)\n# 11.247124910354614\n```\nWhat, then, is the advantage of using identity over eye?","response":"identity just calls eye, so there is no difference in how the arrays are constructed. Here's the code for identity: \n```\ndef identity(n, dtype=None):\n from numpy import eye\n return eye(n, dtype=dtype)\n```\nAs you say, the main difference is that with eye the diagonal may be offset, whereas identity only fills the main diagonal. Since the identity matrix is such a common construct in mathematics, it seems the main advantage of using identity is for its name alone.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28363447\/what-are-the-advantages-of-using-numpy-identity-over-numpy-eye","best_answers_votes":95,"tags":["python","arrays","performance","numpy"],"question_length":1161,"response_length":475,"tag_count":4} {"question":"How to add column to numpy array I am trying to add one column to the array created from recfromcsv. In this case it's an array: [210,8] (rows, cols). I want to add a ninth column. Empty or with zeroes doesn't matter. \n```\nfrom numpy import genfromtxt\nfrom numpy import recfromcsv\nimport numpy as np\nimport time\n\nif __name__ == '__main__':\n print(\"testing\")\n my_data = recfromcsv('LIAB.ST.csv', delimiter='\\t')\n array_size = my_data.size\n #my_data = np.append(my_data[:array_size],my_data[9:],0)\n\n new_col = np.sum(x,1).reshape((x.shape[0],1))\n np.append(x,new_col,1)\n```","response":"I think that your problem is that you are expecting np.append to add the column in-place, but what it does, because of how numpy data is stored, is create a copy of the joined arrays \n```\nReturns\n-------\nappend : ndarray\n A copy of `arr` with `values` appended to `axis`. Note that `append`\n does not occur in-place: a new array is allocated and filled. 
If\n `axis` is None, `out` is a flattened array.\n```\nso you need to save the output all_data = np.append(...): \n```\nmy_data = np.random.random((210,8)) #recfromcsv('LIAB.ST.csv', delimiter='\\t')\nnew_col = my_data.sum(1)[...,None] # None keeps (n, 1) shape\nnew_col.shape\n#(210,1)\nall_data = np.append(my_data, new_col, 1)\nall_data.shape\n#(210,9)\n```\nAlternative ways: \n```\nall_data = np.hstack((my_data, new_col))\n#or\nall_data = np.concatenate((my_data, new_col), 1)\n```\nI believe that the only difference between these three functions (as well as np.vstack) are their default behaviors for when axis is unspecified: concatenate assumes axis = 0 hstack assumes axis = 1 unless inputs are 1d, then axis = 0 vstack assumes axis = 0 after adding an axis if inputs are 1d append flattens array Based on your comment, and looking more closely at your example code, I now believe that what you are probably looking to do is add a field to a record array. You imported both genfromtxt which returns a structured array and recfromcsv which returns the subtly different record array (recarray). You used the recfromcsv so right now my_data is actually a recarray, which means that most likely my_data.shape = (210,) since recarrays are 1d arrays of records, where each record is a tuple with the given dtype. So you could try this: \n```\nimport numpy as np\nfrom numpy.lib.recfunctions import append_fields\nx = np.random.random(10)\ny = np.random.random(10)\nz = np.random.random(10)\ndata = np.array( list(zip(x,y,z)), dtype=[('x',float),('y',float),('z',float)])\ndata = np.recarray(data.shape, data.dtype, buf=data)\ndata.shape\n#(10,)\ntot = data['x'] + data['y'] + data['z'] # sum(axis=1) won't work on recarray\ntot.shape\n#(10,)\nall_data = append_fields(data, 'total', tot, usemask=False)\nall_data\n#array([(0.4374783740738456 , 0.04307289878861764, 0.021176067323686598, 0.5017273401861498),\n# (0.07622262416466963, 0.3962146058689695 , 0.27912715826653534 , 0.7515643883001745),\n# (0.30878532523061153, 0.8553768789387086 , 0.9577415585116588 , 2.121903762680979 ),\n# (0.5288343561208022 , 0.17048864443625933, 0.07915689716226904 , 0.7784798977193306),\n# (0.8804269791375121 , 0.45517504750917714, 0.1601389248542675 , 1.4957409515009568),\n# (0.9556552723429782 , 0.8884504475901043 , 0.6412854758843308 , 2.4853911958174133),\n# (0.0227638618687922 , 0.9295332854783015 , 0.3234597575660103 , 1.275756904913104 ),\n# (0.684075052174589 , 0.6654774682866273 , 0.5246593820025259 , 1.8742119024637423),\n# (0.9841793718333871 , 0.5813955915551511 , 0.39577520705133684 , 1.961350170439875 ),\n# (0.9889343795296571 , 0.22830104497714432, 0.20011292764078448 , 1.4173483521475858)], \n# dtype=[('x', '> E = [];\n>> A = [1 2 3 4 5; 10 20 30 40 50];\n>> E = [E ; A]\n\nE =\n\n 1 2 3 4 5\n 10 20 30 40 50\n```\nNow I want the same thing in Numpy but I have problems, look at this: \n```none\n\n```python\nE = array([],dtype=int)\n```\n\n```python\nE\n#Output\n#array([], dtype=int64)\n```\n\n```python\nA = array([[1,2,3,4,5],[10,20,30,40,50]])\n```\n\n```python\nE = vstack((E,A))\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n# File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Extras\/lib\/python\/numpy\/core\/shape_base.py\", line 226, in vstack\n# return _nx.concatenate(map(atleast_2d,tup),0)\n#ValueError: array dimensions must agree except for d_0\n#```\n#I have a similar situation when I do this with: \n#```none\n```\n\n```python\nE = 
concatenate((E,A),axis=0)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#ValueError: arrays must have same number of dimensions\n#```\n#Or: \n#```none\n```\n\n```python\nE = append([E],[A],axis=0)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n# File \"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Extras\/lib\/python\/numpy\/lib\/function_base.py\", line 3577, in append\n# return concatenate((arr, values), axis=axis)\n#ValueError: arrays must have same number of dimensions\n#```\n```","response":"if you know the number of columns before hand: \n```py\n\n```python\nxs = np.array([[1,2,3,4,5],[10,20,30,40,50]])\n```\n\n```python\nys = np.array([], dtype=np.int64).reshape(0,5)\n```\n\n```python\nys\n#Output\n#array([], shape=(0, 5), dtype=int64)\n```\n\n```python\nnp.vstack([ys, xs])\n#Output\n#array([[ 1., 2., 3., 4., 5.],\n# [ 10., 20., 30., 40., 50.]])\n#```\n#if not: \n#```py\n```\n\n```python\nys = np.array([])\n```\n\n```python\nys = np.vstack([ys, xs]) if ys.size else xs\n#Output\n#array([[ 1, 2, 3, 4, 5],\n# [10, 20, 30, 40, 50]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22732589\/concatenating-empty-array-in-numpy","best_answers_votes":135,"tags":["python","arrays","matlab","numpy"],"question_length":1296,"response_length":453,"tag_count":4} {"question":"numpy.sin function in degrees? I'm working on a problem that has to do with calculating angles of refraction and what not. However, it seems that I'm unable to use the numpy.sin() function in degrees. I have tried to use numpy.degrees() and numpy.rad2deg(). \n```\nnumpy.sin(90)\n\nnumpy.degrees(numpy.sin(90))\n```\nBoth return ~ 0.894 and ~ 51.2 respectively. Thanks for your help.","response":"You don't want to convert to degrees, because you already have your number (90) in degrees. You need to convert 90 from degrees to radians, and you need to do it before you take the sine: \n```\n\n```python\nnp.sin(np.deg2rad(90))\n#Output\n#1.0\n#```\n#(You can use either deg2rad or radians.)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28077733\/numpy-sin-function-in-degrees","best_answers_votes":97,"tags":["python","math","numpy","trigonometry"],"question_length":377,"response_length":268,"tag_count":4} {"question":"Get the column names of a python numpy ndarray Let's say I have a data file called data.txt that looks like: \n```\nTIME FX FY FZ\n0 10 5 6\n1 2 4 7\n2 5 2 6\n...\n```\nIn Python run: \n```\nimport numpy as np\n\nmyData = np.genfromtxt(\"data.txt\", names=True)\n\n```python\nprint myData[\"TIME\"]\n#Output\n#[0, 1, 2]\n#```\n#The names at the top of my data file will vary, so what I would like to do is find out what the names of my arrays in the data file are. I would like something like: \n#```\n```\n\n```python\nprint myData.names\n#Output\n#[TIME, F0, F1, F2]\n#```\n#I thought about just to read in the data file and get the first line and parse it as a separate operation, but that doesn't seem very efficient or elegant.\n```","response":"Try: \n```\nmyData.dtype.names\n```\nThis will return a tuple of the field names. 
\n```\nmyData.dtype.names\n# ('TIME', 'FX', 'FY', 'FZ')\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7561017\/get-the-column-names-of-a-python-numpy-ndarray","best_answers_votes":86,"tags":["python","arrays","numpy","names"],"question_length":674,"response_length":150,"tag_count":4} {"question":"np.random.rand vs np.random.random I find Python (and its ecosystem) to be full of strange conventions and inconsistencies and this is another example: \n- np.random.rand: Create an array of the given shape and populate it with random samples from a uniform distribution over [0, 1).\n- np.random.random: Return random floats in the half-open interval [0.0, 1.0). Results are from the \u201ccontinuous uniform\u201d distribution over the stated interval.\n??? What exactly is the difference there?","response":"First note that numpy.random.random is actually an alias for numpy.random.random_sample. I'll use the latter in the following. (See this question and answer for more aliases.) Both functions generate samples from the uniform distribution on [0, 1). The only difference is in how the arguments are handled. With numpy.random.rand, the length of each dimension of the output array is a separate argument. With numpy.random.random_sample, the shape argument is a single tuple. For example, to create an array of samples with shape (3, 5), you can write \n```\nsample = np.random.rand(3, 5)\n```\nor \n```\nsample = np.random.random_sample((3, 5))\n```\n(Really, that's it.) Update As of version 1.17, NumPy has a new random API. The recommended method for generating samples from the uniform distribution on [0, 1) is: \n```\nrng = np.random.default_rng() # Create a default Generator.\nrng.random(size=10) # Generate 10 samples.\n# array([0.00416913, 0.31533329, 0.19057857, 0.48732511, 0.40638395,\n# 0.32165646, 0.02597142, 0.19788567, 0.08142055, 0.15755424])\n```\nThe new Generator class does not have the rand() or random_sample() methods. There is a uniform() method that allows you to specify the lower and upper bounds of the distribution. E.g. \n```\nrng.uniform(1, 2, size=10)\n# array([1.75573298, 1.79862591, 1.53700962, 1.29183769, 1.16439681,\n# 1.64413869, 1.7675135 , 1.02121057, 1.37345967, 1.73589452])\n```\nThe old functions in the numpy.random namespace will continue to work, but they are considered \"frozen\", with no ongoing development. If you are writing new code, and you don't have to support pre-1.17 versions of numpy, it is recommended that you use the new random API.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47231852\/np-random-rand-vs-np-random-random","best_answers_votes":72,"tags":["python","numpy"],"question_length":477,"response_length":1696,"tag_count":2} {"question":"Convert numpy array type and values from Float64 to Float32 I am trying to convert a threshold array (from a pickled isolation forest from scikit-learn) from Float64 to Float32: \n```\nfor i in range(len(tree.tree_.threshold)):\n tree.tree_.threshold[i] = tree.tree_.threshold[i].astype(np.float32)\n```\nThen, printing it: \n```\nfor value in tree.tree_.threshold[:5]:\n print(type(value))\n print(value)\n```\nthe output I am getting is: \n```\n<type 'numpy.float64'>\n526226.0\n<type 'numpy.float64'>\n91.9514312744\n<type 'numpy.float64'>\n3.60330319405\n<type 'numpy.float64'>\n-2.0\n<type 'numpy.float64'>\n-2.0\n```\nI am not getting a proper conversion to Float32. 
I want to convert the values and their type to Float32. Does anybody have a workaround for this?","response":"The problem is that you do not do any type conversion of the numpy array. You calculate a float32 variable and put it as an entry into a float64 numpy array. numpy then converts it properly back to float64. Try something like this: \n```\na = np.zeros(4,dtype=\"float64\") \nprint a.dtype\nprint type(a[0])\na = np.float32(a)\nprint a.dtype\nprint type(a[0])\n```\nThe output (tested with python 2.7): \n```\nfloat64\n<type 'numpy.float64'>\nfloat32\n<type 'numpy.float32'>\n```\nIn your case, a is the array tree.tree_.threshold.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45955186\/convert-numpy-array-type-and-values-from-float64-to-float32","best_answers_votes":59,"tags":["python","numpy","scikit-learn","pickle"],"question_length":758,"response_length":507,"tag_count":4} {"question":"numpy: Efficiently avoid 0s when taking log(matrix) \n```\nfrom numpy import *\n\nm = array([[1,0],\n [2,3]])\n```\nI would like to compute the element-wise log2(m), but only in the places where m is not 0. In those places, I would like to have 0 as a result. I am now fighting against: \n```\nRuntimeWarning: divide by zero encountered in log2\n```\nTry 1: using where \n```\nres = where(m != 0, log2(m), 0)\n```\nwhich computes the correct result, but I still get a logged RuntimeWarning: divide by zero encountered in log2. It looks like (and syntactically it is quite obvious) numpy still computes log2(m) on the full matrix and only afterwards where picks the values to keep. I would like to avoid this warning. Try 2: using masks \n```\nfrom numpy import ma\n\nres = ma.filled(log2(ma.masked_equal(m, 0)), 0)\n```\nSure masking away the zeros will prevent log2 from being applied to them, won't it? Unfortunately not: we still get RuntimeWarning: divide by zero encountered in log2. Even though the matrix is masked, log2 still seems to be applied to every element. How can I efficiently compute the element-wise log of a numpy array without getting division-by-zero warnings? Of course I could temporarily disable the logging of these warnings using seterr, but that doesn't look like a clean solution. And sure a double for loop would help with treating 0s specially, but it defeats the efficiency of numpy. Any ideas?","response":"Another option is to use the where parameter of numpy's ufuncs: \n```\nm = np.array([[1., 0], [2, 3]])\nres = np.log2(m, out=np.zeros_like(m, dtype=np.float64), where=(m!=0))\n```\nNo RuntimeWarning is raised, and zeros are introduced where the log is not computed. Note, courtesy of @JStrahl's comment, that adding the argument dtype=np.float64 to np.zeros_like() avoids universal function (ufunc) casting errors (e.g. for log2, log10, etc.) when m is not of the np.float64 data type.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21752989\/numpy-efficiently-avoid-0s-when-taking-logmatrix","best_answers_votes":46,"tags":["numpy"],"question_length":1411,"response_length":472,"tag_count":1} {"question":"Plotting a decision boundary separating 2 classes using Matplotlib's pyplot I could really use a tip to help me plot a decision boundary to separate two classes of data. I created some sample data (from a Gaussian distribution) via Python NumPy. In this case, every data point is a 2D coordinate, i.e., a 1 column vector consisting of 2 rows. 
E.g., \n```\n[ 1\n 2 ]\n```\nLet's assume I have 2 classes, class1 and class2, and I created 100 data points for class1 and 100 data points for class2 via the code below (assigned to the variables x1_samples and x2_samples). \n```\nmu_vec1 = np.array([0,0])\ncov_mat1 = np.array([[2,0],[0,2]])\nx1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)\nmu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector\n\nmu_vec2 = np.array([1,2])\ncov_mat2 = np.array([[1,0],[0,1]])\nx2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)\nmu_vec2 = mu_vec2.reshape(1,2).T\n```\nWhen I plot the data points for each class, it looks like two overlapping clusters (scatter plot omitted). Now, I came up with an equation for a decision boundary to separate both classes and would like to add it to the plot. However, I am not really sure how I can plot this function: \n```\ndef decision_boundary(x_vec, mu_vec1, mu_vec2):\n g1 = (x_vec-mu_vec1).T.dot((x_vec-mu_vec1))\n g2 = 2*( (x_vec-mu_vec2).T.dot((x_vec-mu_vec2)) )\n return g1 - g2\n```\nI would really appreciate any help! EDIT: Intuitively (if I did my math right) I would expect the decision boundary to look somewhat like the red curve in my sketch (image omitted) when I plot the function.","response":"Your question is more complicated than a simple plot: you need to draw the contour which will maximize the inter-class distance. Fortunately it's a well-studied field, particularly for SVM machine learning. The easiest method is to download the scikit-learn module, which provides a lot of cool methods to draw boundaries; see scikit-learn: Support Vector Machines. Code: \n```\n# -*- coding: utf-8 -*-\n\nimport numpy as np\nimport matplotlib\nfrom matplotlib import pyplot as plt\nimport scipy\nfrom sklearn import svm\n\n\nmu_vec1 = np.array([0,0])\ncov_mat1 = np.array([[2,0],[0,2]])\nx1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)\nmu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector\n\nmu_vec2 = np.array([1,2])\ncov_mat2 = np.array([[1,0],[0,1]])\nx2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)\nmu_vec2 = mu_vec2.reshape(1,2).T\n\n\nfig = plt.figure()\n\n\nplt.scatter(x1_samples[:,0],x1_samples[:,1], marker='+')\nplt.scatter(x2_samples[:,0],x2_samples[:,1], c= 'green', marker='o')\n\nX = np.concatenate((x1_samples,x2_samples), axis = 0)\nY = np.array([0]*100 + [1]*100)\n\nC = 1.0 # SVM regularization parameter\nclf = svm.SVC(kernel = 'linear', gamma=0.7, C=C )\nclf.fit(X, Y)\n```\nLinear Plot \n```\nw = clf.coef_[0]\na = -w[0] \/ w[1]\nxx = np.linspace(-5, 5)\nyy = a * xx - (clf.intercept_[0]) \/ w[1]\n\nplt.plot(xx, yy, 'k-')\n```\nMultiLinear Plot \n```\nC = 1.0 # SVM regularization parameter\nclf = svm.SVC(kernel = 'rbf', gamma=0.7, C=C )\nclf.fit(X, Y)\n\nh = .02 # step size in the mesh\n# create a mesh to plot in\nx_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1\ny_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n\n\n# Plot the decision boundary. For that, we will assign a color to each\n# point in the mesh [x_min, x_max]x[y_min, y_max].\nZ = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.contour(xx, yy, Z, cmap=plt.cm.Paired)\n```\nImplementation If you want to implement it yourself, you need to solve the corresponding quadratic optimization problem; the equation is given in the Wikipedia article on SVMs (formula image omitted). Unfortunately, for non-linear boundaries like the one you draw, it's a difficult problem that relies on a kernel trick, and there isn't a clear-cut solution.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22294241\/plotting-a-decision-boundary-separating-2-classes-using-matplotlibs-pyplot","best_answers_votes":54,"tags":["python","numpy","matplotlib"],"question_length":1523,"response_length":2287,"tag_count":3} {"question":"gradient descent using python and numpy \n```\ndef gradient(X_norm,y,theta,alpha,m,n,num_it):\n temp=np.array(np.zeros_like(theta,float))\n for i in range(0,num_it):\n h=np.dot(X_norm,theta)\n #temp[j]=theta[j]-(alpha\/m)*( np.sum( (h-y)*X_norm[:,j][np.newaxis,:] ) )\n temp[0]=theta[0]-(alpha\/m)*(np.sum(h-y))\n temp[1]=theta[1]-(alpha\/m)*(np.sum((h-y)*X_norm[:,1]))\n theta=temp\n return theta\n\n\n\nX_norm,mean,std=featureScale(X)\n#length of X (number of rows)\nm=len(X)\nX_norm=np.array([np.ones(m),X_norm])\nn,m=np.shape(X_norm)\nnum_it=1500\nalpha=0.01\ntheta=np.zeros(n,float)[:,np.newaxis]\nX_norm=X_norm.transpose()\ntheta=gradient(X_norm,y,theta,alpha,m,n,num_it)\nprint theta\n```\nMy theta from the above code is 100.2 100.2, but it should be 100.2 61.09 in matlab, which is correct.","response":"I think your code is a bit too complicated and it needs more structure, because otherwise you'll be lost in all equations and operations. In the end this regression boils down to four operations: \n- Calculate the hypothesis h = X * theta\n- Calculate the loss = h - y and maybe the squared cost (loss^2)\/2m\n- Calculate the gradient = X' * loss \/ m\n- Update the parameters theta = theta - alpha * gradient\nIn your case, I guess you have confused m with n. Here m denotes the number of examples in your training set, not the number of features. 
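In vectorized NumPy terms those four operations are only a few lines; a sketch (variable names follow the code below): \n```\nhypothesis = np.dot(x, theta) # h = X * theta, shape (m,)\nloss = hypothesis - y # per-example error\ngradient = np.dot(x.T, loss) \/ m # average over all m examples\ntheta = theta - alpha * gradient # parameter update\n```\n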
Let's have a look at my variation of your code: \n```\nimport numpy as np\nimport random\n\n# m denotes the number of examples here, not the number of features\ndef gradientDescent(x, y, theta, alpha, m, numIterations):\n xTrans = x.transpose()\n for i in range(0, numIterations):\n hypothesis = np.dot(x, theta)\n loss = hypothesis - y\n # avg cost per example (the 2 in 2*m doesn't really matter here.\n # But to be consistent with the gradient, I include it)\n cost = np.sum(loss ** 2) \/ (2 * m)\n print(\"Iteration %d | Cost: %f\" % (i, cost))\n # avg gradient per example\n gradient = np.dot(xTrans, loss) \/ m\n # update\n theta = theta - alpha * gradient\n return theta\n\n\ndef genData(numPoints, bias, variance):\n x = np.zeros(shape=(numPoints, 2))\n y = np.zeros(shape=numPoints)\n # basically a straight line\n for i in range(0, numPoints):\n # bias feature\n x[i][0] = 1\n x[i][1] = i\n # our target variable\n y[i] = (i + bias) + random.uniform(0, 1) * variance\n return x, y\n\n# gen 100 points with a bias of 25 and 10 variance as a bit of noise\nx, y = genData(100, 25, 10)\nm, n = np.shape(x)\nnumIterations= 100000\nalpha = 0.0005\ntheta = np.ones(n)\ntheta = gradientDescent(x, y, theta, alpha, m, numIterations)\nprint(theta)\n```\nAt first I create a small random dataset which should look like this: As you can see I also added the generated regression line and formula that was calculated by excel. You need to take care about the intuition of the regression using gradient descent. As you do a complete batch pass over your data X, you need to reduce the m-losses of every example to a single weight update. In this case, this is the average of the sum over the gradients, thus the division by m. The next thing you need to take care about is to track the convergence and adjust the learning rate. For that matter you should always track your cost every iteration, maybe even plot it. If you run my example, the theta returned will look like this: \n```\nIteration 99997 | Cost: 47883.706462\nIteration 99998 | Cost: 47883.706462\nIteration 99999 | Cost: 47883.706462\n[ 29.25567368 1.01108458]\n```\nWhich is actually quite close to the equation that was calculated by excel (y = x + 30). Note that as we passed the bias into the first column, the first theta value denotes the bias weight.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17784587\/gradient-descent-using-python-and-numpy","best_answers_votes":146,"tags":["python","numpy","machine-learning","linear-regression","gradient-descent"],"question_length":815,"response_length":2927,"tag_count":5} {"question":"find length of sequences of identical values in a numpy array (run length encoding) In a pylab program (which could probably be a matlab program as well) I have a numpy array of numbers representing distances: d[t] is the distance at time t (and the timespan of my data is len(d) time units). The events I'm interested in are when the distance is below a certain threshold, and I want to compute the duration of these events. It's easy to get an array of booleans with b = d0 and b[i-1] and b[i]:\n counter+=1\n if (b[i-1] and not b[i]) or i==len(b)-1:\n durations.append(counter)\n\nprint '.'\n```","response":"Fully numpy vectorized and generic RLE for any array (works with strings, booleans etc too). Outputs tuple of run lengths, start positions, and values. \n```\nimport numpy as np\n\ndef rle(inarray):\n \"\"\" run length encoding. Partial credit to R rle function. 
\n Multi datatype arrays catered for including non Numpy\n returns: tuple (runlengths, startpositions, values) \"\"\"\n ia = np.asarray(inarray) # force numpy\n n = len(ia)\n if n == 0: \n return (None, None, None)\n else:\n y = ia[1:] != ia[:-1] # pairwise unequal (string safe)\n i = np.append(np.where(y), n - 1) # must include last element posi\n z = np.diff(np.append(-1, i)) # run lengths\n p = np.cumsum(np.append(0, z))[:-1] # positions\n return(z, p, ia[i])\n```\nPretty fast (i7): \n```\nxx = np.random.randint(0, 5, 1000000)\n%timeit yy = rle(xx)\n100 loops, best of 3: 18.6 ms per loop\n```\nMultiple data types: \n```\nrle([True, True, True, False, True, False, False])\nOut[8]: \n(array([3, 1, 1, 2]),\n array([0, 3, 4, 5]),\n array([ True, False, True, False], dtype=bool))\n\nrle(np.array([5, 4, 4, 4, 4, 0, 0]))\nOut[9]: (array([1, 4, 2]), array([0, 1, 5]), array([5, 4, 0]))\n\nrle([\"hello\", \"hello\", \"my\", \"friend\", \"okay\", \"okay\", \"bye\"])\nOut[10]: \n(array([2, 1, 1, 2, 1]),\n array([0, 2, 3, 4, 6]),\n array(['hello', 'my', 'friend', 'okay', 'bye'], \n dtype='|S6'))\n```\nSame results as Alex Martelli above: \n```\nxx = np.random.randint(0, 2, 20)\n\nxx\nOut[60]: array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1])\n\nam = runs_of_ones_array(xx)\n\ntb = rle(xx)\n\nam\nOut[63]: array([4, 5, 2, 5])\n\ntb[0][tb[2] == 1]\nOut[64]: array([4, 5, 2, 5])\n\n%timeit runs_of_ones_array(xx)\n10000 loops, best of 3: 28.5 \u00b5s per loop\n\n%timeit rle(xx)\n10000 loops, best of 3: 38.2 \u00b5s per loop\n```\nSlightly slower than Alex (but still very fast), and much more flexible.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1066758\/find-length-of-sequences-of-identical-values-in-a-numpy-array-run-length-encodi","best_answers_votes":81,"tags":["python","matlab","numpy","matplotlib"],"question_length":1495,"response_length":1963,"tag_count":4} {"question":"Normalize numpy array columns in python I have a numpy array where each cell of a specific row represents a value for a feature. I store all of them in an 100*4 matrix. \n```\nA B C\n1000 10 0.5\n765 5 0.35\n800 7 0.09\n```\nAny idea how I can normalize rows of this numpy.array where each value is between 0 and 1? My desired output is: \n```\nA B C\n1 1 1\n0.765 0.5 0.7\n0.8 0.7 0.18(which is 0.09\/0.5)\n```","response":"If I understand correctly, what you want to do is divide by the maximum value in each column. You can do this easily using broadcasting. Starting with your example array: \n```\nimport numpy as np\n\nx = np.array([[1000, 10, 0.5],\n [ 765, 5, 0.35],\n [ 800, 7, 0.09]])\n\nx_normed = x \/ x.max(axis=0)\n\nprint(x_normed)\n# [[ 1. 1. 1. ]\n# [ 0.765 0.5 0.7 ]\n# [ 0.8 0.7 0.18 ]]\n```\nx.max(0) takes the maximum over the 0th dimension (i.e. rows). This gives you a vector of size (ncols,) containing the maximum value in each column. You can then divide x by this vector in order to normalize your values such that the maximum value in each column will be scaled to 1. If x contains negative values you would need to subtract the minimum first: \n```\nx_normed = (x - x.min(0)) \/ x.ptp(0)\n```\nHere, x.ptp(0) returns the \"peak-to-peak\" (i.e. the range, max - min) along axis 0. 
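For instance, a small sketch with made-up values (note the negative entry): \n```\nx = np.array([[ 1., -5.],\n [ 3., 0.],\n [ 5., 5.]])\n\nx_normed = (x - x.min(0)) \/ x.ptp(0)\n# [[ 0. 0. ]\n# [ 0.5 0.5]\n# [ 1. 1. ]]\n```\n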
This normalization also guarantees that the minimum value in each column will be 0.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29661574\/normalize-numpy-array-columns-in-python","best_answers_votes":123,"tags":["python","numpy","normalize"],"question_length":431,"response_length":1002,"tag_count":3} {"question":"Find matching rows in 2 dimensional numpy array I would like to get the index of a 2 dimensional Numpy array that matches a row. For example, my array is this: \n```\nvals = np.array([[0, 0],\n [1, 0],\n [2, 0],\n [0, 1],\n [1, 1],\n [2, 1],\n [0, 2],\n [1, 2],\n [2, 2],\n [0, 3],\n [1, 3],\n [2, 3],\n [0, 0],\n [1, 0],\n [2, 0],\n [0, 1],\n [1, 1],\n [2, 1],\n [0, 2],\n [1, 2],\n [2, 2],\n [0, 3],\n [1, 3],\n [2, 3]])\n```\nI would like to get the index that matches the row [0, 1] which is index 3 and 15. When I do something like numpy.where(vals == [0 ,1]) I get... \n```\n(array([ 0, 3, 3, 4, 5, 6, 9, 12, 15, 15, 16, 17, 18, 21]), array([0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0]))\n```\nI want index array([3, 15]).","response":"You need the np.where function to get the indexes: \n```\n\n```python\nnp.where((vals == (0, 1)).all(axis=1))\n#Output\n#(array([ 3, 15]),)\n#```\n#Or, as the documentation states: If only condition is given, return condition.nonzero() You could directly call .nonzero() on the array returned by .all: \n#```\n```\n\n```python\n(vals == (0, 1)).all(axis=1).nonzero()\n#Output\n#(array([ 3, 15]),)\n#```\n#To dissassemble that: \n#```\n```\n\n```python\nvals == (0, 1)\n#Output\n#array([[ True, False],\n# [False, False],\n# ...\n# [ True, False],\n# [False, False],\n# [False, False]], dtype=bool)\n#```\n#and calling the .all method on that array (with axis=1) gives you True where both are True: \n#```\n```\n\n```python\n(vals == (0, 1)).all(axis=1)\n#Output\n#array([False, False, False, True, False, False, False, False, False,\n# False, False, False, False, False, False, True, False, False,\n# False, False, False, False, False, False], dtype=bool)\n#```\n#and to get which indexes are True: \n#```\n```\n\n```python\nnp.where((vals == (0, 1)).all(axis=1))\n#Output\n#(array([ 3, 15]),)\n#```\n#or \n#```\n```\n\n```python\n(vals == (0, 1)).all(axis=1).nonzero()\n#Output\n#(array([ 3, 15]),)\n#```\n#I find my solution a bit more readable, but as unutbu points out, the following may be faster, and returns the same value as (vals == (0, 1)).all(axis=1): \n#```\n```\n\n```python\n(vals[:, 0] == 0) & (vals[:, 1] == 1)\n#Output\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25823608\/find-matching-rows-in-2-dimensional-numpy-array","best_answers_votes":97,"tags":["python","numpy","scipy"],"question_length":1069,"response_length":1257,"tag_count":3} {"question":"How to check if a variable is either a python list, numpy array or pandas series I have a function that takes in a variable that would work if it is any of the following three types \n```\n1. pandas Series\n 2. numpy array (ndarray)\n 3. python list\n```\nAny other type should be rejected. 
What is the most efficient way to check this?","response":"You can do it using isinstance: \n```\nimport pandas as pd\nimport numpy as np\ndef f(l):\n if isinstance(l,(list,pd.core.series.Series,np.ndarray)):\n print(5)\n else:\n raise Exception('wrong type')\n```\nThen f([1,2,3]) prints 5 while f(3.34) raises an error.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43748991\/how-to-check-if-a-variable-is-either-a-python-list-numpy-array-or-pandas-series","best_answers_votes":65,"tags":["python","arrays","list","pandas","numpy"],"question_length":330,"response_length":272,"tag_count":5} {"question":"Getting the indices of several elements in a NumPy array at once Is there any way to get the indices of several elements in a NumPy array at once? E.g. \n```\nimport numpy as np\na = np.array([1, 2, 4])\nb = np.array([1, 2, 3, 10, 4])\n```\nI would like to find the index of each element of a in b, namely: [0,1,4]. I find the solution I am using a bit verbose: \n```\nimport numpy as np\n\na = np.array([1, 2, 4])\nb = np.array([1, 2, 3, 10, 4])\n\nc = np.zeros_like(a)\nfor i, aa in np.ndenumerate(a):\n c[i] = np.where(b == aa)[0]\n \nprint('c: {0}'.format(c))\n```\nOutput: \n```\nc: [0 1 4]\n```","response":"You could use in1d and nonzero (or where for that matter): \n```\n\n```python\nnp.in1d(b, a).nonzero()[0]\n#Output\n#array([0, 1, 4])\n#```\n#This works fine for your example arrays, but in general the array of returned indices does not honour the order of the values in a. This may be a problem depending on what you want to do next. In that case, a much better answer is the one @Jaime gives here, using searchsorted: \n#```\n```\n\n```python\nsorter = np.argsort(b)\n```\n\n```python\nsorter[np.searchsorted(b, a, sorter=sorter)]\n#Output\n#array([0, 1, 4])\n#```\n#This returns the indices for values as they appear in a. 
For instance: \n#```\n#a = np.array([1, 2, 4])\n#b = np.array([4, 2, 3, 1])\n```\n\n```python\nsorter = np.argsort(b)\n```\n\n```python\nsorter[np.searchsorted(b, a, sorter=sorter)]\n#Output\n#array([3, 1, 0]) # the other method would return [0, 1, 3]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32191029\/getting-the-indices-of-several-elements-in-a-numpy-array-at-once","best_answers_votes":67,"tags":["python","arrays","numpy"],"question_length":584,"response_length":762,"tag_count":3} {"question":"Python pandas: how to remove nan and -inf values I have the following dataframe \n```\ntime X Y X_t0 X_tp0 X_t1 X_tp1 X_t2 X_tp2\n0 0.002876 0 10 0 NaN NaN NaN NaN NaN\n1 0.002986 0 10 0 NaN 0 NaN NaN NaN\n2 0.037367 1 10 1 1.000000 0 NaN 0 NaN\n3 0.037374 2 10 2 0.500000 1 1.000000 0 NaN\n4 0.037389 3 10 3 0.333333 2 0.500000 1 1.000000\n5 0.037393 4 10 4 0.250000 3 0.333333 2 0.500000\n\n....\n1030308 9.962213 256 268 256 0.000000 256 0.003906 255 0.003922\n1030309 10.041799 0 268 0 -inf 256 0.000000 256 0.003906\n1030310 10.118960 0 268 0 NaN 0 -inf 256 0.000000\n```\nI tried with the following \n```\ndf.dropna(inplace=True)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40)\n\nX_train = X_train.drop('time', axis=1)\nX_train = X_train.drop('X_t1', axis=1)\nX_train = X_train.drop('X_t2', axis=1)\nX_test = X_test.drop('time', axis=1)\nX_test = X_test.drop('X_t1', axis=1)\nX_test = X_test.drop('X_t2', axis=1)\nX_test.fillna(X_test.mean(), inplace=True)\nX_train.fillna(X_train.mean(), inplace=True)\ny_train.fillna(y_train.mean(), inplace=True)\n```\nHowever, I am still getting this error ValueError: Input contains NaN, infinity or a value too large for dtype('float32'). whenever i try to fit a regression model fit(X_train, y_train) How can we remove both the NaN and -inf values at the same time?","response":"Use pd.DataFrame.isin and check for rows that have any with pd.DataFrame.any. Finally, use the boolean array to slice the dataframe. \n```\ndf[~df.isin([np.nan, np.inf, -np.inf]).any(1)]\n\n time X Y X_t0 X_tp0 X_t1 X_tp1 X_t2 X_tp2\n4 0.037389 3 10 3 0.333333 2.0 0.500000 1.0 1.000000\n5 0.037393 4 10 4 0.250000 3.0 0.333333 2.0 0.500000\n1030308 9.962213 256 268 256 0.000000 256.0 0.003906 255.0 0.003922\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45745085\/python-pandas-how-to-remove-nan-and-inf-values","best_answers_votes":93,"tags":["python","python-3.x","pandas","numpy","dataframe"],"question_length":1600,"response_length":501,"tag_count":5} {"question":"Convert np.array of type float64 to type uint8 scaling values I have a particular np.array data which represents a particular grayscale image. I need to use SimpleBlobDetector() that unfortunately only accepts 8bit images, so I need to convert this image, obviously having a quality-loss. I've already tried: \n```\nimport numpy as np\nimport cv2\n[...]\ndata = data \/ data.max() #normalizes data in range 0 - 255\ndata = 255 * data\nimg = data.astype(np.uint8)\ncv2.imshow(\"Window\", img)\n```\nBut cv2.imshow is not giving the image as expected, but with strange distortion... In the end, I only need to convert a np.float64 to np.uint8 scaling all the values and truncating the rest, eg. 65535 becomes 255, 65534 becomes 254 and so on.... Any help? 
Thanks.","response":"A better way to normalize your image is to take each value and divide by the largest value experienced by the data type. This ensures that images that have a small dynamic range in your image remain small and they're not inadvertently normalized so that they become gray. For example, if your image had a dynamic range of [0-2], the code right now would scale that to have intensities of [0, 128, 255]. You want these to remain small after converting to np.uint8. Therefore, divide every value by the largest value possible by the image type, not the actual image itself. You would then scale this by 255 to produced the normalized result. Use numpy.iinfo and provide it the type (dtype) of the image and you will obtain a structure of information for that type. You would then access the max field from this structure to determine the maximum value. So with the above, do the following modifications to your code: \n```\nimport numpy as np\nimport cv2\n[...]\ninfo = np.iinfo(data.dtype) # Get the information of the incoming image type\ndata = data.astype(np.float64) \/ info.max # normalize the data to 0 - 1\ndata = 255 * data # Now scale by 255\nimg = data.astype(np.uint8)\ncv2.imshow(\"Window\", img)\n```\nNote that I've additionally converted the image into np.float64 in case the incoming data type is not so and to maintain floating-point precision when doing the division.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/46689428\/convert-np-array-of-type-float64-to-type-uint8-scaling-values","best_answers_votes":85,"tags":["python","image","numpy","opencv","image-processing"],"question_length":748,"response_length":1370,"tag_count":5} {"question":"Finding count of distinct elements in DataFrame in each column I am trying to find the count of distinct values in each column using Pandas. This is what I did. \n```\nimport pandas as pd\nimport numpy as np\n\n# Generate data.\nNROW = 10000\nNCOL = 100\ndf = pd.DataFrame(np.random.randint(1, 100000, (NROW, NCOL)),\n columns=['col' + x for x in np.arange(NCOL).astype(str)])\n```\nI need to count the number of distinct elements for each column, like this: \n```\ncol0 9538\ncol1 9505\ncol2 9524\n```\nWhat would be the most efficient way to do this, as this method will be applied to files which have size greater than 1.5GB? Based upon the answers, df.apply(lambda x: len(x.unique())) is the fastest (notebook). 
%timeit df.apply(lambda x: len(x.unique())) 10 loops, best of 3: 49.5 ms per loop %timeit df.nunique() 10 loops, best of 3: 59.7 ms per loop %timeit df.apply(pd.Series.nunique) 10 loops, best of 3: 60.3 ms per loop %timeit df.T.apply(lambda x: x.nunique(), axis=1) 10 loops, best of 3: 60.5 ms per loop","response":"As of pandas 0.20 we can use nunique directly on DataFrames, i.e.: \n```\ndf.nunique()\na 4\nb 5\nc 1\ndtype: int64\n```\nOther legacy options: You could do a transpose of the df and then using apply call nunique row-wise: \n```\n\n```python\ndf = pd.DataFrame({'a':[0,1,1,2,3],'b':[1,2,3,4,5],'c':[1,1,1,1,1]})\ndf\n#Output\n#a b c\n#0 0 1 1\n#1 1 2 1\n#2 1 3 1\n#3 2 4 1\n#4 3 5 1\n```\n\n```python\ndf.T.apply(lambda x: x.nunique(), axis=1)\n#Output\n#a 4\n#b 5\n#c 1\n#dtype: int64\n#```\n#EDIT As pointed out by @ajcr the transpose is unnecessary: \n#```\n```\n\n```python\ndf.apply(pd.Series.nunique)\n#Output\n#a 4\n#b 5\n#c 1\n#dtype: int64\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30503321\/finding-count-of-distinct-elements-in-dataframe-in-each-column","best_answers_votes":98,"tags":["python","numpy","pandas"],"question_length":1027,"response_length":640,"tag_count":3} {"question":"In numpy.sum() there is parameter called \"keepdims\". What does it do? In numpy.sum() there is parameter called keepdims. What does it do? As you can see here in the documentation: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.sum.html \n```\nnumpy.sum(a, axis=None, dtype=None, out=None, keepdims=False)[source]\nSum of array elements over a given axis.\n\nParameters: \n...\nkeepdims : bool, optional\n If this is set to True, the axes which are reduced are left in the result as\n dimensions with size one. With this option, the result will broadcast\n correctly against the input array.\n...\n```","response":"@Ney @hpaulj is correct, you need to experiment, but I suspect you don't realize that summation for some arrays can occur along axes. Observe the following which reading the documentation \n```\n\n```python\na\n#Output\n#array([[0, 0, 0],\n# [0, 1, 0],\n# [0, 2, 0],\n# [1, 0, 0],\n# [1, 1, 0]])\n```\n\n```python\nnp.sum(a, keepdims=True)\n#Output\n#array([[6]])\n```\n\n```python\nnp.sum(a, keepdims=False)\n#Output\n#6\n```\n\n```python\nnp.sum(a, axis=1, keepdims=True)\n#Output\n#array([[0],\n# [1],\n# [2],\n# [1],\n# [2]])\n```\n\n```python\nnp.sum(a, axis=1, keepdims=False)\n#Output\n#array([0, 1, 2, 1, 2])\n```\n\n```python\nnp.sum(a, axis=0, keepdims=True)\n#Output\n#array([[2, 4, 0]])\n```\n\n```python\nnp.sum(a, axis=0, keepdims=False)\n#Output\n#array([2, 4, 0])\n#```\n#You will notice that if you don't specify an axis (1st two examples), the numerical result is the same, but the keepdims = True returned a 2D array with the number 6, whereas, the second incarnation returned a scalar. Similarly, when summing along axis 1 (across rows), a 2D array is returned again when keepdims = True. The last example, along axis 0 (down columns), shows a similar characteristic... dimensions are kept when keepdims = True. 
Studying axes and their properties is critical to a full understanding of the power of NumPy when dealing with multidimensional data.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/39441517\/in-numpy-sum-there-is-parameter-called-keepdims-what-does-it-do","best_answers_votes":101,"tags":["python","numpy","sum"],"question_length":608,"response_length":1215,"tag_count":3} {"question":"Understanding tensordot After I learned how to use einsum, I am now trying to understand how np.tensordot works. However, I am a little bit lost especially regarding the various possibilities for the parameter axes. To understand it, as I have never practiced tensor calculus, I use the following example: \n```\nA = np.random.randint(2, size=(2, 3, 5))\nB = np.random.randint(2, size=(3, 2, 4))\n```\nIn this case, what are the different possible np.tensordot and how would you compute it manually?","response":"The idea with tensordot is pretty simple - We input the arrays and the respective axes along which the sum-reductions are intended. The axes that take part in sum-reduction are removed in the output and all of the remaining axes from the input arrays are spread-out as different axes in the output keeping the order in which the input arrays are fed. Let's look at few sample cases with one and two axes of sum-reductions and also swap the input places and see how the order is kept in the output. I. One axis of sum-reduction Inputs : \n```\n\n```python\nA = np.random.randint(2, size=(2, 6, 5))\n ...: B = np.random.randint(2, size=(3, 2, 4))\n ...:\n```\nCase #1: \n```\n```\n\n```python\nnp.tensordot(A, B, axes=((0),(1))).shape\n#Output\n#(6, 5, 3, 4)\n#\n#A : (2, 6, 5) -> reduction of axis=0\n#B : (3, 2, 4) -> reduction of axis=1\n#\n#Output : `(2, 6, 5)`, `(3, 2, 4)` ===(2 gone)==> `(6,5)` + `(3,4)` => `(6,5,3,4)`\n#```\n#Case #2 (same as case #1 but the inputs are fed swapped): \n#```\n```\n\n```python\nnp.tensordot(B, A, axes=((1),(0))).shape\n#Output\n#(3, 4, 6, 5)\n#\n#B : (3, 2, 4) -> reduction of axis=1\n#A : (2, 6, 5) -> reduction of axis=0\n#\n#Output : `(3, 2, 4)`, `(2, 6, 5)` ===(2 gone)==> `(3,4)` + `(6,5)` => `(3,4,6,5)`.\n#```\n#II. Two axes of sum-reduction Inputs : \n#```\n```\n\n```python\nA = np.random.randint(2, size=(2, 3, 5))\n ...: B = np.random.randint(2, size=(3, 2, 4))\n ...:\n```\nCase #1: \n```\n```\n\n```python\nnp.tensordot(A, B, axes=((0,1),(1,0))).shape\n#Output\n#(5, 4)\n#\n#A : (2, 3, 5) -> reduction of axis=(0,1)\n#B : (3, 2, 4) -> reduction of axis=(1,0)\n#\n#Output : `(2, 3, 5)`, `(3, 2, 4)` ===(2,3 gone)==> `(5)` + `(4)` => `(5,4)`\n#```\n#Case #2: \n#```\n```\n\n```python\nnp.tensordot(B, A, axes=((1,0),(0,1))).shape\n#Output\n#(4, 5)\n#\n#B : (3, 2, 4) -> reduction of axis=(1,0)\n#A : (2, 3, 5) -> reduction of axis=(0,1)\n#\n#Output : `(3, 2, 4)`, `(2, 3, 5)` ===(2,3 gone)==> `(4)` + `(5)` => `(4,5)`\n#```\n#We can extend this to as many axes as possible.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41870228\/understanding-tensordot","best_answers_votes":76,"tags":["python","numpy","linear-algebra","tensor","dot-product"],"question_length":494,"response_length":1894,"tag_count":5} {"question":"Difference between np.int, np.int_, int, and np.int_t in cython? I am a bit struggled with so many int data types in cython. np.int, np.int_, np.int_t, int I guess int in pure python is equivalent to np.int_, then where does np.int come from? 
I cannot find the document from numpy? Also, why does np.int_ exist given we do already have int? In cython, I guess int becomes a C type when used as cdef int or ndarray[int], and when used as int() it stays as the python caster? Is np.int_ equivalent to long in C? so cdef long is the identical to cdef np.int_? Under what circumstances should I use np.int_t instead of np.int? e.g. cdef np.int_t, ndarray[np.int_t] ... Can someone briefly explain how the wrong use of those types would affect the performance of compiled cython code?","response":"It's a bit complicated because the names have different meanings depending on the context. int In Python The int is normally just a Python type, it's of arbitrary precision, meaning that you can store any conceivable integer inside it (as long as you have enough memory). \n```\n\n```python\nint(10**50)\n#Output\n# 100000000000000000000000000000000000000000000000000\n#```\n#However, when you use it as dtype for a NumPy array it will be interpreted as np.int_ 1. Which is not of arbitrary precision, it will have the same size as C's long: \n#```\n```\n\n```python\nnp.array(10**50, dtype=int)\n#Output\n# OverflowError: Python int too large to convert to C long\n#```\n#That also means the following two are equivalent: \n#```\n#np.array([1,2,3], dtype=int)\n# np.array([1,2,3], dtype=np.int_)\n#```\n#As Cython type identifier it has another meaning, here it stands for the c type int. It's of limited precision (typically 32bits). You can use it as Cython type, for example when defining variables with cdef: \n#```\n#cdef int value = 100 # variable\n# cdef int[:] arr = ... # memoryview\n#```\n#As return value or argument value for cdef or cpdef functions: \n#```\n#cdef int my_function(int argument1, int argument2):\n# # ...\n#```\n#As \"generic\" for ndarray: \n#```\n#cimport numpy as cnp\n# cdef cnp.ndarray[int, ndim=1] val = ...\n#```\n#For type casting: \n#```\n#avalue = (another_value)\n#```\n#And probably many more. In Cython but as Python type. You can still call int and you'll get a \"Python int\" (of arbitrary precision), or use it for isinstance or as dtype argument for np.array. Here the context is important, so converting to a Python int is different from converting to a C int: \n#```\n#cdef object val = int(10) # Python int\n# cdef int val = (10) # C int\n#```\n#np.int Actually this is very easy. It's just an alias for int: \n#```\n```\n\n```python\nint is np.int\n#Output\n#True\n#```\n#So everything from above applies to np.int as well. However you can't use it as a type-identifier except when you use it on the cimported package. In that case it represents the Python integer type. \n#```\n#cimport numpy as cnp\n```\n\ncpdef func(cnp.int obj):\n return obj\n```\nThis will expect obj to be a Python integer not a NumPy type: \n```\n\n```python\nfunc(np.int_(10))\n#Output\n#TypeError: Argument 'obj' has incorrect type (expected int, got numpy.int32)\n```\n\n```python\nfunc(10)\n#Output\n#10\n#```\n#My advise regarding np.int: Avoid it whenever possible. In Python code it's equivalent to int and in Cython code it's also equivalent to Pythons int but if used as type-identifier it will probably confuse you and everyone who reads the code! It certainly confused me... np.int_ Actually it only has one meaning: It's a Python type that represents a scalar NumPy type. 
You use it like Pythons int: \n#```\n```\n\n```python\nnp.int_(10) # looks like a normal Python integer\n#Output\n#10\n```\n\n```python\ntype(np.int_(10)) # but isn't (output may vary depending on your system!)\n#Output\n#numpy.int32\n#```\n#Or you use it to specify the dtype, for example with np.array: \n#```\n```\n\n```python\nnp.array([1,2,3], dtype=np.int_)\n#Output\n#array([1, 2, 3])\n#```\n#But you cannot use it as type-identifier in Cython. cnp.int_t It's the type-identifier version for np.int_. That means you can't use it as dtype argument. But you can use it as type for cdef declarations: \n#```\n#cimport numpy as cnp\n#import numpy as np\n```\n\ncdef cnp.int_t[:] arr = np.array([1,2,3], dtype=np.int_)\n |---TYPE---| |---DTYPE---|\n```\nThis example (hopefully) shows that the type-identifier with the trailing _t actually represents the type of an array using the dtype without the trailing t. You can't interchange them in Cython code! Notes There are several more numeric types in NumPy I'll include a list containing the NumPy dtype and Cython type-identifier and the C type identifier that could also be used in Cython here. But it's basically taken from the NumPy documentation and the Cython NumPy pxd file: \n```\nNumPy dtype Numpy Cython type C Cython type identifier\n\nnp.bool_ None None\nnp.int_ cnp.int_t long\nnp.intc None int \nnp.intp cnp.intp_t ssize_t\nnp.int8 cnp.int8_t signed char\nnp.int16 cnp.int16_t signed short\nnp.int32 cnp.int32_t signed int\nnp.int64 cnp.int64_t signed long long\nnp.uint8 cnp.uint8_t unsigned char\nnp.uint16 cnp.uint16_t unsigned short\nnp.uint32 cnp.uint32_t unsigned int\nnp.uint64 cnp.uint64_t unsigned long\nnp.float_ cnp.float64_t double\nnp.float32 cnp.float32_t float\nnp.float64 cnp.float64_t double\nnp.complex_ cnp.complex128_t double complex\nnp.complex64 cnp.complex64_t float complex\nnp.complex128 cnp.complex128_t double complex\n```\nActually there are Cython types for np.bool_: cnp.npy_bool and bint but both they can't be used for NumPy arrays currently. For scalars cnp.npy_bool will just be an unsigned integer while bint will be a boolean. Not sure what's going on there... 1 Taken From the NumPy documentation \"Data type objects\" Built-in Python types Several python types are equivalent to a corresponding array scalar when used to generate a dtype object: \n```\nint np.int_\nbool np.bool_\nfloat np.float_\ncomplex np.cfloat\nbytes np.bytes_\nstr np.bytes_ (Python2) or np.unicode_ (Python3)\nunicode np.unicode_\nbuffer np.void\n(all others) np.object_\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21851985\/difference-between-np-int-np-int-int-and-np-int-t-in-cython","best_answers_votes":77,"tags":["python","c","numpy","cython"],"question_length":779,"response_length":5610,"tag_count":4} {"question":"Convert Pandas dataframe to Sparse Numpy Matrix directly I am creating a matrix from a Pandas dataframe as follows: \n```\ndense_matrix = np.array(df.as_matrix(columns = None), dtype=bool).astype(np.int)\n```\nAnd then into a sparse matrix with: \n```\nsparse_matrix = scipy.sparse.csr_matrix(dense_matrix)\n```\nIs there any way to go from a df straight to a sparse matrix? Thanks in advance.","response":"df.values is a numpy array, and accessing values that way is always faster than np.array. \n```\nscipy.sparse.csr_matrix(df.values)\n```\nYou might need to take the transpose first, like df.values.T. 
In DataFrames, the columns are axis 0.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20459536\/convert-pandas-dataframe-to-sparse-numpy-matrix-directly","best_answers_votes":75,"tags":["python","numpy","pandas","scipy"],"question_length":385,"response_length":196,"tag_count":4} {"question":"python tilde unary operator as negation numpy bool array Should be a simple question, but I'm unable to find an answer anywhere. The ~ operator in python is documented as a bitwise inversion operator. Fine. I have noticed seemingly schizophrenic behavior though, to wit: \n```\n~True -> -2\n~1 -> -2\n~False -> -1\n~0 -> -1\n~numpy.array([True,False],dtype=int) -> array([-2,-1])\n~numpy.array([True,False],dtype=bool) -> array([False,True])\n```\nIn the first 4 examples, I can see that python is implementing (as documented) ~x = -(x+1), with the input treated as an int even if it's boolean. Hence, for a scalar boolean, ~ is not treated as a logical negation. Note that the behavior is identical on a numpy array defined with boolean values but with an int type. Why does ~ then work as a logical negation operator on a boolean array (Also notice: ~numpy.isfinite(numpy.inf) -> True?)? It is extremely annoying that I must use not() on a scalar, but not() won't work to negate an array. Then for an array, I must use ~, but ~ won't work to negate a scalar...","response":"not is implemented through the __nonzero__ special method, which is required to return either True or False, so it can't give the required result. Instead the ~ operator is used, which is implemented through the __invert__ special method. For the same reason, & and | are used in place of and and or. PEP 335 aimed to allow overloading of boolean operators but was rejected because of excessive overhead (it would e.g. complicate if statements). PEP 225 suggests a general syntax for "elementwise" operators, which would provide a more general solution, but has been deferred. It appears that the current situation, while awkward, is not painful enough to force change. np.isfinite when called on a scalar returns a value of type np.bool_, not bool. np.bool_ is also the type you get when extracting a scalar value from an array of bool dtype. If you use np.True_ and np.False_ in place of True and False you will get consistent behaviour under ~.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13600988\/python-tilde-unary-operator-as-negation-numpy-bool-array","best_answers_votes":51,"tags":["python","numpy","boolean","multidimensional-array"],"question_length":1053,"response_length":944,"tag_count":4} {"question":"A fast way to find the largest N elements in a numpy array I know I can do it like the following: \n```\nimport numpy as np\nN=10\na=np.arange(1,100,1)\nnp.argsort(a)[-N:]\n```\nHowever, it is very slow since it does a full sort. I wonder whether numpy provides some methods to do it fast.","response":"numpy 1.8 implements partition and argpartition that perform partial sort (in O(n) time, as opposed to a full sort, which is O(n log n)). 
\n```\nimport numpy as np\n\ntest = np.array([9,1,3,4,8,7,2,5,6,0])\n\ntemp = np.argpartition(-test, 4)\nresult_args = temp[:4]\n\ntemp = np.partition(-test, 4)\nresult = -temp[:4]\n```\nResult: \n```\n>>> result_args\narray([0, 4, 8, 5]) # indices of highest vals\n>>> result\narray([9, 8, 6, 7]) # highest vals\n```\nTiming: \n```\n\n```python\na = np.arange(10000)\n```\n\n```python\nnp.random.shuffle(a)\n```\n\n```python\n%timeit np.argsort(a)\n1000 loops, best of 3: 1.02 ms per loop\n```\n\n```python\n%timeit np.argpartition(a, 100)\n10000 loops, best of 3: 139 us per loop\n```\n\n```python\n%timeit np.argpartition(a, 1000)\n10000 loops, best of 3: 141 us per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10337533\/a-fast-way-to-find-the-largest-n-elements-in-an-numpy-array","best_answers_votes":90,"tags":["python","sorting","numpy"],"question_length":281,"response_length":751,"tag_count":3} {"question":"Figure to image as a numpy array I'm trying to get a numpy array image from a Matplotlib figure and I'm currently doing it by saving to a file, then reading the file back in, but I feel like there has to be a better way. Here's what I'm doing now: \n```\nfrom matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas\nfrom matplotlib.figure import Figure\n\nfig = Figure()\ncanvas = FigureCanvas(fig)\nax = fig.gca()\n\nax.text(0.0,0.0,\"Test\", fontsize=45)\nax.axis('off')\n\ncanvas.print_figure(\"output.png\")\nimage = plt.imread(\"output.png\")\n```\nI tried this: \n```\nimage = np.fromstring( canvas.tostring_rgb(), dtype='uint8' )\n```\nfrom an example I found but it gives me an error saying that 'FigureCanvasAgg' object has no attribute 'renderer'.","response":"In order to get the figure contents as RGB pixel values, the matplotlib.backend_bases.Renderer needs to first draw the contents of the canvas. You can do this by manually calling canvas.draw(): \n```\nfrom matplotlib.figure import Figure\n\nfig = Figure()\ncanvas = fig.canvas\nax = fig.gca()\n\nax.text(0.0,0.0,\"Test\", fontsize=45)\nax.axis('off')\n\ncanvas.draw() # Draw the canvas, cache the renderer\n\nimage_flat = np.frombuffer(canvas.tostring_rgb(), dtype='uint8') # (H * W * 3,)\n# NOTE: reversed converts (W, H) from get_width_height to (H, W)\nimage = image_flat.reshape(*reversed(canvas.get_width_height()), 3) # (H, W, 3)\n```\nSee here for more info on the Matplotlib API.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35355930\/figure-to-image-as-a-numpy-array","best_answers_votes":65,"tags":["python","numpy","matplotlib"],"question_length":748,"response_length":671,"tag_count":3} {"question":"How to get the values from a NumPy array using multiple indices I have a NumPy array that looks like this: \n```\narr = np.array([100.10, 200.42, 4.14, 89.00, 34.55, 1.12])\n```\nHow can I get multiple values from this array by index? For example, how can I get the values at the index positions 1, 4, and 5? 
I was trying something like this, which is incorrect: \n```\narr[1, 4, 5]\n```","response":"Try like this: \n```\n\n```python\narr = np.array([100.10, 200.42, 4.14, 89.00, 34.55, 1.12])\n```\n\n```python\narr[[1,4,5]]\n#Output\n#array([ 200.42, 34.55, 1.12])\n#```\n#And for multidimensional arrays: \n#```\n```\n\n```python\narr = np.arange(9).reshape(3,3)\n```\n\n```python\narr\n#Output\n#array([[0, 1, 2],\n# [3, 4, 5],\n# [6, 7, 8]])\n```\n\n```python\narr[[0, 1, 1], [1, 0, 2]]\n#Output\n#array([1, 3, 5])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14162026\/how-to-get-the-values-from-a-numpy-array-using-multiple-indices","best_answers_votes":101,"tags":["python","numpy"],"question_length":380,"response_length":326,"tag_count":2} {"question":"Filtering (reducing) a NumPy Array Suppose I have a NumPy array arr that I want to element-wise filter (reduce) depending on the truth value of a (broadcastable) function, e.g. I want to get only values below a certain threshold value k: \n```py\ndef cond(x):\n return x < k\n```\nThere are a couple of methods, e.g.: Using a generator: np.fromiter((x for x in arr if cond(x)), dtype=arr.dtype) (which is a memory efficient version of using a list comprehension: np.array([x for x in arr if cond(x)]) because np.fromiter() will produce a NumPy array directly, without the need to allocate an intermediate Python list) Using boolean masking: arr[cond(arr)] Using integer indexing: arr[np.nonzero(cond(arr))] (or equivalently using np.where() as it defaults to np.nonzero() with only one condition) Using explicit looping with: single pass and final copying\/resizing two passes: one to determine the size of the result and one to actually perform the computation (The last two approaches to be accelerated with Cython or Numba) Which is the fastest? What about memory efficiency? (EDITED: To use directly np.nonzero() instead of np.where() as per @ShadowRanger comment)","response":"Summary Using a loop-based approach with single pass and copying, accelerated with Numba, offers the best overall trade-off in terms of speed, memory efficiency and flexibility. If the execution of the condition function is sufficiently fast, two-passes (filter2_nb()) may be faster, while they are more memory efficient regardless. Also, for sufficiently large inputs, resizing instead of copying (filter_resize_xnb()) leads to faster execution. If the data type (and the condition function) is known ahead of time and can be compiled, the Cython acceleration seems to be faster. It is likely that a similar hard-coding of the condition would lead to comparable speed-up with Numba accerelation as well. When it comes to NumPy-only based approaches, boolean masking or integer indexing are of comparable speed, and which one comes out faster depends largely on the filtering factor, i.e. the portion of values that passes through the filtering condition. The np.fromiter() approach is much slower (it would be off-charts in the plot), but does not produce large temporary objects. Note that the following tests are meant to give some insights into the different approaches and should be taken with a grain of salt. The most relevant assumptions are that the condition is broadcastable and that it would eventually compute very fast. 
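For orientation, the definitions below assume a benchmark setup roughly along these lines (a sketch; the sizes mirror the constants used in the Cython blocks, and the exact values are arbitrary): \n```py\nimport numpy as np\n\nNUM = 1048576 # input size (arbitrary)\nK = NUM \/\/ 2 # threshold used by cond()\narr = np.random.randint(0, NUM, NUM)\n\ndef cond(x, k=K):\n return x < k\n```\n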
Definitions Using a generator: \n```py\ndef filter_fromiter(arr, cond):\n return np.fromiter((x for x in arr if cond(x)), dtype=arr.dtype)\n```\nUsing boolean masking: \n```py\ndef filter_mask(arr, cond):\n return arr[cond(arr)]\n```\nUsing integer indexing: \n```py\ndef filter_idx(arr, cond):\n return arr[np.nonzero(cond(arr))]\n```\n4a. Using explicit looping, with single pass and final copying\/resizing Cython-accelerated with copying (pre-compiled condition) \n```\n%%cython -c-O3 -c-march=native -a\n#cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True\n\n\nimport numpy as np\n\n\ncdef long NUM = 1048576\ncdef long MAX_VAL = 1048576\ncdef long K = 1048576 \/\/ 2\n\n\ncdef int cond_cy(long x, long k=K):\n return x < k\n\n\ncdef size_t _filter_cy(long[:] arr, long[:] result, size_t size):\n cdef size_t j = 0\n for i in range(size):\n if cond_cy(arr[i]):\n result[j] = arr[i]\n j += 1\n return j\n\n\ndef filter_cy(arr):\n result = np.empty_like(arr)\n new_size = _filter_cy(arr, result, arr.size)\n return result[:new_size].copy()\n```\nCython-accelerated with resizing (pre-compiled condition) \n```\ndef filter_resize_cy(arr):\n result = np.empty_like(arr)\n new_size = _filter_cy(arr, result, arr.size)\n result.resize(new_size)\n return result\n```\nNumba-accelerated with copying \n```py\nimport numba as nb\n\n\n@nb.njit\ndef cond_nb(x, k=K):\n return x < k\n\n\n@nb.njit\ndef filter_nb(arr, cond_nb):\n result = np.empty_like(arr)\n j = 0\n for i in range(arr.size):\n if cond_nb(arr[i]):\n result[j] = arr[i]\n j += 1\n return result[:j].copy()\n```\nNumba-accelerated with resizing \n```py\n@nb.njit\ndef _filter_out_nb(arr, out, cond_nb):\n j = 0\n for i in range(arr.size):\n if cond_nb(arr[i]):\n out[j] = arr[i]\n j += 1\n return j\n\n\ndef filter_resize_xnb(arr, cond_nb):\n result = np.empty_like(arr)\n j = _filter_out_nb(arr, result, cond_nb)\n result.resize(j, refcheck=False) # unsupported in NoPython mode\n return result\n```\nNumba-accelerated with a generator and np.fromiter() \n```py\n@nb.njit\ndef filter_gen_nb(arr, cond_nb):\n for i in range(arr.size):\n if cond_nb(arr[i]):\n yield arr[i]\n\n\ndef filter_gen_xnb(arr, cond_nb):\n return np.fromiter(filter_gen_nb(arr, cond_nb), dtype=arr.dtype)\n```\n4b. Using explicit looping with two passes: one to determine the size of the result and one to actually perform the computation Cython-accelerated (pre-compiled condition) \n```\n%%cython -c-O3 -c-march=native -a\n#cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True\n\n\ncdef size_t _filtered_size_cy(long[:] arr, size_t size):\n cdef size_t j = 0\n for i in range(size):\n if cond_cy(arr[i]):\n j += 1\n return j\n\n\ndef filter2_cy(arr):\n cdef size_t new_size = _filtered_size_cy(arr, arr.size)\n result = np.empty(new_size, dtype=arr.dtype)\n new_size = _filter_cy(arr, result, arr.size)\n return result\n```\nNumba-accelerated \n```py\n@nb.njit\ndef filter2_nb(arr, cond_nb):\n j = 0\n for i in range(arr.size):\n if cond_nb(arr[i]):\n j += 1\n result = np.empty(j, dtype=arr.dtype)\n j = 0\n for i in range(arr.size):\n if cond_nb(arr[i]):\n result[j] = arr[i]\n j += 1\n return result\n```\nTiming Benchmarks (The generator-based filter_fromiter() method is much slower than the others -- by approx. 2 orders of magnitude. Similar (and perhaps slightly worse) performances can be expected from a list comprehension. 
This would be true for any explicit looping with non-accelerated code.) The timing would depend on both the input array size and the percent of filtered items. As a function of input size The first graph addresses the timings as a function of the input size (for ~50% filtering factor -- i.e. 50% of the elements appear in the result): In general, explicit looping with one form of acceleration leads to the fastest execution, with slight variations depending on input size. Within NumPy, the integer indexing approaches are basically on par with boolean masking. The benefits of using np.fromiter() (no pre-allocation) can be reaped by writing a Numba-accelerated generator, which would come out slower than the other approaches (within an order of magnitude), but much faster than pure-Python looping. As a function of filling The second graph addresses the timings as a function of items passing through the filter (for a fixed input size of ~1 million elements): The first observation is that all methods are slowest when approaching a ~50% filling and with less, or more filling they are faster, and fastest towards no filling (highest percent of filtered-out values, lowest percent of passing through values as indicated in the x-axis of the graph). Again, explicit looping with some mean of acceleration leads to the fastest execution. Within NumPy, the integer indexing and boolean masking approaches are again basically the same. (Full code available here) Memory Considerations The generator-based filter_fromiter() method requires only minimal temporary storage, independently of the size of the input. Memory-wise this is the most efficient method. This approach can be effectively speed up with a Numba-accelerated generator. Of similar memory efficiency are the Cython \/ Numba two-passes methods, because the size of the output is determined during the first pass. The caveat here is that computing the condition has to be fast for these methods to be fast. On the memory side, the single-pass solutions for both Cython and Numba require a temporary array of the size of the input. Hence, these are not very memory-efficient compared to two-passes or the generator-based one. Yet they are of similar asymptotic temporary memory footprint compared to masking, but the constant term is typically larger than masking. The boolean masking solution requires a temporary array of the size of the input but of type bool, which in NumPy is 1 byte, so this is ~8 times smaller than the default size of a NumPy array on a typical 64-bit system. The integer indexing solution has the same requirement as the boolean mask slicing in the first step (inside np.nonzero() call), which gets converted to a series of ints (typically int64 on a 64-bit system) in the second step (the output of np.nonzero()). This second step, therefore, has variable memory requirements, depending on the number of filtered elements. 
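A quick, hedged way to inspect those footprints in practice (using the same arr and cond as above; the index size scales with how many elements pass the filter): \n```py\nmask = cond(arr) # bool mask: 1 byte per input element\nidx = np.nonzero(mask)[0] # indices: typically 8 bytes per passing element\nprint(mask.nbytes, idx.nbytes)\n```\n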
Remarks: \n- both boolean masking and integer indexing require some form of conditioning that is capable of producing a boolean mask (or, alternatively, a list of indices); in the above implementation, the condition is broadcastable \n- the generator and the Numba-accelerated methods are also the most flexible when it comes to specifying a different filtering condition \n- the Numba-accelerated methods require the condition to be Numba-compatible to access the Numba acceleration in NoPython mode \n- the Cython solution requires specifying the data types for it to be fast, or extra effort for multiple types dispatching, and extra effort (not explored here) to get the same level of flexibility as the other methods \n- for both Numba and Cython, the filtering condition can be hard-coded, leading to marginal but appreciable speed differences \n- the single-pass solutions require additional code to handle the unused (but otherwise initially allotted) memory \n- the NumPy methods do NOT return a view of the input, but a copy, as a result of advanced indexing: \n```py\narr = np.arange(100)\nk = 50\nprint('`arr[arr > k]` is a copy: ', arr[arr > k].base is None)\n# `arr[arr > k]` is a copy: True\nprint('`arr[np.where(arr > k)]` is a copy: ', arr[np.where(arr > k)].base is None)\n# `arr[np.where(arr > k)]` is a copy: True\nprint('`arr[:k]` is a copy: ', arr[:k].base is None)\n# `arr[:k]` is a copy: False\n```\n(EDITED: various improvements based on @ShadowRanger, @PaulPanzer, @max9111 and @DavidW comments.)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/58422690\/filtering-reducing-a-numpy-array","best_answers_votes":117,"tags":["python","numpy","cython","numba"],"question_length":1165,"response_length":9388,"tag_count":4} {"question":"Melt the Upper Triangular Matrix of a Pandas Dataframe Given a square pandas DataFrame of the following form: \n```\n a b c\na 1 .5 .3\nb .5 1 .4\nc .3 .4 1\n```\nHow can the upper triangle be melted to get a matrix of the following form \n```\nRow Column Value\n a a 1\n a b .5 \n a c .3\n b b 1\n b c .4\n c c 1 \n\n#Note the combination a,b is only listed once. There is no b,a listing\n```\nI'm more interested in an idiomatic pandas solution, a custom indexer would be easy enough to write by hand... 
Thank you in advance for your consideration and response.","response":"First I convert lower values of df to NaN by where and numpy.triu and then stack, reset_index and set column names: \n```\nimport numpy as np\n\nprint df\n a b c\na 1.0 0.5 0.3\nb 0.5 1.0 0.4\nc 0.3 0.4 1.0\n\nprint np.triu(np.ones(df.shape)).astype(np.bool)\n[[ True True True]\n [False True True]\n [False False True]]\n\ndf = df.where(np.triu(np.ones(df.shape)).astype(np.bool))\nprint df\n a b c\na 1 0.5 0.3\nb NaN 1.0 0.4\nc NaN NaN 1.0\n\ndf = df.stack().reset_index()\ndf.columns = ['Row','Column','Value']\nprint df\n\n Row Column Value\n0 a a 1.0\n1 a b 0.5\n2 a c 0.3\n3 b b 1.0\n4 b c 0.4\n5 c c 1.0\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/34417685\/melt-the-upper-triangular-matrix-of-a-pandas-dataframe","best_answers_votes":85,"tags":["python","pandas","numpy","reshape","melt"],"question_length":640,"response_length":686,"tag_count":5} {"question":"Pandas : compute mean or std (standard deviation) over entire dataframe Here is my problem, I have a dataframe like this : \n```\nDepr_1 Depr_2 Depr_3\nS3 0 5 9\nS2 4 11 8\nS1 6 11 12\nS5 0 4 11\nS4 4 8 8\n```\nand I just want to calculate the mean over the full dataframe, as the following doesn't work : \n```\ndf.mean()\n```\nThen I came up with : \n```\ndf.mean().mean()\n```\nBut this trick won't work for computing the standard deviation. My final attempts were : \n```\ndf.get_values().mean()\ndf.get_values().std()\n```\nExcept that in the latter case, it uses mean() and std() function from numpy. It's not a problem for the mean, but it is for std, as the pandas function uses by default ddof=1, unlike the numpy one where ddof=0.","response":"You could convert the dataframe to be a single column with stack (this changes the shape from 5x3 to 15x1) and then take the standard deviation: \n```\ndf.stack().std() # pandas default degrees of freedom is one\n```\nAlternatively, you can use values to convert from a pandas dataframe to a numpy array before taking the standard deviation: \n```\ndf.values.std(ddof=1) # numpy default degrees of freedom is zero\n```\nUnlike pandas, numpy will give the standard deviation of the entire array by default, so there is no need to reshape before taking the standard deviation. A couple of additional notes: The numpy approach here is a bit faster than the pandas one, which is generally true when you have the option to accomplish the same thing with either numpy or pandas. The speed difference will depend on the size of your data, but numpy was roughly 10x faster when I tested a few different sized dataframes on my laptop (numpy version 1.15.4 and pandas version 0.23.4). The numpy and pandas approaches here will not give exactly the same answers, but will be extremely close (identical at several digits of precision). The discrepancy is due to slight differences in implementation behind the scenes that affect how the floating point values get rounded.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25140998\/pandas-compute-mean-or-std-standard-deviation-over-entire-dataframe","best_answers_votes":97,"tags":["python","pandas","dataframe","numpy","statistics"],"question_length":743,"response_length":1262,"tag_count":5} {"question":"Simple plot in python of a numpy array? 
I have an array that looks like this \n```\ncplr = array([ 0.01828922, 0.01972157, 0.02342053, ..., 0.25928021,\n 0.26352547, 0.26883406])\n```\nIf I say \n```\nimport matplotlib.pyplot as plt\nplt(cplr)\n```\nTypeError: 'module' object is not callable How do I plot the contents of a numpy array?","response":"matplotlib.pyplot is a module; the function to plot is matplotlib.pyplot.plot. Thus, you should do \n```\nplt.plot(cplr)\nplt.show()\n```\nA good place to learn more about this would be to read a matplotlib tutorial.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18423603\/simple-plot-in-python-of-a-numpy-array","best_answers_votes":86,"tags":["python-2.7","numpy"],"question_length":334,"response_length":211,"tag_count":2} {"question":"How do I get all the values from a NumPy array excluding a certain index? I have a NumPy array, and I want to retrieve all the elements except a certain index. For example, consider the following array \n```\na = [0,1,2,3,4,5,5,6,7,8,9]\n```\nIf I specify index 3, then the resultant should be \n```\na = [0,1,2,4,5,5,6,7,8,9]\n```","response":"Like resizing, removing elements from an NumPy array is a slow operation (especially for large arrays since it requires allocating space and copying all the data from the original array to the new array). It should be avoided if possible. Often you can avoid it by working with a masked array instead. For example, consider the array a: \n```\nimport numpy as np\n\na = np.array([0,1,2,3,4,5,5,6,7,8,9])\nprint(a)\nprint(a.sum())\n# [0 1 2 3 4 5 5 6 7 8 9]\n# 50\n```\nWe can mask its value at index 3 and can perform a summation which ignores masked elements: \n```\na = np.ma.array(a, mask=False)\na.mask[3] = True\nprint(a)\nprint(a.sum())\n# [0 1 2 -- 4 5 5 6 7 8 9]\n# 47\n```\nMasked arrays also support many operations besides sum. If you really need to, it is also possible to remove masked elements using the compressed method: \n```\nprint(a.compressed())\n# [0 1 2 4 5 5 6 7 8 9]\n```\nBut as mentioned above, avoid it if possible.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7429118\/how-do-i-get-all-the-values-from-a-numpy-array-excluding-a-certain-index","best_answers_votes":76,"tags":["python","numpy"],"question_length":324,"response_length":918,"tag_count":2} {"question":"Weird behaviour initializing a numpy array of string data I am having some seemingly trivial trouble with numpy when the array contains string data. I have the following code: \n```\nmy_array = numpy.empty([1, 2], dtype = str)\nmy_array[0, 0] = \"Cat\"\nmy_array[0, 1] = \"Apple\"\n```\nNow, when I print it with print my_array[0, :], the response I get is ['C', 'A'], which is clearly not the expected output of Cat and Apple. Why is that, and how can I get the right output? Thanks!","response":"Numpy requires string arrays to have a fixed maximum length. When you create an empty array with dtype=str, it sets this maximum length to 1 by default. You can see if you do my_array.dtype; it will show \"|S1\", meaning \"one-character string\". Subsequent assignments into the array are truncated to fit this structure. You can pass an explicit datatype with your maximum length by doing, e.g.: \n```\nmy_array = numpy.empty([1, 2], dtype=\"S10\")\n```\nThe \"S10\" will create an array of length-10 strings. 
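For illustration, a quick before-and-after sketch (hedged; note that "S" dtypes store bytes, so on Python 3 you may prefer a unicode dtype such as "U10" instead): \n```\nmy_array = numpy.empty([1, 2], dtype="S10") # room for 10 characters per cell\nmy_array[0, 0] = "Cat"\nmy_array[0, 1] = "Apple"\n# my_array[0, :] is now ['Cat' 'Apple'] (b'Cat' b'Apple' on Python 3)\n# rather than the truncated ['C' 'A']\n```\n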
You have to decide how big will be big enough to hold all the data you want to hold.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13717554\/weird-behaviour-initializing-a-numpy-array-of-string-data","best_answers_votes":82,"tags":["python","numpy"],"question_length":474,"response_length":583,"tag_count":2} {"question":"Comparing previous row values in Pandas DataFrame \n```\nimport pandas as pd\ndata={'col1':[1,3,3,1,2,3,2,2]}\ndf=pd.DataFrame(data,columns=['col1'])\nprint df\n\n\n col1 \n 0 1 \n 1 3 \n 2 3 \n 3 1 \n 4 2 \n 5 3 \n 6 2 \n 7 2\n```\nI have the following Pandas DataFrame and I want to create another column that compares the previous row of col1 to see if they are equal. What would be the best way to do this? It would be like the following DataFrame. Thanks \n```\ncol1 match \n0 1 False \n1 3 False \n2 3 True \n3 1 False \n4 2 False \n5 3 False \n6 2 False \n7 2 True\n```","response":"You need eq with shift: \n```\ndf['match'] = df.col1.eq(df.col1.shift())\nprint (df)\n col1 match\n0 1 False\n1 3 False\n2 3 True\n3 1 False\n4 2 False\n5 3 False\n6 2 False\n7 2 True\n```\nOr instead of eq use ==, but it is a bit slower in a large DataFrame: \n```\ndf['match'] = df.col1 == df.col1.shift()\nprint (df)\n col1 match\n0 1 False\n1 3 False\n2 3 True\n3 1 False\n4 2 False\n5 3 False\n6 2 False\n7 2 True\n```\nTimings: \n```\nimport pandas as pd\ndata={'col1':[1,3,3,1,2,3,2,2]}\ndf=pd.DataFrame(data,columns=['col1'])\nprint (df)\n#[80000 rows x 1 columns]\ndf = pd.concat([df]*10000).reset_index(drop=True)\n\ndf['match'] = df.col1 == df.col1.shift()\ndf['match1'] = df.col1.eq(df.col1.shift())\nprint (df)\n\n```python\n%timeit df.col1.eq(df.col1.shift())\nThe slowest run took 4.83 times longer than the fastest. This could mean that an intermediate result is being cached.\n1000 loops, best of 3: 933 \u00b5s per loop\n```\n\n```python\n%timeit df.col1 == df.col1.shift()\n1000 loops, best of 3: 1 ms per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41399538\/comparing-previous-row-values-in-pandas-dataframe","best_answers_votes":94,"tags":["python","pandas","numpy","boolean","shift"],"question_length":753,"response_length":1061,"tag_count":5} {"question":"Convert structured array to regular NumPy array The answer will be very obvious I think, but I don't see it at the moment. How can I convert a record array back to a regular ndarray? Suppose I have the following simple structured array: \n```\nx = np.array([(1.0, 4.0,), (2.0, -1.0)], dtype=[('f0', '
(I think, more precisely, that one needs to do that in order to get the actual tensor out of its pytorch Variable wrapper, cf. this other discuss.pytorch post).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/55466298\/pytorch-cant-call-numpy-on-variable-that-requires-grad-use-var-detach-num","best_answers_votes":57,"tags":["python","numpy","pytorch","gradient","tensor"],"question_length":283,"response_length":1429,"tag_count":5} {"question":"what does numpy ndarray shape do? I have a simple question about the .shape function, which confused me a lot. \n```\na = np.array([1, 2, 3]) # Create a rank 1 array\nprint(type(a)) # Prints \"<class 'numpy.ndarray'>\"\nprint(a.shape) # Prints \"(3,)\"\n\nb = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array\nprint(b.shape) # Prints \"(2, 3)\"\n```\nWhat does .shape exactly do? It counts how many rows and how many columns, so then a.shape is supposed to be (1,3), one row and three columns, right?","response":"yourarray.shape or np.shape() or np.ma.shape() returns the shape of your ndarray as a tuple, and you can get the (number of) dimensions of your array using yourarray.ndim or np.ndim(). (i.e. it gives the n of the ndarray since all arrays in NumPy are just n-dimensional arrays (called ndarrays for short)) For a 1D array, the shape would be (n,) where n is the number of elements in your array. For a 2D array, the shape would be (n,m) where n is the number of rows and m is the number of columns in your array. Please note that in the 1D case, the shape would simply be (n, ) instead of what you said as either (1, n) or (n, 1) for row and column vectors respectively. This is to follow the convention that: For 1D array, return a shape tuple with only 1 element (i.e. (n,)) For 2D array, return a shape tuple with only 2 elements (i.e. (n,m)) For 3D array, return a shape tuple with only 3 elements (i.e. (n,m,k)) For 4D array, return a shape tuple with only 4 elements (i.e. (n,m,k,j)) and so on. Also, please see the example below to see how np.shape() or np.ma.shape() behaves with 1D arrays and scalars: \n```\n# sample array\n\n```python\nu = np.arange(10)\n\n# get its shape\n```\n\n```python\nnp.shape(u) # u.shape\n#Output\n#(10,)\n#\n## get array dimension using `np.ndim`\n```\n\n```python\nnp.ndim(u)\n#Output\n#1\n```\n\n```python\nnp.shape(np.mean(u))\n#Output\n#() # empty tuple (to indicate that a scalar is a 0D array).\n#\n## check using `numpy.ndim`\n```\n\n```python\nnp.ndim(np.mean(u))\n#Output\n#0\n#```\n#P.S.: So, the shape tuple is consistent with our understanding of dimensions of space, at least mathematically.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47564495\/what-does-numpy-ndarray-shape-do","best_answers_votes":82,"tags":["arrays","numpy","multidimensional-array","shapes","numpy-ndarray"],"question_length":525,"response_length":1579,"tag_count":5} {"question":"When to apply(pd.to_numeric) and when to astype(np.float64) in python? I have a pandas DataFrame object named xiv which has a column of int64 Volume measurements. \n```\n\n```python\nxiv['Volume'].head(5)\n#Output\n#0 252000\n#1 484000\n#2 62000\n#3 168000\n#4 232000\n#Name: Volume, dtype: int64\n#```\n#I have read other posts (like this and this) that suggest the following solutions. 
But when I use either approach, it doesn't appear to change the dtype of the underlying data: \n#```\n```\n\n```python\nxiv['Volume'] = pd.to_numeric(xiv['Volume'])\n```\n\n```python\nxiv['Volume'].dtypes\n#Output\n#dtype('int64')\n#```\n#Or... \n#```\n```\n\n```python\nxiv['Volume'] = pd.to_numeric(xiv['Volume'])\n#Output\n####omitted for brevity###\n```\n\n```python\nxiv['Volume'].dtypes\n#Output\n#dtype('int64')\n```\n\n```python\nxiv['Volume'] = xiv['Volume'].apply(pd.to_numeric)\n```\n\n```python\nxiv['Volume'].dtypes\n#Output\n#dtype('int64')\n#```\n#I've also tried making a separate pandas Series and using the methods listed above on that Series and reassigning to the x['Volume'] obect, which is a pandas.core.series.Series object. I have, however, found a solution to this problem using the numpy package's float64 type - this works but I don't know why it's different. \n#```\n```\n\n```python\nxiv['Volume'] = xiv['Volume'].astype(np.float64)\n```\n\n```python\nxiv['Volume'].dtypes\n#Output\n#dtype('float64')\n#```\n#Can someone explain how to accomplish with the pandas library what the numpy library seems to do easily with its float64 class; that is, convert the column in the xiv DataFrame to a float64 in place.\n```","response":"If you already have numeric dtypes (int8|16|32|64,float64,boolean) you can convert it to another \"numeric\" dtype using Pandas .astype() method. Demo: \n```\n\n```python\ndf = pd.DataFrame(np.random.randint(10**5,10**7,(5,3)),columns=list('abc'), dtype=np.int64)\n```\n\n```python\ndf\n#Output\n#a b c\n#0 9059440 9590567 2076918\n#1 5861102 4566089 1947323\n#2 6636568 162770 2487991\n#3 6794572 5236903 5628779\n#4 470121 4044395 4546794\n```\n\n```python\ndf.dtypes\n#Output\n#a int64\n#b int64\n#c int64\n#dtype: object\n```\n\n```python\ndf['a'] = df['a'].astype(float)\n```\n\n```python\ndf.dtypes\n#Output\n#a float64\n#b int64\n#c int64\n#dtype: object\n#```\n#It won't work for object (string) dtypes, that can't be converted to numbers: \n#```\n```\n\n```python\ndf.loc[1, 'b'] = 'XXXXXX'\n```\n\n```python\ndf\n#Output\n#a b c\n#0 9059440.0 9590567 2076918\n#1 5861102.0 XXXXXX 1947323\n#2 6636568.0 162770 2487991\n#3 6794572.0 5236903 5628779\n#4 470121.0 4044395 4546794\n```\n\n```python\ndf.dtypes\n#Output\n#a float64\n#b object\n#c int64\n#dtype: object\n```\n\n```python\ndf['b'].astype(float)\n...\nskipped\n...\nValueError: could not convert string to float: 'XXXXXX'\n```\nSo here we want to use pd.to_numeric() method: \n```\n```\n\n```python\ndf['b'] = pd.to_numeric(df['b'], errors='coerce')\n```\n\n```python\ndf\n#Output\n#a b c\n#0 9059440.0 9590567.0 2076918\n#1 5861102.0 NaN 1947323\n#2 6636568.0 162770.0 2487991\n#3 6794572.0 5236903.0 5628779\n#4 470121.0 4044395.0 4546794\n```\n\n```python\ndf.dtypes\n#Output\n#a float64\n#b float64\n#c int64\n#dtype: object\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40095712\/when-to-applypd-to-numeric-and-when-to-astypenp-float64-in-python","best_answers_votes":61,"tags":["python","pandas","numpy","dataframe","types"],"question_length":1483,"response_length":1592,"tag_count":5} {"question":"Converting two lists into a matrix I'll try to be as clear as possible, and I'll start by explaining why I want to transform two arrays into a matrix. 
To plot the performance of a portfolio vs an market index I need a data structure like in this format: \n```\n[[portfolio_value1, index_value1]\n [portfolio_value2, index_value2]]\n```\nBut I have the the data as two separate 1-D arrays: \n```\nportfolio = [portfolio_value1, portfolio_value2, ...]\nindex = [index_value1, index_value2, ...]\n```\nSo how do I transform the second scenario into the first. I've tried np.insert to add the second array to a test matrix I had in a python shell, my problem was to transpose the first array into a single column matrix. Any help on how to achieve this without an imperative loop would be great.","response":"The standard numpy function for what you want is np.column_stack: \n```\n\n```python\nnp.column_stack(([1, 2, 3], [4, 5, 6]))\n#Output\n#array([[1, 4],\n# [2, 5],\n# [3, 6]])\n#```\n#So with your portfolio and index arrays, doing \n#```\n#np.column_stack((portfolio, index))\n#```\n#would yield something like: \n#```\n#[[portfolio_value1, index_value1],\n# [portfolio_value2, index_value2],\n# [portfolio_value3, index_value3],\n# ...]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18730044\/converting-two-lists-into-a-matrix","best_answers_votes":130,"tags":["python","arrays","numpy","matrix"],"question_length":781,"response_length":404,"tag_count":4} {"question":"How to access a column in a list of lists in python I have a 2D array in python modeled by a list of lists and I want to extract the column. I made a quick research and I found a way that uses numpy arrays. The problem is that I do not want to use numpy so I don't want to convert my list of lists into a numpy array and then use [:,1] syntax. I tried using it on a normal list of lists but it shows an error so it's not possible. I am asking for a similar thing for list of lists without having to go through each element (In numpy arrays, it's faster to access a column by using [:,1] syntax than iterating over the elements of the array). I found this link but again it suggests iterating over elements without a shortcut.","response":"List comprehensions are your friend when working with lists of lists: \n```\n\n```python\nalist\n#Output\n#[[0, 1, 2, 3, 4, 5],\n# [6, 7, 8, 9, 10, 11],\n# [12, 13, 14, 15, 16, 17],\n# [18, 19, 20, 21, 22, 23]]\n```\n\n```python\n[row[1] for row in alist]\n#Output\n#[1, 7, 13, 19]\n#```\n#There's also a handy 'idiom' for transposing a nested list, turning 'columns' into 'rows': \n#```\n```\n\n```python\ntlist = list(zip(*alist))\n```\n\n```python\ntlist\n#Output\n#[(0, 6, 12, 18),\n# (1, 7, 13, 19),\n# (2, 8, 14, 20),\n# (3, 9, 15, 21),\n# (4, 10, 16, 22),\n# (5, 11, 17, 23)]\n```\n\n```python\ntlist[1]\n#Output\n#(1, 7, 13, 19)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/44360162\/how-to-access-a-column-in-a-list-of-lists-in-python","best_answers_votes":96,"tags":["python","list","numpy","nested","list-comprehension"],"question_length":725,"response_length":575,"tag_count":5} {"question":"Does Python have a function to reduce fractions? For example, when I calculate 98\/42 I want to get 7\/3, not 2.3333333, is there a function for that using Python or Numpy?","response":"The fractions module can do that \n```\n\n```python\nfrom fractions import Fraction\n```\n\n```python\nFraction(98, 42)\n#Output\n#Fraction(7, 3)\n#```\n#There's a recipe over here for a numpy gcd. 
Which you could then use to divide your fraction \n#```\n```\n\n```python\ndef numpy_gcd(a, b):\n a, b = np.broadcast_arrays(a, b)\n a = a.copy()\n b = b.copy()\n pos = np.nonzero(b)[0]\n while len(pos) > 0:\n b2 = b[pos]\n a[pos], b[pos] = b2, a[pos] % b2\n pos = pos[b[pos]!=0]\n return a\n\n```\n\n```python\nnumpy_gcd(np.array([98]), np.array([42]))\n#Output\n#array([14])\n```\n\n```python\n98\/14, 42\/14\n#Output\n#(7, 3)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17537613\/does-python-have-a-function-to-reduce-fractions","best_answers_votes":70,"tags":["python","python-2.7","numpy","numerical","fractions"],"question_length":170,"response_length":587,"tag_count":5} {"question":"numpy boolean array with 1 bit entries Is there a way in numpy to create an array of booleans that uses just 1 bit for each entry? The standard np.bool type is 1 byte, but this way I use 8 times the required memory. On Google I found that C++ has std::vector.","response":"To do this you can use numpy's packbits and unpackbits: \n```\nimport numpy as np\n# original boolean array\nA1 = np.array([\n [0, 1, 1, 0, 1],\n [0, 0, 1, 1, 1],\n [1, 1, 1, 1, 1],\n], dtype=bool)\n\n# packed data\nA2 = np.packbits(A1, axis=None)\n\n# checking the size\nprint(len(A1.tostring())) # 15 bytes\nprint(len(A2.tostring())) # 2 bytes (ceil(15\/8))\n\n# reconstructing from packed data. You need to resize and reshape\nA3 = np.unpackbits(A2, count=A1.size).reshape(A1.shape).view(bool)\n\n# and the arrays are equal\nprint(np.array_equal(A1, A3)) # True\n```\nPrior to numpy 1.17.0, the first function is straight-forward to use, but reconstruction required additional manipulations. Here is an example: \n```\nimport numpy as np\n# original boolean array\nA1 = np.array([\n [0, 1, 1, 0, 1],\n [0, 0, 1, 1, 1],\n [1, 1, 1, 1, 1],\n], dtype=np.bool)\n\n# packed data\nA2 = np.packbits(A1, axis=None)\n\n# checking the size\nprint(len(A1.tostring())) # 15 bytes\nprint(len(A2.tostring())) # 2 bytes (ceil(15\/8))\n\n# reconstructing from packed data. You need to resize and reshape\nA3 = np.unpackbits(A2, axis=None)[:A1.size].reshape(A1.shape).astype(np.bool)\n\n# and the arrays are equal\nprint(np.array_equal(A1, A3)) # True\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5602155\/numpy-boolean-array-with-1-bit-entries","best_answers_votes":43,"tags":["python","numpy","boolean"],"question_length":259,"response_length":1215,"tag_count":3} {"question":"How to multiply individual elements of a list with a number? \n```\nS = [22, 33, 45.6, 21.6, 51.8]\nP = 2.45\n```\nHere S is an array How will I multiply this and get the value? 
\n```\nSP = [53.9, 80.85, 111.72, 52.92, 126.91]\n```","response":"In NumPy it is quite simple \n```\nimport numpy as np\nP=2.45\nS=[22, 33, 45.6, 21.6, 51.8]\nSP = P*np.array(S)\n```\nI recommend taking a look at the NumPy tutorial for an explanation of the full capabilities of NumPy's arrays: https:\/\/scipy.github.io\/old-wiki\/pages\/Tentative_NumPy_Tutorial","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8194959\/how-to-multiply-individual-elements-of-a-list-with-a-number","best_answers_votes":77,"tags":["python","numpy","multiplication"],"question_length":223,"response_length":285,"tag_count":3} {"question":"Find index where elements change value numpy Suppose I have \n```\n\n```python\nv\n#Output\n#array([1, 1, 1, 1, 1, 2, 2, 2, 3, 4, 3, 4, 3, 4, 3, 4, 5, 5, 5])\n#```\n#Is there an efficient numpy way to find each index where the value changes? For instance, I would want some result like, \n#```\n```\n\n```python\nindex_of_changed_values(v)\n#Output\n#[0, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16]\n#```\n#If this is not possible with some numpy routine, what is a fast way to do it in python? It would also be useful to me to be referred to some good numpy tutorials since I am a numpy beginner.\n```","response":"You can get this functionality in numpy by comparing each element with it's neighbor; \n```\nv[:-1] != v[1:]\n\n\narray([False, False, False, False, True, False, False, True, True,\n True, True, True, True, True, True, True, False, False], dtype=bool)\n```\nto get the indices you use the \"where\" function \n```\nnp.where(v[:-1] != v[1:])[0]\n\narray([ 4, 7, 8, 9, 10, 11, 12, 13, 14, 15])\n```\nFrom here you can prepend the first element and add a one to get to the same indexing scheme you have in your question.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19125661\/find-index-where-elements-change-value-numpy","best_answers_votes":94,"tags":["python","numpy","indexing","unique"],"question_length":532,"response_length":516,"tag_count":4} {"question":"When to use pandas series, numpy ndarrays or simply python dictionaries? I am new to learning Python, and some of its libraries (numpy, pandas). I have found a lot of documentation on how numpy ndarrays, pandas series and python dictionaries work. But owing to my inexperience with Python, I have had a really hard time determining when to use each one of them. And I haven't found any best-practices that will help me understand and decide when it is better to use each type of data structure. As a general matter, are there any best practices for deciding which, if any, of these three data structures a specific data set should be loaded into?","response":"The rule of thumb that I usually apply: use the simplest data structure that still satisfies your needs. If we rank the data structures from most simple to least simple, it usually ends up like this: Dictionaries \/ lists Numpy arrays Pandas series \/ dataframes So first consider dictionaries \/ lists. If these allow you to do all data operations that you need, then all is fine. If not, start considering numpy arrays. Some typical reasons for moving to numpy arrays are: Your data is 2-dimensional (or higher). Although nested dictionaries\/lists can be used to represent multi-dimensional data, in most situations numpy arrays will be more efficient. You have to perform a bunch of numerical calculations. 
As already pointed out by zhqiat, numpy will give a significant speed-up in this case. Furthermore numpy arrays come bundled with a large amount of mathematical functions. Then there are also some typical reasons for going beyond numpy arrays and to the more-complex but also more-powerful pandas series\/dataframes: You have to merge multiple data sets with each other, or do reshaping\/reordering of your data. This diagram gives a nice overview of all the 'data wrangling' operations that pandas allows you to do. You have to import data from or export data to a specific file format like Excel, HDF5 or SQL. Pandas comes with convenient import\/export functions for this.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45285743\/when-to-use-pandas-series-numpy-ndarrays-or-simply-python-dictionaries","best_answers_votes":80,"tags":["python","pandas","numpy"],"question_length":646,"response_length":1379,"tag_count":3} {"question":"Numpy, multiply array with scalar [duplicate] This question already has answers here: How to multiply individual elements of a list with a number? (4 answers) Closed 6 years ago. Is it possible to use ufuncs https:\/\/docs.scipy.org\/doc\/numpy\/reference\/ufuncs.html In order to map function to array (1D and \/ or 2D) and scalar If not what would be my way to achieve this? For example: \n```\na_1 = np.array([1.0, 2.0, 3.0])\na_2 = np.array([[1., 2.], [3., 4.]])\nb = 2.0\n```\nExpected result: \n```\na_1 * b = array([2.0, 4.0, 6.0]); \na_2 * b = array([[2., 4.], [6., 8.]])\n```\nI`m using python 2.7 if it is relevant to an issue.","response":"You can multiply numpy arrays by scalars and it just works. \n```\n\n```python\nimport numpy as np\n```\n\n```python\nnp.array([1, 2, 3]) * 2\n#Output\n#array([2, 4, 6])\n```\n\n```python\nnp.array([[1, 2, 3], [4, 5, 6]]) * 2\n#Output\n#array([[ 2, 4, 6],\n# [ 8, 10, 12]])\n#```\n#This is also a very fast and efficient operation. With your example: \n#```\n```\n\n```python\na_1 = np.array([1.0, 2.0, 3.0])\n```\n\n```python\na_2 = np.array([[1., 2.], [3., 4.]])\n```\n\n```python\nb = 2.0\n```\n\n```python\na_1 * b\n#Output\n#array([2., 4., 6.])\n```\n\n```python\na_2 * b\n#Output\n#array([[2., 4.],\n# [6., 8.]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/53485221\/numpy-multiply-array-with-scalar","best_answers_votes":74,"tags":["python","python-2.7","numpy","numpy-ufunc"],"question_length":620,"response_length":466,"tag_count":4} {"question":"Indexing a numpy array with a list of tuples Why can't I index an ndarray using a list of tuple indices like so? \n```\nidx = [(x1, y1), ... (xn, yn)]\nX[idx]\n```\nInstead I have to do something unwieldy like \n```\nidx2 = numpy.array(idx)\nX[idx2[:, 0], idx2[:, 1]] # or more generally:\nX[tuple(numpy.vsplit(idx2.T, 1)[0])]\n```\nIs there a simpler, more pythonic way?","response":"You can use a list of tuples, but the convention is different from what you want. numpy expects a list of row indices, followed by a list of column values. You, apparently, want to specify a list of (x,y) pairs. http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.indexing.html#integer-array-indexing The relevant section in the documentation is 'integer array indexing'. Here's an example, seeking 3 points in a 2d array. 
(2 points in 2d can be confusing): \n```\n\n```python\nidx\n#Output\n#[(0, 1, 1), (2, 3, 0)]\n```\n\n```python\nX[idx]\n#Output\n#array([2, 7, 4])\n#```\n#Using your style of xy pairs of indices: \n#```\n```\n\n```python\nidx1 = [(0,2),(1,3),(1,0)]\n```\n\n```python\n[X[i] for i in idx1]\n#Output\n#[2, 7, 4]\n```\n\n```python\nX[tuple(np.array(idx1).T)]\n#Output\n#array([2, 7, 4])\n#```\n#X[tuple(zip(*idx1))] is another way of doing the conversion. The tuple() is optional in Python2. zip(*...) is a Python idiom that reverses the nesting of a list of lists. You are on the right track with: \n#```\n```\n\n```python\nidx2=np.array(idx1)\n```\n\n```python\nX[idx2[:,0], idx2[:,1]]\n#Output\n#array([2, 7, 4])\n#```\n#My tuple() is just a bit more compact (and not necessarily more 'pythonic'). Given the numpy convention, some sort of conversion is necessary. (Should we check what works with n-dimensions and m-points?)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28491230\/indexing-a-numpy-array-with-a-list-of-tuples","best_answers_votes":70,"tags":["python","numpy","multidimensional-array","indices"],"question_length":360,"response_length":1265,"tag_count":4} {"question":"Weighted percentile using numpy Is there a way to use the numpy.percentile function to compute weighted percentile? Or is anyone aware of an alternative python function to compute weighted percentile?","response":"Completely vectorized numpy solution Here is the code I use. It's not an optimal one (which I'm unable to write with numpy), but still much faster and more reliable than accepted solution \n```py\ndef weighted_quantile(values, quantiles, sample_weight=None, \n values_sorted=False, old_style=False):\n \"\"\" Very close to numpy.percentile, but supports weights.\n NOTE: quantiles should be in [0, 1]!\n :param values: numpy.array with data\n :param quantiles: array-like with many quantiles needed\n :param sample_weight: array-like of the same length as `array`\n :param values_sorted: bool, if True, then will avoid sorting of\n initial array\n :param old_style: if True, will correct output to be consistent\n with numpy.percentile.\n :return: numpy.array with computed quantiles.\n \"\"\"\n values = np.array(values)\n quantiles = np.array(quantiles)\n if sample_weight is None:\n sample_weight = np.ones(len(values))\n sample_weight = np.array(sample_weight)\n assert np.all(quantiles >= 0) and np.all(quantiles <= 1), \\\n 'quantiles should be in [0, 1]'\n\n if not values_sorted:\n sorter = np.argsort(values)\n values = values[sorter]\n sample_weight = sample_weight[sorter]\n\n weighted_quantiles = np.cumsum(sample_weight) - 0.5 * sample_weight\n if old_style:\n # To be convenient with numpy.percentile\n weighted_quantiles -= weighted_quantiles[0]\n weighted_quantiles \/= weighted_quantiles[-1]\n else:\n weighted_quantiles \/= np.sum(sample_weight)\n return np.interp(quantiles, weighted_quantiles, values)\n```\nExamples: weighted_quantile([1, 2, 9, 3.2, 4], [0.0, 0.5, 1.]) array([ 1. , 3.2, 9. ]) weighted_quantile([1, 2, 9, 3.2, 4], [0.0, 0.5, 1.], sample_weight=[2, 1, 2, 4, 1]) array([ 1. , 3.2, 9. ])","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21844024\/weighted-percentile-using-numpy","best_answers_votes":81,"tags":["python","numpy","weighted","percentile"],"question_length":200,"response_length":1831,"tag_count":4} {"question":"In numpy, what does selection by [:,None] do? 
I'm taking the Udacity course on deep learning and I came across the following code: \n```\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\n```\nWhat does labels[:,None] actually do here?","response":"http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.indexing.html numpy.newaxis The newaxis object can be used in all slicing operations to create an axis of length one. :const: newaxis is an alias for \u2018None\u2019, and \u2018None\u2019 can be used in place of this with the same result. http:\/\/docs.scipy.org\/doc\/numpy-1.10.1\/reference\/generated\/numpy.expand_dims.html Demonstrating with part of your code \n```\n\n```python\nlabels=np.array([1,3,5])\n```\n\n```python\nlabels[:,None]\n#Output\n#array([[1],\n# [3],\n# [5]])\n```\n\n```python\nnp.arange(8)==labels[:,None]\n#Output\n#array([[False, True, False, False, False, False, False, False],\n# [False, False, False, True, False, False, False, False],\n# [False, False, False, False, False, True, False, False]], dtype=bool)\n```\n\n```python\n(np.arange(8)==labels[:,None]).astype(int)\n#Output\n#array([[0, 1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0, 0],\n# [0, 0, 0, 0, 0, 1, 0, 0]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37867354\/in-numpy-what-does-selection-by-none-do","best_answers_votes":57,"tags":["python","numpy"],"question_length":455,"response_length":930,"tag_count":2} {"question":"Faster way of polygon intersection with shapely I have a large number of polygons (~100000) and try to find a smart way of calculating their intersecting area with a regular grid cells. Currently, I am creating the polygons and the grid cells using shapely (based on their corner coordinates). Then, using a simple for-loop I go through each polygon and compare it to nearby grid cells. Just a small example to illustrate the polygons\/grid cells. \n```\nfrom shapely.geometry import box, Polygon\n# Example polygon \nxy = [[130.21001, 27.200001], [129.52, 27.34], [129.45, 27.1], [130.13, 26.950001]]\npolygon_shape = Polygon(xy)\n# Example grid cell\ngridcell_shape = box(129.5, -27.0, 129.75, 27.25)\n# The intersection\npolygon_shape.intersection(gridcell_shape).area\n```\n(BTW: the grid cells have the dimensions 0.25x0.25 and the polygons 1x1 at max) Actually this is quite fast for an individual polygon\/grid cell combo with around 0.003 seconds. However, running this code on a huge amount of polygons (each one could intersect dozens of grid cells) takes around 15+ minutes (up to 30+ min depending on the number of intersecting grid cells) on my machine which is not acceptable. Unfortunately, I have no idea how it is possible to write a code for polygon intersection to get the area of overlap. Do you have any tips? Is there an alternative to shapely?","response":"Consider using Rtree to help identify which grid cells that a polygon may intersect. This way, you can remove the for loop used with the array of lat\/lons, which is probably the slow part. 
Structure your code something like this: \n```\nfrom shapely.ops import cascaded_union\nfrom rtree import index\nidx = index.Index()\n\n# Populate R-tree index with bounds of grid cells\nfor pos, cell in enumerate(grid_cells):\n # assuming cell is a shapely object\n idx.insert(pos, cell.bounds)\n\n# Loop through each Shapely polygon\nfor poly in polygons:\n # Merge cells that have overlapping bounding boxes\n merged_cells = cascaded_union([grid_cells[pos] for pos in idx.intersection(poly.bounds)])\n # Now do actual intersection\n print(poly.intersection(merged_cells).area)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14697442\/faster-way-of-polygon-intersection-with-shapely","best_answers_votes":82,"tags":["python","numpy","shapely"],"question_length":1353,"response_length":774,"tag_count":3} {"question":"Numpy argmax - random tie breaking In numpy.argmax function, tie breaking between multiple max elements is so that the first element is returned. Is there a functionality for randomizing tie breaking so that all maximum numbers have equal chance of being selected? Below is an example directly from numpy.argmax documentation. \n```\n\n```python\nb = np.arange(6)\n```\n\n```python\nb[1] = 5\n```\n\n```python\nb\n#Output\n#array([0, 5, 2, 3, 4, 5])\n```\n\n```python\nnp.argmax(b) # Only the first occurrence is returned.\n#Output\n#1\n#```\n#I am looking for ways so that 1st and 5th elements in the list are returned with equal probability. Thank you!\n```","response":"Use np.random.choice - \n```\nnp.random.choice(np.flatnonzero(b == b.max()))\n```\nLet's verify for an array with three max candidates - \n```\n\n```python\nb\n#Output\n#array([0, 5, 2, 5, 4, 5])\n```\n\n```python\nc=[np.random.choice(np.flatnonzero(b == b.max())) for i in range(100000)]\n```\n\n```python\nnp.bincount(c)\n#Output\n#array([ 0, 33180, 0, 33611, 0, 33209])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/42071597\/numpy-argmax-random-tie-breaking","best_answers_votes":61,"tags":["python","numpy"],"question_length":572,"response_length":360,"tag_count":2} {"question":"How to fix IndexError: invalid index to scalar variable This code generates error: \n```\nIndexError: invalid index to scalar variable.\n```\nat the line: results.append(RMSPE(np.expm1(y_train[testcv]), [y[1] for y in y_test])) How to fix it? \n```\nimport pandas as pd\nimport numpy as np\nfrom sklearn import ensemble\nfrom sklearn import cross_validation\n\ndef ToWeight(y):\n w = np.zeros(y.shape, dtype=float)\n ind = y != 0\n w[ind] = 1.\/(y[ind]**2)\n return w\n\ndef RMSPE(y, yhat):\n w = ToWeight(y)\n rmspe = np.sqrt(np.mean( w * (y - yhat)**2 ))\n return rmspe\n\nforest = ensemble.RandomForestRegressor(n_estimators=10, min_samples_split=2, n_jobs=-1)\n\nprint (\"Cross validations\")\ncv = cross_validation.KFold(len(train), n_folds=5)\n\nresults = []\nfor traincv, testcv in cv:\n y_test = np.expm1(forest.fit(X_train[traincv], y_train[traincv]).predict(X_train[testcv]))\n results.append(RMSPE(np.expm1(y_train[testcv]), [y[1] for y in y_test]))\n```\ntestcv is: \n```\n[False False False ..., True True True]\n```","response":"You are trying to index into a scalar (non-iterable) value: \n```\n[y[1] for y in y_test]\n# ^ this is the problem\n```\nWhen you call [y for y in test] you are iterating over the values already, so you get a single value in y. 
Your code is the same as trying to do the following: \n```\ny_test = [1, 2, 3]\ny = y_test[0] # y = 1\nprint(y[0]) # this line will fail\n```\nI'm not sure what you're trying to get into your results array, but you need to get rid of [y[1] for y in y_test]. If you want to append each y in y_test to results, you'll need to expand your list comprehension out further to something like this: \n```\n[results.append(..., y) for y in y_test]\n```\nOr just use a for loop: \n```\nfor y in y_test:\n results.append(..., y)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32978575\/how-to-fix-indexerror-invalid-index-to-scalar-variable","best_answers_votes":49,"tags":["python","numpy","pandas"],"question_length":1021,"response_length":735,"tag_count":3} {"question":"How to pad with zeros a tensor along some axis (Python) I would like to pad a numpy tensor with 0 along the chosen axis. For instance, I have tensor r with shape (4,3,2) but I am only interested in padding only the last two axis (that is, pad only the matrix). Is it possible to do it with the one-line python code?","response":"You can use np.pad(): \n```\na = np.ones((4, 3, 2))\n\n# npad is a tuple of (n_before, n_after) for each dimension\nnpad = ((0, 0), (1, 2), (2, 1))\nb = np.pad(a, pad_width=npad, mode='constant', constant_values=0)\n\nprint(b.shape)\n# (4, 6, 5)\n\nprint(b)\n# [[[ 0. 0. 0. 0. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 0. 0. 0.]\n# [ 0. 0. 0. 0. 0.]]\n\n# [[ 0. 0. 0. 0. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 0. 0. 0.]\n# [ 0. 0. 0. 0. 0.]]\n\n# [[ 0. 0. 0. 0. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 0. 0. 0.]\n# [ 0. 0. 0. 0. 0.]]\n\n# [[ 0. 0. 0. 0. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 1. 1. 0.]\n# [ 0. 0. 0. 0. 0.]\n# [ 0. 0. 0. 0. 0.]]]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19349410\/how-to-pad-with-zeros-a-tensor-along-some-axis-python","best_answers_votes":102,"tags":["python","numpy","multidimensional-array"],"question_length":315,"response_length":882,"tag_count":3} {"question":"How to use numpy.void type I loaded a MATLAB .mat file via scipy.io.loadmat and it gave me a list of numpy.void objects. What are they, how can they be used and where can I get some reference documentation on them?","response":"According to the numpy documentation: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.dtypes.html, numpy.void types are defined as flexible data types. Basically, these are data types where there is no pre-defined type associated to the variable(s) you're looking at. If you look at numpy, you have data types such as float, uint8, bool, string, etc. void is to accommodate for more generic and flexible types and are for those data types that don't necessary fall into any one of these pre-defined data types. This situation is mostly encountered when you're loading in a struct where each element has multiple data types associated with multiple fields. Each structure element could have a combination of different data types, and the amalgamation of all of these data types to represent an instance of this structure element thus leads us to numpy.void. With the documentation, you can certainly do the same operations like you would with any other data type. 
Take a look at the generic data type methods here: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.generic.html#numpy.generic . In fact, all numpy data types are derived from this generic class, including numpy.void. In the first link I provided at the beginning of this post, it shows a good example of how to create a custom record type, where a record is a combination of a tuple of numbers and a string. When creating a list of these records, each type in the list is of type numpy.void and it demonstrates that a record is of this data type. However, bear in mind that this record list has a data type that is of this record, but each element of this list will be of type numpy.void. However, as a matter of self-containment, let's re-create the example here: Let's create a custom record type that has two fields associated with each variable you create: A string of up to 16 characters, with a field named name A 2-element tuple of floating point numbers that are 64-bits each, with a field named grades As such, you'd do something like: \n```\nimport numpy as np\ndt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))])\n```\nNext, let's create an example list of two elements and instantiate their fields: \n```\nx = np.array([('Sarah', (8.0, 7.0)), ('John', (6.0, 7.0))], dtype=dt)\n```\nBecause we made this list into a numpy.array, we expect its data type to be so: \n```\ntype(x)\n```\nWe get: \n```\n<type 'numpy.ndarray'>\n```\nRemember, the list itself is a numpy.array, but not the individual elements. To access the second element of this list, which is the second record, we do: \n```\nx[1]\n```\nWe get: \n```\n('John', [6.0, 7.0])\n```\nTo check the type of the second record, we do: \n```\ntype(x[1])\n```\nWe get: \n```\n<type 'numpy.void'> # As expected\n```\nSome additional bonuses for you: To access the name of the second record, we do: \n```\nx[1]['name']\n```\nWe get: \n```\n'John'\n```\nTo access the grades of the second record, we do: \n```\nx[1]['grades']\n```\nWe get: \n```\narray([ 6., 7.])\n```\nTo check the type of the name inside the second record, we do: \n```\ntype(x[1]['name'])\n```\nWe get: \n```\n<type 'numpy.string_'>\n```\nTo check the type of the grades inside the second record, we do: \n```\ntype(x[1]['grades'])\n```\nWe get: \n```\n<type 'numpy.ndarray'>\n```\nTake note that each element in this list is of type numpy.void. However, the individual fields for each element in our list are either a tuple of numbers or a string. The collection of these elements together is of type numpy.void.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25247190\/how-to-use-numpy-void-type","best_answers_votes":86,"tags":["python","matlab","numpy","scipy"],"question_length":214,"response_length":3459,"tag_count":4} {"question":"TypeError: Invalid dimensions for image data when plotting array with imshow() For the following code \n```\n# Numerical operation\nSN_map_final = (new_SN_map - mean_SN) \/ sigma_SN \n\n# Plot figure\nfig12 = plt.figure(12)\nfig_SN_final = plt.imshow(SN_map_final, interpolation='nearest')\nplt.colorbar()\n\nfig12 = plt.savefig(outname12)\n```\nwith new_SN_map being a 1D array and mean_SN and sigma_SN being constants, I get the following error. 
\n```\nTraceback (most recent call last):\n File \"c:\\Users\\Valentin\\Desktop\\Stage M2\\density_map_simple.py\", line 546, in <module>\n fig_SN_final = plt.imshow(SN_map_final, interpolation='nearest')\n File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\pyplot.py\", line 3022, in imshow\n **kwargs)\n File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\__init__.py\", line 1812, in inner\n return func(ax, *args, **kwargs)\n File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\axes\\_axes.py\", line 4947, in imshow\n im.set_data(X)\n File \"c:\\users\\valentin\\appdata\\local\\enthought\\canopy\\user\\lib\\site-packages\\matplotlib\\image.py\", line 453, in set_data\n raise TypeError(\"Invalid dimensions for image data\")\nTypeError: Invalid dimensions for image data\n```\nWhat is the source of this error? I thought my numerical operations were allowed.","response":"There is a (somewhat) related question on StackOverflow: Showing an image with pylab.imshow() Here the problem was that an array of shape (nx,ny,1) is still considered a 3D array, and must be squeezed or sliced into a 2D array. More generally, the reason for the Exception TypeError: Invalid dimensions for image data is shown here: matplotlib.pyplot.imshow() needs a 2D array, or a 3D array with the third dimension being of shape 3 or 4! You can easily check this with (these checks are done by imshow, this function is only meant to give a more specific message in case it's not a valid input): \n```\nfrom __future__ import print_function\nimport numpy as np\n\ndef valid_imshow_data(data):\n data = np.asarray(data)\n if data.ndim == 2:\n return True\n elif data.ndim == 3:\n if 3 <= data.shape[2] <= 4:\n return True\n else:\n print('The \"data\" has 3 dimensions but the last dimension '\n 'must have a length of 3 (RGB) or 4 (RGBA), not \"{}\".'\n ''.format(data.shape[2]))\n return False\n else:\n print('To visualize an image the data must be 2 dimensional or '\n '3 dimensional, not \"{}\".'\n ''.format(data.ndim))\n return False\n```\nIn your case: \n```\n\n```python\nnew_SN_map = np.array([1,2,3])\n```\n\n```python\nvalid_imshow_data(new_SN_map)\n#Output\n#To visualize an image the data must be 2 dimensional or 3 dimensional, not \"1\".\n#False\n#```\n#The np.asarray is what is done internally by matplotlib.pyplot.imshow so it's generally best you do it too. If you have a numpy array it's obsolete but if not (for example a list) it's necessary. In your specific case you got a 1D array, so you need to add a dimension with np.expand_dims() \n#```\n#import matplotlib.pyplot as plt\n#a = np.array([1,2,3,4,5])\n#a = np.expand_dims(a, axis=0) # or axis=1\n#plt.imshow(a)\n#plt.show()\n#```\n#or just use something that accepts 1D arrays like plot: \n#```\n#a = np.array([1,2,3,4,5])\n#plt.plot(a)\n#plt.show()\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36431496\/typeerror-invalid-dimensions-for-image-data-when-plotting-array-with-imshow","best_answers_votes":77,"tags":["python","arrays","numpy","matplotlib"],"question_length":1392,"response_length":1976,"tag_count":4} {"question":"Convert list or numpy array of single element to float in python I have a function which can accept either a list or a numpy array. In either case, the list\/array has a single element (always). I just need to return a float.
So, e.g., I could receive: \n```\nlist_ = [4]\n```\nor the numpy array: \n```\narray_ = array([4])\n```\nAnd I should return \n```\n4.0\n```\nSo, naturally (I would say), I employ float(...) on list_ and get: \n```\nTypeError: float() argument must be a string or a number\n```\nI do the same to array_ and this time it works by responding with \"4.0\". From this, I learn that Python's list cannot be converted to float this way. Based on the success with the numpy array conversion to float, this led me to the approach: \n```\nfloat(np.asarray(list_))\n```\nAnd this works both when list_ is a Python list and when it is a numpy array. Question: But it seems like this approach has an overhead, first converting the list to a numpy array and then to float. Basically: Is there a better way of doing this?","response":"You may want to use the ndarray.item method, as in a.item(). This is also equivalent to (the now deprecated) np.asscalar(a). This has the benefit of working in situations with views and superfluous axes, while the above solutions will currently break. For example, \n```\n\n```python\na = np.asarray(1).view()\n```\n\n```python\na.item() # correct\n#Output\n#1\n```\n\n```python\na[0] # breaks\n#Output\n#Traceback (most recent call last):\n# File \"<stdin>\", line 1, in <module>\n#IndexError: too many indices for array\n```\n\n```python\na = np.asarray([[2]])\n```\n\n```python\na.item() # correct\n#Output\n#2\n```\n\n```python\na[0] # bad result\n#Output\n#array([2])\n#```\n#This also has the benefit of throwing an exception if the array is not actually a scalar, while the a[0] approach will silently proceed (which may lead to bugs sneaking through undetected). \n#```\n```\n\n```python\na = np.asarray([1, 2])\n```\n\n```python\na[0] # silently proceeds\n#Output\n#1\n```\n\n```python\na.item() # detects incorrect size\n#Output\n#Traceback (most recent call last):\n# File \"<stdin>\", line 1, in <module>\n#ValueError: can only convert an array of size 1 to a Python scalar\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30311172\/convert-list-or-numpy-array-of-single-element-to-float-in-python","best_answers_votes":62,"tags":["python","arrays","list","numpy","floating-point"],"question_length":1008,"response_length":986,"tag_count":5} {"question":"Why does numpy.power return 0 for small exponents while math.pow returns the correct answer? \n```\n\n```python\nnp.power(10,-100)\n#Output\n#0\n```\n\n```python\nmath.pow(10,-100)\n#Output\n#1e-100\n#```\n#I would expect both commands to return 1e-100. This is not a precision issue either, since the issue persists even after increasing precision to 500. Is there some setting which I can change to get the correct answer?\n```","response":"Oh, it's much \"worse\" than that: \n```\n\n```python\nnumpy.power(10,-1)\n#Output\n#0\n#```\n#But this is a hint to what's going on: 10 is an integer, and numpy.power doesn't coerce the numbers to floats. 
But this works: \n#```\n```\n\n```python\nnumpy.power(10.,-1)\n#Output\n#0.10000000000000001\n```\n\n```python\nnumpy.power(10.,-100)\n#Output\n#1e-100\n#```\n#Note, however, that the power operator, **, does convert to float: \n#```\n```\n\n```python\n10**-1\n#Output\n#0.1\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22956139\/why-does-numpy-power-return-0-for-small-exponents-while-math-pow-returns-the-cor","best_answers_votes":72,"tags":["python","numpy","exponentiation"],"question_length":405,"response_length":422,"tag_count":3} {"question":"Passing list-likes to .loc or [] with any missing labels is no longer supported I want to create a modified dataframe with the specified columns. I tried the following but throws the error \"Passing list-likes to .loc or [] with any missing labels is no longer supported\" \n```\n# columns to keep\nfiltered_columns = ['text', 'agreeCount', 'disagreeCount', 'id', 'user.firstName', 'user.lastName', 'user.gender', 'user.id']\ntips_filtered = tips_df.loc[:, filtered_columns]\n\n# display tips\ntips_filtered\n```\nThank you","response":"It looks like Pandas has deprecated this method of indexing. According to their docs: This behavior is deprecated and will show a warning message pointing to this section. The recommended alternative is to use .reindex() Using the new recommended method, you can filter your columns using: tips_filtered = tips_df.reindex(columns = filtered_columns). NB: To reindex rows, you would use reindex(index = ...) (More information here).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/61291741\/passing-list-likes-to-loc-or-with-any-missing-labels-is-no-longer-supported","best_answers_votes":54,"tags":["python","pandas","numpy","dataframe"],"question_length":512,"response_length":431,"tag_count":4} {"question":"Scipy sparse... arrays? So, I'm doing some Kmeans classification using numpy arrays that are quite sparse-- lots and lots of zeroes. I figured that I'd use scipy's 'sparse' package to reduce the storage overhead, but I'm a little confused about how to create arrays, not matrices. I've gone through this tutorial on how to create sparse matrices: http:\/\/www.scipy.org\/SciPy_Tutorial#head-c60163f2fd2bab79edd94be43682414f18b90df7 To mimic an array, I just create a 1xN matrix, but as you may guess, Asp.dot(Bsp) doesn't quite work because you can't multiply two 1xN matrices. I'd have to transpose each array to Nx1, and that's pretty lame, since I'd be doing it for every dot-product calculation. Next up, I tried to create an NxN matrix where column 1 == row 1 (such that you can multiply two matrices and just take the top-left corner as the dot product), but that turned out to be really inefficient. I'd love to use scipy's sparse package as a magic replacement for numpy's array(), but as yet, I'm not really sure what to do. Any advice?","response":"Use a scipy.sparse format that is row or column based: csc_matrix and csr_matrix. These use efficient, C implementations under the hood (including multiplication), and transposition is a no-op (esp. if you call transpose(copy=False)), just like with numpy arrays. 
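For a quick feel of the API (a minimal sketch of my own, not part of the timings below; the numbers are arbitrary): \n```\nimport numpy as np\nfrom scipy.sparse import csr_matrix\n\n# a sparse 1x4 row vector; * is matrix multiplication for sparse matrices\nx = csr_matrix(np.array([[1.0, 0.0, 2.0, 0.0]]))\nprint((x * x.T).toarray()) # [[ 5.]], i.e. the inner product 1*1 + 2*2\n```\n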
EDIT: some timings via ipython: \n```\nimport numpy, scipy.sparse\nn = 100000\nx = (numpy.random.rand(n) * 2).astype(int).astype(float) # 50% sparse vector\nx_csr = scipy.sparse.csr_matrix(x)\nx_dok = scipy.sparse.dok_matrix(x.reshape(x_csr.shape))\n```\nNow x_csr and x_dok are 50% sparse: \n```\nprint repr(x_csr)\n<1x100000 sparse matrix of type '<type 'numpy.float64'>'\n with 49757 stored elements in Compressed Sparse Row format>\n```\nAnd the timings: \n```\ntimeit numpy.dot(x, x)\n10000 loops, best of 3: 123 us per loop\n\ntimeit x_dok * x_dok.T\n1 loops, best of 3: 1.73 s per loop\n\ntimeit x_csr.multiply(x_csr).sum()\n1000 loops, best of 3: 1.64 ms per loop\n\ntimeit x_csr * x_csr.T\n100 loops, best of 3: 3.62 ms per loop\n```\nSo it looks like I told a lie. Transposition is very cheap, but there is no efficient C implementation of csr * csc (in the latest scipy 0.9.0). A new csr object is constructed in each call :-( As a hack (though scipy is relatively stable these days), you can do the dot product directly on the sparse data: \n```\ntimeit numpy.dot(x_csr.data, x_csr.data)\n10000 loops, best of 3: 62.9 us per loop\n```\nNote this last approach does a numpy dense multiplication again. The sparsity is 50%, so it's actually faster than dot(x, x) by a factor of 2.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2540059\/scipy-sparse-arrays","best_answers_votes":36,"tags":["python","matrix","numpy","scipy","sparse-matrix"],"question_length":1042,"response_length":1528,"tag_count":5} {"question":"Convert numpy type to python I have a list of dicts in the following form that I generate from pandas. I want to convert it to a JSON format. \n```\nlist_val = [{1.0: 685}, {2.0: 8}]\noutput = json.dumps(list_val)\n```\nHowever, json.dumps throws an error: TypeError: 685 is not JSON serializable I am guessing it's a type conversion issue from numpy to python(?). However, when I convert the values v of each dict in the array using np.int32(v) it still throws the error. Here's the full code: \n```py\nnew = df[df[label] == label_new]\nks_dict = json.loads(content)\nks_list = ks_dict['variables']\nfreq_counts = []\n\nfor ks_var in ks_list:\n\n freq_var = dict()\n freq_var[\"name\"] = ks_var[\"name\"]\n ks_series = new[ks_var[\"name\"]]\n temp_df = ks_series.value_counts().to_dict()\n freq_var[\"new\"] = [{u: np.int32(v)} for (u, v) in temp_df.iteritems()]\n freq_counts.append(freq_var)\n\nout = json.dumps(freq_counts)\n```","response":"It looks like you're correct: \n```\n\n```python\nimport numpy\n```\n\n```python\nimport json\n```\n\n```python\njson.dumps(numpy.int32(685))\n#Output\n#Traceback (most recent call last):\n# File \"<stdin>\", line 1, in <module>\n# File \"\/usr\/lib\/python2.7\/json\/__init__.py\", line 243, in dumps\n# return _default_encoder.encode(obj)\n# File \"\/usr\/lib\/python2.7\/json\/encoder.py\", line 207, in encode\n# chunks = self.iterencode(o, _one_shot=True)\n# File \"\/usr\/lib\/python2.7\/json\/encoder.py\", line 270, in iterencode\n# return _iterencode(o, 0)\n# File \"\/usr\/lib\/python2.7\/json\/encoder.py\", line 184, in default\n# raise TypeError(repr(o) + \" is not JSON serializable\")\n#TypeError: 685 is not JSON serializable\n#```\n#The unfortunate thing here is that numpy numbers' __repr__ doesn't give you any hint about what type they are. They're running around masquerading as ints when they aren't (gasp). 
Ultimately, it looks like json is telling you that an int isn't serializable, but really, it's telling you that this particular np.int32 (or whatever type you actually have) isn't serializable. (No real surprise there -- No np.int32 is serializable). This is also why the dict that you inevitably printed before passing it to json.dumps looks like it just has integers in it as well. The easiest workaround here is probably to write your own serializer [1]: \n```\nclass MyEncoder(json.JSONEncoder):\n def default(self, obj):\n if isinstance(obj, numpy.integer):\n return int(obj)\n elif isinstance(obj, numpy.floating):\n return float(obj)\n elif isinstance(obj, numpy.ndarray):\n return obj.tolist()\n else:\n return super(MyEncoder, self).default(obj)\n```\nYou use it like this: \n```\njson.dumps(numpy.float32(1.2), cls=MyEncoder)\njson.dumps(numpy.arange(12), cls=MyEncoder)\njson.dumps({'a': numpy.int32(42)}, cls=MyEncoder)\n```\netc. [1] Or you could just write the default function and pass that as the default keyword argument to json.dumps. In this scenario, you'd replace the last line with raise TypeError, but ... meh. The class is more extensible :-)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27050108\/convert-numpy-type-to-python","best_answers_votes":129,"tags":["python","json","numpy","pandas"],"question_length":920,"response_length":2054,"tag_count":4} {"question":"How to plot vectors in python using matplotlib I am taking a course on linear algebra and I want to visualize the vectors in action, such as vector addition, normal vectors, and so on. For instance: \n```\nV = np.array([[1,1],[-2,2],[4,-7]])\n```\nIn this case I want to plot 3 vectors V1 = (1,1), M2 = (-2,2), M3 = (4,-7). Then I should be able to add V1,V2 to plot a new vector V12 (all together in one figure). When I use the following code, the plot is not as intended \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nM = np.array([[1,1],[-2,2],[4,-7]])\n\nprint(\"vector:1\")\nprint(M[0,:])\n# print(\"vector:2\")\n# print(M[1,:])\nrows,cols = M.T.shape\nprint(cols)\n\nfor i,l in enumerate(range(0,cols)):\n print(\"Iteration: {}-{}\".format(i,l))\n print(\"vector:{}\".format(i))\n print(M[i,:])\n v1 = [0,0],[M[i,0],M[i,1]]\n # v1 = [M[i,0]],[M[i,1]]\n print(v1)\n plt.figure(i)\n plt.plot(v1)\n plt.show()\n```","response":"How about something like \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nV = np.array([[1,1], [-2,2], [4,-7]])\norigin = np.array([[0, 0, 0],[0, 0, 0]]) # origin point\n\nplt.quiver(*origin, V[:,0], V[:,1], color=['r','b','g'], scale=21)\nplt.show()\n```\nThen to add up any two vectors and plot them to the same figure, do so before you call plt.show(). Something like: \n```\nplt.quiver(*origin, V[:,0], V[:,1], color=['r','b','g'], scale=21)\nv12 = V[0] + V[1] # adding up the 1st (red) and 2nd (blue) vectors\nplt.quiver(*origin, v12[0], v12[1], scale=21)\nplt.show()\n```\nNOTE: in Python2 use origin[0], origin[1] instead of *origin","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/42281966\/how-to-plot-vectors-in-python-using-matplotlib","best_answers_votes":77,"tags":["python","python-2.7","numpy","matplotlib","vector"],"question_length":916,"response_length":634,"tag_count":5} {"question":"Randomly select from numpy array I have two related numpy arrays, X and y. 
I need to select n random rows from X and store this in an array, the corresponding y value and the appends to it the index of the points randomly selected. I have another array index which stores a list of index which I dont want to sample. How can I do this? Sample data: \n```\nindex = [2,3]\nX = np.array([[0.3,0.7],[0.5,0.5] ,[0.2,0.8], [0.1,0.9]])\ny = np.array([[0], [1], [0], [1]])\n```\nIf these X's were randomly selected (where n=2): \n```\nrandomylSelected = np.array([[0.3,0.7],[0.5,0.5]])\n```\nthe desired output would be: \n```\nindex = [0,1,2,3]\nrandomlySelectedY = [0,1]\n```\nHow can I do this?","response":"You can create random indices with np.random.choice: \n```\nn = 2 # for 2 random indices\nindex = np.random.choice(X.shape[0], n, replace=False)\n```\nThen you just need to index your arrays with the result: \n```\nx_random = X[index]\ny_random = Y[index]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43506766\/randomly-select-from-numpy-array","best_answers_votes":87,"tags":["python","arrays","numpy"],"question_length":674,"response_length":252,"tag_count":3} {"question":"Python finite difference functions? I've been looking around in Numpy\/Scipy for modules containing finite difference functions. However, the closest thing I've found is numpy.gradient(), which is good for 1st-order finite differences of 2nd order accuracy, but not so much if you're wanting higher-order derivatives or more accurate methods. I haven't even found very many specific modules for this sort of thing; most people seem to be doing a \"roll-your-own\" thing as they need them. So my question is if anyone knows of any modules (either part of Numpy\/Scipy or a third-party module) that are specifically dedicated to higher-order (both in accuracy and derivative) finite difference methods. I've got my own code that I'm working on, but it's currently kind of slow, and I'm not going to attempt to optimize it if there's something already available. Note that I am talking about finite differences, not derivatives. I've seen both scipy.misc.derivative() and Numdifftools, which take the derivative of an analytical function, which I don't have.","response":"One way to do this quickly is by convolution with the derivative of a gaussian kernel. The simple case is a convolution of your array with [-1, 1] which gives exactly the simple finite difference formula. Beyond that, (f*g)'= f'*g = f*g' where the * is convolution, so you end up with your derivative convolved with a plain gaussian, so of course this will smooth your data a bit, which can be minimized by choosing the smallest reasonable kernel. 
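To see the claimed equivalence concretely (a tiny sketch of my own; the sample values are arbitrary): \n```\nimport numpy as np\n\nf = np.array([0., 1., 4., 9.])\nprint(np.convolve(f, [1, -1])[1:-1]) # [1. 3. 5.]\nprint(np.diff(f)) # [1. 3. 5.], the same forward differences\n```\nThe fuller demonstration below compares np.diff, np.convolve and the gaussian-filter route side by side.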
\n```\nimport numpy as np\nfrom scipy import ndimage\nimport matplotlib.pyplot as plt\n\n#Data:\nx = np.linspace(0,2*np.pi,100)\nf = np.sin(x) + .02*(np.random.rand(100)-.5)\n\n#Normalization:\ndx = x[1] - x[0] # use np.diff(x) if x is not uniform\ndxdx = dx**2\n\n#First derivatives:\ndf = np.diff(f) \/ dx\ncf = np.convolve(f, [1,-1]) \/ dx\ngf = ndimage.gaussian_filter1d(f, sigma=1, order=1, mode='wrap') \/ dx\n\n#Second derivatives:\nddf = np.diff(f, 2) \/ dxdx\nccf = np.convolve(f, [1, -2, 1]) \/ dxdx\nggf = ndimage.gaussian_filter1d(f, sigma=1, order=2, mode='wrap') \/ dxdx\n\n#Plotting:\nplt.figure()\nplt.plot(x, f, 'k', lw=2, label='original')\nplt.plot(x[:-1], df, 'r.', label='np.diff, 1')\nplt.plot(x, cf[:-1], 'r--', label='np.convolve, [1,-1]')\nplt.plot(x, gf, 'r', label='gaussian, 1')\nplt.plot(x[:-2], ddf, 'g.', label='np.diff, 2')\nplt.plot(x, ccf[:-2], 'g--', label='np.convolve, [1,-2,1]')\nplt.plot(x, ggf, 'g', label='gaussian, 2')\n```\nSince you mentioned np.gradient I assumed you had at least 2d arrays, so the following applies to that: This is built into the scipy.ndimage package if you want to do it for ndarrays. Be cautious though, because of course this doesn't give you the full gradient but I believe the product of all directions. Someone with better expertise will hopefully speak up. Here's an example: \n```\nfrom scipy import ndimage\n\nx = np.linspace(0,2*np.pi,100)\nsine = np.sin(x)\n\nim = sine * sine[...,None]\nd1 = ndimage.gaussian_filter(im, sigma=5, order=1, mode='wrap')\nd2 = ndimage.gaussian_filter(im, sigma=5, order=2, mode='wrap')\n\nplt.figure()\n\nplt.subplot(131)\nplt.imshow(im)\nplt.title('original')\n\nplt.subplot(132)\nplt.imshow(d1)\nplt.title('first derivative')\n\nplt.subplot(133)\nplt.imshow(d2)\nplt.title('second derivative')\n```\nUse of the gaussian_filter1d allows you to take a directional derivative along a certain axis: \n```\nimx = im * x\nd2_0 = ndimage.gaussian_filter1d(imx, axis=0, sigma=5, order=2, mode='wrap')\nd2_1 = ndimage.gaussian_filter1d(imx, axis=1, sigma=5, order=2, mode='wrap')\n\nplt.figure()\nplt.subplot(131)\nplt.imshow(imx)\nplt.title('original')\nplt.subplot(132)\nplt.imshow(d2_0)\nplt.title('derivative along axis 0')\nplt.subplot(133)\nplt.imshow(d2_1)\nplt.title('along axis 1')\n```\nThe first set of results above are a little confusing to me (peaks in the original show up as peaks in the second derivative when the curvature should point down). Without looking further into how the 2d version works, I can only really recomend the 1d version. If you want the magnitude, simply do something like: \n```\nd2_mag = np.sqrt(d2_0**2 + d2_1**2)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18991408\/python-finite-difference-functions","best_answers_votes":59,"tags":["python","numpy","scipy"],"question_length":1051,"response_length":3022,"tag_count":3} {"question":"How do I transform a \"SciPy sparse matrix\" to a \"NumPy matrix\"? I am using a python function called \"incidence_matrix(G)\", which returns the incident matrix of graph. It is from Networkx package. The problem that I am facing is the return type of this function is \"Scipy Sparse Matrix\". I need to have the Incident matrix in the format of numpy matrix or array. I was wondering if there is any easy way of doing that or not? Or is there any built-in function that can do this transformation for me or not? 
Thanks","response":"The scipy.sparse.*_matrix classes have several useful methods; for example, if a is e.g. a scipy.sparse.csr_matrix: a.toarray() - Return a dense ndarray representation of this matrix. (numpy.array, recommended) a.todense() - Return a dense matrix representation of this matrix. (numpy.matrix) Previously, these methods had shorthands (.A for .toarray(), and .M for .todense()), but these have been or will be deprecated as of Scipy v1.14.0.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26576524\/how-do-i-transform-a-scipy-sparse-matrix-to-a-numpy-matrix","best_answers_votes":80,"tags":["python","numpy","scipy","sparse-matrix","networkx"],"question_length":512,"response_length":429,"tag_count":5} {"question":"numpy.savetxt without hash mark at beginning of header line When I try to save a matrix with header, a hash mark and a space (# ) appear on the first line: input: \n```\nnp.savetxt(filename,data, fmt='%i %i %i %i %s',delimiter='\\t',header=\"a\\tb\\tc\\td\\te\")\n```\noutput: \n```\n# a b c d e\n0 0 0 0 bla\n0 0 0 0 bla\n1 1 1 1 bla\n1 1 1 1 bla\n```\nAny hint why? How could I remove it?","response":"It inserts the # because that line is a comment, and the default character for comments is the symbol #, as you can read in the documentation here. If you want to get rid of it, pass comments='' as an option to savetxt.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17352244\/numpy-savetxt-without-hash-mark-at-beginning-of-header-line","best_answers_votes":84,"tags":["python","numpy"],"question_length":409,"response_length":216,"tag_count":2} {"question":"Understanding PyTorch einsum I'm familiar with how einsum works in NumPy. A similar functionality is also offered by PyTorch: torch.einsum(). What are the similarities and differences, either in terms of functionality or performance? The information available in the PyTorch documentation is rather scanty and doesn't provide any insights regarding this.","response":"Since the description of einsum is skimpy in the torch documentation, I decided to write this post to document, compare and contrast how torch.einsum() behaves when compared to numpy.einsum(). Differences: NumPy allows both lowercase and uppercase letters [a-zA-Z] for the \"subscript string\" whereas PyTorch allows only the lowercase letters [a-z]. NumPy accepts nd-arrays, plain Python lists (or tuples), list of lists (or tuple of tuples, list of tuples, tuple of lists) or even PyTorch tensors as operands (i.e. inputs). This is because the operands have only to be array_like and not strictly NumPy nd-arrays. On the contrary, PyTorch expects the operands (i.e. inputs) strictly to be PyTorch tensors. It will throw a TypeError if you pass either plain Python lists\/tuples (or its combinations) or NumPy nd-arrays. NumPy supports a lot of keyword arguments (e.g. optimize) in addition to nd-arrays while PyTorch doesn't offer such flexibility yet.
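As a quick illustration of the input-type difference (a minimal sketch of my own; the nested list lst is hypothetical): \n```\nimport numpy as np\nimport torch\n\nlst = [[1, 2], [3, 4]]\nnp.einsum('ij -> ji', lst) # fine: any array_like is accepted\ntorch.einsum('ij -> ji', torch.tensor(lst)) # fine: a proper tensor\n# torch.einsum('ij -> ji', lst) # TypeError: operands must be tensors\n```\n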
Here are the implementations of some examples both in PyTorch and NumPy: \n```\n# input tensors to work with\n\n```python\nvec\n#Output\n#tensor([0, 1, 2, 3])\n```\n\n```python\naten\n#Output\n#tensor([[11, 12, 13, 14],\n# [21, 22, 23, 24],\n# [31, 32, 33, 34],\n# [41, 42, 43, 44]])\n```\n\n```python\nbten\n#Output\n#tensor([[1, 1, 1, 1],\n# [2, 2, 2, 2],\n# [3, 3, 3, 3],\n# [4, 4, 4, 4]])\n#```\n#1) Matrix multiplication PyTorch: torch.matmul(aten, bten) ; aten.mm(bten) NumPy : np.einsum(\"ij, jk -> ik\", arr1, arr2) \n#```\n```\n\n```python\ntorch.einsum('ij, jk -> ik', aten, bten)\n#Output\n#tensor([[130, 130, 130, 130],\n# [230, 230, 230, 230],\n# [330, 330, 330, 330],\n# [430, 430, 430, 430]])\n#```\n#2) Extract elements along the main-diagonal PyTorch: torch.diag(aten) NumPy : np.einsum(\"ii -> i\", arr) \n#```\n```\n\n```python\ntorch.einsum('ii -> i', aten)\n#Output\n#tensor([11, 22, 33, 44])\n#```\n#3) Hadamard product (i.e. element-wise product of two tensors) PyTorch: aten * bten NumPy : np.einsum(\"ij, ij -> ij\", arr1, arr2) \n#```\n```\n\n```python\ntorch.einsum('ij, ij -> ij', aten, bten)\n#Output\n#tensor([[ 11, 12, 13, 14],\n# [ 42, 44, 46, 48],\n# [ 93, 96, 99, 102],\n# [164, 168, 172, 176]])\n#```\n#4) Element-wise squaring PyTorch: aten ** 2 NumPy : np.einsum(\"ij, ij -> ij\", arr, arr) \n#```\n```\n\n```python\ntorch.einsum('ij, ij -> ij', aten, aten)\n#Output\n#tensor([[ 121, 144, 169, 196],\n# [ 441, 484, 529, 576],\n# [ 961, 1024, 1089, 1156],\n# [1681, 1764, 1849, 1936]])\n#```\n#General: Element-wise nth power can be implemented by repeating the subscript string and tensor n times. For e.g., computing element-wise 4th power of a tensor can be done using: \n#```\n## NumPy: np.einsum('ij, ij, ij, ij -> ij', arr, arr, arr, arr)\n```\n\n```python\ntorch.einsum('ij, ij, ij, ij -> ij', aten, aten, aten, aten)\n#Output\n#tensor([[ 14641, 20736, 28561, 38416],\n# [ 194481, 234256, 279841, 331776],\n# [ 923521, 1048576, 1185921, 1336336],\n# [2825761, 3111696, 3418801, 3748096]])\n#```\n#5) Trace (i.e. 
sum of main-diagonal elements) PyTorch: torch.trace(aten) NumPy einsum: np.einsum(\"ii -> \", arr) \n#```\n```\n\n```python\ntorch.einsum('ii -> ', aten)\n#Output\n#tensor(110)\n#```\n#6) Matrix transpose PyTorch: torch.transpose(aten, 1, 0) NumPy einsum: np.einsum(\"ij -> ji\", arr) \n#```\n```\n\n```python\ntorch.einsum('ij -> ji', aten)\n#Output\n#tensor([[11, 21, 31, 41],\n# [12, 22, 32, 42],\n# [13, 23, 33, 43],\n# [14, 24, 34, 44]])\n#```\n#7) Outer Product (of vectors) PyTorch: torch.ger(vec, vec) NumPy einsum: np.einsum(\"i, j -> ij\", vec, vec) \n#```\n```\n\n```python\ntorch.einsum('i, j -> ij', vec, vec)\n#Output\n#tensor([[0, 0, 0, 0],\n# [0, 1, 2, 3],\n# [0, 2, 4, 6],\n# [0, 3, 6, 9]])\n#```\n#8) Inner Product (of vectors) PyTorch: torch.dot(vec1, vec2) NumPy einsum: np.einsum(\"i, i -> \", vec1, vec2) \n#```\n```\n\n```python\ntorch.einsum('i, i -> ', vec, vec)\n#Output\n#tensor(14)\n#```\n#9) Sum along axis 0 PyTorch: torch.sum(aten, 0) NumPy einsum: np.einsum(\"ij -> j\", arr) \n#```\n```\n\n```python\ntorch.einsum('ij -> j', aten)\n#Output\n#tensor([104, 108, 112, 116])\n#```\n#10) Sum along axis 1 PyTorch: torch.sum(aten, 1) NumPy einsum: np.einsum(\"ij -> i\", arr) \n#```\n```\n\n```python\ntorch.einsum('ij -> i', aten)\n#Output\n#tensor([ 50, 90, 130, 170])\n#```\n#11) Batch Matrix Multiplication PyTorch: torch.bmm(batch_tensor_1, batch_tensor_2) NumPy : np.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2) \n#```\n## input batch tensors to work with\n```\n\n```python\nbatch_tensor_1 = torch.arange(2 * 4 * 3).reshape(2, 4, 3)\n```\n\n```python\nbatch_tensor_2 = torch.arange(2 * 3 * 4).reshape(2, 3, 4)\n```\n\n```python\ntorch.bmm(batch_tensor_1, batch_tensor_2)\n#Output\n#tensor([[[ 20, 23, 26, 29],\n# [ 56, 68, 80, 92],\n# [ 92, 113, 134, 155],\n# [ 128, 158, 188, 218]],\n#\n# [[ 632, 671, 710, 749],\n# [ 776, 824, 872, 920],\n# [ 920, 977, 1034, 1091],\n# [1064, 1130, 1196, 1262]]])\n#\n## sanity check with the shapes\n```\n\n```python\ntorch.bmm(batch_tensor_1, batch_tensor_2).shape\n#Output\n#torch.Size([2, 4, 4])\n#\n## batch matrix multiply using einsum\n```\n\n```python\ntorch.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2)\n#Output\n#tensor([[[ 20, 23, 26, 29],\n# [ 56, 68, 80, 92],\n# [ 92, 113, 134, 155],\n# [ 128, 158, 188, 218]],\n#\n# [[ 632, 671, 710, 749],\n# [ 776, 824, 872, 920],\n# [ 920, 977, 1034, 1091],\n# [1064, 1130, 1196, 1262]]])\n#\n## sanity check with the shapes\n```\n\n```python\ntorch.einsum(\"bij, bjk -> bik\", batch_tensor_1, batch_tensor_2).shape\n```\n12) Sum along axis 2 PyTorch: torch.sum(batch_ten, 2) NumPy einsum: np.einsum(\"ijk -> ij\", arr3D) \n```\n```\n\n```python\ntorch.einsum(\"ijk -> ij\", batch_ten)\n#Output\n#tensor([[ 50, 90, 130, 170],\n# [ 4, 8, 12, 16]])\n#```\n#13) Sum all the elements in an nD tensor PyTorch: torch.sum(batch_ten) NumPy einsum: np.einsum(\"ijk -> \", arr3D) \n#```\n```\n\n```python\ntorch.einsum(\"ijk -> \", batch_ten)\n#Output\n#tensor(480)\n#```\n#14) Sum over multiple axes (i.e. marginalization) PyTorch: torch.sum(arr, dim=(dim0, dim1, dim2, dim3, dim4, dim6, dim7)) NumPy: np.einsum(\"ijklmnop -> n\", nDarr) \n#```\n## 8D tensor\n```\n\n```python\nnDten = torch.randn((3,5,4,6,8,2,7,9))\n```\n\n```python\nnDten.shape\n#Output\n#torch.Size([3, 5, 4, 6, 8, 2, 7, 9])\n#\n## marginalize out dimension 5 (i.e. \"n\" here)\n```\n\n```python\nesum = torch.einsum(\"ijklmnop -> n\", nDten)\n```\n\n```python\nesum\n#Output\n#tensor([ 98.6921, -206.0575])\n#\n## marginalize out axis 5 (i.e. 
sum over the rest of the axes)\n```\n\n```python\ntsum = torch.sum(nDten, dim=(0, 1, 2, 3, 4, 6, 7))\n```\n\n```python\ntorch.allclose(tsum, esum)\n#Output\n#True\n#```\n#15) Double Dot Products \/ Frobenius inner product (same as: torch.sum(hadamard-product) cf. 3) PyTorch: torch.sum(aten * bten) NumPy : np.einsum(\"ij, ij -> \", arr1, arr2) \n#```\n```\n\n```python\ntorch.einsum(\"ij, ij -> \", aten, bten)\n#Output\n#tensor(1300)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/55894693\/understanding-pytorch-einsum","best_answers_votes":114,"tags":["python","numpy","pytorch","tensor","numpy-einsum"],"question_length":350,"response_length":6826,"tag_count":5} {"question":"set very low values to zero in numpy In numpy I have an array like \n```\n[0 + 0.5j, 0.25 + 1.2352444e-24j, 0.25+ 0j, 2.46519033e-32 + 0j]\n```\nwhat is the fastest and easiest way to set the super-low values to zero to get \n```\n[0 + 0.5j, 0.25 + 0j, 0.25+ 0j, 0 + 0j]\n```\nefficiency is not paramount.","response":"Hmmm. I'm not super-happy with it, but this seems to work: \n```\n\n```python\na = np.array([0 + 0.5j, 0.25 + 1.2352444e-24j, 0.25+ 0j, 2.46519033e-32 + 0j])\n```\n\n```python\na\n#Output\n#array([ 0.00000000e+00 +5.00000000e-01j,\n# 2.50000000e-01 +1.23524440e-24j,\n# 2.50000000e-01 +0.00000000e+00j, 2.46519033e-32 +0.00000000e+00j])\n```\n\n```python\ntol = 1e-16\n```\n\n```python\na.real[abs(a.real) < tol] = 0.0\n```\n\n```python\na.imag[abs(a.imag) < tol] = 0.0\n```\n\n```python\na\n#Output\n#array([ 0.00+0.5j, 0.25+0.j , 0.25+0.j , 0.00+0.j ])\n#```\n#and you can choose your tolerance as your problem requires. I usually use an order of magnitude or so higher than \n#```\n```\n\n```python\nnp.finfo(np.float).eps\n#Output\n#2.2204460492503131e-16\n#```\n#but it's problem-dependent.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14419290\/set-very-low-values-to-zero-in-numpy","best_answers_votes":39,"tags":["python","numpy"],"question_length":302,"response_length":670,"tag_count":2} {"question":"Standard deviation in numpy [duplicate] This question already has answers here: Python: Numpy standard deviation error (3 answers) Closed 9 years ago. Here is my code: \n```\nimport numpy as np\nprint(np.std(np.array([0,1])))\n```\nit produces 0.5. I am confident that this is incorrect. What am I doing wrong?","response":"By default, numpy.std returns the population standard deviation, in which case np.std([0,1]) is correctly reported to be 0.5. If you are looking for the sample standard deviation, you can supply an optional ddof parameter to std(): \n```\n\n```python\nnp.std([0, 1], ddof=1)\n#Output\n#0.70710678118654757\n#```\n#ddof modifies the divisor of the sum of the squares of the samples-minus-mean. The divisor is N - ddof, where the default ddof is 0 as you can see from your result.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/34050491\/standard-deviation-in-numpy","best_answers_votes":105,"tags":["python","numpy","standard-deviation"],"question_length":304,"response_length":452,"tag_count":3} {"question":"python numpy roll with padding I'd like to roll a 2D numpy array, except that I'd like to fill the ends with zeros rather than roll the data as if it were periodic.
The following \n```\nimport numpy as np\n\nx = np.array([[1, 2, 3], [4, 5, 6]])\n\nnp.roll(x, 1, axis=1)\n```\nreturns \n```\narray([[3, 1, 2], [6, 4, 5]])\n```\nbut what I would prefer is \n```\narray([[0, 1, 2], [0, 4, 5]])\n```","response":"numpy.pad can be used to create an array with surrounding zeros. Pad seems to be quite powerful and can do much more than a simple \"roll\". The tuple ((0,0),(1,0)) used in this answer indicates on which \"side\" of the matrix to pad. \n```\nimport numpy as np\nx = np.array([[1, 2, 3],[4, 5, 6]])\n\nprint np.pad(x,((0,0),(1,0)), mode='constant')[:, :-1]\n```\nGiving \n```\n[[0 1 2]\n [0 4 5]]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2777907\/python-numpy-roll-with-padding","best_answers_votes":56,"tags":["python","numpy"],"question_length":380,"response_length":386,"tag_count":2} {"question":"What might be the cause of 'invalid value encountered in less_equal' in numpy I experienced a RuntimeWarning \n```\nRuntimeWarning: invalid value encountered in less_equal\n```\nGenerated by this line of code of mine: \n```\ncenter_dists[j] <= center_dists[i]\n```\nBoth center_dists[j] and center_dists[i] are numpy arrays. What might be the cause of this warning?","response":"That's most likely happening because of a np.nan somewhere in the inputs involved. An example of it is shown below - \n```\n\n```python\nA = np.array([4, 2, 1])\n```\n\n```python\nB = np.array([2, 2, np.nan])\n```\n\n```python\nA<=B\nRuntimeWarning: invalid value encountered in less_equal\n#Output\n#array([False, True, False], dtype=bool)\n#```\n#For all those comparisons involving np.nan, it would output False. Let's confirm it for a broadcasted comparison. Here's a sample - \n#```\n```\n\n```python\nA = np.array([4, 2, 1])\n```\n\n```python\nB = np.array([2, 2, np.nan])\n```\n\n```python\nA[:,None] <= B\nRuntimeWarning: invalid value encountered in less_equal\n#Output\n#array([[False, False, False],\n# [ True, True, False],\n# [ True, True, False]], dtype=bool)\n#```\n#Please notice the third column in the output, which corresponds to the comparison involving the third element np.nan in B and results in all False values.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/34955158\/what-might-be-the-cause-of-invalid-value-encountered-in-less-equal-in-numpy","best_answers_votes":57,"tags":["python","arrays","numpy","runtime"],"question_length":357,"response_length":872,"tag_count":4} {"question":"What is the advantage of saving `.npz` files instead of `.npy` in python, regarding speed, memory and look-up? The python documentation for numpy.savez, which saves an .npz file, is: The .npz file format is a zipped archive of files named after the variables they contain. The archive is not compressed and each file in the archive contains one variable in .npy format. [...] When opening the saved .npz file with load a NpzFile object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the .files attribute), and for the arrays themselves. My question is: what is the point of numpy.savez? Is it just a more elegant version (shorter command) to save multiple arrays, or is there a speed-up in the saving\/reading process? Does it occupy less memory?","response":"There are two parts to the answer to your question. I.
NPZ As we already read from the doc, the .npy format is: the standard binary file format in NumPy for persisting a single arbitrary NumPy array on disk. ... The format is designed to be as simple as possible while achieving its limited goals. (sources) And .npz is only a simple way to combine multiple arrays into a single file, one can use ZipFile to contain multiple \u201c.npy\u201d files. We recommend using the file extension \u201c.npz\u201d for these archives. (sources) So, .npz is just a ZipFile containing multiple \u201c.npy\u201d files. And this ZipFile can be either compressed (by using np.savez_compressed) or uncompressed (by using np.savez). It's similar to tarball archive file in Unix-like system, where a tarball file can be just an uncompressed archive file which containing other files or a compressed archive file by combining with various compression programs (gzip, bzip2, etc.) II. Different APIs for binary serialization And Numpy also provides different APIs to produce these binary file output: np.save ---> Save an array to a binary file in NumPy .npy format np.savez --> Save several arrays into a single file in uncompressed .npz format np.savez_compressed --> Save several arrays into a single file in compressed .npz format np.load --> Load arrays or pickled objects from .npy, .npz or pickled files If we skim the source code of Numpy, under the hood: \n```py\ndef _savez(file, args, kwds, compress, allow_pickle=True, pickle_kwargs=None):\n ...\n if compress:\n compression = zipfile.ZIP_DEFLATED\n else:\n compression = zipfile.ZIP_STORED\n ...\n\n\ndef savez(file, *args, **kwds):\n _savez(file, args, kwds, False)\n\n\ndef savez_compressed(file, *args, **kwds):\n _savez(file, args, kwds, True)\n```\nThen back to the question: If only use np.save, there is no more compression on top of the .npy format, only just a single archive file for the convenience of managing multiple related files. If use np.savez_compressed, then of course less memory on disk because of more CPU time to do the compression job (i.e. a bit slower).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/54238670\/what-is-the-advantage-of-saving-npz-files-instead-of-npy-in-python-regard","best_answers_votes":71,"tags":["python","numpy","serialization","archive","npz-file"],"question_length":796,"response_length":2126,"tag_count":5} {"question":"Retrieve name of column from its Index in Pandas I have a pandas dataframe and a numpy array of values of that dataframe. I have the index of a specific column and I already have the row index of an important value. Now I need to get the column name of that particular value from my dataframe. 
After searching through the documentation, I found out that I can do the opposite but not what I want.","response":"I think you need to index the column names by position (Python counts from 0, so for the fourth column you need 3): \n```\ncolname = df.columns[pos]\n```\nSample: \n```\ndf = pd.DataFrame({'A':[1,2,3],\n 'B':[4,5,6],\n 'C':[7,8,9],\n 'D':[1,3,5],\n 'E':[5,3,6],\n 'F':[7,4,3]})\n\nprint (df)\n A B C D E F\n0 1 4 7 1 5 7\n1 2 5 8 3 3 4\n2 3 6 9 5 6 3\n\npos = 3\ncolname = df.columns[pos]\nprint (colname)\nD\n```\n```\npos = [3,5]\ncolname = df.columns[pos]\nprint (colname)\nIndex(['D', 'F'], dtype='object')\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43068412\/retrieve-name-of-column-from-its-index-in-pandas","best_answers_votes":78,"tags":["python","pandas","numpy","indexing"],"question_length":397,"response_length":587,"tag_count":4} {"question":"Python: ufunc 'add' did not contain a loop with signature matching types dtype('S21') dtype('S21') dtype('S21') I have two dataframes, which both have an Order ID and a date. I wanted to add a flag into the first dataframe df1: if a record with the same order id and date is in dataframe df2, then add a Y: \n```\n[ df1['R'] = np.where(orders['key'].isin(df2['key']), 'Y', 0)]\n```\nTo accomplish that, I was going to create a key, which would be the concatenation of the order_id and date, but when I try the following code: \n```\ndf1['key']=df1['Order_ID']+'_'+df1['Date']\n```\nI get this error \n```\nufunc 'add' did not contain a loop with signature matching types dtype('S21') dtype('S21') dtype('S21')\n```\ndf1 looks like this: \n```\nDate | Order_ID | other data points ... \n201751 4395674 ...\n201762 3487535 ...\n```\nThese are the datatypes: \n```\ndf1.info()\nRangeIndex: 157443 entries, 0 to 157442\nData columns (total 6 columns):\nOrder_ID 157429 non-null object\nDate 157443 non-null int64\n...\ndtypes: float64(2), int64(2), object(2)\nmemory usage: 7.2+ MB\n\ndf1['Order_ID'].values\narray(['782833030', '782834969', '782836416', ..., '783678018',\n '783679806', '783679874'], dtype=object)\n```","response":"The problem is that you can't add an object array (containing strings) to a number array, that's just ambiguous: \n```\n\n```python\nimport pandas as pd\n```\n\n```python\npd.Series(['abc', 'def']) + pd.Series([1, 2])\n#Output\n#TypeError: ufunc 'add' did not contain a loop with signature matching types dtype(''\n# with 1135520 stored elements in Compressed Sparse Row format>,\n# <49998x70000 sparse matrix of type ''\n# with 1135520 stored elements in Compressed Sparse Row format>], \n# dtype=object)\n#```\n#I would like to use both predictors in a regression, but the current format is obviously not what I'm looking for. Would it be possible to get the following: \n#```\n#<49998x1400000 sparse matrix of type ''\n# with 2271040 stored elements in Compressed Sparse Row format>\n#```\n#It is too large to be converted to a dense format.\n```","response":"You can use the scipy.sparse.hstack to concatenate sparse matrices with the same number of rows (horizontal concatenation): \n```\nfrom scipy.sparse import hstack\nhstack((X, X2))\n```\nSimilarly, you can use scipy.sparse.vstack to concatenate sparse matrices with the same number of columns (vertical concatenation).
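For instance, with small matrices (a minimal sketch of my own; shapes chosen arbitrarily): \n```\nimport numpy as np\nfrom scipy.sparse import csr_matrix, hstack, vstack\n\nA = csr_matrix(np.eye(3)) # 3x3\nB = csr_matrix(np.ones((3, 2))) # 3x2\nhstack((A, B)).shape # (3, 5)\nvstack((A, A)).shape # (6, 3)\n```\n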
Using numpy.hstack or numpy.vstack will create an array with two sparse matrix objects.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19710602\/concatenate-sparse-matrices-in-python-using-scipy-numpy","best_answers_votes":90,"tags":["python","numpy","scipy","sparse-matrix"],"question_length":859,"response_length":400,"tag_count":4} {"question":"Swap two rows in a numpy array in python [duplicate] This question already has an answer here: Row exchange in Numpy [duplicate] (1 answer) Closed 6 years ago. How to swap the xth and yth rows of a 2-D NumPy array? x & y are inputs provided by the user. Let's say x = 0 & y = 2, and the input array is as below: \n```\na = [[4 3 1] \n [5 7 0] \n [9 9 3] \n [8 2 4]] \nExpected Output : \n[[9 9 3] \n [5 7 0] \n [4 3 1] \n [8 2 4]]\n```\nI tried multiple things, but did not get the expected result. This is what I tried: \n```\na[x],a[y]= a[y],a[x]\n\noutput I got is:\n[[9 9 3]\n [5 7 0]\n [9 9 3]\n [8 2 4]]\n```\nPlease suggest what is wrong in my solution.","response":"Index with both rows at once: \n```\na[[x, y]] = a[[y, x]]\n```\nWith your example: \n```\na = np.array([[4,3,1], [5,7,0], [9,9,3], [8,2,4]])\n\na \n# array([[4, 3, 1],\n# [5, 7, 0],\n# [9, 9, 3],\n# [8, 2, 4]])\n\na[[0, 2]] = a[[2, 0]]\na\n# array([[9, 9, 3],\n# [5, 7, 0],\n# [4, 3, 1],\n# [8, 2, 4]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/54069863\/swap-two-rows-in-a-numpy-array-in-python","best_answers_votes":84,"tags":["python","arrays","numpy","swap"],"question_length":659,"response_length":323,"tag_count":4} {"question":"Cython: (Why \/ When) Is it preferable to use Py_ssize_t for indexing? This is a follow-up to this question. (Why \/ When) Is it preferable to use Py_ssize_t for indexing? In the docs I just found \n```\n# Purists could use \"Py_ssize_t\" which is the proper Python type for\n# array indices.\n```\n-> Does that mean one should always use Py_ssize_t when indexing NumPy\/Cython arrays\/views? -> Is Py_ssize_t e.g. an unsigned int, so that I can't use @cython.boundscheck(False)?","response":"Py_ssize_t is signed. See PEP 353, where it says \"A new type Py_ssize_t is introduced, which has the same size as the compiler's size_t type, but is signed. It will be a typedef for ssize_t where available.\" You should use Py_ssize_t for indexing. I didn't find a definitive statement of this in the Cython docs, but Stefan Behnel, a Cython developer, said as much in an email (https:\/\/groups.google.com\/forum\/#!topic\/cython-users\/brENF_M9zxM): As a general remark, you are using ints as indices. You should use the Py_ssize_t type instead (or size_t, if you prefer an unsigned type) to properly accommodate for 64 bit architectures.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20987390\/cython-why-when-is-it-preferable-to-use-py-ssize-t-for-indexing","best_answers_votes":48,"tags":["python","numpy","indexing","cython","unsigned-integer"],"question_length":473,"response_length":633,"tag_count":5} {"question":"How to multiply two vectors and get a matrix? In a numpy operation, I have two vectors: let's say vector A is 4X1 and vector B is 1X5. If I do AXB, it should result in a matrix of size 4X5. But I have tried many times, doing many kinds of reshape and transpose; they all either raise an error saying the shapes are not aligned or return a single value.
How should I get the matrix product I want?","response":"Normal matrix multiplication works as long as the vectors have the right shape. Remember that * in Numpy is elementwise multiplication, and matrix multiplication is available with numpy.dot() (or with the @ operator, in Python 3.5) \n```\n\n```python\nnumpy.dot(numpy.array([[1], [2]]), numpy.array([[3, 4]]))\n#Output\n#array([[3, 4],\n# [6, 8]])\n#```\n#This is called an \"outer product.\" You can get it from plain vectors using numpy.outer(): \n#```\n```\n\n```python\nnumpy.outer(numpy.array([1, 2]), numpy.array([3, 4]))\n#Output\n#array([[3, 4],\n# [6, 8]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28578302\/how-to-multiply-two-vector-and-get-a-matrix","best_answers_votes":61,"tags":["python","numpy","matrix","vector","matrix-multiplication"],"question_length":350,"response_length":522,"tag_count":5} {"question":"bounding box of numpy array Suppose you have a 2D numpy array with some random values and surrounding zeros. Example \"tilted rectangle\": \n```\nimport numpy as np\nfrom skimage import transform\n\nimg1 = np.zeros((100,100))\nimg1[25:75,25:75] = 1.\nimg2 = transform.rotate(img1, 45)\n```\nNow I want to find the smallest bounding rectangle for all the nonzero data. For example: \n```\na = np.where(img2 != 0)\nbbox = img2[np.min(a[0]):np.max(a[0])+1, np.min(a[1]):np.max(a[1])+1]\n```\nWhat would be the fastest way to achieve this result? I am sure there is a better way since the np.where function takes quite a lot of time if I am e.g. using 1000x1000 data sets. Edit: Should also work in 3D...","response":"You can roughly halve the execution time by using np.any to reduce the rows and columns that contain non-zero values to 1D vectors, rather than finding the indices of all non-zero values using np.where: \n```\ndef bbox1(img):\n a = np.where(img != 0)\n bbox = np.min(a[0]), np.max(a[0]), np.min(a[1]), np.max(a[1])\n return bbox\n\ndef bbox2(img):\n rows = np.any(img, axis=1)\n cols = np.any(img, axis=0)\n rmin, rmax = np.where(rows)[0][[0, -1]]\n cmin, cmax = np.where(cols)[0][[0, -1]]\n\n return rmin, rmax, cmin, cmax\n```\nSome benchmarks: \n```\n%timeit bbox1(img2)\n10000 loops, best of 3: 63.5 \u00b5s per loop\n\n%timeit bbox2(img2)\n10000 loops, best of 3: 37.1 \u00b5s per loop\n```\nExtending this approach to the 3D case just involves performing the reduction along each pair of axes: \n```\ndef bbox2_3D(img):\n\n r = np.any(img, axis=(1, 2))\n c = np.any(img, axis=(0, 2))\n z = np.any(img, axis=(0, 1))\n\n rmin, rmax = np.where(r)[0][[0, -1]]\n cmin, cmax = np.where(c)[0][[0, -1]]\n zmin, zmax = np.where(z)[0][[0, -1]]\n\n return rmin, rmax, cmin, cmax, zmin, zmax\n```\nIt's easy to generalize this to N dimensions by using itertools.combinations to iterate over each unique combination of axes to perform the reduction over: \n```\nimport itertools\n\ndef bbox2_ND(img):\n N = img.ndim\n out = []\n for ax in itertools.combinations(reversed(range(N)), N - 1):\n nonzero = np.any(img, axis=ax)\n out.extend(np.where(nonzero)[0][[0, -1]])\n return tuple(out)\n```\nIf you know the coordinates of the corners of the original bounding box, the angle of rotation, and the centre of rotation, you could get the coordinates of the transformed bounding box corners directly by computing the corresponding affine transformation matrix and dotting it with the input coordinates: \n```\ndef bbox_rotate(bbox_in, angle, centre):\n\n rmin, rmax, cmin, cmax = 
bbox_in\n\n # bounding box corners in homogeneous coordinates\n xyz_in = np.array(([[cmin, cmin, cmax, cmax],\n [rmin, rmax, rmin, rmax],\n [ 1, 1, 1, 1]]))\n\n # translate centre to origin\n cr, cc = centre\n cent2ori = np.eye(3)\n cent2ori[:2, 2] = -cr, -cc\n\n # rotate about the origin\n theta = np.deg2rad(angle)\n rmat = np.eye(3)\n rmat[:2, :2] = np.array([[ np.cos(theta),-np.sin(theta)],\n [ np.sin(theta), np.cos(theta)]])\n\n # translate from origin back to centre\n ori2cent = np.eye(3)\n ori2cent[:2, 2] = cr, cc\n\n # combine transformations (rightmost matrix is applied first)\n xyz_out = ori2cent.dot(rmat).dot(cent2ori).dot(xyz_in)\n\n r, c = xyz_out[:2]\n\n rmin = int(r.min())\n rmax = int(r.max())\n cmin = int(c.min())\n cmax = int(c.max())\n\n return rmin, rmax, cmin, cmax\n```\nThis works out to be very slightly faster than using np.any for your small example array: \n```\n%timeit bbox_rotate([25, 75, 25, 75], 45, (50, 50))\n10000 loops, best of 3: 33 \u00b5s per loop\n```\nHowever, since the speed of this method is independent of the size of the input array, it can be quite a lot faster for larger arrays. Extending the transformation approach to 3D is slightly more complicated, in that the rotation now has three different components (one about the x-axis, one about the y-axis and one about the z-axis), but the basic method is the same: \n```\ndef bbox_rotate_3d(bbox_in, angle_x, angle_y, angle_z, centre):\n\n rmin, rmax, cmin, cmax, zmin, zmax = bbox_in\n\n # bounding box corners in homogeneous coordinates\n xyzu_in = np.array(([[cmin, cmin, cmin, cmin, cmax, cmax, cmax, cmax],\n [rmin, rmin, rmax, rmax, rmin, rmin, rmax, rmax],\n [zmin, zmax, zmin, zmax, zmin, zmax, zmin, zmax],\n [ 1, 1, 1, 1, 1, 1, 1, 1]]))\n\n # translate centre to origin\n cr, cc, cz = centre\n cent2ori = np.eye(4)\n cent2ori[:3, 3] = -cr, -cc, -cz\n\n # rotation about the x-axis\n theta = np.deg2rad(angle_x)\n rmat_x = np.eye(4)\n rmat_x[1:3, 1:3] = np.array([[ np.cos(theta),-np.sin(theta)],\n [ np.sin(theta), np.cos(theta)]])\n\n # rotation about the y-axis\n theta = np.deg2rad(angle_y)\n rmat_y = np.eye(4)\n rmat_y[[0, 0, 2, 2], [0, 2, 0, 2]] = (\n np.cos(theta), np.sin(theta), -np.sin(theta), np.cos(theta))\n\n # rotation about the z-axis\n theta = np.deg2rad(angle_z)\n rmat_z = np.eye(4)\n rmat_z[:2, :2] = np.array([[ np.cos(theta),-np.sin(theta)],\n [ np.sin(theta), np.cos(theta)]])\n\n # translate from origin back to centre\n ori2cent = np.eye(4)\n ori2cent[:3, 3] = cr, cc, cz\n\n # combine transformations (rightmost matrix is applied first)\n tform = ori2cent.dot(rmat_z).dot(rmat_y).dot(rmat_x).dot(cent2ori)\n xyzu_out = tform.dot(xyzu_in)\n\n r, c, z = xyzu_out[:3]\n\n rmin = int(r.min())\n rmax = int(r.max())\n cmin = int(c.min())\n cmax = int(c.max())\n zmin = int(z.min())\n zmax = int(z.max())\n\n return rmin, rmax, cmin, cmax, zmin, zmax\n```\nI've essentially just modified the function above using the rotation matrix expressions from here - I haven't had time to write a test-case yet, so use with caution.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/31400769\/bounding-box-of-numpy-array","best_answers_votes":104,"tags":["python","arrays","numpy","transformation"],"question_length":677,"response_length":5251,"tag_count":4} {"question":"Saving numpy array to txt file row wise I have a numpy array of the form \n```\na = [1,2,3]\n```\nwhich I want to save to a .txt file such that the file looks like: \n```\n1 2 3\n```\nIf I use
numpy.savetxt then I get a file like: \n```\n1\n2\n3\n```\nThere should be a easy solution to this I suppose, any suggestions?","response":"If numpy >= 1.5, you can do: # note that the filename is enclosed with double quotes, # example \"filename.txt\" \n```\nnumpy.savetxt(\"filename\", a, newline=\" \")\n```\nEdit several 1D arrays with same length \n```\na = numpy.array([1,2,3])\nb = numpy.array([4,5,6])\nnumpy.savetxt(filename, (a,b), fmt=\"%d\")\n\n# gives:\n# 1 2 3\n# 4 5 6\n```\nseveral 1D arrays with variable length \n```\na = numpy.array([1,2,3])\nb = numpy.array([4,5])\n\nwith open(filename,\"w\") as f:\n f.write(\"\\n\".join(\" \".join(map(str, x)) for x in (a,b)))\n\n# gives:\n# 1 2 3\n# 4 5\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9565426\/saving-numpy-array-to-txt-file-row-wise","best_answers_votes":54,"tags":["python","numpy","save"],"question_length":302,"response_length":539,"tag_count":3} {"question":"python dict to numpy structured array I have a dictionary that I need to convert to a NumPy structured array. I'm using the arcpy function NumPyArraytoTable, so a NumPy structured array is the only data format that will work. Based on this thread: Writing to numpy array from dictionary and this thread: How to convert Python dictionary object to numpy array I've tried this: \n```\nresult = {0: 1.1181753789488595, 1: 0.5566080288678394, 2: 0.4718269778030734, 3: 0.48716683119447185, 4: 1.0, 5: 0.1395076201641266, 6: 0.20941558441558442}\n\nnames = ['id','data']\nformats = ['f8','f8']\ndtype = dict(names = names, formats=formats)\narray=numpy.array([[key,val] for (key,val) in result.iteritems()],dtype)\n```\nBut I keep getting expected a readable buffer object The method below works, but is stupid and obviously won't work for real data. I know there is a more graceful approach, I just can't figure it out. \n```\ntotable = numpy.array([[key,val] for (key,val) in result.iteritems()])\narray=numpy.array([(totable[0,0],totable[0,1]),(totable[1,0],totable[1,1])],dtype)\n```","response":"You could use np.array(list(result.items()), dtype=dtype): \n```\nimport numpy as np\nresult = {0: 1.1181753789488595, 1: 0.5566080288678394, 2: 0.4718269778030734, 3: 0.48716683119447185, 4: 1.0, 5: 0.1395076201641266, 6: 0.20941558441558442}\n\nnames = ['id','data']\nformats = ['f8','f8']\ndtype = dict(names = names, formats=formats)\narray = np.array(list(result.items()), dtype=dtype)\n\nprint(repr(array))\n```\nyields \n```\narray([(0.0, 1.1181753789488595), (1.0, 0.5566080288678394),\n (2.0, 0.4718269778030734), (3.0, 0.48716683119447185), (4.0, 1.0),\n (5.0, 0.1395076201641266), (6.0, 0.20941558441558442)], \n dtype=[('id', '\nDatetimeIndex: 241 entries, 2012-04-05 08:00:00 to 2012-04-05 12:00:00\nFreq: T\nData columns:\nradiation 241 non-null values\ntamb 241 non-null values\ndtypes: float64(2)\n```\nHow can I down-sample this dataframe to a resolution of one hour, computing the hourly mean for the temperature and the hourly sum for radiation?","response":"With pandas 0.18 the resample API changed (see the docs). 
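In the new API, .resample() by itself just returns a deferred Resampler object; you pick the aggregation afterwards (a quick sketch): \n```\nr = frame.resample('1H') # a Resampler; nothing is computed yet\nr.mean() # or r.sum(), or r.agg({...}) per column\n```\n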
So for pandas >= 0.18 the answer is: \n```\n\n```python\nframe.resample('1H').agg({'radiation': np.sum, 'tamb': np.mean})\n#Output\n#tamb radiation\n#2012-04-05 08:00:00 5.161235 279.507182\n#2012-04-05 09:00:00 4.968145 290.941073\n#2012-04-05 10:00:00 4.478531 317.678285\n#2012-04-05 11:00:00 4.706206 335.258633\n#2012-04-05 12:00:00 2.457873 8.655838\n#```\n#Old Answer: I am answering my question to reflect the time series related changes in pandas >= 0.8 (all other answers are outdated). Using pandas >= 0.8 the answer is: \n#```\n```\n\n```python\nframe.resample('1H', how={'radiation': np.sum, 'tamb': np.mean})\n#Output\n#tamb radiation\n#2012-04-05 08:00:00 5.161235 279.507182\n#2012-04-05 09:00:00 4.968145 290.941073\n#2012-04-05 10:00:00 4.478531 317.678285\n#2012-04-05 11:00:00 4.706206 335.258633\n#2012-04-05 12:00:00 2.457873 8.655838\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10020591\/how-to-resample-a-dataframe-with-different-functions-applied-to-each-column","best_answers_votes":76,"tags":["python","numpy","time-series","pandas"],"question_length":1012,"response_length":952,"tag_count":4} {"question":"How to generate a random normal distribution of integers How to generate a random integer as with np.random.randint(), but with a normal distribution around 0. np.random.randint(-10, 10) returns integers with a discrete uniform distribution np.random.normal(0, 0.1, 1) returns floats with a normal distribution What I want is a kind of combination between the two functions.","response":"One other way to get a discrete distribution that looks like the normal distribution is to draw from a multinomial distribution where the probabilities are calculated from a normal distribution. \n```\nimport scipy.stats as ss\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(-10, 11)\nxU, xL = x + 0.5, x - 0.5 \nprob = ss.norm.cdf(xU, scale = 3) - ss.norm.cdf(xL, scale = 3)\nprob = prob \/ prob.sum() # normalize the probabilities so their sum is 1\nnums = np.random.choice(x, size = 10000, p = prob)\nplt.hist(nums, bins = len(x))\n```\nHere, np.random.choice picks an integer from [-10, 10]. The probability for selecting an element, say 0, is calculated by p(-0.5 < x < 0.5) where x is a normal random variable with mean zero and standard deviation 3. I chose a std. dev. of 3 because this way p(-10 < x < 10) is almost 1. The result looks like this:","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37411633\/how-to-generate-a-random-normal-distribution-of-integers","best_answers_votes":44,"tags":["python","numpy","random"],"question_length":374,"response_length":863,"tag_count":3} {"question":"Missing data, insert rows in Pandas and fill with NAN I'm new to Python and Pandas so there might be a simple solution which I don't see. I have a number of discontinuous datasets which look like this: \n```\nind A B C \n0 0.0 1 3 \n1 0.5 4 2 \n2 1.0 6 1 \n3 3.5 2 0 \n4 4.0 4 5 \n5 4.5 3 3\n```\nI now look for a solution to get the following: \n```\nind A B C \n0 0.0 1 3 \n1 0.5 4 2 \n2 1.0 6 1 \n3 1.5 NAN NAN \n4 2.0 NAN NAN \n5 2.5 NAN NAN \n6 3.0 NAN NAN \n7 3.5 2 0 \n8 4.0 4 5 \n9 4.5 3 3\n```\nThe problem is that the gap in A varies from dataset to dataset in position and length...","response":"set_index and reset_index are your friends.
\n```\ndf = DataFrame({\"A\":[0,0.5,1.0,3.5,4.0,4.5], \"B\":[1,4,6,2,4,3], \"C\":[3,2,1,0,5,3]})\n```\nFirst move column A to the index: \n```\n\n```python\ndf.set_index(\"A\")\n#Output\n#B C\n# A \n#0.0 1 3\n#0.5 4 2\n#1.0 6 1\n#3.5 2 0\n#4.0 4 5\n#4.5 3 3\n#```\n#Then reindex with a new index; here the missing data is filled in with nans. We use the Index object since we can name it; this will be used in the next step. \n#```\n```\n\n```python\nnew_index = Index(arange(0,5,0.5), name=\"A\")\n```\n\n```python\ndf.set_index(\"A\").reindex(new_index)\n#Output\n#B C\n#0.0 1 3\n#0.5 4 2\n#1.0 6 1\n#1.5 NaN NaN\n#2.0 NaN NaN\n#2.5 NaN NaN\n#3.0 NaN NaN\n#3.5 2 0\n#4.0 4 5\n#4.5 3 3\n#```\n#Finally move the index back to the columns with reset_index. Since we named the index, it all works magically: \n#```\n```\n\n```python\ndf.set_index(\"A\").reindex(new_index).reset_index()\n#Output\n#A B C\n#0 0.0 1 3\n#1 0.5 4 2\n#2 1.0 6 1\n#3 1.5 NaN NaN\n#4 2.0 NaN NaN\n#5 2.5 NaN NaN\n#6 3.0 NaN NaN\n#7 3.5 2 0\n#8 4.0 4 5\n#9 4.5 3 3\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25909984\/missing-data-insert-rows-in-pandas-and-fill-with-nan","best_answers_votes":64,"tags":["python","numpy","pandas"],"question_length":1058,"response_length":1084,"tag_count":3} {"question":"Cython: cimport and import numpy as (both) np In the tutorial of the Cython documentation, there are cimport and import statements of the numpy module: \n```\nimport numpy as np\ncimport numpy as np\n```\nI found that this convention is quite popular among numpy\/cython users. This looks strange to me because they are both named np. In which parts of the code are the imported\/cimported np used? Why does the Cython compiler not confuse them?","response":"cimport my_module gives access to C functions or attributes or even sub-modules under my_module import my_module gives access to Python functions or attributes or sub-modules under my_module. In your case: \n```\ncimport numpy as np\n```\ngives you access to the NumPy C API, where you can declare array buffers, variable types and so on... And: \n```\nimport numpy as np\n```\ngives you access to NumPy-Python functions, such as np.array, np.linspace, etc. Cython internally handles this ambiguity so that the user does not need to use different names.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20268228\/cython-cimport-and-import-numpy-as-both-np","best_answers_votes":54,"tags":["python","numpy","cython","python-import"],"question_length":425,"response_length":540,"tag_count":4} {"question":"Element-wise string concatenation in numpy Is this a bug? \n```\nimport numpy as np\na1=np.array(['a','b'])\na2=np.array(['E','F'])\n\n```python\nadd(a1,a2)\n#Output\n#NotImplemented\n#```\n#I am trying to do element-wise string concatenation. I thought Add() was the way to do it in numpy but obviously it is not working as expected.\n```","response":"This can be done using numpy.char.add. Here is an example: \n```\n\n```python\nimport numpy as np\n```\n\n```python\na1 = np.array(['a', 'b'])\n```\n\n```python\na2 = np.array(['E', 'F'])\n```\n\n```python\nnp.char.add(a1, a2)\n#Output\n#array(['aE', 'bF'], \n# dtype='
It seems like there's got to be a numpy trick to remove the loop, but I can't seem to find anything that will work. The alternative will be to write this as a C extension, but it seems like I'd be reinventing the wheel.","response":"numpy.maximum.accumulate works for me. \n```\n\n```python\nimport numpy\n```\n\n```python\nnumpy.maximum.accumulate(numpy.array([11,12,13,20,19,18,17,18,23,21]))\n#Output\n#array([11, 12, 13, 20, 20, 20, 20, 20, 23, 23])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7251421\/running-maximum-of-numpy-array-values","best_answers_votes":83,"tags":["python","numpy"],"question_length":802,"response_length":187,"tag_count":2} {"question":"Copy numpy array into part of another array If I run the following: \n```\nimport numpy as np\na = np.arange(9)\na = a.reshape((3,3))\n```\nI will get this: \n```\na = [[0 1 2]\n [3 4 5]\n [6 7 8]]\n```\nIf I create a larger array like this: \n```\nb = np.zeros((5,5))\nb = [[ 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0.]]\n```\nHow do I efficiently copy a into b to get an array like this? \n```\n# border of 0 surrounding a to be filled in with other data later\nb = [[ 0. 0. 0. 0. 0.]\n [ 0. 0. 1. 2. 0.]\n [ 0. 3. 4. 5. 0.]\n [ 0. 6. 7. 8. 0.]\n [ 0. 0. 0. 0. 0.]]\n```\nI am looking for a function built into numpy if it exists.","response":"You can specify b[1:4, 1:4] to denote the part: \n```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.arange(9)\n```\n\n```python\na = a.reshape((3, 3))\n```\n\n```python\nb = np.zeros((5, 5))\n```\n\n```python\nb[1:4, 1:4] = a\n```\n\n```python\nb\n#Output\n#array([[ 0., 0., 0., 0., 0.],\n# [ 0., 0., 1., 2., 0.],\n# [ 0., 3., 4., 5., 0.],\n# [ 0., 6., 7., 8., 0.],\n# [ 0., 0., 0., 0., 0.]])\n```\n\n```python\nb[1:4,1:4] = a + 1 # If you really meant `[1, 2, ..., 9]`\n```\n\n```python\nb\n#Output\n#array([[ 0., 0., 0., 0., 0.],\n# [ 0., 1., 2., 3., 0.],\n# [ 0., 4., 5., 6., 0.],\n# [ 0., 7., 8., 9., 0.],\n# [ 0., 0., 0., 0., 0.]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40690248\/copy-numpy-array-into-part-of-another-array","best_answers_votes":56,"tags":["python","arrays","numpy"],"question_length":734,"response_length":589,"tag_count":3} {"question":"Numpy list of 1D Arrays to 2D Array I have a large list files that contain 2D numpy arrays pickled through numpy.save. I am trying to read the first column of each file and create a new 2D array. I currently read each column using numpy.load with a mmap. The 1D arrays are now in a list. \n```\ncol_list = []\nfor f in file_list:\n Temp = np.load(f,mmap_mode='r')\n col_list.append(Temp[:,0])\n```\nHow can I convert this into a 2D array?","response":"You can use \n```\nnumpy.stack(arrays, axis=0)\n```\nif you have an array of arrays. You can specify the axis in case you want to stack columns and not rows.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21322564\/numpy-list-of-1d-arrays-to-2d-array","best_answers_votes":73,"tags":["python","arrays","numpy"],"question_length":437,"response_length":153,"tag_count":3} {"question":"Minimum Euclidean distance between points in two different Numpy arrays, not within I have two arrays of x-y coordinates, and I would like to find the minimum Euclidean distance between each point in one array with all the points in the other array. 
The arrays are not necessarily the same size. For example: \n```\nxy1=numpy.array(\n[[ 243, 3173],\n[ 525, 2997]])\n\nxy2=numpy.array(\n[[ 682, 2644],\n[ 277, 2651],\n[ 396, 2640]])\n```\nMy current method loops through each coordinate xy in xy1 and calculates the distances between that coordinate and the other coordinates. \n```\nmindist=numpy.zeros(len(xy1))\nminid=numpy.zeros(len(xy1))\n\nfor i,xy in enumerate(xy1):\n dists=numpy.sqrt(numpy.sum((xy-xy2)**2,axis=1))\n mindist[i],minid[i]=dists.min(),dists.argmin()\n```\nIs there a way to eliminate the for loop and somehow do element-by-element calculations between the two arrays? I envision generating a distance matrix for which I could find the minimum element in each row or column. Another way to look at the problem: say I concatenate xy1 (length m) and xy2 (length p) into xy (length n), and I store the lengths of the original arrays. Theoretically, I should then be able to generate an n x n distance matrix from those coordinates from which I can grab an m x p submatrix. Is there a way to efficiently generate this submatrix?","response":"(Months later) scipy.spatial.distance.cdist( X, Y ) gives all pairs of distances, for X and Y 2 dim, 3 dim ... It also does 22 different norms, detailed here. \n```\n# cdist example: (nx,dim) (ny,dim) -> (nx,ny)\n\nfrom __future__ import division\nimport sys\nimport numpy as np\nfrom scipy.spatial.distance import cdist\n\n#...............................................................................\ndim = 10\nnx = 1000\nny = 100\nmetric = \"euclidean\"\nseed = 1\n\n # change these params in sh or ipython: run this.py dim=3 ...\nfor arg in sys.argv[1:]:\n exec( arg )\nnp.random.seed(seed)\nnp.set_printoptions( 2, threshold=100, edgeitems=10, suppress=True )\n\ntitle = \"%s dim %d nx %d ny %d metric %s\" % (\n __file__, dim, nx, ny, metric )\nprint \"\\n\", title\n\n#...............................................................................\nX = np.random.uniform( 0, 1, size=(nx,dim) )\nY = np.random.uniform( 0, 1, size=(ny,dim) )\ndist = cdist( X, Y, metric=metric ) # -> (nx, ny) distances\n#...............................................................................\n\nprint \"scipy.spatial.distance.cdist: X %s Y %s -> %s\" % (\n X.shape, Y.shape, dist.shape )\nprint \"dist average %.3g +- %.2g\" % (dist.mean(), dist.std())\nprint \"check: dist[0,3] %.3g == cdist( [X[0]], [Y[3]] ) %.3g\" % (\n dist[0,3], cdist( [X[0]], [Y[3]] ))\n\n\n# (trivia: how do pairwise distances between uniform-random points in the unit cube\n# depend on the metric ? With the right scaling, not much at all:\n# L1 \/ dim ~ .33 +- .2\/sqrt dim\n# L2 \/ sqrt dim ~ .4 +- .2\/sqrt dim\n# Lmax \/ 2 ~ .4 +- .2\/sqrt dim\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1871536\/minimum-euclidean-distance-between-points-in-two-different-numpy-arrays-not-wit","best_answers_votes":47,"tags":["python","numpy","euclidean-distance"],"question_length":1334,"response_length":1610,"tag_count":3} {"question":"Numpy Vector (N,1) dimension -> (N,) dimension conversion I have a question regarding the conversion between (N,) dimension arrays and (N,1) dimension arrays. For example, y is (2,) dimension. \n```\nA=np.array([[1,2],[3,4]])\n\nx=np.array([1,2])\n\ny=np.dot(A,x)\n\ny.shape\nOut[6]: (2,)\n```\nBut the following will show y2 to be (2,1) dimension.
\n```\nx2=x[:,np.newaxis]\n\ny2=np.dot(A,x2)\n\ny2.shape\nOut[14]: (2, 1)\n```\nWhat would be the most efficient way of converting y2 back to y without copying? Thanks, Tom","response":"reshape works for this \n```\na = np.arange(3) # a.shape = (3,)\nb = a.reshape((3,1)) # b.shape = (3,1)\nb2 = a.reshape((-1,1)) # b2.shape = (3,1)\nc = b.reshape((3,)) # c.shape = (3,)\nc2 = b.reshape((-1,)) # c2.shape = (3,)\n```\nnote also that reshape doesn't copy the data unless it needs to for the new shape (which it doesn't need to do here): \n```\na.__array_interface__['data'] # (22356720, False)\nb.__array_interface__['data'] # (22356720, False)\nc.__array_interface__['data'] # (22356720, False)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17869840\/numpy-vector-n-1-dimension-n-dimension-conversion","best_answers_votes":60,"tags":["python","arrays","numpy"],"question_length":501,"response_length":531,"tag_count":3} {"question":"What does [:, :] mean on NumPy arrays Sorry for the stupid question. I'm programming in PHP but found some nice code in Python and want to \"recreate\" it in PHP. But I'm quite frustrated about the line: \n```\nself.h = -0.1 \nself.activity = numpy.zeros((512, 512)) + self.h\nself.activity[:, :] = self.h\n```\nI don't understand what [:, :] means. I couldn't find an answer by googling it. Full code \n```\nimport math\nimport numpy\nimport pygame\nfrom scipy.misc import imsave\nfrom scipy.ndimage.filters import gaussian_filter\n\n\nclass AmariModel(object):\n\n def __init__(self, size):\n self.h = -0.1\n self.k = 0.05\n self.K = 0.125\n self.m = 0.025\n self.M = 0.065\n\n self.stimulus = -self.h * numpy.random.random(size)\n self.activity = numpy.zeros(size) + self.h\n self.excitement = numpy.zeros(size)\n self.inhibition = numpy.zeros(size)\n\n def stimulate(self):\n self.activity[:, :] = self.activity > 0\n\n sigma = 1 \/ math.sqrt(2 * self.k)\n gaussian_filter(self.activity, sigma, 0, self.excitement, \"wrap\")\n self.excitement *= self.K * math.pi \/ self.k\n\n sigma = 1 \/ math.sqrt(2 * self.m)\n gaussian_filter(self.activity, sigma, 0, self.inhibition, \"wrap\")\n self.inhibition *= self.M * math.pi \/ self.m\n\n self.activity[:, :] = self.h\n self.activity[:, :] += self.excitement\n self.activity[:, :] -= self.inhibition\n self.activity[:, :] += self.stimulus\n\n\nclass AmariMazeGenerator(object):\n\n def __init__(self, size):\n self.model = AmariModel(size)\n\n pygame.init()\n self.display = pygame.display.set_mode(size, 0)\n pygame.display.set_caption(\"Amari Maze Generator\")\n\n def run(self):\n pixels = pygame.surfarray.pixels3d(self.display)\n\n index = 0\n running = True\n while running:\n self.model.stimulate()\n\n pixels[:, :, :] = (255 * (self.model.activity > 0))[:, :, None]\n pygame.display.flip()\n\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n running = False\n elif event.type == pygame.KEYDOWN:\n if event.key == pygame.K_ESCAPE:\n running = False\n elif event.key == pygame.K_s:\n imsave(\"{0:04d}.png\".format(index), pixels[:, :, 0])\n index = index + 1\n elif event.type == pygame.MOUSEBUTTONDOWN:\n position = pygame.mouse.get_pos()\n self.model.activity[position] = 1\n\n pygame.quit()\n\n\ndef main():\n generator = AmariMazeGenerator((512, 512))\n generator.run()\n\n\nif __name__ == \"__main__\":\n main()\n```","response":"The [:, :] stands for everything from the beginning to the end just like for lists. 
The difference is that the first : stands for the first dimension and the second : for the second dimension. \n```\na = numpy.zeros((3, 3))\n\n```python\na\n#Output\n#array([[ 0., 0., 0.],\n# [ 0., 0., 0.],\n# [ 0., 0., 0.]])\n#```\n#Assigning to second row: \n#```\n```\n\n```python\na[1, :] = 3\n```\n\n```python\na\n#Output\n#array([[ 0., 0., 0.],\n# [ 3., 3., 3.],\n# [ 0., 0., 0.]])\n#```\n#Assigning to second column: \n#```\n```\n\n```python\na[:, 1] = 4\n```\n\n```python\na\n#Output\n#array([[ 0., 4., 0.],\n# [ 3., 4., 3.],\n# [ 0., 4., 0.]])\n#```\n#Assigning to all: \n#```\n```\n\n```python\na[:] = 10\n```\n\n```python\na\n#Output\n#array([[ 10., 10., 10.],\n# [ 10., 10., 10.],\n# [ 10., 10., 10.]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16815928\/what-does-mean-on-numpy-arrays","best_answers_votes":54,"tags":["python","arrays","numpy","matrix-indexing"],"question_length":2770,"response_length":770,"tag_count":4} {"question":"Convert python list with None values to numpy array with nan values I am trying to convert a list that contains numeric values and None values to numpy.array, such that None is replaced with numpy.nan. For example: \n```\nmy_list = [3,5,6,None,6,None]\n\n# My desired result: \nmy_array = numpy.array([3,5,6,np.nan,6,np.nan])\n```\nNaive approach fails: \n```\n\n```python\nmy_list\n#Output\n#[3, 5, 6, None, 6, None]\n```\n\n```python\nnp.array(my_list)\n#Output\n#array([3, 5, 6, None, 6, None], dtype=object) # very limited \n```\n\n```python\n_ * 2\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'\n```\n\n```python\nmy_array # normal array can handle these operations\n#Output\n#array([ 3., 5., 6., nan, 6., nan])\n```\n\n```python\nmy_array * 2\n#Output\n#array([ 6., 10., 12., nan, 12., nan])\n#```\n#What is the best way to solve this problem?\n```","response":"You simply have to explicitly declare the data type: \n```\n\n```python\nmy_list = [3, 5, 6, None, 6, None]\n```\n\n```python\nnp.array(my_list, dtype=float)\n#Output\n#array([ 3., 5., 6., nan, 6., nan])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19456239\/convert-python-list-with-none-values-to-numpy-array-with-nan-values","best_answers_votes":67,"tags":["python","numpy"],"question_length":832,"response_length":179,"tag_count":2} {"question":"`AttributeError: rint` when using numpy.round I have a numpy array that looks like this: \n```\n[[41.743617 -87.626839]\n [41.936943 -87.669838]\n [41.962665 -87.65571899999999]]\n```\nI want to round the numbers in the array to two decimal places, or three. I tried using numpy.around and numpy.round, but both of them give me the following error: \n```\nFile \"\/Library\/Python\/2.7\/site-packages\/numpy-1.8.0.dev_3084618_20130514-py2.7-macosx-10.8-intel.egg\/numpy\/core\/fromnumeric.py\", line 2452, in round_\n return round(decimals, out)\nAttributeError: rint\n```\nI used numpy.around(x, decimals = 2) and numpy.round(x,decimals=2) Am I doing something wrong?
Is there any other way to do this efficiently for a large array?","response":"You cannot round numpy arrays that are objects; this can be changed with astype as long as your array can be safely converted to floats: \n```\n\n```python\na = np.random.rand(5).astype(np.object)\n```\n\n```python\na\n#Output\n#array([0.5137250555772075, 0.4279757819721647, 0.4177118178603122,\n# 0.6270676923544128, 0.43733218329094947], dtype=object)\n```\n\n```python\nnp.around(a,3)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n# File \"\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/site-packages\/numpy\/core\/fromnumeric.py\", line 2384, in around\n# return round(decimals, out)\n#AttributeError: rint\n```\n\n```python\nnp.around(a.astype(np.double),3)\n#Output\n#array([ 0.514, 0.428, 0.418, 0.627, 0.437])\n#```\n#You will receive similar errors with string, unicode, void, and char type arrays.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19387608\/attributeerror-rint-when-using-numpy-round","best_answers_votes":68,"tags":["python","arrays","numpy"],"question_length":714,"response_length":774,"tag_count":3} {"question":"Can't call strftime on numpy.datetime64, no definition I have a datetime64 t that I'd like to represent as a string. When I call strftime like this t.strftime('%Y.%m.%d') I get this error: \n```\nAttributeError: 'numpy.datetime64' object has no attribute 'strftime'\n```\nWhat am I missing? I am using Python 3.4.2 and Numpy 1.9.1","response":"Importing a data structures library like pandas to accomplish type conversion feels like overkill to me. You can achieve the same thing with the standard datetime module: \n```\nimport numpy as np\nimport datetime\nt = np.datetime64('2017-10-26')\nt = t.astype(datetime.datetime)\ntimestring = t.strftime('%Y.%m.%d')\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28327101\/cant-call-strftime-on-numpy-datetime64-no-definition","best_answers_votes":56,"tags":["python","numpy"],"question_length":326,"response_length":314,"tag_count":2} {"question":"Plotting power spectrum in python I have an array with 301 values, which were gathered from a movie clip with 301 frames. This means 1 value from 1 frame. The movie clip is running at 30 fps, so it is in fact 10 sec long. Now I would like to get the power spectrum of this \"signal\" (with the right axis). I tried: \n```\nX = fft(S_[:,2]);\n pl.plot(abs(X))\n pl.show()\n```\nI also tried: \n```\nX = fft(S_[:,2]);\n pl.plot(abs(X)**2)\n pl.show()\n```\nThough I don't think this is the real spectrum. The signal: The spectrum: The power spectrum: Can anyone provide some help with this? I would like to have a plot in Hz.","response":"Numpy has a convenience function, np.fft.fftfreq, to compute the frequencies associated with FFT components: \n```\nfrom __future__ import division\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.random.rand(301) - 0.5\nps = np.abs(np.fft.fft(data))**2\n\ntime_step = 1 \/ 30\nfreqs = np.fft.fftfreq(data.size, time_step)\nidx = np.argsort(freqs)\n\nplt.plot(freqs[idx], ps[idx])\n```\nNote that the largest frequency you see in your case is not 30 Hz, but \n```\n\n```python\nmax(freqs)\n#Output\n#14.950166112956811\n#```\n#You never see the sampling frequency in a power spectrum.
If you had had an even number of samples, then you would have reached the Nyquist frequency, 15 Hz in your case (although numpy would have calculated it as -15).\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15382076\/plotting-power-spectrum-in-python","best_answers_votes":65,"tags":["python","numpy","scipy","signal-processing"],"question_length":608,"response_length":732,"tag_count":4} {"question":"binning a dataframe in pandas in Python [duplicate] This question already has answers here: Binning a column with pandas (4 answers) Closed 5 years ago. Given the following dataframe in pandas: \n```\nimport numpy as np\ndf = pandas.DataFrame({\"a\": np.random.random(100), \"b\": np.random.random(100), \"id\": np.arange(100)})\n```\nwhere id is an id for each point consisting of an a and b value, how can I bin a and b into a specified set of bins (so that I can then take the median\/average value of a and b in each bin)? df might have NaN values for a or b (or both) for any given row in df. Here's a better example using Joe Kington's solution with a more realistic df. The thing I'm unsure about is how to access the df.b elements for each df.a group below: \n```\na = np.random.random(20)\ndf = pandas.DataFrame({\"a\": a, \"b\": a + 10})\n# bins for df.a\nbins = np.linspace(0, 1, 10)\n# bin df according to a\ngroups = df.groupby(np.digitize(df.a,bins))\n# Get the mean of a in each group\nprint groups.mean()\n## But how to get the mean of b for each group of a?\n# ...\n```","response":"There may be a more efficient way (I have a feeling pandas.crosstab would be useful here), but here's how I'd do it: \n```\nimport numpy as np\nimport pandas\n\ndf = pandas.DataFrame({\"a\": np.random.random(100),\n \"b\": np.random.random(100),\n \"id\": np.arange(100)})\n\n# Bin the data frame by \"a\" with 10 bins...\nbins = np.linspace(df.a.min(), df.a.max(), 10)\ngroups = df.groupby(np.digitize(df.a, bins))\n\n# Get the mean of each bin:\nprint groups.mean() # Also could do \"groups.aggregate(np.mean)\"\n\n# Similarly, the median:\nprint groups.median()\n\n# Apply some arbitrary function to aggregate binned data\nprint groups.aggregate(lambda x: np.mean(x[x > 0.5]))\n```\nEdit: As the OP was asking specifically for just the means of b binned by the values in a, just do \n```\ngroups.mean().b\n```\nAlso if you wanted the index to look nicer (e.g. display intervals as the index), as they do in @bdiamante's example, use pandas.cut instead of numpy.digitize. (Kudos to bidamante. I didn't realize pandas.cut existed.) 
\n```\nimport numpy as np\nimport pandas\n\ndf = pandas.DataFrame({\"a\": np.random.random(100), \n \"b\": np.random.random(100) + 10})\n\n# Bin the data frame by \"a\" with 10 bins...\nbins = np.linspace(df.a.min(), df.a.max(), 10)\ngroups = df.groupby(pandas.cut(df.a, bins))\n\n# Get the mean of b, binned by the values in a\nprint groups.mean().b\n```\nThis results in: \n```\na\n(0.00186, 0.111] 10.421839\n(0.111, 0.22] 10.427540\n(0.22, 0.33] 10.538932\n(0.33, 0.439] 10.445085\n(0.439, 0.548] 10.313612\n(0.548, 0.658] 10.319387\n(0.658, 0.767] 10.367444\n(0.767, 0.876] 10.469655\n(0.876, 0.986] 10.571008\nName: b\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16947336\/binning-a-dataframe-in-pandas-in-python","best_answers_votes":64,"tags":["python","numpy","pandas"],"question_length":1058,"response_length":1704,"tag_count":3} {"question":"Image.fromarray just produces black image I'm trying to save a numpy matrix as a grayscale image using Image.fromarray. It seems to work on a random matrix, but not on a particular one (where there should appear a circle). Can anyone explain what I'm doing wrong? \n```\nfrom PIL import Image\nimport numpy as np\nradius = 0.5\nsize = 10\nx,y = np.meshgrid(np.linspace(-1,1,size),np.linspace(-1,1,size))\nf = np.vectorize(lambda x,y: ( 1.0 if x*x + y*y < radius*radius else 0.0))\nz = f(x,y)\nprint(z)\nzz = np.random.random((size,size))\nimg = Image.fromarray(zz,mode='L') #replace z with zz and it will just produce a black image\nimg.save('my_pic.png')\n```","response":"Image.fromarray is poorly defined with floating-point input; it's not well documented but the function assumes the input is laid-out as unsigned 8-bit integers. To produce the output you're trying to get, multiply by 255 and convert to uint8: \n```\nz = (z * 255).astype(np.uint8)\n```\nThe reason it seems to work with the random array is that the bytes in this array, when interpreted as unsigned 8-bit integers, also look random. But the output is not the same random array as the input, which you can check by doing the above conversion on the random input: \n```\nnp.random.seed(0)\nzz = np.random.rand(size, size)\nImage.fromarray(zz, mode='L').save('pic1.png')\n```\n```\nImage.fromarray((zz * 255).astype('uint8'), mode='L').save('pic2.png')\n```\nSince the issue doesn't seem to be reported anywhere, I reported it on github: https:\/\/github.com\/python-pillow\/Pillow\/issues\/2856","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47290668\/image-fromarray-just-produces-black-image","best_answers_votes":102,"tags":["python","numpy","python-imaging-library"],"question_length":647,"response_length":873,"tag_count":3} {"question":"What is :: (double colon) in numpy like in myarray[0::3]? [duplicate] This question already has answers here: Closed 14 years ago. Possible Duplicate: What is :: (double colon) in Python? I read the question What is :: (double colon) in Python when subscripting sequences?, but this not answer what myarray[x::y] mean.","response":"It prints every yth element from the list \/ array \n```\n\n```python\na = [1,2,3,4,5,6,7,8,9]\n```\n\n```python\na[::3]\n#Output\n#[1, 4, 7]\n#```\n#The additional syntax of a[x::y] means get every yth element starting at position x ie. 
\n#```\n```\n\n```python\na[2::3]\n#Output\n#[3, 6, 9]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7123888\/what-is-double-colon-in-numpy-like-in-myarray03","best_answers_votes":91,"tags":["python","syntax","numpy","slice"],"question_length":318,"response_length":226,"tag_count":4} {"question":"Resizing and stretching a NumPy array I am working in Python and I have a NumPy array like this: \n```\n[1,5,9]\n[2,7,3]\n[8,4,6]\n```\nHow do I stretch it to something like the following? \n```\n[1,1,5,5,9,9]\n[1,1,5,5,9,9]\n[2,2,7,7,3,3]\n[2,2,7,7,3,3]\n[8,8,4,4,6,6]\n[8,8,4,4,6,6]\n```\nThese are just some example arrays, I will actually be resizing several sizes of arrays, not just these. I'm new at this, and I just can't seem to wrap my head around what I need to do.","response":"@KennyTM's answer is very slick, and really works for your case but as an alternative that might offer a bit more flexibility for expanding arrays try np.repeat: \n```\n\n```python\na = np.array([[1, 5, 9],\n#Output\n# [2, 7, 3],\n# [8, 4, 6]])\n```\n\n```python\nnp.repeat(a,2, axis=1)\n#Output\n#array([[1, 1, 5, 5, 9, 9],\n# [2, 2, 7, 7, 3, 3],\n# [8, 8, 4, 4, 6, 6]])\n#```\n#So, this accomplishes repeating along one axis, to get it along multiple axes (as you might want), simply nest the np.repeat calls: \n#```\n```\n\n```python\nnp.repeat(np.repeat(a,2, axis=0), 2, axis=1)\n#Output\n#array([[1, 1, 5, 5, 9, 9],\n# [1, 1, 5, 5, 9, 9],\n# [2, 2, 7, 7, 3, 3],\n# [2, 2, 7, 7, 3, 3],\n# [8, 8, 4, 4, 6, 6],\n# [8, 8, 4, 4, 6, 6]])\n#```\n#You can also vary the number of repeats for any initial row or column. For example, if you wanted two repeats of each row aside from the last row: \n#```\n```\n\n```python\nnp.repeat(a, [2,2,1], axis=0)\n#Output\n#array([[1, 5, 9],\n# [1, 5, 9],\n# [2, 7, 3],\n# [2, 7, 3],\n# [8, 4, 6]])\n#```\n#Here when the second argument is a list it specifies a row-wise (rows in this case because axis=0) repeats for each row.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4226386\/resizing-and-stretching-a-numpy-array","best_answers_votes":50,"tags":["python","arrays","resize","numpy","stretch"],"question_length":461,"response_length":1115,"tag_count":5} {"question":"kalman 2d filter in python My input is 2d (x,y) time series of a dot moving on a screen for a tracker software. It has some noise I want to remove using Kalman filter. Does someone can point me for a python code for Kalman 2d filter? In scipy cookbook I found only a 1d example: http:\/\/www.scipy.org\/Cookbook\/KalmanFiltering I saw there is implementation for Kalman filter in OpenCV, but couldn't find code examples. Thanks!","response":"Here is my implementation of the Kalman filter based on the equations given on wikipedia. Please be aware that my understanding of Kalman filters is very rudimentary so there are most likely ways to improve this code. (For example, it suffers from the numerical instability problem discussed here. As I understand it, this only affects the numerical stability when Q, the motion noise, is very small. In real life, the noise is usually not small, so fortunately (at least for my implementation) in practice the numerical instability does not show up.) In the example below, kalman_xy assumes the state vector is a 4-tuple: 2 numbers for the location, and 2 numbers for the velocity. 
The F and H matrices have been defined specifically for this state vector: If x is a 4-tuple state, then \n```\nnew_x = F * x\nposition = H * x\n```\nIt then calls kalman, which is the generalized Kalman filter. It is general in the sense it is still useful if you wish to define a different state vector -- perhaps a 6-tuple representing location, velocity and acceleration. You just have to define the equations of motion by supplying the appropriate F and H. \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef kalman_xy(x, P, measurement, R,\n motion = np.matrix('0. 0. 0. 0.').T,\n Q = np.matrix(np.eye(4))):\n \"\"\"\n Parameters: \n x: initial state 4-tuple of location and velocity: (x0, x1, x0_dot, x1_dot)\n P: initial uncertainty convariance matrix\n measurement: observed position\n R: measurement noise \n motion: external motion added to state vector x\n Q: motion noise (same shape as P)\n \"\"\"\n return kalman(x, P, measurement, R, motion, Q,\n F = np.matrix('''\n 1. 0. 1. 0.;\n 0. 1. 0. 1.;\n 0. 0. 1. 0.;\n 0. 0. 0. 1.\n '''),\n H = np.matrix('''\n 1. 0. 0. 0.;\n 0. 1. 0. 0.'''))\n\ndef kalman(x, P, measurement, R, motion, Q, F, H):\n '''\n Parameters:\n x: initial state\n P: initial uncertainty convariance matrix\n measurement: observed position (same shape as H*x)\n R: measurement noise (same shape as H)\n motion: external motion added to state vector x\n Q: motion noise (same shape as P)\n F: next state function: x_prime = F*x\n H: measurement function: position = H*x\n\n Return: the updated and predicted new values for (x, P)\n\n See also http:\/\/en.wikipedia.org\/wiki\/Kalman_filter\n\n This version of kalman can be applied to many different situations by\n appropriately defining F and H \n '''\n # UPDATE x, P based on measurement m \n # distance between measured and current position-belief\n y = np.matrix(measurement).T - H * x\n S = H * P * H.T + R # residual convariance\n K = P * H.T * S.I # Kalman gain\n x = x + K*y\n I = np.matrix(np.eye(F.shape[0])) # identity matrix\n P = (I - K*H)*P\n\n # PREDICT x, P based on motion\n x = F*x + motion\n P = F*P*F.T + Q\n\n return x, P\n\ndef demo_kalman_xy():\n x = np.matrix('0. 0. 0. 
0.').T \n P = np.matrix(np.eye(4))*1000 # initial uncertainty\n\n N = 20\n true_x = np.linspace(0.0, 10.0, N)\n true_y = true_x**2\n observed_x = true_x + 0.05*np.random.random(N)*true_x\n observed_y = true_y + 0.05*np.random.random(N)*true_y\n plt.plot(observed_x, observed_y, 'ro')\n result = []\n R = 0.01**2\n for meas in zip(observed_x, observed_y):\n x, P = kalman_xy(x, P, meas, R)\n result.append((x[:2]).tolist())\n kalman_x, kalman_y = zip(*result)\n plt.plot(kalman_x, kalman_y, 'g-')\n plt.show()\n\ndemo_kalman_xy()\n```\nThe red dots show the noisy position measurements, the green line shows the Kalman predicted positions.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13901997\/kalman-2d-filter-in-python","best_answers_votes":67,"tags":["python","numpy","scipy","smoothing","kalman-filter"],"question_length":424,"response_length":3797,"tag_count":5} {"question":"How to implode(reverse of pandas explode) based on a column I have a dataframe df like below \n```\nNETWORK config_id APPLICABLE_DAYS Case Delivery \n0 Grocery 5399 SUN 10 1 \n1 Grocery 5399 MON 20 2 \n2 Grocery 5399 TUE 30 3 \n3 Grocery 5399 WED 40 4\n```\nI want to implode( combine Applicable_days from multiple rows into single row like below) and get the average case and delivery per config_id \n```\nNETWORK config_id APPLICABLE_DAYS Avg_Cases Avg_Delivery \n0 Grocery 5399 SUN,MON,TUE,WED 90 10\n```\nusing the groupby on network,config_id i can get the avg_cases and avg_delivery like below. \n```\ndf.groupby(['network','config_id']).agg({'case':'mean','delivery':'mean'})\n```\nBut How do i be able to join APPLICABLE_DAYS while performing this aggregation?","response":"If you want the \"opposite\" of explode, then that means bringing it into a list in Solution #1. You can also join as a string in Solution #2: Use lambda x: x.tolist() for the 'APPLICABLE_DAYS' column within your .agg groupby function: \n```\ndf = (df.groupby(['NETWORK','config_id'])\n .agg({'APPLICABLE_DAYS': lambda x: x.tolist(),'Case':'mean','Delivery':'mean'})\n .rename({'Case' : 'Avg_Cases','Delivery' : 'Avg_Delivery'},axis=1)\n .reset_index())\ndf\nOut[1]: \n NETWORK config_id APPLICABLE_DAYS Avg_Cases Avg_Delivery\n0 Grocery 5399 [SUN, MON, TUE, WED] 25 2.5\n```\nUse lambda x: \",\".join(x) for the 'APPLICABLE_DAYS' column within your .agg groupby function: \n```\ndf = (df.groupby(['NETWORK','config_id'])\n .agg({'APPLICABLE_DAYS': lambda x: \",\".join(x),'Case':'mean','Delivery':'mean'})\n .rename({'Case' : 'Avg_Cases','Delivery' : 'Avg_Delivery'},axis=1)\n .reset_index())\ndf\nOut[1]: \n NETWORK config_id APPLICABLE_DAYS Avg_Cases Avg_Delivery\n0 Grocery 5399 SUN,MON,TUE,WED 25 2.5\n```\nIf you are looking for the sum, then you can just change mean to sum for the Cases and Delivery columns.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/64235312\/how-to-implodereverse-of-pandas-explode-based-on-a-column","best_answers_votes":54,"tags":["python","pandas","numpy","pandas-explode"],"question_length":991,"response_length":1197,"tag_count":4} {"question":"difference between np.inf and float('Inf') Is there some difference between NumPy np.inf and float('Inf')? 
float('Inf') == np.inf returns True, so it seems they are interchangeable, thus I was wondering why NumPy has defined its own \"inf\" constant, and when should I use one constant instead of the other (considering style concerns too)?","response":"TL, DR: There is no difference and they can be used interchangeably. Besides having the same value as math.inf and float('inf'): \n```\n\n```python\nimport math\n```\n\n```python\nimport numpy as np\n```\n\n```python\nnp.inf == float('inf')\n#Output\n#True\n```\n\n```python\nnp.inf == math.inf\n#Output\n#True\n#```\n#It also has the same type: \n#```\n```\n\n```python\nimport numpy as np\n```\n\n```python\ntype(np.inf)\n#Output\n#float\n```\n\n```python\ntype(np.inf) is type(float('inf'))\n#Output\n#float\n#```\n#That's interesting because NumPy also has it's own floating point types: \n#```\n```\n\n```python\nnp.float32(np.inf)\n#Output\n#inf\n```\n\n```python\ntype(np.float32(np.inf))\n#Output\n#numpy.float32\n```\n\n```python\nnp.float32('inf') == np.inf # nevertheless equal\n#Output\n#True\n#```\n#So it has the same value and the same type as math.inf and float('inf') which means it's interchangeable. Reasons for using np.inf It's less to type: np.inf (6 chars) math.inf (8 chars; new in python 3.5) float('inf') (12 chars) That means if you already have NumPy imported you can save yourself 6 (or 2) chars per occurrence compared to float('inf') (or math.inf). Because it's easier to remember. At least for me, it's far easier to remember np.inf than that I need to call float with a string. Also, NumPy defines some additional aliases for infinity: \n#```\n#np.Inf\n# np.inf\n# np.infty\n# np.Infinity\n# np.PINF\n#```\n#It also defines an alias for negative infinity: \n#```\n#np.NINF\n#```\n#Similarly for nan: \n#```\n#np.nan\n# np.NaN\n# np.NAN\n#```\n#Constants are constants This point is based on CPython and could be completely different in another Python implementation. A float CPython instance requires 24 Bytes: \n#```\n```\n\n```python\nimport sys\n#Output\n# >>> sys.getsizeof(np.inf)\n# 24\n#```\n#If you can re-use the same instance you might save a lot of memory compared to creating lots of new instances. Of course, this point is mute if you create your own inf constant but if you don't then: \n#```\n#a = [np.inf for _ in range(1000000)]\n# b = [float('inf') for _ in range(1000000)]\n#```\n#b would use 24 * 1000000 Bytes (~23 MB) more memory than a. Accessing a constant is faster than creating the variable. \n#```\n#%timeit np.inf\n# 37.9 ns \u00b1 0.692 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000000 loops each)\n# %timeit float('inf')\n# 232 ns \u00b1 13.9 ns per loop (mean \u00b1 std. dev. of 7 runs, 1000000 loops each)\n```\n\n%timeit [np.inf for _ in range(10000)]\n 552 \u00b5s \u00b1 15.4 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n %timeit [float('inf') for _ in range(10000)]\n 2.59 ms \u00b1 78.7 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\n```\nOf course, you can create your own constant to counter that point. 
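Back on the memory point, you can confirm the sharing directly (a sketch; CPython-specific, like the sizes above): \n```\nimport numpy as np\n\nxs = [np.inf for _ in range(1000)]\nprint all(x is np.inf for x in xs) # True: every entry is the one shared float object\n```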
But why bother if NumPy already did that for you.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/42315541\/difference-between-np-inf-and-floatinf","best_answers_votes":57,"tags":["python","numpy"],"question_length":338,"response_length":2512,"tag_count":2} {"question":"NumPy save some arrays at once I am working with arrays of different shapes and I want to save them all with numpy.save, so, consider I have \n```\nmat1 = numpy.arange(8).reshape(4, 2)\nmat2 = numpy.arange(9).reshape(2, 3)\nnumpy.save('mat.npy', numpy.array([mat1, mat2]))\n```\nIt works. But when I have two matrices with one dimension of the same size, it's not working. \n```\nmat1 = numpy.arange(8).reshape(2, 4)\nmat2 = numpy.arange(10).reshape(2, 5)\nnumpy.save('mat.npy', numpy.array([mat1, mat2]))\n```\nIt causes Traceback (most recent call last): File \"\", line 1, in ValueError: could not broadcast input array from shape (2,4) into shape (2) And note that the problem is caused by numpy.array([mat1, mat2]) and not by numpy.save I know that such an array is possible: >> numpy.array([[[1, 2]], [[1, 2], [3, 4]]]) array([[[1, 2]], [[1, 2], [3, 4]]], dtype=object) So, all I want is to save the two arrays mat1 and mat2 at once.","response":"If you'd like to save multiple arrays in the same format as np.save, use np.savez. For example: \n```\nimport numpy as np\n\narr1 = np.arange(8).reshape(2, 4)\narr2 = np.arange(10).reshape(2, 5)\nnp.savez('mat.npz', name1=arr1, name2=arr2)\n\ndata = np.load('mat.npz')\nprint data['name1']\nprint data['name2']\n```\nIf you have several arrays, you can expand the arguments: \n```\nimport numpy as np\n\ndata = [np.arange(8).reshape(2, 4), np.arange(10).reshape(2, 5)]\nnp.savez('mat.npz', *data)\n\ncontainer = np.load('mat.npz')\ndata = [container[key] for key in container]\n```\nNote that the order is not preserved. If you do need to preserve order, you might consider using pickle instead. If you use pickle, be sure to specify the binary protocol, otherwise you'll write things using ASCII pickle, which is particularly inefficient for numpy arrays. With a binary protocol, ndarrays more or less pickle to the same format as np.save\/np.savez. For example: \n```\n# Note: This is Python2.x specific. It's identical except for the import on 3.x\nimport cPickle as pickle\nimport numpy as np\n\ndata = [np.arange(8).reshape(2, 4), np.arange(10).reshape(2, 5)]\n\nwith open('mat.pkl', 'wb') as outfile:\n pickle.dump(data, outfile, pickle.HIGHEST_PROTOCOL)\n\nwith open('mat.pkl', 'rb') as infile:\n result = pickle.load(infile)\n```\nIn this case, result and data will have identical contents and the order of the input list of arrays will be preserved.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35133317\/numpy-save-some-arrays-at-once","best_answers_votes":94,"tags":["python","arrays","numpy"],"question_length":917,"response_length":1431,"tag_count":3} {"question":"numpy.r_ is not a function. What is it? According to the numpy\/scipy doc on numpy.r_ here, it is \"not a function, so takes no parameters\". If it is not a function, what is the proper term for \"functions\" such as numpy.r_?","response":"I would argue that for all purposes r_ is a function, but one implemented by a clever hack using different syntax. Mike already explained how r_ is in reality not a function, but a class instance of RClass, which has __getitem__ implemented, so that you can use it as r_[1].
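A quick demonstration of this indexing-as-a-call idea (a minimal sketch): \n```\nimport numpy as np\n\nprint np.r_[1:4, 0, 4] # [1 2 3 0 4]: slices and scalars, concatenated in order\n```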
The cosmetic difference is that you use square brackets instead of curved ones, so you are not doing a function call, but you are actually indexing the object. Although this is technically true, for all purposes, it works just like a function call, but one that allows some extra syntax not allowed by a normal function. The motivation for creating r_ probably comes from Matlab's syntax, which allows one to construct arrays in a very compact way, like x = [1:10, 15, 20:10:100]. To achieve the same in numpy, you would have to do x = np.hstack((np.arange(1,11), 15, np.arange(20,110,10))). Using colons to create ranges is not allowed in Python, but they do exist in the form of the slice notation to index into a list, like L[3:5], and even A[2:10, 20:30] for multi-dimensional arrays. Under the hood, this index notation gets transformed to a call to the __getitem__ method of the object, where the colon notation gets transformed into a slice object: \n```\n\n```python\nclass C(object):\n ...: def __getitem__(self, x):\n ...: print x\n```\n\n```python\nc = C()\n```\n\n```python\nc[1:11, 15, 20:110:10]\n(slice(1, 11, None), 15, slice(20, 110, 10))\n```\nThe r_ object 'abuses' this fact to create a 'function' that accepts slice notation, which also does some additional things like concatenating everything together and returning the result, so that you can write x = np.r_[1:11, 15, 20:110:10]. The \"Not a function, so takes no parameters\" in the documentation is slightly misleading ...","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18601001\/numpy-r-is-not-a-function-what-is-it","best_answers_votes":47,"tags":["python","function","numpy"],"question_length":221,"response_length":1758,"tag_count":3} {"question":"how to plot and annotate hierarchical clustering dendrograms in scipy\/matplotlib I'm using dendrogram from scipy to plot hierarchical clustering using matplotlib as follows: \n```\nmat = array([[1, 0.5, 0.9],\n [0.5, 1, -0.5],\n [0.9, -0.5, 1]])\nplt.subplot(1,2,1)\nplt.title(\"mat\")\ndist_mat = mat\nlinkage_matrix = linkage(dist_mat,\n \"single\")\nprint \"linkage2:\"\nprint linkage(1-dist_mat, \"single\")\ndendrogram(linkage_matrix,\n color_threshold=1,\n labels=[\"a\", \"b\", \"c\"],\n show_leaf_counts=True)\nplt.subplot(1,2,2)\nplt.title(\"1 - mat\")\ndist_mat = 1 - mat\nlinkage_matrix = linkage(dist_mat,\n \"single\")\ndendrogram(linkage_matrix,\n color_threshold=1,\n labels=[\"a\", \"b\", \"c\"],\n show_leaf_counts=True)\n```\nMy questions are: first, why does mat and 1-mat give identical clusterings here? and second, how can I annotate the distance along each branch of the tree using dendrogram so that the distances between pairs of nodes can be compared? finally it seems that show_leaf_counts flag is ignored, is there a way to turn it on so that the number of objects in each class is shown? thanks.","response":"The input to linkage() is either an n x m array, representing n points in m-dimensional space, or a one-dimensional array containing the condensed distance matrix. In your example, mat is 3 x 3, so you are clustering three 3-d points. Clustering is based on the distance between these points. Why does mat and 1-mat give identical clusterings here? The arrays mat and 1-mat produce the same clustering because the clustering is based on distances between the points, and neither a reflection (-mat) nor a translation (mat + offset) of the entire data set changes the relative distances between the points.
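You can verify this invariance directly on the pairwise distances (a sketch using the mat from the question): \n```\nimport numpy as np\nfrom scipy.spatial.distance import pdist\n\nmat = np.array([[1, 0.5, 0.9], [0.5, 1, -0.5], [0.9, -0.5, 1]])\nprint np.allclose(pdist(mat), pdist(1 - mat)) # True, so linkage() sees the same geometry\n```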
How can I annotate the distance along each branch of the tree using dendrogram so that the distances between pairs of nodes can be compared? In the code below, I show how you can use the data returned by dendrogram to label the horizontal segments of the diagram with the corresponding distance. The values associated with the keys icoord and dcoord give the x and y coordinates of each three-segment inverted-U of the figure. In augmented_dendrogram this data is used to add a label of the distance (i.e. y value) of each horizontal line segment in dendrogram. \n```\nfrom scipy.cluster.hierarchy import dendrogram\nimport matplotlib.pyplot as plt\n\n\ndef augmented_dendrogram(*args, **kwargs):\n\n ddata = dendrogram(*args, **kwargs)\n\n if not kwargs.get('no_plot', False):\n for i, d in zip(ddata['icoord'], ddata['dcoord']):\n x = 0.5 * sum(i[1:3])\n y = d[1]\n plt.plot(x, y, 'ro')\n plt.annotate(\"%.3g\" % y, (x, y), xytext=(0, -8),\n textcoords='offset points',\n va='top', ha='center')\n\n return ddata\n```\nFor your mat array, the augmented dendrogram is So point 'a' and 'c' are 1.01 units apart, and point 'b' is 1.57 units from the cluster ['a', 'c']. It seems that show_leaf_counts flag is ignored, is there a way to turn it on so that the number of objects in each class is shown? The flag show_leaf_counts only applies when not all the original data points are shown as leaves. For example, when trunc_mode = \"lastp\", only the last p nodes are show. Here's an example with 100 points: \n```\nimport numpy as np\nfrom scipy.cluster.hierarchy import linkage\nimport matplotlib.pyplot as plt\nfrom augmented_dendrogram import augmented_dendrogram\n\n\n# Generate a random sample of `n` points in 2-d.\nnp.random.seed(12312)\nn = 100\nx = np.random.multivariate_normal([0, 0], np.array([[4.0, 2.5], [2.5, 1.4]]),\n size=(n,))\n\nplt.figure(1, figsize=(6, 5))\nplt.clf()\nplt.scatter(x[:, 0], x[:, 1])\nplt.axis('equal')\nplt.grid(True)\n\nlinkage_matrix = linkage(x, \"single\")\n\nplt.figure(2, figsize=(10, 4))\nplt.clf()\n\nplt.subplot(1, 2, 1)\nshow_leaf_counts = False\nddata = augmented_dendrogram(linkage_matrix,\n color_threshold=1,\n p=6,\n truncate_mode='lastp',\n show_leaf_counts=show_leaf_counts,\n )\nplt.title(\"show_leaf_counts = %s\" % show_leaf_counts)\n\nplt.subplot(1, 2, 2)\nshow_leaf_counts = True\nddata = augmented_dendrogram(linkage_matrix,\n color_threshold=1,\n p=6,\n truncate_mode='lastp',\n show_leaf_counts=show_leaf_counts,\n )\nplt.title(\"show_leaf_counts = %s\" % show_leaf_counts)\n\nplt.show()\n```\nThese are the points in the data set: With p=6 and trunc_mode=\"lastp\", dendrogram only shows the \"top\" of the dendrogram. The following shows the effect of show_leaf_counts.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11917779\/how-to-plot-and-annotate-hierarchical-clustering-dendrograms-in-scipy-matplotlib","best_answers_votes":70,"tags":["python","numpy","matplotlib","scipy","dendrogram"],"question_length":1206,"response_length":3535,"tag_count":5} {"question":"Python debugger tells me value of Numpy array is \"*** Newest frame\" What does this mean? My function gets two numpy arrays from a python\/c library. After that function call I turn on the debugger to find a bug, so I add the line to look at the two numpy arrays. 
\n```\nimport pdb; pdb.set_trace()\n```\nBut for the values of one of the arrays pdb only returns the message *** Newest frame PDB output: \n```\n(Pdb) type(d)\n\n(Pdb) type(f)\n\n(Pdb) f.shape\n(3, 3, 17856)\n(Pdb) d[0].shape\n*** Newest frame\n(Pdb) d[0]\n*** Newest frame\n```","response":"The command d is the command for the debugger used to go down the stack to a 'newer frame'. It seems that the parsing cannot not handle this disambiguity. Try renaming the variable d. EDIT: Actually, the comments suggest much better handling than renaming.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15160946\/python-debugger-tells-me-value-of-numpy-array-is-newest-frame","best_answers_votes":75,"tags":["python","numpy","pdb"],"question_length":569,"response_length":256,"tag_count":3} {"question":"Create large random boolean matrix with numpy I am trying to create a huge boolean matrix which is randomly filled with True and False with a given probability p. At first I used this code: \n```\nN = 30000\np = 0.1\nnp.random.choice(a=[False, True], size=(N, N), p=[p, 1-p])\n```\nBut sadly it does not seem to terminate for this big N. So I tried to split it up into the generation of the single rows by doing this: \n```\nN = 30000\np = 0.1\nmask = np.empty((N, N))\nfor i in range (N):\n mask[i] = np.random.choice(a=[False, True], size=N, p=[p, 1-p]) \n if (i % 100 == 0):\n print(i)\n```\nNow, there happens something strange (at least on my device): The first ~1100 rows are very fastly generated - but after it, the code becomes horribly slow. Why is this happening? What do I miss here? Are there better ways to create a big matrix which has True entries with probability p and False entries with probability 1-p? Edit: As many of you assumed that the RAM will be a problem: As the device which will run the code has almost 500GB RAM, this won't be a problem.","response":"The problem is your RAM, the values are being stored in memory as it's being created. I just created this matrix using this command: np.random.choice(a=[False, True], size=(N, N), p=[p, 1-p]) I used an AWS i3 instance with 64GB of RAM and 8 cores. To create this matrix, htop shows that it takes up ~20GB of RAM. Here is a benchmark in case you care: \n```\ntime np.random.choice(a=[False, True], size=(N, N), p=[p, 1-p])\n\nCPU times: user 18.3 s, sys: 3.4 s, total: 21.7 s\nWall time: 21.7 s\n\n\n def mask_method(N, p):\n for i in range(N):\n mask[i] = np.random.choice(a=[False, True], size=N, p=[p, 1-p])\n if (i % 100 == 0):\n print(i)\n\ntime mask_method(N,p)\n\nCPU times: user 20.9 s, sys: 1.55 s, total: 22.5 s\nWall time: 22.5 s\n```\nNote that the mask method only takes up ~9GB of RAM at it's peak. Edit: The first method flushes the RAM after the process is done where as the function method retains all of it.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43528637\/create-large-random-boolean-matrix-with-numpy","best_answers_votes":35,"tags":["python","numpy","random"],"question_length":1080,"response_length":933,"tag_count":3} {"question":"Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()? I do not quite understand why numpy.linalg.solve() gives the more precise answer, whereas numpy.linalg.inv() breaks down somewhat, giving (what I believe are) estimates. For a concrete example, I am solving the equation C^{-1} * d where C denotes a matrix, and d is a vector-array. 
For the sake of discussion, the dimensions of C are shape (1000,1000) and d is shape (1,1000). numpy.linalg.solve(A, b) solves the equation A*x=b for x, i.e. x = A^{-1} * b. Therefore, I could either solve this equation by (1) \n```\ninverse = numpy.linalg.inv(C)\nresult = inverse * d\n```\nor (2) \n```\nnumpy.linalg.solve(C, d)\n```\nMethod (2) gives far more precise results. Why is this? What exactly is happening such that one \"works better\" than the other?","response":"np.linalg.solve(A, b) does not compute the inverse of A. Instead it calls one of the gesv LAPACK routines, which first factorizes A using LU decomposition, then solves for x using forward and backward substitution (see here). np.linalg.inv uses the same method to compute the inverse of A by solving for A-1 in A\u00b7A-1 = I where I is the identity*. The factorization step is exactly the same as above, but it takes more floating point operations to solve for A-1 (an n\u00d7n matrix) than for x (an n-long vector). Additionally, if you then wanted to obtain x via the identity A-1\u00b7b = x then the extra matrix multiplication would incur yet more floating point operations, and therefore slower performance and more numerical error. There's no need for the intermediate step of computing A-1 - it is faster and more accurate to obtain x directly. * The relevant bit of source for inv is here. Unfortunately it's a bit tricky to understand since it's templated C. The important thing to note is that an identity matrix is being passed to the LAPACK solver as parameter B.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/31256252\/why-does-numpy-linalg-solve-offer-more-precise-matrix-inversions-than-numpy-li","best_answers_votes":68,"tags":["python","arrays","numpy","matrix","linear-algebra"],"question_length":828,"response_length":1061,"tag_count":5} {"question":"numpy savetxt formatted as integer is not saving zeroes I am trying to save numpy.array to .csv in the following way. \n```\nwith open(\"resultTR.csv\", \"wb\") as f:\n f.write(b'ImageId,Label\\n')\n numpy.savetxt(f, result, fmt='%i', delimiter=\",\")\n```\nresult is numpy.array that consists of two columns, first column are indices (numbers 1 through n) and second column values from (0,9). Unfortunately I have problem that whenever there is 0 in the second column then nothing is written to the resulting .csv file in the second column. In other words first five rows of array looks like this: \n```\n[[ 1.00000000e+00 2.00000000e+00]\n [ 2.00000000e+00 0.00000000e+00]\n [ 3.00000000e+00 9.00000000e+00]\n [ 4.00000000e+00 9.00000000e+00]\n [ 5.00000000e+00 3.00000000e+00]\n```\nAnd first five rows of .csv file like this: \n```\nImageId,Label\n1,2\n2\n3,9\n4,9\n5,3\n```\nIt looks to me like my code should work and thus not saving zeroes seems to me very weird. Does anyone have some idea what can possibly be wrong with my code for writing to .csv file? 
EDIT: Just for completeness, my python version is 2.7.2 and it's running on Mac OS X 10.9.2","response":"I would try saving the array as an int array, as in result.astype(int), or in full: \n```\nwith open(\"resultTR.csv\", \"wb\") as f:\n f.write(b'ImageId,Label\\n')\n numpy.savetxt(f, result.astype(int), fmt='%i', delimiter=\",\")\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22557322\/numpy-savetxt-formatted-as-integer-is-not-saving-zeroes","best_answers_votes":59,"tags":["python","arrays","csv","numpy"],"question_length":1144,"response_length":228,"tag_count":4} {"question":"Cannot convert list to array: ValueError: only one element tensors can be converted to Python scalars I'm currently working with the PyTorch framework and trying to understand foreign code. I got an indices issue and wanted to print the shape of a list. The only way of doing so (as far as Google tells me) is to convert the list into a numpy array and then get the shape with numpy.ndarray.shape(). But trying to convert my list into an array, I got a ValueError: only one element tensors can be converted to Python scalars. My list is a converted PyTorch tensor (list(pytorchTensor)) and looks somewhat like this: \n```\n[\ntensor([[-0.2781, -0.2567, -0.2353, ..., -0.9640, -0.9855, -1.0069], \n [-0.2781, -0.2567, -0.2353, ..., -1.0069, -1.0283, -1.0927], \n [-0.2567, -0.2567, -0.2138, ..., -1.0712, -1.1141, -1.1784], \n ..., \n [-0.6640, -0.6425, -0.6211, ..., -1.0712, -1.1141, -1.0927], \n [-0.6640, -0.6425, -0.5997, ..., -0.9426, -0.9640, -0.9640], \n [-0.6640, -0.6425, -0.5997, ..., -0.9640, -0.9426, -0.9426]]),\n\ntensor([[-0.0769, -0.0980, -0.0769, ..., -0.9388, -0.9598, -0.9808], \n [-0.0559, -0.0769, -0.0980, ..., -0.9598, -1.0018, -1.0228], \n [-0.0559, -0.0769, -0.0769, ..., -1.0228, -1.0439, -1.0859], \n ..., \n [-0.4973, -0.4973, -0.4973, ..., -1.0018, -1.0439, -1.0228], \n [-0.4973, -0.4973, -0.4973, ..., -0.8757, -0.9177, -0.9177], \n [-0.4973, -0.4973, -0.4973, ..., -0.9177, -0.8967, -0.8967]]),\n\ntensor([[-0.1313, -0.1313, -0.1100, ..., -0.8115, -0.8328, -0.8753], \n [-0.1313, -0.1525, -0.1313, ..., -0.8541, -0.8966, -0.9391], \n [-0.1100, -0.1313, -0.1100, ..., -0.9391, -0.9816, -1.0666], \n ..., \n [-0.4502, -0.4714, -0.4502, ..., -0.8966, -0.8966, -0.8966], \n [-0.4502, -0.4714, -0.4502, ..., -0.8115, -0.8115, -0.7903], \n [-0.4502, -0.4714, -0.4502, ..., -0.8115, -0.7690, -0.7690]]),\n]\n```\nIs there a way of getting the shape of that list without converting it into a numpy array?","response":"It seems like you have a list of tensors. For each tensor you can see its size() (no need to convert to list\/numpy). If you insist, you can convert a tensor to a numpy array using numpy(): Returns a list of tensor shapes: \n```\n>> [t.size() for t in my_list_of_tensors]\n```\nReturns a list of numpy arrays: \n```\n>> [t.numpy() for t in my_list_of_tensors]\n```\nIn terms of performance, it is always best to avoid casting tensors into numpy arrays, as it may incur sync of device\/host memory.
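(If the tensors all share one shape, as they appear to here, you can also read a single combined shape by stacking; a sketch reusing the my_list_of_tensors name from above: \n```\n>> torch.stack(my_list_of_tensors).size() # the shared shape with a new leading dim = list length\n```\nNote that stacking allocates a new tensor, so only do this if you need the stacked data anyway.)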
If you only need to check the shape of a tensor, use size() function.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/52074153\/cannot-convert-list-to-array-valueerror-only-one-element-tensors-can-be-conver","best_answers_votes":39,"tags":["python","numpy","pytorch","numpy-ndarray"],"question_length":2068,"response_length":557,"tag_count":4} {"question":"Specifying dtype float32 with pandas.read_csv on pandas 0.10.1 I'm attempting to read a simple space-separated file with pandas read_csv method. However, pandas doesn't seem to be obeying my dtype argument. Maybe I'm incorrectly specifying it? I've distilled down my somewhat complicated call to read_csv to this simple test case. I'm actually using the converters argument in my 'real' scenario but I removed this for simplicity. Below is my ipython session: \n```\n\n```python\ncat test.out\n#Output\n#a b\n#0.76398 0.81394\n#0.32136 0.91063\n```\n\n```python\nimport pandas\n```\n\n```python\nimport numpy\n```\n\n```python\nx = pandas.read_csv('test.out', dtype={'a': numpy.float32}, delim_whitespace=True)\n```\n\n```python\nx\n#Output\n# a b\n#0 0.76398 0.81394\n#1 0.32136 0.91063\n```\n\n```python\nx.a.dtype\n#Output\n#dtype('float64')\n#```\n#I've also tried this using this with a dtype of numpy.int32 or numpy.int64. These choices result in an exception: \n#```\n#AttributeError: 'NoneType' object has no attribute 'dtype'\n#```\n#I'm assuming the AttributeError is because pandas will not automatically try to convert\/truncate the float values into an integer? I'm running on a 32-bit machine with a 32-bit version of Python. \n#```\n```\n\n```python\n!uname -a\n#Output\n#Linux ubuntu 3.0.0-13-generic #22-Ubuntu SMP Wed Nov 2 13:25:36 UTC 2011 i686 i686 i386 GNU\/Linux\n```\n\n```python\nimport platform\n```\n\n```python\nplatform.architecture()\n#Output\n#('32bit', 'ELF')\n```\n\n```python\npandas.__version__\n#Output\n#'0.10.1'\n#```\n```","response":"0.10.1 doesn't really support float32 very much see this http:\/\/pandas.pydata.org\/pandas-docs\/dev\/whatsnew.html#dtype-specification you can do this in 0.11 like this: \n```\n# dont' use dtype converters explicity for the columns you care about\n# they will be converted to float64 if possible, or object if they cannot\ndf = pd.read_csv('test.csv'.....)\n\n#### this is optional and related to the issue you posted ####\n# force anything that is not a numeric to nan\n# columns are the list of columns that you are interesetd in\ndf[columns] = df[columns].convert_objects(convert_numeric=True)\n\n\n # astype\n df[columns] = df[columns].astype('float32')\n\nsee http:\/\/pandas.pydata.org\/pandas-docs\/dev\/basics.html#object-conversion\n\nIts not as efficient as doing it directly in read_csv (but that requires\n some low-level changes)\n```\nI have confirmed that with 0.11-dev, this DOES work (on 32-bit and 64-bit, results are the same) \n```\n\n```python\nx = pd.read_csv(StringIO.StringIO(data), dtype={'a': np.float32}, delim_whitespace=True)\n```\n\n```python\nx\n#Output\n#a b\n#0 0.76398 0.81394\n#1 0.32136 0.91063\n```\n\n```python\nx.dtypes\n#Output\n#a float32\n#b float64\n#dtype: object\n```\n\n```python\npd.__version__\n#Output\n#'0.11.0.dev-385ff82'\n```\n\n```python\nquit()\nvagrant@precise32:~\/pandas$ uname -a\nLinux precise32 3.2.0-23-generic-pae #36-Ubuntu SMP Tue Apr 10 22:19:09 UTC 2012 i686 i686 i386 
GNU\/Linux\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15210962\/specifying-dtype-float32-with-pandas-read-csv-on-pandas-0-10-1","best_answers_votes":28,"tags":["python","pandas","numpy"],"question_length":1336,"response_length":1388,"tag_count":3} {"question":"How to get the cumulative distribution function with NumPy? I want to create a CDF with NumPy, my code is the next: \n```\nhisto = np.zeros(4096, dtype = np.int32)\nfor x in range(0, width):\n for y in range(0, height):\n histo[data[x][y]] += 1\n q = 0 \n cdf = list()\n for i in histo:\n q = q + i\n cdf.append(q)\n```\nI am walking by the array but take a long time the program execution. There is a built function with this feature, isn't?","response":"Using a histogram is one solution but it involves binning the data. This is not necessary for plotting a CDF of empirical data. Let F(x) be the count of how many entries are less than x then it goes up by one, exactly where we see a measurement. Thus, if we sort our samples then at each point we increment the count by one (or the fraction by 1\/N) and plot one against the other we will see the \"exact\" (i.e. un-binned) empirical CDF. A following code sample demonstrates the method \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nN = 100\nZ = np.random.normal(size = N)\n# method 1\nH,X1 = np.histogram( Z, bins = 10, normed = True )\ndx = X1[1] - X1[0]\nF1 = np.cumsum(H)*dx\n#method 2\nX2 = np.sort(Z)\nF2 = np.array(range(N))\/float(N)\n\nplt.plot(X1[1:], F1)\nplt.plot(X2, F2)\nplt.show()\n```\nIt outputs the following","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10640759\/how-to-get-the-cumulative-distribution-function-with-numpy","best_answers_votes":109,"tags":["python","numpy","histogram"],"question_length":456,"response_length":819,"tag_count":3} {"question":"Calculate Matrix Rank using scipy I'd like to calculate the mathematical rank of a matrix using scipy. The most obvious function numpy.rank calculates the dimension of an array (ie. scalars have dimension 0, vectors 1, matrices 2, etc...). I am aware that the numpy.linalg.lstsq module has this capability, but I was wondering if such a fundamental operation is built into the matrix class somewhere. Here is an explicit example: \n```\nfrom numpy import matrix, rank\nA = matrix([[1,3,7],[2,8,3],[7,8,1]])\nprint rank(A)\n```\nThis gives 2 the dimension, where I'm looking for an answer of 3.","response":"Numpy provides numpy.linalg.matrix_rank(): \n```\n\n```python\nimport numpy\n```\n\n```python\nnumpy.__version__\n#Output\n#'1.5.1'\n```\n\n```python\nA = numpy.matrix([[1,3,7],[2,8,3],[7,8,1]])\n```\n\n```python\nnumpy.linalg.matrix_rank(A)\n#Output\n#3\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2473983\/calculate-matrix-rank-using-scipy","best_answers_votes":62,"tags":["python","matrix","numpy","scipy"],"question_length":587,"response_length":180,"tag_count":4} {"question":"numpy, how do I find total rows in a 2D array and total column in a 1D array Hi apologies for the newbie question, but I'm wondering if someone can help me with two questions. Example say I have this, [[1,2,3],[10,2,2]] I have two questions. How do I find total columns: How do I find total rows: thank you very much. 
A","response":"Getting number of rows and columns is as simple as: \n```\n\n```python\nimport numpy as np\n```\n\n```python\na=np.array([[1,2,3],[10,2,2]])\n```\n\n```python\nnum_rows, num_cols = a.shape\n```\n\n```python\nprint num_rows, num_cols\n#Output\n#2 3\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18688948\/numpy-how-do-i-find-total-rows-in-a-2d-array-and-total-column-in-a-1d-array","best_answers_votes":64,"tags":["python","numpy"],"question_length":319,"response_length":184,"tag_count":2} {"question":"vectorize conditional assignment in pandas dataframe If I have a dataframe df with column x and want to create column y based on values of x using this in pseudo code: \n```\nif df['x'] < -2 then df['y'] = 1 \nelse if df['x'] > 2 then df['y'] = -1 \nelse df['y'] = 0\n```\nHow would I achieve this? I assume np.where is the best way to do this but not sure how to code it correctly.","response":"One simple method would be to assign the default value first and then perform 2 loc calls: \n```\n\n```python\ndf = pd.DataFrame({'x':[0,-3,5,-1,1]})\ndf\n#Output\n#x\n#0 0\n#1 -3\n#2 5\n#3 -1\n#4 1\n```\n\n```python\ndf['y'] = 0\ndf.loc[df['x'] < -2, 'y'] = 1\ndf.loc[df['x'] > 2, 'y'] = -1\ndf\n#Output\n#x y\n#0 0 0\n#1 -3 1\n#2 5 -1\n#3 -1 0\n#4 1 0\n#```\n#If you wanted to use np.where then you could do it with a nested np.where: \n#```\n```\n\n```python\ndf['y'] = np.where(df['x'] < -2 , 1, np.where(df['x'] > 2, -1, 0))\ndf\n#Output\n#x y\n#0 0 0\n#1 -3 1\n#2 5 -1\n#3 -1 0\n#4 1 0\n#```\n#So here we define the first condition as where x is less than -2, return 1, then we have another np.where which tests the other condition where x is greater than 2 and returns -1, otherwise return 0 timings \n#```\n```\n\n```python\n%timeit df['y'] = np.where(df['x'] < -2 , 1, np.where(df['x'] > 2, -1, 0))\n\n1000 loops, best of 3: 1.79 ms per loop\n```\n\n```python\n%%timeit\ndf['y'] = 0\ndf.loc[df['x'] < -2, 'y'] = 1\ndf.loc[df['x'] > 2, 'y'] = -1\n\n100 loops, best of 3: 3.27 ms per loop\n```\nSo for this sample dataset the np.where method is twice as fast\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28896769\/vectorize-conditional-assignment-in-pandas-dataframe","best_answers_votes":72,"tags":["python","pandas","dataframe","numpy","vectorization"],"question_length":376,"response_length":1092,"tag_count":5} {"question":"NumPy append vs concatenate What is the difference between NumPy append and concatenate? My observation is that concatenate is a bit faster and append flattens the array if axis is not specified. 
\n```\n\n```python\nprint a\n[[1 2]\n [3 4]\n [5 6]\n [5 6]\n [1 2]\n [3 4]\n [5 6]\n [5 6]\n [1 2]\n [3 4]\n [5 6]\n [5 6]\n [5 6]]\n```\n\n```python\nprint b\n[[1 2]\n [3 4]\n [5 6]\n [5 6]\n [1 2]\n [3 4]\n [5 6]\n [5 6]\n [5 6]]\n```\n\n```python\ntimeit -n 10000 -r 5 np.concatenate((a, b))\n10000 loops, best of 5: 2.05 \u00b5s per loop\n```\n\n```python\ntimeit -n 10000 -r 5 np.append(a, b, axis = 0)\n10000 loops, best of 5: 2.41 \u00b5s per loop\n```\n\n```python\nnp.concatenate((a, b))\n#Output\n#array([[1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [5, 6],\n# [1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [5, 6]])\n```\n\n```python\nnp.append(a, b, axis = 0)\n#Output\n#array([[1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [5, 6],\n# [1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [1, 2],\n# [3, 4],\n# [5, 6],\n# [5, 6],\n# [5, 6]])\n```\n\n```python\nnp.append(a, b)\n#Output\n#array([1, 2, 3, 4, 5, 6, 5, 6, 1, 2, 3, 4, 5, 6, 5, 6, 1, 2, 3, 4, 5, 6, 5,\n# 6, 5, 6, 1, 2, 3, 4, 5, 6, 5, 6, 1, 2, 3, 4, 5, 6, 5, 6, 5, 6])\n#```\n```","response":"np.append uses np.concatenate: \n```\ndef append(arr, values, axis=None):\n arr = asanyarray(arr)\n if axis is None:\n if arr.ndim != 1:\n arr = arr.ravel()\n values = ravel(values)\n axis = arr.ndim-1\n return concatenate((arr, values), axis=axis)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35932101\/numpy-append-vs-concatenate","best_answers_votes":80,"tags":["python","numpy"],"question_length":1522,"response_length":284,"tag_count":2} {"question":"Solve an equation using a python numerical solver in numpy I have an equation, as follows: R - ((1.0 - np.exp(-tau))\/(1.0 - np.exp(-a*tau))) = 0. I want to solve for tau in this equation using a numerical solver available within numpy. What is the best way to go about this? The values for R and a in this equation vary for different implementations of this formula, but are fixed at particular values when it is to be solved for tau.","response":"In conventional mathematical notation, your equation is The SciPy fsolve function searches for a point at which a given expression equals zero (a \"zero\" or \"root\" of the expression). You'll need to provide fsolve with an initial guess that's \"near\" your desired solution. A good way to find such an initial guess is to just plot the expression and look for the zero crossing. 
\n```\n#!\/usr\/bin\/python\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import fsolve\n\n# Define the expression whose roots we want to find\n\na = 0.5\nR = 1.6\n\nfunc = lambda tau : R - ((1.0 - np.exp(-tau))\/(1.0 - np.exp(-a*tau))) \n\n# Plot it\n\ntau = np.linspace(-0.5, 1.5, 201)\n\nplt.plot(tau, func(tau))\nplt.xlabel(\"tau\")\nplt.ylabel(\"expression value\")\nplt.grid()\nplt.show()\n\n# Use the numerical solver to find the roots\n\ntau_initial_guess = 0.5\ntau_solution = fsolve(func, tau_initial_guess)\n\nprint \"The solution is tau = %f\" % tau_solution\nprint \"at which the value of the expression is %f\" % func(tau_solution)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22742951\/solve-an-equation-using-a-python-numerical-solver-in-numpy","best_answers_votes":58,"tags":["python-2.7","numpy","equation","solver"],"question_length":434,"response_length":1012,"tag_count":4} {"question":"Getting data from ctypes array into numpy I am using a Python (via ctypes) wrapped C library to run a series of computations. At different stages of the running, I want to get data into Python, and specifically numpy arrays. The wrapping I am using does two different types of return for array data (which is of particular interest to me): ctypes Array: When I do type(x) (where x is the ctypes array), I get a ctypes array type in return. I know from the documentation that this data is a copy of the internal data and I am able to get it into a numpy array easily: \n```\n\n```python\nnp.ctypeslib.as_array(x)\n#Output\n#```\n#This returns a 1D numpy array of the data. ctype pointer to data: In this case from the library's documentation, I understand that I am getting a pointer to the data stored and used directly by the library. When I do type(y) (where y is the pointer) I get LP_c_double. With this case I am still able to index through the data like y[0][2], but I was only able to get it into numpy via a super awkward: \n#```\n```\n\n```python\nnp.frombuffer(np.core.multiarray.int_asbuffer(\n#Output\n# ctypes.addressof(y.contents), array_length*np.dtype(float).itemsize))\n#```\n#I found this in an old numpy mailing list thread from Travis Oliphant, but not in the numpy documentation. If instead of this approach I try np.ctypeslib.as_array as above, I get the following: \n#```\n```\n\n```python\nnp.ctypeslib.as_array(y)\n BUNCH OF STACK INFORMATION\n#Output\n#...\n#...\n#AttributeError: 'LP_c_double' object has no attribute '__array_interface__'\n#```\n#Is this np.frombuffer approach the best or only way to do this? I am open to other suggestions but would still like to use numpy as I have a lot of other post-processing code that relies on numpy functionality that I want to use with this data.\",\"response\":\"Creating NumPy arrays from a ctypes pointer object is a problematic operation. It is unclear who actually owns the memory the pointer is pointing to. When will it be freed again? How long is it valid? Whenever possible I would try to avoid this kind of construct. It is so much easier and safer to create arrays in the Python code and pass them to the C function than to use memory allocated by a Python-unaware C function. By doing the latter, you negate to some extent the advantages of having a high-level language taking care of the memory management. If you are really sure that someone takes care of the memory, you can create an object exposing the Python \"buffer protocol\" and then create a NumPy array using this buffer object.
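(An aside for later readers: in more recent NumPy versions, np.ctypeslib.as_array can wrap a bare ctypes pointer directly if you give it an explicit shape; a sketch, reusing y and array_length from the question: \n```\na = np.ctypeslib.as_array(y, shape=(array_length,))\n```\nThe ownership caveats above still apply; the array merely views the pointed-to memory.)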
You gave one way of creating the buffer object in your post, via the undocumented int_asbuffer() function: \n```\nbuffer = numpy.core.multiarray.int_asbuffer(\n ctypes.addressof(y.contents), 8*array_length)\n```\n(Note that I substituted 8 for np.dtype(float).itemsize. It's always 8, on any platform.) A different way to create the buffer object would be to call the PyBuffer_FromMemory() function from the Python C API via ctypes: \n```\nbuffer_from_memory = ctypes.pythonapi.PyBuffer_FromMemory\nbuffer_from_memory.restype = ctypes.py_object\nbuffer = buffer_from_memory(y, 8*array_length)\n```\nFor both these ways, you can create a NumPy array from buffer by \n```\na = numpy.frombuffer(buffer, float)\n```\n(I actually do not understand why you use .astype() instead of a second parameter to frombuffer; furthermore, I wonder why you use np.int, while you said earlier that the array contains doubles.) I'm afraid it won't get much easier than this, but it isn't that bad, don't you think? You could bury all the ugly details in a wrapper function and don't worry about it any more.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4355524\/getting-data-from-ctypes-array-into-numpy","best_answers_votes":30,"tags":["python","numpy","ctypes"],"question_length":1685,"response_length":1813,"tag_count":3} {"question":"What is a \"scalar\" in NumPy? The documentation states the purpose of scalars, such as the fact that conventional Python numbers like float and integer are too primitive, and therefore more complex data types are necessary. It also states certain kinds of scalars (data type hierarchy); as well as a couple of attributes of scalar. But it never gives a concrete definition of exactly what a scalar is in the context of Python. I want to get to the heart of the issue on this. In the simplest terms possible, what is a Pythonic scalar?","response":"A NumPy scalar is any object which is an instance of np.generic or whose type is in np.ScalarType: \n```none\n\n```python\nnp.ScalarType\n#Output\n#(int,\n# float,\n# complex,\n# long,\n# bool,\n# str,\n# unicode,\n# buffer,\n# numpy.int16,\n# numpy.float16,\n# numpy.int8,\n# numpy.uint64,\n# numpy.complex192,\n# numpy.void,\n# numpy.uint32,\n# numpy.complex128,\n# numpy.unicode_,\n# numpy.uint32,\n# numpy.complex64,\n# numpy.string_,\n# numpy.uint16,\n# numpy.timedelta64,\n# numpy.bool_,\n# numpy.uint8,\n# numpy.datetime64,\n# numpy.object_,\n# numpy.int64,\n# numpy.float96,\n# numpy.int32,\n# numpy.float64,\n# numpy.int32,\n# numpy.float32)\n#```\n#This definition comes from looking at the source code for np.isscalar: \n#```\n#def isscalar(num):\n# if isinstance(num, generic):\n# return True\n# else:\n# return type(num) in ScalarType\n#```\n#Note that you can test if something is a scalar by using np.isscalar: \n#```none\n#>>> np.isscalar(3.1)\n#True\n#\n#>>> np.isscalar([3.1])\n#False\n#\n#>>> np.isscalar(False)\n#True\n#```\n#How do we know what we know? I like learning how people know what they know\u2014more than the answers themselves. So let me try to explain where the above answer comes from. Having the right tools can help you figure out things like this for yourself. I found this out by using IPython. Using its TAB-completion feature, typing \n#```none\n```\n\n```python\nimport numpy as np\n```\n\n```python\nnp.[TAB]\n```\ncauses IPython to display all variables in the np module namespace. A search for the string \"scalar\" will lead you to np.ScalarType and np.isscalar. 
Typing \n```none\n```\n\n```python\nnp.isscalar?\n```\n(note the question mark at the end) prompts IPython to show you where np.isscalar is defined: \n```none\nFile: \/data1\/unutbu\/.virtualenvs\/dev\/lib\/python2.7\/site-packages\/numpy\/core\/numeric.py\n```\nwhich is how I got to the definition of isscalar. Alternatively, the NumPy documentation for isscalar has a link to the source code as well.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21968643\/what-is-a-scalar-in-numpy","best_answers_votes":54,"tags":["python","numpy","scipy"],"question_length":533,"response_length":1863,"tag_count":3} {"question":"Are Decimal 'dtypes' available in NumPy? Are Decimal data type objects (dtypes) available in NumPy? \n```\n\n```python\nimport decimal, numpy\n```\n\n```python\nd = decimal.Decimal('1.1')\n```\n\n```python\ns = [['123.123','23'],['2323.212','123123.21312']]\n```\n\n```python\nss = numpy.array(s, dtype=numpy.dtype(decimal.Decimal))\n```\n\n```python\na = numpy.array(s, dtype=float)\n```\n\n```python\ntype(d)\n#Output\n#\n```\n\n```python\ntype(ss[1,1])\n#Output\n#\n```\n\n```python\ntype(a[1,1])\n#Output\n#\n#```\n#I suppose numpy.array doesn't support every dtype, but I sort of thought that it would at least let a dtype propagate as far as it could as long as the right operations were defined. Am I missing something? Is there some way for this to work?\n```","response":"NumPy doesn't recognize decimal.Decimal as a specific type. The closest it can get is the most general dtype, object. So when converting the elements to the desired dtype, the conversion is a no operation. \n```\n\n```python\nss.dtype\n#Output\n#dtype('object')\n#```\n#Keep in mind that because the elements of the array are Python objects, you won't get much of a speedup using them. For example, if you try to add this to any other array, the other elements will have to be boxed back into Python objects and added via the normal Python addition code. You might gain some speed in that the iteration will be in C, but not that much.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7770870\/are-decimal-dtypes-available-in-numpy","best_answers_votes":35,"tags":["python","numpy"],"question_length":670,"response_length":609,"tag_count":2} {"question":"interpolate 3D volume with numpy and or scipy I am extremely frustrated because after several hours I can't seem to be able to do a seemingly easy 3D interpolation in python. In Matlab all I had to do was \n```\nVi = interp3(x,y,z,V,xi,yi,zi)\n```\nWhat is the exact equivalent of this using scipy's ndimage.map_coordinate or other numpy methods? Thanks","response":"In scipy 0.14 or later, there is a new function scipy.interpolate.RegularGridInterpolator which closely resembles interp3. The MATLAB command Vi = interp3(x,y,z,V,xi,yi,zi) would translate to something like: \n```\nfrom numpy import array\nfrom scipy.interpolate import RegularGridInterpolator as rgi\nmy_interpolating_function = rgi((x,y,z), V)\nVi = my_interpolating_function(array([xi,yi,zi]).T)\n```\nHere is a full example demonstrating both; it will help you understand the exact differences... 
MATLAB CODE: \n```\nx = linspace(1,4,11);\ny = linspace(4,7,22);\nz = linspace(7,9,33);\nV = zeros(22,11,33);\nfor i=1:11\n for j=1:22\n for k=1:33\n V(j,i,k) = 100*x(i) + 10*y(j) + z(k);\n end\n end\nend\nxq = [2,3];\nyq = [6,5];\nzq = [8,7];\nVi = interp3(x,y,z,V,xq,yq,zq);\n```\nThe result is Vi=[268 357] which is indeed the value at those two points (2,6,8) and (3,5,7). SCIPY CODE: \n```\nfrom scipy.interpolate import RegularGridInterpolator\nfrom numpy import linspace, zeros, array\nx = linspace(1,4,11)\ny = linspace(4,7,22)\nz = linspace(7,9,33)\nV = zeros((11,22,33))\nfor i in range(11):\n for j in range(22):\n for k in range(33):\n V[i,j,k] = 100*x[i] + 10*y[j] + z[k]\nfn = RegularGridInterpolator((x,y,z), V)\npts = array([[2,6,8],[3,5,7]])\nprint(fn(pts))\n```\nAgain it's [268,357]. So you see some slight differences: Scipy uses x,y,z index order while MATLAB uses y,x,z (strangely); In Scipy you define a function in a separate step and when you call it, the coordinates are grouped like (x1,y1,z1),(x2,y2,z2),... while matlab uses (x1,x2,...),(y1,y2,...),(z1,z2,...). Other than that, the two are similar and equally easy to use.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21836067\/interpolate-3d-volume-with-numpy-and-or-scipy","best_answers_votes":36,"tags":["python","numpy","3d","scipy","interpolation"],"question_length":349,"response_length":1664,"tag_count":5} {"question":"Numpy argmax. How to compute both max and argmax? Is there a way to get max and argmax by one stroke ? \n```\nimport numpy as np\na=[0,0,1,0]\nmaximum=max(a)\nindex=np.argmax(a)\n```\nIs there a fastest way to do it, with something like: \n```\n[maximum,index]=function(a)\n```","response":"Maybe something like this is faster... 
\n```\nindex = np.argmax(a)\nmaximum = a[index]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26739044\/numpy-argmax-how-to-compute-both-max-and-argmax","best_answers_votes":45,"tags":["python","numpy"],"question_length":267,"response_length":87,"tag_count":2} {"question":"SimpleJSON and NumPy array What is the most efficient way of serializing a numpy array using simplejson?","response":"In order to keep dtype and dimension try this: \n```\nimport base64\nimport json\nimport numpy as np\n\nclass NumpyEncoder(json.JSONEncoder):\n\n    def default(self, obj):\n        \"\"\"If input object is an ndarray it will be converted into a dict \n        holding dtype, shape and the data, base64 encoded.\n        \"\"\"\n        if isinstance(obj, np.ndarray):\n            if obj.flags['C_CONTIGUOUS']:\n                obj_data = obj.data\n            else:\n                cont_obj = np.ascontiguousarray(obj)\n                assert(cont_obj.flags['C_CONTIGUOUS'])\n                obj_data = cont_obj.data\n            data_b64 = base64.b64encode(obj_data)\n            return dict(__ndarray__=data_b64,\n                        dtype=str(obj.dtype),\n                        shape=obj.shape)\n        # Let the base class default method raise the TypeError\n        super(NumpyEncoder, self).default(obj)\n\n\ndef json_numpy_obj_hook(dct):\n    \"\"\"Decodes a previously encoded numpy ndarray with proper shape and dtype.\n\n    :param dct: (dict) json encoded ndarray\n    :return: (ndarray) if input was an encoded ndarray\n    \"\"\"\n    if isinstance(dct, dict) and '__ndarray__' in dct:\n        data = base64.b64decode(dct['__ndarray__'])\n        return np.frombuffer(data, dct['dtype']).reshape(dct['shape'])\n    return dct\n\nexpected = np.arange(100, dtype=np.float)\ndumped = json.dumps(expected, cls=NumpyEncoder)\nresult = json.loads(dumped, object_hook=json_numpy_obj_hook)\n\n\n# None of the following assertions will be broken.\nassert result.dtype == expected.dtype, \"Wrong Type\"\nassert result.shape == expected.shape, \"Wrong Shape\"\nassert np.allclose(expected, result), \"Wrong Values\"\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/3488934\/simplejson-and-numpy-array","best_answers_votes":81,"tags":["python","json","numpy","simplejson"],"question_length":104,"response_length":1668,"tag_count":4} {"question":"Convert NumPy array to 0 or 1 based on threshold I have an array below: \n```\na=np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])\n```\nWhat I want is to convert this vector to a binary vector based on a threshold. Take threshold=0.5 as an example: elements greater than 0.5 convert to 1, otherwise 0. The output vector should look like this: \n```\na_output = [0, 0, 0, 1, 1, 1]\n```\nHow can I do this?","response":"np.where \n```\nnp.where(a > 0.5, 1, 0)\n# array([0, 0, 0, 1, 1, 1])\n```\nBoolean masking with astype \n```\n(a > .5).astype(int)\n# array([0, 0, 0, 1, 1, 1])\n```\nnp.select \n```\nnp.select([a <= .5, a>.5], [np.zeros_like(a), np.ones_like(a)])\n# array([ 0., 0., 0., 1., 1., 1.])\n```\nSpecial case: np.round This is the best solution if your array values are floating values between 0 and 1 and your threshold is 0.5. \n```\na.round()\n# array([0., 0., 0., 1., 1., 1.])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/46214291\/convert-numpy-array-to-0-or-1-based-on-threshold","best_answers_votes":83,"tags":["python","arrays","numpy"],"question_length":387,"response_length":464,"tag_count":3} {"question":"Taking subarrays from numpy array with given stride\/stepsize Let's say I have a Python Numpy array a.
\n```\na = numpy.array([1,2,3,4,5,6,7,8,9,10,11])\n```\nI want to create a matrix of sub-sequences from this array of length 5 with stride 3. The resulting matrix will hence look as follows: \n```\nnumpy.array([[1,2,3,4,5],[4,5,6,7,8],[7,8,9,10,11]])\n```\nOne possible way of implementing this would be using a for-loop. \n```\nresult_matrix = np.zeros((3, 5))\nfor row, i in enumerate(range(0, len(a) - 4, 3)):\n    result_matrix[row] = a[i:i+5]\n```\nIs there a cleaner way to implement this in Numpy?","response":"Approach #1 : Using broadcasting - \n```\ndef broadcasting_app(a, L, S):  # Window len = L, Stride len\/stepsize = S\n    nrows = ((a.size-L)\/\/S)+1\n    return a[S*np.arange(nrows)[:,None] + np.arange(L)]\n```\nApproach #2 : Using more efficient NumPy strides - \n```\ndef strided_app(a, L, S):  # Window len = L, Stride len\/stepsize = S\n    nrows = ((a.size-L)\/\/S)+1\n    n = a.strides[0]\n    return np.lib.stride_tricks.as_strided(a, shape=(nrows,L), strides=(S*n,n))\n```\nSample run - \n```\n\n```python\na\n#Output\n#array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])\n```\n\n```python\nbroadcasting_app(a, L = 5, S = 3)\n#Output\n#array([[ 1, 2, 3, 4, 5],\n# [ 4, 5, 6, 7, 8],\n# [ 7, 8, 9, 10, 11]])\n```\n\n```python\nstrided_app(a, L = 5, S = 3)\n#Output\n#array([[ 1, 2, 3, 4, 5],\n# [ 4, 5, 6, 7, 8],\n# [ 7, 8, 9, 10, 11]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40084931\/taking-subarrays-from-numpy-array-with-given-stride-stepsize","best_answers_votes":56,"tags":["python","numpy","vectorization"],"question_length":565,"response_length":844,"tag_count":3} {"question":"What is \"random-state\" in sklearn.model_selection.train_test_split example? [duplicate] Can someone explain to me what random_state means in the below example? \n```\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\nX, y = np.arange(10).reshape((5, 2)), range(5)\n\n\nX_train, X_test, y_train, y_test = train_test_split(\n    X, y, test_size=0.33, random_state=42)\n```\nWhy is it hard coded to 42?","response":"Isn't that obvious? 42 is the Answer to the Ultimate Question of Life, the Universe, and Everything. On a serious note, random_state simply sets a seed to the random generator, so that your train-test splits are always deterministic. If you don't set a seed, it is different each time. Relevant documentation: random_state : int, RandomState instance or None, optional (default=None) If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/49147774\/what-is-random-state-in-sklearn-model-selection-train-test-split-example","best_answers_votes":105,"tags":["python","numpy","machine-learning","random","scikit-learn"],"question_length":530,"response_length":607,"tag_count":5} {"question":"How to \"scale\" a numpy array? I would like to scale an array of shape (h, w) by a factor of n, resulting in an array of shape (h*n, w*n), with each element of the original repeated across an n x n block.
Say that I have a 2x2 array: \n```\narray([[1, 1],\n       [0, 1]])\n```\nI would like to scale the array to become 4x4: \n```\narray([[1, 1, 1, 1],\n       [1, 1, 1, 1],\n       [0, 0, 1, 1],\n       [0, 0, 1, 1]])\n```\nThat is, the value of each cell in the original array is copied into 4 corresponding cells in the resulting array. Assuming arbitrary array size and scaling factor, what's the most efficient way to do this?","response":"You should use the Kronecker product, numpy.kron: Computes the Kronecker product, a composite array made of blocks of the second array scaled by the first \n```\nimport numpy as np\na = np.array([[1, 1],\n              [0, 1]])\nn = 2\nnp.kron(a, np.ones((n,n)))\n```\nwhich gives what you want: \n```\narray([[1, 1, 1, 1],\n       [1, 1, 1, 1],\n       [0, 0, 1, 1],\n       [0, 0, 1, 1]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7525214\/how-to-scale-a-numpy-array","best_answers_votes":75,"tags":["python","arrays","numpy","scaling"],"question_length":564,"response_length":381,"tag_count":4} {"question":"How to convert the output of meshgrid to the corresponding array of points? I want to create a list of points that would correspond to a grid. So if I want to create a grid of the region from (0, 0) to (1, 1), it would contain the points (0, 0), (0, 1), (1, 0) and (1, 1). I know that this can be done with the following code: \n```\ng = np.meshgrid([0,1],[0,1])\nnp.append(g[0].reshape(-1,1),g[1].reshape(-1,1),axis=1)\n```\nYielding the result: \n```\narray([[0, 0],\n       [1, 0],\n       [0, 1],\n       [1, 1]])\n```\nMy question is twofold: Is there a better way of doing this? Is there a way of generalizing this to higher dimensions?","response":"I just noticed that the documentation in numpy provides an even faster way to do this: \n```\nX, Y = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]\npositions = np.vstack([X.ravel(), Y.ravel()])\n```\nThis can easily be generalized to more dimensions using the linked meshgrid2 function and mapping 'ravel' to the resulting grid. \n```\ng = meshgrid2(x, y, z)\npositions = np.vstack(map(np.ravel, g))\n```\nThe result is about 35 times faster than the zip method for a 3D array with 1000 ticks on each axis. Source: http:\/\/docs.scipy.org\/doc\/scipy\/reference\/generated\/scipy.stats.gaussian_kde.html#scipy.stats.gaussian_kde To compare the two methods consider the following sections of code: Create the proverbial tick marks that will help to create the grid. \n```\n\n```python\nimport numpy as np\n```\n\n```python\nfrom numpy import asarray\n```\n\n```python\nx = np.random.rand(100,1)\n```\n\n```python\ny = np.random.rand(100,1)\n```\n\n```python\nz = np.random.rand(100,1)\n```\nDefine the function that mgilson linked to for the meshgrid: \n```\n```\n\n```python\ndef meshgrid2(*arrs):\n    arrs = tuple(reversed(arrs))\n    lens = map(len, arrs)\n    dim = len(arrs)\n    sz = 1\n    for s in lens:\n        sz *= s\n    ans = []\n    for i, arr in enumerate(arrs):\n        slc = [1]*dim\n        slc[i] = lens[i]\n        arr2 = asarray(arr).reshape(slc)\n        for j, sz in enumerate(lens):\n            if j != i:\n                arr2 = arr2.repeat(sz, axis=j)\n        ans.append(arr2)\n    return tuple(ans)\n```\nCreate the grid and time the two functions.
\n```\n```\n\n```python\ng = meshgrid2(x, y, z)\n```\n\n```python\n%timeit pos = np.vstack(map(np.ravel, g)).T\n100 loops, best of 3: 7.26 ms per loop\n```\n\n```python\n%timeit zip(*(x.flat for x in g))\n1 loops, best of 3: 264 ms per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12864445\/how-to-convert-the-output-of-meshgrid-to-the-corresponding-array-of-points","best_answers_votes":73,"tags":["python","numpy"],"question_length":635,"response_length":1841,"tag_count":2} {"question":"Random Choice with Pytorch? I have a tensor of pictures, and would like to randomly select from it. I'm looking for the equivalent of np.random.choice(). \n```\nimport torch\n\npictures = torch.randint(0, 256, (1000, 28, 28, 3))\n```\nLet's say I want 10 of these pictures.","response":"torch has no equivalent implementation of np.random.choice(), see the discussion here. The alternative is indexing with a shuffled index or random integers. To do it with replacement: generate n random indices, then index your original tensor with these indices \n```\npictures[torch.randint(len(pictures), (10,))]\n```\nTo do it without replacement: shuffle the index, then take the first n elements \n```\nindices = torch.randperm(len(pictures))[:10]\n\npictures[indices]\n```\nRead more about torch.randint and torch.randperm. The second code snippet is inspired by this post in PyTorch Forums.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/59461811\/random-choice-with-pytorch","best_answers_votes":72,"tags":["python","python-3.x","numpy","machine-learning","pytorch"],"question_length":267,"response_length":571,"tag_count":5} {"question":"Setting the fmt option in numpy.savetxt I am looking at numpy.savetxt and am stuck at the fmt option. I tried looking at the docs here and also at the reference linked below; all the letters that can be used for the fmt option sort of give me a general sense of what is going on. What I do not understand is whether the % symbol is required and, in an example given here, how I should interpret the 10.5 number. If \"f\" is about setting the floating point, then how come it is 10.5 (then again, I might not know how floating points are set...).","response":"Knowing that np.savetxt only works for 1D or 2D arrays, the general idea is: when fmt is a single formatting string, it applies to all elements in the array (1D or 2D input array); when fmt is a sequence of formatting strings, it applies to each column of the 2D input array. I'm presenting here some examples using the following input array: \n```\nimport numpy as np\n\na = np.array([[11, 12, 13, 14],\n              [21, 22, 23, 24],\n              [31, 32, 33, 34]])\n```\n1) Setting floating point precision: np.savetxt('tmp.txt', a, fmt='%1.3f') \n```\n11.000 12.000 13.000 14.000\n21.000 22.000 23.000 24.000\n31.000 32.000 33.000 34.000\n```\n2) Adding characters to right-justify. With spaces: np.savetxt('tmp.txt', a, fmt='% 4d') \n```\n  11   12   13   14\n  21   22   23   24\n  31   32   33   34\n```\nWith zeros: np.savetxt('tmp.txt', a, fmt='%04d') \n```\n0011 0012 0013 0014\n0021 0022 0023 0024\n0031 0032 0033 0034\n```\n3) Adding characters to left-justify (use of \"-\").
With spaces: np.savetxt('tmp.txt', a, fmt='%-4d') \n```\n11   12   13   14 \n21   22   23   24 \n31   32   33   34\n```\n4) When fmt is a sequence of formatting strings, each row of a 2D input array is processed according to fmt. fmt as a sequence in a single formatting string: \n```\nfmt = '%1.1f + %1.1f \/ (%1.1f * %1.1f)'\nnp.savetxt('tmp.txt', a, fmt=fmt)\n\n11.0 + 12.0 \/ (13.0 * 14.0)\n21.0 + 22.0 \/ (23.0 * 24.0)\n31.0 + 32.0 \/ (33.0 * 34.0)\n```\nfmt as an iterator of formatting strings: \n```\nfmt = '%d', '%1.1f', '%1.9f', '%1.9f'\nnp.savetxt('tmp.txt', a, fmt=fmt)\n\n11 12.0 13.000000000 14.000000000\n21 22.0 23.000000000 24.000000000\n31 32.0 33.000000000 34.000000000\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17043393\/setting-the-fmt-option-in-numpy-savetxt","best_answers_votes":87,"tags":["python","arrays","string","numpy","save"],"question_length":530,"response_length":1626,"tag_count":5} {"question":"How do I subtract the previous row from the current row in a pandas dataframe and apply it to every row; without using a loop? I am using Python3.5 and I am working with pandas. I have loaded stock data from yahoo finance and have saved the files to csv. My DataFrames load this data from the csv. This is a copy of ten rows of the csv file that is my DataFrame: \n```\nDate Open High Low Close Volume Adj Close \n1990-04-12 26.875000 26.875000 26.625 26.625 6100 250.576036\n1990-04-16 26.500000 26.750000 26.375 26.750 500 251.752449\n1990-04-17 26.750000 26.875000 26.750 26.875 2300 252.928863\n1990-04-18 26.875000 26.875000 26.500 26.625 3500 250.576036\n1990-04-19 26.500000 26.750000 26.500 26.750 700 251.752449\n1990-04-20 26.750000 26.875000 26.750 26.875 2100 252.928863\n1990-04-23 26.875000 26.875000 26.750 26.875 700 252.928863\n1990-04-24 27.000000 27.000000 26.000 26.000 2400 244.693970\n1990-04-25 25.250000 25.250000 24.875 25.125 9300 236.459076\n1990-04-26 25.000000 25.250000 24.750 25.000 1200 235.282663\n```\nI know that I can use iloc, loc, ix but these values that I index will only give my specific rows and columns and will not perform the operation on every row. For example: Row one of the data in the open column has a value of 26.875 and the row below it has 26.50. The price dropped .375 cents. I want to be able to capture the % of increase or decrease from the previous day, so to finish this example .375 divided by 26.875 = 1.4% decrease from one day to the next. I want to be able to run this calculation on every row so I know how much it has increased or decreased from the previous day. The index functions I have tried are absolute, and I don't want to use a loop.
Is there a way I can do this with the ix, iloc, loc or another function?","response":"You can use the pct_change() and\/or diff() methods. Demo: \n```\n\n```python\ndf.Close.pct_change() * 100\n#Output\n#0 NaN\n#1 0.469484\n#2 0.467290\n#3 -0.930233\n#4 0.469484\n#5 0.467290\n#6 0.000000\n#7 -3.255814\n#8 -3.365385\n#9 -0.497512\n#Name: Close, dtype: float64\n```\n\n```python\ndf.Close.diff()\n#Output\n#0 NaN\n#1 0.125\n#2 0.125\n#3 -0.250\n#4 0.125\n#5 0.125\n#6 0.000\n#7 -0.875\n#8 -0.875\n#9 -0.125\n#Name: Close, dtype: float64\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/39479919\/how-do-i-subtract-the-previous-row-from-the-current-row-in-a-pandas-dataframe-an","best_answers_votes":76,"tags":["python","pandas","numpy","dataframe","indexing"],"question_length":1909,"response_length":452,"tag_count":5} {"question":"Numpy Adding two vectors with different sizes If I have two numpy arrays of different sizes, how can I superimpose them? \n```\na = numpy.array([0, 10, 20, 30])\nb = numpy.array([20, 30, 40, 50, 60, 70])\n```\nWhat is the cleanest way to add these two vectors to produce a new vector (20, 40, 60, 80, 60, 70)? This is my generic question. For background, I am specifically applying a Green's transform function and need to superimpose the results for each time step in the evaluation onto the responses previously accumulated.","response":"This could be what you are looking for \n```\nif len(a) < len(b):\n    c = b.copy()\n    c[:len(a)] += a\nelse:\n    c = a.copy()\n    c[:len(b)] += b\n```\nBasically you copy the longer one and then add in-place the shorter one.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7891697\/numpy-adding-two-vectors-with-different-sizes","best_answers_votes":40,"tags":["python","numpy","linear-algebra"],"question_length":509,"response_length":219,"tag_count":3} {"question":"How do I extract a sub-array from a numpy 2d array? [duplicate] I'd like to extract a numpy array with a specified size from a numpy 2d array--essentially I want to crop the array. For example, if I have a numpy array like this: \n```\n([1,2,3],\n [4,5,6],\n [7,8,9])\n```\nI'd like to extract a 2x2 sub-array from it and the result should be: \n```\n([1,2],\n [4,5])\n```\nHow can I do that?","response":"Given this array: \n```\n\n```python\na\n#Output\n#array([[1, 2, 3],\n# [4, 5, 6],\n# [7, 8, 9]])\n#```\n#You can slice it along both dimensions: \n#```\n```\n\n```python\na[:2,:2]\n#Output\n#array([[1, 2],\n# [4, 5]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35681054\/how-do-i-extract-a-sub-array-from-a-numpy-2d-array","best_answers_votes":56,"tags":["python","arrays","numpy"],"question_length":473,"response_length":180,"tag_count":3} {"question":"How to find max value in a numpy array column? I can find quite a few permutations of this question, but not this (rather simple) one: how do I find the maximum value of a specific column of a numpy array (in the most pythonic way)? \n```\na = array([[10, 2], [3, 4], [5, 6]])\n```\nWhat I want is the max value in the first column and second column (these are x,y coordinates and I eventually need the height and width of each shape), so max x coordinate is 10 and max y coordinate is 6.
I've tried: \n```\nxmax = numpy.amax(a,axis=0)\nymax = numpy.amax(a,axis=1)\n```\nbut these yield \n```\narray([10, 6])\narray([10, 4, 6])\n```\n...not what I expected. My solution is to use slices: \n```\nxmax = numpy.max(a[:,0])\nymax = numpy.max(a[:,1])\n```\nWhich works, but doesn't seem to be the best approach. Suggestions?","response":"Just unpack the list: \n```\n\n```python\nxmax, ymax = a.max(axis=0)\n```\n\n```python\nprint xmax, ymax\n#10 6\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22129225\/how-to-find-max-value-in-a-numpy-array-column","best_answers_votes":55,"tags":["python","arrays","numpy"],"question_length":796,"response_length":101,"tag_count":3} {"question":"ufunc 'add' did not contain loop with signature matching type dtype ('S32') ('S32') ('S32') I'm trying to run someone's script for some simulations I've made to try plotting some histograms, but when I do I always get the error message mentioned above. I have no idea what's gone wrong. Here's the complete traceback error I get: \n```none\nFile \"AVAnalyse.py\", line 205, in <module>\n f.write(line[0] + ' ' + line[1] + ' ' + line[2] + ' ' + line[3]) \nTypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')\n```\nThis is the code I am trying to run: \n```\nname_out = \"histogram_\" + donor + \"_\" + acceptor + \".dat\" \nf = open(name_out, 'w')\nf.write('distance d.probability efficiency e.probability')\nfor line in dist_hist:\n    f.write(line[0] + ' ' + line[1] + ' ' + line[2] + ' ' + line[3])\nf.close()\n\nprint \"data saved in \" + \"histogram_\" + donor + \"_\" + acceptor + \".dat\"\n```\nWhat am I doing wrong?","response":"It seems like line[0], line[1], line[2], line[3] are elements of dist_hist. dist_hist is a numpy.ndarray. The elements of dist_hist have a numeric type (like np.float64), based on calculations from your attached file. You're trying to add elements of different types: np.float64 and str. If you want to avoid this TypeError, you can change the type of line[0], line[1], line[2], line[3] to str. Your snippet of code should be like this: \n```\nname_out = \"histogram_\"+donor+\"_\"+acceptor+\".dat\" \nf = open(name_out,'w')\nf.write('distance d.probability efficiency e.probability')\nfor line in dist_hist:\n    f.write(str(line[0])+' '+str(line[1])+' '+str(line[2])+' '+str(line[3]))\nf.close()\n\nprint \"data saved in \" +\"histogram_\"+donor+\"_\"+acceptor+\".dat\"\n```\nEDIT: You should replace this snippet of code: \n```\nname_out = \"histogram_\"+donor+\"_\"+acceptor+\".dat\" \nf = open(name_out,'w')\nf.write('distance d.probability efficiency e.probability')\nfor line in dist_hist:\n    f.write(line[0]+' '+line[1]+' '+line[2]+' '+line[3])\nf.close()\n```\nto this one: \n```\nname_out = \"histogram_\" + donor + \"_\" + acceptor + \".dat\" \nf = open(name_out,'w')\nf.write('distance d.probability efficiency e.probability\\n')\nfor line in dist_hist:\n    f.write(str(line[0]) + ' ' + str(line[1]) + ' ' + str(line[2]) + ' ' + str(line[3]) + '\\n')\nf.close()\n```\nBefore that, strings were written to the file in one line.
Because of that, your data variable pointed to an empty array, since we started to read from the 2nd line (which was empty).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41859824\/ufunc-add-did-not-contain-loop-with-signature-matching-type-dtype-s32-s3","best_answers_votes":34,"tags":["python","numpy"],"question_length":966,"response_length":1506,"tag_count":2} {"question":"How to create a white image in Python? The following code creates a black picture: \n```\nfrom scipy.misc import imread,imsave\nfrom numpy import zeros\n\nimg = zeros([100,100,3])\nfor y in range(len(img)):\n    for x in range(len(img[0])):\n        img[y,x] = [255,255,255]\nimsave(\"Result.jpg\",img)\n```\nI would have assumed it to be white. Edit in 2024: The code block is deprecated and won't run anymore.","response":"Every color in an image is represented by one byte. So to create an image array, you should set its dtype to uint8. And, you don't need a for-loop to set every element to 255; you can use the fill() method or a slice index: \n```\nimport numpy as np\nimg = np.zeros([100,100,3],dtype=np.uint8)\nimg.fill(255) # or img[:] = 255\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10465747\/how-to-create-a-white-image-in-python","best_answers_votes":87,"tags":["python","numpy","scipy"],"question_length":398,"response_length":320,"tag_count":3} {"question":"Filtering pandas dataframe with multiple Boolean columns I am trying to filter a df using several Boolean variables that are a part of the df, but have been unable to do so. Sample data: \n```\nA | B | C | D\nJohn Doe | 45 | True | False\nJane Smith | 32 | False | False\nAlan Holmes | 55 | False | True\nEric Lamar | 29 | True | True\n```\nThe dtype for columns C and D is Boolean. I want to create a new df (df1) with only the rows where either C or D is True. It should look like this: \n```\nA | B | C | D\nJohn Doe | 45 | True | False\nAlan Holmes | 55 | False | True\nEric Lamar | 29 | True | True\n```\nI've tried something like this, which faces issues because it can't handle the Boolean type: \n```\ndf1 = df[(df['C']=='True') or (df['D']=='True')]\n```\nAny ideas?","response":"```\n\n```python\nd\n#Output\n#A B C D\n#0 John Doe 45 True False\n#1 Jane Smith 32 False False\n#2 Alan Holmes 55 False True\n#3 Eric Lamar 29 True True\n#```\n#Solution 1: \n#```\n```\n\n```python\nd.loc[d.C | d.D]\n#Output\n#A B C D\n#0 John Doe 45 True False\n#2 Alan Holmes 55 False True\n#3 Eric Lamar 29 True True\n#```\n#Solution 2: \n#```\n```\n\n```python\nd[d[['C','D']].any(1)]\n#Output\n#A B C D\n#0 John Doe 45 True False\n#2 Alan Holmes 55 False True\n#3 Eric Lamar 29 True True\n#```\n#Solution 3: \n#```\n```\n\n```python\nd.query(\"C or D\")\n#Output\n#A B C D\n#0 John Doe 45 True False\n#2 Alan Holmes 55 False True\n#3 Eric Lamar 29 True True\n#```\n#PS If you change your solution to: \n#```\n#df[(df['C']==True) | (df['D']==True)]\n#```\n#it'll work too. Pandas docs - boolean indexing. Why should we NOT use the \"PEP-compliant\" df[\"col_name\"] is True instead of df[\"col_name\"] == True?
\n#```\n```\n\n```python\ndf = pd.DataFrame({\"col\":[True, True, True]})\n```\n\n```python\ndf\n#Output\n#col\n#0 True\n#1 True\n#2 True\n```\n\n```python\ndf[\"col\"] is True\n#Output\n#False # <----- oops, that's not exactly what we wanted\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/46207530\/filtering-pandas-dataframe-with-multiple-boolean-columns","best_answers_votes":80,"tags":["python","pandas","numpy","dataframe","boolean"],"question_length":755,"response_length":1211,"tag_count":5} {"question":"What is the internal precision of numpy.float128? What precision does numpy.float128 map to internally? Is it __float128 or long double? Or something else entirely? A potential follow on question if anybody knows: is it safe in C to cast a __float128 to a (16 byte) long double, with just a loss in precision? (this is for interfacing with a C lib that operates on long doubles). Edit: In response to the comment, the platform is 'Linux-3.0.0-14-generic-x86_64-with-Ubuntu-11.10-oneiric'. Now, if numpy.float128 has varying precision dependent on the platform, that is also useful knowledge for me! Just to be clear, it is the precision I am interested in, not the size of an element.","response":"numpy.longdouble refers to whatever type your C compiler calls long double. Currently, this is the only extended precision floating point type that numpy supports. On x86-32 and x86-64, this is an 80-bit floating point type. On more exotic systems it may be something else (IIRC on Sparc it's an actual 128-bit IEEE float, and on PPC it's double-double). (It also may depend on what OS and compiler you're using -- e.g. MSVC on Windows doesn't support any kind of extended precision at all.) Numpy will also export some name like numpy.float96 or numpy.float128. Which of these names is exported depends on your platform\/compiler, but whatever you get always refers to the same underlying type as longdouble. Also, these names are highly misleading. They do not indicate a 96- or 128-bit IEEE floating point format. Instead, they indicate the number of bits of alignment used by the underlying long double type. So e.g. on x86-32, long double is 80 bits, but gets padded up to 96 bits to maintain 32-bit alignment, and numpy calls this float96. On x86-64, long double is again the identical 80 bit type, but now it gets padded up to 128 bits to maintain 64-bit alignment, and numpy calls this float128. There's no extra precision, just extra padding. Recommendation: ignore the float96\/float128 names, just use numpy.longdouble. Or better yet stick to doubles unless you have a truly compelling reason. They'll be faster, more portable, etc.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9062562\/what-is-the-internal-precision-of-numpy-float128","best_answers_votes":61,"tags":["python","c","numpy"],"question_length":684,"response_length":1441,"tag_count":3} {"question":"numpy.asarray: how to check up that its result dtype is numeric? I have to create a numpy.ndarray from array-like data with int, float or complex numbers. I hope to do it with numpy.asarray function. I don't want to give it a strict dtype argument, because I want to convert complex values to complex64 or complex128, floats to float32 or float64, etc. 
But if I just simply run numpy.ndarray(some_unknown_data) and look at the dtype of its result, how can I understand, that the data is numeric, not object or string or something else?","response":"You could check if the dtype of the array is a sub-dtype of np.number. For example: \n```\n\n```python\nnp.issubdtype(np.complex128, np.number)\n#Output\n#True\n```\n\n```python\nnp.issubdtype(np.int32, np.number)\n#Output\n#True\n```\n\n```python\nnp.issubdtype(np.str_, np.number)\n#Output\n#False\n```\n\n```python\nnp.issubdtype('O', np.number) # 'O' is object\n#Output\n#False\n#```\n#Essentially, this just checks whether the dtype is below 'number' in the NumPy dtype hierarchy:\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29518923\/numpy-asarray-how-to-check-up-that-its-result-dtype-is-numeric","best_answers_votes":82,"tags":["python","arrays","numpy","types"],"question_length":535,"response_length":381,"tag_count":4} {"question":"Finding range of a numpy array elements I have a NumPy array of size 94 x 155: \n```\na = [1 2 20 68 210 290..\n 2 33 34 55 230 340..\n .. .. ... ... .... .....]\n```\nI want to calculate the range of each row, so that I get 94 ranges in a result. I tried looking for a numpy.range function, which I don't think exists. If this can be done through a loop, that's also fine. I'm looking for something like numpy.mean, which, if we set the axis parameter to 1, returns the mean for each row in the N-dimensional array.","response":"I think np.ptp might do what you want: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.ptp.html \n```\nr = np.ptp(a,axis=1)\n```\nwhere r is your range array.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12701659\/finding-range-of-a-numpy-array-elements","best_answers_votes":73,"tags":["python","arrays","numpy"],"question_length":527,"response_length":164,"tag_count":3} {"question":"How to see top n entries of term-document matrix after tfidf in scikit-learn I am new to scikit-learn, and I was using TfidfVectorizer to find the tfidf values of terms in a set of documents. I used the following code to obtain the same. \n```\nvectorizer = TfidfVectorizer(stop_words=u'english',ngram_range=(1,5),lowercase=True)\nX = vectorizer.fit_transform(lectures)\n```\nNow If I print X, I am able to see all the entries in matrix, but how can I find top n entries based on tfidf score. In addition to that is there any method that will help me to find top n entries based on tfidf score per ngram i.e. top entries among unigram,bigram,trigram and so on?","response":"Since version 0.15, the global term weighting of the features learnt by a TfidfVectorizer can be accessed through the attribute idf_, which will return an array of length equal to the feature dimension. 
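The entries of idf_ line up index-for-index with the vocabulary, which is what makes the argsort trick below work; as a quick sketch (assuming a vectorizer fitted exactly as in the next snippet): \n```\nfor term, weight in zip(vectorizer.get_feature_names(), vectorizer.idf_):\n    print(term, weight)  # each term paired with its global idf weight\n```\n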
Sort the features by this weighting to get the top weighted features: \n```\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nimport numpy as np\n\nlectures = [\"this is some food\", \"this is some drink\"]\nvectorizer = TfidfVectorizer()\nX = vectorizer.fit_transform(lectures)\nindices = np.argsort(vectorizer.idf_)[::-1]\nfeatures = vectorizer.get_feature_names()\ntop_n = 2\ntop_features = [features[i] for i in indices[:top_n]]\nprint top_features\n```\nOutput: \n```\n[u'food', u'drink']\n```\nThe second problem of getting the top features by ngram can be done using the same idea, with some extra steps of splitting the features into different groups: \n```\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom collections import defaultdict\n\nlectures = [\"this is some food\", \"this is some drink\"]\nvectorizer = TfidfVectorizer(ngram_range=(1,2))\nX = vectorizer.fit_transform(lectures)\nfeatures_by_gram = defaultdict(list)\nfor f, w in zip(vectorizer.get_feature_names(), vectorizer.idf_):\n    features_by_gram[len(f.split(' '))].append((f, w))\ntop_n = 2\nfor gram, features in features_by_gram.iteritems():\n    top_features = sorted(features, key=lambda x: x[1], reverse=True)[:top_n]\n    top_features = [f[0] for f in top_features]\n    print '{}-gram top:'.format(gram), top_features\n```\nOutput: \n```\n1-gram top: [u'drink', u'food']\n2-gram top: [u'some drink', u'some food']\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25217510\/how-to-see-top-n-entries-of-term-document-matrix-after-tfidf-in-scikit-learn","best_answers_votes":65,"tags":["python","numpy","scikit-learn","tf-idf","top-n"],"question_length":655,"response_length":1589,"tag_count":5} {"question":"Using numpy.genfromtxt to read a csv file with strings containing commas I am trying to read in a csv file with numpy.genfromtxt but some of the fields are strings which contain commas. The strings are in quotes, but numpy is not recognizing the quotes as defining a single string. For example, with the data in 't.csv': \n```\n2012, \"Louisville KY\", 3.5\n2011, \"Lexington, KY\", 4.0\n```\nthe code \n```\nnp.genfromtxt('t.csv', delimiter=',')\n```\nproduces the error: ValueError: Some errors were detected ! Line #2 (got 4 columns instead of 3) The data structure I am looking for is: \n```\narray([['2012', 'Louisville KY', '3.5'],\n ['2011', 'Lexington, KY', '4.0']], \n dtype='|S13')\n```\nLooking over the documentation, I don't see any options to deal with this. Is there a way to do it with numpy, or do I just need to read in the data with the csv module and then convert it to a numpy array?","response":"You can use pandas (which is becoming the default library for working with dataframes (heterogeneous data) in scientific Python) for this. Its read_csv can handle this. From the docs: quotechar : string \n```\nThe character used to denote the start and end of a quoted item. Quoted items \ncan include the delimiter and it will be ignored.\n```\nThe default value is \".
An example: \n```\n\n```python\nimport pandas as pd\n```\n\n```python\nfrom StringIO import StringIO\n```\n\n```python\ns=\"\"\"year, city, value\n ...: 2012, \"Louisville KY\", 3.5\n ...: 2011, \"Lexington, KY\", 4.0\"\"\"\n```\n\n```python\npd.read_csv(StringIO(s), quotechar='\"', skipinitialspace=True)\n#Output\n#year city value\n#0 2012 Louisville KY 3.5\n#1 2011 Lexington, KY 4.0\n#```\n#The trick here is that you also have to use skipinitialspace=True to deal with the spaces after the comma-delimiter. Apart from a powerful csv reader, I can also strongly advice to use pandas with the heterogeneous data you have (the example output in numpy you give are all strings, although you could use structured arrays).\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17933282\/using-numpy-genfromtxt-to-read-a-csv-file-with-strings-containing-commas","best_answers_votes":26,"tags":["python","pandas","numpy","csv","genfromtxt"],"question_length":896,"response_length":1049,"tag_count":5} {"question":"Numpy modify array in place? I have the following code which is attempting to normalize the values of an m x n array (It will be used as input to a neural network, where m is the number of training examples and n is the number of features). However, when I inspect the array in the interpreter after the script runs, I see that the values are not normalized; that is, they still have the original values. I guess this is because the assignment to the array variable inside the function is only seen within the function. How can I do this normalization in place? Or do I have to return a new array from the normalize function? \n```\nimport numpy\n\ndef normalize(array, imin = -1, imax = 1):\n \"\"\"I = Imin + (Imax-Imin)*(D-Dmin)\/(Dmax-Dmin)\"\"\"\n\n dmin = array.min()\n dmax = array.max()\n\n array = imin + (imax - imin)*(array - dmin)\/(dmax - dmin)\n print array[0]\n\n\ndef main():\n\n array = numpy.loadtxt('test.csv', delimiter=',', skiprows=1)\n for column in array.T:\n normalize(column)\n\n return array\n\nif __name__ == \"__main__\":\n a = main()\n```","response":"If you want to apply mathematical operations to a numpy array in-place, you can simply use the standard in-place operators +=, -=, \/=, etc. 
So for example: \n```\n\n```python\ndef foo(a):\n a += 10\n\n```\n\n```python\na = numpy.arange(10)\n```\n\n```python\na\n#Output\n#array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n```\n\n```python\nfoo(a)\n```\n\n```python\na\n#Output\n#array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])\n#```\n#The in-place version of these operations is a tad faster to boot, especially for larger arrays: \n#```\n```\n\n```python\ndef normalize_inplace(array, imin=-1, imax=1):\n dmin = array.min()\n dmax = array.max()\n array -= dmin\n array *= imax - imin\n array \/= dmax - dmin\n array += imin\n \n```\n\n```python\ndef normalize_copy(array, imin=-1, imax=1):\n dmin = array.min()\n dmax = array.max()\n return imin + (imax - imin) * (array - dmin) \/ (dmax - dmin)\n\n```\n\n```python\na = numpy.arange(10000, dtype='f')\n```\n\n```python\n%timeit normalize_inplace(a)\n#Output\n#10000 loops, best of 3: 144 us per loop\n```\n\n```python\n%timeit normalize_copy(a)\n#Output\n#10000 loops, best of 3: 146 us per loop\n```\n\n```python\na = numpy.arange(1000000, dtype='f')\n```\n\n```python\n%timeit normalize_inplace(a)\n#Output\n#100 loops, best of 3: 12.8 ms per loop\n```\n\n```python\n%timeit normalize_copy(a)\n#Output\n#100 loops, best of 3: 16.4 ms per loop\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10149416\/numpy-modify-array-in-place","best_answers_votes":33,"tags":["python","arrays","numpy","in-place"],"question_length":1068,"response_length":1232,"tag_count":4} {"question":"What does \"an intermediate result is being cached\" mean? I have a set of n vectors stored in the 3 x n matrix z. I find the outer product using np.einsum. When I timed it using: \n```\n%timeit v=np.einsum('i...,j...->ij...',z,z)\n```\nI got the result: \n```\nThe slowest run took 7.23 times longer than the fastest. This could mean that an\nintermediate result is being cached \n100000 loops, best of 3: 2.9 \u00b5s per loop\n```\nWhat is happening here and can it be avoided? The best 3 is 2.9us, but the slowest maybe more typical.","response":"The message \"intermediate result is being cached\" is just a blind guess in the canned message reported by %timeit. It may or may not be true, and you should not assume it is correct. In particular, one of the most common reasons for the first run being slowest is that the array is in the CPU cache only after the first run. CPUs cache things automatically; you cannot avoid this, and you don't really want to avoid it. However, optimizing algorithms so that CPU caches can work optimally is nowadays one of the bottlenecks that high-performance computing needs to take into account.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29759883\/what-does-an-intermediate-result-is-being-cached-mean","best_answers_votes":40,"tags":["numpy","ipython","benchmarking","vectorization"],"question_length":519,"response_length":583,"tag_count":4} {"question":"How to fast change image brightness with python + OpenCV? I have a sequence of images. I need to average brightness of these images. 
First example (very slow): \n```\nimg = cv2.imread('test.jpg') #load rgb image\nhsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) #convert it to hsv\n\nfor x in range(0, len(hsv)):\n    for y in range(0, len(hsv[0])):\n        hsv[x, y][2] += value\n\nimg = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)\ncv2.imwrite(\"image_processed.jpg\", img)\n```\nSecond example (quickly): \n```\nhsv += value\n```\nThis example is very fast, but it changes all of the HSV values (I need to change only V, the brightness).","response":"I know this question is a bit old, but I thought I might post the complete solution that worked for me (takes care of the overflow situation by saturating at 255): \n```\ndef increase_brightness(img, value=30):\n    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)\n    h, s, v = cv2.split(hsv)\n\n    lim = 255 - value\n    v[v > lim] = 255\n    v[v <= lim] += value\n\n    final_hsv = cv2.merge((h, s, v))\n    img = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)\n    return img\n```\nThis can be used as follows: \n```\nframe = increase_brightness(frame, value=20)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32609098\/how-to-fast-change-image-brightness-with-python-opencv","best_answers_votes":85,"tags":["python","opencv","numpy","image-processing"],"question_length":594,"response_length":546,"tag_count":4} {"question":"Calculating Slopes in Numpy (or Scipy) I am trying to find the fastest and most efficient way to calculate slopes using Numpy and Scipy. I have a data set of three Y variables and one X variable and I need to calculate their individual slopes. For example, I can easily do this one row at a time, as shown below, but I was hoping there was a more efficient way of doing this. I also don't think linregress is the best way to go because I don't need any of the auxiliary variables like intercept, standard error, etc in my results. Any help is greatly appreciated. \n```\nimport numpy as np\nfrom scipy import stats\n\nY = [[ 2.62710000e+11 3.14454000e+11 3.63609000e+11 4.03196000e+11\n 4.21725000e+11 2.86698000e+11 3.32909000e+11 4.01480000e+11\n 4.21215000e+11 4.81202000e+11]\n [ 3.11612352e+03 3.65968334e+03 4.15442691e+03 4.52470938e+03\n 4.65011423e+03 3.10707392e+03 3.54692896e+03 4.20656404e+03\n 4.34233412e+03 4.88462501e+03]\n [ 2.21536396e+01 2.59098311e+01 2.97401268e+01 3.04784552e+01\n 3.13667639e+01 2.76377113e+01 3.27846013e+01 3.73223417e+01\n 3.51249997e+01 4.42563658e+01]]\nX = [ 1990. 1991. 1992. 1993. 1994. 1995. 1996. 1997. 1998. 1999.]
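# each linregress call below fits one row of Y against X; the slopes are then normalized by each row's first value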
\nslope_0, intercept, r_value, p_value, std_err = stats.linregress(X, Y[0,:])\nslope_1, intercept, r_value, p_value, std_err = stats.linregress(X, Y[1,:])\nslope_2, intercept, r_value, p_value, std_err = stats.linregress(X, Y[2,:])\nslope_0 = slope_0\/Y[0,:][0]\nslope_1 = slope_1\/Y[1,:][0]\nslope_2 = slope_2\/Y[2,:][0]\nb, a = polyfit(X, Y[1,:], 1)\nslope_1_a = b\/Y[1,:][0]\n```","response":"The fastest and most efficient way would be to use the native scipy function linregress, which calculates everything: slope (slope of the regression line), intercept (intercept of the regression line), r-value (correlation coefficient), p-value (two-sided p-value for a hypothesis test whose null hypothesis is that the slope is zero), and stderr (standard error of the estimate). And here is an example: \n```\na = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3]\nb = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15]\nfrom scipy.stats import linregress\nlinregress(a, b)\n```\nwill return you: \n```\nLinregressResult(slope=0.20833333333333337, intercept=13.375, rvalue=0.14499815458068521, pvalue=0.68940144811669501, stderr=0.50261704627083648)\n```\nP.S. Just a mathematical formula for the slope: slope = (n*sum(x*y) - sum(x)*sum(y)) \/ (n*sum(x**2) - sum(x)**2).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9538525\/calculating-slopes-in-numpy-or-scipy","best_answers_votes":74,"tags":["python","numpy","scipy"],"question_length":1670,"response_length":759,"tag_count":3} {"question":"How to center labels in histogram plot I have a numpy array results that looks like \n```\n[ 0. 2. 0. 0. 0. 0. 3. 0. 0. 0. 0. 0. 0. 0. 0. 2. 0. 0.\n 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.\n 0. 1. 1. 0. 0. 0. 0. 2. 0. 3. 1. 0. 0. 2. 2. 0. 0. 0.\n 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 2. 0. 0. 0. 0.\n 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 1. 0. 0. 0. 0. 0.\n 0. 0. 0. 1. 0. 0. 0. 1. 2. 2.]\n```\nI would like to plot a histogram of it. I have tried \n```\nimport matplotlib.pyplot as plt\nplt.hist(results, bins=range(5))\nplt.show()\n```\nThis gives me a histogram with the x-axis labelled 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0. I would like the x-axis to be labelled 0 1 2 3 instead, with the labels in the center of each bar. How can you do that?","response":"The other answers just don't do it for me. The benefit of using plt.bar over plt.hist is that bar can use align='center': \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\narr = np.array([ 0., 2., 0., 0., 0., 0., 3., 0., 0., 0., 0., 0., 0.,\n 0., 0., 2., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.,\n 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1.,\n 0., 0., 0., 0., 2., 0., 3., 1., 0., 0., 2., 2., 0.,\n 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0.,\n 0., 0., 2., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 3., 1., 0., 0., 0., 0., 0., 0.,\n 0., 0., 1., 0., 0., 0., 1., 2., 2.])\n\nlabels, counts = np.unique(arr, return_counts=True)\nplt.bar(labels, counts, align='center')\nplt.gca().set_xticks(labels)\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23246125\/how-to-center-labels-in-histogram-plot","best_answers_votes":66,"tags":["python","numpy","matplotlib","histogram"],"question_length":845,"response_length":881,"tag_count":4} {"question":"undo or reverse argsort(), python Given an array 'a', I would like to sort the array by columns (sort(a, axis=0)), do some stuff to the array, and then undo the sort. By that I don't mean re-sorting, but basically reversing how each element was moved.
I assume argsort() is what I need, but it is not clear to me how to sort an array with the results of argsort(), or, more importantly, how to apply the reverse\/inverse of argsort(). Here is a little more detail: I have an array a, shape(a) = rXc. I need to sort each column: \n```\naargsort = a.argsort(axis=0) # May use this later\naSort = np.sort(a, axis=0)\n```\nNow average each row: \n```\naSortRM = aSort.mean(axis=1)\n```\nNow replace each column in a row with the row mean (is there a better way than this?): \n```\naWithMeans = ones_like(a)\nfor ind in range(r): # r = number of rows\n    aWithMeans[ind] *= aSortRM[ind]\n```\nNow I need to undo the sort I did in the first step. ????","response":"There are probably better solutions to the problem you are actually trying to solve than this (performing an argsort usually precludes the need to actually sort), but here you go: \n```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.random.randint(0,10,10)\n```\n\n```python\naa = np.argsort(a)\n```\n\n```python\naaa = np.argsort(aa)\n```\n\n```python\na # original\n#Output\n#array([6, 4, 4, 6, 2, 5, 4, 0, 7, 4])\n```\n\n```python\na[aa] # sorted\n#Output\n#array([0, 2, 4, 4, 4, 4, 5, 6, 6, 7])\n```\n\n```python\na[aa][aaa] # undone\n#Output\n#array([6, 4, 4, 6, 2, 5, 4, 0, 7, 4])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2483696\/undo-or-reverse-argsort-python","best_answers_votes":87,"tags":["python","arrays","sorting","numpy"],"question_length":894,"response_length":468,"tag_count":4} {"question":"Accessing NumPy array elements not in a given index list I have a NumPy array with the shape (100, 170, 256). And I have an array consisting of indexes [0, 10, 20, 40, 70]. I can get the sub-arrays corresponding to the indexes as follows: \n```\nsub_array = array[..., index]\n```\nThis returns an array with the shape (100, 170, 5) as expected. Now, I am trying to take the complement and get the sub-array NOT corresponding to those indexes. So, I did: \n```\nsub_array = array[..., ~index]\n```\nThis still returns an array of shape (100, 170, 5) for some reason. I wonder how to do this complement operation of these indexes in python? [EDIT] Also tried: \n```\nsub_array = array[..., not(index.any)]\n```\nHowever, this does not do what I want either (I am after an array of shape (100, 170, 251)).","response":"The question is answered, but I propose a benchmark of the three methods here.
The fastest solution is the boolean mask (with both small and larger index array sizes): \n```\nmask = np.ones(arr.size, dtype=bool)\nmask[indexes] = False\nresult = arr[mask]\n```\nIt is 2000 times faster than the list comprehension and marginally faster than np.delete. Code to reproduce the three proposed solutions: list comprehension (sol1), boolean mask (sol2) or np.delete (sol3): \n```\nd = 100000\na = np.random.rand(d)\nidx = np.random.randint(d, size = 10)\n\n\n# list comprehension\ndef sol1(arr, indexes):\n    return arr[[i for i in range(arr.size) if i not in indexes]]\nsol1(a, idx)\n# Out[30]: array([0.13044518, 0.68564961, 0.03033223, ..., 0.03796257, 0.40137137, 0.45403929])\n\n# boolean mask\ndef sol2(arr, indexes):\n    mask = np.ones(arr.size, dtype=bool)\n    mask[indexes] = False\n    return arr[mask]\nsol2(a, idx)\n# Out[32]: array([0.13044518, 0.68564961, 0.03033223, ..., 0.03796257, 0.40137137, 0.45403929])\n\n# np.delete\ndef sol3(arr, indexes):\n    return np.delete(arr, indexes)\nsol3(a, idx)\n# Out[36]: array([0.13044518, 0.68564961, 0.03033223, ..., 0.03796257, 0.40137137, 0.45403929])\n```\nResults: \n```\n%timeit sol1(a, idx)\n384 ms \u00b1 2.75 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n%timeit sol2(a, idx)\n154 \u00b5s \u00b1 15.7 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n%timeit sol3(a, idx)\n194 \u00b5s \u00b1 18.8 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n\nidx = np.random.randint(d, size = 1000)\n%timeit sol1(a, idx)\n386 ms \u00b1 7.75 ms per loop (mean \u00b1 std. dev. of 7 runs, 1 loop each)\n%timeit sol2(a, idx)\n171 \u00b5s \u00b1 11.3 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n%timeit sol3(a, idx)\n205 \u00b5s \u00b1 10.5 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27824075\/accessing-numpy-array-elements-not-in-a-given-index-list","best_answers_votes":33,"tags":["python","numpy"],"question_length":793,"response_length":1765,"tag_count":2} {"question":"Creating a Distance Matrix? I am currently reading data into a dataframe that looks like this. \n```\nCity XCord YCord \nBoston 5 2\nPhoenix 7 3\nNew York 8 1\n..... . .\n```\nI want to create a Euclidean distance matrix from this data showing the distance between all city pairs, so I get a resulting matrix like: \n```\nBoston Phoenix New York\nBoston 0 2.236 3.162\nPhoenix 2.236 0 2.236\nNew York 3.162 2.236 0\n```\nThere are many more cities and coordinates in my actual data frame, so I need to be able to somehow iterate over all of the city pairs and create a distance matrix like the one I have shown above, but I am not sure how to pair all of the cities together and apply the Euclidean distance formula. Any help would be appreciated.","response":"I think you are interested in distance_matrix.
For example: Create data: 
```
import pandas as pd
from scipy.spatial import distance_matrix
 
data = [[5, 7], [7, 3], [8, 1]]
ctys = ['Boston', 'Phoenix', 'New York']
df = pd.DataFrame(data, columns=['xcord', 'ycord'], index=ctys)
```
Output: 
```
xcord ycord
Boston 5 7
Phoenix 7 3
New York 8 1
```
Using the distance matrix function: 
```
pd.DataFrame(distance_matrix(df.values, df.values), index=df.index, columns=df.index)
```
Results: 
```
Boston Phoenix New York
Boston 0.000000 4.472136 6.708204
Phoenix 4.472136 0.000000 2.236068
New York 6.708204 2.236068 0.000000
```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29481485\/creating-a-distance-matrix","best_answers_votes":56,"tags":["python","numpy","dataframe"],"question_length":868,"response_length":670,"tag_count":3} {"question":"How to force zero interception in linear regression? I have some more or less linear data of the form: 
```
x = [0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 20.0, 40.0, 60.0, 80.0]
y = [0.50505332505407008, 1.1207373784533172, 2.1981844719020001, 3.1746209003398689, 4.2905482471260044, 6.2816226678076958, 11.073788414382639, 23.248479770546009, 32.120462301367183, 44.036117671229206, 54.009003143831116, 102.7077685684846, 185.72880217806673, 256.12183145545811, 301.97120103079675]
```
I am using scipy.optimize.leastsq to fit a linear regression to this: 
```
def lin_fit(x, y):
 '''Fits a linear fit of the form mx+b to the data'''
 fitfunc = lambda params, x: params[0] * x + params[1] #create fitting function of form mx+b
 errfunc = lambda p, x, y: fitfunc(p, x) - y #create error function for least squares fit

 init_a = 0.5 #find initial value for a (gradient)
 init_b = min(y) #find initial value for b (y axis intersection)
 init_p = numpy.array((init_a, init_b)) #bundle initial values in initial parameters

 #calculate best fitting parameters (i.e. m and b) using the error function
 p1, success = scipy.optimize.leastsq(errfunc, init_p.copy(), args = (x, y))
 f = fitfunc(p1, x) #create a fit with those parameters
 return p1, f
```
And it works beautifully (although I am not sure if scipy.optimize is the right thing to use here, it might be a bit over the top?). However, due to the way the data points lie it does not give me a y-axis interception at 0. I do know though that it has to be zero in this case: if x = 0 then y = 0. Is there any way I can force this?","response":"As @AbhranilDas mentioned, just use a linear method. There's no need for a non-linear solver like scipy.optimize.leastsq. Typically, you'd use numpy.polyfit to fit a line to your data, but in this case you'll need to use numpy.linalg.lstsq directly, as you want to set the intercept to zero. 
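Since the through-origin model y = a*x has a single parameter, the least-squares solution also has a simple closed form (an aside added here for context; it is the same value lstsq returns below): 
```
a = (x * y).sum() \/ (x * x).sum() # least-squares slope for y = a*x, with x and y as arrays
```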
As a quick example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, \n 20.0, 40.0, 60.0, 80.0])\n\ny = np.array([0.50505332505407008, 1.1207373784533172, 2.1981844719020001,\n 3.1746209003398689, 4.2905482471260044, 6.2816226678076958,\n 11.073788414382639, 23.248479770546009, 32.120462301367183, \n 44.036117671229206, 54.009003143831116, 102.7077685684846, \n 185.72880217806673, 256.12183145545811, 301.97120103079675])\n\n# Our model is y = a * x, so things are quite simple, in this case...\n# x needs to be a column vector instead of a 1D vector for this, however.\nx = x[:,np.newaxis]\na, _, _, _ = np.linalg.lstsq(x, y)\n\nplt.plot(x, y, 'bo')\nplt.plot(x, a*x, 'r-')\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9990789\/how-to-force-zero-interception-in-linear-regression","best_answers_votes":58,"tags":["python","numpy","scipy","statistics","linear-regression"],"question_length":1704,"response_length":1113,"tag_count":5} {"question":"How can you turn an index array into a mask array in Numpy? Is it possible to convert an array of indices to an array of ones and zeros, given the range? i.e. [2,3] -> [0, 0, 1, 1, 0], in range of 5 I'm trying to automate something like this: \n```\n\n```python\nindex_array = np.arange(200,300)\n#Output\n#array([200, 201, ... , 299])\n```\n\n```python\nmask_array = ??? # some function of index_array and 500\n#Output\n#array([0, 0, 0, ..., 1, 1, 1, ... , 0, 0, 0])\n```\n\n```python\ntrain(data[mask_array]) # trains with 200~299\n```\n\n```python\npredict(data[~mask_array]) # predicts with 0~199, 300~499\n#Output\n#```\n```","response":"Here's one way: \n```\n\n```python\nindex_array = np.array([3, 4, 7, 9])\n```\n\n```python\nn = 15\n```\n\n```python\nmask_array = np.zeros(n, dtype=int)\n```\n\n```python\nmask_array[index_array] = 1\n```\n\n```python\nmask_array\n#Output\n#array([0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0])\n#```\n#If the mask is always a range, you can eliminate index_array, and assign 1 to a slice: \n#```\n```\n\n```python\nmask_array = np.zeros(n, dtype=int)\n```\n\n```python\nmask_array[5:10] = 1\n```\n\n```python\nmask_array\n#Output\n#array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0])\n#```\n#If you want an array of boolean values instead of integers, change the dtype of mask_array when it is created: \n#```\n```\n\n```python\nmask_array = np.zeros(n, dtype=bool)\n```\n\n```python\nmask_array\n#Output\n#array([False, False, False, False, False, False, False, False, False,\n# False, False, False, False, False, False], dtype=bool)\n```\n\n```python\nmask_array[5:10] = True\n```\n\n```python\nmask_array\n#Output\n#array([False, False, False, False, False, True, True, True, True,\n# True, False, False, False, False, False], dtype=bool)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25654748\/how-can-you-turn-an-index-array-into-a-mask-array-in-numpy","best_answers_votes":49,"tags":["python","arrays","numpy","where-clause","mask"],"question_length":550,"response_length":1025,"tag_count":5} {"question":"Overcome ValueError for empty array In this discussion I tried to fix an issue in plotting limits for y-axis, after the twiny() messes up my plot. I thought this: \n```\nax.set_ylim([y.min()-0.05, y.max()+0.05])\n```\nwas a good solution. And probably it is, for continuous set of data. 
As I said in that discussion, anyway, my data are noisy, and sometimes with gaps. So it happens that some plotted ranges have no data. In that case, naturally, the use of the .min() raises the error: \n```\nValueError: zero-size array to reduction operation minimum which has no identity\n```\nbecause the array is empty. How to work around it, so that the code just does not care about putting limits on the y-axis? (Hoping that this is the only issue the empty array will cause)","response":"Just catch the exception and ignore it: \n```\ntry:\n ax.set_ylim([y.min()-0.05, y.max()+0.05])\nexcept ValueError: #raised if `y` is empty.\n pass\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22903114\/overcome-valueerror-for-empty-array","best_answers_votes":59,"tags":["python","numpy","matplotlib"],"question_length":759,"response_length":153,"tag_count":3} {"question":"in Numpy, how to zip two 2-D arrays? For example I have 2 arrays \n```\na = array([[0, 1, 2, 3],\n [4, 5, 6, 7]])\nb = array([[0, 1, 2, 3],\n [4, 5, 6, 7]])\n```\nHow can I zip a and b so I get \n```\nc = array([[(0,0), (1,1), (2,2), (3,3)],\n [(4,4), (5,5), (6,6), (7,7)]])\n```\n?","response":"You can use dstack: \n```\n\n```python\nnp.dstack((a,b))\n#Output\n#array([[[0, 0],\n# [1, 1],\n# [2, 2],\n# [3, 3]],\n```\n\n[[4, 4],\n [5, 5],\n [6, 6],\n [7, 7]]])\n```\nIf you must have tuples: \n```\n\n```python\nnp.array(zip(a.ravel(),b.ravel()), dtype=('i4,i4')).reshape(a.shape)\n#Output\n#array([[(0, 0), (1, 1), (2, 2), (3, 3)],\n# [(4, 4), (5, 5), (6, 6), (7, 7)]],\n# dtype=[('f0', ' threshold) # boolean mask where the difference of the filters exceeds threshold\nmaxima[~diff] = False # remove the local maxima which do not satisfy the minimum difference in neighborhood\n\nlabeled, num_objects = ndimage.label(maxima) # label connected components on maxima binary array (boolean mask)\nslices = ndimage.find_objects(labeled) # slices are 2d rect\nx, y = [], []\nfor dy,dx in slices:\n x_center = (dx.start + dx.stop - 1)\/2\n x.append(x_center)\n y_center = (dy.start + dy.stop - 1)\/2 \n y.append(y_center)\n\nplt.imshow(data)\nplt.savefig('\/tmp\/data.png', bbox_inches = 'tight')\n\nplt.autoscale(False)\nplt.plot(x,y, 'ro')\nplt.savefig('\/tmp\/result.png', bbox_inches = 'tight')\n```\nGiven data.png: the above program yields result.png with threshold = 1500. Lower the threshold to pick up more local maxima: References: J.F. 
Sebastian counts nuclei; Joe Kington finds paw prints; Ivan finds local maximums","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9111711\/get-coordinates-of-local-maxima-in-2d-array-above-certain-value","best_answers_votes":66,"tags":["python","image","numpy","scipy"],"question_length":914,"response_length":1468,"tag_count":4} {"question":"numpy \"TypeError: ufunc 'bitwise_and' not supported for the input types\" when using a dynamically created boolean mask In numpy, if I have an array of floats, dynamically create a boolean mask of where this array equals a particular value and do a bitwise AND with a boolean array, I get an error: 
```

```python
import numpy as np
```

```python
a = np.array([1.0, 2.0, 3.0])
```

```python
a == 2.0 & b
```

Traceback (most recent call last):
 File \"<stdin>\", line 1, in <module>
TypeError: ufunc 'bitwise_and' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
If I save the result of the comparison to a variable and carry out the bitwise AND, however, it works: 
```

```python
c = a == 2.0
```

```python
c & b
#Output
#array([False, True, False], dtype=bool)
#```
#The objects created seem the same in each case though: 
#```
```

```python
type(a == 2.0)
#Output
#<class 'numpy.ndarray'>
```

```python
(a == 2.0).dtype
#Output
#dtype('bool')
```

```python
type(c)
#Output
#<class 'numpy.ndarray'>
```

```python
c.dtype
#Output
#dtype('bool')
#```
#Why the difference?
```","response":"& has higher precedence than ==, so the expression 
```
a == 2.0 & b
```
is the same as 
```
a == (2.0 & b)
```
You get the error because bitwise AND is not defined for a floating-point scalar and a boolean array. Add parentheses to get what you expected: 
```
(a == 2.0) & b
```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/50656307\/numpy-typeerror-ufunc-bitwise-and-not-supported-for-the-input-types-when-us","best_answers_votes":74,"tags":["python","numpy"],"question_length":1025,"response_length":279,"tag_count":2} {"question":"Inserting a row at a specific location in a 2d array in numpy? I have a 2d array in numpy where I want to insert a new row. The following question, Numpy - add row to array, can help. We can use numpy.vstack, but it stacks at the start or at the end. 
Can anyone please help in this regard.","response":"You are probably looking for numpy.insert \n```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.zeros((2, 2))\n```\n\n```python\na\n#Output\n#array([[ 0., 0.],\n# [ 0., 0.]])\n## In the following line 1 is the index before which to insert, 0 is the axis.\n```\n\n```python\nnp.insert(a, 1, np.array((1, 1)), 0) \n#Output\n#array([[ 0., 0.],\n# [ 1., 1.],\n# [ 0., 0.]])\n```\n\n```python\nnp.insert(a, 1, np.array((1, 1)), 1)\n#Output\n#array([[ 0., 1., 0.],\n# [ 0., 1., 0.]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8298797\/inserting-a-row-at-a-specific-location-in-a-2d-array-in-numpy","best_answers_votes":75,"tags":["python","numpy"],"question_length":283,"response_length":412,"tag_count":2} {"question":"Efficient way to compute intersecting values between two numpy arrays I have a bottleneck in my program which is caused by the following: \n```\nA = numpy.array([10,4,6,7,1,5,3,4,24,1,1,9,10,10,18])\nB = numpy.array([1,4,5,6,7,8,9])\n\nC = numpy.array([i for i in A if i in B])\n```\nThe expected outcome for C is the following: \n```\nC = [4 6 7 1 5 4 1 1 9]\n```\nIs there a more efficient way of doing this operation? Note that array A contains repeating values and they need to be taken into account. I wasn't able to use set intersection since taking the intersection will omit the repeating values, returning just [1,4,5,6,7,9]. Also note this is only a simple demonstration. The actual array sizes can be in the order of thousands, to well over millions.","response":"You can use np.in1d: \n```\n\n```python\nA[np.in1d(A, B)]\n#Output\n#array([4, 6, 7, 1, 5, 4, 1, 1, 9])\n#```\n#np.in1d returns a boolean array indicating whether each value of A also appears in B. This array can then be used to index A and return the common values. 
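On newer NumPy versions the same test can be written with np.isin, which supersedes np.in1d (a side note added here; np.isin was introduced in NumPy 1.13): 
```
A[np.isin(A, B)] # equivalent to A[np.in1d(A, B)] for 1-D inputs
```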
It's not relevant to your example, but it's also worth mentioning that if A and B each contain unique values then np.in1d can be sped up by setting assume_unique=True: \n#```\n#np.in1d(A, B, assume_unique=True)\n#```\n#You might also be interested in np.intersect1d which returns an array of the unique values common to both arrays (sorted by value): \n#```\n```\n\n```python\nnp.intersect1d(A, B)\n#Output\n#array([1, 4, 5, 6, 7, 9])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27967914\/efficient-way-to-compute-intersecting-values-between-two-numpy-arrays","best_answers_votes":46,"tags":["python","arrays","numpy"],"question_length":750,"response_length":643,"tag_count":3} {"question":"Merge two arrays vertically to array of tuples using numpy I have two numpy arrays: \n```py\nx = np.array([-1, 0, 1, 2])\ny = np.array([-2, -1, 0, 1])\n```\nIs there a way to merge these arrays together like tuples: \n```py\narray = [(-1, -2), (0, -1), (1, 0), (2, 1)]\n```","response":"```\n\n```python\nx = np.array([-1, 0, 1, 2])\n```\n\n```python\ny = np.array([-2, -1, 0, 1])\n```\njoin them into 2d array: \n```\n```\n\n```python\nnp.array((x,y))\n#Output\n#array([[-1, 0, 1, 2],\n# [-2, -1, 0, 1]])\n#```\n#transpose that array: \n#```\n```\n\n```python\nnp.array((x,y)).T\n#Output\n#array([[-1, -2],\n# [ 0, -1],\n# [ 1, 0],\n# [ 2, 1]])\n#```\n#or use the standard Python zip - this treats the arrays as lists \n#```\n```\n\n```python\nzip(x,y) # list(zip in py3\n#Output\n#[(-1, -2), (0, -1), (1, 0), (2, 1)]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35091879\/merge-two-arrays-vertically-to-array-of-tuples-using-numpy","best_answers_votes":58,"tags":["python","arrays","list","numpy","concatenation"],"question_length":265,"response_length":504,"tag_count":5} {"question":"How to create a numpy array from a pydub AudioSegment? I'm aware of the following question: How to create a pydub AudioSegment using an numpy array? My question is the right opposite. If I have a pydub AudioSegment how can I convert it to a numpy array? I would like to use scipy filters and so on. It is not very clear to me what is the internal structure of the AudioSegment raw data.","response":"Pydub has a facility for getting the audio data as an array of samples, it is an array.array instance (not a numpy array) but you should be able to convert it to a numpy array relatively easily: \n```\nfrom pydub import AudioSegment\nsound = AudioSegment.from_file(\"sound1.wav\")\n\n# this is an array\nsamples = sound.get_array_of_samples()\n```\nYou may be able to create a numpy variant of the implementation though. That method is implemented pretty simply: \n```\ndef get_array_of_samples(self):\n \"\"\"\n returns the raw_data as an array of samples\n \"\"\"\n return array.array(self.array_type, self._data)\n```\nCreating a new audio segment from a (modified?) array of samples is also possible: \n```\nnew_sound = sound._spawn(samples)\n```\nThe above is a little hacky, it was written for internal use within the AudioSegment class, but it mainly just figures out what type of audio data you're using (array of samples, list of samples, bytes, bytestring, etc). 
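Putting the pieces together, a round trip through NumPy might look like this (a sketch added here, under the assumption that handing _spawn raw bytes is one of the accepted input types, as described above): 
```
import numpy as np

samples = np.array(sound.get_array_of_samples())
quieter = (samples * 0.5).astype(samples.dtype) # e.g. halve the amplitude
new_sound = sound._spawn(quieter.tobytes())
```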
It's safe to use despite the underscore prefix.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38015319\/how-to-create-a-numpy-array-from-a-pydub-audiosegment","best_answers_votes":25,"tags":["python","arrays","numpy","wave","pydub"],"question_length":386,"response_length":1004,"tag_count":5} {"question":"Replace the zeros in a NumPy integer array with nan I wrote a python script below: \n```py\nimport numpy as np\n\narr = np.arange(6).reshape(2, 3)\narr[arr==0]=['nan']\nprint arr\n```\nBut I got this error: \n```none\nTraceback (most recent call last):\n File \"C:\\Users\\Desktop\\test.py\", line 4, in \n arr[arr==0]=['nan']\nValueError: invalid literal for long() with base 10: 'nan'\n[Finished in 0.2s with exit code 1]\n```\nHow to replace zeros in a NumPy array with nan?","response":"np.nan has type float: arrays containing it must also have this datatype (or the complex or object datatype) so you may need to cast arr before you try to assign this value. The error arises because the string value 'nan' can't be converted to an integer type to match arr's type. \n```\n\n```python\narr = arr.astype('float')\n```\n\n```python\narr[arr == 0] = 'nan' # or use np.nan\n```\n\n```python\narr\n#Output\n#array([[ nan, 1., 2.],\n# [ 3., 4., 5.]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27778299\/replace-the-zeros-in-a-numpy-integer-array-with-nan","best_answers_votes":59,"tags":["python","arrays","numpy","nan"],"question_length":468,"response_length":424,"tag_count":4} {"question":"numpy bytes to plain string I have a numpy array X with dtype 'S' (numpy.bytes_). For example printing print(X[0, 0]) yields b'somestring'. Similarly str(X[0, 0]) returns string \"b'somestring'\". However I need to print or convert to string so that it does not contain b' at the beginning and ' at the end. I just want to print somestring or return a string \"somestring\". How to do it? Note: I cannot change the type of the array.","response":"You just need to decode the string back into ASCII, so it would just be: \n```\nbytes_string.decode('UTF-8')\n```\nDemo: \n```\n\n```python\nb'somestring'.decode('UTF-8')\n#Output\n#'somestring'\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23618218\/numpy-bytes-to-plain-string","best_answers_votes":50,"tags":["python","numpy"],"question_length":429,"response_length":172,"tag_count":2} {"question":"numpy and Global Interpreter Lock I am about to write some computationally-intensive Python code that'll almost certainly spend most of its time inside numpy's linear algebra functions. The problem at hand is embarrassingly parallel. Long story short, the easiest way for me to take advantage of that would be by using multiple threads. The main barrier is almost certainly going to be the Global Interpreter Lock (GIL). To help design this, it would be useful to have a mental model for which numpy operations can be expected to release the GIL for their duration. To this end, I'd appreciate any rules of thumb, dos and don'ts, pointers etc. In case it matters, I'm using 64-bit Python 2.7.1 on Linux, with numpy 1.5.1 and scipy 0.9.0rc2, built with Intel MKL 10.3.1.","response":"Quite some numpy routines release GIL, so they can be efficiently parallel in threads (info). Maybe you don't need to do anything special! 
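For an embarrassingly parallel workload like the one described, that can be as simple as mapping over a thread pool (a sketch, assuming the routine in question releases the GIL; np.dot does so when linked against an optimized BLAS such as MKL): 
```
import numpy as np
from multiprocessing.pool import ThreadPool

def work(m):
    return np.dot(m, m.T) # the BLAS call runs with the GIL released

chunks = [np.random.rand(500, 500) for _ in range(8)]
pool = ThreadPool(4)
results = pool.map(work, chunks)
pool.close()
pool.join()
```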
You can use this question to find whether the routines you need are among the ones that release GIL. In short, search for ALLOW_THREADS or nogil in the source. (Also note that MKL has the ability to use multiple threads for a routine, so that's another easy way to get parallelism, although possibly not the fastest kind).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6200437\/numpy-and-global-interpreter-lock","best_answers_votes":8,"tags":["python","multithreading","numpy","gil","python-multithreading"],"question_length":769,"response_length":461,"tag_count":5} {"question":"Overflow Error in Python's numpy.exp function I want to use numpy.exp like this: 
```
cc = np.array([
 [0.120,0.34,-1234.1]
])

print 1\/(1+np.exp(-cc))
```
But this gives me an error: 
```
\/usr\/local\/lib\/python2.7\/site-packages\/ipykernel\/__main__.py:5: RuntimeWarning: overflow encountered in exp
```
I can't understand why. How can I fix this? It seems the problem is with the third number (-1234.1).","response":"As fuglede says, the issue here is that np.float64 can't handle a number as large as exp(1234.1). Try using np.float128 instead: 
```

```python
cc = np.array([[0.120,0.34,-1234.1]], dtype=np.float128)
```

```python
cc
#Output
#array([[ 0.12, 0.34, -1234.1]], dtype=float128)
```

```python
1 \/ (1 + np.exp(-cc))
#Output
#array([[ 0.52996405, 0.58419052, 1.0893812e-536]], dtype=float128)
#```
#Note, however, that there are certain quirks with using extended precision. It may not work on Windows; you don't actually get the full 128 bits of precision; and you might lose the precision whenever the number passes through pure python. You can read more about the details here. For most practical purposes, you can probably approximate 1 \/ (1 + exp(1234.1)) to zero. That is to say, just ignore the warning and move on. Numpy takes care of the approximation for you (when using np.float64): 
#```
```

```python
1 \/ (1 + np.exp(-cc))
#Output
#\/usr\/local\/bin\/ipython3:1: RuntimeWarning: overflow encountered in exp
# #!\/usr\/local\/bin\/python3.4
#array([[ 0.52996405, 0.58419052, 0. ]])
#```
#If you want to suppress the warning, you could use scipy.special.expit, as suggested by WarrenWeckesser in a comment to the question: 
#```
```

```python
from scipy.special import expit
```

```python
expit(cc)
#Output
#array([[ 0.52996405, 0.58419052, 0. ]])
#```
```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40726490\/overflow-error-in-pythons-numpy-exp-function","best_answers_votes":54,"tags":["python","python-3.x","numpy","scipy"],"question_length":396,"response_length":1258,"tag_count":4} {"question":"How to delete an object from a numpy array without knowing the index Is it possible to delete an object from a numpy array without knowing the index of the object but instead knowing the object itself? I have seen that it is possible using the index of the object using the np.delete function, but I'm looking for a way to do it having the object but not its index. Example: [a,b,c,d,e,f] x = e I would like to delete x.","response":"You can find the index\/indices of the object using np.argwhere, and then delete the object(s) using np.delete. 
Example: \n```\nx = np.array([1,2,3,4,5])\nindex = np.argwhere(x==3)\ny = np.delete(x, index)\nprint(x, y)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36365990\/how-to-delete-an-object-from-a-numpy-array-without-knowing-the-index","best_answers_votes":53,"tags":["python","list","numpy"],"question_length":420,"response_length":216,"tag_count":3} {"question":"How to detect a sign change for elements in a numpy array I have a numpy array with positive and negative values in. \n```\na = array([1,1,-1,-2,-3,4,5])\n```\nI want to create another array which contains a value at each index where a sign change occurs (For example, if the current element is positive and the previous element is negative and vice versa). For the array above, I would expect to get the following result \n```\narray([0,0,1,0,0,1,0])\n```\nAlternatively, a list of the positions in the array where the sign changes occur or list of booleans instead of 0's and 1's is fine.","response":"Something like \n```\na = array([1,1,-1,-2,-3,4,5])\nasign = np.sign(a)\nsignchange = ((np.roll(asign, 1) - asign) != 0).astype(int)\nprint signchange\narray([0, 0, 1, 0, 0, 1, 0])\n```\nNow, numpy.roll does a circular shift, so if the last element has different sign than the first, the first element in the signchange array will be 1. If this is not desired, one can of course do a simple \n```\nsignchange[0] = 0\n```\nAlso, np.sign considers 0 to have it's own sign, different from either positive or negative values. E.g. the \"signchange\" array for [-1,0,1] would be [0,1,1] even though the zero line was \"crossed\" only once. If this is undesired, one could insert the lines \n```\nsz = asign == 0\nwhile sz.any():\n asign[sz] = np.roll(asign, 1)[sz]\n sz = asign == 0\n```\nbetween lines 2 and 3 in the first example.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2652368\/how-to-detect-a-sign-change-for-elements-in-a-numpy-array","best_answers_votes":38,"tags":["python","numpy"],"question_length":582,"response_length":810,"tag_count":2} {"question":"Equivalent of j in NumPy What is the equivalent of Octave's j in NumPy? How can I use j in Python? In Octave: \n```\noctave:1> j\nans = 0 + 1i\noctave:1> j*pi\/4\nans = 0.00000 + 0.78540i\n```\nBut in Python: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nnp.imag\n#Output\n#\n```\n\n```python\nnp.imag(3)\n#Output\n#array(0)\n```\n\n```python\nnp.imag(3,2)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#TypeError: imag() takes exactly 1 argument (2 given)\n```\n\n```python\nnp.imag(32)\n#Output\n#array(0)\n```\n\n```python\n\n```\n\n```python\n0+np.imag(1)\n#Output\n#1\n#```\n```","response":"In Python, 1j or 0+1j is a literal of complex type. 
You can broadcast that into an array using expressions, for example \n```\n\n```python\n1j * np.arange(5)\n#Output\n#array([ 0.+0.j, 0.+1.j, 0.+2.j, 0.+3.j, 0.+4.j])\n#```\n#Create an array from literals: \n#```\n```\n\n```python\nnp.array([1j])\n#Output\n#array([ 0.+1.j])\n#```\n#Note that what Michael9 posted creates a complex, not a complex array: \n#```\n```\n\n```python\nnp.complex(0,1)\n#Output\n#1j\n```\n\n```python\ntype(_)\n#Output\n#complex\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28872862\/equivalent-of-j-in-numpy","best_answers_votes":65,"tags":["python","numpy","complex-numbers"],"question_length":491,"response_length":458,"tag_count":3} {"question":"Finding indices of matches of one array in another array I have two numpy arrays, A and B. A conatains unique values and B is a sub-array of A. Now I am looking for a way to get the index of B's values within A. For example: \n```\nA = np.array([1,2,3,4,5,6,7,8,9,10])\nB = np.array([1,7,10])\n# I need a function fun() that:\nfun(A,B)\n>> 0,6,9\n```","response":"You can use np.in1d with np.nonzero - \n```\nnp.nonzero(np.in1d(A,B))[0]\n```\nYou can also use np.searchsorted, if you care about maintaining the order - \n```\nnp.searchsorted(A,B)\n```\nFor a generic case, when A & B are unsorted arrays, you can bring in the sorter option in np.searchsorted, like so - \n```\nsort_idx = A.argsort()\nout = sort_idx[np.searchsorted(A,B,sorter = sort_idx)]\n```\nI would add in my favorite broadcasting too in the mix to solve a generic case - \n```\nnp.nonzero(B[:,None] == A)[1]\n```\nSample run - \n```\n\n```python\nA\n#Output\n#array([ 7, 5, 1, 6, 10, 9, 8])\n```\n\n```python\nB\n#Output\n#array([ 1, 10, 7])\n```\n\n```python\nsort_idx = A.argsort()\n```\n\n```python\nsort_idx[np.searchsorted(A,B,sorter = sort_idx)]\n#Output\n#array([2, 4, 0])\n```\n\n```python\nnp.nonzero(B[:,None] == A)[1]\n#Output\n#array([2, 4, 0])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/33678543\/finding-indices-of-matches-of-one-array-in-another-array","best_answers_votes":46,"tags":["python","arrays","numpy"],"question_length":343,"response_length":816,"tag_count":3} {"question":"Rounding to nearest int with numpy.rint() not consistent for .5 numpy's round int doesn't seem to be consistent with how it deals with xxx.5 \n```\n\n```python\nnp.rint(1.5)\n#Output\n#2.0\n```\n\n```python\nnp.rint(10.5)\n#Output\n#10.0\n#```\n#1.5 is rounded up while 10.5 is rounded down. Is there a reason for this? Is it just and artifact of the inaccuracy of floats? Edit Is there a way to get the desired functionality where n.5 is rounded up i.e. to n+1 for both n = even or odd?\n```","response":"So, this kind of behavior (as noted in comments), is a very traditional form of rounding, seen in the round half to even method. Also known (according to David Heffernan) as banker's rounding. The numpy documentation around this behavior implies that they are using this type of rounding, but also implies that there may be issues with the way in which numpy interacts with the IEEE floating point format. (shown below) \n```\nNotes\n-----\nFor values exactly halfway between rounded decimal values, Numpy\nrounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0,\n-0.5 and 0.5 round to 0.0, etc. 
Results may also be surprising due\nto the inexact representation of decimal fractions in the IEEE\nfloating point standard [1]_ and errors introduced when scaling\nby powers of ten.\n```\nWhether or not that is the case, I honestly don't know. I do know that large portions of the numpy core are still written in FORTRAN 77, which predates the IEEE standard (set in 1984), but I don't know enough FORTRAN 77 to say whether or not there's some issue with the interface here. If you're looking to just round up regardless, the np.ceil function (ceiling function in general), will do this. If you're looking for the opposite (always rounding down), the np.floor function will achieve this.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28617841\/rounding-to-nearest-int-with-numpy-rint-not-consistent-for-5","best_answers_votes":20,"tags":["python","numpy"],"question_length":460,"response_length":1281,"tag_count":2} {"question":"Why does indexing numpy arrays with brackets and commas differ in behavior? I tend to index numpy arrays (matrices) with brackets, but I've noticed when I want to slice an array (matrix) I must use the comma notation. Why is this? For example, \n```\n\n```python\nx = numpy.array([[1, 2], [3, 4], [5, 6]])\n```\n\n```python\nx\n#Output\n#array([[1, 2],\n# [3, 4],\n# [5, 6]])\n```\n\n```python\nx[1][1]\n#Output\n#4 # expected behavior\n```\n\n```python\nx[1,1]\n#Output\n#4 # expected behavior\n```\n\n```python\nx[:][1]\n#Output\n#array([3, 4]) # huh?\n```\n\n```python\nx[:,1]\n#Output\n#array([2, 4, 6]) # expected behavior\n#```\n```","response":"This: \n```\nx[:, 1]\n```\nmeans \"take all indices of x along the first axis, but only index 1 along the second\". This: \n```\nx[:][1]\n```\nmeans \"take all indices of x along the first axis (so all of x), then take index 1 along the first axis of the result\". You're applying the 1 to the wrong axis. x[1][2] and x[1, 2] are only equivalent because indexing an array with an integer shifts all remaining axes towards the front of the shape, so the first axis of x[1] is the second axis of x. This doesn't generalize at all; you should almost always use commas instead of multiple indexing steps.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38113994\/why-does-indexing-numpy-arrays-with-brackets-and-commas-differ-in-behavior","best_answers_votes":46,"tags":["python","numpy","indexing","slice"],"question_length":535,"response_length":588,"tag_count":4} {"question":"Numpy: change max in each row to 1, all other numbers to 0 I'm trying to implement a numpy function that replaces the max in each row of a 2D array with 1, and all other numbers with zero: \n```\n\n```python\na = np.array([[0, 1],\n [2, 3],\n [4, 5],\n [6, 7],\n [9, 8]])\n```\n\n```python\nb = some_function(a)\n```\n\n```python\nb\n#Output\n#[[0. 1.]\n# [0. 1.]\n# [0. 1.]\n# [0. 1.]\n# [1. 0.]]\n#```\n#What I've tried so far \n#```\n#def some_function(x):\n# a = np.zeros(x.shape)\n# a[:,np.argmax(x, axis=1)] = 1\n# return a\n```\n\n```python\nb = some_function(a)\n```\n\n```python\nb\n#Output\n#[[1. 1.]\n# [1. 1.]\n# [1. 1.]\n# [1. 1.]\n# [1. 
1.]]\n#```\n```","response":"Method #1, tweaking yours: \n```\n\n```python\na = np.array([[0, 1], [2, 3], [4, 5], [6, 7], [9, 8]])\n```\n\n```python\nb = np.zeros_like(a)\n```\n\n```python\nb[np.arange(len(a)), a.argmax(1)] = 1\n```\n\n```python\nb\n#Output\n#array([[0, 1],\n# [0, 1],\n# [0, 1],\n# [0, 1],\n# [1, 0]])\n#```\n#[Actually, range will work just fine; I wrote arange out of habit.] Method #2, using max instead of argmax to handle the case where multiple elements reach the maximum value: \n#```\n```\n\n```python\na = np.array([[0, 1], [2, 2], [4, 3]])\n```\n\n```python\n(a == a.max(axis=1)[:,None]).astype(int)\n#Output\n#array([[0, 1],\n# [1, 1],\n# [1, 0]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20295046\/numpy-change-max-in-each-row-to-1-all-other-numbers-to-0","best_answers_votes":44,"tags":["python","numpy"],"question_length":610,"response_length":561,"tag_count":2} {"question":"What is numpy method int0? I've seen np.int0 used for converting bounding box floating point values to int in OpenCV problems. What exactly is np.int0? I've seen np.uint8, np.int32, etc. I can't seem to find np.int0 in any online documentation. What kind of int does this cast arguments to?","response":"int0 is an alias for intp; this, in turn, is Integer used for indexing (same as C ssize_t; normally either int32 or int64) -- Numpy docs: basic types","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/48350693\/what-is-numpy-method-int0","best_answers_votes":35,"tags":["python","numpy","methods","int"],"question_length":290,"response_length":149,"tag_count":4} {"question":"Numpy transpose of 1D array not giving expected result I am trying a very basic example in Python scipy module for transpose() method but it's not giving expected result. I am using Ipython with pylab mode. \n```\na = array([1,2,3]\nprint a.shape\n>> (3,)\n\nb = a.transpose()\nprint b.shape\n>> (3,)\n```\nIf I print the contents of arrays \"a\" and \"b\", they are similar. Expectation is: (which will be result in Matlab on transpose) \n```\n[1,\n 2,\n 3]\n```","response":"NumPy's transpose() effectively reverses the shape of an array. If the array is one-dimensional, this means it has no effect. In NumPy, the arrays \n```\narray([1, 2, 3])\n```\nand \n```\narray([1,\n 2,\n 3])\n```\nare actually the same \u2013 they only differ in whitespace. What you probably want are the corresponding two-dimensional arrays, for which transpose() would work fine. Also consider using NumPy's matrix type: \n```\n\n```python\nnumpy.matrix([1, 2, 3])\n#Output\n#matrix([[1, 2, 3]])\n```\n\n```python\nnumpy.matrix([1, 2, 3]).T\n#Output\n#matrix([[1],\n# [2],\n# [3]])\n#```\n#Note that for most applications, the plain one-dimensional array would work fine as both a row or column vector, but when coming from Matlab, you might prefer using numpy.matrix.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11885503\/numpy-transpose-of-1d-array-not-giving-expected-result","best_answers_votes":38,"tags":["python","numpy","scipy","transpose"],"question_length":446,"response_length":753,"tag_count":4} {"question":"Fitting a 2D Gaussian function using scipy.optimize.curve_fit - ValueError and minpack.error I intend to fit a 2D Gaussian function to images showing a laser beam to get its parameters like FWHM and position. 
So far I tried to understand how to define a 2D Gaussian function in Python and how to pass x and y variables to it. I've written a little script which defines that function, plots it, adds some noise to it and then tries to fit it using curve_fit. Everything seems to work except the last step in which I try to fit my model function to the noisy data. Here is my code: \n```\nimport scipy.optimize as opt\nimport numpy as np\nimport pylab as plt\n\n\n#define model function and pass independant variables x and y as a list\ndef twoD_Gaussian((x,y), amplitude, xo, yo, sigma_x, sigma_y, theta, offset):\n xo = float(xo)\n yo = float(yo) \n a = (np.cos(theta)**2)\/(2*sigma_x**2) + (np.sin(theta)**2)\/(2*sigma_y**2)\n b = -(np.sin(2*theta))\/(4*sigma_x**2) + (np.sin(2*theta))\/(4*sigma_y**2)\n c = (np.sin(theta)**2)\/(2*sigma_x**2) + (np.cos(theta)**2)\/(2*sigma_y**2)\n return offset + amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo) + c*((y-yo)**2)))\n\n# Create x and y indices\nx = np.linspace(0, 200, 201)\ny = np.linspace(0, 200, 201)\nx,y = np.meshgrid(x, y)\n\n#create data\ndata = twoD_Gaussian((x, y), 3, 100, 100, 20, 40, 0, 10)\n\n# plot twoD_Gaussian data generated above\nplt.figure()\nplt.imshow(data)\nplt.colorbar()\n\n# add some noise to the data and try to fit the data generated beforehand\ninitial_guess = (3,100,100,20,40,0,10)\n\ndata_noisy = data + 0.2*np.random.normal(size=len(x))\n\npopt, pcov = opt.curve_fit(twoD_Gaussian, (x,y), data_noisy, p0 = initial_guess)\n```\nHere is the error message I get when running the script using winpython 64-bit Python 2.7: \n```\nValueError: object too deep for desired array\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"C:\\Python\\WinPython-64bit-2.7.6.2\\python-2.7.6.amd64\\lib\\site-packages\\spyderlib\\widgets\\externalshell\\sitecustomize.py\", line 540, in runfile\n execfile(filename, namespace)\n File \"E:\/Work Computer\/Software\/Python\/Fitting scripts\/2D Gaussian function fit\/2D_Gaussian_LevMarq_v2.py\", line 39, in \n popt, pcov = opt.curve_fit(twoD_Gaussian, (x,y), data_noisy, p0 = initial_guess)\n File \"C:\\Python\\WinPython-64bit-2.7.6.2\\python-2.7.6.amd64\\lib\\site-packages\\scipy\\optimize\\minpack.py\", line 533, in curve_fit\n res = leastsq(func, p0, args=args, full_output=1, **kw)\n File \"C:\\Python\\WinPython-64bit-2.7.6.2\\python-2.7.6.amd64\\lib\\site-packages\\scipy\\optimize\\minpack.py\", line 378, in leastsq\n gtol, maxfev, epsfcn, factor, diag)\nminpack.error: Result from function call is not a proper array of floats.\n```\nWhat is it that am I doing wrong? Is it how I pass the independent variables to the model function\/curve_fit?","response":"The output of twoD_Gaussian needs to be 1D. 
What you can do is add a .ravel() onto the end of the last line, like this: 
```
def twoD_Gaussian(xy, amplitude, xo, yo, sigma_x, sigma_y, theta, offset):
    x, y = xy
    xo = float(xo)
    yo = float(yo) 
    a = (np.cos(theta)**2)\/(2*sigma_x**2) + (np.sin(theta)**2)\/(2*sigma_y**2)
    b = -(np.sin(2*theta))\/(4*sigma_x**2) + (np.sin(2*theta))\/(4*sigma_y**2)
    c = (np.sin(theta)**2)\/(2*sigma_x**2) + (np.cos(theta)**2)\/(2*sigma_y**2)
    g = offset + amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo) 
                            + c*((y-yo)**2)))
    return g.ravel()
```
You'll obviously need to reshape the output for plotting, e.g: 
```
# Create x and y indices
x = np.linspace(0, 200, 201)
y = np.linspace(0, 200, 201)
x, y = np.meshgrid(x, y)

#create data
data = twoD_Gaussian((x, y), 3, 100, 100, 20, 40, 0, 10)

# plot twoD_Gaussian data generated above
plt.figure()
plt.imshow(data.reshape(201, 201))
plt.colorbar()
```
Do the fitting as before: 
```
# add some noise to the data and try to fit the data generated beforehand
initial_guess = (3,100,100,20,40,0,10)

data_noisy = data + 0.2*np.random.normal(size=data.shape)

popt, pcov = opt.curve_fit(twoD_Gaussian, (x, y), data_noisy, p0=initial_guess)
```
And plot the results: 
```
data_fitted = twoD_Gaussian((x, y), *popt)

fig, ax = plt.subplots(1, 1)
#ax.hold(True) For older versions. This has now been deprecated and later removed
ax.imshow(data_noisy.reshape(201, 201), cmap=plt.cm.jet, origin='lower',
    extent=(x.min(), x.max(), y.min(), y.max()))
ax.contour(x, y, data_fitted.reshape(201, 201), 8, colors='w')
plt.show()
```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21566379\/fitting-a-2d-gaussian-function-using-scipy-optimize-curve-fit-valueerror-and-m","best_answers_votes":55,"tags":["python","numpy","scipy","data-fitting"],"question_length":2856,"response_length":1659,"tag_count":4} {"question":"psycopg2: can't adapt type 'numpy.int64' I have a dataframe with the dtypes shown below and I want to insert the dataframe into a postgres DB but it fails due to the error can't adapt type 'numpy.int64' 
```
id_code int64
sector object
created_date float64
updated_date float64
```
How can I convert these types to native python types, such as from int64 (which is essentially 'numpy.int64') to a classic int that would then be acceptable to postgres via the psycopg2 client? 
```
data['id_code'].astype(np.int) defaults to int64
```
It is nonetheless possible to convert from one numpy type to another (e.g from int to float) 
```
data['id_code'].astype(float)
```
changes to 
```
dtype: float64
```
The bottom line is that psycopg2 doesn't seem to understand numpy datatypes; if anyone has ideas on how to convert them to classic types, that would be helpful. Updated: Insertion to DB 
```
def insert_many():
 \"\"\"Add data to the table.\"\"\"
 sql_query = \"\"\"INSERT INTO classification(
 id_code, sector, created_date, updated_date)
 VALUES (%s, %s, %s, %s);\"\"\"
 data = pd.read_excel(fh, sheet_name=sheetname)
 data_list = list(data.to_records())

 conn = None
 try:
 conn = psycopg2.connect(db)
 cur = conn.cursor()
 cur.executemany(sql_query, data_list)
 conn.commit()
 cur.close()
 except(Exception, psycopg2.DatabaseError) as error:
 print(error)
 finally:
 if conn is not None:
 conn.close()
```","response":"Update: if the value includes NaN, this still goes wrong. It seems that psycopg2 can't adapt the np.int64 format on its own; the following method works for me. 
\n```\nimport numpy as np\nfrom psycopg2.extensions import register_adapter, AsIs\nregister_adapter(np.int64, AsIs)\n```\nOriginal answer: same problem here, successfully solve this problem after I transform series to nd.array and int. you can try as following: \n```\ndata['id_code'].values.astype(int)\n```\n--","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/50626058\/psycopg2-cant-adapt-type-numpy-int64","best_answers_votes":37,"tags":["numpy","psycopg2"],"question_length":1541,"response_length":446,"tag_count":2} {"question":"Estimate Autocorrelation using Python I would like to perform Autocorrelation on the signal shown below. The time between two consecutive points is 2.5ms (or a repetition rate of 400Hz). This is the equation for estimating autoacrrelation that I would like to use (Taken from http:\/\/en.wikipedia.org\/wiki\/Autocorrelation, section Estimation): What is the simplest method of finding the estimated autocorrelation of my data in python? Is there something similar to numpy.correlate that I can use? Or should I just calculate the mean and variance? Edit: With help from unutbu, I have written: \n```\nfrom numpy import *\nimport numpy as N\nimport pylab as P\n\nfn = 'data.txt'\nx = loadtxt(fn,unpack=True,usecols=[1])\ntime = loadtxt(fn,unpack=True,usecols=[0]) \n\ndef estimated_autocorrelation(x):\n n = len(x)\n variance = x.var()\n x = x-x.mean()\n r = N.correlate(x, x, mode = 'full')[-n:]\n #assert N.allclose(r, N.array([(x[:n-k]*x[-(n-k):]).sum() for k in range(n)]))\n result = r\/(variance*(N.arange(n, 0, -1)))\n return result\n\nP.plot(time,estimated_autocorrelation(x))\nP.xlabel('time (s)')\nP.ylabel('autocorrelation')\nP.show()\n```","response":"I don't think there is a NumPy function for this particular calculation. Here is how I would write it: \n```\ndef estimated_autocorrelation(x):\n \"\"\"\n http:\/\/stackoverflow.com\/q\/14297012\/190597\n http:\/\/en.wikipedia.org\/wiki\/Autocorrelation#Estimation\n \"\"\"\n n = len(x)\n variance = x.var()\n x = x-x.mean()\n r = np.correlate(x, x, mode = 'full')[-n:]\n assert np.allclose(r, np.array([(x[:n-k]*x[-(n-k):]).sum() for k in range(n)]))\n result = r\/(variance*(np.arange(n, 0, -1)))\n return result\n```\nThe assert statement is there to both check the calculation and to document its intent. When you are confident this function is behaving as expected, you can comment-out the assert statement, or run your script with python -O. (The -O flag tells Python to ignore assert statements.)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14297012\/estimate-autocorrelation-using-python","best_answers_votes":36,"tags":["python","numpy","signal-processing"],"question_length":1143,"response_length":805,"tag_count":3} {"question":"Open source alternative to MATLAB's fmincon function? [closed] Closed. This question is seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. It does not meet Stack Overflow guidelines. It is not currently accepting answers. We don\u2019t allow questions seeking recommendations for software libraries, tutorials, tools, books, or other off-site resources. You can edit the question so it can be answered with facts and citations. Closed 4 years ago. Improve this question Is there an open-source alternative to MATLAB's fmincon function for constrained linear optimization? 
I'm rewriting a MATLAB program to use Python \/ NumPy \/ SciPy and this is the only function I haven't found an equivalent to. A NumPy-based solution would be ideal, but any language will do.","response":"Is your problem convex? Linear? Non-linear? I agree that SciPy.optimize will probably do the job, but fmincon is a sort of bazooka for solving optimization problems, and you'll be better off if you can confine it to one of the categories below (in increasing level of difficulty to solve efficiently) Linear Program (LP) Quadratic Program (QP) Convex Quadratically-Constrained Quadratic Program (QCQP) Second Order Cone Program (SOCP) Semidefinite Program (SDP) Non-Linear Convex Problem Non-Convex Problem There are also combinatoric problems such as Mixed-Integer Linear Programs (MILP), but you didn't mention any sort of integrality constraints, suffice to say that they fall into a different class of problems. The CVXOpt package will be of great use to you if your problem is convex. If your problem is not convex, you need to choose between finding a local solution or the global solution. Many convex solvers 'sort of' work in a non-convex domain. Finding a good approximation to the global solution would require some form Simulated Annealing or Genetic Algorithm. Finding the global solution will require an enumeration of all local solutions or a combinatorial strategy such as Branch and Bound.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/49926\/open-source-alternative-to-matlabs-fmincon-function","best_answers_votes":33,"tags":["python","numpy","matlab","numeric","scientific-computing"],"question_length":807,"response_length":1206,"tag_count":5} {"question":"Does setting numpy arrays to None free memory? I have hundreds of really larges matrices, like (600, 800) or (3, 600, 800) shape'd ones. Therefore I want to de-allocate the memory used as soon as I don't really need something anymore. I thought: \n```\nsome_matrix = None\n```\nShould do the job, or is just the reference set to None but somewhere in the Memory the space still allocated? (like preserving the allocated space for some re-initialization of some_matrix in the future) Additionally: sometimes I am slicing through the matrices, calculated something and put the values into a buffer (a list, because it gets appended all the time). So setting a list to None will definitely free the memory, right? Or does some kind of unset() method exist where whole identifiers plus its referenced objects are \"deleted\"?","response":"You definitely want to have a look at the garbage collection. Unlike some programming language like C\/C++ where the programmer has to free dynamically allocated memory by himself when the space is no longer needed, python has a garbage collection. Meaning that python itself frees the memory when necessary. When you use some_matrix = None, you unlink the variable from the memory space; the reference counter is decreased, and if it reaches 0, the garbage collector will free the memory. When you use del some_matrix as suggested by MSeifert, the memory is not freed immediately as opposed to what the answer says. According to python doc, this is what happens: Deletion of a name removes the binding of that name from the local or global namespace What happened under the hood is that the counter of references to the memory space is reduced by 1 independently of assigning None or using del. 
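A small sketch of that counter at work (sys.getrefcount reports one extra reference held by its own call argument): 
```
import sys
import numpy as np

a = np.ones((600, 800))
b = a # a second name bound to the same array
print(sys.getrefcount(a)) # 3: a, b, and the argument of the call itself
del b # decreases the counter, exactly as b = None would
print(sys.getrefcount(a)) # 2
```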
When this counter reaches 0, the garbage collector will free the memory space in the future. The only difference is that when using del, it is clear from the context that you do not need the name anymore. If you look at the doc of the garbage collection, you will see that you can invoke it by yourself or change some of its parameters.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35316728\/does-setting-numpy-arrays-to-none-free-memory","best_answers_votes":37,"tags":["python","arrays","numpy","memory-management","memory-leaks"],"question_length":815,"response_length":1231,"tag_count":5} {"question":"Check if two scipy.sparse.csr_matrix are equal I want to check if two csr_matrix are equal. If I do: 
```
x.__eq__(y)
```
I get: 
```
raise ValueError(\"The truth value of an array with more than one \"
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all().
```
This, however, works well: 
```
assert (z in x for z in y)
```
Is there a better way to do it, maybe using some scipy optimized function instead? Thanks so much","response":"Can we assume they are the same shape? 
```

```python
a=sparse.csr_matrix([[0,1],[1,0]])
```

```python
b=sparse.csr_matrix([[0,1],[1,1]])
```

```python
(a!=b).nnz==0
#Output
#False
#```
#This checks the sparsity of the inequality array. It will give you an efficiency warning if you try a==b (at least the 1st time you use it). That's because it has to test all those zeros. It can't take much advantage of the sparsity. You need a relatively recent version to use logical operators like this. Were you trying to use x.__eq__(y) in some if expression, or did you get the error from just that expression? In general you probably want to check several parameters first. Same shape, same nnz, same dtype. You need to be careful with floats. For dense arrays np.allclose is a good way of testing equality. And if the sparse arrays aren't too large, that might be good as well 
#```
#np.allclose(a.A, b.A)
#```
#allclose uses all(less_equal(abs(x-y), atol + rtol * abs(y))). You can use a-b, but I suspect that this too will give an efficiency warning.
```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30685024\/check-if-two-scipy-sparse-csr-matrix-are-equal","best_answers_votes":53,"tags":["python","numpy","scipy"],"question_length":470,"response_length":1032,"tag_count":3} {"question":"Filter out nan rows in a specific column 
```
df =

Col1 Col2 Col3
1 nan 4
2 5 4
3 3 nan
```
Given the dataframe df, I want to obtain a new dataframe df2 that does not contain nan in the column Col2. 
This is the expected result: df2 = \n```\nCol1 Col2 Col3\n2 5 4\n3 3 nan\n```\nI know that it's possible to use pandas.isnull and dropna, however how to specify only particular column to which filtering should be applied?","response":"you can use DataFrame.dropna() method: \n```\n\n```python\ndf.dropna(subset=['Col2'])\n#Output\n#Col1 Col2 Col3\n#1 2 5.0 4.0\n#2 3 3.0 NaN\n#```\n#or (in this case) less idiomatic Series.notnull(): \n#```\n```\n\n```python\ndf.loc[df.Col2.notnull()]\n#Output\n#Col1 Col2 Col3\n#1 2 5.0 4.0\n#2 3 3.0 NaN\n#```\n#or using DataFrame.query() method: \n#```\n```\n\n```python\ndf.query(\"Col2 == Col2\")\n#Output\n#Col1 Col2 Col3\n#1 2 5.0 4.0\n#2 3 3.0 NaN\n#```\n#numexpr solution: \n#```\n```\n\n```python\nimport numexpr as ne\n```\n\n```python\ncol = df.Col2\n```\n\n```python\ndf[ne.evaluate(\"col == col\")]\n#Output\n#Col1 Col2 Col3\n#1 2 5.0 4.0\n#2 3 3.0 NaN\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43821529\/filter-out-nan-rows-in-a-specific-column","best_answers_votes":57,"tags":["python","pandas","numpy"],"question_length":443,"response_length":663,"tag_count":3} {"question":"How to iterate 1d NumPy array with index and value [duplicate] This question already has answers here: Iterate over numpy with index (numpy equivalent of python enumerate) (3 answers) Closed 6 years ago. For python dict, I could use iteritems() to loop through key and value at the same time. But I cannot find such functionality for NumPy array. I have to manually track idx like this: \n```\nidx = 0 \nfor j in theta:\n some_function(idx,j,theta)\n idx += 1\n```\nIs there a better way to do this?","response":"There are a few alternatives. The below assumes you are iterating over a 1d NumPy array. Iterate with range \n```\nfor j in range(theta.shape[0]): # or range(len(theta))\n some_function(j, theta[j], theta)\n```\nNote this is the only of the 3 solutions which will work with numba. This is noteworthy since iterating over a NumPy array explicitly is usually only efficient when combined with numba or another means of pre-compilation. Iterate with enumerate \n```\nfor idx, j in enumerate(theta):\n some_function(idx, j, theta)\n```\nThe most efficient of the 3 solutions for 1d arrays. See benchmarking below. Iterate with np.ndenumerate \n```\nfor idx, j in np.ndenumerate(theta):\n some_function(idx[0], j, theta)\n```\nNotice the additional indexing step in idx[0]. This is necessary since the index (like shape) of a 1d NumPy array is given as a singleton tuple. For a 1d array, np.ndenumerate is inefficient; its benefits only show for multi-dimensional arrays. 
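For instance, on a 2-D array it yields the full multi-dimensional index of every element (a small illustration, separate from the benchmark below): 
```
arr2d = np.arange(6).reshape(2, 3)
for index, value in np.ndenumerate(arr2d):
    print(index, value) # (0, 0) 0, then (0, 1) 1, ... up to (1, 2) 5
```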
Performance benchmarking \n```\n# Python 3.7, NumPy 1.14.3\n\nnp.random.seed(0)\n\narr = np.random.random(10**6)\n\ndef enumerater(arr):\n for index, value in enumerate(arr):\n index, value\n pass\n\ndef ranger(arr):\n for index in range(len(arr)):\n index, arr[index]\n pass\n\ndef ndenumerater(arr):\n for index, value in np.ndenumerate(arr):\n index[0], value\n pass\n\n%timeit enumerater(arr) # 131 ms\n%timeit ranger(arr) # 171 ms\n%timeit ndenumerater(arr) # 579 ms\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/49384682\/how-to-iterate-1d-numpy-array-with-index-and-value","best_answers_votes":66,"tags":["python","arrays","numpy","indexing","iterator"],"question_length":496,"response_length":1471,"tag_count":5} {"question":"why an extra comma in the shape of a single index numpy array A numpy array a a = numpy.arange(12) has shape a.shape = (12,) Why do we need the comma? is shape (12) reserved for something else?","response":"The reason we don't use (12) for a one-element tuple (like [12] for one-element list) is that round parentheses also appear in formulas. E.g., in x = 2*(5+7) the part (5+7) is just a number, not a tuple. But what if we actually meant it to be a one-element tuple? The trailing comma is a way to indicate that. Compare: \n```\n\n```python\n2*(5+7)\n#Output\n#24\n```\n\n```python\n2*(5+7,)\n#Output\n#(12, 12)\n#```\n#With lists, the trailing comma is not needed although some style guides recommend it for consistency. \n#```\n```\n\n```python\n2*[5+7]\n#Output\n#[12, 12]\n```\n\n```python\n2*[5+7,]\n#Output\n#[12, 12]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/46134891\/why-an-extra-comma-in-the-shape-of-a-single-index-numpy-array","best_answers_votes":51,"tags":["arrays","numpy"],"question_length":193,"response_length":518,"tag_count":2} {"question":"How to correctly use scipy's skew and kurtosis functions? The skewness is a parameter to measure the symmetry of a data set and the kurtosis to measure how heavy its tails are compared to a normal distribution, see for example here. scipy.stats provides an easy way to calculate these two quantities, see scipy.stats.kurtosis and scipy.stats.skew. In my understanding, the skewness and kurtosis of a normal distribution should both be 0 using the functions just mentioned. That is, however, not the case with my code: \n```\nimport numpy as np\nfrom scipy.stats import kurtosis\nfrom scipy.stats import skew\n\nx = np.linspace( -5, 5, 1000 )\ny = 1.\/(np.sqrt(2.*np.pi)) * np.exp( -.5*(x)**2 ) # normal distribution\n\nprint( 'excess kurtosis of normal distribution (should be 0): {}'.format( kurtosis(y) ))\nprint( 'skewness of normal distribution (should be 0): {}'.format( skew(y) ))\n```\nThe output is: excess kurtosis of normal distribution (should be 0): -0.307393087742 skewness of normal distribution (should be 0): 1.11082371392 What am I doing wrong ? The versions I am using are \n```\npython: 2.7.6\nscipy : 0.17.1\nnumpy : 1.12.1\n```","response":"These functions calculate moments of the probability density distribution (that's why it takes only one parameter) and doesn't care about the \"functional form\" of the values. 
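Concretely, with the default bias=True, skew just computes the sample moment ratio g1 = m3 \/ m2**1.5 over whatever numbers it is handed (a sketch added for context, not part of the original answer): 
```
import numpy as np
from scipy.stats import skew

x = np.random.normal(0, 2, 10000)
m2 = ((x - x.mean())**2).mean()
m3 = ((x - x.mean())**3).mean()
print(m3 \/ m2**1.5, skew(x)) # the two values agree
```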
These are meant for \"random datasets\" (think of them as measures like mean, standard deviation, variance): \n```\nimport numpy as np\nfrom scipy.stats import kurtosis, skew\n\nx = np.random.normal(0, 2, 10000) # create random values based on a normal distribution\n\nprint( 'excess kurtosis of normal distribution (should be 0): {}'.format( kurtosis(x) ))\nprint( 'skewness of normal distribution (should be 0): {}'.format( skew(x) ))\n```\nwhich gives: \n```\nexcess kurtosis of normal distribution (should be 0): -0.024291887786943356\nskewness of normal distribution (should be 0): 0.009666157036010928\n```\nchanging the number of random values increases the accuracy: \n```\nx = np.random.normal(0, 2, 10000000)\n```\nLeading to: \n```\nexcess kurtosis of normal distribution (should be 0): -0.00010309478605163847\nskewness of normal distribution (should be 0): -0.0006751744848755031\n```\nIn your case the function \"assumes\" that each value has the same \"probability\" (because the values are equally distributed and each value occurs only once) so from the point of view of skew and kurtosis it's dealing with a non-gaussian probability density (not sure what exactly this is) which explains why the resulting values aren't even close to 0: \n```\nimport numpy as np\nfrom scipy.stats import kurtosis, skew\n\nx_random = np.random.normal(0, 2, 10000)\n\nx = np.linspace( -5, 5, 10000 )\ny = 1.\/(np.sqrt(2.*np.pi)) * np.exp( -.5*(x)**2 ) # normal distribution\n\nimport matplotlib.pyplot as plt\n\nf, (ax1, ax2) = plt.subplots(1, 2)\nax1.hist(x_random, bins='auto')\nax1.set_title('probability density (random)')\nax2.hist(y, bins='auto')\nax2.set_title('(your dataset)')\nplt.tight_layout()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45483890\/how-to-correctly-use-scipys-skew-and-kurtosis-functions","best_answers_votes":51,"tags":["python","numpy","scipy","statistics"],"question_length":1132,"response_length":1840,"tag_count":4} {"question":"Adding a background image to a plot Say I am plotting a set of points with an image as a background. I've used the Lena image in the example: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.misc import imread\n\nnp.random.seed(0)\nx = np.random.uniform(0.0,10.0,15)\ny = np.random.uniform(0.0,10.0,15)\nimg = imread(\"lena.jpg\")\nplt.scatter(x,y,zorder=1)\nplt.imshow(img,zorder=0)\nplt.show()\n```\nThis gives me . My question is: How can I specify the corner coordinates of the image in the plot? Let's say I'd like the bottom-left corner to be at x, y = 0.5, 1.0 and the top-right corner to be at x, y = 8.0, 7.0.","response":"Use the extent keyword of imshow. The order of the argument is [left, right, bottom, top] \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(0)\nx = np.random.uniform(0.0,10.0,15)\ny = np.random.uniform(0.0,10.0,15)\n\ndatafile = 'lena.jpg'\nimg = plt.imread(datafile)\nplt.scatter(x,y,zorder=1)\nplt.imshow(img, zorder=0, extent=[0.5, 8.0, 1.0, 7.0])\nplt.show()\n```\nFor cases where it's desired to have an image in a small area of the scatter plot, change the order of the plots (.imshow then .scatter) and change the extent values. 
\n```py\nplt.imshow(img, zorder=0, extent=[3.0, 5.0, 3.0, 4.50])\nplt.scatter(x, y, zorder=1)\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15160123\/adding-a-background-image-to-a-plot","best_answers_votes":49,"tags":["python","numpy","matplotlib"],"question_length":624,"response_length":653,"tag_count":3} {"question":"How to multiply a scalar throughout a specific column within a NumPy array? I need to do some analysis on a large dataset from a hydrolgeology field work. I am using NumPy. I want to know how I can: multiply e.g. the 2nd column of my array by a number (e.g. 5.2). And then calculate the cumulative sum of the numbers in that column. As I mentioned I only want to work on a specific column and not the whole array.","response":"```\nyou can do this in two simple steps using NumPy:\n\n```python\n# multiply column 2 of the 2D array, A, by 5.2\n```\n\n```python\nA[:,1] *= 5.2\n```\n\n```python\n# assuming by 'cumulative sum' you meant the 'reduced' sum:\n```\n\n```python\nA[:,1].sum()\n```\n\n```python\n# if in fact you want the cumulative sum (ie, returns a new column)\n```\n\n```python\n# then do this for the second step instead:\n```\n\n```python\nNP.cumsum(A[:,1])\n#Output\n#```\n#with some mocked data: \n#```\n```\n\n```python\nA = NP.random.rand(8, 5)\n```\n\n```python\nA\n#Output\n# array([[ 0.893, 0.824, 0.438, 0.284, 0.892],\n# [ 0.534, 0.11 , 0.409, 0.555, 0.96 ],\n# [ 0.671, 0.817, 0.636, 0.522, 0.867],\n# [ 0.752, 0.688, 0.142, 0.793, 0.716],\n# [ 0.276, 0.818, 0.904, 0.767, 0.443],\n# [ 0.57 , 0.159, 0.144, 0.439, 0.747],\n# [ 0.705, 0.793, 0.575, 0.507, 0.956],\n# [ 0.322, 0.713, 0.963, 0.037, 0.509]])\n```\n\n```python\nA[:,1] *= 5.2\n```\n\n```python\nA\n#Output\n# array([[ 0.893, 4.287, 0.438, 0.284, 0.892],\n# [ 0.534, 0.571, 0.409, 0.555, 0.96 ],\n# [ 0.671, 4.25 , 0.636, 0.522, 0.867],\n# [ 0.752, 3.576, 0.142, 0.793, 0.716],\n# [ 0.276, 4.255, 0.904, 0.767, 0.443],\n# [ 0.57 , 0.827, 0.144, 0.439, 0.747],\n# [ 0.705, 4.122, 0.575, 0.507, 0.956],\n# [ 0.322, 3.71 , 0.963, 0.037, 0.509]])\n```\n\n```python\nA[:,1].sum()\n#Output\n# 25.596156138451427\n#```\n#just a few simple rules are required to grok element selection (indexing) in NumPy: NumPy, like Python, is 0-based, so eg, the \"1\" below refers to the second column commas separate the dimensions inside the brackets, so [rows, columns], eg, A[2,3] means the item (\"cell\") at row three, column four a colon means all of the elements along that dimension, eg, A[:,1] creates a view of A's column 2; A[3,:] refers to the fourth row\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7508638\/how-to-multiply-a-scalar-throughout-a-specific-column-within-a-numpy-array","best_answers_votes":44,"tags":["python","arrays","numpy","multidimensional-array"],"question_length":413,"response_length":1730,"tag_count":4} {"question":"Numpy: views vs copy by slicing When I am doing the slicing, an unexpected thing happened that seems the first to be view but the second is copy. First First slice of row, then slice of column. It seems is a view. 
\n```\n\n```python\na = np.arange(12).reshape(3, 4) \n```\n\n```python\na[0:3:2, :][:, [0, 2]] = 100\n```\n\n```python\na\n#Output\n#array([[100, 1, 100, 3],\n# [ 4, 5, 6, 7],\n# [100, 9, 100, 11]])\n#```\n#Second But if I first slice of column, then slice of row, it seems a copy: \n#```\n```\n\n```python\na[:, [0, 2]][0:3:2, :] = 0\n```\n\n```python\na\n#Output\n#array([[100, 1, 100, 3],\n# [ 4, 5, 6, 7],\n# [100, 9, 100, 11]])\n#```\n#I am confused because the two methods finally will cause seem position to change, but why the second actually doesn't change the number?\n```","response":"The accepted answer by John Zwinck is actually false (I just figured this out the hard way!). The problem in the question is a combination of doing \"l-value indexing\" with numpy's fancy indexing. The following doc explains exactly this case https:\/\/scipy-cookbook.readthedocs.io\/items\/ViewsVsCopies.html in the section \"But fancy indexing does seem to return views sometimes, doesn't it?\" Edit: To summarize the above link: Whether a view or a copy is created is determined by whether the indexing can be represented as a slice. Exception: If one does \"fancy indexing\" then always a copy is created. Fancy indexing is something like a[[1,2]]. Exception to the exception: If one does l-value indexing (i.e. the indexing happens left of the = sign), then the rule for when a view or a copy are created doesn't apply anymore (though see below for a further exception). The python interpreter will directly assign the values to the left hand side without creating a copy or a view. To prove that a copy is created in both cases, you can do the operation in two steps: \n```\n\n```python\na = np.arange(12).reshape(3, 4)\n```\n\n```python\nb = a[0:3:2, :][:, [0, 2]]\n```\n\n```python\nb[:] = 100\n```\n\n```python\na\n#Output\n#array([[ 0, 1, 2, 3],\n# [ 4, 5, 6, 7],\n# [ 8, 9, 10, 11]])\n#```\n#and \n#```\n```\n\n```python\nb = a[:, [0, 2]][0:3:2, :]\n```\n\n```python\nb[:] = 0\n```\n\n```python\na\n#Output\n#array([[ 0, 1, 2, 3],\n# [ 4, 5, 6, 7],\n# [ 8, 9, 10, 11]])\n#```\n#Just as an aside, the question by the original poster is the exact problem stated at the end of the scipy-cookbook link above. There is no solution given in the book. The tricky thing about the question is that there are two indexing operations done in a row. Exception to the exception to the exception: If there are two indexing operations done in a row on the left hand side (as is the case in this question), the direct assignment in l-value indexing only works if the first indexing operation can be represented as a slice. Otherwise a copy has to be created even though it is l-value indexing.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47181092\/numpy-views-vs-copy-by-slicing","best_answers_votes":33,"tags":["python","numpy","slice"],"question_length":734,"response_length":1975,"tag_count":3} {"question":"How to convert ndarray to array? I'm using pandas.Series and np.ndarray. The code is like this \n```\n\n```python\nt\n#Output\n#array([[ 0., 0., 0.],\n# [ 0., 0., 0.],\n# [ 0., 0., 0.]])\n```\n\n```python\npandas.Series(t)\n#Output\n#Exception: Data must be 1-dimensional\n#>>>\n#```\n#And I trie to convert it into 1-dimensional array: \n#```\n```\n\n```python\ntt = t.reshape((1,-1))\n```\n\n```python\ntt\n#Output\n#array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0.]])\n#```\n#tt is still multi-dimensional since there are double '['. So how do I get a really convert ndarray into array? 
After searching, it says they are the same. However in my situation, they are not working the same.\n```","response":"An alternative is to use np.ravel: \n```\n\n```python\nnp.zeros((3,3)).ravel()\n#Output\n#array([ 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n#```\n#The importance of ravel over flatten is ravel only copies data if necessary and usually returns a view, while flatten will always return a copy of the data. To use reshape to flatten the array: \n#```\n#tt = t.reshape(-1)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18200052\/how-to-convert-ndarray-to-array","best_answers_votes":40,"tags":["python","numpy","multidimensional-array"],"question_length":605,"response_length":346,"tag_count":3} {"question":"Indexing over the last axis when you don't know the rank in advance How can I index the last axis of a Numpy array if I don't know its rank in advance? Here is what I want to do: Let a be a Numpy array of unknown rank. I want the slice of the last k elements of the last axis. If a is 1D, I want \n```\nb = a[-k:]\n```\nIf a is 2D, I want \n```\nb = a[:, -k:]\n```\nIf a is 3D, I want \n```\nb = a[:, :, -k:]\n```\nand so on. I want this to work regardless of the rank of a (as long as the rank is at least 1). The fact that I want the last k elements in the example is irrelevant of course, the point is that I want to specify indices for whatever the last axis is when I don't know the rank of an array in advance.","response":"```\nb = a[..., -k:]\n```\nThis is mentioned in the docs.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/42916029\/indexing-over-the-last-axis-when-you-dont-know-the-rank-in-advance","best_answers_votes":46,"tags":["python","numpy","indexing","slice"],"question_length":704,"response_length":54,"tag_count":4} {"question":"in operator, float(\"NaN\") and np.nan I used to believe that in operator in Python checks the presence of element in some collection using equality checking ==, so element in some_list is roughly equivalent to any(x == element for x in some_list). For example: \n```\nTrue in [1, 2, 3]\n# True because True == 1\n```\nor \n```\n1 in [1., 2., 3.]\n# also True because 1 == 1.\n```\nHowever, it is well-known that NaN is not equal to itself. So I expected that float(\"NaN\") in [float(\"NaN\")] is False. And it is False indeed. However, if we use numpy.nan instead of float(\"NaN\"), the situation is quite different: \n```\nimport numpy as np\nnp.nan in [np.nan, 1, 2]\n# True\n```\nBut np.nan == np.nan still gives False! How is it possible? What's the difference between np.nan and float(\"NaN\")? How does in deal with np.nan?","response":"To check if the item is in the list, Python tests for object identity first, and then tests for equality only if the objects are different.1 float(\"NaN\") in [float(\"NaN\")] is False because two different NaN objects are involved in the comparison. The test for identity therefore returns False, and then the test for equality also returns False since NaN != NaN. np.nan in [np.nan, 1, 2] however is True because the same NaN object is involved in the comparison. The test for object identity returns True and so Python immediately recognises the item as being in the list. The __contains__ method (invoked using in) for many of Python's other builtin Container types, such as tuples and sets, is implemented using the same check. 1 At least this is true in CPython. 
Object identity here means that the objects are found at the same memory address: the contains method for lists is performed using PyObject_RichCompareBool which quickly compares object pointers before a potentially more complicated object comparison. Other Python implementations may differ.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47721635\/in-operator-floatnan-and-np-nan","best_answers_votes":37,"tags":["python","numpy","containers","nan"],"question_length":805,"response_length":1057,"tag_count":4} {"question":"Populate a Pandas SparseDataFrame from a SciPy Sparse Matrix I noticed Pandas now has support for Sparse Matrices and Arrays. Currently, I create DataFrame()s like this: \n```\nreturn DataFrame(matrix.toarray(), columns=features, index=observations)\n```\nIs there a way to create a SparseDataFrame() with a scipy.sparse.csc_matrix() or csr_matrix()? Converting to dense format kills RAM badly. Thanks!","response":"A direct conversion is not supported ATM. Contributions are welcome! Try this, should be ok on memory as the SpareSeries is much like a csc_matrix (for 1 column) and pretty space efficient \n```\n\n```python\ncol = np.array([0,0,1,2,2,2])\n```\n\n```python\ndata = np.array([1,2,3,4,5,6],dtype='float64')\n```\n\n```python\nm = csc_matrix( (data,(row,col)), shape=(3,3) )\n```\n\n```python\nm\n#Output\n#<3x3 sparse matrix of type ''\n# with 6 stored elements in Compressed Sparse Column format>\n```\n\n```python\npd.SparseDataFrame([ pd.SparseSeries(m[i].toarray().ravel()) \n for i in np.arange(m.shape[0]) ])\n#Output\n#0 1 2\n#0 1 0 4\n#1 0 0 5\n#2 2 3 6\n```\n\n```python\ndf = pd.SparseDataFrame([ pd.SparseSeries(m[i].toarray().ravel()) \n for i in np.arange(m.shape[0]) ])\n```\n\n```python\ntype(df)\n#Output\n#pandas.sparse.frame.SparseDataFrame\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17818783\/populate-a-pandas-sparsedataframe-from-a-scipy-sparse-matrix","best_answers_votes":30,"tags":["python","numpy","scipy","pandas","sparse-matrix"],"question_length":398,"response_length":892,"tag_count":5} {"question":"Flipping zeroes and ones in one-dimensional NumPy array I have a one-dimensional NumPy array that consists of zeroes and ones like so: \n```\narray([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\n```\nI'd like a quick way to just \"flip\" the values such that zeroes become ones, and ones become zeroes, resulting in a NumPy array like this: \n```\narray([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])\n```\nIs there an easy one-liner for this? I looked at the fliplr() function, but this seems to require NumPy arrays of dimensions two or greater. I'm sure there's a fairly simple answer, but any help would be appreciated.","response":"There must be something in your Q that i do not understand... Anyway \n```\n\n```python\nfrom numpy import array\n```\n\n```python\na = array((1,0,0,1,1,0,0))\n```\n\n```python\nb = 1-a\n```\n\n```python\nprint a ; print b\n[1 0 0 1 1 0 0]\n[0 1 1 0 0 1 1]\n```\n\n```python\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26890477\/flipping-zeroes-and-ones-in-one-dimensional-numpy-array","best_answers_votes":109,"tags":["python","arrays","numpy"],"question_length":614,"response_length":230,"tag_count":3} {"question":"How can I draw scatter trend line? I want to draw a scatter trend line on matplot. How can I do that? 
Python \n```\nimport pandas as pd\nimport matplotlib.pyplot as plt\ncsv = pd.read_csv('\/tmp\/test.csv')\ndata = csv[['fee', 'time']]\nx = data['fee']\ny = data['time']\nplt.scatter(x, y)\nplt.show()\n```\nCSV \n```\nfee,time\n100,650\n90,700\n80,860\n70,800\n60,1000\n50,1200\n```\ntime is integer value. Scatter chart","response":"I'm sorry I found the answer by myself. How to add trendline in python matplotlib dot (scatter) graphs? Python \n```\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\ncsv = pd.read_csv('\/tmp\/test.csv')\ndata = csv[['fee', 'time']]\nx = data['fee']\ny = data['time']\nplt.scatter(x, y)\n\nz = np.polyfit(x, y, 1)\np = np.poly1d(z)\nplt.plot(x,p(x),\"r--\")\n\nplt.show()\n```\nChart","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41635448\/how-can-i-draw-scatter-trend-line","best_answers_votes":80,"tags":["python","pandas","numpy","matplotlib","trendline"],"question_length":398,"response_length":387,"tag_count":5} {"question":"Python out of memory on large CSV file (numpy) I have a 3GB CSV file that I try to read with python, I need the median column wise. \n```\nfrom numpy import * \ndef data():\n return genfromtxt('All.csv',delimiter=',')\n\ndata = data() # This is where it fails already.\n\nmed = zeros(len(data[0]))\ndata = data.T\nfor i in xrange(len(data)):\n m = median(data[i])\n med[i] = 1.0\/float(m)\nprint med\n```\nThe error that I get is this: \n```\nPython(1545) malloc: *** mmap(size=16777216) failed (error code=12)\n\n*** error: can't allocate region\n\n*** set a breakpoint in malloc_error_break to debug\n\nTraceback (most recent call last):\n\n File \"Normalize.py\", line 40, in \n\n data = data()\n\n File \"Normalize.py\", line 39, in data\n\n return genfromtxt('All.csv',delimiter=',')\n\nFile \"\/Library\/Frameworks\/Python.framework\/Versions\/2.6\/lib\/python2.6\/site-\npackages\/numpy\/lib\/npyio.py\", line 1495, in genfromtxt\n\nfor (i, line) in enumerate(itertools.chain([first_line, ], fhd)):\n\nMemoryError\n```\nI think it's just an out of memory error. I am running a 64bit MacOSX with 4GB of ram and both numpy and Python compiled in 64bit mode. How do I fix this? Should I try a distributed approach, just for the memory management? Thanks EDIT: Also tried with this but no luck... \n```\ngenfromtxt('All.csv',delimiter=',', dtype=float16)\n```","response":"As other folks have mentioned, for a really large file, you're better off iterating. However, you do commonly want the entire thing in memory for various reasons. genfromtxt is much less efficient than loadtxt (though it handles missing data, whereas loadtxt is more \"lean and mean\", which is why the two functions co-exist). If your data is very regular (e.g. just simple delimited rows of all the same type), you can also improve on either by using numpy.fromiter. If you have enough ram, consider using np.loadtxt('yourfile.txt', delimiter=',') (You may also need to specify skiprows if you have a header on the file.) As a quick comparison, loading ~500MB text file with loadtxt uses ~900MB of ram at peak usage, while loading the same file with genfromtxt uses ~2.5GB. Loadtxt Genfromtxt Alternately, consider something like the following. It will only work for very simple, regular data, but it's quite fast. (loadtxt and genfromtxt do a lot of guessing and error-checking. If your data is very simple and regular, you can improve on them greatly.) 
\n```\nimport numpy as np\n\ndef generate_text_file(length=1e6, ncols=20):\n data = np.random.random((length, ncols))\n np.savetxt('large_text_file.csv', data, delimiter=',')\n\ndef iter_loadtxt(filename, delimiter=',', skiprows=0, dtype=float):\n def iter_func():\n with open(filename, 'r') as infile:\n for _ in range(skiprows):\n next(infile)\n for line in infile:\n line = line.rstrip().split(delimiter)\n for item in line:\n yield dtype(item)\n iter_loadtxt.rowlength = len(line)\n\n data = np.fromiter(iter_func(), dtype=dtype)\n data = data.reshape((-1, iter_loadtxt.rowlength))\n return data\n\n#generate_text_file()\ndata = iter_loadtxt('large_text_file.csv')\n```\nFromiter","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8956832\/python-out-of-memory-on-large-csv-file-numpy","best_answers_votes":73,"tags":["python","memory","csv","numpy","scipy"],"question_length":1322,"response_length":1830,"tag_count":5} {"question":"How to obtain a gaussian filter in python I am using python to create a gaussian filter of size 5x5. I saw this post here where they talk about a similar thing but I didn't find the exact way to get equivalent python code to matlab function fspecial('gaussian', f_wid, sigma) Is there any other way to do it? I tried using the following code : \n```\nsize = 2\nsizey = None\nsize = int(size)\nif not sizey:\n sizey = size\nelse:\n sizey = int(sizey)\nx, y = scipy.mgrid[-size: size + 1, -sizey: sizey + 1]\ng = scipy.exp(- (x ** 2\/float(size) + y ** 2 \/ float(sizey)))\nprint g \/ np.sqrt(2 * np.pi)\n```\nThe output obtained is \n```\n[[ 0.00730688 0.03274718 0.05399097 0.03274718 0.00730688]\n [ 0.03274718 0.14676266 0.24197072 0.14676266 0.03274718]\n [ 0.05399097 0.24197072 0.39894228 0.24197072 0.05399097]\n [ 0.03274718 0.14676266 0.24197072 0.14676266 0.03274718]\n [ 0.00730688 0.03274718 0.05399097 0.03274718 0.00730688]]\n```\nWhat I want is something like this: \n```\n0.0029690 0.0133062 0.0219382 0.0133062 0.0029690\n 0.0133062 0.0596343 0.0983203 0.0596343 0.0133062\n 0.0219382 0.0983203 0.1621028 0.0983203 0.0219382\n 0.0133062 0.0596343 0.0983203 0.0596343 0.0133062\n 0.0029690 0.0133062 0.0219382 0.0133062 0.0029690\n```","response":"In general terms if you really care about getting the the exact same result as MATLAB, the easiest way to achieve this is often by looking directly at the source of the MATLAB function. 
In this case, edit fspecial: \n```\n...\n case 'gaussian' % Gaussian filter\n\n siz = (p2-1)\/2;\n std = p3;\n\n [x,y] = meshgrid(-siz(2):siz(2),-siz(1):siz(1));\n arg = -(x.*x + y.*y)\/(2*std*std);\n\n h = exp(arg);\n h(h> fspecial('gaussian',5,1)\n\n 0.002969 0.013306 0.021938 0.013306 0.002969\n 0.013306 0.059634 0.09832 0.059634 0.013306\n 0.021938 0.09832 0.1621 0.09832 0.021938\n 0.013306 0.059634 0.09832 0.059634 0.013306\n 0.002969 0.013306 0.021938 0.013306 0.002969\n\n : matlab_style_gauss2D((5,5),1)\n\narray([[ 0.002969, 0.013306, 0.021938, 0.013306, 0.002969],\n [ 0.013306, 0.059634, 0.09832 , 0.059634, 0.013306],\n [ 0.021938, 0.09832 , 0.162103, 0.09832 , 0.021938],\n [ 0.013306, 0.059634, 0.09832 , 0.059634, 0.013306],\n [ 0.002969, 0.013306, 0.021938, 0.013306, 0.002969]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17190649\/how-to-obtain-a-gaussian-filter-in-python","best_answers_votes":49,"tags":["python","matlab","numpy","gaussian"],"question_length":1292,"response_length":1799,"tag_count":4} {"question":"numpy corrcoef - compute correlation matrix while ignoring missing data I am trying to compute a correlation matrix of several values. These values include some 'nan' values. I'm using numpy.corrcoef. For element(i,j) of the output correlation matrix I'd like to have the correlation calculated using all values that exist for both variable i and variable j. This is what I have now: \n```\n\n```python\ndf_counties = pd.read_sql(\"SELECT Median_Age, Rpercent_2008, overall_LS, population_density FROM countyVotingSM2\", db_eng)\n```\n\n```python\nnp.corrcoef(df_counties, rowvar = False)\n#Output\n#array([[ 1. , nan, nan, -0.10998411],\n# [ nan, nan, nan, nan],\n# [ nan, nan, nan, nan],\n# [-0.10998411, nan, nan, 1. ]])\n#```\n#Too many nan's :(\n```","response":"One of the main features of pandas is being NaN friendly. To calculate correlation matrix, simply call df_counties.corr(). Below is an example to demonstrate df.corr() is NaN tolerant whereas np.corrcoef is not. \n```\nimport pandas as pd\nimport numpy as np\n\n# data\n# ==============================\nnp.random.seed(0)\ndf = pd.DataFrame(np.random.randn(100,5), columns=list('ABCDE'))\ndf[df < 0] = np.nan\ndf\n\n A B C D E\n0 1.7641 0.4002 0.9787 2.2409 1.8676\n1 NaN 0.9501 NaN NaN 0.4106\n2 0.1440 1.4543 0.7610 0.1217 0.4439\n3 0.3337 1.4941 NaN 0.3131 NaN\n4 NaN 0.6536 0.8644 NaN 2.2698\n5 NaN 0.0458 NaN 1.5328 1.4694\n6 0.1549 0.3782 NaN NaN NaN\n7 0.1563 1.2303 1.2024 NaN NaN\n8 NaN NaN NaN 1.9508 NaN\n9 NaN NaN 0.7775 NaN NaN\n.. ... ... ... ... 
...\n90 NaN 0.8202 0.4631 0.2791 0.3389\n91 2.0210 NaN NaN 0.1993 NaN\n92 NaN NaN NaN 0.1813 NaN\n93 2.4125 NaN NaN NaN 0.2515\n94 NaN NaN NaN NaN 1.7389\n95 0.9944 1.3191 NaN 1.1286 0.4960\n96 0.7714 1.0294 NaN NaN 0.8626\n97 NaN 1.5133 0.5531 NaN 0.2205\n98 NaN NaN 1.1003 1.2980 2.6962\n99 NaN NaN NaN NaN NaN\n\n[100 rows x 5 columns]\n\n# calculations\n# ================================\ndf.corr()\n\n A B C D E\nA 1.0000 0.2718 0.2678 0.2822 0.1016\nB 0.2718 1.0000 -0.0692 0.1736 -0.1432\nC 0.2678 -0.0692 1.0000 -0.3392 0.0012\nD 0.2822 0.1736 -0.3392 1.0000 0.1562\nE 0.1016 -0.1432 0.0012 0.1562 1.0000\n\n\nnp.corrcoef(df, rowvar=False)\n\narray([[ nan, nan, nan, nan, nan],\n [ nan, nan, nan, nan, nan],\n [ nan, nan, nan, nan, nan],\n [ nan, nan, nan, nan, nan],\n [ nan, nan, nan, nan, nan]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/31619578\/numpy-corrcoef-compute-correlation-matrix-while-ignoring-missing-data","best_answers_votes":40,"tags":["python","numpy","pandas","correlation"],"question_length":845,"response_length":1920,"tag_count":4} {"question":"Shuffling NumPy array along a given axis Given the following NumPy array, \n```\n> a = array([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5],[1, 2, 3, 4, 5]])\n```\nit's simple enough to shuffle a single row, \n```\n> shuffle(a[0])\n> a\narray([[4, 2, 1, 3, 5],[1, 2, 3, 4, 5],[1, 2, 3, 4, 5]])\n```\nIs it possible to use indexing notation to shuffle each of the rows independently? Or do you have to iterate over the array. I had in mind something like, \n```\n> numpy.shuffle(a[:])\n> a\narray([[4, 2, 3, 5, 1],[3, 1, 4, 5, 2],[4, 2, 1, 3, 5]]) # Not the real output\n```\nthough this clearly doesn't work.","response":"Vectorized solution with rand+argsort trick We could generate unique indices along the specified axis and index into the the input array with advanced-indexing. To generate the unique indices, we would use random float generation + sort trick, thus giving us a vectorized solution. We would also generalize it to cover generic n-dim arrays and along generic axes with np.take_along_axis. The final implementation would look something like this - \n```\ndef shuffle_along_axis(a, axis):\n idx = np.random.rand(*a.shape).argsort(axis=axis)\n return np.take_along_axis(a,idx,axis=axis)\n```\nNote that this shuffle won't be in-place and returns a shuffled copy. Sample run - \n```\n\n```python\na\n#Output\n#array([[18, 95, 45, 33],\n# [40, 78, 31, 52],\n# [75, 49, 42, 94]])\n```\n\n```python\nshuffle_along_axis(a, axis=0)\n#Output\n#array([[75, 78, 42, 94],\n# [40, 49, 45, 52],\n# [18, 95, 31, 33]])\n```\n\n```python\nshuffle_along_axis(a, axis=1)\n#Output\n#array([[45, 18, 33, 95],\n# [31, 78, 52, 40],\n# [42, 75, 94, 49]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5040797\/shuffling-numpy-array-along-a-given-axis","best_answers_votes":32,"tags":["python","random","numpy"],"question_length":579,"response_length":1029,"tag_count":3} {"question":"NumPy: difference between linalg.eig() and linalg.eigh() In a Python 3 application I'm using NumPy to calculate eigenvalues and eigenvectors of a symmetric real matrix. 
Here's my demo code: \n```\nimport numpy as np\na = np.random.rand(3,3) # generate a random array shaped (3,3)\n\na = (a + a.T)\/2 # a becomes a random simmetric matrix \n\nevalues1, evectors1 = np.linalg.eig(a)\n\nevalues2, evectors2 = np.linalg.eigh(a)\n```\nExcept for the signs, I got the same eigenvectors and eigenvalues using np.linalg.eig and np.linalg.eigh. So, what's the difference between the two methods? Thanks EDIT: I've read the docs here https:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.linalg.eig.html and here https:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.linalg.eigh.html but still I can not understand why I should use eigh() when I have a symmetric array.","response":"eigh guarantees you that the eigenvalues are sorted and uses a faster algorithm that takes advantage of the fact that the matrix is symmetric. If you know that your matrix is symmetric, use this function. Attention, eigh doesn't check if your matrix is indeed symmetric, it by default just takes the lower triangular part of the matrix and assumes that the upper triangular part is defined by the symmetry of the matrix. eig works for general matrices and therefore uses a slower algorithm, you can check that for example with IPythons magic command %timeit. If you test with larger matrices, you will also see that in general the eigenvalues are not sorted here.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45434989\/numpy-difference-between-linalg-eig-and-linalg-eigh","best_answers_votes":70,"tags":["python-3.x","numpy"],"question_length":862,"response_length":663,"tag_count":2} {"question":"replace zeroes in numpy array with the median value I have a numpy array like this: \n```\nfoo_array = [38,26,14,55,31,0,15,8,0,0,0,18,40,27,3,19,0,49,29,21,5,38,29,17,16]\n```\nI want to replace all the zeros with the median value of the whole array (where the zero values are not to be included in the calculation of the median) So far I have this going on: \n```\nfoo_array = [38,26,14,55,31,0,15,8,0,0,0,18,40,27,3,19,0,49,29,21,5,38,29,17,16]\nfoo = np.array(foo_array)\nfoo = np.sort(foo)\nprint \"foo sorted:\",foo\n#foo sorted: [ 0 0 0 0 0 3 5 8 14 15 16 17 18 19 21 26 27 29 29 31 38 38 40 49 55]\nnonzero_values = foo[0::] > 0\nnz_values = foo[nonzero_values]\nprint \"nonzero_values?:\",nz_values\n#nonzero_values?: [ 3 5 8 14 15 16 17 18 19 21 26 27 29 29 31 38 38 40 49 55]\nsize = np.size(nz_values)\nmiddle = size \/ 2\nprint \"median is:\",nz_values[middle]\n#median is: 26\n```\nIs there a clever way to achieve this with numpy syntax? 
Thank you","response":"This solution takes advantage of numpy.median: \n```\nimport numpy as np\nfoo_array = [38,26,14,55,31,0,15,8,0,0,0,18,40,27,3,19,0,49,29,21,5,38,29,17,16]\nfoo = np.array(foo_array)\n# Compute the median of the non-zero elements\nm = np.median(foo[foo > 0])\n# Assign the median to the zero elements \nfoo[foo == 0] = m\n```\nJust a note of caution, the median for your array (with no zeroes) is 23.5 but as written this sticks in 23.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17056325\/replace-zeroes-in-numpy-array-with-the-median-value","best_answers_votes":62,"tags":["python","arrays","numpy","replace","conditional-statements"],"question_length":944,"response_length":424,"tag_count":5} {"question":"Saving an imshow-like image while preserving resolution I have an (n, m) array that I've been visualizing with matplotlib.pyplot.imshow. I'd like to save this data in some type of raster graphics file (e.g. a png) so that: The colors are the ones shown with imshow Each element of the underlying array is exactly one pixel in the saved image -- meaning that if the underlying array is (n, m) elements, the image is NxM pixels. (I'm not interested in interpolation='nearest' in imshow.) There is nothing in the saved image except for the pixels corresponding to the data in the array. (I.e. there's no white space around the edges, axes, etc.) How can I do this? I've seen some code that can kind of do this by using interpolation='nearest' and forcing matplotlib to (grudgingly) turn off axes, whitespace, etc. However, there must be some way to do this more directly -- maybe with PIL? After all, I have the underlying data. If I can get an RGB value for each element of the underlying array, then I can save it with PIL. Is there some way to extract the RGB data from imshow? I can write my own code to map the array values to RGB values, but I don't want to reinvent the wheel, since that functionality already exists in matplotlib.","response":"As you already guessed there is no need to create a figure. You basically need three steps. Normalize your data, apply the colormap, save the image. matplotlib provides all the necessary functionality: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# some data (512x512)\nimport scipy.misc\ndata = scipy.misc.lena()\n\n# a colormap and a normalization instance\ncmap = plt.cm.jet\nnorm = plt.Normalize(vmin=data.min(), vmax=data.max())\n\n# map the normalized data to colors\n# image is now RGBA (512x512x4) \nimage = cmap(norm(data))\n\n# save the image\nplt.imsave('test.png', image)\n```\nWhile the code above explains the single steps, you can also let imsave do all three steps (similar to imshow): \n```\nplt.imsave('test.png', data, cmap=cmap)\n```\nResult (test.png):","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/31544130\/saving-an-imshow-like-image-while-preserving-resolution","best_answers_votes":66,"tags":["numpy","matplotlib","python-imaging-library"],"question_length":1235,"response_length":766,"tag_count":3} {"question":"Change the values of a NumPy array that are NOT in a list of indices I have a NumPy array like: \n```\na = np.arange(30)\n```\nI know that I can replace the values located at positions indices=[2,3,4] using for instance fancy indexing: \n```\na[indices] = 999\n```\nBut how to replace the values at the positions that are not in indices? Would be something like below? 
\n```\na[ not in indices ] = 888\n```","response":"I don't know of a clean way to do something like this: \n```\nmask = np.ones(a.shape,dtype=bool) #np.ones_like(a,dtype=bool)\nmask[indices] = False\na[~mask] = 999\na[mask] = 888\n```\nOf course, if you prefer to use the numpy data-type, you could use dtype=np.bool_ -- There won't be any difference in the output. it's just a matter of preference really.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16940895\/change-the-values-of-a-numpy-array-that-are-not-in-a-list-of-indices","best_answers_votes":50,"tags":["python","arrays","numpy","replace","multidimensional-array"],"question_length":395,"response_length":348,"tag_count":5} {"question":"What is the way data is stored in *.npy? I'm saving NumPy arrays using numpy.save function. I want other developers to have capability to read data from those file using C language. So I need to know,how numpy organizes binary data in file.OK, it's obvious when I'm saving array of 'i4' but what about array of arrays that contains some structures?Can't find any info in documentation UPD : lets say tha data is something like : \n```\ndt = np.dtype([('outer','(3,)\", line 1, in \n#ValueError: setting an array element with a sequence.\n#```\n#You can normally work around this doing the following: \n#```\n```\n\n```python\nnp.array(arr[:, 1], dtype=[('', float)]*3).view(float).reshape(-1, 3)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n#TypeError: expected a readable buffer object\n#```\n#Not here though, which was kind of puzzling. Apparently it is the fact that the objects in your array are lists that throws this off, as replacing the lists with tuples works: \n#```\n```\n\n```python\nnp.array([tuple(j) for j in arr[:, 1]],\n dtype=[('', float)]*3).view(float).reshape(-1, 3)\n#Output\n#array([[ 1., 2., 3.],\n# [ 4., 5., 6.]])\n#```\n#Since there doesn't seem to be any entirely satisfactory solution, the easiest is probably to go with: \n#```\n```\n\n```python\nnp.array(list(arr[:, 1]), dtype=float)\n#Output\n#array([[ 1., 2., 3.],\n# [ 4., 5., 6.]])\n#```\n#Although that will not be very efficient, probably better to go with something like: \n#```\n```\n\n```python\nnp.fromiter((tuple(j) for j in arr[:, 1]), dtype=[('', float)]*3,\n count=len(arr)).view(float).reshape(-1, 3)\n#Output\n#array([[ 1., 2., 3.],\n# [ 4., 5., 6.]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19459017\/how-to-convert-a-numpy-2d-array-with-object-dtype-to-a-regular-2d-array-of-float","best_answers_votes":28,"tags":["python","arrays","object","numpy","2d"],"question_length":1229,"response_length":1629,"tag_count":5} {"question":"Calling statistics functions from Scipy This may well be completely trivial. I want to call the spearmanr function from scipy: (the data are just examples) \n```\nimport scipy\nfrom numpy import *\n\nLen = [2,3,5,7,2]\nPop = [5,2,6,3,2]\n\na = array(Len)\nb = array(Pop)\n\nprint scipy.stats.spearmanr(a,b)\n```\nThis generates the error: \n```\nAttributeError: 'module' object has no attribute 'spearmanr'\n```\nWhat am I doing wrong? Thanks","response":"Use import scipy.stats. Then it works. Importing a package does not automatically import all the subpackages\/modules. 
In these cases you have to do this explicitly.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6664996\/calling-statistics-functions-from-scipy","best_answers_votes":53,"tags":["python","numpy","scipy"],"question_length":425,"response_length":164,"tag_count":3} {"question":"Numpy random choice of tuples I'm having trouble to create an array of random choices, where a choice is a tuple. I get the error: a must be 1-dimensional Here is an example: \n```\nchoices = ((0,0,0),(255,255,255))\nnumpy.random.choice(choices,4)\n```\nIs there any other way to do this? Expected result: a numpy array consiting of 4 elements randomly picked from the choices tuple. \n```\n((0,0,0),(0,0,0),(255,255,255),(255,255,255))\n```","response":"Use choice to choose the 1dim indices into the array, then index it. In the example you provided, only the number of possible choices affects the nature of the choice, not the actual values (0, 255). Choosing indices is the 1dim problem choice knows how to handle. \n```\nchoices = numpy.array([[0,0,0],[255,255,255]])\nidx = numpy.random.choice(len(choices),4)\nchoices[idx]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23445936\/numpy-random-choice-of-tuples","best_answers_votes":40,"tags":["python","numpy","random"],"question_length":433,"response_length":375,"tag_count":3} {"question":"Shuffling non-zero elements of each row in an array - Python \/ NumPy I have a an array that is relatively sparse, and I would like to go through each row and shuffle only the non-zero elements. Example Input: \n```\n[2,3,1,0]\n[0,0,2,1]\n```\nExample Output: \n```\n[2,1,3,0]\n[0,0,1,2]\n```\nNote how the zeros have not changed position. To shuffle all elements in each row (including zeros) I can do this: \n```\nfor i in range(len(X)):\n np.random.shuffle(X[i, :])\n```\nWhat I tried to do then is this: \n```\nfor i in range(len(X)):\n np.random.shuffle(X[i, np.nonzero(X[i, :])])\n```\nBut it has no effect. I've noticed that the return type of X[i, np.nonzero(X[i, :])] is different from X[i, :] which might be the cause. \n```\n\n```python\nX[i, np.nonzero(X[i, :])]\n#Output\n#array([[23, 5, 29, 11, 17]])\n```\n\n```python\nX[i, :]\n#Output\n#array([23, 5, 29, 11, 17])\n#```\n```","response":"You could use the non-inplace numpy.random.permutation with explicit non-zero indexing: \n```\n\n```python\nX = np.array([[2,3,1,0], [0,0,2,1]])\n```\n\n```python\nfor i in range(len(X)):\n idx = np.nonzero(X[i])\n X[i][idx] = np.random.permutation(X[i][idx])\n\n```\n\n```python\nX\n#Output\n#array([[3, 2, 1, 0],\n# [0, 0, 2, 1]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45764955\/shuffling-non-zero-elements-of-each-row-in-an-array-python-numpy","best_answers_votes":14,"tags":["python","numpy","vectorization","shuffle"],"question_length":849,"response_length":303,"tag_count":4} {"question":"Understanding NumPy's nonzero function I am trying to understand numpy's nonzero function. The following is an example application: \n```\nimport numpy\narr = numpy.array([[1,0],[1,1]])\narr.nonzero()\n--> (array([0, 1, 1]), array([0, 0, 1]))\n```\nI can see that because arr is 2-D, the output of nonzero() is a 2-tuple. However, I do not understand why the number of indices in each element of the tuple exceeds the number of rows\/columns of the array. 
I can see that \n```\narr[arr.nonzero()]\n--> array([1, 1, 1])\n```\nBut how...?","response":"Each element of the tuple contains one of the indices for each nonzero value. Therefore, the length of each tuple element is the number of nonzeros in the array. From your example, the indices of the nonzeros are [0, 0], [1, 0], and [1, 1]. The first element of the tuple is the first index for each of the nonzero values: ([0, 1, 1]), and the second element of the tuple is the second index for each of the nonzero values: ([0, 0, 1]). Your second code block just returns the nonzero values of the array (I am not clear from the question if the return value is part of the confusion). \n```\n\n```python\narr[arr.nonzero()]\n#Output\n#array([1, 1, 1])\n#```\n#This is more clear if we use an example array with other values. \n#```\n```\n\n```python\narr = numpy.array([[1,0],[2,3]])\n```\n\n```python\narr[arr.nonzero()]\n#Output\n#array([1, 2, 3])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7924033\/understanding-numpys-nonzero-function","best_answers_votes":47,"tags":["numpy"],"question_length":523,"response_length":785,"tag_count":1} {"question":"numpy divide along axis Is there a numpy function to divide an array along an axis with elements from another array? For example, suppose I have an array a with shape (l,m,n) and an array b with shape (m,); I'm looking for something equivalent to: \n```\ndef divide_along_axis(a,b,axis=None):\n if axis is None:\n return a\/b\n c = a.copy()\n for i, x in enumerate(c.swapaxes(0,axis)):\n x \/= b[i]\n return c\n```\nFor example, this is useful when normalizing an array of vectors: \n```\n\n```python\na = np.random.randn(4,3)\n#Output\n#array([[ 1.03116167, -0.60862215, -0.29191449],\n# [-1.27040355, 1.9943905 , 1.13515384],\n# [-0.47916874, 0.05495749, -0.58450632],\n# [ 2.08792161, -1.35591814, -0.9900364 ]])\n```\n\n```python\nnp.apply_along_axis(np.linalg.norm,1,a)\n#Output\n#array([ 1.23244853, 2.62299312, 0.75780647, 2.67919815])\n```\n\n```python\nc = divide_along_axis(a,np.apply_along_axis(np.linalg.norm,1,a),0)\n```\n\n```python\nnp.apply_along_axis(np.linalg.norm,1,c)\n#Output\n#array([ 1., 1., 1., 1.])\n#```\n```","response":"For the specific example you've given: dividing an (l,m,n) array by (m,) you can use np.newaxis: \n```\na = np.arange(1,61, dtype=float).reshape((3,4,5)) # Create a 3d array \na.shape # (3,4,5)\n\nb = np.array([1.0, 2.0, 3.0, 4.0]) # Create a 1-d array\nb.shape # (4,)\n\na \/ b # Gives a ValueError\n\na \/ b[:, np.newaxis] # The result you want\n```\nYou can read all about the broadcasting rules here. You can also use newaxis more than once if required. (e.g. to divide a shape (3,4,5,6) array by a shape (3,5) array). From my understanding of the docs, using newaxis + broadcasting avoids also any unecessary array copying. Indexing, newaxis etc are described more fully here now. (Documentation reorganised since this answer first posted).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7140738\/numpy-divide-along-axis","best_answers_votes":45,"tags":["numpy","vector","divide"],"question_length":973,"response_length":903,"tag_count":3} {"question":"np.full(size, 0) vs. np.zeros(size) vs. np.empty() If you were to choose one of the following three ways of initializing an array with zeros which one would you choose and why? 
\n```\nmy_arr_1 = np.full(size, 0)\n```\nor \n```\nmy_arr_2 = np.zeros(size)\n```\nor \n```\nmy_arr_3 = np.empty(size)\nmy_arr_3[:] = 0\n```","response":"I'd use np.zeros, because of its name. I would never use the third idiom because it takes two statements instead of a single expression and it's harder for the NumPy folks to optimize. In fact, in NumPy 1.10, np.zeros is still the fastest option, despite all the optimizations to indexing: \n```py\n\n```python\n%timeit np.zeros(1e6)\n#Output\n#1000 loops, best of 3: 804 \u00b5s per loop\n```\n\n```python\n%timeit np.full(1e6, 0)\n#Output\n#1000 loops, best of 3: 816 \u00b5s per loop\n```\n\n```python\n%timeit a = np.empty(1e6); a[:] = 0\n#Output\n#1000 loops, best of 3: 919 \u00b5s per loop\n#```\n#Bigger array for comparison with @John Zwinck's results: \n#```py\n```\n\n```python\n%timeit np.zeros(1e8)\n#Output\n#100000 loops, best of 3: 9.66 \u00b5s per loop\n```\n\n```python\n%timeit np.full(1e8, 0)\n#Output\n#1 loops, best of 3: 614 ms per loop\n```\n\n```python\n%timeit a = np.empty(1e8); a[:] = 0\n#Output\n#1 loops, best of 3: 229 ms per loop\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26213199\/np-fullsize-0-vs-np-zerossize-vs-np-empty","best_answers_votes":21,"tags":["python","arrays","numpy"],"question_length":305,"response_length":787,"tag_count":3} {"question":"how to modify a 2D numpy array at specific locations without a loop? I have a 2D numpy array and I have a arrays of rows and columns which should be set to a particular value. Lets consider the following example \n```\na = array([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]])\n```\nI want to modify entries at rows [0,2] and columns [1,2]. This should result in the following array \n```\na = array([[1, 2, 0],\n [4, 5, 0],\n [7, 8, 9]])\n```\nI did following and it resulted in modifying each sequence of column in every row \n```\nrows = [0,1]\ncols = [2,2]\nb=a[numpy.ix_(rows,columns)]\n```\nIt resulted in the following array modifying every column of the specified array \n```\narray([[1, 0, 0],\n [4, 5, 6],\n [7, 0, 0]])\n```\nSome one could please let me know how to do it? Thanks a lot EDIT: It is to be noted that rows and columns coincidently happend to be sequentia. The actual point is that these could be arbitrary and in any order. if it is rows = [a,b,c] and cols=[n x z] then I want to modify exactly three elements at locations (a,n),(b,x),(c,z).","response":"Adding to what others have said, you can modify these elements using fancy indexing as follows: \n```\n\n```python\nrows = [0,1]\n```\n\n```python\ncols = [2,2]\n```\n\n```python\na = np.arange(1,10).reshape((3,3))\n```\n\n```python\na[rows,cols] = 0\n```\n\n```python\na\n#Output\n#array([[1, 2, 0],\n# [4, 5, 0],\n# [7, 8, 9]])\n#```\n#You might want to read the documentation on indexing multidimensional arrays: http:\/\/docs.scipy.org\/doc\/numpy\/user\/basics.indexing.html#indexing-multi-dimensional-arrays The key point is: if the index arrays have a matching shape, and there is an index array for each dimension of the array being indexed, the resultant array has the same shape as the index arrays, and the values correspond to the index set for each position in the index arrays. 
Importantly this also allows you to do things like: \n#```\n```\n\n```python\na[rows,cols] = np.array([33,77])\n```\n\n```python\na\n#Output\n#array([[ 1, 2, 33],\n# [ 4, 5, 77],\n# [ 7, 8, 9]])\n#```\n#where you can set each element independently using another array, list or tuple of the same size.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7761393\/how-to-modify-a-2d-numpy-array-at-specific-locations-without-a-loop","best_answers_votes":33,"tags":["python","numpy"],"question_length":1087,"response_length":1033,"tag_count":2} {"question":"Replace NaN's in NumPy array with closest non-NaN value I have a NumPy array a like the following: \n```\n\n```python\nstr(a)\n#Output\n#'[ nan nan nan 1.44955726 1.44628034 1.44409573\\n 1.4408188 1.43657094 1.43171624 1.42649744 1.42200684 1.42117704\\n 1.42040255 1.41922908 nan nan nan nan\\n nan nan]'\n#```\n#I want to replace each NaN with the closest non-NaN value, so that all of the NaN's at the beginning get set to 1.449... and all of the NaN's at the end get set to 1.419.... I can see how to do this for specific cases like this, but I need to be able to do it generally for any length of array, with any length of NaN's at the beginning and end of the array (there will be no NaN's in the middle of the numbers). Any ideas? I can find the NaN's easily enough with np.isnan(), but I can't work out how to get the closest value to each NaN.\n```","response":"As an alternate solution (this will linearly interpolate for arrays NaNs in the middle, as well): \n```\nimport numpy as np\n\n# Generate data...\ndata = np.random.random(10)\ndata[:2] = np.nan\ndata[-1] = np.nan\ndata[4:6] = np.nan\n\nprint data\n\n# Fill in NaN's...\nmask = np.isnan(data)\ndata[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(~mask), data[~mask])\n\nprint data\n```\nThis yields: \n```\n[ nan nan 0.31619306 0.25818765 nan nan\n 0.27410025 0.23347532 0.02418698 nan]\n\n[ 0.31619306 0.31619306 0.31619306 0.25818765 0.26349185 0.26879605\n 0.27410025 0.23347532 0.02418698 0.02418698]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9537543\/replace-nans-in-numpy-array-with-closest-non-nan-value","best_answers_votes":72,"tags":["python","arrays","numpy","nan"],"question_length":907,"response_length":644,"tag_count":4} {"question":"fitting exponential decay with no initial guessing Does anyone know a scipy\/numpy module which will allow to fit exponential decay to data? Google search returned a few blog posts, for example - http:\/\/exnumerus.blogspot.com\/2010\/04\/how-to-fit-exponential-decay-example-in.html , but that solution requires y-offset to be pre-specified, which is not always possible EDIT: curve_fit works, but it can fail quite miserably with no initial guess for parameters, and that is sometimes needed. 
The code I'm working with is \n```\n#!\/usr\/bin\/env python\nimport numpy as np\nimport scipy as sp\nimport pylab as pl\nfrom scipy.optimize.minpack import curve_fit\n\nx = np.array([ 50., 110., 170., 230., 290., 350., 410., 470., \n530., 590.])\ny = np.array([ 3173., 2391., 1726., 1388., 1057., 786., 598., \n443., 339., 263.])\n\nsmoothx = np.linspace(x[0], x[-1], 20)\n\nguess_a, guess_b, guess_c = 4000, -0.005, 100\nguess = [guess_a, guess_b, guess_c]\n\nexp_decay = lambda x, A, t, y0: A * np.exp(x * t) + y0\n\nparams, cov = curve_fit(exp_decay, x, y, p0=guess)\n\nA, t, y0 = params\n\nprint \"A = %s\\nt = %s\\ny0 = %s\\n\" % (A, t, y0)\n\npl.clf()\nbest_fit = lambda x: A * np.exp(t * x) + y0\n\npl.plot(x, y, 'b.')\npl.plot(smoothx, best_fit(smoothx), 'r-')\npl.show()\n```\nwhich works, but if we remove \"p0=guess\", it fails miserably.","response":"You have two options: Linearize the system, and fit a line to the log of the data. Use a non-linear solver (e.g. scipy.optimize.curve_fit The first option is by far the fastest and most robust. However, it requires that you know the y-offset a-priori, otherwise it's impossible to linearize the equation. (i.e. y = A * exp(K * t) can be linearized by fitting y = log(A * exp(K * t)) = K * t + log(A), but y = A*exp(K*t) + C can only be linearized by fitting y - C = K*t + log(A), and as y is your independent variable, C must be known beforehand for this to be a linear system. If you use a non-linear method, it's a) not guaranteed to converge and yield a solution, b) will be much slower, c) gives a much poorer estimate of the uncertainty in your parameters, and d) is often much less precise. However, a non-linear method has one huge advantage over a linear inversion: It can solve a non-linear system of equations. In your case, this means that you don't have to know C beforehand. 
Just to give an example, let's solve for y = A * exp(K * t) with some noisy data using both linear and nonlinear methods: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy as sp\nimport scipy.optimize\n\n\ndef main():\n # Actual parameters\n A0, K0, C0 = 2.5, -4.0, 2.0\n\n # Generate some data based on these\n tmin, tmax = 0, 0.5\n num = 20\n t = np.linspace(tmin, tmax, num)\n y = model_func(t, A0, K0, C0)\n\n # Add noise\n noisy_y = y + 0.5 * (np.random.random(num) - 0.5)\n\n fig = plt.figure()\n ax1 = fig.add_subplot(2,1,1)\n ax2 = fig.add_subplot(2,1,2)\n\n # Non-linear Fit\n A, K, C = fit_exp_nonlinear(t, noisy_y)\n fit_y = model_func(t, A, K, C)\n plot(ax1, t, y, noisy_y, fit_y, (A0, K0, C0), (A, K, C0))\n ax1.set_title('Non-linear Fit')\n\n # Linear Fit (Note that we have to provide the y-offset (\"C\") value!!\n A, K = fit_exp_linear(t, y, C0)\n fit_y = model_func(t, A, K, C0)\n plot(ax2, t, y, noisy_y, fit_y, (A0, K0, C0), (A, K, 0))\n ax2.set_title('Linear Fit')\n\n plt.show()\n\ndef model_func(t, A, K, C):\n return A * np.exp(K * t) + C\n\ndef fit_exp_linear(t, y, C=0):\n y = y - C\n y = np.log(y)\n K, A_log = np.polyfit(t, y, 1)\n A = np.exp(A_log)\n return A, K\n\ndef fit_exp_nonlinear(t, y):\n opt_parms, parm_cov = sp.optimize.curve_fit(model_func, t, y, maxfev=1000)\n A, K, C = opt_parms\n return A, K, C\n\ndef plot(ax, t, y, noisy_y, fit_y, orig_parms, fit_parms):\n A0, K0, C0 = orig_parms\n A, K, C = fit_parms\n\n ax.plot(t, y, 'k--', \n label='Actual Function:\\n $y = %0.2f e^{%0.2f t} + %0.2f$' % (A0, K0, C0))\n ax.plot(t, fit_y, 'b-',\n label='Fitted Function:\\n $y = %0.2f e^{%0.2f t} + %0.2f$' % (A, K, C))\n ax.plot(t, noisy_y, 'ro')\n ax.legend(bbox_to_anchor=(1.05, 1.1), fancybox=True, shadow=True)\n\nif __name__ == '__main__':\n main()\n```\nNote that the linear solution provides a result much closer to the actual values. However, we have to provide the y-offset value in order to use a linear solution. The non-linear solution doesn't require this a-priori knowledge.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/3938042\/fitting-exponential-decay-with-no-initial-guessing","best_answers_votes":66,"tags":["python","numpy","scipy"],"question_length":1320,"response_length":3090,"tag_count":3} {"question":"Middle point of each pair of an numpy.array I have an array of the form: \n```py\nx = np.array([ 1230., 1230., 1227., 1235., 1217., 1153., 1170.])\n```\nand I would like to produce another array where the values are the mean of each pair of values within my original array: \n```py\nxm = np.array([ 1230., 1228.5, 1231., 1226., 1185., 1161.5])\n```\nSomeone knows the easiest and fast way to do it without using loops?","response":"Even shorter, slightly sweeter: \n```\n(x[1:] + x[:-1]) \/ 2\n```\nThis is faster: \n```\n\n```python\npython -m timeit -s \"import numpy; x = numpy.random.random(1000000)\" \"x[:-1] + numpy.diff(x)\/2\"\n#Output\n#100 loops, best of 3: 6.03 msec per loop\n```\n\n```python\npython -m timeit -s \"import numpy; x = numpy.random.random(1000000)\" \"(x[1:] + x[:-1]) \/ 2\"\n#Output\n#100 loops, best of 3: 4.07 msec per loop\n#```\n#This is perfectly accurate: Consider each element in x[1:] + x[:-1]. So consider x\u2080 and x\u2081, the first and second elements. x\u2080 + x\u2081 is calculated to perfect precision and then rounded, in accordance to IEEE. It would therefore be the correct answer if that was all that was needed. 
(x\u2080 + x\u2081) \/ 2 is just half of that value. This can almost always be done by reducing the exponent by one, except in two cases: x\u2080 + x\u2081 overflows. This will result in an infinity (of either sign). That's not what is wanted, so the calculation will be wrong. x\u2080 + x\u2081 underflows. As the size is reduced, rounding will be perfect and thus the calculation will be correct. In all other cases, the calculation will be correct. Now consider x[:-1] + numpy.diff(x) \/ 2. This, by inspection of the source, evaluates directly to \n#```\n#x[:-1] + (x[1:] - x[:-1]) \/ 2\n#```\n#and so consider again x\u2080 and x\u2081. x\u2081 - x\u2080 will have severe \"problems\" with underflow for many values. This will also lose precision with large cancellations. It's not immediately clear that this doesn't matter if the signs are the same, though, as the error effectively cancels out on addition. What does matter is that rounding occurs. (x\u2081 - x\u2080) \/ 2 will be no less rounded, but then x\u2080 + (x\u2081 - x\u2080) \/ 2 involves another rounding. This means that errors will creep in. Proof: \n#```\n#import numpy\n```\n\nwins = draws = losses = 0\n\nfor _ in range(100000):\n a = numpy.random.random()\n b = numpy.random.random() \/ 0.146\n\n x = (a+b)\/2 \n y = a + (b-a)\/2\n\n error_mine = (a-x) - (x-b)\n error_theirs = (a-y) - (y-b)\n\n if x != y:\n if abs(error_mine) < abs(error_theirs):\n wins += 1\n elif abs(error_mine) == abs(error_theirs):\n draws += 1\n else:\n losses += 1\n else:\n draws += 1\n\nwins \/ 1000\n#>>> 12.44\n\ndraws \/ 1000\n#>>> 87.56\n\nlosses \/ 1000\n#>>> 0.0\n```\nThis shows that for the carefully chosen constant of 1.46, a full 12-13% of answers are wrong with the diff variant! As expected, my version is always right. Now consider underflow. Although my variant has overflow problems, these are much less big a deal than cancellation problems. It should be obvious why the double-rounding from the above logic is very problematic. Proof: \n```\n...\n a = numpy.random.random()\n b = -numpy.random.random()\n...\n\nwins \/ 1000\n#>>> 25.149\n\ndraws \/ 1000\n#>>> 74.851\n\nlosses \/ 1000\n#>>> 0.0\n```\nYeah, it gets 25% wrong! In fact, it doesn't take much pruning to get this up to 50%: \n```\n...\n a = numpy.random.random()\n b = -a + numpy.random.random()\/256\n...\n\nwins \/ 1000\n#>>> 49.188\n\ndraws \/ 1000\n#>>> 50.812\n\nlosses \/ 1000\n#>>> 0.0\n```\nWell, it's not that bad. It's only ever 1 least-significant-bit off as long as the signs are the same, I think. So there you have it. My answer is the best unless you're finding the average of two values whose sum exceeds 1.7976931348623157e+308 or is smaller than -1.7976931348623157e+308.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23855976\/middle-point-of-each-pair-of-an-numpy-array","best_answers_votes":76,"tags":["python","numpy","mean"],"question_length":421,"response_length":3295,"tag_count":3} {"question":"python why use numpy.r_ instead of concatenate In which case using objects like numpy.r_ or numpy.c_ is better (more efficient, more suitable) than using functions like concatenate or vstack for example ? I am trying to understand a code where the programmer wrote something like: \n```\nreturn np.r_[0.0, 1d_array, 0.0] == 2\n```\nwhere 1d_array is an array whose values can be 0, 1 or 2. Why not using np.concatenate (for example) instead ? 
Like : \n```\nreturn np.concatenate([[0.0], 1d_array, [0.0]]) == 2\n```\nIt is more readable and apparently it does the same thing.","response":"np.r_ is implemented in the numpy\/lib\/index_tricks.py file. This is pure Python code, with no special compiled stuff. So it is not going to be any faster than the equivalent written with concatenate, arange and linspace. It's useful only if the notation fits your way of thinking and your needs. In your example it just saves converting the scalars to lists or arrays: \n```\n\n```python\nnp.r_[0.0, np.array([1,2,3,4]), 0.0]\n#Output\n#array([ 0., 1., 2., 3., 4., 0.])\n#```\n#error with the same arguments: \n#```\n```\n\n```python\nnp.concatenate([0.0, np.array([1,2,3,4]), 0.0])\n...\nValueError: zero-dimensional arrays cannot be concatenated\n```\ncorrect with the added [] \n```\n```\n\n```python\nnp.concatenate([[0.0], np.array([1,2,3,4]), [0.0]])\n#Output\n#array([ 0., 1., 2., 3., 4., 0.])\n#```\n#hstack takes care of that by passing all arguments through [atleast_1d(_m) for _m in tup]: \n#```\n```\n\n```python\nnp.hstack([0.0, np.array([1,2,3,4]), 0.0])\n#Output\n#array([ 0., 1., 2., 3., 4., 0.])\n#```\n#So at least in simple cases it is most similar to hstack. But the real usefulness of r_ comes when you want to use ranges \n#```\n#np.r_[0.0, 1:5, 0.0]\n#np.hstack([0.0, np.arange(1,5), 0.0])\n#np.r_[0.0, slice(1,5), 0.0]\n#```\n#r_ lets you use the : syntax that is used in indexing. That's because it is actually an instance of a class that has a __getitem__ method. index_tricks uses this programming trick several times. They've thrown in other bells-n-whistles Using an imaginary step, uses np.linspace to expand the slice rather than np.arange. \n#```\n#np.r_[-1:1:6j, [0]*3, 5, 6]\n#```\n#produces: \n#```\n#array([-1. , -0.6, -0.2, 0.2, 0.6, 1. , 0. , 0. , 0. , 5. , 6. ])\n#```\n#There are more details in the documentation. I did some time tests for many slices in https:\/\/stackoverflow.com\/a\/37625115\/901925\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37743843\/python-why-use-numpy-r-instead-of-concatenate","best_answers_votes":60,"tags":["python","numpy","concatenation"],"question_length":566,"response_length":1778,"tag_count":3} {"question":"Efficiently Read last 'n' rows of CSV into DataFrame A few methods to do this: Read the entire CSV and then use df.tail Somehow reverse the file (whats the best way to do this for large files?) and then use nrows argument to read Somehow find the number of rows in the CSV, then use skiprows and read required number of rows. Maybe do chunk read discarding initial chunks (though not sure how this would work) Can it be done in some easier way? If not, which amongst these three should be prefered and why? Possibly related: Efficiently finding the last line in a text file Reading parts of ~13000 row CSV file with pandas read_csv and nrows Not directly related: How to get the last n row of pandas dataframe?","response":"I don't think pandas offers a way to do this in read_csv. 
Perhaps the neatest (in one pass) is to use collections.deque: \n```\nfrom collections import deque\nfrom StringIO import StringIO\n\nwith open(fname, 'r') as f:\n q = deque(f, 2) # replace 2 with n (lines read at the end)\n\n```python\nq\n#Output\n#deque(['7,8,9\\n', '10,11,12'], maxlen=2)\n# # these are the last two lines of my csv\n```\n\n```python\npd.read_csv(StringIO(''.join(q)), header=None)\n```\nAnother option worth trying is to get the number of lines in a first pass and then read the file again, skip that number of rows (minus n) using read_csv...\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17108250\/efficiently-read-last-n-rows-of-csv-into-dataframe","best_answers_votes":41,"tags":["python","csv","numpy","pandas"],"question_length":710,"response_length":608,"tag_count":4} {"question":"Finding which rows have all elements as zeros in a matrix with numpy I have a large numpy matrix M. Some of the rows of the matrix have all of their elements as zero and I need to get the indices of those rows. The naive approach I'm considering is to loop through each row in the matrix and then check each elements. What would be a better and a faster approach to accomplish this using numpy?","response":"Here's one way. I assume numpy has been imported using import numpy as np. \n```\n\n```python\na\n#Output\n#array([[0, 1, 0],\n# [1, 0, 1],\n# [0, 0, 0],\n# [1, 1, 0],\n# [0, 0, 0]])\n```\n\n```python\nnp.where(~a.any(axis=1))[0]\n#Output\n#array([2, 4])\n#```\n#It's a slight variation of this answer: How to check that a matrix contains a zero column? Here's what's going on: The any method returns True if any value in the array is \"truthy\". Nonzero numbers are considered True, and 0 is considered False. By using the argument axis=1, the method is applied to each row. For the example a, we have: \n#```\n```\n\n```python\na.any(axis=1)\n#Output\n#array([ True, True, False, True, False], dtype=bool)\n#```\n#So each value indicates whether the corresponding row contains a nonzero value. The ~ operator is the binary \"not\" or complement: \n#```\n```\n\n```python\n~a.any(axis=1)\n#Output\n#array([False, False, True, False, True], dtype=bool)\n#```\n#(An alternative expression that gives the same result is (a == 0).all(axis=1).) To get the row indices, we use the where function. It returns the indices where its argument is True: \n#```\n```\n\n```python\nnp.where(~a.any(axis=1))\n#Output\n#(array([2, 4]),)\n#```\n#Note that where returned a tuple containing a single array. where works for n-dimensional arrays, so it always returns a tuple. We want the single array in that tuple. \n#```\n```\n\n```python\nnp.where(~a.any(axis=1))[0]\n#Output\n#array([2, 4])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23726026\/finding-which-rows-have-all-elements-as-zeros-in-a-matrix-with-numpy","best_answers_votes":66,"tags":["python","numpy","matrix"],"question_length":394,"response_length":1406,"tag_count":3} {"question":"Computing the correlation coefficient between two multi-dimensional arrays I have two arrays that have the shapes N X T and M X T. I'd like to compute the correlation coefficient across T between every possible pair of rows n and m (from N and M, respectively). What's the fastest, most pythonic way to do this? (Looping over N and M would seem to me to be neither fast nor pythonic.) I'm expecting the answer to involve numpy and\/or scipy. 
Right now my arrays are numpy arrays, but I'm open to converting them to a different type. I'm expecting my output to be an array with the shape N X M. N.B. When I say \"correlation coefficient,\" I mean the Pearson product-moment correlation coefficient. Here are some things to note: The numpy function correlate requires input arrays to be one-dimensional. The numpy function corrcoef accepts two-dimensional arrays, but they must have the same shape. The scipy.stats function pearsonr requires input arrays to be one-dimensional.","response":"Correlation (default 'valid' case) between two 2D arrays: You can simply use matrix-multiplication np.dot like so - \n```\nout = np.dot(arr_one,arr_two.T)\n```\nCorrelation with the default \"valid\" case between each pairwise row combinations (row1,row2) of the two input arrays would correspond to multiplication result at each (row1,row2) position. Row-wise Correlation Coefficient calculation for two 2D arrays: \n```\ndef corr2_coeff(A, B):\n # Rowwise mean of input arrays & subtract from input arrays themeselves\n A_mA = A - A.mean(1)[:, None]\n B_mB = B - B.mean(1)[:, None]\n\n # Sum of squares across rows\n ssA = (A_mA**2).sum(1)\n ssB = (B_mB**2).sum(1)\n\n # Finally get corr coeff\n return np.dot(A_mA, B_mB.T) \/ np.sqrt(np.dot(ssA[:, None],ssB[None]))\n```\nThis is based upon this solution to How to apply corr2 functions in Multidimentional arrays in MATLAB Benchmarking This section compares runtime performance with the proposed approach against generate_correlation_map & loopy pearsonr based approach listed in the other answer.(taken from the function test_generate_correlation_map() without the value correctness verification code at the end of it). Please note the timings for the proposed approach also include a check at the start to check for equal number of columns in the two input arrays, as also done in that other answer. The runtimes are listed next. 
Case #1: \n```\n\n```python\nA = np.random.rand(1000, 100)\n```\n\n```python\nB = np.random.rand(1000, 100)\n```\n\n```python\n%timeit corr2_coeff(A, B)\n100 loops, best of 3: 15 ms per loop\n```\n\n```python\n%timeit generate_correlation_map(A, B)\n100 loops, best of 3: 19.6 ms per loop\n```\nCase #2: \n```\n```\n\n```python\nA = np.random.rand(5000, 100)\n```\n\n```python\nB = np.random.rand(5000, 100)\n```\n\n```python\n%timeit corr2_coeff(A, B)\n1 loops, best of 3: 368 ms per loop\n```\n\n```python\n%timeit generate_correlation_map(A, B)\n1 loops, best of 3: 493 ms per loop\n```\nCase #3: \n```\n```\n\n```python\nA = np.random.rand(10000, 10)\n```\n\n```python\nB = np.random.rand(10000, 10)\n```\n\n```python\n%timeit corr2_coeff(A, B)\n1 loops, best of 3: 1.29 s per loop\n```\n\n```python\n%timeit generate_correlation_map(A, B)\n1 loops, best of 3: 1.83 s per loop\n```\nThe other loopy pearsonr based approach seemed too slow, but here are the runtimes for one small datasize - \n```\n```\n\n```python\nA = np.random.rand(1000, 100)\n```\n\n```python\nB = np.random.rand(1000, 100)\n```\n\n```python\n%timeit corr2_coeff(A, B)\n100 loops, best of 3: 15.3 ms per loop\n```\n\n```python\n%timeit generate_correlation_map(A, B)\n100 loops, best of 3: 19.7 ms per loop\n```\n\n```python\n%timeit pearsonr_based(A, B)\n1 loops, best of 3: 33 s per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30143417\/computing-the-correlation-coefficient-between-two-multi-dimensional-arrays","best_answers_votes":47,"tags":["python","arrays","numpy","scipy","correlation"],"question_length":972,"response_length":2603,"tag_count":5} {"question":"efficient way of removing None's from numpy array Is there an efficient way to remove Nones from numpy arrays and resize the array to its new size? For example, how would you remove the None from this frame without iterating through it in python. I can easily iterate through it but was working on an api call that would be potentially called many times. \n```\na = np.array([1,45,23,23,1234,3432,-1232,-34,233,None])\n```","response":"```\n\n```python\na[a != np.array(None)]\n#Output\n#array([1, 45, 23, 23, 1234, 3432, -1232, -34, 233], dtype=object)\n#```\n#The above works because a != np.array(None) is a boolean array which maps out non-None values: \n#```\n```\n\n```python\na != np.array(None)\n#Output\n#array([ True, True, True, True, True, True, True, True, True, False], dtype=bool)\n#```\n#Selecting elements of an array in this manner is called boolean array indexing.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25254929\/efficient-way-of-removing-nones-from-numpy-array","best_answers_votes":56,"tags":["python","numpy"],"question_length":419,"response_length":426,"tag_count":2} {"question":"Exponential curve fitting in SciPy I have two NumPy arrays x and y. 
When I try to fit my data using exponential function and curve_fit (SciPy) with this simple code \n```\n#!\/usr\/bin\/env python\nfrom pylab import *\nfrom scipy.optimize import curve_fit\n\nx = np.array([399.75, 989.25, 1578.75, 2168.25, 2757.75, 3347.25, 3936.75, 4526.25, 5115.75, 5705.25])\ny = np.array([109,62,39,13,10,4,2,0,1,2])\n\ndef func(x, a, b, c, d):\n return a*np.exp(b-c*x)+d\n\npopt, pcov = curve_fit(func, x, y)\n```\nI get wrong coefficients popt \n```\n[a,b,c,d] = [1., 1., 1., 24.19999988]\n```\nWhat is the problem?","response":"First comment: since a*exp(b - c*x) = (a*exp(b))*exp(-c*x) = A*exp(-c*x), a or b is redundant. I'll drop b and use: \n```\nimport matplotlib.pyplot as plt\n\ndef func(x, a, c, d):\n return a*np.exp(-c*x)+d\n```\nThat isn't the main issue. The problem is simply that curve_fit fails to converge to a solution to this problem when you use the default initial guess (which is all 1s). Check pcov; you'll see that it is inf. This is not surprising, because if c is 1, most of the values of exp(-c*x) underflow to 0: \n```\n\n```python\nnp.exp(-x)\n#Output\n#array([ 2.45912644e-174, 0.00000000e+000, 0.00000000e+000,\n# 0.00000000e+000, 0.00000000e+000, 0.00000000e+000,\n# 0.00000000e+000, 0.00000000e+000, 0.00000000e+000,\n# 0.00000000e+000])\n#```\n#This suggests that c should be small. A better initial guess is, say, p0 = (1, 1e-6, 1). Then I get: \n#```\n```\n\n```python\npopt, pcov = curve_fit(func, x, y, p0=(1, 1e-6, 1))\n```\n\n```python\npopt\n#Output\n#array([ 1.63561656e+02, 9.71142196e-04, -1.16854450e+00])\n#```\n#This looks reasonable: \n#```\n```\n\n```python\nxx = np.linspace(300, 6000, 1000)\n```\n\n```python\nyy = func(xx, *popt)\n```\n\n```python\nplt.plot(x, y, 'ko')\n#Output\n#[]\n```\n\n```python\nplt.plot(xx, yy)\n#Output\n#[]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21420792\/exponential-curve-fitting-in-scipy","best_answers_votes":53,"tags":["python","numpy","matplotlib","scipy","curve-fitting"],"question_length":587,"response_length":1286,"tag_count":5} {"question":"check type within numpy array I have different types of data. most of them are int and sometimes float. The int is different in size so 8\/ 16\/ 32 bits are the sizes. For this situation I'm creating a numerical type converter. therefore i check the type by using isinstance(). This because I have read that isinstance() is less worse than type(). The point is that a lot of data i get is numpy arrays. I use spyder as IDE and then i see by the variables also a type. but when i type isinstance(var,'type i read') i get False. I did some checks: \n```\na = 2.17 \nb = 3 \nc = np.array(np.random.rand(2, 8))\nd = np.array([1])\n```\nfor there isinstance(var,type) i get: \n```\nisinstance(a, float)\nTrue\nisinstance(b, int)\nTrue\nisinstance(c, float) # or isinstance(c, np.float64)\nFalse\nisinstance(d, int) # or isinstance(c, np.int32)\nFalse\n```\nc and d are True when i ask \n```\nisinstance(c, np.ndarray)\nTrue\nisinstance(d, np.ndarray)\nTrue\n```\ni can check with step in the ndarray by \n```\nisinstance(c[i][j], np.float64)\nTrue\nisinstance(d[i], np.int32)\nTrue\n```\nbut this means that for every dimension i have to add a new index otherwise it is False again. I can check there type with dtype like c.dtype == 'float64'... Oke so for what i have find and tried... My questions are basically: how is the var.dtype method compared to isinstance() and type() (worst\/ better etc)? 
if var.dtype is even worse as isinstance() is there some method in the isinstance() without all the manual indexing? (autoindexing etc)?","response":"An array is an object of type np.ndarray. Its values or elements are stored in a data buffer, which can be thought of as a contiguous block of memory bytes. The bytes in the data buffer do not have a type, because they are not Python objects. The array has a dtype parameter, which is used to interpret those bytes. If dtype is int32 (there are various synonyms), 4 bytes are interpreted as an integer. Accessing an element, say c[0] gives a new object that depends on the dtype, e.g. an object type np.int32. c[0].item will give an Python object of the corresponding type: \n```\n\n```python\nc=np.array([1])\n```\n\n```python\nc.dtype\n#Output\n#dtype('int32')\n```\n\n```python\ntype(c)\n#Output\n#numpy.ndarray\n```\n\n```python\ntype(c[0])\n#Output\n#numpy.int32\n```\n\n```python\nc[0].item()\n#Output\n#1\n```\n\n```python\ntype(c[0].item())\n#Output\n#int\n#```\n#(And c[0].dtype is the same as for c.dtype; you don't need to index individual elements of an array to check their dtype). The same 4 bytes of this array can be viewed as dtype int8 - a single byte integer. \n#```\n```\n\n```python\nc.view('b')\n#Output\n#array([1, 0, 0, 0], dtype=int8)\n#```\n#A single element of this alternate view is np.int8, but when I take item(), I get a Python integer. There isn't a int8 Python numeric type. \n#```\n```\n\n```python\ntype(c.view('b')[0])\n#Output\n#numpy.int8\n```\n\n```python\ntype(c.view('b')[0].item())\n#Output\n#int\n#```\n#A list contains pointers to Python objects, each of which has a type. So does an array of dtype=object. But the common numeric array does not contain Python integers or floats. It has a data buffer that can interpreted in various ways according to the dtype. Python integers don't come in different sizes, at least not to the same extent as numpy dtypes. So the isinstance and type() stuff does not apply to the contents of an ndarray. ==================== From the comments I gather you are trying to convert integer arrays to float. You aren't converting scalars. If so then dtype is all that matters; an array always has a dtype. It's unclear whether you are ok with casting a np.float32 to np.float64. I'd suggest studying, and experimenting with the np.can_cast function and the x.astype method. \n#```\n#x.astype(np.float64, copy=False)\n#```\n#for example will convert all int dtypes to float, without copying the ones that are already float64. It may copy and convert np.float32 ones. Look also at the casting parameter of these functions. 
=========================== I found in scipy.optimize.minimize another testing tool \n#```\n```\n\n```python\nnp.typecodes\n#Output\n#{'All': '?bhilqpBHILQPefdgFDGSUVOMm',\n# 'AllFloat': 'efdgFDG',\n# 'AllInteger': 'bBhHiIlLqQpP',\n# 'Character': 'c',\n# 'Complex': 'FDG',\n# 'Datetime': 'Mm',\n# 'Float': 'efdg',\n# 'Integer': 'bhilqp',\n# 'UnsignedInteger': 'BHILQP'}\n#```\n#It can be used to check for integers with: \n#```\n#if x0.dtype.kind in np.typecodes[\"AllInteger\"]:\n# x0 = np.asarray(x0, dtype=float)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40312013\/check-type-within-numpy-array","best_answers_votes":43,"tags":["python","python-3.x","numpy","isinstance"],"question_length":1499,"response_length":2886,"tag_count":4} {"question":"convert binary string to numpy array Assume I have the string: \n```\nmy_data = '\\x00\\x00\\x80?\\x00\\x00\\x00@\\x00\\x00@@\\x00\\x00\\x80@'\n```\nWhere I got it is irrelevant, but for the sake of having something concrete, assume I read it from a binary file. I know my string is the binary representation of 4 (4-byte) floats. I would like to get those floats as a numpy array. I could do: \n```\nimport struct\nimport numpy as np\ntple = struct.unpack( '4f', my_data )\nmy_array = np.array( tple, dtype=np.float32 )\n```\nBut it seems silly to create an intermediate tuple. Is there a way to do this operation without creating an intermediate tuple? EDIT I would also like to be able to construct the array in such a way that I can specify the endianness of the string.","response":"```\n\n```python\nnp.frombuffer(b'\\x00\\x00\\x80?\\x00\\x00\\x00@\\x00\\x00@@\\x00\\x00\\x80@', dtype='f4'), or np.float32 on a big-endian system\n#Output\n#array([ 4.60060299e-41, 8.96831017e-44, 2.30485571e-41,\n# 4.60074312e-41], dtype=float32)\n#```\n#The b isn't necessary prior to Python 3, of course. In fact, if you actually are using a binary file to load the data from, you could even skip the using-a-string step and load the data directly from the file with numpy.fromfile(). Also, dtype reference, just in case: http:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.dtypes.html\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11760095\/convert-binary-string-to-numpy-array","best_answers_votes":51,"tags":["python","numpy","binary-data"],"question_length":752,"response_length":861,"tag_count":3} {"question":"Custom data types in numpy arrays I'm creating a numpy array which is to be filled with objects of a particular class I've made. I'd like to initialize the array such that it will only ever contain objects of that class. For example, here's what I'd like to do, and what happens if I do it. \n```\nclass Kernel:\n pass\n\n```python\nL = np.empty(4,dtype=Kernel)\n```\n\nTypeError: data type not understood\n```\nI can do this: \n```\n\n```python\nL = np.empty(4,dtype=object)\n#Output\n#```\n#and then assign each element of L as a Kernel object (or any other type of object). It would be so neat were I able to have an array of Kernels, though, from both a programming point of view (type checking) and a mathematical one (operations on sets of functions). Is there any way for me to specify the data type of a numpy array using an arbitrary class?\n```","response":"If your Kernel class has a predictable amount of member data, then you could define a dtype for it instead of a class. e.g. 
if it's parameterized by 9 floats and an int, you could do \n```\nkerneldt = np.dtype([('myintname', np.int32), ('myfloats', np.float64, 9)])\narr = np.empty(dims, dtype=kerneldt)\n```\nYou'll have to do some coercion to turn them into objects of class Kernel every time you want to manipulate methods of a single kernel but that's one way to store the actual data in a NumPy array. If you want to only store a reference, then the object dtype is the best you can do without subclassing ndarray.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2350072\/custom-data-types-in-numpy-arrays","best_answers_votes":35,"tags":["python","numpy"],"question_length":807,"response_length":614,"tag_count":2} {"question":"Distinguishing overfitting vs good prediction These are questions on how to calculate & reduce overfitting in machine learning. I think many new to machine learning will have the same questions, so I tried to be clear with my examples and questions in hope that answers here can help others. I have a very small sample of texts and I'm trying to predict values associated with them. I've used sklearn to calculate tf-idf, and insert those into a regression model for prediction. This gives me 26 samples with 6323 features - not a lot.. I know: \n```\n>> count_vectorizer = CountVectorizer(min_n=1, max_n=1)\n>> term_freq = count_vectorizer.fit_transform(texts)\n>> transformer = TfidfTransformer()\n>> X = transformer.fit_transform(term_freq) \n>> print X.shape\n\n(26, 6323)\n```\nInserting those 26 samples of 6323 features (X) and associated scores (y), into a LinearRegression model, gives good predictions. These are obtained using leave-one-out cross validation, from cross_validation.LeaveOneOut(X.shape[0], indices=True) : \n```\nusing ngrams (n=1):\n human machine points-off %error\n 8.67 8.27 0.40 1.98\n 8.00 7.33 0.67 3.34\n ... ... ... ...\n 5.00 6.61 1.61 8.06\n 9.00 7.50 1.50 7.50\nmean: 7.59 7.64 1.29 6.47\nstd : 1.94 0.56 1.38 6.91\n```\nPretty good! Using ngrams (n=300) instead of unigrams (n=1), similar results occur, which is obviously not right. No 300-words occur in any of the texts, so the prediction should fail, but it doesn't: \n```\nusing ngrams (n=300):\n human machine points-off %error\n 8.67 7.55 1.12 5.60\n 8.00 7.57 0.43 2.13\n ... ... ... ...\nmean: 7.59 7.59 1.52 7.59\nstd : 1.94 0.08 1.32 6.61\n```\nQuestion 1: This might mean that the prediction model is overfitting the data. I only know this because I chose an extreme value for the ngrams (n=300) which I KNOW can't produce good results. But if I didn't have this knowledge, how would you normally tell that the model is over-fitting? In other words, if a reasonable measure (n=1) were used, how would you know that the good prediction was a result of being overfit vs. the model just working well? Question 2: What is the best way of preventing over-fitting (in this situation) to be sure that the prediction results are good or not? Question 3: If LeaveOneOut cross validation is used, how can the model possibly over-fit with good results? Over-fitting means the prediction accuracy will suffer - so why doesn't it suffer on the prediction for the text being left out? The only reason I can think of: in a tf-idf sparse matrix of mainly 0s, there is strong overlap between texts because so many terms are 0s - the regression then thinks the texts correlate highly. Please answer any of the questions even if you don't know them all. 
Thanks!","response":"how would you normally tell that the model is over-fitting? One useful rule of thumb is that you may be overfitting when your model's performance on its own training set is much better than on its held-out validation set or in a cross-validation setting. That's not all there is to it, though. The blog entry I linked to describes a procedure for testing for overfit: plot training set and validation set error as a function of training set size. If they show a stable gap at the right end of the plot, you're probably overfitting. What is the best way of preventing over-fitting (in this situation) to be sure that the prediction results are good or not? Use a held-out test set. Only do evaluation on this set when you're completely done with model selection (hyperparameter tuning); don't train on it, don't use it in (cross-)validation. The score you get on the test set is the model's final evaluation. This should show whether you've accidentally overfit the validation set(s). [Machine learning conferences are sometimes set up like a competition, where the test set is not given to the researchers until after they've delivered their final model to the organisers. In the meanwhile, they can use the training set as they please, e.g. by testing models using cross-validation. Kaggle does something similar.] If LeaveOneOut cross validation is used, how can the model possibly over-fit with good results? Because you can tune the model as much as you want in this cross-validation setting, until it performs nearly perfectly in CV. As an extreme example, suppose that you've implemented an estimator that is essentially a random number generator. You can keep trying random seeds until you hit a \"model\" that produces very low error in cross-validation, but that doesn't you've hit the right model. It means you've overfit to the cross-validation. See also this interesting warstory.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12253151\/distinguishing-overfitting-vs-good-prediction","best_answers_votes":36,"tags":["python","numpy","machine-learning","regression","scikit-learn"],"question_length":2921,"response_length":1890,"tag_count":5} {"question":"np.random.permutation with seed? I want to use a seed with np.random.permutation, like \n```\nnp.random.permutation(10, seed=42)\n```\nI get the following error: \n```\n\"permutation() takes no keyword arguments\"\n```\nHow can I do that else? Thanks.","response":"If you want it in one line, you can create a new RandomState, and call the permutation on that: \n```\nnp.random.RandomState(seed=42).permutation(10)\n```\nThis is better than just setting the seed of np.random, as it will have only a localized effect. NumPy 1.16 Update: RandomState is now considered a legacy feature. I see no indication that it will be deprecated any time soon, but now the recommended way to generate reproducible random numbers is via Random Generators, of which the default can be instantiated like so: \n```\nnp.random.default_rng(seed=42).permutation(10)\n```\nNote that it appears like there's no guarantees of bitstream equivalence across different versions of NumPy for this generator, wheras for the RandomState the documentation states that \"This generator is considered frozen and will have no further improvements. 
It is guaranteed to produce the same values as the final point release of NumPy v1.16.\"","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47742622\/np-random-permutation-with-seed","best_answers_votes":72,"tags":["python","numpy","random","permutation"],"question_length":241,"response_length":926,"tag_count":4} {"question":"Best way to initialize and fill an numpy array? [duplicate] This question already has answers here: Create numpy matrix filled with NaNs (11 answers) After `x = x.y()`, why did `x` become `None` instead of being modified (possibly causing \"AttributeError: 'NoneType' object has no attribute\")? (1 answer) Closed 1 year ago. I want to initialize and fill a numpy array. What is the best way? This works as I expect: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nnp.empty(3)\n#Output\n#array([ -1.28822975e-231, -1.73060252e-077, 2.23946712e-314])\n#```\n#But this doesn't: \n#```\n```\n\n```python\nnp.empty(3).fill(np.nan)\n#Output\n#>>>\n#```\n#Nothing? \n#```\n```\n\n```python\ntype(np.empty(3))\n#Output\n#\n#```\n#It seems to me that the np.empty() call is returning the correct type of object, so I don't understand why .fill() is not working? Assigning the result of np.empty() first works fine: \n#```\n```\n\n```python\na = np.empty(3)\n```\n\n```python\na.fill(np.nan)\n```\n\n```python\na\n#Output\n#array([ nan, nan, nan])\n#```\n#Why do I need to assign to a variable in order to use np.fill()? Am I missing a better alternative?\n```","response":"You could also try: \n```\n\n```python\nnp.full(3, np.nan)\n#Output\n#array([ nan, nan, nan])\n#```\n#The pertinent doc: \n#```\n#Definition: np.full(shape, fill_value, dtype=None, order='C')\n#Docstring:\n#Return a new array of given shape and type, filled with `fill_value`.\n#```\n#Although I think this might be only available in numpy 1.8+\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22414152\/best-way-to-initialize-and-fill-an-numpy-array","best_answers_votes":47,"tags":["python","arrays","numpy","multidimensional-array","initialization"],"question_length":1014,"response_length":322,"tag_count":5} {"question":"How do I convert a numpy matrix into a boolean matrix? I have a n x n matrix in numpy which has 0 and non-0 values. Is there a way to easily convert it to a boolean matrix? Thanks.","response":"```\nnumpy.array(old_matrix, dtype=bool)\n```\nAlternatively, \n```\nold_matrix != 0\n```\nThe first version is an elementwise coercion to boolean. Analogous constructs will work for conversion to other data types. The second version is an elementwise comparison to 0. It involves less typing, but ran slightly slower when I timed it. Which you use is up to you; I'd probably decide based on whether \"convert to boolean\" or \"compare to 0\" is a better conceptual description of what I'm after.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20373039\/how-do-i-convert-a-numpy-matrix-into-a-boolean-matrix","best_answers_votes":48,"tags":["python","numpy"],"question_length":180,"response_length":485,"tag_count":2} {"question":"Python how to combine two matrices in numpy new to Python, struggling in numpy, hope someone can help me, thank you! \n```\nfrom numpy import * \nA = matrix('1.0 2.0; 3.0 4.0') \nB = matrix('5.0 6.0')\nC = matrix('1.0 2.0; 3.0 4.0; 5.0 6.0')\nprint \"A=\",A\nprint \"B=\",B\nprint \"C=\",C\n```\nresults: \n```\nA= [[ 1. 
2.]\n [ 3. 4.]]\nB= [[ 5. 6.]]\nC= [[ 1. 2.]\n [ 3. 4.]\n [ 5. 6.]]\n```\nQuestion: how to use A and B to generate C, like in matlab C=[A;B]?","response":"Use numpy.concatenate: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nnp.concatenate((A, B))\n#Output\n#matrix([[ 1., 2.],\n# [ 3., 4.],\n# [ 5., 6.]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20180210\/python-how-to-combine-two-matrices-in-numpy","best_answers_votes":43,"tags":["python","numpy","matrix"],"question_length":455,"response_length":142,"tag_count":3} {"question":"How to find last occurrence of maximum value in a numpy.ndarray I have a numpy.ndarray in which the maximum value will mostly occur more than once. EDIT: This is subtly different from numpy.argmax: how to get the index corresponding to the *last* occurrence, in case of multiple occurrences of the maximum values because the author says Or, even better, is it possible to get a list of indices of all the occurrences of the maximum value in the array? whereas in my case getting such a list may prove very expensive Is it possible to find the index of the last occurrence of the maximum value by using something like numpy.argmax? I want to find only the index of the last occurrence, not an array of all occurrences (since several hundreds may be there) For example this will return the index of the first occurrence ie 2 \n```\nimport numpy as np\na=np.array([0,0,4,4,4,4,2,2,2,2])\nprint np.argmax(a)\n```\nHowever I want it to output 5.","response":"numpy.argmax only returns the index of the first occurrence. You could apply argmax to a reversed view of the array: \n```\nimport numpy as np\na = np.array([0,0,4,4,4,4,2,2,2,2])\nb = a[::-1]\ni = len(b) - np.argmax(b) - 1\ni # 5\na[i:] # array([4, 2, 2, 2, 2])\n```\nNote numpy doesn't copy the array but instead creates a view of the original with a stride that accesses it in reverse order. \n```\nid(a) == id(b.base) # True\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8768540\/how-to-find-last-occurrence-of-maximum-value-in-a-numpy-ndarray","best_answers_votes":72,"tags":["numpy","scipy"],"question_length":934,"response_length":425,"tag_count":2} {"question":"Efficiently check if numpy ndarray values are strictly increasing I'm having a numpy ndarray where I would like to check if each row vector is monotonically increasing. Example: \n```\na = np.asarray([[1,2,3],[1,5,7],[4,3,6]])\nmonotonically_increasing(a)\n```\nExpected return: \n```\n[True, True, False]\n```\nI'm not entirely sure how to efficiently do this, since the matrices are expected to be quite large (~1000x1000), and was hoping for some help.","response":"```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.asarray([[1,2,3],[1,5,7],[4,3,6]])\n#Output\n#```\n#Find the difference between each element. np.diff has an argument that lets you specify the axis to perform the diff \n#```\n```\n\n```python\nnp.diff(a)\n#Output\n#array([[ 1, 1],\n# [ 4, 2],\n# [-1, 3]])\n#```\n#Check to see if each difference is greater than 0. 
\n#```\n```\n\n```python\nnp.diff(a) > 0\n#Output\n#array([[ True, True],\n# [ True, True],\n# [False, True]], dtype=bool)\n#```\n#Check to see if all the differences are > 0 \n#```\n```\n\n```python\nnp.all(np.diff(a) > 0)\n#Output\n#False\n#>>>\n#```\n#As suggested by @Jaime - check that each element is greater than the element to its left: \n#```\n#np.all(a[:, 1:] >= a[:, :-1], axis=1)\n#```\n#Which appears to be about twice as fast\/efficient as my diff solution.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30734258\/efficiently-check-if-numpy-ndarray-values-are-strictly-increasing","best_answers_votes":65,"tags":["python","numpy","matrix"],"question_length":446,"response_length":728,"tag_count":3} {"question":"how to annotate heatmap with text in matplotlib I am plotting a heatmap in matplotlib using: plt.pcolor(rand(5,5)) how can I annotate the heatmap with the actual numbers plotted? meaning in each cell of the plotted heatmap, put the value corresponding to that cell in the 5x5 matrix passed to pcolor. thanks.","response":"There is no automatic feature to do such a thing, but you could loop through each point and put text in the appropriate location: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndata = np.random.rand(5, 4)\nheatmap = plt.pcolor(data)\n\nfor y in range(data.shape[0]):\n for x in range(data.shape[1]):\n plt.text(x + 0.5, y + 0.5, '%.4f' % data[y, x],\n horizontalalignment='center',\n verticalalignment='center',\n )\n\nplt.colorbar(heatmap)\n\nplt.show()\n```\nHTH","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11917547\/how-to-annotate-heatmap-with-text-in-matplotlib","best_answers_votes":55,"tags":["python","numpy","matplotlib","scipy"],"question_length":308,"response_length":518,"tag_count":4} {"question":"Scipy\/Numpy FFT Frequency Analysis I'm looking for how to turn the frequency axis in a fft (taken via scipy.fftpack.fftfreq) into a frequency in Hertz, rather than bins or fractional bins. I tried to code below to test out the FFT: \n```\nt = scipy.linspace(0,120,4000)\nacc = lambda t: 10*scipy.sin(2*pi*2.0*t) + 5*scipy.sin(2*pi*8.0*t) + 2*scipy.random.random(len(t))\n\nsignal = acc(t)\n\nFFT = abs(scipy.fft(signal))\nFFT = scipy.fftpack.fftshift(FFT)\nfreqs = scipy.fftpack.fftfreq(signal.size)\n\npylab.plot(freqs,FFT,'x')\npylab.show()\n```\nThe sampling rate should be 4000 samples \/ 120 seconds = 33.34 samples\/sec. The signal has a 2.0 Hz signal, a 8.0 Hz signal, and some random noise. I take the FFT, grab the frequencies, and plot it. The numbers are pretty nonsensical. If I multiply the frequencies by 33.34 (the sampling frequency), then I get peaks at about 8 Hz and 15 Hz, which seems wrong (also, the frequencies should be a factor of 4 apart, not 2!). 
Any thoughts on what I'm doing wrong here?","response":"I think you don't need to do fftshift(), and you can pass sampling period to fftfreq(): \n```\nimport scipy\nimport scipy.fftpack\nimport pylab\nfrom scipy import pi\nt = scipy.linspace(0,120,4000)\nacc = lambda t: 10*scipy.sin(2*pi*2.0*t) + 5*scipy.sin(2*pi*8.0*t) + 2*scipy.random.random(len(t))\n\nsignal = acc(t)\n\nFFT = abs(scipy.fft(signal))\nfreqs = scipy.fftpack.fftfreq(signal.size, t[1]-t[0])\n\npylab.subplot(211)\npylab.plot(t, signal)\npylab.subplot(212)\npylab.plot(freqs,20*scipy.log10(FFT),'x')\npylab.show()\n```\nfrom the graph you can see there are two peak at 2Hz and 8Hz.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9456037\/scipy-numpy-fft-frequency-analysis","best_answers_votes":48,"tags":["python","numpy","scipy"],"question_length":1000,"response_length":573,"tag_count":3} {"question":"Rearranging axes in numpy? I have an ndarray such as \n```\n\n```python\narr = np.random.rand(10, 20, 30, 40)\n```\n\n```python\narr.shape\n#Output\n#(10, 20, 30, 40)\n#```\n#whose axes I would like to swap around into some arbitrary order such as \n#```\n```\n\n```python\nrearranged_arr = np.swapaxes(np.swapaxes(arr, 1,3), 0,1)\n```\n\n```python\nrearranged_arr.shape\n#Output\n#(40, 10, 30, 20)\n#```\n#Is there a function which achieves this without having to chain together a bunch of np.swapaxes?\n```","response":"There are two options: np.moveaxis and np.transpose. np.moveaxis(a, sources, destinations) docs This function can be used to rearrange specific dimensions of an array. For example, to move the 4th dimension to be the 1st and the 2nd dimension to be the last: \n```py\n\n```python\nrearranged_arr = np.moveaxis(arr, [3, 1], [0, 3])\n```\n\n```python\nrearranged_arr.shape\n#Output\n#(40, 10, 30, 20)\n#```\n#This can be particularly useful if you have many dimensions and only want to rearrange a small number of them. e.g. \n#```py\n```\n\n```python\nanother_arr = np.random.rand(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)\n```\n\n```python\nnp.moveaxis(another_arr, [8, 9], [0, 1]).shape\n#Output\n#(8, 9, 0, 1, 2, 3, 4, 5, 6, 7)\n#```\n#np.transpose(a, axes=None) docs This function can be used to rearrange all dimensions of an array at once. For example, to solve your particular case: \n#```py\n```\n\n```python\nrearranged_arr = np.transpose(arr, axes=[3, 0, 2, 1])\n```\n\n```python\nrearranged_arr.shape\n#Output\n#(40, 10, 30, 20)\n#```\n#or equivalently \n#```py\n```\n\n```python\nrearranged_arr = arr.transpose(3, 0, 2, 1)\n```\n\n```python\nrearranged_arr.shape\n#Output\n#(40, 10, 30, 20)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/57438392\/rearranging-axes-in-numpy","best_answers_votes":49,"tags":["python","numpy"],"question_length":415,"response_length":1015,"tag_count":2} {"question":"Unpack NumPy array by column If I have a NumPy array, for example 5x3, is there a way to unpack it column by column all at once to pass to a function rather than like this: my_func(arr[:, 0], arr[:, 1], arr[:, 2])? 
Kind of like *args for list unpacking but by column.","response":"You can unpack the transpose of the array in order to use the columns for your function arguments: \n```\nmy_func(*arr.T)\n```\nHere's a simple example: \n```\n\n```python\nx = np.arange(15).reshape(5, 3)\n#Output\n#array([[ 0, 5, 10],\n# [ 1, 6, 11],\n# [ 2, 7, 12],\n# [ 3, 8, 13],\n# [ 4, 9, 14]])\n#```\n#Let's write a function to add the columns together (normally done with x.sum(axis=1) in NumPy): \n#```\n#def add_cols(a, b, c):\n# return a+b+c\n#```\n#Then we have: \n#```\n```\n\n```python\nadd_cols(*x.T)\n#Output\n#array([15, 18, 21, 24, 27])\n#```\n#NumPy arrays will be unpacked along the first dimension, hence the need to transpose the array.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27046533\/unpack-numpy-array-by-column","best_answers_votes":37,"tags":["python","arrays","numpy","argument-unpacking"],"question_length":267,"response_length":610,"tag_count":4} {"question":"Open and view .npz file in Python I have an unknown .npz file and would like to see whats inside. I am very new to Python. \n```\n\n```python\nimport numpy\n```\n\n```python\nb = numpy.load('data.npz')\n```\n\n```python\nprint(b.files)\n#Output\n#['arr_1', 'arr_0', 'arr_3', 'arr_2']\n#```\n#How do I see what these arr_i contain (i.e. dimension, values, etc.)?\n```","response":"```\nnp.savez_compressed('filename.npz', array1=array1, array2=array2)\nb = np.load('filename.npz')\n```\nAnd do b['array_1'], b['array_2'] and so on to retrieve data from each array.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/48429408\/open-and-view-npz-file-in-python","best_answers_votes":32,"tags":["python","numpy"],"question_length":305,"response_length":179,"tag_count":2} {"question":"CS231n: How to calculate gradient for Softmax loss function? I am watching some videos for Stanford CS231: Convolutional Neural Networks for Visual Recognition but do not quite understand how to calculate analytical gradient for softmax loss function using numpy. From this stackexchange answer, softmax gradient is calculated as: Python implementation for above is: \n```\nnum_classes = W.shape[0]\nnum_train = X.shape[1]\nfor i in range(num_train):\n for j in range(num_classes):\n p = np.exp(f_i[j])\/sum_i\n dW[j, :] += (p-(j == y[i])) * X[:, i]\n```\nCould anyone explain how the above snippet work? Detailed implementation for softmax is also included below. \n```\ndef softmax_loss_naive(W, X, y, reg):\n \"\"\"\n Softmax loss function, naive implementation (with loops)\n Inputs:\n - W: C x D array of weights\n - X: D x N array of data. Data are D-dimensional columns\n - y: 1-dimensional array of length N with labels 0...K-1, for K classes\n - reg: (float) regularization strength\n Returns:\n a tuple of:\n - loss as single float\n - gradient with respect to weights W, an array of same size as W\n \"\"\"\n # Initialize the loss and gradient to zero.\n loss = 0.0\n dW = np.zeros_like(W)\n\n #############################################################################\n # Compute the softmax loss and its gradient using explicit loops. #\n # Store the loss in loss and the gradient in dW. If you are not careful #\n # here, it is easy to run into numeric instability. Don't forget the #\n # regularization! 
#\n #############################################################################\n\n # Get shapes\n num_classes = W.shape[0]\n num_train = X.shape[1]\n\n for i in range(num_train):\n # Compute vector of scores\n f_i = W.dot(X[:, i]) # in R^{num_classes}\n\n # Normalization trick to avoid numerical instability, per http:\/\/cs231n.github.io\/linear-classify\/#softmax\n log_c = np.max(f_i)\n f_i -= log_c\n\n # Compute loss (and add to it, divided later)\n # L_i = - f(x_i)_{y_i} + log \\sum_j e^{f(x_i)_j}\n sum_i = 0.0\n for f_i_j in f_i:\n sum_i += np.exp(f_i_j)\n loss += -f_i[y[i]] + np.log(sum_i)\n\n # Compute gradient\n # dw_j = 1\/num_train * \\sum_i[x_i * (p(y_i = j)-Ind{y_i = j} )]\n # Here we are computing the contribution to the inner sum for a given i.\n for j in range(num_classes):\n p = np.exp(f_i[j])\/sum_i\n dW[j, :] += (p-(j == y[i])) * X[:, i]\n\n # Compute average\n loss \/= num_train\n dW \/= num_train\n\n # Regularization\n loss += 0.5 * reg * np.sum(W * W)\n dW += reg*W\n\n return loss, dW\n```","response":"Not sure if this helps, but: is really the indicator function , as described here. This forms the expression (j == y[i]) in the code. Also, the gradient of the loss with respect to the weights is: where which is the origin of the X[:,i] in the code.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41663874\/cs231n-how-to-calculate-gradient-for-softmax-loss-function","best_answers_votes":20,"tags":["python","numpy","softmax"],"question_length":2637,"response_length":249,"tag_count":3} {"question":"How to use numpy with 'None' value in Python? I'd like to calculate the mean of an array in Python in this form: \n```\nMatrice = [1, 2, None]\n```\nI'd just like to have my None value ignored by the numpy.mean calculation but I can't figure out how to do it.","response":"You are looking for masked arrays. Here's an example. \n```\nimport numpy.ma as ma\na = ma.array([1, 2, None], mask = [0, 0, 1])\nprint \"average =\", ma.average(a)\n```\nFrom the numpy docs linked above, \"The numpy.ma module provides a nearly work-alike replacement for numpy that supports data arrays with masks.\"","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/962343\/how-to-use-numpy-with-none-value-in-python","best_answers_votes":12,"tags":["python","numpy","mean"],"question_length":255,"response_length":307,"tag_count":3} {"question":"numpy.isnan(value) not the same as value == numpy.nan? Why am I getting the following: \n```\n\n```python\nv\n#Output\n#nan\n```\n\n```python\ntype(v)\n#Output\n#\n```\n\n```python\nv == np.nan\n#Output\n#False\n```\n\n```python\nnp.isnan(v)\n#Output\n#True\n#```\n#I would have thought the two should be equivalent?\n```","response":"nan != nan. That's just how equality comparisons on nan are defined. It was decided that this result is more convenient for numerical algorithms than the alternative. This is specifically why isnan exists.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29528092\/numpy-isnanvalue-not-the-same-as-value-numpy-nan","best_answers_votes":35,"tags":["python","numpy","types","boolean","nan"],"question_length":234,"response_length":205,"tag_count":5} {"question":"Convert dtype from int64 to int32 Basically, I am using python x32 bit to load from file a list object containing several numpy arrays (previously saved inside a pickle using python x64). 
I can load them properly and check the contents but I cannot use them. \n```\nTypeError: Cannot cast array data from dtype('int64') to dtype('int32')\n```\nHow can I convert the arrays element type from within the list to int32 so I can use them with python x32. The error comes when I try to execute the following part: \n```\na=np.bincount(np.hstack(data['Y']))\n```\nLooking at what is inside data['Y']","response":"As others have said, 32-bit versions of numpy still support 64-bit dtypes. But if you really need to convert to int32, you can use the astype function: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nx = np.array([1,2,3], dtype=np.int64)\n```\n\n```python\nx\n#Output\n#array([1, 2, 3])\n```\n\n```python\nx.dtype\n#Output\n#dtype('int64')\n```\n\n```python\ny = x.astype(np.int32)\n```\n\n```python\ny\n#Output\n#array([1, 2, 3], dtype=int32)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22725043\/convert-dtype-from-int64-to-int32","best_answers_votes":35,"tags":["python","numpy"],"question_length":585,"response_length":338,"tag_count":2} {"question":"Fitting to Poisson histogram I am trying to fit a curve over the histogram of a Poisson distribution that looks like this I have modified the fit function so that it resembles a Poisson distribution, with the parameter t as a variable. But the curve_fit function can not be plotted and I am not sure why. \n```\ndef histo(bsize):\n N = bsize\n #binwidth\n bw = (dt.max()-dt.min())\/(N-1.)\n bin1 = dt.min()+ bw*np.arange(N)\n #define the array to hold the occurrence count\n bincount= np.array([])\n for bin in bin1:\n count = np.where((dt>=bin)&(dt False\n> FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.\n```\nThis also breaks the lazy loading pattern for numpy arrays: \n```\nimport numpy as np\ndef f(a=None):\n if a == None: \n a = \n \n```\nWhat other possibilities allow you to still use lazy initialization?","response":"You are looking for is: \n```\nif a is None:\n a = something else\n```\nThe problem is that, by using the == operator, if the input element a is a numpy array, numpy will try to perform an element wise comparison and tell you that you cannot compare it. For a a numpy array, a == None gives error, np.all(a == None) doesn't (but does not do what you expect). Instead a is None will work regardless the data type of a.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/33954216\/comparison-to-none-will-result-in-an-elementwise-object","best_answers_votes":63,"tags":["python","numpy"],"question_length":547,"response_length":415,"tag_count":2} {"question":"numpy.loadtxt, ValueError: could not convert string to float This is sample from large csv file: \n```\n6.1;6.1;7.2;8.9;5.0;\n8.9;10.0;8.9;6.1;5.0;\n```\nIf I try to read it to numpy array with np.loadtxt('test.csv', delimiter=';') I get: ValueError: could not convert string to float: and don't understand why?","response":"You need to strip off the trailing ';' from the lines. 
A possible workaround if you know you have 5 columns is: \n```\nnp.loadtxt('test.csv', delimiter=';', usecols=range(5))\n```\nOr, use genfromtext instead which handles missing values \n```\nnp.genfromtxt('test.csv', delimiter=';')[:,:-1]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16729210\/numpy-loadtxt-valueerror-could-not-convert-string-to-float","best_answers_votes":44,"tags":["numpy"],"question_length":306,"response_length":290,"tag_count":1} {"question":"How to return a view of several columns in numpy structured array I can see several columns (fields) at once in a numpy structured array by indexing with a list of the field names, for example \n```\nimport numpy as np\n\na = np.array([(1.5, 2.5, (1.0,2.0)), (3.,4.,(4.,5.)), (1.,3.,(2.,6.))],\n dtype=[('x',float), ('y',float), ('value',float,(2,2))])\n\nprint a[['x','y']]\n#[(1.5, 2.5) (3.0, 4.0) (1.0, 3.0)]\n\nprint a[['x','y']].dtype\n#[('x', '>>\n```\n\n```python\nnp.sqrt(4)\n#Output\n#2.0\n```\n\n```python\nnp.sqrt(-4)\n#Output\n#__main__:1: RuntimeWarning: invalid value encountered in sqrt\n#nan\n#```\n#Let's test if you have negative values: \n#```\n```\n\n```python\nimport numpy as np\n```\n\n```python\n\n```\n\n```python\na = 0.75 + (1.25 - 0.75) * np.random.randn(10000)\n```\n\n```python\nb = 8 + (12 - 8) * np.random.randn(10000)\n```\n\n```python\nc = -12 + 2 * np.random.randn(10000)\n```\n\n```python\n\n```\n\n```python\nz = b ** 2 - (4 * a * c)\n```\n\n```python\nprint len([_ for _ in z if _ < 0])\n#Output\n#71\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/39123766\/i-am-getting-a-warning-runtimewarning-invalid-value-encountered-in-sqrt","best_answers_votes":39,"tags":["python","python-2.7","python-3.x","numpy","math"],"question_length":424,"response_length":640,"tag_count":5} {"question":"How to surface plot\/3d plot from dataframe I am new to pandas and matplotlib. I couldn't get the exact reference to plot my DataFrame which has the following schema: \n```\nschema = StructType([\nStructField(\"x\", IntegerType(), True),\nStructField(\"y\", IntegerType(), True),\nStructField(\"z\", IntegerType(), True)])\n```\nI'd like to plot 3d graph w.r.t. x, y and z. Here is the sample code I used: \n```\nimport matplotlib.pyplot as pltt\n\ndfSpark = sqlContext.createDataFrame(tupleRangeRDD, schema) \/\/ reading as spark df\ndf = dfSpark.toPandas()\nfig = pltt.figure();\nax = fig.add_subplot(111, projection='3d')\nax.plot_surface(df['x'], df['y'], df['z'])\n```\nI am getting a empty graph plot. I'm definitely missing something. Any pointers? Request-1: Print df \n```\ndef print_full(x):\n pd.set_option('display.max_rows', len(x))\n print(x)\n pd.reset_option('display.max_rows')\n\n\nprint_full(df)\n```\nResult of top 10: \n```\nx y z\n0 301 301 10\n1 300 301 16\n2 300 300 6\n3 299 301 30\n4 299 300 20\n5 299 299 14\n6 298 301 40\n7 298 300 30\n8 298 299 24\n9 298 298 10\n10 297 301 48\n```","response":".plot_surface() takes 2D arrays as inputs, not 1D DataFrame columns. This has been explained quite well here, along with the below code that illustrates how one could arrive at the required format using DataFrame input. Reproduced below with minor modifications like additional comments. Alternatively, however, there is .plot_trisurf() which uses 1D inputs. I've added an example in the middle of the code. 
\n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator, FormatStrFormatter\nfrom mpl_toolkits.mplot3d import Axes3D\n\n## Matplotlib Sample Code using 2D arrays via meshgrid\nX = np.arange(-5, 5, 0.25)\nY = np.arange(-5, 5, 0.25)\nX, Y = np.meshgrid(X, Y)\nR = np.sqrt(X ** 2 + Y ** 2)\nZ = np.sin(R)\nfig = plt.figure()\nax = Axes3D(fig)\nsurf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\nax.set_zlim(-1.01, 1.01)\n\nax.zaxis.set_major_locator(LinearLocator(10))\nax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))\n\nfig.colorbar(surf, shrink=0.5, aspect=5)\nplt.title('Original Code')\nplt.show()\n```\n```\n## DataFrame from 2D-arrays\nx = X.reshape(1600)\ny = Y.reshape(1600)\nz = Z.reshape(1600)\ndf = pd.DataFrame({'x': x, 'y': y, 'z': z}, index=range(len(x)))\n\n# Plot using `.trisurf()`:\n\nax.plot_trisurf(df.x, df.y, df.z, cmap=cm.jet, linewidth=0.2)\nplt.show()\n```\n```\n# 2D-arrays from DataFrame\nx1 = np.linspace(df['x'].min(), df['x'].max(), len(df['x'].unique()))\ny1 = np.linspace(df['y'].min(), df['y'].max(), len(df['y'].unique()))\n\n\"\"\"\nx, y via meshgrid for vectorized evaluation of\n2 scalar\/vector fields over 2-D grids, given\none-dimensional coordinate arrays x1, x2,..., xn.\n\"\"\"\n\nx2, y2 = np.meshgrid(x1, y1)\n\n# Interpolate unstructured D-dimensional data.\nz2 = griddata((df['x'], df['y']), df['z'], (x2, y2), method='cubic')\n\n# Ready to plot\nfig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(x2, y2, z2, rstride=1, cstride=1, cmap=cm.coolwarm,\n linewidth=0, antialiased=False)\nax.set_zlim(-1.01, 1.01)\n\nax.zaxis.set_major_locator(LinearLocator(10))\nax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))\n\nfig.colorbar(surf, shrink=0.5, aspect=5)\nplt.title('Meshgrid Created from 3 1D Arrays')\n\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36589521\/how-to-surface-plot-3d-plot-from-dataframe","best_answers_votes":49,"tags":["python","numpy","pandas","matplotlib","dataframe"],"question_length":1199,"response_length":2311,"tag_count":5} {"question":"ValueError: ndarray is not C-contiguous in cython I have written the following function in cython to estimate the log-likelihood \n```\n@cython.boundscheck(False)\n@cython.wraparound(False)\ndef likelihood(double m,\n double c,\n np.ndarray[np.double_t, ndim=1, mode='c'] r_mpc not None,\n np.ndarray[np.double_t, ndim=1, mode='c'] gtan not None,\n np.ndarray[np.double_t, ndim=1, mode='c'] gcrs not None,\n np.ndarray[np.double_t, ndim=1, mode='c'] shear_err not None,\n np.ndarray[np.double_t, ndim=1, mode='c'] beta not None,\n double rho_c,\n np.ndarray[np.double_t, ndim=1, mode='c'] rho_c_sigma not None):\n cdef double rscale = rscaleConstM(m, c,rho_c, 200)\n\n cdef Py_ssize_t ngals = r_mpc.shape[0]\n\n cdef np.ndarray[DTYPE_T, ndim=1, mode='c'] gamma_inf = Sh(r_mpc, c, rscale, rho_c_sigma)\n cdef np.ndarray[DTYPE_T, ndim=1, mode='c'] kappa_inf = Kap(r_mpc, c, rscale, rho_c_sigma)\n\n\n cdef double delta = 0.\n cdef double modelg = 0.\n cdef double modsig = 0.\n\n cdef Py_ssize_t i\n cdef DTYPE_T logProb = 0.\n\n\n #calculate logprob\n for i from ngals > i >= 0:\n\n modelg = (beta[i]*gamma_inf[i] \/ (1 - beta[i]*kappa_inf[i]))\n\n delta = gtan[i] - modelg\n\n modsig = shear_err[i]\n\n logProb = logProb -.5*(delta\/modsig)**2 - logsqrt2pi - log(modsig)\n\n\n return logProb\n```\nbut 
when I run the compiled version of this function, I get the following error message: \n```\nFile \"Tools.pyx\", line 3, in Tools.likelihood \n def likelihood(double m,\nValueError: ndarray is not C-contiguous\n```\nI could not quite understand why this problem occurs??!!! I will appreciate to get any useful tips.","response":"Just before you get the error, try printing the flags attribute of the numpy array(s) you're passing to likelihood. You'll probably see something like: \n```\n\n```python\nfoo.flags\n#Output\n#C_CONTIGUOUS : False\n# F_CONTIGUOUS : True\n# OWNDATA : True\n# WRITEABLE : True\n# ALIGNED : True\n# UPDATEIFCOPY : False\n#```\n#Note where it says C_CONTIGUOUS : False, because that's the issue. To fix it, simply convert it to C-order: \n#```\n```\n\n```python\nfoo = foo.copy(order='C')\n```\n\n```python\nfoo.flags\n#Output\n#C_CONTIGUOUS : True\n# F_CONTIGUOUS : False\n# OWNDATA : True\n# WRITEABLE : True\n# ALIGNED : True\n# UPDATEIFCOPY : False\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26778079\/valueerror-ndarray-is-not-c-contiguous-in-cython","best_answers_votes":53,"tags":["python","numpy","cython"],"question_length":1743,"response_length":608,"tag_count":3} {"question":"Group by consecutive index numbers I was wondering if there is a way to groupby consecutive index numbers and move the groups in different columns. Here is an example of the DataFrame I'm using: \n```\n0\n0 19218.965703\n1 19247.621650\n2 19232.651322\n9 19279.216956\n10 19330.087371\n11 19304.316973\n```\nAnd my idea is to gruoup by sequential index numbers and get something like this: \n```\n0 1\n0 19218.965703 19279.216956 \n1 19247.621650 19330.087371\n2 19232.651322 19304.316973\n```\nIve been trying to split my data by blocks of 3 and then groupby but I was looking more about something that can be used to group and rearrange sequential index numbers. Thank you!","response":"Here is one way: \n```\nfrom more_itertools import consecutive_groups\nfinal=pd.concat([df.loc[i].reset_index(drop=True) \n for i in consecutive_groups(df.index)],axis=1)\nfinal.columns=range(len(final.columns))\nprint(final)\n```\n```\n0 1\n0 19218.965703 19279.216956\n1 19247.621650 19330.087371\n2 19232.651322 19304.316973\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/57713712\/group-by-consecutive-index-numbers","best_answers_votes":21,"tags":["python","pandas","numpy","group-by"],"question_length":710,"response_length":356,"tag_count":4} {"question":"Multivariate spline interpolation in python\/scipy? Is there a library module or other straightforward way to implement multivariate spline interpolation in python? Specifically, I have a set of scalar data on a regularly-spaced three-dimensional grid which I need to interpolate at a small number of points scattered throughout the domain. For two dimensions, I have been using scipy.interpolate.RectBivariateSpline, and I'm essentially looking for an extension of that to three-dimensional data. The N-dimensional interpolation routines I have found are not quite good enough: I would prefer splines over LinearNDInterpolator for smoothness, and I have far too many data points (often over one million) for, e.g., a radial basis function to work. 
If anyone knows of a python library that can do this, or perhaps one in another language that I could call or port, I'd really appreciate it.","response":"If I'm understanding your question correctly, your input \"observation\" data is regularly gridded? If so, scipy.ndimage.map_coordinates does exactly what you want. It's a bit hard to understand at first pass, but essentially, you just feed it a sequence of coordinates that you want to interpolate the values of the grid at in pixel\/voxel\/n-dimensional-index coordinates. As a 2D example: \n```\nimport numpy as np\nfrom scipy import ndimage\nimport matplotlib.pyplot as plt\n\n# Note that the output interpolated coords will be the same dtype as your input\n# data. If we have an array of ints, and we want floating point precision in\n# the output interpolated points, we need to cast the array as floats\ndata = np.arange(40).reshape((8,5)).astype(np.float)\n\n# I'm writing these as row, column pairs for clarity...\ncoords = np.array([[1.2, 3.5], [6.7, 2.5], [7.9, 3.5], [3.5, 3.5]])\n# However, map_coordinates expects the transpose of this\ncoords = coords.T\n\n# The \"mode\" kwarg here just controls how the boundaries are treated\n# mode='nearest' is _not_ nearest neighbor interpolation, it just uses the\n# value of the nearest cell if the point lies outside the grid. The default is\n# to treat the values outside the grid as zero, which can cause some edge\n# effects if you're interpolating points near the edge\n# The \"order\" kwarg controls the order of the splines used. The default is \n# cubic splines, order=3\nzi = ndimage.map_coordinates(data, coords, order=3, mode='nearest')\n\nrow, column = coords\nnrows, ncols = data.shape\nim = plt.imshow(data, interpolation='nearest', extent=[0, ncols, nrows, 0])\nplt.colorbar(im)\nplt.scatter(column, row, c=zi, vmin=data.min(), vmax=data.max())\nfor r, c, z in zip(row, column, zi):\n plt.annotate('%0.3f' % z, (c,r), xytext=(-10,10), textcoords='offset points',\n arrowprops=dict(arrowstyle='->'), ha='right')\nplt.show()\n```\nTo do this in n-dimensions, we just need to pass in the appropriate sized arrays: \n```\nimport numpy as np\nfrom scipy import ndimage\n\ndata = np.arange(3*5*9).reshape((3,5,9)).astype(np.float)\ncoords = np.array([[1.2, 3.5, 7.8], [0.5, 0.5, 6.8]])\nzi = ndimage.map_coordinates(data, coords.T)\n```\nAs far as scaling and memory usage goes, map_coordinates will create a filtered copy of the array if you're using an order > 1 (i.e. not linear interpolation). If you just want to interpolate at a very small number of points, this is a rather large overhead. It doesn't increase with the number points you want to interpolate at, however. As long as have enough RAM for a single temporary copy of your input data array, you'll be fine. If you can't store a copy of your data in memory, you can either a) specify prefilter=False and order=1 and use linear interpolation, or b) replace your original data with a filtered version using ndimage.spline_filter, and then call map_coordinates with prefilter=False. Even if you have enough ram, keeping the filtered dataset around can be a big speedup if you need to call map_coordinates multiple times (e.g. 
interactive use, etc).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6238250\/multivariate-spline-interpolation-in-python-scipy","best_answers_votes":48,"tags":["python","numpy","scipy","interpolation"],"question_length":889,"response_length":3040,"tag_count":4} {"question":"Comparing elements of an array to a scalar and getting the max in Python I want to compare the elements of an array to a scalar and get an array with the maximum of the compared values. That's I want to call \n```\nimport numpy as np\nnp.max([1,2,3,4], 3)\n```\nand want to get \n```\narray([3,3,3,4])\n```\nBut I get \n```\nValueError: 'axis' entry is out of bounds\n```\nWhen I run \n```\nnp.max([[1,2,3,4], 3])\n```\nI get \n```\n[1, 2, 3, 4]\n```\nwhich is one of the two elements in the list that is not the result I seek for. Is there a Numpy solution for that which is fast as the other built-in functions?","response":"This is already built into numpy with the function np.maximum: \n```py\na = np.arange(1,5)\nn = 3\n\nnp.maximum(a, n)\n#array([3, 3, 3, 4])\n```\nThis doesn't mutate a: \n```py\na\n#array([1, 2, 3, 4])\n```\nIf you want to mutate the original array as in @jamylak's answer, you can give a as the output: \n```py\nnp.maximum(a, n, a)\n#array([3, 3, 3, 4])\n\na\n#array([3, 3, 3, 4])\n```\nDocs: maximum(x1, x2[, out]) Element-wise maximum of array elements. Equivalent to np.where(x1 > x2, x1, x2) but faster and does proper broadcasting.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16587427\/comparing-elements-of-an-array-to-a-scalar-and-getting-the-max-in-python","best_answers_votes":42,"tags":["python","numpy"],"question_length":592,"response_length":516,"tag_count":2} {"question":"What is the difference between the types and ? Is there a difference between the types and ?","response":"numpy.string_ is the NumPy datatype used for arrays containing fixed-width byte strings. On the other hand, str is a native Python type and can not be used as a datatype for NumPy arrays*. If you create a NumPy array containing strings, the array will use the numpy.string_ type (or the numpy.unicode_ type in Python 3). 
More precisely, the array will use a sub-datatype of np.string_: \n```\n\n```python\na = np.array(['abc', 'xy'])\n```\n\n```python\na\n#Output\n#array(['abc', 'xy'], dtype='> 0\n\n# calculate the convolution (np.convolve and scipy.signal.convolve identical)\n# the keywordargument mode=\"same\" ensures that the convolution spans the same\n# shape as the input array.\n#filtered = scipy.signal.convolve(signal, gauss, mode='same') \nfiltered = np.convolve(signal, gauss, mode='same') \n\ndeconv, _ = scipy.signal.deconvolve( filtered, gauss )\n#the deconvolution has n = len(signal) - len(gauss) + 1 points\nn = len(signal)-len(gauss)+1\n# so we need to expand it by \ns = (len(signal)-n)\/2\n#on both sides.\ndeconv_res = np.zeros(len(signal))\ndeconv_res[s:len(signal)-s-1] = deconv\ndeconv = deconv_res\n# now deconv contains the deconvolution \n# expanded to the original shape (filled with zeros) \n\n\n#### Plot #### \nfig , ax = plt.subplots(nrows=4, figsize=(6,7))\n\nax[0].plot(signal, color=\"#907700\", label=\"original\", lw=3 ) \nax[1].plot(gauss, color=\"#68934e\", label=\"gauss filter\", lw=3 )\n# we need to divide by the sum of the filter window to get the convolution normalized to 1\nax[2].plot(filtered\/np.sum(gauss), color=\"#325cab\", label=\"convoluted\" , lw=3 )\nax[3].plot(deconv, color=\"#ab4232\", label=\"deconvoluted\", lw=3 ) \n\nfor i in range(len(ax)):\n ax[i].set_xlim([0, len(signal)])\n ax[i].set_ylim([-0.07, 1.2])\n ax[i].legend(loc=1, fontsize=11)\n if i != len(ax)-1 :\n ax[i].set_xticklabels([])\n\nplt.savefig(__file__ + \".png\")\nplt.show()\n```\nThis code produces the following image, showing exactly what we want (Deconvolve(Convolve(signal,gauss) , gauss) == signal) Some important findings are: The filter should be shorter than the signal The filter should be much bigger than zero everywhere (here > 0.013 is good enough) Using the keyword argument mode = 'same' to the convolution ensures that it lives on the same array shape as the signal. The deconvolution has n = len(signal) - len(gauss) + 1 points. So in order to let it also reside on the same original array shape we need to expand it by s = (len(signal)-n)\/2 on both sides. Of course, further findings, comments and suggestion to this question are still welcome.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40615034\/understanding-scipy-deconvolve","best_answers_votes":25,"tags":["python","numpy","scipy","signals","deconvolution"],"question_length":2620,"response_length":2648,"tag_count":5} {"question":"Dihedral\/Torsion Angle From Four Points in Cartesian Coordinates in Python What suggestions do people have for quickly calculating dihedral angles in Python? In the diagrams, phi is the dihedral angle: What's your best for calculating angles in the range 0 to pi? What about 0 to 2pi? \"Best\" here means some mix of fast and numerically stable. Methods that return values over the full range 0 to 2pi are preferred but if you have an incredibly fast way of calculating the dihedral over 0 to pi share that too. Here are my 3 best efforts. Only the 2nd one returns angles between 0 and 2pi. It's also the slowest. General comments about my approaches: arccos() in Numpy seems plenty stable but since people raise this issue I may just not fully understand it. The use of einsum came from here. Why is numpy's einsum faster than numpy's built in functions? The diagrams and some inspiration came from here. How do I calculate a dihedral angle given Cartesian coordinates? 
The 3 approaches with comments: \n```\nimport numpy as np\nfrom time import time\n\n# This approach tries to minimize magnitude and sqrt calculations\ndef dihedral1(p):\n # Calculate vectors between points, b1, b2, and b3 in the diagram\n b = p[:-1] - p[1:]\n # \"Flip\" the first vector so that eclipsing vectors have dihedral=0\n b[0] *= -1\n # Use dot product to find the components of b1 and b3 that are not\n # perpendicular to b2. Subtract those components. The resulting vectors\n # lie in parallel planes.\n v = np.array( [ v - (v.dot(b[1])\/b[1].dot(b[1])) * b[1] for v in [b[0], b[2]] ] )\n # Use the relationship between cos and dot product to find the desired angle.\n return np.degrees(np.arccos( v[0].dot(v[1])\/(np.linalg.norm(v[0]) * np.linalg.norm(v[1]))))\n\n# This is the straightforward approach as outlined in the answers to\n# \"How do I calculate a dihedral angle given Cartesian coordinates?\"\ndef dihedral2(p):\n b = p[:-1] - p[1:]\n b[0] *= -1\n v = np.array( [ v - (v.dot(b[1])\/b[1].dot(b[1])) * b[1] for v in [b[0], b[2]] ] )\n # Normalize vectors\n v \/= np.sqrt(np.einsum('...i,...i', v, v)).reshape(-1,1)\n b1 = b[1] \/ np.linalg.norm(b[1])\n x = np.dot(v[0], v[1])\n m = np.cross(v[0], b1)\n y = np.dot(m, v[1])\n return np.degrees(np.arctan2( y, x ))\n\n# This one starts with two cross products to get a vector perpendicular to\n# b2 and b1 and another perpendicular to b2 and b3. The angle between those vectors\n# is the dihedral angle.\ndef dihedral3(p):\n b = p[:-1] - p[1:]\n b[0] *= -1\n v = np.array( [np.cross(v,b[1]) for v in [b[0], b[2]] ] )\n # Normalize vectors\n v \/= np.sqrt(np.einsum('...i,...i', v, v)).reshape(-1,1)\n return np.degrees(np.arccos( v[0].dot(v[1]) ))\n\ndihedrals = [ (\"dihedral1\", dihedral1), (\"dihedral2\", dihedral2), (\"dihedral3\", dihedral3) ]\n```\nBenchmarking: \n```\n# Testing arccos near 0\n# Answer is 0.000057\np1 = np.array([\n [ 1, 0, 0 ],\n [ 0, 0, 0 ],\n [ 0, 0, 1 ],\n [ 0.999999, 0.000001, 1 ]\n ])\n\n# +x,+y\np2 = np.array([\n [ 1, 0, 0 ],\n [ 0, 0, 0 ],\n [ 0, 0, 1 ],\n [ 0.1, 0.6, 1 ]\n ])\n\n# -x,+y\np3 = np.array([\n [ 1, 0, 0 ],\n [ 0, 0, 0 ],\n [ 0, 0, 1 ],\n [-0.3, 0.6, 1 ]\n ])\n# -x,-y\np4 = np.array([\n [ 1, 0, 0 ],\n [ 0, 0, 0 ],\n [ 0, 0, 1 ],\n [-0.3, -0.6, 1 ]\n ])\n# +x,-y\np5 = np.array([\n [ 1, 0, 0 ],\n [ 0, 0, 0 ],\n [ 0, 0, 1 ],\n [ 0.6, -0.6, 1 ]\n ])\n\nfor d in dihedrals:\n name = d[0]\n f = d[1]\n print \"%s: %12.6f %12.6f %12.6f %12.6f %12.6f\" \\\n % (name, f(p1), f(p2), f(p3), f(p4), f(p5))\nprint\n\ndef profileDihedrals(f):\n t0 = time()\n for i in range(20000):\n p = np.random.random( (4,3) )\n f(p)\n p = np.random.randn( 4,3 )\n f(p)\n return(time() - t0)\n\nprint \"dihedral1: \", profileDihedrals(dihedral1)\nprint \"dihedral2: \", profileDihedrals(dihedral2)\nprint \"dihedral3: \", profileDihedrals(dihedral3)\n```\nBenchmarking output: \n```\ndihedral1: 0.000057 80.537678 116.565051 116.565051 45.000000\ndihedral2: 0.000057 80.537678 116.565051 -116.565051 -45.000000\ndihedral3: 0.000057 80.537678 116.565051 116.565051 45.000000\n\ndihedral1: 2.79781794548\ndihedral2: 3.74271392822\ndihedral3: 2.49604296684\n```\nAs you can see in the benchmarking, the last one tends to be the fastest while the second one is the only one that returns angles from the full range of 0 to 2pi since it uses arctan2.","response":"Here's an implementation for torsion angle over the full 2pi range that is a bit faster, doesn't resort to numpy quirks (einsum being mysteriously faster than logically equivalent code), and is 
easier to read. There's even a bit more than just hacks going on here -- the math is different too. The formula used in the question's dihedral2 uses 3 square roots and 1 cross product, the formula on Wikipedia uses 1 square root and 3 cross products, but the formula used in the function below uses only 1 square root and 1 cross product. This is probably as simple as the math can get. Functions with 2pi range function from question, Wikipedia formula for comparison, and the new function: dihedrals.py \n```\n#!\/usr\/bin\/env python\n# -*- coding: utf-8 -*-\n\nimport numpy as np\n\ndef old_dihedral2(p):\n \"\"\"http:\/\/stackoverflow.com\/q\/20305272\/1128289\"\"\"\n b = p[:-1] - p[1:]\n b[0] *= -1\n v = np.array( [ v - (v.dot(b[1])\/b[1].dot(b[1])) * b[1] for v in [b[0], b[2]] ] )\n # Normalize vectors\n v \/= np.sqrt(np.einsum('...i,...i', v, v)).reshape(-1,1)\n b1 = b[1] \/ np.linalg.norm(b[1])\n x = np.dot(v[0], v[1])\n m = np.cross(v[0], b1)\n y = np.dot(m, v[1])\n return np.degrees(np.arctan2( y, x ))\n\n\ndef wiki_dihedral(p):\n \"\"\"formula from Wikipedia article on \"Dihedral angle\"; formula was removed\n from the most recent version of article (no idea why, the article is a\n mess at the moment) but the formula can be found in at this permalink to\n an old version of the article:\n https:\/\/en.wikipedia.org\/w\/index.php?title=Dihedral_angle&oldid=689165217#Angle_between_three_vectors\n uses 1 sqrt, 3 cross products\"\"\"\n p0 = p[0]\n p1 = p[1]\n p2 = p[2]\n p3 = p[3]\n\n b0 = -1.0*(p1 - p0)\n b1 = p2 - p1\n b2 = p3 - p2\n\n b0xb1 = np.cross(b0, b1)\n b1xb2 = np.cross(b2, b1)\n\n b0xb1_x_b1xb2 = np.cross(b0xb1, b1xb2)\n\n y = np.dot(b0xb1_x_b1xb2, b1)*(1.0\/np.linalg.norm(b1))\n x = np.dot(b0xb1, b1xb2)\n\n return np.degrees(np.arctan2(y, x))\n\n\ndef new_dihedral(p):\n \"\"\"Praxeolitic formula\n 1 sqrt, 1 cross product\"\"\"\n p0 = p[0]\n p1 = p[1]\n p2 = p[2]\n p3 = p[3]\n\n b0 = -1.0*(p1 - p0)\n b1 = p2 - p1\n b2 = p3 - p2\n\n # normalize b1 so that it does not influence magnitude of vector\n # rejections that come next\n b1 \/= np.linalg.norm(b1)\n\n # vector rejections\n # v = projection of b0 onto plane perpendicular to b1\n # = b0 minus component that aligns with b1\n # w = projection of b2 onto plane perpendicular to b1\n # = b2 minus component that aligns with b1\n v = b0 - np.dot(b0, b1)*b1\n w = b2 - np.dot(b2, b1)*b1\n\n # angle between v and w in a plane is the torsion angle\n # v and w may not be normalized but that's fine since tan is y\/x\n x = np.dot(v, w)\n y = np.dot(np.cross(b1, v), w)\n return np.degrees(np.arctan2(y, x))\n```\nThe new function would probably be a bit more conveniently called with 4 separate arguments but it to match the signature in the original question it simply immediately unpacks the argument. 
Code for testing: test_dihedrals.ph \n```\nfrom dihedrals import *\n\n# some atom coordinates for testing\np0 = np.array([24.969, 13.428, 30.692]) # N\np1 = np.array([24.044, 12.661, 29.808]) # CA\np2 = np.array([22.785, 13.482, 29.543]) # C\np3 = np.array([21.951, 13.670, 30.431]) # O\np4 = np.array([23.672, 11.328, 30.466]) # CB\np5 = np.array([22.881, 10.326, 29.620]) # CG\np6 = np.array([23.691, 9.935, 28.389]) # CD1\np7 = np.array([22.557, 9.096, 30.459]) # CD2\n\n# I guess these tests do leave 1 quadrant (-x, +y) untested, oh well...\n\ndef test_old_dihedral2():\n assert(abs(old_dihedral2(np.array([p0, p1, p2, p3])) - (-71.21515)) < 1E-4)\n assert(abs(old_dihedral2(np.array([p0, p1, p4, p5])) - (-171.94319)) < 1E-4)\n assert(abs(old_dihedral2(np.array([p1, p4, p5, p6])) - (60.82226)) < 1E-4)\n assert(abs(old_dihedral2(np.array([p1, p4, p5, p7])) - (-177.63641)) < 1E-4)\n\n\ndef test_new_dihedral1():\n assert(abs(wiki_dihedral(np.array([p0, p1, p2, p3])) - (-71.21515)) < 1E-4)\n assert(abs(wiki_dihedral(np.array([p0, p1, p4, p5])) - (-171.94319)) < 1E-4)\n assert(abs(wiki_dihedral(np.array([p1, p4, p5, p6])) - (60.82226)) < 1E-4)\n assert(abs(wiki_dihedral(np.array([p1, p4, p5, p7])) - (-177.63641)) < 1E-4)\n\n\ndef test_new_dihedral2():\n assert(abs(new_dihedral(np.array([p0, p1, p2, p3])) - (-71.21515)) < 1E-4)\n assert(abs(new_dihedral(np.array([p0, p1, p4, p5])) - (-171.94319)) < 1E-4)\n assert(abs(new_dihedral(np.array([p1, p4, p5, p6])) - (60.82226)) < 1E-4)\n assert(abs(new_dihedral(np.array([p1, p4, p5, p7])) - (-177.63641)) < 1E-4)\n```\nCode for timing: time_dihedrals.py \n```\n#!\/usr\/bin\/env python\n# -*- coding: utf-8 -*-\n\nfrom dihedrals import *\nfrom time import time\n\ndef profileDihedrals(f):\n t0 = time()\n for i in range(20000):\n p = np.random.random( (4,3) )\n f(p)\n p = np.random.randn( 4,3 )\n f(p)\n return(time() - t0)\n\nprint(\"old_dihedral2: \", profileDihedrals(old_dihedral2))\nprint(\"wiki_dihedral: \", profileDihedrals(wiki_dihedral))\nprint(\"new_dihedral: \", profileDihedrals(new_dihedral))\n```\nThe functions can be tested with pytest as pytest .\/test_dihedrals.py. Timing results: \n```\n.\/time_dihedrals.py\nold_dihedral2: 1.6442952156066895\nwiki_dihedral: 1.3895585536956787\nnew_dihedral: 0.8703620433807373\n```\nnew_dihedral is about twice as fast as old_dihedral2. ...you can also see that the hardware used for this answer is a lot beefier than the hardware used in the question (3.74 vs 1.64 for dihedral2) ;-P If you want to get even more aggressive you can use pypy. At the time of writing pypy doesn't support numpy.cross but you can just use a cross product implemented in python instead. For a 3-vector cross product the C pypy generates is probably at least as good as what numpy uses. Doing so gets the time down to 0.60 for me but at this we're wading into silly hax. Same benchmark but with same hardware as used in the question: \n```\nold_dihedral2: 3.0171279907226562\nwiki_dihedral: 3.415065050125122\nnew_dihedral: 2.086946964263916\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20305272\/dihedral-torsion-angle-from-four-points-in-cartesian-coordinates-in-python","best_answers_votes":24,"tags":["python","math","numpy"],"question_length":5209,"response_length":6083,"tag_count":3} {"question":"How to randomly shuffle data and target in python? I have a 4D array training images, whose dimensions correspond to (num_images, channels, width, height). 
I also have a 2D target labels whose dimensions correspond to (num_images, class_number). When training, I want to randomly shuffle the data by using random.shuffle, but how can I keep the labels shuffled in the same order as my images?","response":"```py\nfrom sklearn.utils import shuffle\nimport numpy as np\n\nX = np.array([\n [0, 0, 0],\n [1, 1, 1], \n [2, 2, 2], \n [3, 3, 3], \n [4, 4, 4]\n])\ny = np.array([0, 1, 2, 3, 4])\nX, y = shuffle(X, y)\nprint(X)\nprint(y)\n```\nOutput: \n```py\n[[1 1 1]\n [3 3 3]\n [0 0 0]\n [2 2 2]\n [4 4 4]] \n\n[1 3 0 2 4]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35076223\/how-to-randomly-shuffle-data-and-target-in-python","best_answers_votes":94,"tags":["python","numpy"],"question_length":392,"response_length":296,"tag_count":2} {"question":"Numpy Array Get row index searching by a row I am new to numpy and I am implementing clustering with random forest in python. My question is: How could I find the index of the exact row in an array? For example \n```\n[[ 0. 5. 2.]\n [ 0. 0. 3.]\n [ 0. 0. 0.]]\n```\nand I look for [0. 0. 3.] and get as result 1(the index of the second row). Any suggestion? Follows the code (not working...) \n```\nfor index, element in enumerate(leaf_node.x):\n for index_second_element, element_two in enumerate(leaf_node.x):\n if (index <= index_second_element):\n index_row = np.where(X == element)\n index_column = np.where(X == element_two)\n self.similarity_matrix[index_row][index_column] += 1\n```","response":"Why not simply do something like this? \n```\n\n```python\na\n#Output\n#array([[ 0., 5., 2.],\n# [ 0., 0., 3.],\n# [ 0., 0., 0.]])\n```\n\n```python\nb\n#Output\n#array([ 0., 0., 3.])\n```\n\n```python\na==b\n#Output\n#array([[ True, False, False],\n# [ True, True, True],\n# [ True, True, False]], dtype=bool)\n```\n\n```python\nnp.all(a==b,axis=1)\n#Output\n#array([False, True, False], dtype=bool)\n```\n\n```python\nnp.where(np.all(a==b,axis=1))\n#Output\n#(array([1]),)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18927475\/numpy-array-get-row-index-searching-by-a-row","best_answers_votes":76,"tags":["python","arrays","numpy","random-forest"],"question_length":745,"response_length":383,"tag_count":4} {"question":"Python numpy array of numpy arrays I've got a problem on creating a numpy array of numpy arrays. I would create it in a loop: \n```\na=np.array([])\nwhile(...):\n ...\n b= \/\/a numpy array generated\n a=np.append(a,b)\n ...\n```\nDesired result: \n```\n[[1,5,3], [9,10,1], ..., [4,8,6]]\n```\nReal result: \n```\n[1,5,3,9,10,1,... 4,8,6]\n```\nIs it possible? I don't know the final dimension of the array, so I can't initialize it with a fixed dimension.","response":"Never append to numpy arrays in a loop: it is the one operation that NumPy is very bad at compared with basic Python. This is because you are making a full copy of the data each append, which will cost you quadratic time. Instead, just append your arrays to a Python list and convert it at the end; the result is simpler and faster: \n```\na = []\n\nwhile ...:\n b = ... # NumPy array\n a.append(b)\na = np.asarray(a)\n```\nAs for why your code doesn't work: np.append doesn't behave like list.append at all. In particular, it won't create new dimensions when appending. 
You would have to create the initial array with two dimensions, then append with an explicit axis argument.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/31250129\/python-numpy-array-of-numpy-arrays","best_answers_votes":66,"tags":["python","arrays","numpy"],"question_length":445,"response_length":675,"tag_count":3} {"question":"Python: How to get values of an array at certain index positions? I have a numpy array like this: \n```\na = [0,88,26,3,48,85,65,16,97,83,91]\n```\nHow can I get the values at certain index positions in ONE step? For example: \n```\nind_pos = [1,5,7]\n```\nThe result should be: \n```\n[88,85,16]\n```","response":"Just index using you ind_pos \n```\nind_pos = [1,5,7]\nprint (a[ind_pos]) \n[88 85 16]\n\n```python\na = [0,88,26,3,48,85,65,16,97,83,91]\n```\n\n```python\nimport numpy as np\n```\n\n```python\narr = np.array(a)\n```\n\n```python\nind_pos = [1,5,7]\n```\n\n```python\narr[ind_pos]\n#Output\n#array([88, 85, 16])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25201438\/python-how-to-get-values-of-an-array-at-certain-index-positions","best_answers_votes":36,"tags":["python","arrays","numpy","indexing"],"question_length":290,"response_length":271,"tag_count":4} {"question":"Numpy where() on a 2D matrix I have a matrix like this \n```\nt = np.array([[1,2,3,'foo'],\n [2,3,4,'bar'],\n [5,6,7,'hello'],\n [8,9,1,'bar']])\n```\nI want to get the indices where the rows contain the string 'bar' In a 1d array \n```\nrows = np.where(t == 'bar')\n```\nshould give me the indices [0,3] followed by broadcasting:- \n```\nresults = t[rows]\n```\nshould give me the right rows But I can't figure out how to get it to work with 2d arrays.","response":"For the general case, where your search string can be in any column, you can do this: \n```\n\n```python\nrows, cols = np.where(t == 'bar')\n```\n\n```python\nt[rows]\n#Output\n#array([['2', '3', '4', 'bar'],\n# ['8', '9', '1', 'bar']],\n# dtype='|S11')\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/24219841\/numpy-where-on-a-2d-matrix","best_answers_votes":31,"tags":["python","arrays","numpy","matrix"],"question_length":438,"response_length":227,"tag_count":4} {"question":"python - RGB matrix of an image Taking an image as input, how can I get the rgb matrix corresponding to it? I checked out the numpy.asarray function. Does that give me the rgb matrix or some other matrix?","response":"Note that this answer is outdated as of 2018; scipy has deprecated imread, and you should switch to imageio.imread. See this transition doc about differences between the two. The code below should work with no changes if you just import the new library in place of the old, but I haven\u2019t tested it. The simplest answer is to use the NumPy and SciPy wrappers around PIL. There's a great tutorial, but the basic idea is: \n```\nfrom scipy import misc\narr = misc.imread('lena.png') # 640x480x3 array\narr[20, 30] # 3-vector for a pixel\narr[20, 30, 1] # green value for a pixel\n```\nFor a 640x480 RGB image, this will give you a 640x480x3 array of uint8. 
Or you can just open the file with PIL (or, rather, Pillow; if you're still using PIL, this may not work, or may be very slow) and pass it straight to NumPy: \n```\nimport numpy as np\nfrom PIL import Image\nimg = Image.open('lena.png')\narr = np.array(img) # 640x480x4 array\narr[20, 30] # 4-vector, just like above\n```\nThis will give you a 640x480x4 array of type uint8 (the 4th is alpha; PIL always loads PNG files as RGBA, even if they have no transparency; see img.getbands() if you're every unsure). If you don't want to use NumPy at all, PIL's own PixelArray type is a more limited array: \n```\narr = img.load()\narr[20, 30] # tuple of 4 ints\n```\nThis gives you a 640x480 PixelAccess array of RGBA 4-tuples. Or you can just call getpixel on the image: \n```\nimg.getpixel(20, 30) # tuple of 4 ints\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25102461\/python-rgb-matrix-of-an-image","best_answers_votes":25,"tags":["python","image","numpy","python-imaging-library"],"question_length":204,"response_length":1445,"tag_count":4} {"question":"plot data from CSV file with matplotlib I have a CSV file at e:\\dir1\\datafile.csv. It contains three columns and 10 heading and trailing lines need to be skipped. I would like to plot it with numpy.loadtxt(), for which I haven't found any rigorous documentation. Here is what I started to write from the several tries I found on the web. \n```\nimport matplotlib as mpl\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.cbook as cbook\n\ndef read_datafile(file_name):\n # the skiprows keyword is for heading, but I don't know if trailing lines\n # can be specified\n data = np.loadtxt(file_name, delimiter=',', skiprows=10)\n return data\n\ndata = read_datafile('e:\\dir1\\datafile.csv')\n\nx = ???\ny = ???\n\nfig = plt.figure()\n\nax1 = fig.add_subplot(111)\n\nax1.set_title(\"Mains power stability\") \nax1.set_xlabel('time')\nax1.set_ylabel('Mains voltage')\n\nax1.plot(x,y, c='r', label='the data')\n\nleg = ax1.legend()\n\nplt.show()\n```","response":"According to the docs numpy.loadtxt is a fast reader for simply formatted files. The genfromtxt function provides more sophisticated handling of, e.g., lines with missing values. so there are only a few options to handle more complicated files. As mentioned numpy.genfromtxt has more options. So as an example you could use \n```\nimport numpy as np\ndata = np.genfromtxt('e:\\dir1\\datafile.csv', delimiter=',', skip_header=10,\n skip_footer=10, names=['x', 'y', 'z'])\n```\nto read the data and assign names to the columns (or read a header line from the file with names=True) and than plot it with \n```\nax1.plot(data['x'], data['y'], color='r', label='the data')\n```\nI think numpy is quite well documented now. You can easily inspect the docstrings from within ipython or by using an IDE like spider if you prefer to read them rendered as HTML.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13545388\/plot-data-from-csv-file-with-matplotlib","best_answers_votes":48,"tags":["python","numpy","matplotlib"],"question_length":946,"response_length":859,"tag_count":3} {"question":"Why is numpy.linalg.pinv() preferred over numpy.linalg.inv() for creating inverse of a matrix in linear regression If we want to search for the optimal parameters theta for a linear regression model by using the normal equation with: theta = inv(X^T * X) * X^T * y one step is to calculate inv(X^T*X). 
Therefore numpy provides np.linalg.inv() and np.linalg.pinv() Though this leads to different results: \n```\nX=np.matrix([[1,2104,5,1,45],[1,1416,3,2,40],[1,1534,3,2,30],[1,852,2,1,36]])\ny=np.matrix([[460],[232],[315],[178]])\n\nXT=X.T\nXTX=XT@X\n\npinv=np.linalg.pinv(XTX)\ntheta_pinv=(pinv@XT)@y\nprint(theta_pinv)\n\n[[188.40031946]\n [ 0.3866255 ]\n [-56.13824955]\n [-92.9672536 ]\n [ -3.73781915]]\n\ninv=np.linalg.inv(XTX)\ntheta_inv=(inv@XT)@y\nprint(theta_inv)\n\n[[-648.7890625 ]\n [ 0.79418945]\n [-110.09375 ]\n [ -74.0703125 ]\n [ -3.69091797]]\n```\nThe first output, that is the output of pinv is the correct one and additionally recommended in the numpy.linalg.pinv() docs. But why is this and where are the differences \/ Pros \/ Cons between inv() and pinv().","response":"If the determinant of the matrix is zero it will not have an inverse and your inv function will not work. This usually happens if your matrix is singular. But pinv will. This is because pinv returns the inverse of your matrix when it is available and the pseudo inverse when it isn't. The different results of the functions are because of rounding errors in floating point arithmetic You can read more about how pseudo inverse works here","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/49357417\/why-is-numpy-linalg-pinv-preferred-over-numpy-linalg-inv-for-creating-invers","best_answers_votes":35,"tags":["python","numpy","matrix","linear-algebra","linear-regression"],"question_length":1056,"response_length":437,"tag_count":5} {"question":"How can I efficiently process a numpy array in blocks similar to Matlab's blkproc (blockproc) function I'm looking for a good approach for efficiently dividing an image into small regions, processing each region separately, and then re-assembling the results from each process into a single processed image. Matlab had a tool for this called blkproc (replaced by blockproc in newer versions of Matlab). In an ideal world, the function or class would support overlap between the divisions in the input matrix too. In the Matlab help, blkproc is defined as: B = blkproc(A,[m n],[mborder nborder],fun,...) A is your input matrix, [m n] is the block size [mborder, nborder] is the size of your border region (optional) fun is a function to apply to each block I have kluged together an approach, but it strikes me as clumsy and I bet there's a much better way. 
At the risk of my own embarrassment, here's my code: \n```\nimport numpy as np\n\ndef segmented_process(M, blk_size=(16,16), overlap=(0,0), fun=None):\n rows = []\n for i in range(0, M.shape[0], blk_size[0]):\n cols = []\n for j in range(0, M.shape[1], blk_size[1]):\n cols.append(fun(M[i:i+blk_size[0], j:j+blk_size[1]]))\n rows.append(np.concatenate(cols, axis=1))\n return np.concatenate(rows, axis=0)\n\nR = np.random.rand(128,128)\npassthrough = lambda(x):x\nRprime = segmented_process(R, blk_size=(16,16), \n overlap=(0,0), \n fun=passthrough)\n\nnp.all(R==Rprime)\n```","response":"Here are some examples of a different (loop free) way to work with blocks: \n```\nimport numpy as np\nfrom numpy.lib.stride_tricks import as_strided as ast\n\nA= np.arange(36).reshape(6, 6)\nprint A\n#[[ 0 1 2 3 4 5]\n# [ 6 7 8 9 10 11]\n# ...\n# [30 31 32 33 34 35]]\n\n# 2x2 block view\nB= ast(A, shape= (3, 3, 2, 2), strides= (48, 8, 24, 4))\nprint B[1, 1]\n#[[14 15]\n# [20 21]]\n\n# for preserving original shape\nB[:, :]= np.dot(B[:, :], np.array([[0, 1], [1, 0]]))\nprint A\n#[[ 1 0 3 2 5 4]\n# [ 7 6 9 8 11 10]\n# ...\n# [31 30 33 32 35 34]]\nprint B[1, 1]\n#[[15 14]\n# [21 20]]\n\n# for reducing shape, processing in 3D is enough\nC= B.reshape(3, 3, -1)\nprint C.sum(-1)\n#[[ 14 22 30]\n# [ 62 70 78]\n# [110 118 126]]\n```\nSo just trying to simply copy the matlab functionality to numpy is not all ways the best way to proceed. Sometimes a 'off the hat' thinking is needed. Caveat: In general, implementations based on stride tricks may (but does not necessary need to) suffer some performance penalties. So be prepared to all ways measure your performance. In any case it's wise to first check if the needed functionality (or similar enough, in order to easily adapt for) has all ready been implemented in numpy or scipy. Update: Please note that there is no real magic involved here with the strides, so I'll provide a simple function to get a block_view of any suitable 2D numpy-array. So here we go: \n```\nfrom numpy.lib.stride_tricks import as_strided as ast\n\ndef block_view(A, block= (3, 3)):\n \"\"\"Provide a 2D block view to 2D array. No error checking made.\n Therefore meaningful (as implemented) only for blocks strictly\n compatible with the shape of A.\"\"\"\n # simple shape and strides computations may seem at first strange\n # unless one is able to recognize the 'tuple additions' involved ;-)\n shape= (A.shape[0]\/ block[0], A.shape[1]\/ block[1])+ block\n strides= (block[0]* A.strides[0], block[1]* A.strides[1])+ A.strides\n return ast(A, shape= shape, strides= strides)\n\nif __name__ == '__main__':\n from numpy import arange\n A= arange(144).reshape(12, 12)\n print block_view(A)[0, 0]\n #[[ 0 1 2]\n # [12 13 14]\n # [24 25 26]]\n print block_view(A, (2, 6))[0, 0]\n #[[ 0 1 2 3 4 5]\n # [12 13 14 15 16 17]]\n print block_view(A, (3, 12))[0, 0]\n #[[ 0 1 2 3 4 5 6 7 8 9 10 11]\n # [12 13 14 15 16 17 18 19 20 21 22 23]\n # [24 25 26 27 28 29 30 31 32 33 34 35]]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5073767\/how-can-i-efficiently-process-a-numpy-array-in-blocks-similar-to-matlabs-blkpro","best_answers_votes":26,"tags":["python","matlab","image-processing","numpy","scipy"],"question_length":1505,"response_length":2437,"tag_count":5} {"question":"What is the difference between numpy var() and statistics variance() in python? I was trying one Dataquest exercise and I figured out that the variance I am getting is different for the two packages. 
e.g for [1,2,3,4] \n```\nfrom statistics import variance\nimport numpy as np\nprint(np.var([1,2,3,4]))\nprint(variance([1,2,3,4]))\n\/\/1.25\n\/\/1.6666666666666667\n```\nThe expected answer of the exercise is calculated with np.var() Edit I guess it has to do that the later one is sample variance and not variance. Anyone could explain the difference?","response":"Use this \n```\nprint(np.var([1,2,3,4],ddof=1))\n\n1.66666666667\n```\nDelta Degrees of Freedom: the divisor used in the calculation is N - ddof, where N represents the number of elements. By default, ddof is zero. The mean is normally calculated as x.sum() \/ N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. Statistical libraries like numpy use the variance n for what they call var or variance and the standard deviation For more information refer this documentation : numpy doc","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41204400\/what-is-the-difference-between-numpy-var-and-statistics-variance-in-python","best_answers_votes":40,"tags":["python","numpy"],"question_length":540,"response_length":741,"tag_count":2} {"question":"python - invalid value encountered in log I have the following expression: log = np.sum(np.nan_to_num(-y*np.log(a+ 1e-7)-(1-y)*np.log(1-a+ 1e-7))) it is giving me the following warning: \n```\nRuntimeWarning: invalid value encountered in log\n log = np.sum(np.nan_to_num(-y*np.log(a+ 1e-7)-(1-y)*np.log(1-a+ 1e-7)))\n```\nI don't understand what might be the invalid value or why am I getting it. Any and every help is appreciated. NOTE: This is a cross-entropy cost function where I added 1e-7 to avoid having zeros inside log. y & a are numpy arrays and numpy is imported as np.","response":"You probably still have negative values inside the log, which gives nan with real numbers. a and y should represent probability between 0 to 1, So you need to check why do you have smaller\/larger values there. Adding 1e-7 shows there is something fishy, because np.log(0) gives -inf, which I think is the value you want.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37496299\/python-invalid-value-encountered-in-log","best_answers_votes":32,"tags":["python","numpy","math","cross-entropy"],"question_length":576,"response_length":320,"tag_count":4} {"question":"NumPy types with underscore: `int_`, `float_`, etc What is the significance of the underscore suffixing in int_, float_, etc.?","response":"From page 21 of Guide to Numpy by TE Oliphant: Names for the data types that would clash with standard Python object names are followed by a trailing underscore, \u2019 \u2019. These data types are so named because they use the same underlying precision as the corresponding Python data types. . . . The array types bool_, int_, complex_, float_, object_, unicode_, and str_ are enhanced-scalars. They are very similar to the standard Python types (without the trailing underscore) and inherit from them (except for bool_ and object_). They can be used in place of the standard Python types whenever desired. 
Whenever a data type is required, as an argument, the standard Python types are recognized as well.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6205020\/numpy-types-with-underscore-int-float-etc","best_answers_votes":29,"tags":["python","numpy","naming-conventions"],"question_length":126,"response_length":698,"tag_count":3} {"question":"Weird indexing using numpy I have a variable, x, that is of the shape (2,2,50,100). I also have an array, y, that equals np.array([0,10,20]). A weird thing happens when I index x[0,:,:,y]. \n```\nx = np.full((2,2,50,100),np.nan)\ny = np.array([0,10,20])\nprint(x.shape)\n(2,2,50,100)\nprint(x[:,:,:,y].shape)\n(2,2,50,3)\nprint(x[0,:,:,:].shape)\n(2,50,100)\nprint(x[0,:,:,y].shape)\n(3,2,50)\n```\nWhy does the last one output (3,2,50) and not (2,50,3)?","response":"This is how numpy uses advanced indexing to broadcast array shapes. When you pass a 0 for the first index, and y for the last index, numpy will broadcast the 0 to be the same shape as y. The following equivalence holds: x[0,:,:,y] == x[(0, 0, 0),:,:,y]. here is an example \n```\nimport numpy as np\n\nx = np.arange(120).reshape(2,3,4,5)\ny = np.array([0,2,4])\n\nnp.equal(x[0,:,:,y], x[(0, 0, 0),:,:,y]).all()\n# returns:\nTrue\n```\nNow, because you are effectively passing in two sets of indices, you are using the advanced indexing API to form (in this case) pairs of indices. \n```\nx[(0, 0, 0),:,:,y])\n\n# equivalent to\n[\n x[0,:,:,y[0]], \n x[0,:,:,y[1]], \n x[0,:,:,y[2]]\n]\n\n# equivalent to\nrows = np.array([0, 0, 0])\ncols = y\nx[rows,:,:,cols]\n\n# equivalent to\n[\n x[r,:,:,c] for r, c in zip(rows, columns)\n]\n```\nWhich has a first dimension that same as the length of y. This is what you are seeing. As an example, look at an array with 4 dimensions which are described in the next chunk: \n```\nx = np.arange(120).reshape(2,3,4,5)\ny = np.array([0,2,4])\n\n# x looks like:\narray([[[[ 0, 1, 2, 3, 4], -+ =+\n [ 5, 6, 7, 8, 9], Sheet1 |\n [ 10, 11, 12, 13, 14], | |\n [ 15, 16, 17, 18, 19]], -+ |\n Workbook1\n [[ 20, 21, 22, 23, 24], -+ |\n [ 25, 26, 27, 28, 29], Sheet2 |\n [ 30, 31, 32, 33, 34], | |\n [ 35, 36, 37, 38, 39]], -+ |\n |\n [[ 40, 41, 42, 43, 44], -+ |\n [ 45, 46, 47, 48, 49], Sheet3 |\n [ 50, 51, 52, 53, 54], | |\n [ 55, 56, 57, 58, 59]]], -+ =+\n\n\n [[[ 60, 61, 62, 63, 64],\n [ 65, 66, 67, 68, 69],\n [ 70, 71, 72, 73, 74],\n [ 75, 76, 77, 78, 79]],\n\n [[ 80, 81, 82, 83, 84],\n [ 85, 86, 87, 88, 89],\n [ 90, 91, 92, 93, 94],\n [ 95, 96, 97, 98, 99]],\n\n [[100, 101, 102, 103, 104],\n [105, 106, 107, 108, 109],\n [110, 111, 112, 113, 114],\n [115, 116, 117, 118, 119]]]])\n```\nx has a really easy to understand sequential form that we can now use to show what is happening... The first dimension is like having 2 Excel Workbooks, the second dimension is like having 3 sheets in each workbook, the third dimension is like having 4 rows per sheet, and the last dimension is 5 values for each row (or columns per sheet). Looking at it this way, asking for x[0,:,:,0], is the saying: \"in the first workbook, for each sheet, for each row, give me the first value\/column.\" \n```\nx[0,:,:,y[0]]\n# returns:\narray([[ 0, 5, 10, 15],\n [20, 25, 30, 35],\n [40, 45, 50, 55]])\n\n# this is in the same as the first element in:\nx[(0,0,0),:,:,y]\n```\nBut now with advanced indexing, we can think of x[(0,0,0),:,:,y] as \"in the first workbook, for each sheet, for each row, give me the yth value\/column. 
Ok, now do it for each value of y\" \n```\nx[(0,0,0),:,:,y]\n# returns:\narray([[[ 0, 5, 10, 15],\n [20, 25, 30, 35],\n [40, 45, 50, 55]],\n\n [[ 2, 7, 12, 17],\n [22, 27, 32, 37],\n [42, 47, 52, 57]],\n\n [[ 4, 9, 14, 19],\n [24, 29, 34, 39],\n [44, 49, 54, 59]]])\n```\nWhere it gets crazy is that numpy will broadcast to match the outer dimensions of index array. So if you want to do that same operation as above, but for BOTH \"Excel workbooks\", you don't have to loop and concatenate. You can just pass an array to the first dimension, but it MUST have a compatible shape. Passing an integer gets broadcast to y.shape == (3,). If you want to pass an array as the first index, only the last dimension of the array has to be compatible with y.shape. I.e., the last dimension of the first index must either be 3 or 1. \n```py\nix = np.array([[0], [1]])\nx[ix,:,:,y].shape\n# each row of ix is broadcast to length 3:\n(2, 3, 3, 4)\n\nix = np.array([[0,0,0], [1,1,1]])\nx[ix,:,:,y].shape\n# this is identical to above:\n(2, 3, 3, 4)\n\nix = np.array([[0], [1], [0], [1], [0]])\nx[ix,:,:,y].shape\n# ix is broadcast so each row of ix has 3 columns, the length of y\n(5, 3, 3, 4)\n```\nFound a short explanation in the docs: https:\/\/docs.scipy.org\/doc\/numpy\/reference\/arrays.indexing.html#combining-advanced-and-basic-indexing Edit: From the original question, to get a one-liner of your desired subslicing, you can use x[0][:,:,y]: \n```\nx[0][:,:,y].shape\n# returns\n(2, 50, 3)\n```\nHowever, if you are trying to assign to those subslices, you have to be very careful that you are looking at a shared memory view of the original array. Otherwise the assignment won't be to the original array, but a copy. Shared memory only occurs when you are use an integer or slice to subset your array, i.e. x[:,0:3,:,:] or x[0,:,:,1:-1]. \n```py\nnp.shares_memory(x, x[0])\n# returns:\nTrue\n\nnp.shares_memory(x, x[:,:,:,y])\n# returns:\nFalse\n```\nIn both your original question and my example y is neither an int or a slice, so will always end up assigning to a copy of the original. BUT! Because your array for y can be expressed as a slice, you CAN actually get an assignable view of your array via: \n```py\nx[0,:,:,0:21:10].shape\n# returns:\n(2, 50, 3)\n\nnp.shares_memory(x, x[0,:,:,0:21:10])\n# returns:\nTrue\n\n# actually assigns to the original array\nx[0,:,:,0:21:10] = 100\n```\nHere we use the slice 0:21:10 to grab every index that would be in range(0,21,10). We have to use 21 and not 20 because the stop-point is excluded from the slice, just like in the range function. So basically, if you can construct a slice that fits your subslicing criteria, you can do assignment.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/60422693\/weird-indexing-using-numpy","best_answers_votes":24,"tags":["python","numpy"],"question_length":441,"response_length":5709,"tag_count":2} {"question":"Perform 2 sample t-test I have a the mean, std dev and n of sample 1 and sample 2 - samples are taken from the sample population, but measured by different labs. n is different for sample 1 and sample 2. I want to do a weighted (take n into account) two-tailed t-test. I tried using the scipy.stat module by creating my numbers with np.random.normal, since it only takes data and not stat values like mean and std dev (is there any way to use these values directly). But it didn't work since the data arrays has to be of equal size. 
Any help on how to get the p-value would be highly appreciated.","response":"If you have the original data as arrays a and b, you can use scipy.stats.ttest_ind with the argument equal_var=False: \n```\nt, p = ttest_ind(a, b, equal_var=False)\n```\nIf you have only the summary statistics of the two data sets, you can calculate the t value using scipy.stats.ttest_ind_from_stats (added to scipy in version 0.16) or from the formula (http:\/\/en.wikipedia.org\/wiki\/Welch%27s_t_test). The following script shows the possibilities. \n```\nfrom __future__ import print_function\n\nimport numpy as np\nfrom scipy.stats import ttest_ind, ttest_ind_from_stats\nfrom scipy.special import stdtr\n\nnp.random.seed(1)\n\n# Create sample data.\na = np.random.randn(40)\nb = 4*np.random.randn(50)\n\n# Use scipy.stats.ttest_ind.\nt, p = ttest_ind(a, b, equal_var=False)\nprint(\"ttest_ind: t = %g p = %g\" % (t, p))\n\n# Compute the descriptive statistics of a and b.\nabar = a.mean()\navar = a.var(ddof=1)\nna = a.size\nadof = na - 1\n\nbbar = b.mean()\nbvar = b.var(ddof=1)\nnb = b.size\nbdof = nb - 1\n\n# Use scipy.stats.ttest_ind_from_stats.\nt2, p2 = ttest_ind_from_stats(abar, np.sqrt(avar), na,\n bbar, np.sqrt(bvar), nb,\n equal_var=False)\nprint(\"ttest_ind_from_stats: t = %g p = %g\" % (t2, p2))\n\n# Use the formulas directly.\ntf = (abar - bbar) \/ np.sqrt(avar\/na + bvar\/nb)\ndof = (avar\/na + bvar\/nb)**2 \/ (avar**2\/(na**2*adof) + bvar**2\/(nb**2*bdof))\npf = 2*stdtr(dof, -np.abs(tf))\n\nprint(\"formula: t = %g p = %g\" % (tf, pf))\n```\nThe output: \n```\nttest_ind: t = -1.5827 p = 0.118873\nttest_ind_from_stats: t = -1.5827 p = 0.118873\nformula: t = -1.5827 p = 0.118873\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22611446\/perform-2-sample-t-test","best_answers_votes":77,"tags":["python","numpy","statistics"],"question_length":596,"response_length":1658,"tag_count":3} {"question":"Generate random array of 0 and 1 with a specific ratio I want to generate a random array of size N which only contains 0 and 1, I want my array to have some ratio between 0 and 1. For example, 90% of the array be 1 and the remaining 10% be 0 (I want this 90% to be random along with the whole array). right now I have: \n```\nrandomLabel = np.random.randint(2, size=numbers)\n```\nBut I can't control the ratio between 0 and 1.","response":"If you want an exact 1:9 ratio: \n```\nnums = numpy.ones(1000)\nnums[:100] = 0\nnumpy.random.shuffle(nums)\n```\nIf you want independent 10% probabilities: \n```\nnums = numpy.random.choice([0, 1], size=1000, p=[.1, .9])\n```\nor \n```\nnums = (numpy.random.rand(1000) > 0.1).astype(int)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21566744\/generate-random-array-of-0-and-1-with-a-specific-ratio","best_answers_votes":69,"tags":["python","random","numpy"],"question_length":423,"response_length":279,"tag_count":3} {"question":"Python: Setting an element of a Numpy matrix I am a pretty new to python. 
I have created an empty matrix \n```\na = numpy.zeros(shape=(n,n))\n```\nNow I can access each element using \n```\na.item(i,j)\n```\nHow do I set an index (i,j)?","response":"Here's how: \n```\na[i,j] = x\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2220968\/python-setting-an-element-of-a-numpy-matrix","best_answers_votes":66,"tags":["python","numpy"],"question_length":228,"response_length":31,"tag_count":2} {"question":"Loading text file containing both float and string using numpy.loadtxt I have a text file, data.txt, which contains: \n```\n5.1,3.5,1.4,0.2,Iris-setosa\n4.9,3.0,1.4,0.2,Iris-setosa\n5.8,2.7,4.1,1.0,Iris-versicolor\n6.2,2.2,4.5,1.5,Iris-versicolor\n6.4,3.1,5.5,1.8,Iris-virginica\n6.0,3.0,4.8,1.8,Iris-virginica\n```\nHow do I load this data using numpy.loadtxt() so that I get a NumPy array after loading such as [['5.1' '3.5' '1.4' '0.2' 'Iris-setosa'] ['4.9' '3.0' '1.4' '0.2' 'Iris-setosa'] ...]? I tried \n```\nnp.loadtxt(open(\"data.txt\"), 'r',\n dtype={\n 'names': (\n 'sepal length', 'sepal width', 'petal length',\n 'petal width', 'label'),\n 'formats': (\n np.float, np.float, np.float, np.float, np.str)},\n delimiter= ',', skiprows=0)\n```","response":"If you use np.genfromtxt, you could specify dtype=None, which will tell genfromtxt to intelligently guess the dtype of each column. Most conveniently, it relieves you of the burder of specifying the number of bytes required for the string column. (Omitting the number of bytes, by specifying e.g. np.str, does not work.) \n```\n\n```python\nnp.genfromtxt('data.txt', delimiter=',', dtype=None, names=('sepal length', 'sepal width', 'petal length', 'petal width', 'label'))\n#Output\n#array([(5.1, 3.5, 1.4, 0.2, 'Iris-setosa'),\n# (4.9, 3.0, 1.4, 0.2, 'Iris-setosa'),\n# (5.8, 2.7, 4.1, 1.0, 'Iris-versicolor'),\n# (6.2, 2.2, 4.5, 1.5, 'Iris-versicolor'),\n# (6.4, 3.1, 5.5, 1.8, 'Iris-virginica'),\n# (6.0, 3.0, 4.8, 1.8, 'Iris-virginica')], \n# dtype=[('sepal_length', '\", markerfacecolor='none')\ncdf = ECDF(grades)\nplt.plot(cdf.x, cdf.y, label=\"statmodels\", marker=\"<\", markerfacecolor='none')\nplt.legend()\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15792552\/numpy-scipy-equivalent-of-r-ecdfxx-function","best_answers_votes":35,"tags":["python","r","numpy","scipy"],"question_length":372,"response_length":1146,"tag_count":4} {"question":"Fitting a line in 3D Are there any algorithms that will return the equation of a straight line from a set of 3D data points? I can find plenty of sources which will give the equation of a line from 2D data sets, but none in 3D.","response":"If you are trying to predict one value from the other two, then you should use lstsq with the a argument as your independent variables (plus a column of 1's to estimate an intercept) and b as your dependent variable. If, on the other hand, you just want to get the best fitting line to the data, i.e. the line which, if you projected the data onto it, would minimize the squared distance between the real point and its projection, then what you want is the first principal component. One way to define it is the line whose direction vector is the eigenvector of the covariance matrix corresponding to the largest eigenvalue, that passes through the mean of your data. 
That said, eig(cov(data)) is a really bad way to calculate it, since it does a lot of needless computation and copying and is potentially less accurate than using svd. See below: \n```\nimport numpy as np\n\n# Generate some data that lies along a line\n\nx = np.mgrid[-2:5:120j]\ny = np.mgrid[1:9:120j]\nz = np.mgrid[-5:3:120j]\n\ndata = np.concatenate((x[:, np.newaxis], \n y[:, np.newaxis], \n z[:, np.newaxis]), \n axis=1)\n\n# Perturb with some Gaussian noise\ndata += np.random.normal(size=data.shape) * 0.4\n\n# Calculate the mean of the points, i.e. the 'center' of the cloud\ndatamean = data.mean(axis=0)\n\n# Do an SVD on the mean-centered data.\nuu, dd, vv = np.linalg.svd(data - datamean)\n\n# Now vv[0] contains the first principal component, i.e. the direction\n# vector of the 'best fit' line in the least squares sense.\n\n# Now generate some points along this best fit line, for plotting.\n\n# I use -7, 7 since the spread of the data is roughly 14\n# and we want it to have mean 0 (like the points we did\n# the svd on). Also, it's a straight line, so we only need 2 points.\nlinepts = vv[0] * np.mgrid[-7:7:2j][:, np.newaxis]\n\n# shift by the mean to get the line in the right place\nlinepts += datamean\n\n# Verify that everything looks right.\n\nimport matplotlib.pyplot as plt\nimport mpl_toolkits.mplot3d as m3d\n\nax = m3d.Axes3D(plt.figure())\nax.scatter3D(*data.T)\nax.plot3D(*linepts.T)\nplt.show()\n```\nHere's what it looks like:","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2298390\/fitting-a-line-in-3d","best_answers_votes":65,"tags":["python","numpy","linear-algebra","curve-fitting","matplotlib-3d"],"question_length":227,"response_length":2144,"tag_count":5} {"question":"Convert list of numpy.float64 to float in Python quickly What is the fastest way of converting a list of elements of type numpy.float64 to type float? I am currently using the straightforward for loop iteration in conjunction with float(). I came across this post: Converting numpy dtypes to native python types, however my question isn't one of how to convert types in python but rather more specifically how to best convert an entire list of one type to another in the quickest manner possible in python (i.e. in this specific case numpy.float64 to float). I was hoping for some secret python machinery that I hadn't come across that could do it all at once :)","response":"The tolist() method should do what you want. If you have a numpy array, just call tolist(): \n```\n\n```python\na\n#Output\n#array([ 0. , 0.14285714, 0.28571429, 0.42857143, 0.57142857,\n# 0.71428571, 0.85714286, 1. , 1.14285714, 1.28571429,\n# 1.42857143, 1.57142857, 1.71428571, 1.85714286, 2. ])\n```\n\n```python\na.dtype\n#Output\n#dtype('float64')\n```\n\n```python\nb = a.tolist()\n```\n\n```python\nb\n#Output\n#[0.0,\n# 0.14285714285714285,\n# 0.2857142857142857,\n# 0.42857142857142855,\n# 0.5714285714285714,\n# 0.7142857142857142,\n# 0.8571428571428571,\n# 1.0,\n# 1.1428571428571428,\n# 1.2857142857142856,\n# 1.4285714285714284,\n# 1.5714285714285714,\n# 1.7142857142857142,\n# 1.857142857142857,\n# 2.0]\n```\n\n```python\ntype(b)\n#Output\n#list\n```\n\n```python\ntype(b[0])\n#Output\n#float\n#```\n#If, in fact, you really have python list of numpy.float64 objects, then @Alexander's answer is great, or you could convert the list to an array and then use the tolist() method. E.g. 
\n```\n\n```python\nc = list(np.linspace(0, 2, 7))\n```\n\n```python\nc\n#Output\n#[0.0,\n# 0.33333333333333331,\n# 0.66666666666666663,\n# 1.0,\n# 1.3333333333333333,\n# 1.6666666666666665,\n# 2.0]\n```\n\n```python\ntype(c)\n#Output\n#list\n```\n\n```python\ntype(c[0])\n#Output\n#numpy.float64\n#```\n#@Alexander's suggestion, a list comprehension: \n#```\n```\n\n```python\n[float(v) for v in c]\n#Output\n#[0.0,\n# 0.3333333333333333,\n# 0.6666666666666666,\n# 1.0,\n# 1.3333333333333333,\n# 1.6666666666666665,\n# 2.0]\n#```\n#Or, convert to an array and then use the tolist() method. \n#```\n```\n\n```python\nnp.array(c).tolist()\n#Output\n#[0.0,\n# 0.3333333333333333,\n# 0.6666666666666666,\n# 1.0,\n# 1.3333333333333333,\n# 1.6666666666666665,\n# 2.0]\n#```\n#If you are concerned with speed, here's a comparison. The input, x, is a python list of numpy.float64 objects: \n#```\n```\n\n```python\ntype(x)\n#Output\n#list\n```\n\n```python\nlen(x)\n#Output\n#1000\n```\n\n```python\ntype(x[0])\n#Output\n#numpy.float64\n#```\n#Timing for the list comprehension: \n#```\n```\n\n```python\n%timeit list1 = [float(v) for v in x]\n10000 loops, best of 3: 109 \u00b5s per loop\n```\nTiming for conversion to numpy array and then tolist(): \n```\n```\n\n```python\n%timeit list2 = np.array(x).tolist()\n10000 loops, best of 3: 70.5 \u00b5s per loop\n```\nSo it is faster to convert the list to an array and then call tolist().\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29217399\/convert-list-of-numpy-float64-to-float-in-python-quickly","best_answers_votes":31,"tags":["python-3.x","numpy","type-conversion"],"question_length":662,"response_length":2150,"tag_count":3} {"question":"how to get the memory address of a numpy array for C I constructed a numpy array: \n```\na=np.ndarray([2,3])\n```\nthen I want to see where its data are: \n```\na.data \n>>>Out[213]: \na.data \n>>>Out[214]: \na.data \n>>>Out[215]: \n```\nWhy is the offset address different every time? If I want to transfer the data to a C function using ctypes by: \n```\nctypes_array = (ctypes.c_char * a.size * 8).from_address(ptr)\n```\nhow should I get the value of ptr?","response":"Also, have a look at ndarray.__array_interface__, which is a dict that contains all of the information you're after. In your case, \n```\npointer, read_only_flag = a.__array_interface__['data']\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11264838\/how-to-get-the-memory-address-of-a-numpy-array-for-c","best_answers_votes":39,"tags":["python","numpy"],"question_length":654,"response_length":195,"tag_count":2} {"question":"Calculating gradient with NumPy I really cannot understand what the numpy.gradient function does and how to use it for computation of a multivariable function's gradient. For example, I have such a function: \n```\ndef func(q, chi, delta):\n return q * chi * delta\n```\nI need to compute its 3-dimensional gradient (in other words, I want to compute partial derivatives with respect to all variables (q, chi, delta)). How can I calculate this gradient using NumPy?","response":"The problem is that numpy can't give you the derivatives directly, and you have two options: With NUMPY What you essentially have to do is to define a grid in three dimensions and evaluate the function on this grid. Afterwards you feed this table of function values to numpy.gradient to get an array with the numerical derivative for every dimension (variable).
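For the specific func above, a minimal sketch (the grid bounds and the spacing h are arbitrary choices for illustration): \n```\nimport numpy as np\n\nh = 0.1\nq, chi, delta = np.mgrid[0:1:h, 0:1:h, 0:1:h]\nF = q * chi * delta # func evaluated on the grid\ndF_dq, dF_dchi, dF_ddelta = np.gradient(F, h)\n```\n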
Here is an example with another function: \n```\nfrom numpy import *\n\nx,y,z = mgrid[-100:101:25., -100:101:25., -100:101:25.]\n\nV = 2*x**2 + 3*y**2 - 4*z # just a random function for the potential\n\nEx,Ey,Ez = gradient(V)\n```\nWithout NUMPY You could also calculate the derivative yourself by using the centered difference quotient. This is essentially what numpy.gradient is doing for every point of your predefined grid.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16078818\/calculating-gradient-with-numpy","best_answers_votes":27,"tags":["python","numpy","scipy","gradient"],"question_length":457,"response_length":760,"tag_count":4} {"question":"Cython: Convert memory view to NumPy array How to convert a typed memoryview to a NumPy array in cython? The docs have \n```\ncimport numpy as np\nimport numpy as np\n\nnumpy_array = np.asarray(<np.int32_t[:10, :10]> my_pointer)\n```\nI took this for my case \n```\nnp.asarray( my_memview)\n```\nUsing this the compiler tells me: \n```\nCan only create cython.array from pointer or array\n```\nCopying or not is not so decisive. I couldn't find any help on this.","response":"You should just be able to use np.asarray directly on the memoryview itself, so something like: \n```\nnp.asarray(my_memview)\n```\nshould work. For example if your cython source file contains this: \n```\nimport numpy as np\ncimport numpy as np\ndef test(double[:,:] x):\n print type(x)\n print type(np.asarray(x))\n```\nThen after compiling it, you should be able to do the following from the python side: \n```\na = np.random.normal(size=(5,5))\ntest(a)\n```\nWhich produces the following output: \n```\n\n\n```\nNote: I'm using the cythonmagic extension for IPython.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20978377\/cython-convert-memory-view-to-numpy-array","best_answers_votes":40,"tags":["python","arrays","numpy","cython","memoryview"],"question_length":466,"response_length":648,"tag_count":5} {"question":"multiply numpy ndarray with 1d array along a given axis It seems I am getting lost in something potentially silly. I have an n-dimensional numpy array, and I want to multiply it with a vector (1d array) along some dimension (which can change!). As an example, say I want to multiply a 2d array by a 1d array along axis 0 of the first array, I can do something like this: \n```\na=np.arange(20).reshape((5,4))\nb=np.ones(5)\nc=a*b[:,np.newaxis]\n```\nEasy, but I would like to extend this idea to n-dimensions (for a, while b is always 1d) and to any axis. In other words, I would like to know how to generate a slice with the np.newaxis at the right place. Say that a is 3d and I want to multiply along axis=1, I would like to generate the slice which would correctly give: \n```\nc=a*b[np.newaxis,:,np.newaxis]\n```\nI.e. given the number of dimensions of a (say 3), and the axis along which I want to multiply (say axis=1), how do I generate and pass the slice: \n```\nnp.newaxis,:,np.newaxis\n```\nThanks.","response":"Solution Code - \n```\nimport numpy as np\n\n# Given axis along which elementwise multiplication with broadcasting \n# is to be performed\ngiven_axis = 1\n\n# Create an array which would be used to reshape 1D array, b to have \n# singleton dimensions except for the given axis where we would put -1 \n# signifying to use the entire length of elements along that axis \ndim_array = np.ones((1,a.ndim),int).ravel()\ndim_array[given_axis] = -1\n\n# Reshape b with dim_array and perform elementwise multiplication with \n# broadcasting along the singleton dimensions for the final output\nb_reshaped = b.reshape(dim_array)\nmult_out = a*b_reshaped\n```\nSample run for a demo of the steps - \n```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.random.randint(0,9,(4,2,3))\n```\n\n```python\nb = np.random.randint(0,9,(2,1)).ravel()\n```\n\n```python\nwhos\nVariable Type Data\/Info\n-------------------------------\na ndarray 4x2x3: 24 elems, type `int32`, 96 bytes\nb ndarray 2: 2 elems, type `int32`, 8 bytes\n```\n\n```python\ngiven_axis = 1\n```\nNow, we would like to perform elementwise multiplications along given axis = 1. Let's create dim_array: \n```\n```\n\n```python\ndim_array = np.ones((1,a.ndim),int).ravel()\n ...: dim_array[given_axis] = -1\n ...:\n```\n\n```python\ndim_array\n#Output\n#array([ 1, -1, 1])\n#```\n#Finally, reshape b & perform the elementwise multiplication: \n#```\n```\n\n```python\nb_reshaped = b.reshape(dim_array)\n ...: mult_out = a*b_reshaped\n ...:\n```\nCheck out the whos info again and pay special attention to b_reshaped & mult_out: \n```\n```\n\n```python\nwhos\nVariable Type Data\/Info\n---------------------------------\na ndarray 4x2x3: 24 elems, type `int32`, 96 bytes\nb ndarray 2: 2 elems, type `int32`, 8 bytes\nb_reshaped ndarray 1x2x1: 2 elems, type `int32`, 8 bytes\ndim_array ndarray 3: 3 elems, type `int32`, 12 bytes\ngiven_axis int 1\nmult_out ndarray 4x2x3: 24 elems, type `int32`, 96 bytes\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30031828\/multiply-numpy-ndarray-with-1d-array-along-a-given-axis","best_answers_votes":21,"tags":["python","arrays","numpy","slice"],"question_length":994,"response_length":1959,"tag_count":4} {"question":"Seeding random number generators in parallel programs I am studying the multiprocessing module of Python. I have two cases: Ex. 1 \n```\ndef Foo(nbr_iter):\n for step in xrange(int(nbr_iter)) :\n print random.uniform(0,1)\n...\n\nfrom multiprocessing import Pool\n\nif __name__ == \"__main__\":\n ...\n pool = Pool(processes=nmr_parallel_block)\n pool.map(Foo, nbr_trial_per_process)\n```\nEx. 2 (using numpy) \n```\ndef Foo_np(nbr_iter):\n np.random.seed()\n print np.random.uniform(0,1,nbr_iter)\n```\nIn both cases the random number generators are seeded in their forked processes. Why do I have to do the seeding explicitly in the numpy example, but not in the Python example?","response":"If no seed is provided explicitly, numpy.random will seed itself using an OS-dependent source of randomness. Usually it will use \/dev\/urandom on Unix-based systems (or some Windows equivalent), but if this is not available for some reason then it will seed itself from the wall clock.
Since self-seeding occurs at the time when a new subprocess forks, it is possible for multiple subprocesses to inherit the same seed if they forked at the same time, leading to identical random variates being produced by different subprocesses. Often this correlates with the number of concurrent threads you are running. For example: \n```\nimport numpy as np\nimport random\nfrom multiprocessing import Pool\n\ndef Foo_np(seed=None):\n # np.random.seed(seed)\n return np.random.uniform(0, 1, 5)\n\npool = Pool(processes=8)\nprint np.array(pool.map(Foo_np, xrange(20)))\n\n# [[ 0.14463001 0.80273208 0.5559258 0.55629762 0.78814652] <-\n# [ 0.14463001 0.80273208 0.5559258 0.55629762 0.78814652] <-\n# [ 0.14463001 0.80273208 0.5559258 0.55629762 0.78814652] <-\n# [ 0.14463001 0.80273208 0.5559258 0.55629762 0.78814652] <-\n# [ 0.14463001 0.80273208 0.5559258 0.55629762 0.78814652] <-\n# [ 0.14463001 0.80273208 0.5559258 0.55629762 0.78814652] <-\n# [ 0.14463001 0.80273208 0.5559258 0.55629762 0.78814652] <-\n# [ 0.64672339 0.99851749 0.8873984 0.42734339 0.67158796]\n# [ 0.64672339 0.99851749 0.8873984 0.42734339 0.67158796]\n# [ 0.64672339 0.99851749 0.8873984 0.42734339 0.67158796]\n# [ 0.64672339 0.99851749 0.8873984 0.42734339 0.67158796]\n# [ 0.64672339 0.99851749 0.8873984 0.42734339 0.67158796]\n# [ 0.11283279 0.28180632 0.28365286 0.51190168 0.62864241]\n# [ 0.11283279 0.28180632 0.28365286 0.51190168 0.62864241]\n# [ 0.28917586 0.40997875 0.06308188 0.71512199 0.47386047]\n# [ 0.11283279 0.28180632 0.28365286 0.51190168 0.62864241]\n# [ 0.64672339 0.99851749 0.8873984 0.42734339 0.67158796]\n# [ 0.11283279 0.28180632 0.28365286 0.51190168 0.62864241]\n# [ 0.14463001 0.80273208 0.5559258 0.55629762 0.78814652] <-\n# [ 0.11283279 0.28180632 0.28365286 0.51190168 0.62864241]]\n```\nYou can see that groups of up to 8 threads simultaneously forked with the same seed, giving me identical random sequences (I've marked the first group with arrows). Calling np.random.seed() within a subprocess forces the thread-local RNG instance to seed itself again from \/dev\/urandom or the wall clock, which will (probably) prevent you from seeing identical output from multiple subprocesses. Best practice is to explicitly pass a different seed (or numpy.random.RandomState instance) to each subprocess, e.g.: \n```\ndef Foo_np(seed=None):\n local_state = np.random.RandomState(seed)\n print local_state.uniform(0, 1, 5)\n\npool.map(Foo_np, range(20))\n```\nI'm not entirely sure what underlies the differences between random and numpy.random in this respect (perhaps it has slightly different rules for selecting a source of randomness to self-seed with compared to numpy.random?). I would still recommend explicitly passing a seed or a random.Random instance to each subprocess to be on the safe side. You could also use the .jumpahead() method of random.Random which is designed for shuffling the states of Random instances in multithreaded programs.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29854398\/seeding-random-number-generators-in-parallel-programs","best_answers_votes":32,"tags":["python","numpy","random","multiprocessing"],"question_length":684,"response_length":3336,"tag_count":4} {"question":"Why is `np.sum(range(N))` very slow? 
I saw a video about speed of loops in python, where it was explained that doing sum(range(N)) is much faster than manually looping through range and adding the variables together, since the former runs in C due to built-in functions being used, while in the latter the summation is done in (slow) python. I was curious what happens when adding numpy to the mix. As I expected np.sum(np.arange(N)) is the fastest, but sum(np.arange(N)) and np.sum(range(N)) are even slower than doing the naive for loop. Why is this? Here's the script I used to test, with comments about the supposed causes of the slowdown where I know them (taken mostly from the video), and the results I got on my machine (python 3.10.0, numpy 1.21.2). Updated script: \n```py\nimport numpy as np\nfrom timeit import timeit\n\nN = 10_000_000\nrepetition = 10\n\ndef sum0(N = N):\n s = 0\n i = 0\n while i < N: # condition is checked in python\n s += i\n i += 1 # both additions are done in python\n return s\n\ndef sum1(N = N):\n s = 0\n for i in range(N): # increment in C\n s += i # addition in python\n return s\n\ndef sum2(N = N):\n return sum(range(N)) # everything in C\n\ndef sum3(N = N):\n return sum(list(range(N)))\n\ndef sum4(N = N):\n return np.sum(range(N)) # very slow np.array conversion\n\ndef sum5(N = N):\n # much faster np.array conversion\n return np.sum(np.fromiter(range(N),dtype = int))\n\ndef sum5v2_(N = N):\n # much faster np.array conversion\n return np.sum(np.fromiter(range(N),dtype = np.int_))\n\ndef sum6(N = N):\n # possibly slow conversion to Py_long from np.int\n return sum(np.arange(N))\n\ndef sum7(N = N):\n # list returns a list of np.int-s\n return sum(list(np.arange(N)))\n\ndef sum7v2(N = N):\n # tolist conversion to python int seems faster than the implicit conversion\n # in sum(list()) (tolist returns a list of python int-s)\n return sum(np.arange(N).tolist())\n\ndef sum8(N = N):\n return np.sum(np.arange(N)) # everything in numpy (fortran libblas?)\n\ndef sum9(N = N):\n return np.arange(N).sum() # remove dispatch overhead\n\ndef array_basic(N = N):\n return np.array(range(N))\n\ndef array_dtype(N = N):\n return np.array(range(N),dtype = np.int_)\n\ndef array_iter(N = N):\n # np.sum's source code mentions to use fromiter to convert from generators\n return np.fromiter(range(N),dtype = np.int_)\n\nprint(f\"while loop: {timeit(sum0, number = repetition)}\")\nprint(f\"for loop: {timeit(sum1, number = repetition)}\")\nprint(f\"sum_range: {timeit(sum2, number = repetition)}\")\nprint(f\"sum_rangelist: {timeit(sum3, number = repetition)}\")\nprint(f\"npsum_range: {timeit(sum4, number = repetition)}\")\nprint(f\"npsum_iterrange: {timeit(sum5, number = repetition)}\")\nprint(f\"npsum_iterrangev2: {timeit(sum5v2_, number = repetition)}\")\nprint(f\"sum_arange: {timeit(sum6, number = repetition)}\")\nprint(f\"sum_list_arange: {timeit(sum7, number = repetition)}\")\nprint(f\"sum_arange_tolist: {timeit(sum7v2, number = repetition)}\")\nprint(f\"npsum_arange: {timeit(sum8, number = repetition)}\")\nprint(f\"nparangenpsum: {timeit(sum9, number = repetition)}\")\nprint(f\"array_basic: {timeit(array_basic, number = repetition)}\")\nprint(f\"array_dtype: {timeit(array_dtype, number = repetition)}\")\nprint(f\"array_iter: {timeit(array_iter, number = repetition)}\")\n\nprint(f\"npsumarangeREP: {timeit(lambda : sum8(N\/1000), number = 100000*repetition)}\")\nprint(f\"npsumarangeREP: {timeit(lambda : sum9(N\/1000), number = 100000*repetition)}\")\n\n# Example output:\n#\n# while loop: 11.493371912998555\n# for loop: 7.385945574002108\n# sum_range: 2.4605720699983067\n# sum_rangelist: 4.509678105998319\n# npsum_range: 11.85120212900074\n# npsum_iterrange: 4.464334709002287\n# npsum_iterrangev2: 4.498494338993623\n# sum_arange: 9.537815956995473\n# sum_list_arange: 13.290120724996086\n# sum_arange_tolist: 5.231948580003518\n# npsum_arange: 0.241889145996538\n# nparangenpsum: 0.21876695199898677\n# array_basic: 11.736577274998126\n# array_dtype: 8.71628468400013\n# array_iter: 4.303306431000237\n# npsumarangeREP: 21.240833958996518\n# npsumarangeREP: 16.690092379001726\n```","response":"np.sum(range(N)) is slow mostly because the current Numpy implementation does not use enough information about the exact type\/content of the values provided by the generator range(N). The heart of the general problem is inherently due to the dynamic typing of Python and big integers, although Numpy could optimize this specific case. First of all, range(N) returns a dynamically-typed Python object which is a (special kind of) Python generator. The objects provided by this generator are also dynamically-typed; in practice, each is a pure-Python integer. The thing is, Numpy is written in the statically-typed language C, and so it cannot efficiently work on dynamically-typed pure-Python objects. The strategy of Numpy is to convert such objects into C types when it can. One big problem in this case is that the integers provided by the generator can theoretically be huge: Numpy does not know whether the values can overflow a np.int32 or even a np.int64 type. Thus, Numpy first detects the right type to use and then computes the result using this type. This translation process can be quite expensive and appears not to be needed here, since all the values provided by range(10_000_000) fit in a np.int32. However, range(5_000_000_000) returns the same object type with pure-Python integers overflowing np.int32, and Numpy needs to detect this case automatically so as not to return wrong results. Even when the input type is correctly identified (np.int32 on my machine), that does not mean the output result will be correct, because overflows can appear during the computation of the sum. This is sadly the case on my machine. Numpy developers decided to deprecate such a use and put in the documentation that np.fromiter should be used instead. np.fromiter has a required dtype parameter to let the user define what the right type to use is. One way to check this behaviour in practice is to simply create a temporary list: \n```py\ntmp = list(range(10_000_000))\n\n# Numpy implicitly converts the list into a Numpy array but \n# still automatically detects the input type to use\nnp.sum(tmp)\n```\nA faster implementation is the following: \n```py\ntmp = list(range(10_000_000))\n\n# The array is explicitly converted using a well-defined type and \n# thus there is no need to perform an automatic detection \n# (note that the result is still wrong since it does not fit in a np.int32)\ntmp2 = np.array(tmp, dtype=np.int32)\nresult = np.sum(tmp2)\n```\nThe first case takes 476 ms on my machine while the second takes 289 ms. Note that np.sum takes only 4 ms. Thus, a large part of the time is spent on the conversion of pure-Python integer objects to internal int32 types (more specifically, on the management of pure-Python integers). list(range(10_000_000)) is expensive too, as it takes 205 ms. This is again due to the overhead of pure-Python integers (i.e. allocations, deallocations, reference counting, increments of variable-sized integers, memory indirections and conditions due to the dynamic typing) as well as the overhead of the generator. sum(np.arange(N)) is slow because the generic sum function works element by element on Numpy-defined objects. The CPython interpreter needs to call Numpy functions to perform basic additions. Moreover, Numpy-defined integer objects are still Python objects, and so they are subject to reference counting, allocation, deallocation, etc. Not to mention that Numpy and CPython add many checks in the functions, all aiming to finally just add two native numbers together. A Numpy-aware just-in-time compiler such as Numba can solve this issue. Indeed, Numba takes 23 ms on my machine to compute the sum of np.arange(10_000_000) (with the code still written in Python) while the CPython interpreter takes 556 ms.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/69584027\/why-is-np-sumrangen-very-slow","best_answers_votes":18,"tags":["python","numpy","performance"],"question_length":4290,"response_length":3686,"tag_count":3} {"question":"floor and ceil with number of decimals I need to floor a float number with a specific number of decimals. So: \n```\n2.1235 with 2 decimals --> 2.12\n2.1276 with 2 decimals --> 2.12 (round would give 2.13 which is not what I need)\n```\nThe function np.round accepts a decimals parameter but it appears that the functions ceil and floor don't accept a number of decimals and always return a number with zero decimals. Of course I can multiply the number by 10^ndecimals, then apply floor and finally divide by 10^ndecimals \n```\nnew_value = np.floor(old_value * 10**ndecimals) \/ 10**ndecimals\n```\nBut I'm wondering if there's a built-in function that does this without having to do the operations.","response":"Neither Python's built-in ceil\/floor nor numpy's versions support a precision argument. One hint though is to reuse round instead of multiplication + division (should be much faster): \n```\ndef my_ceil(a, precision=0):\n return np.round(a + 0.5 * 10**(-precision), precision)\n\ndef my_floor(a, precision=0):\n return np.round(a - 0.5 * 10**(-precision), precision)\n```\nUPD: As pointed out by @aschipfl, for whole values np.round will round to the nearest even, which will lead to unexpected results, e.g. my_ceil(11) will return 12. Here is an updated solution, free of this problem: \n```\ndef my_ceil(a, precision=0):\n return np.true_divide(np.ceil(a * 10**precision), 10**precision)\n\ndef my_floor(a, precision=0):\n return np.true_divide(np.floor(a * 10**precision), 10**precision)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/58065055\/floor-and-ceil-with-number-of-decimals","best_answers_votes":29,"tags":["python","numpy","floor","ceil"],"question_length":693,"response_length":782,"tag_count":4} {"question":"Precision lost while using read_csv in pandas I have files of the below format in a text file which I am trying to read into a pandas dataframe. \n```\n895|2015-4-23|19|10000|LA|0.4677978806|0.4773469340|0.4089938425|0.8224291972|0.8652525793|0.6829942860|0.5139162227|\n```\nAs you can see, there are 10 digits after the decimal point in the input file.
\n```\ndf = pd.read_csv('mockup.txt',header=None,delimiter='|')\n```\nWhen I try to read it into a dataframe, I am not getting the last 4 digits \n```\ndf[5].head()\n\n0 0.467798\n1 0.258165\n2 0.860384\n3 0.803388\n4 0.249820\nName: 5, dtype: float64\n```\nHow can I get the complete precision as present in the input file? I have some matrix operations that need to be performed, so I cannot cast it as a string.","response":"It is only a display problem; see the docs: \n```\n#temporarily set display precision\nwith pd.option_context('display.precision', 10):\n print df\n\n 0 1 2 3 4 5 6 7 \\\n0 895 2015-4-23 19 10000 LA 0.4677978806 0.477346934 0.4089938425 \n\n 8 9 10 11 12 \n0 0.8224291972 0.8652525793 0.682994286 0.5139162227 NaN\n```\nEDIT: (Thank you Mark Dickinson): Pandas uses a dedicated decimal-to-binary converter that sacrifices perfect accuracy for the sake of speed. Passing float_precision='round_trip' to read_csv fixes this. See the documentation for more.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36909368\/precision-lost-while-using-read-csv-in-pandas","best_answers_votes":34,"tags":["python","csv","pandas","numpy","floating-accuracy"],"question_length":860,"response_length":657,"tag_count":5} {"question":"Numpy Indexing: Return the rest A simple example of numpy indexing: \n```\nIn: a = numpy.arange(10)\nIn: sel_id = numpy.arange(5)\nIn: a[sel_id]\nOut: array([0,1,2,3,4])\n```\nHow do I return the rest of the array that is not indexed by sel_id? What I can think of is: \n```\nIn: numpy.array([x for x in a if x not in a[sel_id]])\nout: array([5,6,7,8,9])\n```\nIs there any easier way?","response":"For this simple 1D case, I'd actually use a boolean mask: \n```\na = numpy.arange(10)\ninclude_index = numpy.arange(4)\ninclude_idx = set(include_index) #Set is more efficient, but doesn't reorder your elements if that is desirable\nmask = numpy.array([(i in include_idx) for i in xrange(len(a))])\n```\nNow you can get your values: \n```\nincluded = a[mask] # array([0, 1, 2, 3])\nexcluded = a[~mask] # array([4, 5, 6, 7, 8, 9])\n```\nNote that a[mask] doesn't necessarily yield the same thing as a[include_index] since the order of include_index matters for the output in that scenario (it should be roughly equivalent to a[sorted(include_index)]). However, since the order of your excluded items isn't well defined, this should work OK.
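For completeness, numpy.setdiff1d can build the complementary index set in one step (a sketch; note that it returns sorted, de-duplicated indices): \n```\nexcluded = a[numpy.setdiff1d(numpy.arange(len(a)), include_index)]\n```\n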
EDIT A better way to create the mask is: \n```\nmask = np.zeros(a.shape,dtype=bool)\nmask[include_idx] = True\n```\n(thanks to seberg).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12518043\/numpy-indexing-return-the-rest","best_answers_votes":21,"tags":["python","arrays","numpy","indexing","scipy"],"question_length":370,"response_length":861,"tag_count":5} {"question":"'numpy.float64' object is not iterable I'm trying to iterate over an array of values generated with numpy.linspace: \n```\nslX = numpy.linspace(obsvX, flightX, numSPts)\nslY = np.linspace(obsvY, flightY, numSPts)\n\nfor index,point in slX:\n yPoint = slY[index]\n arcpy.AddMessage(yPoint)\n```\nThis code worked fine on my office computer, but I sat down this morning to work from home on a different machine and this error came up: \n```\nFile \"C:\\temp\\gssm_arcpy.1.0.3.py\", line 147, in AnalyzeSightLine\n for index,point in slX:\nTypeError: 'numpy.float64' object is not iterable\n```\nslX is just an array of floats, and the script has no problem printing the contents -- just not, apparently, iterating through them. Any suggestions for what is causing it to break, and possible fixes?","response":"numpy.linspace() gives you a one-dimensional NumPy array. For example: \n```\n\n```python\nmy_array = numpy.linspace(1, 10, 10)\n```\n\n```python\nmy_array\n#Output\n#array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])\n#```\n#Therefore: \n#```\n#for index,point in my_array\n#```\n#cannot work. You would need some kind of two-dimensional array with two elements in the second dimension: \n#```\n```\n\n```python\ntwo_d = numpy.array([[1, 2], [4, 5]])\n```\n\n```python\ntwo_d\n#Output\n#array([[1, 2], [4, 5]])\n#```\n#Now you can do this: \n#```\n```\n\n```python\nfor x, y in two_d:\n print(x, y)\n#Output\n#1 2\n#4 5\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16862459\/numpy-float64-object-is-not-iterable","best_answers_votes":11,"tags":["python","numpy","iterator"],"question_length":772,"response_length":518,"tag_count":3} {"question":"Passing a structured numpy array with strings to a cython function I am attempting to create a function in cython that accepts a numpy structured array or record array by defining a cython struct type. Suppose I have the data: \n```\na = np.recarray(3, dtype=[('a', np.float32), ('b', np.int32), ('c', '|S5'), ('d', '|S3')])\na[0] = (1.1, 1, 'this\\0', 'to\\0')\na[1] = (2.1, 2, 'that\\0', 'ta\\0')\na[2] = (3.1, 3, 'dogs\\0', 'ot\\0')\n```\n(Note: the problem described below occurs with or without the null terminator) I then have the cython code: \n```\nimport numpy as np\ncimport numpy as np\n\ncdef packed struct tstruct:\n np.float32_t a\n np.int32_t b\n char[5] c\n char[3] d\n\ndef test_struct(tstruct[:] x):\n cdef:\n int k\n tstruct y\n\n for k in xrange(3):\n y = x[k]\n print y.a, y.b, y.c, y.d\n```\nWhen I try to run test_struct(a), I get the error: \n```\nValueError: Expected a dimension of size 5, got 8\n```\nIf the fields in the array and corresponding struct are reordered such that the fields containing strings are not adjacent to each other, then the function works as expected. It appears as if the Cython function is not detecting the boundary between the c and d fields correctly and instead behaves as if you were passing in a single char array whose length is the sum of the two. Short of reshuffling the data (which is possible but not ideal), is there another way to pass a recarray with fixed length string data into Cython? Update: This appears to be a potential Cython bug. See the following discussion on the Cython google group that hints at where the problem is arising: https:\/\/groups.google.com\/forum\/#!topic\/cython-users\/TbLbXdi0_h4 Update 2: This bug has been fixed in the master cython branch on Github as of Feb 23, 2014 and the patch is slated for inclusion in v0.20.2: https:\/\/github.com\/cython\/cython\/commit\/58d9361e0a6d4cb3d4e87775f78e0550c2fea836","response":"This was a bug that has been fixed in the master cython branch on Github as of Feb 22, 2014 and the patch is slated for inclusion in v0.20.2: https:\/\/github.com\/cython\/cython\/commit\/58d9361e0a6d4cb3d4e87775f78e0550c2fea836","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21435378\/passing-a-structured-numpy-array-with-strings-to-a-cython-function","best_answers_votes":1,"tags":["python","string","numpy","cython","recarray"],"question_length":1867,"response_length":222,"tag_count":5} {"question":"Get all permutations of a numpy array I have a numpy array [0, 1, 1, 2, 2, 0, 1, ...] which only contains the numbers 0-k. I would like to create a new array that contains the n possible arrays of permutations of 0-k. A small example with k=2 and n=6: \n```\na = [0, 1, 0, 2]\npermute(a)\nresult = [[0, 1, 0, 2]\n [0, 2, 0, 1]\n [1, 0, 1, 2]\n [2, 1, 2, 0]\n [1, 2, 1, 0]\n [2, 0, 2, 1]]\n```\nDoes anyone have any ideas\/solutions as to how one could achieve this?","response":"Your a is what combinatorists call a multiset. The sympy library has various routines for working with them. \n```\n\n```python\nfrom sympy.utilities.iterables import multiset_permutations\n```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.array([0, 1, 0, 2])\n```\n\n```python\nfor p in multiset_permutations(a):\n p\n \n#Output\n#[0, 0, 1, 2]\n#[0, 0, 2, 1]\n#[0, 1, 0, 2]\n#[0, 1, 2, 0]\n#[0, 2, 0, 1]\n#[0, 2, 1, 0]\n#[1, 0, 0, 2]\n#[1, 0, 2, 0]\n#[1, 2, 0, 0]\n#[2, 0, 0, 1]\n#[2, 0, 1, 0]\n#[2, 1, 0, 0]\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41210142\/get-all-permutations-of-a-numpy-array","best_answers_votes":56,"tags":["python","numpy"],"question_length":498,"response_length":449,"tag_count":2} {"question":"Count consecutive occurrences of values varying in length in a numpy array Say I have a bunch of numbers in a numpy array and I test them based on a condition returning a boolean array: \n```\nnp.random.seed(3456)\na = np.random.rand(8)\ncondition = a>0.5\n```\nAnd with this boolean array I want to count all of the lengths of consecutive occurrences of True. For example if I had [True,True,True,False,False,True,True,False,True] I would want to get back [3,2,1]. I can do that using this code: \n```\nlength,count = [],0\nfor i in range(len(condition)):\n\n if condition[i]==True:\n count += 1\n elif condition[i]==False and count>0:\n length.append(count)\n count = 0\n\n if i==len(condition)-1 and count>0:\n length.append(count)\n\n print length\n```\nBut is there anything already implemented for this, or a python, numpy, scipy, etc. function that counts the length of consecutive occurrences in a list or array for a given input?","response":"If you already have a numpy array, this is probably going to be faster: \n```\n\n```python\ncondition = np.array([True,True,True,False,False,True,True,False,True])\n```\n\n```python\nnp.diff(np.where(np.concatenate(([condition[0]],\n condition[:-1] != condition[1:],\n [True])))[0])[::2]\n#Output\n#array([3, 2, 1])\n```\n```\nIt detects where chunks begin, has some logic for the first and last chunk, and simply computes differences between chunk starts and discards lengths corresponding to False chunks.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/24342047\/count-consecutive-occurences-of-values-varying-in-length-in-a-numpy-array","best_answers_votes":60,"tags":["python","arrays","numpy"],"question_length":950,"response_length":533,"tag_count":3} {"question":"Python : How to avoid numpy RuntimeWarning in function definition? I designed a simple function to return a mathematical function which can be used to fit experimental data to it. The function looks pretty much like the following: \n```\ndef colecole_2(f,*p):\n term1=p[0] * ( 1 - 1 \/ (1 + numpy.power((0+1j) * 2 * numpy.pi * f * p[1], p[2])))\n term2=p[3] * ( 1 - 1 \/ (1 + numpy.power((0+1j) * 2 * numpy.pi * f * p[4], p[5])))\n return p[6]*(1-abs( term1+ term2))\n```\nUnfortunately I run into troubles with RuntimeWarnings as: \n```\nRuntimeWarning: overflow encountered in power\nRuntimeWarning: overflow encountered in divide\n```\ndue to values that are too large or small. I am not able to figure this problem out on my own though. Is there any way to redefine my function so it will pass without warnings?","response":"You can use numpy.errstate which is a built-in context manager. This will let you set the error handling to apply within the context of the with statement. \n```\nimport numpy\n# warning is not logged here. Perfect for clean unit test output\nwith numpy.errstate(divide='ignore'):\n numpy.float64(1.0) \/ 0.0\n```\nI had to do this recently when writing unit tests for some legacy python code.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10519237\/python-how-to-avoid-numpy-runtimewarning-in-function-definition","best_answers_votes":53,"tags":["python","numpy"],"question_length":811,"response_length":382,"tag_count":2} {"question":"find time shift between two similar waveforms I have to compare two time-vs-voltage waveforms. Because of the peculiarity of the sources of these waveforms, one of them can be a time shifted version of the other. How can I find whether there is a time shift? And if yes, how much is it? I am doing this in Python and wish to use numpy\/scipy libraries.","response":"scipy provides a correlation function which will work fine for small inputs and also if you want non-circular correlation, meaning that the signal will not wrap around. Note that in mode='full', the size of the array returned by signal.correlate is the sum of the signal sizes minus one (i.e. len(a) + len(b) - 1), so the value from argmax is off by (signal size -1 = 20) from what you seem to expect. \n```\nfrom scipy import signal, fftpack\nimport numpy\na = numpy.array([0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4, 3, 2, 1, 0, 0, 0, 0, 0])\nb = numpy.array([0, 0, 0, 0, 0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4, 3, 2, 1, 0])\nnumpy.argmax(signal.correlate(a,b)) -> 16\nnumpy.argmax(signal.correlate(b,a)) -> 24\n```\nThe two different values correspond to whether the shift is in a or b. If you want circular correlation and for big signal sizes, you can use the convolution\/Fourier transform theorem with the caveat that correlation is very similar to but not identical to convolution. \n```\nA = fftpack.fft(a)\nB = fftpack.fft(b)\nAr = -A.conjugate()\nBr = -B.conjugate()\nnumpy.argmax(numpy.abs(fftpack.ifft(Ar*B))) -> 4\nnumpy.argmax(numpy.abs(fftpack.ifft(A*Br))) -> 17\n```\nAgain, the two values correspond to whether you're interpreting a shift in a or a shift in b. The negative conjugation is due to convolution flipping one of the functions, but in correlation there is no flipping. You can undo the flipping by either reversing one of the signals and then taking the FFT, or taking the FFT of the signal and then taking the negative conjugate. i.e. the following is true: Ar = -A.conjugate() = fft(a[::-1])","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4688715\/find-time-shift-between-two-similar-waveforms","best_answers_votes":47,"tags":["python","numpy","signal-processing","correlation"],"question_length":351,"response_length":1590,"tag_count":4} {"question":"Get mean value avoiding nan using numpy in python [duplicate] How to calculate the mean value of an array (A) avoiding nan? \n```\nimport numpy as np \nA = [5 nan nan nan nan 10]\nM = np.mean(A[A!=nan]) # does not work\n```\nAny idea?","response":"Another possibility is the following: \n```\nimport numpy\nfrom scipy.stats import nanmean # nanmedian exists too, if you need it\nA = numpy.array([5, numpy.nan, numpy.nan, numpy.nan, numpy.nan, 10])\nprint nanmean(A) # gives 7.5 as expected\n```\nI guess this looks more elegant (and readable) than the other solution already given. Edit: apparently, as @Jaime reports, this functionality already exists directly in the latest numpy (1.8) as well, so there is no need to import scipy.stats anymore if you have that version of numpy: \n```\nimport numpy\nA = numpy.array([5, numpy.nan, numpy.nan, numpy.nan, numpy.nan, 10])\nprint numpy.nanmean(A)\n```\nThe first solution also works for people who don't have the latest version of numpy (like me).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19852586\/get-mean-value-avoiding-nan-using-numpy-in-python","best_answers_votes":36,"tags":["python","arrays","numpy"],"question_length":353,"response_length":726,"tag_count":3} {"question":"Vectorized look-up of values in Pandas dataframe I have two pandas dataframes, one called orders and another called daily_prices.
daily_prices is as follows: \n```\nAAPL GOOG IBM XOM\n2011-01-10 339.44 614.21 142.78 71.57\n2011-01-13 342.64 616.69 143.92 73.08\n2011-01-26 340.82 616.50 155.74 75.89\n2011-02-02 341.29 612.00 157.93 79.46\n2011-02-10 351.42 616.44 159.32 79.68\n2011-03-03 356.40 609.56 158.73 82.19\n2011-05-03 345.14 533.89 167.84 82.00\n2011-06-03 340.42 523.08 160.97 78.19\n2011-06-10 323.03 509.51 159.14 76.84\n2011-08-01 393.26 606.77 176.28 76.67\n2011-12-20 392.46 630.37 184.14 79.97\n```\norders is as follows: \n```\ndirection size ticker prices\n2011-01-10 Buy 1500 AAPL 339.44\n2011-01-13 Sell 1500 AAPL 342.64\n2011-01-13 Buy 4000 IBM 143.92\n2011-01-26 Buy 1000 GOOG 616.50\n2011-02-02 Sell 4000 XOM 79.46\n2011-02-10 Buy 4000 XOM 79.68\n2011-03-03 Sell 1000 GOOG 609.56\n2011-03-03 Sell 2200 IBM 158.73\n2011-06-03 Sell 3300 IBM 160.97\n2011-05-03 Buy 1500 IBM 167.84\n2011-06-10 Buy 1200 AAPL 323.03\n2011-08-01 Buy 55 GOOG 606.77\n2011-08-01 Sell 55 GOOG 606.77\n2011-12-20 Sell 1200 AAPL 392.46\n```\nIndex of both dataframes is datetime.date. prices column in the orders dataframe was added by using a list comprehension to loop through all the orders and look up the specific ticker for the specific date in the daily_prices data frame and then adding that list as a column to the orders dataframe. I would like to do this using an array operation rather than something that loops. can it be done? I tried to use: \n```\ndaily_prices.ix[dates, tickers]\n```\nbut this returns a matrix of cartesian product of the two lists. I want it to return a column vector of only the price of a specified ticker for a specified date.","response":"Use our friend lookup, designed precisely for this purpose: \n```\n\n```python\nprices\n#Output\n#AAPL GOOG IBM XOM\n#2011-01-10 339.44 614.21 142.78 71.57\n#2011-01-13 342.64 616.69 143.92 73.08\n#2011-01-26 340.82 616.50 155.74 75.89\n#2011-02-02 341.29 612.00 157.93 79.46\n#2011-02-10 351.42 616.44 159.32 79.68\n#2011-03-03 356.40 609.56 158.73 82.19\n#2011-05-03 345.14 533.89 167.84 82.00\n#2011-06-03 340.42 523.08 160.97 78.19\n#2011-06-10 323.03 509.51 159.14 76.84\n#2011-08-01 393.26 606.77 176.28 76.67\n#2011-12-20 392.46 630.37 184.14 79.97\n```\n\n```python\norders\n#Output\n#Date direction size ticker prices\n#0 2011-01-10 00:00:00 Buy 1500 AAPL 339.44\n#1 2011-01-13 00:00:00 Sell 1500 AAPL 342.64\n#2 2011-01-13 00:00:00 Buy 4000 IBM 143.92\n#3 2011-01-26 00:00:00 Buy 1000 GOOG 616.50\n#4 2011-02-02 00:00:00 Sell 4000 XOM 79.46\n#5 2011-02-10 00:00:00 Buy 4000 XOM 79.68\n#6 2011-03-03 00:00:00 Sell 1000 GOOG 609.56\n#7 2011-03-03 00:00:00 Sell 2200 IBM 158.73\n#8 2011-06-03 00:00:00 Sell 3300 IBM 160.97\n#9 2011-05-03 00:00:00 Buy 1500 IBM 167.84\n#10 2011-06-10 00:00:00 Buy 1200 AAPL 323.03\n#11 2011-08-01 00:00:00 Buy 55 GOOG 606.77\n#12 2011-08-01 00:00:00 Sell 55 GOOG 606.77\n#13 2011-12-20 00:00:00 Sell 1200 AAPL 392.46\n```\n\n```python\nprices.lookup(orders.Date, orders.ticker)\n#Output\n#array([ 339.44, 342.64, 143.92, 616.5 , 79.46, 79.68, 609.56,\n# 158.73, 160.97, 167.84, 323.03, 606.77, 606.77, 392.46])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13893227\/vectorized-look-up-of-values-in-pandas-dataframe","best_answers_votes":55,"tags":["python","pandas","numpy","vectorization"],"question_length":1928,"response_length":1638,"tag_count":4} {"question":"Filter integers in numpy float array Is there any built in function to discard integer and keep only float number in numpy. 
\n```\nimport numpy as np\n\ninput = np.array([0.0, 0.01, 1.0, 2.0, 2.001, 2.002])\n\ndesired_output = some_function(input)\n# Expected output\n# desired_output = np.array([0.01, 2.001, 2.002])\n```","response":"I don't think so. My approach would be \n```\nimport numpy as np\na = np.array([0.0, 0.01, 1.0, 2.0, 2.001, 2.002])\nmask = np.isclose(a, a.astype(int))\n\nprint(a[~mask])\n#[ 0.01 2.001 2.002]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/52094533\/filter-integers-in-numpy-float-array","best_answers_votes":18,"tags":["python","numpy"],"question_length":311,"response_length":193,"tag_count":2} {"question":"Python reshape list to ndim array Hi, I have a list flat of length 2800; it contains 100 results for each of 28 variables. Below is an example of 4 results for 2 variables: \n```\n[0,\n 0,\n 1,\n 1,\n 2,\n 2,\n 3,\n 3]\n```\nI would like to reshape the list to an array (2,4) so that the results for each variable are in a single element. \n```\n[[0,1,2,3],\n [0,1,2,3]]\n```","response":"You can think of reshaping as filling the new shape row by row (last dimension varying fastest) from the flattened original list\/array. If you want to fill an array by column instead, an easy solution is to shape the list into an array with reversed dimensions and then transpose it: \n```\nx = np.reshape(list_data, (100, 28)).T\n```\nThe above snippet results in a 28x100 array, filled column-wise. To illustrate, here are the two options of shaping a list into a 2x4 array: \n```\nnp.reshape([0, 0, 1, 1, 2, 2, 3, 3], (4, 2)).T\n# array([[0, 1, 2, 3],\n# [0, 1, 2, 3]])\n\nnp.reshape([0, 0, 1, 1, 2, 2, 3, 3], (2, 4))\n# array([[0, 0, 1, 1],\n# [2, 2, 3, 3]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35432378\/python-reshape-list-to-ndim-array","best_answers_votes":47,"tags":["python","numpy","reshape"],"question_length":364,"response_length":667,"tag_count":3} {"question":"Find nth smallest element in numpy array I need to find just the nth smallest element in a 1D numpy.array. For example: \n```\na = np.array([90,10,30,40,80,70,20,50,60,0])\n```\nI want to get the 5th smallest element, so my desired output is 40. My current solution is this: \n```\nresult = np.max(np.partition(a, 5)[:5])\n```\nHowever, finding the 5 smallest elements and then taking the largest of them seems a little clumsy to me. Is there a better way to do it? Am I missing a single function that would achieve my goal? There are questions with similar titles to this one, but I did not see anything that answered my question. Edit: I should've mentioned it originally, but performance is very important for me; therefore, the heapq solution, though nice, would not work for me.
\n```\nimport numpy as np\nimport heapq\n\ndef find_nth_smallest_old_way(a, n):\n return np.max(np.partition(a, n)[:n])\n\n# Solution suggested by Jaime and HYRY \ndef find_nth_smallest_proper_way(a, n):\n return np.partition(a, n-1)[n-1]\n\ndef find_nth_smallest_heapq(a, n):\n return heapq.nsmallest(n, a)[-1]\n# \nn_iterations = 10000\n\na = np.arange(1000)\nnp.random.shuffle(a)\n\nt1 = timeit('find_nth_smallest_old_way(a, 100)', 'from __main__ import find_nth_smallest_old_way, a', number = n_iterations)\nprint 'time taken using partition old_way: {}'.format(t1) \nt2 = timeit('find_nth_smallest_proper_way(a, 100)', 'from __main__ import find_nth_smallest_proper_way, a', number = n_iterations)\nprint 'time taken using partition proper way: {}'.format(t2) \nt3 = timeit('find_nth_smallest_heapq(a, 100)', 'from __main__ import find_nth_smallest_heapq, a', number = n_iterations) \nprint 'time taken using heapq : {}'.format(t3)\n```\nResult: \n```\ntime taken using partition old_way: 0.255564928055\ntime taken using partition proper way: 0.129678010941\ntime taken using heapq : 7.81094002724\n```","response":"Unless I am missing something, what you want to do is: \n```\n\n```python\na = np.array([90,10,30,40,80,70,20,50,60,0])\n```\n\n```python\nnp.partition(a, 4)[4]\n#Output\n#40\n#```\n#np.partition(a, k) will place the k+1-th smallest element of a at a[k], smaller values in a[:k] and larger values in a[k+1:]. The only thing to be aware of is that, because of the 0 indexing, the fifth element is at index 4.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22546180\/find-nth-smallest-element-in-numpy-array","best_answers_votes":44,"tags":["python","numpy"],"question_length":1860,"response_length":366,"tag_count":2} {"question":"scipy.linalg.eig return complex eigenvalues for covariance matrix? The eigenvalues of a covariance matrix should be real and non-negative because covariance matrices are symmetric and semi positive definite. However, take a look at the following experiment with scipy: \n```\n\n```python\na=np.random.random(5)\n```\n\n```python\nb=np.random.random(5)\n```\n\n```python\nab = np.vstack((a,b)).T\n```\n\n```python\nC=np.cov(ab)\n```\n\n```python\neig(C)\n#Output\n#7.90174997e-01 +0.00000000e+00j,\n#2.38344473e-17 +6.15983679e-17j,\n#2.38344473e-17 -6.15983679e-17j,\n#-1.76100435e-17 +0.00000000e+00j, \n#5.42658040e-33 +0.00000000e+00j\n#```\n#However, reproducing the above example in Matlab works correctly: \n#```\n#a = [0.6271, 0.4314, 0.3453, 0.8073, 0.9739]\n#b = [0.1924, 0.3680, 0.0568, 0.1831, 0.0176]\n#C=cov([a;b])\n#eig(C)\n#-0.0000\n#-0.0000\n# 0.0000\n# 0.0000\n# 0.7902\n#```\n```","response":"You have raised two issues: The eigenvalues returned by scipy.linalg.eig are not real. Some of the eigenvalues are negative. Both of these issues are the result of errors introduced by truncation and rounding errors, which always happen with iterative algorithms using floating-point arithmetic. Note that the Matlab results also produced negative eigenvalues. Now, for a more interesting aspect of the issue: why is Matlab's result real, whereas SciPy's result has some complex components? Matlab's eig detects if the input matrix is real symmetric or Hermitian and uses Cholesky factorization when it is. See the description of the chol argument in the eig documentation. This is not done automatically in SciPy. 
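One can make that structural check oneself before deciding which routine to call (a small sketch, reusing the covariance matrix C from above): \n```\nimport numpy as np\n\nis_hermitian = np.allclose(C, C.conj().T) # a real symmetric matrix is a special case\n```\n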
If you want to use an algorithm that exploits the structure of a real symmetric or Hermitian matrix, use scipy.linalg.eigh. For the example in the question: \n```\n\n```python\neigh(C, eigvals_only=True)\n#Output\n#array([ -3.73825923e-17, -1.60154836e-17, 8.11704449e-19,\n# 3.65055777e-17, 7.90175615e-01])\n#```\n#This result is the same as Matlab's, if you round to the same number of digits of precision that Matlab printed.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8765310\/scipy-linalg-eig-return-complex-eigenvalues-for-covariance-matrix","best_answers_votes":45,"tags":["python","matlab","numpy","scipy","linear-algebra"],"question_length":778,"response_length":1129,"tag_count":5} {"question":"How to split an array according to a condition in numpy? For example, I have a ndarray that is: \n```\na = np.array([1, 3, 5, 7, 2, 4, 6, 8])\n```\nNow I want to split a into two parts, one is all numbers =5: \n```\n[array([1,3,2,4]), array([5,7,6,8])]\n```\nCertainly I can traverse a and create two new array. But I want to know does numpy provide some better ways? Similarly, for multidimensional array, e.g. \n```\narray([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9],\n [2, 4, 7]])\n```\nI want to split it according to the first column =3, which result is: \n```\n[array([[1, 2, 3],\n [2, 4, 7]]), \n array([[4, 5, 6],\n [7, 8, 9]])]\n```\nAre there any better ways instead of traverse it? Thanks.","response":"```\nimport numpy as np\n\ndef split(arr, cond):\n return [arr[cond], arr[~cond]]\n\na = np.array([1,3,5,7,2,4,6,8])\nprint split(a, a<5)\n\na = np.array([[1,2,3],[4,5,6],[7,8,9],[2,4,7]])\nprint split(a, a[:,0]<3)\n```\nThis produces the following output: \n```\n[array([1, 3, 2, 4]), array([5, 7, 6, 8])]\n\n[array([[1, 2, 3],\n [2, 4, 7]]), array([[4, 5, 6],\n [7, 8, 9]])]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7662458\/how-to-split-an-array-according-to-a-condition-in-numpy","best_answers_votes":47,"tags":["python","numpy"],"question_length":703,"response_length":375,"tag_count":2} {"question":"How to create 0x0 Numpy array? How do I create a 0 x 0 (i.e. ndim= 2, shape= (0,0)) float numpy.ndarray?","response":"```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.empty( shape=(0, 0) )\n```\n\n```python\na\n#Output\n# array([], shape=(0, 0), dtype=float64)\n```\n\n```python\na.shape\n#Output\n# (0, 0)\n```\n\n```python\na.size\n#Output\n# 0\n#```\n#The array above is initialized as a 2D array--i.e., two size parameters passed for shape. Second, the call to empty is not strictly necessary--i.e., an array having 0 size could (i believe) be initialized using other array-creation methods in NumPy, e.g., NP.zeros, Np.ones, etc. I just chose empty because it gives the smallest array (memory-wise).\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9614516\/how-to-create-0x0-numpy-array","best_answers_votes":43,"tags":["python","arrays","numpy"],"question_length":104,"response_length":503,"tag_count":3} {"question":"numpy random choice in Tensorflow Is there an equivalent function to numpy random choice in Tensorflow. In numpy we can get an item randomly from the given list with its weights. 
\n```\nnp.random.choice([1,2,3,5], 1, p=[0.1, 0, 0.3, 0.6])\n```\nThis code will select an item from the given list with p weights.","response":"No, but you can achieve the same result using tf.multinomial: \n```\nelems = tf.convert_to_tensor([1,2,3,5])\nsamples = tf.multinomial(tf.log([[1, 0, 0.3, 0.6]]), 1) # note log-prob\nelems[tf.cast(samples[0][0], tf.int32)].eval()\nOut: 1\nelems[tf.cast(samples[0][0], tf.int32)].eval()\nOut: 5\n```\nThe [0][0] part is needed because multinomial expects a row of unnormalized log-probabilities for each element of the batch and also has another dimension for the number of samples.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41123879\/numpy-random-choice-in-tensorflow","best_answers_votes":25,"tags":["python","numpy","tensorflow","deep-learning"],"question_length":309,"response_length":466,"tag_count":4} {"question":"Python Numpy - Complex Numbers - Is there a function for Polar to Rectangular conversion? Is there a built-in Numpy function to convert a complex number in polar form (a magnitude and an angle in degrees) to one in real and imaginary components? Clearly I could write my own, but it seems like the type of thing for which there is an optimised version included in some module. More specifically, I have an array of magnitudes and an array of angles: \n```\n\n```python\na\n#Output\n#array([1, 1, 1, 1, 1])\n```\n\n```python\nb\n#Output\n#array([120, 121, 120, 120, 121])\n#```\n#And what I would like is: \n#```\n```\n\n```python\nc\n#Output\n#[(-0.5+0.8660254038j),(-0.515038074+0.8571673007j),(-0.5+0.8660254038j),(-0.5+0.8660254038j),(-0.515038074+0.8571673007j)]\n#```\n```","response":"There isn't a function to do exactly what you want, but there is angle, which does the hardest part. So, for example, one could define two functions: \n```\ndef P2R(radii, angles):\n return radii * exp(1j*angles)\n\ndef R2P(x):\n return abs(x), angle(x)\n```\nThese functions use radians for input and output; for degrees, one would need to do the conversion to radians in both functions. In the numpy reference there's a section on handling complex numbers, and this is where the function you're looking for would be listed (so since they're not there, I don't think they exist within numpy).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16444719\/python-numpy-complex-numbers-is-there-a-function-for-polar-to-rectangular-co","best_answers_votes":37,"tags":["python","numpy","complex-numbers"],"question_length":687,"response_length":601,"tag_count":3} {"question":"How to compute cluster assignments from linkage\/distance matrices If you have this hierarchical clustering call in scipy in Python: \n```\nfrom scipy.cluster.hierarchy import linkage\nfrom scipy.spatial.distance import squareform\n\n# dist_matrix is long form distance matrix\nlinkage_matrix = linkage(squareform(dist_matrix), linkage_method)\n```\nthen what's an efficient way to go from this to cluster assignments for individual points? i.e. a vector of length N, where N is the number of points and each entry i is the cluster number of point i, given the number of clusters generated by a given threshold thresh on the resulting clustering? To clarify: the cluster number would be the cluster that a point is in after applying a threshold to the tree; in that case you would get, for each leaf node, the cluster that it is in. Unique in the sense that each point belongs to one \"most specific cluster\" which is defined by the threshold where you cut the dendrogram. I know that scipy.cluster.hierarchy.fclusterdata gives you this cluster assignment as its return value, but I am starting from a custom made distance matrix and distance metric, so I cannot use fclusterdata. The question boils down to: how can I compute what fclusterdata is computing -- the cluster assignments?","response":"If I understand you right, that is what fcluster does: scipy.cluster.hierarchy.fcluster(Z, t, criterion='inconsistent', depth=2, R=None, monocrit=None) Forms flat clusters from the hierarchical clustering defined by the linkage matrix Z. ... Returns: An array of length n. T[i] is the flat cluster number to which original observation i belongs. So just call fcluster(linkage_matrix, t), where t is your threshold.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15951711\/how-to-compute-cluster-assignments-from-linkage-distance-matrices","best_answers_votes":32,"tags":["python","numpy","scipy","cluster-analysis"],"question_length":1240,"response_length":414,"tag_count":4} {"question":"Python DataFrame or list for storing objects Can I \"store\" instances of a class in a pandas\/numpy Series\/DataFrame\/ndarray just like I do in a list? Or do these libraries only support built-in types (numerics, strings)? For example, I have a Point with x,y coordinates, and I want to store Points in a Plane that would return the Point with given coordinates. \n```\n#my class\nclass MyPoint:\n\n def __init__(self, x,y):\n self._x = x\n self._y = y\n\n @property\n def x(self):\n return self._x\n\n @property\n def y(self):\n return self._y\n```\nHere I create instances: \n```\nfirst_point = MyPoint(1,1)\nsecond_point = MyPoint(2,2)\n```\nI can store instances in some list \n```\nmy_list = []\nmy_list.append(first_point)\nmy_list.append(second_point)\n```\nThe problem with a list is that its indexes do not correspond to the x,y properties. Dictionary\/DataFrame approach: \n```\nPlane = {\"x\" : [first_point.x, second_point.x], \"y\" : [first_point.y, second_point.y], \"some_reference\/id_to_point_instance\" = ???}\nPlane_pd = pd.DataFrame(Plane)\n```\nI've read posts saying that using the \"id\" of an instance as a third column value in a DataFrame could cause problems with the garbage collector.","response":"A pandas.DataFrame will gladly store python objects. Some test code to demonstrate... Test Code: \n```\nimport pandas as pd\n\nclass MyPoint:\n def __init__(self, x, y):\n self._x = x\n self._y = y\n\n @property\n def x(self):\n return self._x\n\n @property\n def y(self):\n return self._y\n\nmy_list = [MyPoint(1, 1), MyPoint(2, 2)]\nprint(my_list)\n\nplane_pd = pd.DataFrame([[p.x, p.y, p] for p in my_list],\n columns=list('XYO'))\nprint(plane_pd.dtypes)\nprint(plane_pd)\n```\nResults: \n```\n[<__main__.MyPoint object at 0x033D2AF0>, <__main__.MyPoint object at 0x033D2B10>]\n\nX int64\nY int64\nO object\ndtype: object\n\n X Y O\n0 1 1 <__main__.MyPoint object at 0x033D2AF0>\n1 2 2 <__main__.MyPoint object at 0x033D2B10>\n```\nNotes: Note the two objects in the list are the same two objects in the dataframe.
Also note the dtype for the O column is object.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/44219023\/python-dataframe-or-list-for-storing-objects","best_answers_votes":30,"tags":["python","pandas","numpy"],"question_length":1163,"response_length":930,"tag_count":3} {"question":"Sort a numpy array by another array, along a particular axis Similar to this answer, I have a pair of 3D numpy arrays, a and b, and I want to sort the entries of b by the values of a. Unlike this answer, I want to sort only along one axis of the arrays. My naive reading of the numpy.argsort() documentation: \n```\nReturns\n-------\nindex_array : ndarray, int\n Array of indices that sort `a` along the specified axis.\n In other words, ``a[index_array]`` yields a sorted `a`.\n```\nled me to believe that I could do my sort with the following code: \n```\nimport numpy\n\na = numpy.zeros((3, 3, 3))\na += numpy.array((1, 3, 2)).reshape((3, 1, 1))\nprint \"a\"\nprint a\n\"\"\"\n[[[ 1. 1. 1.]\n [ 1. 1. 1.]\n [ 1. 1. 1.]]\n\n [[ 3. 3. 3.]\n [ 3. 3. 3.]\n [ 3. 3. 3.]]\n\n [[ 2. 2. 2.]\n [ 2. 2. 2.]\n [ 2. 2. 2.]]]\n\"\"\"\nb = numpy.arange(3*3*3).reshape((3, 3, 3))\nprint \"b\"\nprint b\n\"\"\"\n[[[ 0 1 2]\n [ 3 4 5]\n [ 6 7 8]]\n\n [[ 9 10 11]\n [12 13 14]\n [15 16 17]]\n\n [[18 19 20]\n [21 22 23]\n [24 25 26]]]\n\"\"\"\nprint \"a, sorted\"\nprint numpy.sort(a, axis=0)\n\"\"\"\n[[[ 1. 1. 1.]\n [ 1. 1. 1.]\n [ 1. 1. 1.]]\n\n [[ 2. 2. 2.]\n [ 2. 2. 2.]\n [ 2. 2. 2.]]\n\n [[ 3. 3. 3.]\n [ 3. 3. 3.]\n [ 3. 3. 3.]]]\n\"\"\"\n\n##This isnt' working how I'd like\nsort_indices = numpy.argsort(a, axis=0)\nc = b[sort_indices]\n\"\"\"\nDesired output:\n\n[[[ 0 1 2]\n [ 3 4 5]\n [ 6 7 8]]\n\n [[18 19 20]\n [21 22 23]\n [24 25 26]]\n\n [[ 9 10 11]\n [12 13 14]\n [15 16 17]]]\n\"\"\"\nprint \"Desired shape of b[sort_indices]: (3, 3, 3).\"\nprint \"Actual shape of b[sort_indices]:\"\nprint c.shape\n\"\"\"\n(3, 3, 3, 3, 3)\n\"\"\"\n```\nWhat's the right way to do this?","response":"You still have to supply indices for the other two dimensions for this to work correctly. \n```\n\n```python\na = numpy.zeros((3, 3, 3))\n```\n\n```python\na += numpy.array((1, 3, 2)).reshape((3, 1, 1))\n```\n\n```python\nb = numpy.arange(3*3*3).reshape((3, 3, 3))\n```\n\n```python\nsort_indices = numpy.argsort(a, axis=0)\n```\n\n```python\nstatic_indices = numpy.indices((3, 3, 3))\n```\n\n```python\nb[sort_indices, static_indices[1], static_indices[2]]\n#Output\n#array([[[ 0, 1, 2],\n# [ 3, 4, 5],\n# [ 6, 7, 8]],\n```\n\n[[18, 19, 20],\n [21, 22, 23],\n [24, 25, 26]],\n\n [[ 9, 10, 11],\n [12, 13, 14],\n [15, 16, 17]]])\n```\nnumpy.indices calculates the indices of each axis of the array when \"flattened\" through the other two axes (or n - 1 axes where n = total number of axes). In other words, this (apologies for the long post): \n```\n\n```python\nstatic_indices\n#Output\n#array([[[[0, 0, 0],\n# [0, 0, 0],\n# [0, 0, 0]],\n```\n\n[[1, 1, 1],\n [1, 1, 1],\n [1, 1, 1]],\n\n [[2, 2, 2],\n [2, 2, 2],\n [2, 2, 2]]],\n\n\n [[[0, 0, 0],\n [1, 1, 1],\n [2, 2, 2]],\n\n [[0, 0, 0],\n [1, 1, 1],\n [2, 2, 2]],\n\n [[0, 0, 0],\n [1, 1, 1],\n [2, 2, 2]]],\n\n\n [[[0, 1, 2],\n [0, 1, 2],\n [0, 1, 2]],\n\n [[0, 1, 2],\n [0, 1, 2],\n [0, 1, 2]],\n\n [[0, 1, 2],\n [0, 1, 2],\n [0, 1, 2]]]])\n```\nThese are the identity indices for each axis; when used to index b, they recreate b. 
\n```\n\n```python\nb[static_indices[0], static_indices[1], static_indices[2]]\n#Output\n#array([[[ 0, 1, 2],\n# [ 3, 4, 5],\n# [ 6, 7, 8]],\n```\n\n[[ 9, 10, 11],\n [12, 13, 14],\n [15, 16, 17]],\n\n [[18, 19, 20],\n [21, 22, 23],\n [24, 25, 26]]])\n```\nAs an alternative to numpy.indices, you could use numpy.ogrid, as unutbu suggests. Since the object generated by ogrid is smaller, I'll create all three axes, just for consistency sake, but note unutbu's comment for a way to do this by generating only two. \n```\n\n```python\nstatic_indices = numpy.ogrid[0:a.shape[0], 0:a.shape[1], 0:a.shape[2]]\n```\n\n```python\na[sort_indices, static_indices[1], static_indices[2]]\n#Output\n#array([[[ 1., 1., 1.],\n# [ 1., 1., 1.],\n# [ 1., 1., 1.]],\n```\n\n[[ 2., 2., 2.],\n [ 2., 2., 2.],\n [ 2., 2., 2.]],\n\n [[ 3., 3., 3.],\n [ 3., 3., 3.],\n [ 3., 3., 3.]]])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6155649\/sort-a-numpy-array-by-another-array-along-a-particular-axis","best_answers_votes":23,"tags":["python","sorting","multidimensional-array","numpy"],"question_length":1625,"response_length":2368,"tag_count":4} {"question":"Most memory-efficient way to compute abs()**2 of complex numpy ndarray I'm looking for the most memory-efficient way to compute the absolute squared value of a complex numpy ndarray \n```py\narr = np.empty((250000, 150), dtype='complex128') # common size\n```\nI haven't found a ufunc that would do exactly np.abs()**2. As an array of that size and type takes up around half a GB, I'm looking for a primarily memory-efficient way. I would also like it to be portable, so ideally some combination of ufuncs. So far my understanding is that this should be about the best \n```py\nresult = np.abs(arr)\nresult **= 2\n```\nIt will needlessly compute (**0.5)**2, but should compute **2 in-place. Altogether the peak memory requirement is only the original array size + result array size, which should be 1.5 * original array size as the result is real. If I wanted to get rid of the useless **2 call I'd have to do something like this \n```py\nresult = arr.real**2\nresult += arr.imag**2\n```\nbut if I'm not mistaken, this means I'll have to allocate memory for both the real and imaginary part calculation, so the peak memory usage would be 2.0 * original array size. The arr.real properties also return a non-contiguous array (but that is of lesser concern). Is there anything I'm missing? Are there any better ways to do this? 
EDIT 1: I'm sorry for not making it clear, I don't want to overwrite arr, so I can't use it as out.","response":"Thanks to numba.vectorize in recent versions of numba, creating a numpy universal function for the task is very easy: \n```\n@numba.vectorize([numba.float64(numba.complex128),numba.float32(numba.complex64)])\ndef abs2(x):\n return x.real**2 + x.imag**2\n```\nOn my machine, I find a threefold speedup compared to a pure-numpy version that creates intermediate arrays: \n```\n\n```python\nx = np.random.randn(10000).view('c16')\n```\n\n```python\ny = abs2(x)\n```\n\n```python\nnp.all(y == x.real**2 + x.imag**2) # exactly equal, being the same operation\n#Output\n#True\n```\n\n```python\n%timeit np.abs(x)**2\n#Output\n#10000 loops, best of 3: 81.4 \u00b5s per loop\n```\n\n```python\n%timeit x.real**2 + x.imag**2\n#Output\n#100000 loops, best of 3: 12.7 \u00b5s per loop\n```\n\n```python\n%timeit abs2(x)\n#Output\n#100000 loops, best of 3: 4.6 \u00b5s per loop\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30437947\/most-memory-efficient-way-to-compute-abs2-of-complex-numpy-ndarray","best_answers_votes":17,"tags":["python","numpy","complex-numbers","memory-efficient","numpy-ufunc"],"question_length":1412,"response_length":723,"tag_count":5} {"question":"Python: Making numpy default to float32 Is there any clean way of setting numpy to use float32 values instead of float64 globally?","response":"Not that I am aware of. You either need to specify the dtype explicitly when you call the constructor for any array, or cast an array to float32 (use the ndarray.astype method) before passing it to your GPU code (I take it this is what the question pertains to?). If it is the GPU case you are really worried about, I favor the latter - it can become very annoying to try and keep everything in single precision without an extremely thorough understanding of the numpy broadcasting rules and very carefully designed code. Another alternative might be to create your own methods which overload the standard numpy constructors (so numpy.zeros, numpy.ones, numpy.empty). That should go pretty close to keeping everything in float32.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5721831\/python-making-numpy-default-to-float32","best_answers_votes":13,"tags":["python","numpy","numbers"],"question_length":130,"response_length":729,"tag_count":3} {"question":"When will numpy copy the array when using reshape() In the document of numpy.reshape, it says: This will be a new view object if possible; otherwise, it will be a copy. Note there is no guarantee of the memory layout (C- or Fortran- contiguous) of the returned array. My question is, when will numpy chooses to return a new view, and when to copy the whole array? Is there any general principles telling people about the behavior of reshape, or it is just unpredictable?","response":"The link that @mgillson found appears to address the question of 'how do I tell if it made a copy', but not 'how do I predict it' or understand why it made the copy. As for the test, I like to use A.__array_interfrace__. Most likely this would be a problem if you tried to assign values to the reshaped array, expecting to also change the original. And I'd be hard pressed to find a SO case where that was the issue. A copying reshape will be a bit slower than a noncopying one, but again I can't think of a case where that produced a slow down of the whole code. 
A copy could also be an issue if you are working with arrays so big that the simplest operation produces a memory error. After reshaping the values in the data buffer need to be in a contiguous order, either 'C' or 'F'. For example: \n```\n\n```python\nnp.arange(12).reshape(3,4,order='C')\n#Output\n#array([[ 0, 1, 2, 3],\n# [ 4, 5, 6, 7],\n# [ 8, 9, 10, 11]])\n```\n\n```python\nnp.arange(12).reshape(3,4,order='F')\n#Output\n#array([[ 0, 3, 6, 9],\n# [ 1, 4, 7, 10],\n# [ 2, 5, 8, 11]])\n#```\n#It will do a copy if the initial order is so 'messed up' that it can't return values like this. Reshape after transpose may do this (see my example below). So might games with stride_tricks.as_strided. Off hand those are the only cases I can think of. \n#```\n```\n\n```python\nx=np.arange(12).reshape(3,4,order='C')\n```\n\n```python\ny=x.T\n```\n\n```python\nx.__array_interface__\n#Output\n#{'version': 3,\n# 'descr': [('', ' ni; nk--) {\n for nk in range(nj-1, ni, -1):\n newstrides[nk - 1] = newstrides[nk]*newdims[nk];\n nj += 1; ni = nj\n oj += 1; oi = oj \n print(olddims, newdims) \n print(oldstrides, newstrides)\n\n # * Set strides corresponding to trailing 1s of the new shape.\n if (ni >= 1) :\n print(newstrides, ni)\n last_stride = newstrides[ni - 1];\n else :\n last_stride = self.itemsize # PyArray_ITEMSIZE(self);\n\n if (is_f_order) :\n last_stride *= newdims[ni - 1];\n\n for nk in range(ni, newnd):\n newstrides[nk] = last_stride;\n return newstrides\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36995289\/when-will-numpy-copy-the-array-when-using-reshape","best_answers_votes":12,"tags":["python","numpy"],"question_length":470,"response_length":6771,"tag_count":2} {"question":"Root mean square of a function in python I want to calculate root mean square of a function in Python. My function is in a simple form like y = f(x). x and y are arrays. I tried Numpy and Scipy Docs and couldn't find anything.","response":"I'm going to assume that you want to compute the expression given by the following pseudocode: \n```\nms = 0\nfor i = 1 ... N\n ms = ms + y[i]^2\nms = ms \/ N\nrms = sqrt(ms)\n```\ni.e. the square root of the mean of the squared values of elements of y. In numpy, you can simply square y, take its mean and then its square root as follows: \n```\nrms = np.sqrt(np.mean(y**2))\n```\nSo, for example: \n```\n\n```python\ny = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 1]) # Six 1's\n```\n\n```python\ny.size\n#Output\n#10\n```\n\n```python\nnp.mean(y**2)\n#Output\n#0.59999999999999998\n```\n\n```python\nnp.sqrt(np.mean(y**2))\n#Output\n#0.7745966692414834\n#```\n#Do clarify your question if you mean to ask something else.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40963659\/root-mean-square-of-a-function-in-python","best_answers_votes":68,"tags":["python","arrays","python-3.x","numpy","scipy"],"question_length":226,"response_length":615,"tag_count":5} {"question":"Python Pandas: remove entries based on the number of occurrences I'm trying to remove entries from a data frame which occur less than 100 times. 
The data frame data looks like this: \n```\npid tag\n1 23 \n1 45\n1 62\n2 24\n2 45\n3 34\n3 25\n3 62\n```\nNow I count the number of tag occurrences like this: \n```\nbytag = data.groupby('tag').aggregate(np.count_nonzero)\n```\nBut then I can't figure out how to remove those entries which have low count...","response":"New in 0.12, groupby objects have a filter method, allowing you to do these types of operations: \n```\n\n```python\ng = data.groupby('tag')\n```\n\n```python\ng.filter(lambda x: len(x) > 1) # pandas 0.13.1\n#Output\n#pid tag\n#1 1 45\n#2 1 62\n#4 2 45\n#7 3 62\n#```\n#The function (the first argument of filter) is applied to each group (subframe), and the results include elements of the original DataFrame belonging to groups which evaluated to True. Note: in 0.12 the ordering is different than in the original DataFrame, this was fixed in 0.13+: \n#```\n```\n\n```python\ng.filter(lambda x: len(x) > 1) # pandas 0.12\n#Output\n#pid tag\n#1 1 45\n#4 2 45\n#2 1 62\n#7 3 62\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13446480\/python-pandas-remove-entries-based-on-the-number-of-occurrences","best_answers_votes":37,"tags":["python","numpy","python-2.7","pandas"],"question_length":474,"response_length":681,"tag_count":4} {"question":"Increase image brightness without overflow I got a problem when trying to increase image brightness. Here is the origin image: The image I wanted to get is like this: Now to increase the brightness with the following code: \n```\nimage = cv2.imread(\"\/home\/wni\/vbshare\/tmp\/a4_index2.png\",0)\n\n if sum(image[0])\/len(image[0])<200:\n new = np.where((255-image) 128\n\ncv2.imshow(\"\", y*1.0)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n```\nBut I'm getting this error: \n```\n>Traceback (most recent call last):\nFile \"ex5.py\", line 11, in \ncv2.imshow(\"\", y*1.0)\ncv2.error: OpenCV(4.0.0) c:\\projects\\opencv- \npython\\opencv\\modules\\imgproc\\src\\color.hpp:261: error: (-2:Unspecified \nerror) >in function '__cdecl cv::CvtHelper,struct cv::Set<3,4,-1>,struct \ncv::Set<0,2,5>,2>::CvtHelper(const class cv::_InputArray &,const class \ncv::_OutputArray &,int)'\n>Unsupported depth of input image:\n> 'VDepth::contains(depth)'\n> where\n> 'depth' is 6 (CV_64F).\n```","response":"try cv2.imshow(\"\", y.astype('float32')) or cv2.imshow(\"\", y.astype('uint8') * 255) CV_64F means the numpy array dtype is float64 (64-bit floating-point). opencv imshow only works with float32 (32-bit floating point) where the range for the pixel values is 0.0-1.0 or uint8 (unsigned 8-bit) where the range is 0-255 Since y was a bool, converting it to a number means converting True to 1 for float32, that is fine because 1 is the maximum for the imshow range if you use uint8, that means you're trying to display pixels of value 1\/255 (since the maximum imshow range for uint8 is 255) which will be barely visible, so you can multiply by 255 to bring those pixels to max and appear as bright white pixels","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/55128386\/python-opencv-depth-of-image-unsupported-cv-64f","best_answers_votes":51,"tags":["python","numpy","opencv"],"question_length":887,"response_length":705,"tag_count":3} {"question":"Converting image from RGB to HSV color space I couldn't find such function (i.e. 
RGB_to_HSV()) in Scipy or Matplotlib's documentations, and Google doesn't show pointers, except ActiveState recipe which demonstrates rgb2hsv function, though not usable on Numpy array as is. Does someone know of a shortcut? Edit: Sorry, just found matplotlib.colors.rgb_to_hsv() which is exactly what I was looking for. Should I delete this question?","response":"Matplotlib provides RGB to HSV conversion function: matplotlib.colors.rgb_to_hsv(): matplotlib.colors.rgb_to_hsv(arr) convert rgb values in a numpy array to hsv values input and output arrays should have shape (M,N,3)","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15278323\/converting-image-from-rgb-to-hsv-color-space","best_answers_votes":53,"tags":["numpy","matplotlib","scipy","python-imaging-library"],"question_length":432,"response_length":217,"tag_count":4} {"question":"Is \"norm\" equivalent to \"Euclidean distance\"? I am not sure whether \"norm\" and \"Euclidean distance\" mean the same thing. Please could you help me with this distinction. I have an n by m array a, where m > 3. I want to calculate the Eculidean distance between the second data point a[1,:] to all the other points (including itself). So I used the np.linalg.norm, which outputs the norm of two given points. But I don't know if this is the right way of getting the EDs. \n```\nimport numpy as np\n\na = np.array([[0, 0, 0 ,0 ], [1, 1 , 1, 1],[2,2, 2, 3], [3,5, 1, 5]])\nN = a.shape[0] # number of row\npos = a[1,:] # pick out the second data point. \ndist = np.zeros((N,1), dtype=np.float64)\n\nfor i in range(N):\n dist[i]= np.linalg.norm(a[i,:] - pos)\n```","response":"A norm is a function that takes a vector as an input and returns a scalar value that can be interpreted as the \"size\", \"length\" or \"magnitude\" of that vector. More formally, norms are defined as having the following mathematical properties: They scale multiplicatively, i.e. Norm(a\u00b7v) = |a|\u00b7Norm(v) for any scalar a They satisfy the triangle inequality, i.e. Norm(u + v) \u2264 Norm(u) + Norm(v) The norm of a vector is zero if and only if it is the zero vector, i.e. Norm(v) = 0 \u21d4 v = 0 The Euclidean norm (also known as the L\u00b2 norm) is just one of many different norms - there is also the max norm, the Manhattan norm etc. The L\u00b2 norm of a single vector is equivalent to the Euclidean distance from that point to the origin, and the L\u00b2 norm of the difference between two vectors is equivalent to the Euclidean distance between the two points. As @nobar's answer says, np.linalg.norm(x - y, ord=2) (or just np.linalg.norm(x - y)) will give you Euclidean distance between the vectors x and y. 
Since you want to compute the Euclidean distance between a[1, :] and every other row in a, you could do this a lot faster by eliminating the for loop and broadcasting over the rows of a: \n```\ndist = np.linalg.norm(a[1:2] - a, axis=1)\n```\nIt's also easy to compute the Euclidean distance yourself using broadcasting: \n```\ndist = np.sqrt(((a[1:2] - a) ** 2).sum(1))\n```\nThe fastest method is probably scipy.spatial.distance.cdist: \n```\nfrom scipy.spatial.distance import cdist\n\ndist = cdist(a[1:2], a)[0]\n```\nSome timings for a (1000, 1000) array: \n```\na = np.random.randn(1000, 1000)\n\n%timeit np.linalg.norm(a[1:2] - a, axis=1)\n# 100 loops, best of 3: 5.43 ms per loop\n\n%timeit np.sqrt(((a[1:2] - a) ** 2).sum(1))\n# 100 loops, best of 3: 5.5 ms per loop\n\n%timeit cdist(a[1:2], a)[0]\n# 1000 loops, best of 3: 1.38 ms per loop\n\n# check that all 3 methods return the same result\nd1 = np.linalg.norm(a[1:2] - a, axis=1)\nd2 = np.sqrt(((a[1:2] - a) ** 2).sum(1))\nd3 = cdist(a[1:2], a)[0]\n\nassert np.allclose(d1, d2) and np.allclose(d1, d3)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32141856\/is-norm-equivalent-to-euclidean-distance","best_answers_votes":42,"tags":["python","arrays","math","numpy","euclidean-distance"],"question_length":748,"response_length":2024,"tag_count":5} {"question":"Why is x**3 slower than x*x*x? [duplicate] This question already has answers here: Speed of calculating powers (in python) (6 answers) Closed 11 years ago. In NumPy, x*x*x is an order of magnitude faster than x**3 or even np.power(x, 3). \n```\nx = np.random.rand(1e6)\n%timeit x**3\n100 loops, best of 3: 7.07 ms per loop\n\n%timeit x*x*x\n10000 loops, best of 3: 163 \u00b5s per loop\n\n%timeit np.power(x, 3)\n100 loops, best of 3: 7.15 ms per loop\n```\nAny ideas as to why this behavior happens? As far as I can tell all three yield the same output (checked with np.allclose).","response":"As per this answer, it's because the implementation of exponentiation has some overhead that multiplication does not. However, naive multiplication will get slower and slower as the exponent increases. An empirical demonstration: \n```\n\n```python\nx = np.random.rand(1e6)\n```\n\n```python\n%timeit x**2\n 100 loops, best of 3: 11.9 ms per loop\n```\n\n```python\n%timeit x*x\n 100 loops, best of 3: 12.7 ms per loop\n```\n\n```python\n%timeit x**3\n 10 loops, best of 3: 132 ms per loop\n```\n\n```python\n%timeit x*x*x\n 10 loops, best of 3: 27.2 ms per loop\n```\n\n```python\n%timeit x**4\n 10 loops, best of 3: 132 ms per loop\n```\n\n```python\n%timeit x*x*x*x\n 10 loops, best of 3: 42.4 ms per loop\n```\n\n```python\n%timeit x**10\n 10 loops, best of 3: 132 ms per loop\n```\n\n```python\n%timeit x*x*x*x*x*x*x*x*x*x\n 10 loops, best of 3: 137 ms per loop\n```\n\n```python\n%timeit x**15\n 10 loops, best of 3: 132 ms per loop\n```\n\n```python\n%timeit x*x*x*x*x*x*x*x*x*x*x*x*x*x*x\n 1 loops, best of 3: 212 ms per loop\n```\nNote the exponentiation time stays more or less constant, except for the x**2 case which I suspect is special-cased, while multiplication gets slower and slower. It seems you could exploit this to get faster integer exponentiation... 
for example: \n```\n```\n\n```python\n%timeit x**16\n10 loops, best of 3: 132 ms per loop\n```\n\n```python\n%timeit x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x\n1 loops, best of 3: 225 ms per loop\n```\n\n```python\ndef tosixteenth(x):\n ....: x2 = x*x\n ....: x4 = x2*x2\n ....: x8 = x4*x4\n ....: x16 = x8*x8\n ....: return x16\n ....:\n```\n\n```python\n%timeit tosixteenth(x)\n10 loops, best of 3: 49.5 ms per loop\n```\nIt seems you could apply this technique generically by splitting any integer into a sum of the powers of two, computing each power of two as above, and summing: \n```\n```\n\n```python\n%paste\ndef smartintexp(x, exp):\n result = np.ones(len(x))\n curexp = np.array(x)\n while True:\n if exp%2 == 1:\n result *= curexp\n exp >>= 1\n if not exp: break\n curexp *= curexp\n return result\n## -- End pasted text --\n```\n\n```python\nx\n#Output\n#array([ 0.0163407 , 0.57694587, 0.47336487, ..., 0.70255032,\n# 0.62043303, 0.0796748 ])\n```\n\n```python\nx**21\n#Output\n#array([ 3.01080670e-38, 9.63466181e-06, 1.51048544e-07, ...,\n# 6.02873388e-04, 4.43193256e-05, 8.46721060e-24])\n```\n\n```python\nsmartintexp(x, 21)\n#Output\n#array([ 3.01080670e-38, 9.63466181e-06, 1.51048544e-07, ...,\n# 6.02873388e-04, 4.43193256e-05, 8.46721060e-24])\n```\n\n```python\n%timeit x**21\n10 loops, best of 3: 132 ms per loop\n```\n\n```python\n%timeit smartintexp(x, 21)\n10 loops, best of 3: 70.7 ms per loop\n```\nIt's fast for small even powers of two: \n```\n```\n\n```python\n%timeit x**32\n10 loops, best of 3: 131 ms per loop\n```\n\n```python\n%timeit smartintexp(x, 32)\n10 loops, best of 3: 57.4 ms per loop\n```\nBut gets slower as the exponent gets larger: \n```\n```\n\n```python\n%timeit x**63\n10 loops, best of 3: 133 ms per loop\n```\n\n```python\n%timeit smartintexp(x, 63)\n10 loops, best of 3: 110 ms per loop\n```\nAnd not faster for large worst-cases: \n```\n```\n\n```python\n%timeit x**511\n10 loops, best of 3: 135 ms per loop\n```\n\n```python\n%timeit smartintexp(x, 511)\n10 loops, best of 3: 192 ms per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18453771\/why-is-x3-slower-than-xxx","best_answers_votes":37,"tags":["python","numpy"],"question_length":564,"response_length":3153,"tag_count":2} {"question":"Numpy python find minimum value of each column and subtract this value from each column Can anyone help with a snippet of code using numpy and python? Given an numpy array such as \n```\na = array([[1,11], [3,9], [5,7]]\n```\nI want to find the minimun value of each column, so 1 and 7 and then subtract this value from the respective columns, \n```\na = array([[0,4], [2,2], [4,0]]\n```","response":"```\n\n```python\na - a.min(axis=0)\n#Output\n#array([[0, 4],\n# [2, 2],\n# [4, 0]])\n#```\n#Where axis=0 refers to columns.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7994992\/numpy-python-find-minimum-value-of-each-column-and-subtract-this-value-from-each","best_answers_votes":46,"tags":["python","numpy"],"question_length":380,"response_length":107,"tag_count":2} {"question":"How to iterate over columns of a matrix? In python if a define: \n```\na = arange(9).reshape(3,3)\n```\nas a 3x3 matrix and iterate: \n```\nfor i in a:\n```\nIt'll iterate over the matrix's rows. 
Is there any way to iterate over columns?","response":"How about \n```\nfor i in a.transpose():\n```\nor, shorter: \n```\nfor i in a.T:\n```\nThis may look expensive but is in fact very cheap (it returns a view onto the same data, but with the shape and stride attributes permuted).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5515181\/how-to-iterate-over-columns-of-a-matrix","best_answers_votes":42,"tags":["python","numpy"],"question_length":229,"response_length":219,"tag_count":2} {"question":"Fast way to Hash Numpy objects for Caching Implementing a system where, when it comes to the heavy mathematical lifting, I want to do as little as possible. I'm aware that there are issues with memoisation with numpy objects, and as such implemented a lazy-key cache to avoid the whole \"Premature optimisation\" argument. \n```\ndef magic(numpyarg,intarg):\n key = str(numpyarg)+str(intarg)\n\n try:\n ret = self._cache[key]\n return ret\n except:\n pass\n\n ... here be dragons ...\n self._cache[key]=value\n return value\n```\nbut since string conversion takes quite a while... \n```\nt=timeit.Timer(\"str(a)\",\"import numpy;a=numpy.random.rand(10,10)\")\nt.timeit(number=100000)\/100000 = 0.00132s\/call\n```\nWhat do people suggest as being 'the better way' to do it?","response":"Borrowed from this answer... so really I guess this is a duplicate: \n```\n\n```python\nimport hashlib\n```\n\n```python\nimport numpy\n```\n\n```python\na = numpy.random.rand(10, 100)\n```\n\n```python\nb = a.view(numpy.uint8)\n```\n\n```python\nhashlib.sha1(b).hexdigest()\n#Output\n#'15c61fba5c969e5ed12cee619551881be908f11b'\n```\n\n```python\nt=timeit.Timer(\"hashlib.sha1(a.view(numpy.uint8)).hexdigest()\", \n#Output\n# \"import hashlib;import numpy;a=numpy.random.rand(10,10)\") \n```\n\n```python\nt.timeit(number=10000)\/10000\n#Output\n#2.5790500640869139e-05\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5386694\/fast-way-to-hash-numpy-objects-for-caching","best_answers_votes":30,"tags":["python","performance","numpy"],"question_length":784,"response_length":453,"tag_count":3} {"question":"python+numpy: why does numpy.log throw an attribute error if its operand is too big? Running \n```\nnp.log(math.factorial(21))\n```\nthrows an AttributeError: log. Why is that? I could imagine a ValueError, or some sort of UseYourHighSchoolMathsError, but why the attribute error?","response":"The result of math.factorial(21) is a Python long. numpy cannot convert it to one of its numeric types, so it leaves it as dtype=object. The way that unary ufuncs work for object arrays is that they simply try to call a method of the same name on the object. E.g. 
\n```\nnp.log(np.array([x], dtype=object)) <-> np.array([x.log()], dtype=object)\n```\nSince there is no .log() method on a Python long, you get the AttributeError.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6032781\/pythonnumpy-why-does-numpy-log-throw-an-attribute-error-if-its-operand-is-too","best_answers_votes":33,"tags":["python","numpy"],"question_length":276,"response_length":424,"tag_count":2} {"question":"Normalized Cross-Correlation in Python I have been struggling the last days trying to compute the degrees of freedom of two pair of vectors (x and y) following reference of Chelton (1983) which is: degrees of freedom according to Chelton(1983) and I can't find a proper way to calculate the normalized cross correlation function using np.correlate, I always get an output that it isn't in between -1, 1. Is there any easy way to get the cross correlation function normalized in order to compute the degrees of freedom of two vectors?","response":"Nice Question. There is no direct way but you can \"normalize\" the input vectors before using np.correlate like this and reasonable values will be returned within a range of [-1,1]: Here i define the correlation as generally defined in signal processing textbooks. \n```latex\nc'_{ab}[k] = sum_n a[n] conj(b[n+k])\n```\nCODE: If a and b are the vectors: \n```\na = (a - np.mean(a)) \/ (np.std(a) * len(a))\nb = (b - np.mean(b)) \/ (np.std(b))\nc = np.correlate(a, b, 'full')\n```\nReferences: https:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.correlate.html https:\/\/en.wikipedia.org\/wiki\/Cross-correlation","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/53436231\/normalized-cross-correlation-in-python","best_answers_votes":24,"tags":["python","numpy","machine-learning","correlation","cross-correlation"],"question_length":533,"response_length":601,"tag_count":5} {"question":"How to print numpy objects without line breaks I am logging input arguments to a function using \n```\nlogging.debug('Input to this function = %s',\n inspect.getargvalues(inspect.currentframe())[3])\n```\nBut I do not want the line breaks inserted within numpy objects. numpy.set_printoptions(linewidth=np.nan) removes some, but line breaks are still inserted in 2D objects such as \n```\narray([[ 0.84148239, 0.71467895, 0.00946744, 0.3471317 ],\n [ 0.68041249, 0.20310698, 0.89486761, 0.97799646],\n [ 0.22328803, 0.32401271, 0.96479887, 0.43404245]])\n```\nI want it to be like this: \n```\narray([[ 0.84148239, 0.71467895, 0.00946744, 0.3471317 ], [ 0.68041249, 0.20310698, 0.89486761, 0.97799646], [ 0.22328803, 0.32401271, 0.96479887, 0.43404245]])\n```\nHow can I do this? Thanks.","response":"Given an array x, you can print it without line breaks with, \n```\nimport numpy as np\nx_str = np.array_repr(x).replace('\\n', '')\nprint(x_str)\n```\nor alternatively using the function np.array2string instead of np.array_repr. I'm not sure if there is an easy way to remove newlines from the string representation or numpy arrays. 
However, it is always possible to remove them after the conversion took place, \n```\ninput_args = inspect.getargvalues(inspect.currentframe())[3]\nlogging.debug('Input to this function = %s', repr(input_args).replace('\\n', ''))\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29102955\/how-to-print-numpy-objects-without-line-breaks","best_answers_votes":16,"tags":["python","logging","numpy"],"question_length":815,"response_length":556,"tag_count":3} {"question":"Deal with overflow in exp using numpy Using numpy, I have this definition of a function: \n```\ndef powellBadlyScaled(X):\n f1 = 10**4 * X[0] * X[1] - 1\n f2 = numpy.exp(-numpy.float(X[0])) + numpy.exp(-numpy.float(X[1])) - 1.0001\n return f1 + f2\n```\nThis function is evaluated a huge number of times on an optimization routine. It often raises exception: \n```\nRuntimeWarning: overflow encountered in exp\n```\nI understand that operand cannot be stored in allocated space for a float. But how can I overcome the problem?","response":"You can use the bigfloat package. It supports arbitrary precision floating point operations. http:\/\/packages.python.org\/bigfloat\/ \n```\nimport bigfloat\nbigfloat.exp(5000,bigfloat.precision(100))\n# -> BigFloat.exact('2.9676283840236670689662968052896e+2171', precision=100)\n```\nAre you using a function optimization framework? They usually implement value boundaries (using penalty terms). Try that. Are the relevant values really that extreme? In optimization it's not uncommon to minimize log(f). (approximate log likelihood etc etc). Are you sure you want to optimize on that exp value and not log(exp(f)) == f. ? Have a look at my answer to this question: logit and inverse logit functions for extreme values Btw, if all you do is minimize powellBadlyScaled(x,y) then the minimum is at x -> + inf and y -> + inf, so no need for numerics.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9559346\/deal-with-overflow-in-exp-using-numpy","best_answers_votes":23,"tags":["python","numpy","overflow","exp"],"question_length":524,"response_length":839,"tag_count":4} {"question":"Is there an \"enhanced\" numpy\/scipy dot method? Problem I would like to compute the following using numpy or scipy: \n```\nY = A**T * Q * A\n```\nwhere A is a m x n matrix, A**T is the transpose of A and Q is an m x m diagonal matrix. Since Q is a diagonal matrix I store only its diagonal elements as a vector. Ways of solving for Y Currently I can think of two ways of how to calculate Y: Y = np.dot(np.dot(A.T, np.diag(Q)), A) and Y = np.dot(A.T * Q, A). Clearly option 2 is better than option 1 since no real matrix has to be created with diag(Q) (if this is what numpy really does...) However, both methods suffer from the defect of having to allocate more memory than there really is necessary since A.T * Q and np.dot(A.T, np.diag(Q)) have to be stored along with A in order to calculate Y. Question Is there a method in numpy\/scipy that would eliminate the unnecessary allocation of extra memory where you would only pass two matrices A and B (in my case B is A.T) and a weighting vector Q along with it?","response":"(w\/r\/t the last sentence of the OP: i am not aware of such a numpy\/scipy method but w\/r\/t the Question in the OP Title (i.e., improving NumPy dot performance) what's below should be of some help. 
In other words, my answer is directed to improving performance of most of the steps comprising your function for Y). First, this should give you a noticeable boost over the vanilla NumPy dot method: \n```\n\n```python\nfrom scipy.linalg import blas as FB\n```\n\n```python\nvx = FB.dgemm(alpha=1., a=v1, b=v2, trans_b=True)\n#Output\n#```\n#Note that the two arrays, v1, v2 are both in C_FORTRAN order You can access the byte order of a NumPy array through an array's flags attribute like so: \n#```\n```\n\n```python\nc = NP.ones((4, 3))\n```\n\n```python\nc.flags\n#Output\n# C_CONTIGUOUS : True # refers to C-contiguous order\n# F_CONTIGUOUS : False # fortran-contiguous\n# OWNDATA : True\n# MASKNA : False\n# OWNMASKNA : False\n# WRITEABLE : True\n# ALIGNED : True\n# UPDATEIFCOPY : False\n#```\n#to change the order of one of the arrays so both are aligned, just call the NumPy array constructor, pass in the array and set the appropriate order flag to True \n#```\n```\n\n```python\nc = NP.array(c, order=\"F\")\n```\n\n```python\nc.flags\n#Output\n# C_CONTIGUOUS : False\n# F_CONTIGUOUS : True\n# OWNDATA : True\n# MASKNA : False\n# OWNMASKNA : False\n# WRITEABLE : True\n# ALIGNED : True\n# UPDATEIFCOPY : False\n#```\n#You can further optimize by exploiting array-order alignment to reduce excess memory consumption caused by copying the original arrays. But why are the arrays copied before being passed to dot? The dot product relies on BLAS operations. These operations require arrays stored in C-contiguous order--it's this constraint that causes the arrays to be copied. On the other hand, the transpose does not effect a copy, though unfortunately returns the result in Fortran order: Therefore, to remove the performance bottleneck, you need to eliminate the predicate array-copying step; to do that just requires passing both arrays to dot in C-contiguous order*. So to calculate dot(A.T., A) without making an extra copy: \n#```\n```\n\n```python\nimport scipy.linalg.blas as FB\n```\n\n```python\nvx = FB.dgemm(alpha=1.0, a=A.T, b=A.T, trans_b=True)\n#Output\n#```\n#In sum, the expression just above (along with the predicate import statement) can substitute for dot, to supply the same functionality but better performance you can bind that expression to a function like so: \n#```\n```\n\n```python\nsuper_dot = lambda v, w: FB.dgemm(alpha=1., a=v.T, b=w.T, trans_b=True)\n#Output\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9478791\/is-there-an-enhanced-numpy-scipy-dot-method","best_answers_votes":26,"tags":["python","math","numpy","scipy"],"question_length":1007,"response_length":2466,"tag_count":4} {"question":"Why is sin(180) not zero when using python and numpy? Does anyone know why the below doesn't equal 0? \n```\nimport numpy as np\nnp.sin(np.radians(180))\n```\nor: \n```\nnp.sin(np.pi)\n```\nWhen I enter it into python it gives me 1.22e-16.","response":"The number \u03c0 cannot be represented exactly as a floating-point number. So, np.radians(180) doesn't give you \u03c0, it gives you 3.1415926535897931. And sin(3.1415926535897931) is in fact something like 1.22e-16. So, how do you deal with this? You have to work out, or at least guess at, appropriate absolute and\/or relative error bounds, and then instead of x == y, you write: \n```\nabs(y - x) < abs_bounds and abs(y-x) < rel_bounds * y\n```\n(This also means that you have to organize your computation so that the relative error is larger relative to y than to x. 
In your case, because y is the constant 0, that's trivial\u2014just do it backward.) Numpy provides a function that does this for you across a whole array, allclose: \n```\nnp.allclose(x, y, rel_bounds, abs_bounds)\n```\n(This actually checks abs(y - x) < abs_ bounds + rel_bounds * y), but that's almost always sufficient, and you can easily reorganize your code when it's not.) In your case: \n```\nnp.allclose(0, np.sin(np.radians(180)), rel_bounds, abs_bounds)\n```\nSo, how do you know what the right bounds are? There's no way to teach you enough error analysis in an SO answer. Propagation of uncertainty at Wikipedia gives a high-level overview. If you really have no clue, you can use the defaults, which are 1e-5 relative and 1e-8 absolute.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18646477\/why-is-sin180-not-zero-when-using-python-and-numpy","best_answers_votes":29,"tags":["python","numpy","trigonometry"],"question_length":230,"response_length":1295,"tag_count":3} {"question":"why is converting a long 2D list to numpy array so slow? I have a long list of xy coordinates, and would like to convert it into numpy array. \n```\n\n```python\nimport numpy as np\n```\n\n```python\nxy = np.random.rand(1000000, 2).tolist()\n#Output\n#```\n#The obvious way would be: \n#```\n```\n\n```python\na = np.array(xy) # Very slow...\n#Output\n#```\n#However, the above code is unreasonably slow. Interestingly, to transpose the long list first, convert it into numpy array, and then transpose back would be much faster (20x on my laptop). \n#```\n```\n\n```python\ndef longlist2array(longlist):\n wide = [[row[c] for row in longlist] for c in range(len(longlist[0]))]\n return np.array(wide).T\n```\n\n```python\na = longlist2array(xy) # 20x faster!\n#Output\n#```\n#Is this a bug of numpy? EDIT: This is a list of points (with xy coordinates) generated on-the-fly, so instead of preallocating an array and enlarging it when necessary, or maintaining two 1D lists for x and y, I think current representation is most natural. Why is looping through 2nd index faster than 1st index, given that we are iterating through a python list in both directions? EDIT 2: Based on @tiago's answer and this question, I found the following code twice as fast as my original version: \n#```\n```\n\n```python\nfrom itertools import chain\n```\n\n```python\ndef longlist2array(longlist):\n flat = np.fromiter(chain.from_iterable(longlist), np.array(longlist[0][0]).dtype, -1) # Without intermediate list:)\n return flat.reshape((len(longlist), -1))\n#Output\n#```\n```","response":"This is because the fastest-varying index of your list is the last one, so np.array() has to traverse the array many times because the first index is much larger. If your list was transposed, np.array() would be faster than your longlist2array: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nxy = np.random.rand(10000, 2).tolist()\n```\n\n```python\n%timeit longlist2array(xy)\n100 loops, best of 3: 3.38 ms per loop\n```\n\n```python\n%timeit np.array(xy)\n10 loops, best of 3: 55.8 ms per loop\n```\n\n```python\nxy = np.random.rand(2, 10000).tolist()\n```\n\n```python\n%timeit longlist2array(xy)\n10 loops, best of 3: 59.8 ms per loop\n```\n\n```python\n%timeit np.array(xy)\n1000 loops, best of 3: 1.96 ms per loop\n```\nThere is no magical solution for your problem. It's just how Python stores your list in memory. Do you really need to have a list with that shape? Can't you reverse it? 
(And do you really need a list, given that you're converting to numpy?) If you must convert a list, this function is about 10% faster than your longlist2array: \n```\nfrom itertools import chain\n\ndef convertlist(longlist)\n tmp = list(chain.from_iterable(longlist))\n return np.array(tmp).reshape((len(longlist), len(longlist[0])))\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17973507\/why-is-converting-a-long-2d-list-to-numpy-array-so-slow","best_answers_votes":6,"tags":["python","performance","numpy"],"question_length":1422,"response_length":1177,"tag_count":3} {"question":"Plot line graph from histogram data in matplotlib I have a numpy array of ints representing time periods, which I'm currently plotting in a histogram to get a nice distribution graph, using the following code: \n```\nax.hist(data,bins=100,range=(minimum,maximum),facecolor=\"r\")\n```\nHowever I'm trying to modify this graph to represent the exact same data using a line instead of bars, so I can overlay more samples to the same plot and have them be clear (otherwise the bars overlap each other). What I've tried so far is to collate the data array into an array of tuples containing (time, count), and then plot it using \n```\nax.plot(data[:,0],data[:,1],color=\"red\",lw=2)\n```\nHowever that's not giving me anything close, as I can't accurately simulate the bins option of the histogram in my plot. Is there a better way to do this?","response":"I am very late to the party - but maybe this will be useful to someone else. I think what you need to do is set the histtype parameter to 'step', i.e. \n```\nax.hist(data,bins=100,range=(minimum,maximum),facecolor=\"r\", histtype = 'step')\n```\nSee also http:\/\/matplotlib.sourceforge.net\/examples\/pylab_examples\/histogram_demo_extended.html","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8822370\/plot-line-graph-from-histogram-data-in-matplotlib","best_answers_votes":52,"tags":["python","numpy","matplotlib"],"question_length":828,"response_length":335,"tag_count":3} {"question":"How to automatically annotate maximum value in pyplot I'm trying to figure out how I can automatically annotate the maximum value in a figure window. I know you can do this by manually entering in x,y coordinates to annotate whatever point you want using the .annotate() method, but I want the annotation to be automatic, or to find the maximum point by itself. Here's my code so far: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom pandas import Series, DataFrame\n\ndf = pd.read_csv('macrodata.csv') #Read csv file into dataframe\nyears = df['year'] #Get years column\ninfl = df['infl'] #Get inflation rate column\n\nfig10 = plt.figure()\nwin = fig10.add_subplot(1,1,1)\nfig10 = plt.plot(years, infl, lw = 2)\n\nfig10 = plt.xlabel(\"Years\")\nfig10 = plt.ylabel(\"Inflation\")\nfig10 = plt.title(\"Inflation with Annotations\")\n```","response":"If x and y are the arrays to plot, you get the coordinates of the maximum via \n```\nxmax = x[numpy.argmax(y)]\nymax = y.max()\n```\nThis can be incorporated into a function that you may simply call with your data. 
\n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(-2,8, num=301)\ny = np.sinc((x-2.21)*3)\n\n\nfig, ax = plt.subplots()\nax.plot(x,y)\n\ndef annot_max(x,y, ax=None):\n xmax = x[np.argmax(y)]\n ymax = y.max()\n text= \"x={:.3f}, y={:.3f}\".format(xmax, ymax)\n if not ax:\n ax=plt.gca()\n bbox_props = dict(boxstyle=\"square,pad=0.3\", fc=\"w\", ec=\"k\", lw=0.72)\n arrowprops=dict(arrowstyle=\"->\",connectionstyle=\"angle,angleA=0,angleB=60\")\n kw = dict(xycoords='data',textcoords=\"axes fraction\",\n arrowprops=arrowprops, bbox=bbox_props, ha=\"right\", va=\"top\")\n ax.annotate(text, xy=(xmax, ymax), xytext=(0.94,0.96), **kw)\n\nannot_max(x,y)\n\n\nax.set_ylim(-0.3,1.5)\nplt.show()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43374920\/how-to-automatically-annotate-maximum-value-in-pyplot","best_answers_votes":51,"tags":["python","pandas","numpy","matplotlib"],"question_length":848,"response_length":931,"tag_count":4} {"question":"Numpy sort ndarray on multiple columns I get a ndarray reading it from a file, like this \n```\nmy_data = np.genfromtxt(input_file, delimiter='\\t', skip_header=0)\n```\nExample input (parsed) \n```\n[[ 2. 1. 2. 0.]\n [ 2. 2. 100. 0.]\n [ 2. 3. 100. 0.]\n [ 3. 1. 2. 0.]\n [ 3. 2. 4. 0.]\n [ 3. 3. 6. 0.]\n [ 4. 1. 2. 0.]\n [ 4. 2. 4. 0.]\n [ 4. 3. 6. 0.]]\n```\nLonger example input (unparsed). The first 2 columns are supposed to be int, while the last 2 columns are supposed to be float, but that's what I get. Suggestions are welcome. The main problem is, I'm trying to sort it, using Numpy, so that rows get ordered giving precedence to the numbers on second column first, and on the first column next. Example of desired output \n```\n[[ 2. 1. 2. 0.]\n [ 3. 1. 2. 0.]\n [ 4. 1. 2. 0.]\n [ 2. 2. 100. 0.]\n [ 3. 2. 4. 0.]\n [ 4. 2. 4. 0.]\n [ 2. 3. 100. 0.]\n [ 3. 3. 6. 0.]\n [ 4. 3. 6. 0.]]\n```\nI'm aware of this answer, it works for sorting rows on a single column. I tried sorting on the second column, since the first one is already sorted, but it's not enough. On occasion, the first column gets reordered too, badly. \n```\nnew_data = my_data[my_data[:, 1].argsort()]\nprint(new_data)\n\n#output\n[[ 2. 1. 2. 0.]\n [ 4. 1. 2. 0.] #ouch\n [ 3. 1. 2. 0.] #ouch\n [ 2. 2. 100. 0.]\n [ 3. 2. 4. 0.]\n [ 4. 2. 4. 0.]\n [ 2. 3. 100. 0.]\n [ 3. 3. 6. 0.]\n [ 4. 3. 6. 0.]]\n```\nI've also checked this question The answer mentions The problem here is that np.lexsort or np.sort do not work on arrays of dtype object. To get around that problem, you could sort the rows_list before creating order_list: \n```\nimport operator\nrows_list.sort(key=operator.itemgetter(0,1,2))\n```\nBut I there is no key parameter in the sort function of type ndarray. And merging fields is not an alternative in my case. Also, I don't have a header, so, if I try to sort using the order parameter, I get an error. \n```\nValueError: Cannot specify order when the array has no fields.\n```\nI'd rather sort in place or at least obtain a result of the same type ndarray. Then I want to save it to a file. 
How do I do this, without messing the datatypes?","response":"numpy ndarray sort by the 1st, 2nd or 3rd column: \n```\n\n```python\na = np.array([[1,30,200], [2,20,300], [3,10,100]])\n```\n\n```python\na\n#Output\n#array([[ 1, 30, 200], \n# [ 2, 20, 300], \n# [ 3, 10, 100]])\n```\n\n```python\na[a[:,2].argsort()] #sort by the 3rd column ascending\n#Output\n#array([[ 3, 10, 100],\n# [ 1, 30, 200],\n# [ 2, 20, 300]])\n```\n\n```python\na[a[:,2].argsort()][::-1] #sort by the 3rd column descending\n#Output\n#array([[ 2, 20, 300],\n# [ 1, 30, 200],\n# [ 3, 10, 100]])\n```\n\n```python\na[a[:,1].argsort()] #sort by the 2nd column ascending\n#Output\n#array([[ 3, 10, 100],\n# [ 2, 20, 300],\n# [ 1, 30, 200]])\n#```\n#To explain what is going on here: argsort() is passing back an array containing integer sequence of its parent: https:\/\/docs.scipy.org\/doc\/numpy\/reference\/generated\/numpy.argsort.html \n#```\n```\n\n```python\nx = np.array([15, 30, 4, 80, 6])\n```\n\n```python\nnp.argsort(x)\n#Output\n#array([2, 4, 0, 1, 3])\n#```\n#Sort by column 1, then by column 2 then 3: according to the doc, The last column is the primary sort key. \n#```\n```\n\n```python\na = np.array([[2,30,200], [1,30,200], [1,10,200]])\n```\n\n```python\na\n#Output\n#array([[ 2, 30, 200],\n# [ 1, 30, 200],\n# [ 1, 10, 200]])\n```\n\n```python\na[np.lexsort((a[:,2], a[:,1],a[:,0]))]\n#Output\n#array([[ 1, 10, 200],\n# [ 1, 30, 200],\n# [ 2, 30, 200]])\n#```\n#Same as above but reversed: \n#```\n```\n\n```python\na[np.lexsort((a[:,2], a[:,1],a[:,0]))][::-1]\n#Output\n#array([[ 2 30 200]\n# [ 1 30 200]\n# [ 1 10 200]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29352511\/numpy-sort-ndarray-on-multiple-columns","best_answers_votes":33,"tags":["python","arrays","sorting","numpy"],"question_length":2370,"response_length":1425,"tag_count":4} {"question":"why numpy.ndarray is object is not callable in my simple for python loop I loaded a text file containing a two column matrix (e.g. below) \n```\n[ 1 3\n 2 4\n 3 5 \n 2 0]\n```\nMy calculation is just to sum each row i.e. 1+3, 2+4, 3+5 and 2+0. I am using the below code: \n```\ndata=np.loadtxt(fname=\"textfile.txt\")## to load the above two column\nxy= data\nfor XY in xy:\n i=0 \n Z=XY(i,0)+XY(i,1)\n i=i+1 \n print (Z)\n```\nBut I received an error saying numpy.ndarray object is not callable. Why does this happen? How can I do this simple calculation? Thanks.","response":"The error TypeError: 'numpy.ndarray' object is not callable means that you tried to call a numpy array as a function. Use \n```\nZ=XY[0]+XY[1]\n```\nInstead of \n```\nZ=XY(i,0)+XY(i,1)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37222989\/why-numpy-ndarray-is-object-is-not-callable-in-my-simple-for-python-loop","best_answers_votes":54,"tags":["python","numpy"],"question_length":570,"response_length":182,"tag_count":2} {"question":"Creating your own contour in opencv using python I have a set of boundary points of an object. I want to draw it using opencv as contour. I have no idea that how to convert my points to contour representation. To the same contour representation which is obtained by following call \n```\ncontours,_ = cv2.findContours(image,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)\n```\nAny ideas? 
Thanks","response":"By looking at the format of the contours I would think something like this should be sufficient: \n```\ncontours = [numpy.array([[1,1],[10,50],[50,50]], dtype=numpy.int32) , numpy.array([[99,99],[99,60],[60,99]], dtype=numpy.int32)]\n```\nThis small program gives an running example: \n```\nimport numpy\nimport cv2\n\ncontours = [numpy.array([[1,1],[10,50],[50,50]], dtype=numpy.int32) , numpy.array([[99,99],[99,60],[60,99]], dtype=numpy.int32)]\n\ndrawing = numpy.zeros([100, 100],numpy.uint8)\nfor cnt in contours:\n cv2.drawContours(drawing,[cnt],0,(255,255,255),2)\n\ncv2.imshow('output',drawing)\ncv2.waitKey(0)\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14161331\/creating-your-own-contour-in-opencv-using-python","best_answers_votes":33,"tags":["python","numpy","opencv","image-processing","contour"],"question_length":382,"response_length":609,"tag_count":5} {"question":"Convert a 2d matrix to a 3d one hot matrix numpy I have np matrix and I want to convert it to a 3d array with one hot encoding of the elements as third dimension. Is there a way to do with without looping over each row eg \n```\na=[[1,3],\n [2,4]]\n```\nshould be made into \n```\nb=[[1,0,0,0], [0,0,1,0],\n [0,1,0,0], [0,0,0,1]]\n```","response":"Approach #1 Here's a cheeky one-liner that abuses broadcasted comparison - \n```\n(np.arange(a.max()) == a[...,None]-1).astype(int)\n```\nSample run - \n```\n\n```python\na\n#Output\n#array([[1, 7, 5, 3],\n# [2, 4, 1, 4]])\n```\n\n```python\n(np.arange(a.max()) == a[...,None]-1).astype(int)\n#Output\n#array([[[1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 0, 0, 1],\n# [0, 0, 0, 0, 1, 0, 0],\n# [0, 0, 1, 0, 0, 0, 0]],\n#\n# [[0, 1, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0],\n# [1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0]]])\n#```\n#For 0-based indexing, it would be - \n#```\n```\n\n```python\n(np.arange(a.max()+1) == a[...,None]).astype(int)\n#Output\n#array([[[0, 1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 0, 0, 0, 1],\n# [0, 0, 0, 0, 0, 1, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0, 0]],\n#\n# [[0, 0, 1, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 1, 0, 0, 0],\n# [0, 1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 1, 0, 0, 0]]])\n#```\n#If the one-hot enconding is to cover for the range of values ranging from the minimum to the maximum values, then offset by the minimum value and then feed it to the proposed method for 0-based indexing. This would be applicable for rest of the approaches discussed later on in this post as well. Here's a sample run on the same - \n#```\n```\n\n```python\na\n#Output\n#array([[ 6, 12, 10, 8],\n# [ 7, 9, 6, 9]])\n```\n\n```python\na_off = a - a.min() # feed a_off to proposed approaches\n```\n\n```python\n(np.arange(a_off.max()+1) == a_off[...,None]).astype(int)\n#Output\n#array([[[1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 0, 0, 1],\n# [0, 0, 0, 0, 1, 0, 0],\n# [0, 0, 1, 0, 0, 0, 0]],\n#\n# [[0, 1, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0],\n# [1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0]]])\n#```\n#If you are okay with a boolean array with True for 1's and False for 0's, you can skip the .astype(int) conversion. Approach #2 We can also initialize a zeros arrays and index into the output with advanced-indexing. 
Thus, for 0-based indexing, we would have - \n#```\n#def onehot_initialization(a):\n# ncols = a.max()+1\n# out = np.zeros(a.shape + (ncols,), dtype=int)\n# out[all_idx(a, axis=2)] = 1\n# return out\n#```\n#Helper func - \n#```\n## https:\/\/stackoverflow.com\/a\/46103129\/ @Divakar\n#def all_idx(idx, axis):\n# grid = np.ogrid[tuple(map(slice, idx.shape))]\n# grid.insert(axis, idx)\n# return tuple(grid)\n#```\n#This should be especially more performant when dealing with larger range of values. For 1-based indexing, simply feed in a-1 as the input. Approach #3 : Sparse matrix solution Now, if you are looking for sparse array as output and AFAIK since scipy's inbuilt sparse matrices support only 2D formats, you can get a sparse output that is a reshaped version of the output shown earlier with the first two axes merging and the third axis being kept intact. The implementation for 0-based indexing would look something like this - \n#```\n#from scipy.sparse import coo_matrix\n#def onehot_sparse(a):\n# N = a.size\n# L = a.max()+1\n# data = np.ones(N,dtype=int)\n# return coo_matrix((data,(np.arange(N),a.ravel())), shape=(N,L))\n#```\n#Again, for 1-based indexing, simply feed in a-1 as the input. Sample run - \n#```\n```\n\n```python\na\n#Output\n#array([[1, 7, 5, 3],\n# [2, 4, 1, 4]])\n```\n\n```python\nonehot_sparse(a).toarray()\n#Output\n#array([[0, 1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 0, 0, 0, 1],\n# [0, 0, 0, 0, 0, 1, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0, 0],\n# [0, 0, 1, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 1, 0, 0, 0],\n# [0, 1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 1, 0, 0, 0]])\n```\n\n```python\nonehot_sparse(a-1).toarray()\n#Output\n#array([[1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 0, 0, 0, 1],\n# [0, 0, 0, 0, 1, 0, 0],\n# [0, 0, 1, 0, 0, 0, 0],\n# [0, 1, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0],\n# [1, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 0, 0, 0]])\n#```\n#This would be much better than previous two approaches if you are okay with having sparse output. 
Runtime comparison for 0-based indexing Case #1 : \n#```\n```\n\n```python\na = np.random.randint(0,100,(100,100))\n```\n\n```python\n%timeit (np.arange(a.max()+1) == a[...,None]).astype(int)\n1000 loops, best of 3: 1.51 ms per loop\n```\n\n```python\n%timeit onehot_initialization(a)\n1000 loops, best of 3: 478 \u00b5s per loop\n```\n\n```python\n%timeit onehot_sparse(a)\n10000 loops, best of 3: 87.5 \u00b5s per loop\n```\n\n```python\n%timeit onehot_sparse(a).toarray()\n1000 loops, best of 3: 530 \u00b5s per loop\n```\nCase #2 : \n```\n```\n\n```python\na = np.random.randint(0,500,(100,100))\n```\n\n```python\n%timeit (np.arange(a.max()+1) == a[...,None]).astype(int)\n100 loops, best of 3: 8.51 ms per loop\n```\n\n```python\n%timeit onehot_initialization(a)\n100 loops, best of 3: 2.52 ms per loop\n```\n\n```python\n%timeit onehot_sparse(a)\n10000 loops, best of 3: 87.1 \u00b5s per loop\n```\n\n```python\n%timeit onehot_sparse(a).toarray()\n100 loops, best of 3: 2.67 ms per loop\n```\nSqueezing out best performance To squeeze out the best performance, we could modify approach #2 to use indexing on a 2D shaped output array and also use uint8 dtype for memory efficiency and that leading to much faster assignments, like so - \n```\ndef onehot_initialization_v2(a):\n ncols = a.max()+1\n out = np.zeros( (a.size,ncols), dtype=np.uint8)\n out[np.arange(a.size),a.ravel()] = 1\n out.shape = a.shape + (ncols,)\n return out\n```\nTimings - \n```\n```\n\n```python\na = np.random.randint(0,100,(100,100))\n```\n\n```python\n%timeit onehot_initialization(a)\n ...: %timeit onehot_initialization_v2(a)\n ...: \n1000 loops, best of 3: 474 \u00b5s per loop\n10000 loops, best of 3: 128 \u00b5s per loop\n```\n\n```python\na = np.random.randint(0,500,(100,100))\n```\n\n```python\n%timeit onehot_initialization(a)\n ...: %timeit onehot_initialization_v2(a)\n ...: \n100 loops, best of 3: 2.38 ms per loop\n1000 loops, best of 3: 213 \u00b5s per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36960320\/convert-a-2d-matrix-to-a-3d-one-hot-matrix-numpy","best_answers_votes":59,"tags":["python","numpy","vectorization","one-hot-encoding"],"question_length":329,"response_length":5753,"tag_count":4} {"question":"How to convert a pytorch tensor into a numpy array? How do I convert a torch tensor to numpy?","response":"copied from pytorch doc: \n```\na = torch.ones(5)\nprint(a)\n```\ntensor([1., 1., 1., 1., 1.]) \n```\nb = a.numpy()\nprint(b)\n```\n[1. 1. 1. 1. 1.] Following from the below discussion with @John: In case the tensor is (or can be) on GPU, or in case it (or it can) require grad, one can use \n```\nt.detach().cpu().numpy()\n```\nI recommend to uglify your code only as much as required.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/54268029\/how-to-convert-a-pytorch-tensor-into-a-numpy-array","best_answers_votes":44,"tags":["python","numpy","pytorch"],"question_length":93,"response_length":372,"tag_count":3} {"question":"Check if each element in a numpy array is in another array This problem seems easy but I cannot quite get a nice-looking solution. I have two numpy arrays (A and B), and I want to get the indices of A where the elements of A are in B and also get the indices of A where the elements are not in B. 
So, if \n```\nA = np.array([1,2,3,4,5,6,7])\nB = np.array([2,4,6])\n```\nCurrently I am using \n```\nC = np.searchsorted(A,B)\n```\nwhich takes advantage of the fact that A is in order, and gives me [1, 3, 5], the indices in A of the elements of B that are in A. This is great, but how do I get D = [0,2,4,6], the indices of elements of A that are not in B?","response":"searchsorted may give you a wrong answer if not every element of B is in A. You can use numpy.in1d: \n```\nA = np.array([1,2,3,4,5,6,7])\nB = np.array([2,4,6,8])\nmask = np.in1d(A, B)\nprint np.where(mask)[0]\nprint np.where(~mask)[0]\n```\noutput is: \n```\n[1 3 5]\n[0 2 4 6]\n```\nHowever in1d() uses sort, which is slow for large datasets. You can use pandas if your dataset is large: \n```\nimport pandas as pd\nnp.where(pd.Index(pd.unique(B)).get_indexer(A) >= 0)[0]\n```\nHere is the time comparison: \n```\nA = np.random.randint(0, 1000, 10000)\nB = np.random.randint(0, 1000, 10000)\n\n%timeit np.where(np.in1d(A, B))[0]\n%timeit np.where(pd.Index(pd.unique(B)).get_indexer(A) >= 0)[0]\n```\noutput: \n```\n100 loops, best of 3: 2.09 ms per loop\n1000 loops, best of 3: 594 \u00b5s per loop\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15939748\/check-if-each-element-in-a-numpy-array-is-in-another-array","best_answers_votes":44,"tags":["python","numpy"],"question_length":635,"response_length":767,"tag_count":2} {"question":"Concatenating column vectors using numpy arrays I'd like to concatenate 'column' vectors using numpy arrays but because numpy sees all arrays as row vectors by default, np.hstack and np.concatenate along any axis don't help (and neither did np.transpose as expected). \n```\na = np.array((0, 1))\nb = np.array((2, 1))\nc = np.array((-1, -1))\n\nnp.hstack((a, b, c))\n# array([ 0, 1, 2, 1, -1, -1]) ## Noooooo\nnp.reshape(np.hstack((a, b, c)), (2, 3))\n# array([[ 0, 1, 2], [ 1, -1, -1]]) ## Reshaping won't help\n```\nOne possibility (but too cumbersome) is \n```\nnp.hstack((a[:, np.newaxis], b[:, np.newaxis], c[:, np.newaxis]))\n# array([[ 0, 2, -1], [ 1, 1, -1]]) ##\n```\nAre there better ways?","response":"I believe numpy.column_stack should do what you want. Example: \n```\n\n```python\na = np.array((0, 1))\n```\n\n```python\nb = np.array((2, 1))\n```\n\n```python\nc = np.array((-1, -1))\n```\n\n```python\nnumpy.column_stack((a,b,c))\n#Output\n#array([[ 0, 2, -1],\n# [ 1, 1, -1]])\n```\nIt is essentially equal to \n```\n```\n\n```python\nnumpy.vstack((a,b,c)).T\n```\nthough. As it says in the documentation.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14741061\/concatenating-column-vectors-using-numpy-arrays","best_answers_votes":61,"tags":["python","vector","numpy","concatenation"],"question_length":691,"response_length":328,"tag_count":4} {"question":"How can I conditionally change the values in a numpy array taking into account nans? 
My array is a 2D matrix and it has numpy.nan values besides negative and positive values: \n```\n\n```python\narray\n#Output\n#array([[ nan, nan, nan, ..., -0.04891211,\n# nan, nan],\n# [ nan, nan, nan, ..., nan,\n# nan, nan],\n# [ nan, nan, nan, ..., nan,\n# nan, nan],\n# ..., \n# [-0.02510989, -0.02520096, -0.02669156, ..., nan,\n# nan, nan],\n# [-0.02725595, -0.02715945, -0.0286231 , ..., nan,\n# nan, nan],\n# [ nan, nan, nan, ..., nan,\n# nan, nan]], dtype=float32)\n```\n(There are positive numbers in the array, they just don't show in the preview.) And I want to replace all the positive numbers with a number and all the negative numbers with another number. How can I perform that using python\/numpy? (For the record, the matrix is the result of a geoimage, on which I want to perform a classification)","response":"The fact that you have np.nan in your array should not matter. Just use fancy indexing: \n```\nx[x>0] = new_value_for_pos\nx[x<0] = new_value_for_neg\n```\nIf you want to replace your np.nans: \n```\nx[np.isnan(x)] = something_not_nan\n```\nMore info on fancy indexing: a tutorial and the NumPy documentation.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12424824\/how-can-i-conditionally-change-the-values-in-a-numpy-array-taking-into-account-n","best_answers_votes":48,"tags":["python","numpy"],"question_length":1151,"response_length":299,"tag_count":2} {"question":"Integrating a multidimensional integral in scipy Motivation: I have a multidimensional integral, which for completeness I have reproduced below. It comes from the computation of the second virial coefficient when there is significant anisotropy: Here W is a function of all the variables. It is a known function, one which I can define a python function for. Programming Question: How do I get scipy to integrate this expression? I was thinking of chaining two triple quads (scipy.integrate.tplquad) together, but I'm worried about performance and accuracy. Is there a higher dimensional integrator in scipy, one that can handle an arbitrary number of nested integrals? If not, what is the best way to do this?","response":"With a higher-dimensional integral like this, monte carlo methods are often a useful technique - they converge on the answer as the inverse square root of the number of function evaluations, which is better for higher dimensions than you'll generally get out of even fairly sophisticated adaptive methods (unless you know something very specific about your integrand - symmetries that can be exploited, etc.)
The mcint package performs a monte carlo integration: running with a non-trivial W that is nonetheless integrable so we know the answer we get (note that I've truncated r to be from [0,1); you'll have to do some sort of log transform or something to get that semi-unbounded domain into something tractable for most numerical integrators): \n```\nimport mcint\nimport random\nimport math\n\ndef w(r, theta, phi, alpha, beta, gamma):\n return(-math.log(theta * beta))\n\ndef integrand(x):\n r = x[0]\n theta = x[1]\n alpha = x[2]\n beta = x[3]\n gamma = x[4]\n phi = x[5]\n\n k = 1.\n T = 1.\n ww = w(r, theta, phi, alpha, beta, gamma)\n return (math.exp(-ww\/(k*T)) - 1.)*r*r*math.sin(beta)*math.sin(theta)\n\ndef sampler():\n while True:\n r = random.uniform(0.,1.)\n theta = random.uniform(0.,2.*math.pi)\n alpha = random.uniform(0.,2.*math.pi)\n beta = random.uniform(0.,2.*math.pi)\n gamma = random.uniform(0.,2.*math.pi)\n phi = random.uniform(0.,math.pi)\n yield (r, theta, alpha, beta, gamma, phi)\n\n\ndomainsize = math.pow(2*math.pi,4)*math.pi*1\nexpected = 16*math.pow(math.pi,5)\/3.\n\nfor nmc in [1000, 10000, 100000, 1000000, 10000000, 100000000]:\n random.seed(1)\n result, error = mcint.integrate(integrand, sampler(), measure=domainsize, n=nmc)\n diff = abs(result - expected)\n\n print \"Using n = \", nmc\n print \"Result = \", result, \"estimated error = \", error\n print \"Known result = \", expected, \" error = \", diff, \" = \", 100.*diff\/expected, \"%\"\n print \" \"\n```\nRunning gives \n```\nUsing n = 1000\nResult = 1654.19633236 estimated error = 399.360391622\nKnown result = 1632.10498552 error = 22.0913468345 = 1.35354937522 %\n\nUsing n = 10000\nResult = 1634.88583778 estimated error = 128.824988953\nKnown result = 1632.10498552 error = 2.78085225405 = 0.170384397984 %\n\nUsing n = 100000\nResult = 1646.72936 estimated error = 41.3384733174\nKnown result = 1632.10498552 error = 14.6243744747 = 0.8960437352 %\n\nUsing n = 1000000\nResult = 1640.67189792 estimated error = 13.0282663003\nKnown result = 1632.10498552 error = 8.56691239895 = 0.524899591322 %\n\nUsing n = 10000000\nResult = 1635.52135088 estimated error = 4.12131562436\nKnown result = 1632.10498552 error = 3.41636536248 = 0.209322647304 %\n\nUsing n = 100000000\nResult = 1631.5982799 estimated error = 1.30214644297\nKnown result = 1632.10498552 error = 0.506705620147 = 0.0310461413109 %\n```\nYou could greatly speed this up by vectorizing the random number generation, etc. 
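A minimal sketch of that vectorization (an addition, not from the original answer; it keeps the same W and the k = T = 1 assumption, and uses numpy's default_rng, which postdates the answer): \n```python\nimport numpy as np\n\ndef mc_integrate(n, seed=1):\n    # Draw all samples at once instead of one tuple per loop iteration\n    rng = np.random.default_rng(seed)\n    r = rng.uniform(0.0, 1.0, n)\n    theta = rng.uniform(0.0, 2.0 * np.pi, n)\n    beta = rng.uniform(0.0, 2.0 * np.pi, n)\n    # alpha, gamma and phi do not enter this particular W, so they are skipped here\n    ww = -np.log(theta * beta)\n    vals = (np.exp(-ww) - 1.0) * r * r * np.sin(beta) * np.sin(theta)\n    domainsize = (2.0 * np.pi) ** 4 * np.pi\n    # Estimate = volume * mean; crude error estimate = volume * std \/ sqrt(n)\n    return domainsize * vals.mean(), domainsize * vals.std() \/ np.sqrt(n)\n\nresult, error = mc_integrate(10**7)\nprint(result, error)\n```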
Of course, you can chain the triple integrals as you suggest: \n```\nimport numpy\nimport scipy.integrate\nimport math\n\ndef w(r, theta, phi, alpha, beta, gamma):\n    return(-math.log(theta * beta))\n\ndef integrand(phi, alpha, gamma, r, theta, beta):\n    ww = w(r, theta, phi, alpha, beta, gamma)\n    k = 1.\n    T = 1.\n    return (math.exp(-ww\/(k*T)) - 1.)*r*r*math.sin(beta)*math.sin(theta)\n\n# limits of integration\n\ndef zero(x, y=0):\n    return 0.\n\ndef one(x, y=0):\n    return 1.\n\ndef pi(x, y=0):\n    return math.pi\n\ndef twopi(x, y=0):\n    return 2.*math.pi\n\n# integrate over phi [0, Pi), alpha [0, 2 Pi), gamma [0, 2 Pi)\ndef secondIntegrals(r, theta, beta):\n    res, err = scipy.integrate.tplquad(integrand, 0., 2.*math.pi, zero, twopi, zero, pi, args=(r, theta, beta))\n    return res\n\n# integrate over r [0, 1), beta [0, 2 Pi), theta [0, 2 Pi)\ndef integral():\n    return scipy.integrate.tplquad(secondIntegrals, 0., 2.*math.pi, zero, twopi, zero, one)\n\nexpected = 16*math.pow(math.pi,5)\/3.\nresult, err = integral()\ndiff = abs(result - expected)\n\nprint \"Result = \", result, \" estimated error = \", err\nprint \"Known result = \", expected, \" error = \", diff, \" = \", 100.*diff\/expected, \"%\"\n```\nwhich is slow but gives very good results for this simple case. Which is better is going to come down to how complicated your W is and what your accuracy requirements are. Simple (fast to evaluate) W with high accuracy will push you to this sort of method; complicated (slow to evaluate) W with moderate accuracy requirements will push you towards MC techniques. \n```\nResult = 1632.10498552 estimated error = 3.59054059995e-11\nKnown result = 1632.10498552 error = 4.54747350886e-13 = 2.7862628625e-14 %\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14071704\/integrating-a-multidimensional-integral-in-scipy","best_answers_votes":36,"tags":["python","math","numpy","scipy","scientific-computing"],"question_length":710,"response_length":4671,"tag_count":5} {"question":"Is there a better way of making numpy.argmin() ignore NaN values I want to get the index of the min value of a numpy array that contains NaNs and I want them ignored \n```\n\n```python\na = array([ nan, 2.5, 3., nan, 4., 5.]) \n```\n\n```python\na \n#Output\n#array([ NaN, 2.5, 3. , NaN, 4. , 5. ])\n```\nif I run argmin, it returns the index of the first NaN \n```\n```\n\n```python\na.argmin() \n#Output\n#0\n```\nI substitute NaNs with Infs and then run argmin \n```\n```\n\n```python\na[isnan(a)] = Inf \n```\n\n```python\na \n#Output\n#array([ Inf, 2.5, 3. , Inf, 4. , 5. ]) \n```\n\n```python\na.argmin() \n#Output\n#1\n```\nMy dilemma is the following: I'd rather not change NaNs to Infs and then back after I'm done with argmin (since NaNs have a meaning later on in the code). Is there a better way to do this? There is also the question of what the result should be if all of the original values of a are NaN. In my implementation the answer is 0","response":"Sure! Use nanargmin: \n```\nimport numpy as np\na = np.array([ np.nan, 2.5, 3., np.nan, 4., 5.])\nprint(np.nanargmin(a))\n# 1\n```\nThere is also nansum, nanmax, nanargmax, and nanmin. In scipy.stats, there is nanmean and nanmedian.
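On the all-NaN edge case raised in the question (an addition, not in the original answer): np.nanargmin raises a ValueError when every value is NaN, so a small guard reproduces the questioner's convention of returning 0. \n```python\nimport numpy as np\n\ndef nanargmin_or_zero(a):\n    # np.nanargmin raises 'All-NaN slice encountered' if nothing is finite\n    try:\n        return np.nanargmin(a)\n    except ValueError:\n        return 0\n\nprint(nanargmin_or_zero(np.array([np.nan, 2.5, 3.0, np.nan, 4.0, 5.0])))  # 1\nprint(nanargmin_or_zero(np.array([np.nan, np.nan])))                       # 0\n```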
For more ways to ignore nans, check out masked arrays.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2821072\/is-there-a-better-way-of-making-numpy-argmin-ignore-nan-values","best_answers_votes":59,"tags":["arrays","numpy","python","nan"],"question_length":842,"response_length":372,"tag_count":4} {"question":"Merge two numpy arrays I am trying to merge two arrays with the same number of rows. Input: \n```\nfirst = [[650001.88, 300442.2, 18.73, 0.575, 650002.094, 300441.668, 18.775],\n [650001.96, 300443.4, 18.7, 0.65, 650002.571, 300443.182, 18.745],\n [650002.95, 300442.54, 18.82, 0.473, 650003.056, 300442.085, 18.745]]\n\nsecond = [[1],\n [2],\n [3]]\n```\nMy expected output: \n```\nfinal = [[650001.88, 300442.2, 18.73, 0.575, 650002.094, 300441.668, 18.775, 1],\n [650001.96, 300443.4, 18.7, 0.65, 650002.571, 300443.182, 18.745, 2],\n [650002.95, 300442.54, 18.82, 0.473, 650003.056, 300442.085, 18.745, 3]]\n```\nTo do that I created a simple loop: \n```\nfor i in first:\n for j in second:\n final += np.append(j, i)\n```\nI've got a feeling that I'm missing something. First of all, my loop is extremely slow. Secondly, my data is quite big: I've got more than 2 million rows to loop over. So I tried to find a faster way, for example with this code: \n```\nfinal = [np.append(i, second[0]) for i in first]\n```\nIt works far faster than the previous loop, but it's appending only the first value of the second array. Can you help me?","response":"Use np.array and then np.concatenate, \n```\nimport numpy as np\n\nfirst = np.array([[650001.88, 300442.2, 18.73, 0.575, \n 650002.094, 300441.668, 18.775],\n [650001.96, 300443.4, 18.7, 0.65, \n 650002.571, 300443.182, 18.745],\n [650002.95, 300442.54, 18.82, 0.473, \n 650003.056, 300442.085, 18.745]])\n\nsecond = np.array([[1],\n [2],\n [3]])\n\nnp.concatenate((first, second), axis=1)\n```\nWhere axis=1 means that we want to concatenate horizontally. That works for me","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/43561622\/merge-two-numpy-arrays","best_answers_votes":37,"tags":["python","arrays","python-3.x","numpy","merge"],"question_length":1181,"response_length":594,"tag_count":5} {"question":"Create block diagonal numpy array from a given numpy array I have a 2-dimensional numpy array with an equal number of columns and rows. I would like to arrange them into a bigger array having the smaller ones on the diagonal. It should be possible to specify how often the starting matrix should be on the diagonal.
For example: \n```\na = numpy.array([[5, 7], \n [6, 3]])\n```\nSo if I wanted this array 2 times on the diagonal the desired output would be: \n```\narray([[5, 7, 0, 0], \n [6, 3, 0, 0], \n [0, 0, 5, 7], \n [0, 0, 6, 3]])\n```\nFor 3 times: \n```\narray([[5, 7, 0, 0, 0, 0], \n [6, 3, 0, 0, 0, 0], \n [0, 0, 5, 7, 0, 0], \n [0, 0, 6, 3, 0, 0],\n [0, 0, 0, 0, 5, 7],\n [0, 0, 0, 0, 6, 3]])\n```\nIs there a fast way to implement this with numpy methods and for arbitrary sizes of the starting array (still considering the starting array to have the same number of rows and columns)?","response":"Approach #1 Classic case of numpy.kron - \n```\nnp.kron(np.eye(r,dtype=int),a) # r is number of repeats\n```\nSample run - \n```\n\n```python\na\n#Output\n#array([[1, 2, 3],\n# [3, 4, 5]])\n```\n\n```python\nr = 3 # number of repeats\n```\n\n```python\nnp.kron(np.eye(r,dtype=int),a)\n#Output\n#array([[1, 2, 3, 0, 0, 0, 0, 0, 0],\n# [3, 4, 5, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 2, 3, 0, 0, 0],\n# [0, 0, 0, 3, 4, 5, 0, 0, 0],\n# [0, 0, 0, 0, 0, 0, 1, 2, 3],\n# [0, 0, 0, 0, 0, 0, 3, 4, 5]])\n```\nApproach #2 Another efficient one with diagonal-viewed-array-assignment - \n```\ndef repeat_along_diag(a, r):\n    m,n = a.shape\n    out = np.zeros((r,m,r,n), dtype=a.dtype)\n    diag = np.einsum('ijik->ijk',out)\n    diag[:] = a\n    return out.reshape(-1,n*r)\n```\nSample run - \n```\n```\n\n```python\nrepeat_along_diag(a,3)\n#Output\n#array([[1, 2, 3, 0, 0, 0, 0, 0, 0],\n# [3, 4, 5, 0, 0, 0, 0, 0, 0],\n# [0, 0, 0, 1, 2, 3, 0, 0, 0],\n# [0, 0, 0, 3, 4, 5, 0, 0, 0],\n# [0, 0, 0, 0, 0, 0, 1, 2, 3],\n# [0, 0, 0, 0, 0, 0, 3, 4, 5]])\n#```\n","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/33508322\/create-block-diagonal-numpy-array-from-a-given-numpy-array","best_answers_votes":30,"tags":["python","arrays","numpy"],"question_length":940,"response_length":1037,"tag_count":3} {"question":"Techniques for working with large Numpy arrays? [duplicate] This question already has answers here: Very large matrices using Python and NumPy (10 answers) Closed 5 years ago. There are times when you have to perform many intermediate operations on one, or more, large Numpy arrays. This can quickly result in MemoryErrors. In my research so far, I have found that Pickling (Pickle, CPickle, Pytables etc.) and gc.collect() are ways to mitigate this. I was wondering if there are any other techniques experienced programmers use when dealing with large quantities of data (other than removing redundancies in your strategy\/code, of course). Also, if there's one thing I'm sure of, it's that nothing is free. With some of these techniques, what are the trade-offs (i.e., speed, robustness, etc.)?","response":"I feel your pain... You sometimes end up storing several times the size of your array in values you will later discard. When processing one item in your array at a time, this is irrelevant, but can kill you when vectorizing. I'll use an example from work for illustration purposes. I recently coded the algorithm described here using numpy. It is a color map algorithm, which takes an RGB image, and converts it into a CMYK image. The process, which is repeated for every pixel, is as follows: Use the most significant 4 bits of every RGB value, as indices into a three-dimensional look up table. This determines the CMYK values for the 8 vertices of a cube within the LUT.
Use the least significant 4 bits of every RGB value to interpolate within that cube, based on the vertex values from the previous step. The most efficient way of doing this requires computing 16 arrays of uint8s the size of the image being processed. For a 24-bit RGB image that is equivalent to needing storage of 6x the size of the image to process it. A couple of things you can do to handle this: 1. Divide and conquer Maybe you cannot process a 1,000x1,000 array in a single pass. But if you can do it with a python for loop iterating over 10 arrays of 100x1,000, it is still going to beat a python iterator over 1,000,000 items by a very wide margin! It's going to be slower, yes, but not by as much. 2. Cache expensive computations This relates directly to my interpolation example above, and is harder to come across, although worth keeping an eye open for it. Because I am interpolating on a three-dimensional cube with 4 bits in each dimension, there are only 16x16x16 possible outcomes, which can be stored in 16 arrays of 16x16x16 bytes. So I can precompute them and store them using 64KB of memory, and look up the values one by one for the whole image, rather than redoing the same operations for every pixel at huge memory cost. This already pays off for images as small as 64x64 pixels, and basically allows processing images with 6x the number of pixels without having to subdivide the array. 3. Use your dtypes wisely If your intermediate values can fit in a single uint8, don't use an array of int32s! This can turn into a nightmare of mysterious errors due to silent overflows, but if you are careful, it can provide a big saving of resources.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14351255\/techniques-for-working-with-large-numpy-arrays","best_answers_votes":28,"tags":["python","arrays","optimization","memory-management","numpy"],"question_length":792,"response_length":2340,"tag_count":5} {"question":"Select certain rows (condition met), but only some columns in Python\/Numpy I have a numpy array with 4 columns and want to select columns 1, 3 and 4, where the value of the second column meets a certain condition (i.e. a fixed value). I tried to first select only the rows, but with all 4 columns via: \n```\nI = A[A[:,1] == i]\n```\nwhich works. Then I further tried (similarly to matlab which I know very well): \n```\nI = A[A[:,1] == i, [0,2,3]]\n```\nwhich doesn't work. How to do it? EXAMPLE DATA: \n```\n\n```python\nA = np.array([[1,2,3,4],[6,1,3,4],[3,2,5,6]])\nprint A\n#Output\n# [[1 2 3 4]\n# [6 1 3 4]\n# [3 2 5 6]]\n```\n\n```python\ni = 2\n# I want to get the columns 1, 3 and 4\n# for every row which has the value i in the second column.\n# In this case, this would be row 1 and 3 with columns 1, 3 and 4:\n# [[1 3 4]\n# [3 5 6]]\n```\nI am now currently using this: \n```\nI = A[A[:,1] == i]\nI = I[:, [0,2,3]]\n```\nBut I thought that there had to be a nicer way of doing it...
(I am used to MATLAB)","response":"```\n\n```python\na = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12]])\n```\n\n```python\na\n#Output\n#array([[ 1, 2, 3, 4],\n# [ 5, 6, 7, 8],\n# [ 9, 10, 11, 12]])\n```\n\n```python\na[a[:,0] > 3] # select rows where first column is greater than 3\n#Output\n#array([[ 5, 6, 7, 8],\n# [ 9, 10, 11, 12]])\n```\n\n```python\na[a[:,0] > 3][:,np.array([True, True, False, True])] # select columns\n#Output\n#array([[ 5, 6, 8],\n# [ 9, 10, 12]])\n```\n\n# fancier equivalent of the previous\n\n```python\na[np.ix_(a[:,0] > 3, np.array([True, True, False, True]))]\n#Output\n#array([[ 5, 6, 8],\n# [ 9, 10, 12]])\n```\nFor an explanation of the obscure np.ix_(), see https:\/\/stackoverflow.com\/a\/13599843\/4323 Finally, we can simplify by giving the list of column numbers instead of the tedious boolean mask: \n```\n```\n\n```python\na[np.ix_(a[:,0] > 3, (0,1,3))]\n#Output\n#array([[ 5, 6, 8],\n# [ 9, 10, 12]])\n#```\n","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23911875\/select-certain-rows-condition-met-but-only-some-columns-in-python-numpy","best_answers_votes":38,"tags":["python","numpy"],"question_length":974,"response_length":801,"tag_count":2} {"question":"What is the difference between numpy.fft.fft and numpy.fft.rfft? The documentation says that np.fft.fft does this: Compute the one-dimensional discrete Fourier Transform. and np.fft.rfft does this: Compute the one-dimensional discrete Fourier Transform for real input. I also see that for my data (audio data, real valued), np.fft.fft returns a 2 dimensional array of shape (number_of_frames, fft_length) containing complex numbers. np.fft.rfft returns a 2 dimensional array of shape (number_of_frames, ((fft_length\/2) + 1)) containing complex numbers. I am led to believe that this only contains nonredundant FFT bins. Can someone explain in more depth the difference between the commands and why the shape of the returned array is different? Thank you.","response":"The reason is explained in the docs: When the DFT is computed for purely real input, the output is Hermitian-symmetric, i.e. the negative frequency terms are just the complex conjugates of the corresponding positive-frequency terms, and the negative-frequency terms are therefore redundant. This function does not compute the negative frequency terms, and the length of the transformed axis of the output is therefore n\/\/2 + 1. As a consequence, the algorithm is optimized and rfft is twice as fast. Furthermore, the spectrum is easier to plot: \n```\n\n```python\ns=abs(sin(arange(0,2**13,3)))\n```\n\n```python\nsp=rfft(s)\n```\n\n```python\nplot(abs(sp))\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/52387673\/what-is-the-difference-between-numpy-fft-fft-and-numpy-fft-rfft","best_answers_votes":29,"tags":["python","arrays","numpy","signal-processing","fft"],"question_length":758,"response_length":475,"tag_count":5} {"question":"Cheapest way to get a numpy array into C-contiguous order? The following produces a C-contiguous numpy array: \n```\nimport numpy\n\na = numpy.ones((1024,1024,5))\n```\nNow if I slice it, the result may no longer be the same. For example: \n```\nbn = a[:, :, n]\n```\nwith n from 0 to 4. My problem is that I need bn to be C-contiguous, and I need to do this for many instances of a.
I just need each bn once, and want to avoid doing \n```\nbn = bn.copy(order='C')\n```\nI also don't want to rewrite my code such that \n```\na = numpy.ones((5,1024,1024))\n```\nIs there a faster, cheaper way to get bn than doing the copy? Background: I want to hash each slice of every a, using \n```\nimport hashlib\n\nhashlib.sha1(a[:, :, n]).hexdigest()\n```\nUnfortunately, this will throw a ValueError, complaining about the order. So if there is another fast way to get the hash I want, I'd also use it.","response":"This is a standard operation when interfacing numpy with C. Have a look at numpy.ascontiguousarray: x=numpy.ascontiguousarray(x) is the proper way of dealing with it. Use numpy.asfortranarray if you need fortran order. As mentioned, the function will copy if necessary. So there is no way around it. You can try rollaxis before your operation, such that the short axis is the first axis. This gives you a view on the array: \n```\n\n```python\nA=np.random.rand(1024,1024,5)\n```\n\n```python\nB=np.rollaxis(A,2)\n```\n\n```python\nB.shape\n#Output\n#(5, 1024, 1024)\n```\n\n```python\nB.flags\n#Output\n#C_CONTIGUOUS : False\n# F_CONTIGUOUS : False\n# OWNDATA : False\n# WRITEABLE : True\n# ALIGNED : True\n# UPDATEIFCOPY : False\n```\n\n```python\nA.flags\n#Output\n#C_CONTIGUOUS : True\n# F_CONTIGUOUS : False\n# OWNDATA : True\n# WRITEABLE : True\n# ALIGNED : True\n# UPDATEIFCOPY : False\n```\nSo rollaxis does not solve this either.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29947639\/cheapest-way-to-get-a-numpy-array-into-c-contiguous-order","best_answers_votes":23,"tags":["python","arrays","numpy"],"question_length":871,"response_length":867,"tag_count":3} {"question":"How to test if all rows are equal in a numpy In numpy, is there a nice idiomatic way of testing if all rows are equal in a 2d array? I can do something like \n```\nnp.all([np.array_equal(M[0], M[i]) for i in xrange(1,len(M))])\n```\nThis seems to mix python lists with numpy arrays which is ugly and presumably also slow. Is there a nicer\/neater way?","response":"One way is to check that every row of the array arr is equal to its first row arr[0]: \n```\n(arr == arr[0]).all()\n```\nUsing equality == is fine for integer values, but if arr contains floating point values you could use np.isclose instead to check for equality within a given tolerance: \n```\nnp.isclose(a, a[0]).all()\n```\nIf your array contains NaN and you want to avoid the tricky NaN != NaN issue, you could combine this approach with np.isnan: \n```\n(np.isclose(a, a[0]) | np.isnan(a)).all()\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26163727\/how-to-test-if-all-rows-are-equal-in-a-numpy","best_answers_votes":33,"tags":["python","arrays","numpy"],"question_length":346,"response_length":496,"tag_count":3} {"question":"How do I read numpy source? I built it myself on Python 3.3, but I can't for the life of me find the class definition of numpy.array(). I've looked all through the code and even found the core C files, but where is the dang array class?? Can anyone tell me what directory to look in, or how to find out from the python shell?","response":"np.array is not a class itself, it's just a convenience function to create an np.ndarray. ndarray is aliased to multiarray, which is implemented in C code (an extension module in a .so\/.pyd file, compiled code). You can start looking at the ndarray interfaces here in numeric.py.
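A small aside (an addition, not from the original answer): from the Python shell you can at least confirm that np.array is compiled code rather than a Python class, and see where the package lives on disk. \n```python\nimport numpy as np\n\nprint(np.__file__)     # filesystem location of the installed package\nprint(type(np.array))  # builtin_function_or_method, i.e. implemented in C\nprint(np.ndarray)      # the actual array class that the C extension exposes\n```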
Most of the meat of the implementation is in C code, here in multiarray. array() is implemented in _core\/src\/multiarray\/methods.c in array_getarray().","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16682672\/how-do-i-read-numpy-source","best_answers_votes":45,"tags":["python","class","numpy","dir"],"question_length":325,"response_length":428,"tag_count":4} {"question":"Downsampling a 2d numpy array in python I'm self-learning python and have found a problem which requires down-sampling a feature vector. I need some help understanding how to down-sample an array. In the array, each row represents an image by being numbers from 0 to 255. I was wondering how you apply down-sampling to the array? I don't want to use scikit-learn because I want to understand how to apply down-sampling. If you could explain down-sampling too, that would be amazing, thanks. The feature vector is 400x250.","response":"If with downsampling you mean something like this, you can simply slice the array. For a 1D example: \n```\nimport numpy as np\na = np.arange(1,11,1)\nprint(a)\nprint(a[::3])\n```\nThe last line is equivalent to: \n```\nprint(a[0:a.size:3])\n```\nwith the slicing notation as start:stop:step. Result: [ 1 2 3 4 5 6 7 8 9 10] [ 1 4 7 10] For a 2D array the idea is the same: \n```\nb = np.arange(0,100)\nc = b.reshape([10,10])\nprint(c[::3,::3])\n```\nThis gives you, in both dimensions, every third item from the original array. Or, if you only want to down sample a single dimension: \n```\nd = np.zeros((400,250))\nprint(d.shape)\ne = d[::10,:]\nprint(e.shape)\n```\n(400, 250) (40, 250) There are lots of other examples in the Numpy manual.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/34231244\/downsampling-a-2d-numpy-array-in-python","best_answers_votes":43,"tags":["python","arrays","numpy"],"question_length":507,"response_length":715,"tag_count":3} {"question":"Check for None in pandas dataframe I would like to find where None is found in the dataframe. \n```\npd.DataFrame([None,np.nan]).isnull()\nOUT: \n 0\n0 True\n1 True\n```\nisnull() finds both numpy NaN and None values. I only want the None values and not numpy NaN. Is there an easier way to do that without looping through the dataframe? Edit: After reading the comments, I realized that the dataframe in my work also includes strings, so the None values were not coerced to numpy NaN. So the answer given by Pisdom works.","response":"If you want to get True\/False for each line, you can use the following code.
Here is an example using the following DataFrame: \n```py\ndf = pd.DataFrame([[None, 3], [\"\", np.nan]])\n\ndf\n# 0 1\n#0 None 3.0\n#1 NaN\n```\nHow to check None Available: .isnull() \n```py\n\n```python\ndf[0].isnull()\n#Output\n#0 True\n#1 False\n#Name: 0, dtype: bool\n```\nAvailable: .apply == or is None \n```py\n```\n\n```python\ndf[0].apply(lambda x: x == None)\n#Output\n#0 True\n#1 False\n#Name: 0, dtype: bool\n```\n\n```python\ndf[0].apply(lambda x: x is None)\n#Output\n#0 True\n#1 False\n#Name: 0, dtype: bool\n```\nAvailable: .values == None \n```py\n```\n\n```python\ndf[0].values == None\n#Output\n#array([ True, False])\n```\nUnavailable: is or == \n```py\n```\n\n```python\ndf[0] is None\n#Output\n#False\n```\n\n```python\ndf[0] == None\n#Output\n#0 False\n#1 False\n#Name: 0, dtype: bool\n```\nUnavailable: .values is None \n```py\n```\n\n```python\ndf[0].values is None\n#Output\n#False\n```\nHow to check np.nan Available: .isnull() \n```py\n```\n\n```python\ndf[1].isnull()\n#Output\n#0 False\n#1 True\n#Name: 1, dtype: bool\n```\nAvailable: np.isnan \n```py\n```\n\n```python\nnp.isnan(df[1])\n#Output\n#0 False\n#1 True\n#Name: 1, dtype: bool\n```\n\n```python\nnp.isnan(df[1].values)\n#Output\n#array([False, True])\n```\n\n```python\ndf[1].apply(lambda x: np.isnan(x))\n#Output\n#0 False\n#1 True\n#Name: 1, dtype: bool\n```\nUnavailable: is or == np.nan \n```py\n```\n\n```python\ndf[1] is np.nan\n#Output\n#False\n```\n\n```python\ndf[1] == np.nan\n#Output\n#0 False\n#1 False\n#Name: 1, dtype: bool\n```\n\n```python\ndf[1].values is np.nan\n#Output\n#False\n```\n\n```python\ndf[1].values == np.nan\n#Output\n#array([False, False])\n```\n\n```python\ndf[1].apply(lambda x: x is np.nan)\n#Output\n#0 False\n#1 False\n#Name: 1, dtype: bool\n```\n\n```python\ndf[1].apply(lambda x: x == np.nan)\n#Output\n#0 False\n#1 False\n#Name: 1, dtype: bool\n#```\n","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45271309\/check-for-none-in-pandas-dataframe","best_answers_votes":29,"tags":["python","pandas","numpy","nan"],"question_length":515,"response_length":1556,"tag_count":4} {"question":"Numpy mean AND variance from single function? Using Numpy\/Python, is it possible to return the mean AND variance from a single function call? I know that I can do them separately, but the mean is required to calculate the sample standard deviation. So if I use separate functions to get the mean and variance I am adding unnecessary overhead.
I have tried looking at the numpy docs here (http:\/\/docs.scipy.org\/doc\/numpy\/reference\/routines.statistics.html), but with no success.","response":"You can't pass a known mean to np.std or np.var, you'll have to wait for the new standard library statistics module, but in the meantime you can save a little time by using the formula: \n```\n\n```python\na = np.random.rand(1000)\n```\n\n```python\n%%timeit\n .....: a.mean()\n .....: a.var()\n .....: \n10000 loops, best of 3: 80.6 \u00b5s per loop\n```\n\n```python\n%%timeit\n .....: m = a.mean()\n .....: np.mean((a-m)**2)\n .....: \n10000 loops, best of 3: 60.9 \u00b5s per loop\n```\n\n```python\nm = a.mean()\n```\n\n```python\na.var()\n#Output\n#0.078365856465916137\n```\n\n```python\nnp.mean((a-m)**2)\n#Output\n#0.078365856465916137\n```\nIf you really are trying to speed things up, try np.dot to do the squaring and summing (since that's what a dot-product is): \n```\n```\n\n```python\nnp.dot(a-m,a-m)\/a.size\n#Output\n#0.078365856465916137\n```\n\n```python\n%%timeit\n .....: m = a.mean()\n .....: c = a-m\n .....: np.dot(c,c)\/a.size\n .....: \n10000 loops, best of 3: 38.2 \u00b5s per loop\n```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19391149\/numpy-mean-and-variance-from-single-function","best_answers_votes":32,"tags":["python","numpy"],"question_length":476,"response_length":935,"tag_count":2} {"question":"How to plot complex numbers (Argand Diagram) using matplotlib I'd like to create an Argand Diagram from a set of complex numbers using matplotlib. Are there any pre-built functions to help me do this? Can anyone recommend an approach? Image by LeonardoG, CC-SA-3.0","response":"I'm not sure exactly what you're after here...you have a set of complex numbers, and want to map them to the plane by using their real part as the x coordinate and the imaginary part as y? If so you can get the real part of any python imaginary number with number.real and the imaginary part with number.imag. If you're using numpy, it also provides a set of helper functions numpy.real and numpy.imag etc. which work on numpy arrays. So for instance if you had an array of complex numbers stored something like this: \n```\n\n```python\na = n.arange(5) + 1j*n.arange(6,11)\n```\n\n```python\na\n#Output\n#array([ 0. +6.j, 1. +7.j, 2. +8.j, 3. +9.j, 4.+10.j])\n```\n...you can just do \n```\n```\n\n```python\nfig,ax = subplots()\n```\n\n```python\nax.scatter(a.real,a.imag)\n```\nThis plots dots on an argand diagram for each point.
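A self-contained variant of the same idea (an addition, with the imports written out and axes drawn through the origin; the styling choices here are assumptions, not from the original answer): \n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nz = np.arange(5) + 1j * np.arange(6, 11)  # some complex numbers to plot\nfig, ax = plt.subplots()\nax.scatter(z.real, z.imag)               # real part -> x, imaginary part -> y\nax.axhline(0, color='gray', lw=0.5)       # real axis\nax.axvline(0, color='gray', lw=0.5)       # imaginary axis\nax.set_xlabel('Re')\nax.set_ylabel('Im')\nplt.show()\n```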
edit: For the plotting part, you must of course have imported matplotlib.pyplot via from matplotlib.pyplot import * or (as I did) use the ipython shell in pylab mode.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17445720\/how-to-plot-complex-numbers-argand-diagram-using-matplotlib","best_answers_votes":23,"tags":["python","numpy","matplotlib","plot","complex-numbers"],"question_length":264,"response_length":963,"tag_count":5} {"question":"Indexing one array by another in numpy Suppose I have a matrix A with some arbitrary values: \n```\narray([[ 2, 4, 5, 3],\n [ 1, 6, 8, 9],\n [ 8, 7, 0, 2]])\n```\nAnd a matrix B which contains indices of elements in A: \n```\narray([[0, 0, 1, 2],\n [0, 3, 2, 1],\n [3, 2, 1, 0]])\n```\nHow do I select values from A pointed by B, i.e.: \n```\nA[B] = [[2, 2, 4, 5],\n [1, 9, 8, 6],\n [2, 0, 7, 8]]\n```","response":"EDIT: np.take_along_axis is a builtin function for this use case implemented since numpy 1.15. See @hpaulj 's answer below for how to use it. You can use NumPy's advanced indexing - \n```\nA[np.arange(A.shape[0])[:,None],B]\n```\nOne can also use linear indexing - \n```\nm,n = A.shape\nout = np.take(A,B + n*np.arange(m)[:,None])\n```\nSample run - \n```\n\n```python\nA\n#Output\n#array([[2, 4, 5, 3],\n# [1, 6, 8, 9],\n# [8, 7, 0, 2]])\n```\n\n```python\nB\n#Output\n#array([[0, 0, 1, 2],\n# [0, 3, 2, 1],\n# [3, 2, 1, 0]])\n```\n\n```python\nA[np.arange(A.shape[0])[:,None],B]\n#Output\n#array([[2, 2, 4, 5],\n# [1, 9, 8, 6],\n# [2, 0, 7, 8]])\n```\n\n```python\nm,n = A.shape\n```\n\n```python\nnp.take(A,B + n*np.arange(m)[:,None])\n#Output\n#array([[2, 2, 4, 5],\n# [1, 9, 8, 6],\n# [2, 0, 7, 8]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37878946\/indexing-one-array-by-another-in-numpy","best_answers_votes":26,"tags":["python","numpy"],"question_length":422,"response_length":785,"tag_count":2} {"question":"TensorFlow: using a tensor to index another tensor I have a basic question about how to do indexing in TensorFlow. In numpy: \n```\nx = np.asarray([1,2,3,3,2,5,6,7,1,3])\ne = np.asarray([0,1,0,1,1,1,0,1])\n#numpy \nprint x * e[x]\n```\nI can get \n```\n[1 0 3 3 0 5 0 7 1 3]\n```\nHow can I do this in TensorFlow? \n```\nx = np.asarray([1,2,3,3,2,5,6,7,1,3])\ne = np.asarray([0,1,0,1,1,1,0,1])\nx_t = tf.constant(x)\ne_t = tf.constant(e)\nwith tf.Session():\n ????\n```\nThanks!","response":"Fortunately, the exact case you're asking about is supported in TensorFlow by tf.gather(): \n```\nresult = x_t * tf.gather(e_t, x_t)\n\nwith tf.Session() as sess:\n print sess.run(result) # ==> 'array([1, 0, 3, 3, 0, 5, 0, 7, 1, 3])'\n```\nThe tf.gather() op is less powerful than NumPy's advanced indexing: it only supports extracting full slices of a tensor on its 0th dimension. Support for more general indexing has been requested, and is being tracked in this GitHub issue.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/35842598\/tensorflow-using-a-tensor-to-index-another-tensor","best_answers_votes":36,"tags":["python","numpy","tensorflow"],"question_length":461,"response_length":475,"tag_count":3} {"question":"How to force pandas read_csv to use float32 for all float columns? 
Because I don't need double precision My machine has limited memory and I want to process bigger datasets I need to pass the extracted data (as matrix) to BLAS libraries, and BLAS calls for single precision are 2x faster than for the double precision equivalents. Note that not all columns in the raw csv file have float types. I only need to set float32 as the default for float columns.","response":"Try: \n```\nimport numpy as np\nimport pandas as pd\n\n# Sample 100 rows of data to determine dtypes.\ndf_test = pd.read_csv(filename, nrows=100)\n\nfloat_cols = [c for c in df_test if df_test[c].dtype == \"float64\"]\nfloat32_cols = {c: np.float32 for c in float_cols}\n\ndf = pd.read_csv(filename, engine='c', dtype=float32_cols)\n```\nThis first reads a sample of 100 rows of data (modify as required) to determine the type of each column. It then creates a list of those columns which are 'float64', and then uses dictionary comprehension to create a dictionary with these columns as the keys and 'np.float32' as the value for each key. Finally, it reads the whole file using the 'c' engine (required for assigning dtypes to columns) and then passes the float32_cols dictionary as a parameter to dtype. \n```\ndf = pd.read_csv(filename, nrows=100)\n\n```python\ndf\n#Output\n# int_col float1 string_col float2\n#0 1 1.2 a 2.2\n#1 2 1.3 b 3.3\n#2 3 1.4 c 4.4\n```\n\n```python\ndf.info()\n#Output\n#<class 'pandas.core.frame.DataFrame'>\n#Int64Index: 3 entries, 0 to 2\n#Data columns (total 4 columns):\n#int_col 3 non-null int64\n#float1 3 non-null float64\n#string_col 3 non-null object\n#float2 3 non-null float64\n#dtypes: float64(2), int64(1), object(1)\n```\n\ndf32 = pd.read_csv(filename, engine='c', dtype={c: np.float32 for c in float_cols})\n\n```python\ndf32.info()\n#Output\n#<class 'pandas.core.frame.DataFrame'>\n#Int64Index: 3 entries, 0 to 2\n#Data columns (total 4 columns):\n#int_col 3 non-null int64\n#float1 3 non-null float32\n#string_col 3 non-null object\n#float2 3 non-null float32\n#dtypes: float32(2), int64(1), object(1)\n#```\n","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30494569\/how-to-force-pandas-read-csv-to-use-float32-for-all-float-columns","best_answers_votes":25,"tags":["python","numpy","pandas"],"question_length":451,"response_length":1650,"tag_count":3} {"question":"what does numpy.apply_along_axis perform exactly? I have come across the numpy.apply_along_axis function in some code. And I don't understand the documentation about it. This is an example of the documentation: \n```\n\n```python\ndef new_func(a):\n    \"\"\"Divide elements of a by 2.\"\"\"\n    return a * 0.5\n```\n\n```python\nb = np.array([[1,2,3], [4,5,6], [7,8,9]])\n```\n\n```python\nnp.apply_along_axis(new_func, 0, b)\n#Output\n#array([[ 0.5, 1. , 1.5],\n# [ 2. , 2.5, 3. ],\n# [ 3.5, 4. , 4.5]])\n```\nAs far as I thought I understood the documentation, I would have expected: \n```\narray([[ 0.5, 1. , 1.5],\n [ 4 , 5 , 6 ],\n [ 7 , 8 , 9 ]])\n```\ni.e. having applied the function along the axis [1,2,3] which is axis 0 in [[1,2,3], [4,5,6], [7,8,9]]. Obviously I am wrong. Could you correct me?","response":"apply_along_axis applies the supplied function along 1D slices of the input array, with the slices taken along the axis you specify. So in your example, new_func is applied over each slice of the array along the first axis.
It becomes clearer if you use a vector valued function, rather than a scalar, like this: \n```\n\n```python\nb = np.array([[1,2,3], [4,5,6], [7,8,9]])\n```\n\n```python\nnp.apply_along_axis(np.diff,0,b)\n#Output\n#array([[3, 3, 3],\n# [3, 3, 3]])\n```\n\n```python\nnp.apply_along_axis(np.diff,1,b)\n#Output\n#array([[1, 1],\n# [1, 1],\n# [1, 1]])\n```\nHere, numpy.diff (i.e. the arithmetic difference of adjacent array elements) is applied along each slice of either the first or second axis (dimension) of the input array.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9019581\/what-does-numpy-apply-along-axis-perform-exactly","best_answers_votes":26,"tags":["python","numpy","scipy"],"question_length":786,"response_length":733,"tag_count":3} {"question":"Numpy array: sequence too large I have an array of size 11 called 'wavelength' and a larger array of size n called 'MN'. And 'model' is an m by n array. I'm doing this: \n```\nfor i in xrange(10+len(wavelength)-2):\n y=np.empty(model[MN][i],float)\n```\nand getting this as an error: \n```\nFile \"test_prog.py\", line 658, in <module>\n y=np.empty(model[MN][i],float)\nValueError: sequence too large; must be smaller than 32\n```\nI'm not sure what to do about that. I've looked elsewhere online but I can't find anything of obvious substance.","response":"The sequence too large error means that you are creating a multidimensional array with more than 32 dimensions. For example: np.empty([1]*33) will raise this error. Are you sure you want to create an array with more than 32 dimensions? If you want to create an empty array with the same shape as model[MN][i], you should use empty_like().","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17688094\/numpy-array-sequence-too-large","best_answers_votes":27,"tags":["python","numpy"],"question_length":535,"response_length":316,"tag_count":2} {"question":"Python import error: cannot import name 'six' from 'sklearn.externals' I'm using numpy and mlrose, and all I have written so far is: \n```\nimport numpy as np\nimport mlrose\n```\nHowever, when I run it, it comes up with an error message: \n```\nFile \"C:\\Users\\\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\mlrose\\neural.py\", line 12, in <module>\n from sklearn.externals import six\nImportError: cannot import name 'six' from 'sklearn.externals' (C:\\Users\\\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\sklearn\\externals\\__init__.py)\n```\nAny help on sorting this problem will be greatly appreciated.","response":"Solution: The real answer is that the dependency needs to be changed by the mlrose maintainers. A workaround is: \n```\nimport six\nimport sys\nsys.modules['sklearn.externals.six'] = six\nimport mlrose\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/61867945\/python-import-error-cannot-import-name-six-from-sklearn-externals","best_answers_votes":76,"tags":["python","numpy","scikit-learn","python-import","six"],"question_length":649,"response_length":200,"tag_count":5} {"question":"Pytorch: Why is the memory occupied by the `tensor` variable so small? In Pytorch 1.0.0, I found that a tensor variable occupies very small memory. I wonder how it stores so much data. Here's the code. \n```\na = np.random.randn(1, 1, 128, 256)\nb = torch.tensor(a, device=torch.device('cpu'))\n\na_size = sys.getsizeof(a)\nb_size = sys.getsizeof(b)\n```\na_size is 262288.
b_size is 72.","response":"The answer is in two parts. From the documentation of sys.getsizeof, firstly All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific. so it could be that for tensors __sizeof__ is undefined or defined differently than you would expect - this function is not something you can rely on. Secondly Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to. which means that if the torch.Tensor object merely holds a reference to the actual memory, this won't show in sys.getsizeof. This is indeed the case, if you check the size of the underlying storage instead, you will see the expected number \n```\nimport torch, sys\nb = torch.randn(1, 1, 128, 256, dtype=torch.float64)\nsys.getsizeof(b)\n>> 72\nsys.getsizeof(b.storage())\n>> 262208\n```\nNote: I am setting dtype to float64 explicitly, because that is the default dtype in numpy, whereas torch uses float32 by default.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/54361763\/pytorch-why-is-the-memory-occupied-by-the-tensor-variable-so-small","best_answers_votes":70,"tags":["python","numpy","pytorch","tensor","numpy-ndarray"],"question_length":379,"response_length":1026,"tag_count":5} {"question":"How can I calculate the nearest positive semi-definite matrix? I'm coming to Python from R and trying to reproduce a number of things that I'm used to doing in R using Python. The Matrix library for R has a very nifty function called nearPD() which finds the closest positive semi-definite (PSD) matrix to a given matrix. While I could code something up, being new to Python\/Numpy I don't feel too excited about reinventing the wheel if something is already out there. Any tips on an existing implementation in Python?","response":"I don't think there is a library which returns the matrix you want, but here is a \"just for fun\" coding of neareast positive semi-definite matrix algorithm from Higham (2000) \n```\nimport numpy as np,numpy.linalg\n\ndef _getAplus(A):\n eigval, eigvec = np.linalg.eig(A)\n Q = np.matrix(eigvec)\n xdiag = np.matrix(np.diag(np.maximum(eigval, 0)))\n return Q*xdiag*Q.T\n\ndef _getPs(A, W=None):\n W05 = np.matrix(W**.5)\n return W05.I * _getAplus(W05 * A * W05) * W05.I\n\ndef _getPu(A, W=None):\n Aret = np.array(A.copy())\n Aret[W > 0] = np.array(W)[W > 0]\n return np.matrix(Aret)\n\ndef nearPD(A, nit=10):\n n = A.shape[0]\n W = np.identity(n) \n# W is the matrix used for the norm (assumed to be Identity matrix here)\n# the algorithm should work for any diagonal W\n deltaS = 0\n Yk = A.copy()\n for k in range(nit):\n Rk = Yk - deltaS\n Xk = _getPs(Rk, W=W)\n deltaS = Xk - Rk\n Yk = _getPu(Xk, W=W)\n return Yk\n```\nWhen tested on the example from the paper, it returns the correct answer \n```\nprint nearPD(np.matrix([[2,-1,0,0],[-1,2,-1,0],[0,-1,2,-1],[0,0,-1,2]]),nit=10)\n[[ 1. -0.80842467 0.19157533 0.10677227]\n [-0.80842467 1. -0.65626745 0.19157533]\n [ 0.19157533 -0.65626745 1. -0.80842467]\n [ 0.10677227 0.19157533 -0.80842467 1. 
]]\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10939213\/how-can-i-calculate-the-nearest-positive-semi-definite-matrix","best_answers_votes":36,"tags":["python","matrix","numpy"],"question_length":518,"response_length":1331,"tag_count":3} {"question":"Truncating decimal digits numpy array of floats I want to truncate the float values within the numpy array, e.g. \n```\n2.34341232 --> 2.34\n```\nI read the post truncate floating point but it's for one float. I don't want to run a loop on the numpy array, it will be quite expensive. Is there any inbuilt method within numpy which can do this easily? I do need the output as a float, not a string.","response":"Try out this modified version of numpy.trunc(). \n```\nimport numpy as np\ndef trunc(values, decs=0):\n    return np.trunc(values*10**decs)\/(10**decs)\n```\nSadly, the numpy.trunc function doesn't allow decimal truncation. Luckily, multiplying the argument and dividing its result by a power of ten gives the expected results. \n```\nvec = np.array([-4.79, -0.38, -0.001, 0.011, 0.4444, 2.34341232, 6.999])\n\ntrunc(vec, decs=2)\n```\nwhich returns: \n```\narray([-4.79, -0.38, -0. , 0.01, 0.44, 2.34, 6.99])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/42021972\/truncating-decimal-digits-numpy-array-of-floats","best_answers_votes":44,"tags":["python","numpy","vectorization"],"question_length":391,"response_length":503,"tag_count":3} {"question":"threshold in 2D numpy array I have an array of shape 512x512 which contains numbers between 0 and 100 at the ith and jth positions. Now I want to keep array[i,j] where it is < 25 and have zero at other places. I have tried with array = array[where(array<25)], which gives me a 1D array, but I want 2D. Please help me to solve this.","response":"One solution: \n```\nresult = (array < 25) * array\n```\nThe first part, (array < 25), gives a boolean mask, so the multiplication zeroes out every element that is >= 25. Alternatively, you can set the elements >= 25 to zero directly in the original array: \n```\narray[array >= 25] = 0\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/36719997\/threshold-in-2d-numpy-array","best_answers_votes":59,"tags":["python","numpy"],"question_length":311,"response_length":141,"tag_count":2} {"question":"igraph Graph from numpy or pandas adjacency matrix I have an adjacency matrix stored as a pandas.DataFrame: \n```\nnode_names = ['A', 'B', 'C']\na = pd.DataFrame([[1,2,3],[3,1,1],[4,0,2]],\n index=node_names, columns=node_names)\na_numpy = a.as_matrix()\n```\nI'd like to create an igraph.Graph from either the pandas or the numpy adjacency matrices. In an ideal world the nodes would be named as expected. Is this possible? The tutorial seems to be silent on the issue.","response":"In igraph you can use igraph.Graph.Adjacency to create a graph from an adjacency matrix without having to use zip. There are some things to be aware of when a weighted adjacency matrix is used and stored in a np.array or pd.DataFrame. igraph.Graph.Adjacency can't take an np.array as argument, but that is easily solved using tolist. Integers in the adjacency matrix are interpreted as the number of edges between nodes rather than weights; this is solved by using the adjacency matrix as boolean.
An example of how to do it: \n```\nimport igraph\nimport pandas as pd\n\nnode_names = ['A', 'B', 'C']\na = pd.DataFrame([[1,2,3],[3,1,1],[4,0,2]], index=node_names, columns=node_names)\n\n# Get the values as np.array, it's more convenenient.\nA = a.values\n\n# Create graph, A.astype(bool).tolist() or (A \/ A).tolist() can also be used.\ng = igraph.Graph.Adjacency((A > 0).tolist())\n\n# Add edge weights and node labels.\ng.es['weight'] = A[A.nonzero()]\ng.vs['label'] = node_names # or a.index\/a.columns\n```\nYou can reconstruct your adjacency dataframe using get_adjacency by: \n```\ndf_from_g = pd.DataFrame(g.get_adjacency(attribute='weight').data,\n columns=g.vs['label'], index=g.vs['label'])\n(df_from_g == a).all().all() # --> True\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29655111\/igraph-graph-from-numpy-or-pandas-adjacency-matrix","best_answers_votes":41,"tags":["python","numpy","pandas","igraph"],"question_length":466,"response_length":1220,"tag_count":4} {"question":"how to export HDF5 file to NumPy using H5PY? I have an existing hdf5 file with three arrays, i want to extract one of the arrays using h5py.","response":"h5py already reads files in as numpy arrays, so just: \n```\nwith h5py.File('the_filename', 'r') as f:\n my_array = f['array_name'][()]\n```\nThe [()] means to read the entire array in; if you don't do that, it doesn't read the whole data but instead gives you lazy access to sub-parts (very useful when the array is huge but you only need a small part of it).","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10274476\/how-to-export-hdf5-file-to-numpy-using-h5py","best_answers_votes":57,"tags":["python","numpy","hdf5","h5py"],"question_length":140,"response_length":358,"tag_count":4} {"question":"Finding the consecutive zeros in a numpy array I have the following array \n```py\na = [1, 2, 3, 0, 0, 0, 0, 0, 0, 4, 5, 6, 0, 0, 0, 0, 9, 8, 7,0,10,11]\n```\nI would like to find the start and the end index of the array where the values are zeros consecutively. For the array above the output would be as follows \n```py\n[3,8],[12,15],[19]\n```\nI want to achieve this as efficiently as possible.","response":"Here's a fairly compact vectorized implementation. I've changed the requirements a bit, so the return value is a bit more \"numpythonic\": it creates an array with shape (m, 2), where m is the number of \"runs\" of zeros. The first column is the index of the first 0 in each run, and the second is the index of the first nonzero element after the run. (This indexing pattern matches, for example, how slicing works and how the range function works.) 
\n```\nimport numpy as np\n\ndef zero_runs(a):\n    # Create an array that is 1 where a is 0, and pad each end with an extra 0.\n    iszero = np.concatenate(([0], np.equal(a, 0).view(np.int8), [0]))\n    absdiff = np.abs(np.diff(iszero))\n    # Runs start and end where absdiff is 1.\n    ranges = np.where(absdiff == 1)[0].reshape(-1, 2)\n    return ranges\n```\nFor example: \n```\n\n```python\na = [1, 2, 3, 0, 0, 0, 0, 0, 0, 4, 5, 6, 0, 0, 0, 0, 9, 8, 7, 0, 10, 11]\n```\n\n```python\nruns = zero_runs(a)\n```\n\n```python\nruns\n#Output\n#array([[ 3, 9],\n# [12, 16],\n# [19, 20]])\n```\nWith this format, it is simple to get the number of zeros in each run: \n```\n```\n\n```python\nruns[:,1] - runs[:,0]\n#Output\n#array([6, 4, 1])\n```\nIt's always a good idea to check the edge cases: \n```\n```\n\n```python\nzero_runs([0,1,2])\n#Output\n#array([[0, 1]])\n```\n\n```python\nzero_runs([1,2,0])\n#Output\n#array([[2, 3]])\n```\n\n```python\nzero_runs([1,2,3])\n#Output\n#array([], shape=(0, 2), dtype=int64)\n```\n\n```python\nzero_runs([0,0,0])\n#Output\n#array([[0, 3]])\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/24885092\/finding-the-consecutive-zeros-in-a-numpy-array","best_answers_votes":55,"tags":["python","arrays","numpy"],"question_length":390,"response_length":1453,"tag_count":3} {"question":"Numpy inverse mask I want to invert the true\/false values in my numpy masked array. So in the example below I don't want to mask out the second value in the data array, I want to mask out the first and third value. Below is just an example. My masked array is created by a longer process that runs before. So I cannot change the mask array itself. Is there another way to invert the values? \n```\nimport numpy\ndata = numpy.array([[ 1, 2, 5 ]])\nmask = numpy.array([[0,1,0]])\n\nnumpy.ma.masked_array(data, mask)\n```","response":"```\nimport numpy\ndata = numpy.array([[ 1, 2, 5 ]])\nmask = numpy.array([[0,1,0]])\n\nnumpy.ma.masked_array(data, ~mask) #note this probably won't work right for non-boolean (T\/F) values\n#or\nnumpy.ma.masked_array(data, numpy.logical_not(mask))\n```\nfor example \n```\n\n```python\na = numpy.array([False,True,False])\n```\n\n```python\n~a\n#Output\n#array([ True, False, True], dtype=bool)\n```\n\n```python\nnumpy.logical_not(a)\n#Output\n#array([ True, False, True], dtype=bool)\n```\n\n```python\na = numpy.array([0,1,0])\n```\n\n```python\n~a\n#Output\n#array([-1, -2, -1])\n```\n\n```python\nnumpy.logical_not(a)\n#Output\n#array([ True, False, True], dtype=bool)\n#```\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16724669\/numpy-inverse-mask","best_answers_votes":47,"tags":["python","arrays","numpy","mask"],"question_length":513,"response_length":539,"tag_count":4} {"question":"How to extract the bits of larger numeric Numpy data types Numpy has a library function, np.unpackbits, which will unpack a uint8 into a bit vector of length 8. Is there a correspondingly fast way to unpack larger numeric types? E.g. uint16 or uint32.
I am working on a question that involves frequent translation between numbers, for array indexing, and their bit vector representations, and the bottleneck is our pack and unpack functions.","response":"You can do this with view and unpackbits Input: \n```\nunpackbits(arange(2, dtype=uint16).view(uint8))\n```\nOutput: \n```\n[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]\n```\nFor a = arange(int(1e6), dtype=uint16) this is pretty fast at around 7 ms on my machine \n```\n%%timeit\nunpackbits(a.view(uint8))\n\n100 loops, best of 3: 7.03 ms per loop\n```\nAs for endianness, you'll have to look at http:\/\/docs.scipy.org\/doc\/numpy\/user\/basics.byteswapping.html and apply the suggestions there depending on your needs.","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18296035\/how-to-extract-the-bits-of-larger-numeric-numpy-data-types","best_answers_votes":27,"tags":["python","numpy"],"question_length":441,"response_length":520,"tag_count":2} {"question":"How many digits can float8, float16, float32, float64, and float128 contain? Numpy's dtype documentation only shows \"x bits exponent, y bits mantissa\" for each float type, but I couldn't translate that to exactly how many digits before\/after the decimal point. Is there any simple formula\/table to look this up in?","response":"This is not as simple as usually expected. For the accuracy of the mantissa, there generally are two values: Given a value in decimal representation, how many decimal digits can be guaranteed to be preserved if converted from a decimal to a selected binary format and back (with default rounding). Given a value in binary format, how many decimal digits are needed if the value is converted to decimal format and back to the original binary format (again, with default rounding) to get the original value unchanged. In both cases, the decimal representation is treated as independent of the exponent, without leading and trailing zeros (for example, all of 0.0123e4, 1.23e2, 1.2300e2, 123, 123.0, 123000.000e-3 are 3 digits). For 32-bit binary floats, these two sizes are 6 and 9 decimal digits, respectively. In C, these are FLT_DIG and FLT_DECIMAL_DIG. (Oddly, a 32-bit float actually preserves 7 decimal digits for most numbers, but there are exceptions.) In C++, look at std::numeric_limits<float>::digits10 and std::numeric_limits<float>::max_digits10, respectively. For 64-bit binary floats, these are 15 and 17 (DBL_DIG and DBL_DECIMAL_DIG, respectively; and std::numeric_limits<double>::{digits10, max_digits10}). General formulas for them (thx2 @MarkDickinson) ${format}_DIG (digits10): floor((p-1)*log10(2)) ${format}_DECIMAL_DIG (max_digits10): ceil(1+p*log10(2)) where p is the number of digits in the mantissa (including the hidden one for the normalized IEEE754 case). Also, comments with some mathematical explanation at the C++ numeric limits page: The standard 32-bit IEEE 754 floating-point type has a 24 bit fractional part (23 bits written, one implied), which may suggest that it can represent 7 digit decimals (24 * std::log10(2) is 7.22), but relative rounding errors are non-uniform and some floating-point values with 7 decimal digits do not survive conversion to 32-bit float and back: the smallest positive example is 8.589973e9, which becomes 8.589974e9 after the roundtrip. These rounding errors cannot exceed one bit in the representation, and digits10 is calculated as (24-1)*std::log10(2), which is 6.92. Rounding down results in the value 6. 
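To make the two formulas above concrete, here is a small sketch in Python (the loop over (name, p) pairs is my own illustration, not part of the original answer; p is the mantissa size including the hidden bit): \n```\nimport math\n\n# digits10 and max_digits10 for IEEE754 binary32 and binary64\nfor name, p in [('float32', 24), ('float64', 53)]:\n    digits10 = math.floor((p - 1) * math.log10(2))\n    max_digits10 = math.ceil(1 + p * math.log10(2))\n    print(name, digits10, max_digits10)  # float32: 6 9, float64: 15 17\n```\n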
Look for values for 16- and 128-bit floats in comments (but see below for what a 128-bit float really is). For the exponent, this is simpler because the border values (minimum normalized, minimum denormalized, maximum represented) are exact and can be easily obtained and printed. @PaulPanzer suggested numpy.finfo. It gives the first of these values ({format}_DIG); maybe it is the thing you are searching for: \n```\n\n```python\nnumpy.finfo(numpy.float16).precision\n#Output\n#3\n```\n\n```python\nnumpy.finfo(numpy.float32).precision\n#Output\n#6\n```\n\n```python\nnumpy.finfo(numpy.float64).precision\n#Output\n#15\n```\n\n```python\nnumpy.finfo(numpy.float128).precision\n#Output\n#18\n#```\n#but, on most systems (mine was Ubuntu 18.04 on x86-64) the value is confusing for float128; it is really for the 80-bit x86 \"extended\" float with a 64 bit significand; real IEEE754 float128 has 112 significand bits and so the real value should have been 33, but numpy presents another type under this name. See here for details: in general, float128 is a delusion in numpy. UPD3: you mentioned float8 - there is no such type in the IEEE754 set. One could imagine such a type for some utterly specific purposes, but its range is too narrow for any universal usage.\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/56514892\/how-many-digits-can-float8-float16-float32-float64-and-float128-contain","best_answers_votes":25,"tags":["python","numpy","floating-point","precision"],"question_length":314,"response_length":3280,"tag_count":4} {"question":"Increment Numpy array with repeated indices I have a Numpy array and a list of indices whose values I would like to increment by one. This list may contain repeated indices, and I would like the increment to scale with the number of repeats of each index. Without repeats, the command is simple: \n```\na=np.zeros(6).astype('int')\nb=[3,2,5]\na[b]+=1\n```\nWith repeats, I've come up with the following method. \n```\nb=[3,2,5,2] # indices to increment by one each replicate\nbbins=np.bincount(b)\nb.sort() # sort b because bincount is sorted\nincr=bbins[np.nonzero(bbins)] # create increment array\nbu=np.unique(b) # sorted, unique indices (len(bu)=len(incr))\na[bu]+=incr\n```\nIs this the best way? Is there a risk involved with assuming that the np.bincount and np.unique operations would result in the same sorted order? Am I missing some simple Numpy operation to solve this?","response":"In numpy >= 1.8, you can also use the at method of the addition 'universal function' ('ufunc'). As the docs note: For addition ufunc, this method is equivalent to a[indices] += b, except that results are accumulated for elements that are indexed more than once. So taking your example: \n```\na = np.zeros(6).astype('int')\nb = [3, 2, 5, 2]\n```\n\u2026to then\u2026 \n```\nnp.add.at(a, b, 1)\n```\n\u2026will leave a as\u2026 \n```\narray([0, 0, 2, 1, 0, 1])\n```","best_answers_score":0.8,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2004364\/increment-numpy-array-with-repeated-indices","best_answers_votes":43,"tags":["python","indexing","numpy"],"question_length":929,"response_length":432,"tag_count":3} {"question":"What are the available datatypes for 'dtype' with NumPy's loadtxt() and genfromtxt? What are the available numpy.loadtxt or numpy.genfromtxt dtypes for importing table data with varying datatypes, and what are the available abbreviations for their use (e.g. i32 for integer)? 
This post demonstrates the use of conditions; I was curious whether somebody might elaborate on that.","response":"In addition to np.sctypeDict, there are these variables: \n```\n\n```python\nnp.typecodes\n#Output\n#{'All': '?bhilqpBHILQPefdgFDGSUVOMm',\n# 'AllFloat': 'efdgFDG',\n# 'AllInteger': 'bBhHiIlLqQpP',\n# 'Character': 'c',\n# 'Complex': 'FDG',\n# 'Datetime': 'Mm',\n# 'Float': 'efdg',\n# 'Integer': 'bhilqp',\n# 'UnsignedInteger': 'BHILQP'}\n```\n\n```python\nnp.sctypes\n#Output\n#{'complex': [numpy.complex64, numpy.complex128, numpy.complex192],\n# 'float': [numpy.float16, numpy.float32, numpy.float64, numpy.float96],\n# 'int': [numpy.int8, numpy.int16, numpy.int32, numpy.int32, numpy.int64],\n# 'others': [bool, object, str, unicode, numpy.void],\n# 'uint': [numpy.uint8, numpy.uint16, numpy.uint32, numpy.uint32, numpy.uint64]}\n#```\n```","best_answers_score":0.7986,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13997087\/what-are-the-available-datatypes-for-dtype-with-numpys-loadtxt-an-genfromtx","best_answers_votes":52,"tags":["python","numpy"],"question_length":363,"response_length":698,"tag_count":2} {"question":"Quickest way to find the nth largest value in a numpy Matrix There are lots of solutions to do this for a single array, but what about a matrix, such as: \n```\n\n```python\nk\n#Output\n#array([[ 35, 48, 63],\n# [ 60, 77, 96],\n# [ 91, 112, 135]])\n#```\n#You can use k.max(), but of course this only returns the highest value, 135. What if I want the second or third?\n```","response":"As said, np.partition should be faster (at most O(n) running time): \n```\nnp.partition(k.flatten(), -2)[-2]\n```\nshould return the 2nd largest element. (partition guarantees that the numbered element is in position, all elements before are smaller, and all behind are bigger).","best_answers_score":0.7979,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/33181350\/quickest-way-to-find-the-nth-largest-value-in-a-numpy-matrix","best_answers_votes":71,"tags":["python","numpy"],"question_length":587,"response_length":274,"tag_count":2} {"question":"Calculate the Cumulative Distribution Function (CDF) in Python How can I calculate in Python the Cumulative Distribution Function (CDF)? I want to calculate it from an array of points I have (discrete distribution), not with the continuous distributions that, for example, scipy has.","response":"(It is possible that my interpretation of the question is wrong. If the question is how to get from a discrete PDF into a discrete CDF, then np.cumsum divided by a suitable constant will do if the samples are equispaced. If the array is not equispaced, then np.cumsum of the array multiplied by the distances between the points will do.) If you have a discrete array of samples, and you would like to know the CDF of the sample, then you can just sort the array. If you look at the sorted result, you'll realize that the smallest value represents 0%, and the largest value represents 100%. If you want to know the value at 50% of the distribution, just look at the array element which is in the middle of the sorted array. Let us have a closer look at this with a simple example: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# create some randomly distributed data:\ndata = np.random.randn(10000)\n\n# sort the data:\ndata_sorted = np.sort(data)\n\n# calculate the proportional values of samples\np = 1. 
* np.arange(len(data)) \/ (len(data) - 1)\n\n# plot the sorted data:\nfig = plt.figure()\nax1 = fig.add_subplot(121)\nax1.plot(p, data_sorted)\nax1.set_xlabel('$p$')\nax1.set_ylabel('$x$')\n\nax2 = fig.add_subplot(122)\nax2.plot(data_sorted, p)\nax2.set_xlabel('$x$')\nax2.set_ylabel('$p$')\n```\nThis gives the following plot where the right-hand-side plot is the traditional cumulative distribution function. It should reflect the CDF of the process behind the points, but naturally, it is not, as long as the number of points is finite. This function is easy to invert, and it depends on your application which form you need.","best_answers_score":0.7974,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/24788200\/calculate-the-cumulative-distribution-function-cdf-in-python","best_answers_votes":60,"tags":["python","numpy","machine-learning","statistics","scipy"],"question_length":283,"response_length":1621,"tag_count":5} {"question":"Pandas - Replace values based on index If I create a dataframe like so: \n```\nimport pandas as pd, numpy as np\n\ndf = pd.DataFrame(np.random.randint(0,100,size=(100, 2)), columns=list('AB'))\n```\nHow would I change the entry in column A to be the number 16 for rows 0-15, for example? In other words, how do I replace cells based purely on index?","response":"Use loc: \n```\ndf.loc[0:15,'A'] = 16\nprint (df)\n A B\n0 16 45\n1 16 5\n2 16 97\n3 16 58\n4 16 26\n5 16 87\n6 16 51\n7 16 17\n8 16 39\n9 16 73\n10 16 94\n11 16 69\n12 16 57\n13 16 24\n14 16 43\n15 16 77\n16 41 0\n17 3 21\n18 0 98\n19 45 39\n20 66 62\n21 8 53\n22 69 47\n23 48 53\n```\nSolution with ix is deprecated.","best_answers_score":0.7973,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37725195\/pandas-replace-values-based-on-index","best_answers_votes":90,"tags":["python","numpy","pandas","replace","dataframe"],"question_length":344,"response_length":357,"tag_count":5} {"question":"R's which() and which.min() Equivalent in Python I read the similar topic here. I think the question is different or at least .index() could not solve my problem. This is a simple code in R and its answer: \n```\nx <- c(1:4, 0:5, 11)\nx\n#[1] 1 2 3 4 0 1 2 3 4 5 11\nwhich(x==2)\n# [1] 2 7\nmin(which(x==2))\n# [1] 2\nwhich.min(x)\n#[1] 5\n```\nWhich simply returns the index of the item which meets the condition. If x is the input in Python, how can I get the indices for the elements which meet criteria x==2 and the one which is the smallest in the array (which.min). \n```\nx = [1,2,3,4,0,1,2,3,4,11] \nx=np.array(x)\nx[x>2].index()\n##'numpy.ndarray' object has no attribute 'index'\n```","response":"Numpy does have built-in functions for it \n```\nx = [1,2,3,4,0,1,2,3,4,11] \nx=np.array(x)\nnp.where(x == 2)\nnp.min(np.where(x==2))\nnp.argmin(x)\n\nnp.where(x == 2)\nOut[9]: (array([1, 6], dtype=int64),)\n\nnp.min(np.where(x==2))\nOut[10]: 1\n\nnp.argmin(x)\nOut[11]: 4\n```","best_answers_score":0.7941,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/48519062\/rs-which-and-which-min-equivalent-in-python","best_answers_votes":44,"tags":["python","r","numpy"],"question_length":684,"response_length":261,"tag_count":3} {"question":"Better way to shuffle two numpy arrays in unison I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices. 
This code works, and illustrates my goals: \n```\ndef shuffle_in_unison(a, b):\n assert len(a) == len(b)\n shuffled_a = numpy.empty(a.shape, dtype=a.dtype)\n shuffled_b = numpy.empty(b.shape, dtype=b.dtype)\n permutation = numpy.random.permutation(len(a))\n for old_index, new_index in enumerate(permutation):\n shuffled_a[new_index] = a[old_index]\n shuffled_b[new_index] = b[old_index]\n return shuffled_a, shuffled_b\n```\nFor example: \n```\n\n```python\na = numpy.asarray([[1, 1], [2, 2], [3, 3]])\n```\n\n```python\nb = numpy.asarray([1, 2, 3])\n```\n\n```python\nshuffle_in_unison(a, b)\n#Output\n#(array([[2, 2],\n# [1, 1],\n# [3, 3]]), array([2, 1, 3]))\n#```\n#However, this feels clunky, inefficient, and slow, and it requires making a copy of the arrays -- I'd rather shuffle them in-place, since they'll be quite large. One other thought I had was this: \n#```\n#def shuffle_in_unison_scary(a, b):\n# rng_state = numpy.random.get_state()\n# numpy.random.shuffle(a)\n# numpy.random.set_state(rng_state)\n# numpy.random.shuffle(b)\n#```\n#This works...but it's a little scary, as I see little guarantee it'll continue to work -- it doesn't look like the sort of thing that's guaranteed to survive across numpy versions, for example.\n```","response":"You can use NumPy's array indexing: \n```\ndef unison_shuffled_copies(a, b):\n assert len(a) == len(b)\n p = numpy.random.permutation(len(a))\n return a[p], b[p]\n```\nThis will result in the creation of separate unison-shuffled arrays.","best_answers_score":0.7935,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4601373\/better-way-to-shuffle-two-numpy-arrays-in-unison","best_answers_votes":485,"tags":["python","numpy","random","shuffle","numpy-ndarray"],"question_length":1647,"response_length":235,"tag_count":5} {"question":"Is there a difference between scipy.pi, numpy.pi, or math.pi? In a project using SciPy and NumPy, when should one use scipy.pi vs numpy.pi vs just math.pi? Is there a difference between these values?","response":"```\n\n```python\nimport math\n```\n\n```python\nimport numpy as np\n```\n\n```python\nimport scipy\n```\n\n```python\nmath.pi == np.pi == scipy.pi\n#Output\n#True\n#```\n#So it doesn't matter, they are all the same value. The only reason all three modules provide a pi value is so if you are using just one of the three modules, you can conveniently have access to pi without having to import another module. They're not providing different values for pi.\n```","best_answers_score":0.7935,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12645547\/is-there-a-difference-between-scipy-pi-numpy-pi-or-math-pi","best_answers_votes":257,"tags":["python","numpy","math","scipy","pi"],"question_length":199,"response_length":386,"tag_count":5} {"question":"What is the difference between math.exp and numpy.exp and why do numpy creators choose to introduce exp again? exp means exponential function. Why do numpy creators introduce this function again? math.exp numpy.exp","response":"The math.exp works only for scalars, whereas numpy.exp will work for arrays. 
Example: \n```\n\n```python\nimport math\n```\n\n```python\nimport numpy as np\n```\n\n```python\nx = [1.,2.,3.,4.,5.]\n```\n\n```python\nmath.exp(x)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n# math.exp(x)\n#TypeError: a float is required\n```\n\n```python\nnp.exp(x)\n#Output\n#array([ 2.71828183, 7.3890561 , 20.08553692, 54.59815003,\n# 148.4131591 ])\n#```\n#It is the same case for other math functions. \n#```\n```\n\n```python\nmath.sin(x)\n#Output\n#Traceback (most recent call last):\n# File \"\", line 1, in \n# math.sin(x)\n#TypeError: a float is required\n```\n\n```python\nnp.sin(x)\n#Output\n#array([ 0.84147098, 0.90929743, 0.14112001, -0.7568025 , -0.95892427])\n#```\n#Also refer to this answer to check out how numpy is faster than math.\n```","best_answers_score":0.7934,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30712402\/what-is-the-difference-between-math-exp-and-numpy-exp-and-why-do-numpy-creators","best_answers_votes":43,"tags":["python","arrays","numpy"],"question_length":214,"response_length":756,"tag_count":3} {"question":"Random Number from Histogram Suppose I create a histogram using scipy\/numpy, so I have two arrays: one for the bin counts, and one for the bin edges. If I use the histogram to represent a probability distribution function, how can I efficiently generate random numbers from that distribution?","response":"It's probably what np.random.choice does in @Ophion's answer, but you can construct a normalized cumulative density function, then choose based on a uniform random number: \n```\nfrom __future__ import division\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndata = np.random.normal(size=1000)\nhist, bins = np.histogram(data, bins=50)\n\nbin_midpoints = bins[:-1] + np.diff(bins)\/2\ncdf = np.cumsum(hist)\ncdf = cdf \/ cdf[-1]\nvalues = np.random.rand(10000)\nvalue_bins = np.searchsorted(cdf, values)\nrandom_from_cdf = bin_midpoints[value_bins]\n\nplt.subplot(121)\nplt.hist(data, 50)\nplt.subplot(122)\nplt.hist(random_from_cdf, 50)\nplt.show()\n```\nA 2D case can be done as follows: \n```\ndata = np.column_stack((np.random.normal(scale=10, size=1000),\n np.random.normal(scale=20, size=1000)))\nx, y = data.T \nhist, x_bins, y_bins = np.histogram2d(x, y, bins=(50, 50))\nx_bin_midpoints = x_bins[:-1] + np.diff(x_bins)\/2\ny_bin_midpoints = y_bins[:-1] + np.diff(y_bins)\/2\ncdf = np.cumsum(hist.ravel())\ncdf = cdf \/ cdf[-1]\n\nvalues = np.random.rand(10000)\nvalue_bins = np.searchsorted(cdf, values)\nx_idx, y_idx = np.unravel_index(value_bins,\n (len(x_bin_midpoints),\n len(y_bin_midpoints)))\nrandom_from_cdf = np.column_stack((x_bin_midpoints[x_idx],\n y_bin_midpoints[y_idx]))\nnew_x, new_y = random_from_cdf.T\n\nplt.subplot(121, aspect='equal')\nplt.hist2d(x, y, bins=(50, 50))\nplt.subplot(122, aspect='equal')\nplt.hist2d(new_x, new_y, bins=(50, 50))\nplt.show()\n```","best_answers_score":0.7918,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17821458\/random-number-from-histogram","best_answers_votes":50,"tags":["python","numpy","scipy","montecarlo"],"question_length":292,"response_length":1587,"tag_count":4} {"question":"Compare (assert equality of) two complex data structures containing numpy arrays in unittest I use Python's unittest module and want to check if two complex data structures are equal. The objects can be lists of dicts with all sorts of values: numbers, strings, Python containers (lists\/tuples\/dicts) and numpy arrays. 
The latter are the reason for asking the question, because I cannot just do \n```\nself.assertEqual(big_struct1, big_struct2)\n```\nbecause it produces a \n```none\nValueError: The truth value of an array with more than one element is ambiguous.\nUse a.any() or a.all()\n```\nI imagine that I need to write my own equality test for this. It should work for arbitrary structures. My current idea is a recursive function that: tries direct comparison of the current \"node\" of arg1 to the corresponding node of arg2; if no exception is raised, moves on (\"terminal\" nodes\/leaves are processed here, too); if ValueError is caught, goes deeper until it finds a numpy.array; compares the arrays (e.g. like this). What seems a little problematic is keeping track of \"corresponding\" nodes of two structures, but perhaps zip is all I need here. The question is: are there good (simpler) alternatives to this approach? Maybe numpy presents some tools for this? If no alternatives are suggested, I will implement this idea (unless I have a better one) and post as an answer. P.S. I have a vague feeling that I might have seen a question addressing this problem, but I can't find it now. P.P.S. An alternative approach would be a function that traverses the structure and converts all numpy.arrays to lists, but is this any easier to implement? Seems the same to me. Edit: Subclassing numpy.ndarray sounds very promising, but obviously I don't have both sides of the comparison hard-coded into a test. One of them, though, is indeed hardcoded, so I can: populate it with custom subclasses of numpy.array; change isinstance(other, SaneEqualityArray) to isinstance(other, np.ndarray) in jterrace's answer; always use it as LHS in comparisons. My questions in this regard are: Will it work (I mean, it sounds all right to me, but maybe some tricky edge cases will not be handled correctly)? Will my custom object always end up as LHS in the recursive equality checks, as I expect? Again, are there better ways (given that I get at least one of the structures with real numpy arrays). Edit 2: I tried it out, the (seemingly) working implementation is shown in this answer.","response":"Would have commented, but it gets too long... Fun fact: you cannot use == to test if arrays are the same. I would suggest you use np.testing.assert_array_equal instead. That checks dtype, shape, etc., and it doesn't fail for the neat little math of (float('nan') == float('nan')) == False (normal python sequence == has an even more fun way of ignoring this sometimes, because it uses PyObject_RichCompareBool, which does a quick is check that is incorrect for NaNs (for testing of course that is perfect)... There is also assert_allclose because floating point equality can get very tricky if you do actual calculations and you usually want almost the same values, since the values can become hardware dependent or possibly random depending on what you do with them. 
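As a tiny sketch of both helpers (the arrays here are my own illustration; note that assert_array_equal treats NaNs in matching positions as equal, unlike plain ==): \n```\nimport numpy as np\n\na = np.array([1.0, np.nan])\nb = np.array([1.0, np.nan])\nnp.testing.assert_array_equal(a, b)  # passes, NaN positions match\nnp.testing.assert_allclose(np.array([1.0]), np.array([1.0 + 1e-9]), rtol=1e-7)  # passes, close enough\n```\n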
I would almost suggest trying serializing it with pickle if you want something this insanely nested, but that is overly strict (and point 3 is of course fully broken then), for example the memory layout of your array does not matter, but matters to its serialization.","best_answers_score":0.7918,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14246983\/compare-assert-equality-of-two-complex-data-structures-containing-numpy-arrays","best_answers_votes":14,"tags":["python","unit-testing","numpy"],"question_length":2465,"response_length":1021,"tag_count":3} {"question":"How to get the indices list of all NaN value in numpy array? Say now I have a numpy array which is defined as, \n```\n[[1,2,3,4],\n[2,3,NaN,5],\n[NaN,5,2,3]]\n```\nNow I want to have a list that contains all the indices of the missing values, which is [(1,2),(2,0)] at this case. Is there any way I can do that?","response":"np.isnan combined with np.argwhere \n```\nx = np.array([[1,2,3,4],\n [2,3,np.nan,5],\n [np.nan,5,2,3]])\nnp.argwhere(np.isnan(x))\n```\noutput: \n```\narray([[1, 2],\n [2, 0]])\n```","best_answers_score":0.7907,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37754948\/how-to-get-the-indices-list-of-all-nan-value-in-numpy-array","best_answers_votes":234,"tags":["python","numpy","scipy"],"question_length":305,"response_length":202,"tag_count":3} {"question":"load csv into 2D matrix with numpy for plotting Given this CSV file: \n```\n\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"timestamp\"\n611.88243,9089.5601,5133.0,864.07514,1715.37476,765.22777,1.291111964948E12\n611.88243,9089.5601,5133.0,864.07514,1715.37476,765.22777,1.291113113366E12\n611.88243,9089.5601,5133.0,864.07514,1715.37476,765.22777,1.291120650486E12\n```\nI simply want to load it as a matrix\/ndarray with 3 rows and 7 columns. However, for some reason, all I can get out of numpy is an ndarray with 3 rows (one per line) and no columns. \n```\nr = np.genfromtxt(fname,delimiter=',',dtype=None, names=True)\nprint r\nprint r.shape\n\n[ (611.88243, 9089.5601000000006, 5133.0, 864.07514000000003, 1715.3747599999999, 765.22776999999996, 1291111964948.0)\n (611.88243, 9089.5601000000006, 5133.0, 864.07514000000003, 1715.3747599999999, 765.22776999999996, 1291113113366.0)\n (611.88243, 9089.5601000000006, 5133.0, 864.07514000000003, 1715.3747599999999, 765.22776999999996, 1291120650486.0)]\n(3,)\n```\nI can manually iterate and hack it into the shape I want, but this seems silly. I just want to load it as a proper matrix so I can slice it across different dimensions and plot it, just like in matlab.","response":"Pure numpy \n```\nnumpy.loadtxt(open(\"test.csv\", \"rb\"), delimiter=\",\", skiprows=1)\n```\nCheck out the loadtxt documentation. You can also use python's csv module: \n```\nimport csv\nimport numpy\nreader = csv.reader(open(\"test.csv\", \"rb\"), delimiter=\",\")\nx = list(reader)\nresult = numpy.array(x).astype(\"float\")\n```\nYou will have to convert it to your favorite numeric type. 
I guess you can write the whole thing in one line: \n```\nresult = numpy.array(list(csv.reader(open(\"test.csv\", \"rb\"), delimiter=\",\"))).astype(\"float\")\n```\nAdded Hint: You could also use pandas.io.parsers.read_csv and get the associated numpy array which can be faster.","best_answers_score":0.7891,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4315506\/load-csv-into-2d-matrix-with-numpy-for-plotting","best_answers_votes":169,"tags":["python","arrays","csv","numpy","reshape"],"question_length":1180,"response_length":635,"tag_count":5} {"question":"Convert an image RGB->Lab with python What is the preferred way of doing the conversion using PIL\/Numpy\/SciPy today?","response":"Since 2010 when the linked question was asked the corresponding code moved from scipy to a separate toolkit: http:\/\/scikit-image.org\/ So here's the code I was actually looking for: \n```\nfrom skimage import io, color\nrgb = io.imread(filename)\nlab = color.rgb2lab(rgb)\n```\nIt should also be noted that due to Lab nature srgb->lab conversion depends on an additional parameter: whitepoint, eg: \u2022 Photoshop uses a white point called D50 (which is a standard for icc) \u2022 OpenCV and skimage use D65 (which is a standard for srgb). \u2022 default Matlab implementation uses D50 (it is capable of using others), This nice FAQ explains it this way: You should use D65 unless you have a good reason to use something else. The print industry commonly uses D50 and photography commonly uses D55. These represent compromises between the conditions of indoor (tungsten) and daylight viewing. You can tell which whitepoint you're dealing with by converting RGB (0,0,255) to Lab: \u2022 D50 would give you (30, 68, -112) \u2022 D55 (30, 73, -110) \u2022 D65 (32, 79, -108) The numbers after 'D' correspond to (internally) used color temperature of white point: D50 = 5003 K (yellowish), D65 = 6504 K (blueish) I'm grateful to Alex and Roman for their answers because they pointed me into the right direction.","best_answers_score":0.789,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13405956\/convert-an-image-rgb-lab-with-python","best_answers_votes":82,"tags":["python","numpy","scipy","python-imaging-library","color-space"],"question_length":116,"response_length":1271,"tag_count":5} {"question":"Find indices of elements equal to zero in a NumPy array NumPy has the efficient function\/method nonzero() to identify the indices of non-zero elements in an ndarray object. What is the most efficient way to obtain the indices of the elements that do have a value of zero?","response":"numpy.where() is my favorite. \n```\n\n```python\nx = numpy.array([1,0,2,0,3,0,4,5,6,7,8])\n```\n\n```python\nnumpy.where(x == 0)[0]\n#Output\n#array([1, 3, 5])\n#```\n#The method where returns a tuple of ndarrays, each corresponding to a different dimension of the input. Since the input is one-dimensional, the [0] unboxes the tuple's only element.\n```","best_answers_score":0.7885,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4588628\/find-indices-of-elements-equal-to-zero-in-a-numpy-array","best_answers_votes":312,"tags":["python","numpy"],"question_length":271,"response_length":309,"tag_count":2} {"question":"How do I check which version of NumPy I'm using? 
How can I check which version of NumPy I'm using?","response":"```\nimport numpy\nnumpy.version.version\n```","best_answers_score":0.7878,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/1520234\/how-do-i-check-which-version-of-numpy-im-using","best_answers_votes":493,"tags":["python","numpy","version"],"question_length":98,"response_length":42,"tag_count":3} {"question":"How do I create character arrays in numpy? Say I have the following array: \n```\nimport numpy as np\na = ['hello','snake','plate']\n```\nI want this to turn into a numpy array b so that: \n```\nb[0,0] = 'h'\nb[0,1] = 'e'\nb[0,2] = 'l'\nb[1,0] = 's'\n...\n```\nI want the standard numpy tricks to work, like broadcasting, comparison, etc. How is it done? And where is this in the numpy documentation? Thanks! Uri","response":"Actually, you can do this without any copies or list comprehensions in numpy (caveats about non-equal-length strings aside...). Just view it as a 1 character string array and reshape it: \n```\nimport numpy as np\n\nx = np.array(['hello','snake','plate'], dtype=str)\ny = x.view('S1').reshape((x.size, -1))\n\nprint repr(y)\n```\nThis yields: \n```\narray([['h', 'e', 'l', 'l', 'o'],\n ['s', 'n', 'a', 'k', 'e'],\n ['p', 'l', 'a', 't', 'e']], \n dtype='|S1')\n```\nGenerally speaking, though, I'd avoid using numpy arrays to store strings in most cases. There are cases where it's useful, but you're usually better off sticking to data structures that allow variable-length strings for, well, holding strings.","best_answers_score":0.7869,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9476797\/how-do-i-create-character-arrays-in-numpy","best_answers_votes":27,"tags":["python","string","character-encoding","numpy"],"question_length":399,"response_length":710,"tag_count":4} {"question":"Extract csv file specific columns to list in Python What I'm trying to do is plot the latitude and longitude values of specific storms on a map using matplotlib,basemap,python, etc. My problem is that I'm trying to extract the latitude, longitude, and name of the storms on map but I keep getting errors between lines 41-44 where I try to extract the columns into the list. Here is what the file looks like: \n```\n1957,AUDREY,HU, 21.6N, 93.3W\n1957,AUDREY,HU,22.0N, 93.4W\n1957,AUDREY,HU,22.6N, 93.5W\n1957,AUDREY,HU,23.2N, 93.6W\n```\nI want the list to look like the following: \n```\nlatitude = [21.6N,22.0N,23.4N]\nlongitude = [93.3W, 93.5W,93.8W]\nname = [\"Audrey\",\"Audrey\"]\n```\nHere's what I have so far: \n```\ndata = np.loadtxt('louisianastormb.csv',dtype=np.str,delimiter=',',skiprows=1)\n'''print data'''\n\ndata = np.loadtxt('louisianastormb.csv',dtype=np.str,delimiter=',',skiprows=0)\n\nf= open('louisianastormb.csv', 'rb')\nreader = csv.reader(f, delimiter=',')\nheader = reader.next()\nzipped = zip(*reader)\n\nlatitude = zipped[3]\nlongitude = zipped[4]\nnames = zipped[1]\nx, y = m(longitude, latitude)\n```\nHere's the last error message\/traceback I received: Traceback (most recent call last): File \"\/home\/darealmzd\/lstorms.py\", line 42, in header = reader.next() _csv.Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?","response":"This looks like a problem with line endings in your code. 
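One way to sidestep that (a sketch of mine, assuming Python 3, where the csv docs recommend opening with newline=''): \n```\nimport csv\n\nwith open('louisianastormb.csv', 'r', newline='') as f:\n    reader = csv.reader(f, delimiter=',')\n    header = next(reader)\n    zipped = list(zip(*reader))\n```\n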
If you're going to be using all these other scientific packages, you may as well use Pandas for the CSV reading part, which is both more robust and more useful than just the csv module: \n```\nimport pandas\ncolnames = ['year', 'name', 'city', 'latitude', 'longitude']\ndata = pandas.read_csv('test.csv', names=colnames)\n```\nIf you want your lists as in the question, you can now do: \n```\nnames = data.name.tolist()\nlatitude = data.latitude.tolist()\nlongitude = data.longitude.tolist()\n```","best_answers_score":0.7867,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19486369\/extract-csv-file-specific-columns-to-list-in-python","best_answers_votes":76,"tags":["python","csv","numpy","matplotlib"],"question_length":1370,"response_length":543,"tag_count":4} {"question":"How to check the size of a float in python? I want to check whether a float is actually 32 or 64bits (and the number of bits of a numpy float array). There should be a built-in, but just didn't find out...","response":"Properties of a Python float can be requested via sys.float_info. It returns information such as max\/min value, max\/min exp value, etc. These properties can potentially be used to calculate the byte size of a float. I never encountered anything else than 64 bit, though, on many different architectures. The items of a NumPy array might have different size, but you can check their size in bytes by a.itemsize, where a is a NumPy array.","best_answers_score":0.786,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8216088\/how-to-check-the-size-of-a-float-in-python","best_answers_votes":37,"tags":["python","numpy"],"question_length":205,"response_length":436,"tag_count":2} {"question":"R summary() equivalent in numpy Is there an equivalent of R's summary() function in numpy? numpy has std, mean, average functions separately, but does it have a function that sums up everything, like summary does in R? If found this question which relates to pandas and this article with R-to-numpy equivalents, but it doesn't have what I seek for.","response":"1. Load Pandas in console and load csv data file \n```\nimport pandas as pd\n\ndata = pd.read_csv(\"data.csv\", sep = \",\")\n```\n2. Examine first few rows of data \n```\ndata.head()\n```\n3. Calculate summary statistics \n```\nsummary = data.describe()\n```\n4. Transpose statistics to get similar format as R summary() function \n```\nsummary = summary.transpose()\n```\n5. Visualize summary statistics in console \n```\nsummary.head()\n```","best_answers_score":0.7852,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/33889310\/r-summary-equivalent-in-numpy","best_answers_votes":57,"tags":["python","r","numpy"],"question_length":348,"response_length":418,"tag_count":3} {"question":"Numpy ValueError: setting an array element with a sequence. This message may appear without the existing of a sequence? [duplicate] This question already has answers here: ValueError: setting an array element with a sequence (11 answers) Closed 1 year ago. Why do I get this error message? ValueError: setting an array element with a sequence. 
Thank you \n```\nZ=np.array([1.0,1.0,1.0,1.0]) \n\ndef func(TempLake,Z):\n A=TempLake\n B=Z\n return A*B\n\nNlayers=Z.size\nN=3\nTempLake=np.zeros((N+1,Nlayers))\n\nkOUT=np.zeros(N+1)\nfor i in xrange(N):\n kOUT[i]=func(TempLake[i],Z)\n```","response":"You're getting the error message \n```\nValueError: setting an array element with a sequence.\n```\nbecause you're trying to set an array element with a sequence. I'm not trying to be cute, there -- the error message is trying to tell you exactly what the problem is. Don't think of it as a cryptic error, it's simply a phrase. What line is giving the problem? \n```\nkOUT[i]=func(TempLake[i],Z)\n```\nThis line tries to set the ith element of kOUT to whatever func(TempLAke[i], Z) returns. Looking at the i=0 case: \n```\n\n```python\nkOUT[0]\n#Output\n#0.0\n```\n\n```python\nfunc(TempLake[0], Z)\n#Output\n#array([ 0., 0., 0., 0.])\n#```\n#You're trying to load a 4-element array into kOUT[0] which only has a float. Hence, you're trying to set an array element (the left hand side, kOUT[i]) with a sequence (the right hand side, func(TempLake[i], Z)). Probably func isn't doing what you want, but I'm not sure what you really wanted it to do (and don't forget you can usually use vectorized operations like A*B rather than looping in numpy.) That should explain the problem, anyway.\n```","best_answers_score":0.785,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13310347\/numpy-valueerror-setting-an-array-element-with-a-sequence-this-message-may-app","best_answers_votes":57,"tags":["python","arrays","numpy","element"],"question_length":580,"response_length":1058,"tag_count":4} {"question":"How does the axis parameter from NumPy work? Can someone explain exactly what the axis parameter in NumPy does? I am terribly confused. I'm trying to use the function myArray.sum(axis=num) At first I thought if the array is itself 3 dimensions, axis=0 will return three elements, consisting of the sum of all nested items in that same position. If each dimension contained five dimensions, I expected axis=1 to return a result of five items, and so on. However this is not the case, and the documentation does not do a good job helping me out (they use a 3x3x3 array so it's hard to tell what's happening) Here's what I did: \n```\n\n```python\ne\n#Output\n#array([[[1, 0],\n# [0, 0]],\n```\n\n[[1, 1],\n [1, 0]],\n\n [[1, 0],\n [0, 1]]])\n\n```python\ne.sum(axis = 0)\n#Output\n#array([[3, 1],\n# [1, 1]])\n```\n\n```python\ne.sum(axis=1)\n#Output\n#array([[1, 0],\n# [2, 1],\n# [1, 1]])\n```\n\n```python\ne.sum(axis=2)\n#Output\n#array([[1, 0],\n# [2, 1],\n# [1, 1]])\n#>>>\n#```\n#Clearly the result is not intuitive.\n```","response":"Clearly, \n```\ne.shape == (3, 2, 2)\n```\nSum over an axis is a reduction operation so the specified axis disappears. Hence, \n```\ne.sum(axis=0).shape == (2, 2)\ne.sum(axis=1).shape == (3, 2)\ne.sum(axis=2).shape == (3, 2)\n```\nIntuitively, we are \"squashing\" the array along the chosen axis, and summing the numbers that get squashed together.","best_answers_score":0.7848,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22320534\/how-does-the-axis-parameter-from-numpy-work","best_answers_votes":40,"tags":["python","arrays","numpy","multidimensional-array","numpy-ndarray"],"question_length":961,"response_length":337,"tag_count":5} {"question":"How can I create a slice object for Numpy array? 
I've tried to find a neat solution to this, but I'm slicing several 2D arrays of the same shape in the same manner. I've tidied it up as much as I can by defining a list containing the 'x,y' center e.g. cpix = [161, 134] What I'd like to do is instead of having to write out the slice three times like so: \n```\na1 = array1[cpix[1]-50:cpix[1]+50, cpix[0]-50:cpix[0]+50] \na2 = array2[cpix[1]-50:cpix[1]+50, cpix[0]-50:cpix[0]+50] \na3 = array3[cpix[1]-50:cpix[1]+50, cpix[0]-50:cpix[0]+50]\n```\nis just have something predefined (like maybe a mask?) so I can just do a \n```\na1 = array1[predefined_2dslice] \na2 = array2[predefined_2dslice] \na3 = array3[predefined_2dslice]\n```\nIs this something that numpy supports?","response":"Yes you can use numpy.s_: Example: \n```\n\n```python\na = np.arange(10).reshape(2, 5)\n```\n\n```python\n\n```\n\n```python\nm = np.s_[0:2, 3:4]\n```\n\n```python\n\n```\n\n```python\na[m]\n#Output\n#array([[3],\n# [8]])\n#```\n#And in this case: \n#```\n#my_slice = np.s_[cpix[1]-50:cpix[1]+50, cpix[0]-50:cpix[0]+50]\n```\n\na1 = array1[my_slice] \na2 = array2[my_slice] \na3 = array3[my_slice]\n```\nYou can also use numpy.r_ in order to translates slice objects to concatenation along the first axis.","best_answers_score":0.7843,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38917173\/how-can-i-create-a-slice-object-for-numpy-array","best_answers_votes":50,"tags":["python","arrays","numpy"],"question_length":759,"response_length":408,"tag_count":3} {"question":"Convert a numpy.ndarray to string(or bytes) and convert it back to numpy.ndarray I'm having a little trouble here, I'm trying to convert a numpy.ndarray to string, I've already done that like this: \n```\nrandomArray.tostring()\n```\nIt works, but I'm wondering if I can transform it back to a numpy.ndarray. What's the best way to do this? I'm using numpy 1.8.1 Context: The objective is to send the numpy.ndarray as a message in rabbitmq (pika library)","response":"You can use the fromstring() method for this: \n```\narr = np.array([1, 2, 3, 4, 5, 6])\nts = arr.tostring()\nprint(np.fromstring(ts, dtype=int))\n\n```python\n[1 2 3 4 5 6]\n#Output\n#```\n#Sorry for the short answer, not enough points for commenting. Remember to state the data types or you'll end up in a world of pain. Note on fromstring from numpy 1.14 onwards: sep : str, optional The string separating numbers in the data; extra whitespace between elements is also ignored. Deprecated since version 1.14: Passing sep='', the default, is deprecated since it will trigger the deprecated binary mode of this function. This mode interprets string as binary bytes, rather than ASCII text with decimal numbers, an operation which is better spelt frombuffer(string, dtype, count). If string contains unicode text, the binary mode of fromstring will first encode it into bytes using either utf-8 (python 3) or the default encoding (python 2), neither of which produce sane results.\n```","best_answers_score":0.7838,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30167538\/convert-a-numpy-ndarray-to-stringor-bytes-and-convert-it-back-to-numpy-ndarray","best_answers_votes":55,"tags":["python","numpy","multidimensional-array"],"question_length":450,"response_length":954,"tag_count":3} {"question":"numpy: what is the logic of the argmin() and argmax() functions? I can not understand the output of argmax and argmin when use with the axis parameter. 
For example: \n```\n\n```python\na = np.array([[1,2,4,7], [9,88,6,45], [9,76,3,4]])\n```\n\n```python\na\n#Output\n#array([[ 1, 2, 4, 7],\n# [ 9, 88, 6, 45],\n# [ 9, 76, 3, 4]])\n```\n\n```python\na.shape\n#Output\n#(3, 4)\n```\n\n```python\na.size\n#Output\n#12\n```\n\n```python\nnp.argmax(a)\n#Output\n#5\n```\n\n```python\nnp.argmax(a,axis=0)\n#Output\n#array([1, 1, 1, 1])\n```\n\n```python\nnp.argmax(a,axis=1)\n#Output\n#array([3, 1, 1])\n```\n\n```python\nnp.argmin(a)\n#Output\n#0\n```\n\n```python\nnp.argmin(a,axis=0)\n#Output\n#array([0, 0, 2, 2])\n```\n\n```python\nnp.argmin(a,axis=1)\n#Output\n#array([0, 2, 2])\n#```\n#As you can see, the maximum value is the point (1,1) and the minimum one is the point (0,0). So in my logic when I run: np.argmin(a,axis=0) I expected array([0,0,0,0]) np.argmin(a,axis=1) I expected array([0,0,0]) np.argmax(a,axis=0) I expected array([1,1,1,1]) np.argmax(a,axis=1) I expected array([1,1,1]) What is wrong with my understanding of things?\n```","response":"By adding the axis argument, NumPy looks at the rows and columns individually. When it's not given, the array a is flattened into a single 1D array. axis=0 means that the operation is performed down the columns of a 2D array a in turn. For example, np.argmin(a, axis=0) returns the index of the minimum value in each of the four columns. The minimum value in each column (1, 2, 3 and 4, respectively) is shown below: \n```\n\n```python\na\n#Output\n#array([[ 1, 2, 4, 7], # 0\n# [ 9, 88, 6, 45], # 1\n# [ 9, 76, 3, 4]]) # 2\n```\n\n```python\nnp.argmin(a, axis=0)\n#Output\n#array([0, 0, 2, 2])\n#```\n#On the other hand, axis=1 means that the operation is performed across the rows of a. That means np.argmin(a, axis=1) returns [0, 2, 2] because a has three rows. The index of the minimum value in the first row is 0, the index of the minimum value of the second and third rows is 2: \n#```\n```\n\n```python\na\n#Output\n## 0 1 2 3\n#array([[ 1, 2, 4, 7],\n# [ 9, 88, 6, 45],\n# [ 9, 76, 3, 4]])\n```\n\n```python\nnp.argmin(a, axis=1)\n#Output\n#array([0, 2, 2])\n#```\n```","best_answers_score":0.7834,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28697993\/numpy-what-is-the-logic-of-the-argmin-and-argmax-functions","best_answers_votes":55,"tags":["python","arrays","numpy","argmax"],"question_length":906,"response_length":997,"tag_count":4} {"question":"cv2.imread: checking if image is being read I'm writing an OpenCV program in python, and at some point I have something like \n```\nimport cv2\nimport numpy as np\n... \nimg = cv2.imread(\"myImage.jpg\")\n\n# do stuff with image here\n```\nThe problem is that I have to detect if the image file is being correctly read before continuing. cv2.imread returns False if not able to open the image, so I think of doing something like: \n```\nif (img):\n #continue doing stuff\n```\nWhat happens is that if the image is not opened (e.g. if the file does not exist) img is equal to None (as expected). However, when imread works, the condition breaks: \n```\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\n```\ni.e. the returned numpy.ndarray cannot be used as a boolean. The problem seems to be that imread returns numpy.ndarray if success and False (boolean) otherwise. 
My solution so far involves using the type of the returned value as follows: \n```\nif (type(img) is np.ndarray): \n #do stuff with image\n```\nBut I was wondering: isn't there a nicer solution, closer to the initial check if(img): #do stuff ?","response":"If you're sure that the value of img is None in your case, you can simply use if not img is None, or, equivalently, if img is not None. You don't need to check the type explicitly. Note that None and False are not the same value. However, bool(None)==False, which is why if None fails. The documentation for imread, both for OpenCV 2 and 3, states, however, that an empty matrix should be returned on error. You can check for that using if img.size == 0","best_answers_score":0.7827,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23628325\/cv2-imread-checking-if-image-is-being-read","best_answers_votes":64,"tags":["python","opencv","numpy"],"question_length":1148,"response_length":451,"tag_count":3} {"question":"How to rearrange array based upon index array I'm looking for a one line solution that would help me do the following. Suppose I have \n```\narray = np.array([10, 20, 30, 40, 50])\n```\nI'd like to rearrange it based upon an input ordering. If there were a numpy function called arrange, it would do the following: \n```\nnewarray = np.arrange(array, [1, 0, 3, 4, 2])\nprint newarray\n\n [20, 10, 40, 50, 30]\n```\nFormally, if the array to be reordered is m x n, and the \"index\" array is 1 x n, the ordering would be determined by the array called \"index\". Does numpy have a function like this?","response":"You can simply use your \"index\" list directly, as, well, an index array: \n```\n\n```python\narr = np.array([10, 20, 30, 40, 50])\n```\n\n```python\nidx = [1, 0, 3, 4, 2]\n```\n\n```python\narr[idx]\n#Output\n#array([20, 10, 40, 50, 30])\n#```\n#It tends to be much faster if idx is already an ndarray and not a list, even though it'll work either way: \n#```\n```\n\n```python\n%timeit arr[idx]\n#Output\n#100000 loops, best of 3: 2.11 \u00b5s per loop\n```\n\n```python\nai = np.array(idx)\n```\n\n```python\n%timeit arr[ai]\n#Output\n#1000000 loops, best of 3: 296 ns per loop\n#```\n```","best_answers_score":0.7821,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26194389\/how-to-rearrange-array-based-upon-index-array","best_answers_votes":98,"tags":["python","arrays","numpy"],"question_length":587,"response_length":453,"tag_count":3} {"question":"Can anybody explain me the numpy.indices()? I've read the documentation several times about np.indices() but I can't seem to grasp what it is about. I've used it numerous times on things to see what it does, but I still can't really get it. Maybe the thing is I'm a beginner in programming so I can't understand the idea behind the words describing it. In addition I'm not a native English speaker (though I have no problems with it). I would be very grateful for a simpler explanation, possibly with an example. Thanks.","response":"Suppose you have a matrix M whose (i,j)-th element equals \n```\nM_ij = 2*i + 3*j\n```\nOne way to define this matrix would be \n```\ni, j = np.indices((2,3))\nM = 2*i + 3*j\n```\nwhich yields \n```\narray([[0, 3, 6],\n [2, 5, 8]])\n```\nIn other words, np.indices returns arrays which can be used as indices. 
The elements in i indicate the row index: \n```\n\n```python\ni\n#Output\n#array([[0, 0, 0],\n# [1, 1, 1]])\n#```\n#The elements in j indicate the column index: \n#```\n```\n\n```python\nj\n#Output\n#array([[0, 1, 2],\n# [0, 1, 2]])\n#```\n```","best_answers_score":0.7821,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32271331\/can-anybody-explain-me-the-numpy-indices","best_answers_votes":31,"tags":["python","numpy","indices"],"question_length":521,"response_length":522,"tag_count":3} {"question":"Numerically stable softmax Is there a numerically stable way to compute the softmax function below? I am getting values that become NaNs in Neural network code. \n```\nnp.exp(x)\/np.sum(np.exp(y))\n```","response":"The softmax exp(x)\/sum(exp(x)) is actually numerically well-behaved. It has only positive terms, so we needn't worry about loss of significance, and the denominator is at least as large as the numerator, so the result is guaranteed to fall between 0 and 1. The only accident that might happen is over- or under-flow in the exponentials. Overflow of a single or underflow of all elements of x will render the output more or less useless. But it is easy to guard against that by using the identity softmax(x) = softmax(x + c) which holds for any scalar c: Subtracting max(x) from x leaves a vector that has only non-positive entries, ruling out overflow, and at least one element that is zero, ruling out a vanishing denominator (underflow in some but not all entries is harmless). Footnote: theoretically, catastrophic accidents in the sum are possible, but you'd need a ridiculous number of terms. For example, even using 16 bit floats which can only resolve 3 decimals---compared to 15 decimals of a \"normal\" 64 bit float---we'd need between 2^1431 (~6 x 10^431) and 2^1432 to get a sum that is off by a factor of two.","best_answers_score":0.782,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/42599498\/numerically-stable-softmax","best_answers_votes":89,"tags":["python","numpy","nan","scientific-computing","softmax"],"question_length":194,"response_length":1117,"tag_count":5} {"question":"Python baseline correction library I am currently working with some Raman Spectra data, and I am trying to correct my data caused by fluorescence skewing. Take a look at the graph below: I am pretty close to achieving what I want. As you can see, I am trying to fit a polynomial in all my data whereas I should really just be fitting a polynomial at the local minima. Ideally I would want to have a polynomial fitting which when subtracted from my original data would result in something like this: Are there any built-in libs that do this already? If not, is there any simple algorithm one can recommend to me?","response":"I found an answer to my question, just sharing for everyone who stumbles upon this. There is an algorithm called \"Asymmetric Least Squares Smoothing\" by P. Eilers and H. Boelens in 2005. The paper is free and you can find it on google. 
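A sketch of that algorithm in Python follows (it relies on numpy plus scipy.sparse and scipy.sparse.linalg.spsolve for the sparse solve): 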
\n```\nimport numpy as np\nfrom scipy import sparse\nfrom scipy.sparse.linalg import spsolve\n\ndef baseline_als(y, lam, p, niter=10):\n    L = len(y)\n    # second-order difference matrix for the smoothness penalty\n    D = sparse.csc_matrix(np.diff(np.eye(L), 2))\n    w = np.ones(L)\n    for i in range(niter):\n        W = sparse.spdiags(w, 0, L, L)\n        Z = W + lam * D.dot(D.transpose())\n        z = spsolve(Z, w*y)\n        w = p * (y > z) + (1-p) * (y < z)\n    return z\n```","best_answers_score":0.782,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29156532\/python-baseline-correction-library","best_answers_votes":43,"tags":["python","numpy","scipy","signal-processing"],"question_length":605,"response_length":533,"tag_count":4} {"question":"numpy.max or max ? Which one is faster? In python, which one is faster ? \n```\nnumpy.max(), numpy.min()\n```\nor \n```\nmax(), min()\n```\nMy list\/array length varies from 2 to 600. Which one should I use to save some run time ?","response":"Well, from my timings it follows that if you already have a numpy array a, you should use a.max (the source tells us it's the same as np.max if a.max is available). But if you have a built-in list, then most of the time is taken converting it into np.ndarray => that's why max is better in your timings. In essence: if np.ndarray then a.max, if list and no need for all the machinery of np.ndarray then standard max.","best_answers_score":0.7817,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10943088\/numpy-max-or-max-which-one-is-faster","best_answers_votes":64,"tags":["python","numpy","runtime","max","min"],"question_length":221,"response_length":395,"tag_count":5} {"question":"Python - byte image to NumPy array using OpenCV I have an image in bytes: print(image_bytes) b'\\xff\\xd8\\xff\\xfe\\x00\\x10Lavc57.64.101\\x00\\xff\\xdb\\x00C\\x00\\x08\\x04\\x04\\x04\\x04\\x04\\x05\\x05\\x05\\x05\\x05\\x05\\x06\\x06\\x06\\x06\\x06\\x06\\x06\\x06\\x06\\x06\\x06\\x06\\x06\\x07\\x07\\x07\\x08\\x08\\x08\\x07\\x07\\x07\\x06\\x06\\x07\\x07\\x08\\x08\\x08\\x08\\t\\t\\t\\x08\\x08\\x08\\x08\\t\\t\\n\\n\\n\\x0c\\x0c\\x0b\\x0b\\x0e\\x0e\\x0e\\x11\\x11\\x14\\xff\\xc4\\x01\\xa2\\x00\\x00\\x01\\x05\\x01\\x01\\x01\\x01\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\t\\n\\x0b\\x01\\x00\\x03\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x00\\x00\\ ... some other stuff I am able to convert it to a NumPy array using Pillow: \n```\nimage = numpy.array(Image.open(io.BytesIO(image_bytes)))\n```\nBut I don't really like using Pillow. Is there a way to use clear OpenCV, or directly NumPy even better, or some other faster library?","response":"I created a 2x2 JPEG image to test this. The image has white, red, green and purple pixels. I used cv2.imdecode and numpy.frombuffer: \n```\nimport cv2\nimport numpy as np\n\nf = open('image.jpg', 'rb')\nimage_bytes = f.read() # b'\\xff\\xd8\\xff\\xe0\\x00\\x10...'\n\ndecoded = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), -1)\n\nprint('OpenCV:\\n', decoded)\n\n# your Pillow code\nimport io\nfrom PIL import Image\nimage = np.array(Image.open(io.BytesIO(image_bytes))) \nprint('PIL:\\n', image)\n```\nThis seems to work, although the channel order is BGR and not RGB as in PIL.Image. There are probably some flags you might use to tune this. 
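For example, if you need the same RGB order as PIL, one small sketch is an explicit conversion of the decoded array from above: \n```\nrgb = cv2.cvtColor(decoded, cv2.COLOR_BGR2RGB)\n```\n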
Test results: \n```\nOpenCV:\n [[[255 254 255]\n [ 0 0 254]]\n\n [[ 1 255 0]\n [254 0 255]]]\nPIL:\n [[[255 254 255]\n [254 0 0]]\n\n [[ 0 255 1]\n [255 0 254]]]\n```","best_answers_score":0.7816,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/49511753\/python-byte-image-to-numpy-array-using-opencv","best_answers_votes":47,"tags":["python","python-3.x","numpy","opencv"],"question_length":855,"response_length":797,"tag_count":4} {"question":"How to convert an image from np.uint16 to np.uint8? I am creating an image so: \n```\nimage = np.empty(shape=(height, width, 1), dtype = np.uint16)\n```\nAfter that I convert the image to BGR model: \n```\nimage = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)\n```\nI'd now like to convert the image to dtype = np.uint8 in order to use that image with the cv2.threshold() function. I mean, I would like to convert the image to CV_8UC1.","response":"You can use cv2.convertScaleAbs for this problem. See the Documentation. Check out the command terminal demo below: \n```\n\n```python\nimg = np.empty((100,100,1),dtype = np.uint16)\n```\n\n```python\nimage = cv2.cvtColor(img,cv2.COLOR_GRAY2BGR)\n```\n\n```python\ncvuint8 = cv2.convertScaleAbs(image)\n```\n\n```python\ncvuint8.dtype\n#Output\n#dtype('uint8')\n#```\n#Hope it helps!!!\n```","best_answers_score":0.7813,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11337499\/how-to-convert-an-image-from-np-uint16-to-np-uint8","best_answers_votes":39,"tags":["python","opencv","numpy"],"question_length":420,"response_length":317,"tag_count":3} {"question":"Faster numpy cartesian to spherical coordinate conversion? I have an array of 3 million data points from a 3-axis accelerometer (XYZ), and I want to add 3 columns to the array containing the equivalent spherical coordinates (r, theta, phi). The following code works, but seems way too slow. How can I do better? \n```\nimport numpy as np\nimport math as m\n\ndef cart2sph(x,y,z):\n XsqPlusYsq = x**2 + y**2\n r = m.sqrt(XsqPlusYsq + z**2) # r\n elev = m.atan2(z,m.sqrt(XsqPlusYsq)) # theta\n az = m.atan2(y,x) # phi\n return r, elev, az\n\ndef cart2sphA(pts):\n return np.array([cart2sph(x,y,z) for x,y,z in pts])\n\ndef appendSpherical(xyz):\n np.hstack((xyz, cart2sphA(xyz)))\n```","response":"This is similar to Justin Peel's answer, but using just numpy and taking advantage of its built-in vectorization: \n```\nimport numpy as np\n\ndef appendSpherical_np(xyz):\n ptsnew = np.hstack((xyz, np.zeros(xyz.shape)))\n xy = xyz[:,0]**2 + xyz[:,1]**2\n ptsnew[:,3] = np.sqrt(xy + xyz[:,2]**2)\n ptsnew[:,4] = np.arctan2(np.sqrt(xy), xyz[:,2]) # for elevation angle defined from Z-axis down\n #ptsnew[:,4] = np.arctan2(xyz[:,2], np.sqrt(xy)) # for elevation angle defined from XY-plane up\n ptsnew[:,5] = np.arctan2(xyz[:,1], xyz[:,0])\n return ptsnew\n```\nNote that, as suggested in the comments, I've changed the definition of elevation angle from your original function. On my machine, testing with pts = np.random.rand(3000000, 3), the time went from 76 seconds to 3.3 seconds. I don't have Cython so I wasn't able to compare the timing with that solution.","best_answers_score":0.7812,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4116658\/faster-numpy-cartesian-to-spherical-coordinate-conversion","best_answers_votes":46,"tags":["python","numpy","coordinate"],"question_length":731,"response_length":871,"tag_count":3} {"question":"How can I tell if NumPy creates a view or a copy? 
For a minimal working example, let's digitize a 2D array. numpy.digitize requires a 1D array: \n```\nimport numpy as np\nN = 200\nA = np.random.random((N, N))\nX = np.linspace(0, 1, 20)\nprint(np.digitize(A.ravel(), X).reshape((N, N)))\n```\nNow the documentation says: ... A copy is made only if needed. How do I know if the copy made by ravel is \"needed\" in this case? In general - is there a way I can determine if a particular operation creates a copy or a view?","response":"This question is very similar to a question that I asked a while back. You can check the base attribute. \n```\na = np.arange(50)\nb = a.reshape((5, 10))\nprint(b.base is a)\n```\nHowever, that's not perfect. You can also check to see if they share memory using np.may_share_memory. \n```\nprint(np.may_share_memory(a, b))\n```\nThere's also the flags attribute that you can check: \n```\nprint(b.flags['OWNDATA'])  # False -- apparently this is a view\ne = np.ravel(b[:, 2])\nprint(e.flags['OWNDATA'])  # True -- apparently this is a new numpy object\n```\nBut this last one seems a little fishy to me, although I can't quite put my finger on why...","best_answers_score":0.7809,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11524664\/how-can-i-tell-if-numpy-creates-a-view-or-a-copy","best_answers_votes":95,"tags":["python","numpy","copy"],"question_length":502,"response_length":637,"tag_count":3} {"question":"Output different precision by column with pandas.DataFrame.to_csv()? Question Is it possible to specify a float precision specifically for each column to be printed by the Python pandas package method pandas.DataFrame.to_csv? Background If I have a pandas dataframe that is arranged like this: \n```\n\n```python\ndf_data[:5]\n#Output\n#year month day lats lons vals\n#0 2012 6 16 81.862745 -29.834254 0.0\n#1 2012 6 16 81.862745 -29.502762 0.1\n#2 2012 6 16 81.862745 -29.171271 0.0\n#3 2012 6 16 81.862745 -28.839779 0.2\n#4 2012 6 16 81.862745 -28.508287 0.0\n#```\n#There is the float_format option that can be used to specify a precision, but this applies that precision to all columns of the dataframe when printed. When I use that like so: \n#```\n#df_data.to_csv(outfile, index=False,\n# header=False, float_format='%11.6f')\n#```\n#I get the following, where vals is given an inaccurate precision: \n#```\n#2012,6,16, 81.862745, -29.834254, 0.000000\n#2012,6,16, 81.862745, -29.502762, 0.100000\n#2012,6,16, 81.862745, -29.171270, 0.000000\n#2012,6,16, 81.862745, -28.839779, 0.200000\n#2012,6,16, 81.862745, -28.508287, 0.000000\n#```\n```","response":"Change the type of column \"vals\" prior to exporting the data frame to a CSV file: \n```\ndf_data['vals'] = df_data['vals'].map(lambda x: '%2.1f' % x)\n\ndf_data.to_csv(outfile, index=False, header=False, float_format='%11.6f')\n```","best_answers_score":0.7795,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20003290\/output-different-precision-by-column-with-pandas-dataframe-to-csv","best_answers_votes":50,"tags":["python","csv","numpy","floating-point","pandas"],"question_length":1210,"response_length":225,"tag_count":5} {"question":"crop center portion of a numpy image Let's say I have a numpy image of some width x and height y. I have to crop the center portion of the image to width cropx and height cropy. Let's assume that cropx and cropy are positive non-zero integers and less than the respective image size. 
What's the best way to apply the slicing for the output image?","response":"Something along these lines - \n```\ndef crop_center(img,cropx,cropy):\n y,x = img.shape\n startx = x\/\/2-(cropx\/\/2)\n starty = y\/\/2-(cropy\/\/2) \n return img[starty:starty+cropy,startx:startx+cropx]\n```\nSample run - \n```\n\n```python\nimg\n#Output\n#array([[88, 93, 42, 25, 36, 14, 59, 46, 77, 13, 52, 58],\n# [43, 47, 40, 48, 23, 74, 12, 33, 58, 93, 87, 87],\n# [54, 75, 79, 21, 15, 44, 51, 68, 28, 94, 78, 48],\n# [57, 46, 14, 98, 43, 76, 86, 56, 86, 88, 96, 49],\n# [52, 83, 13, 18, 40, 33, 11, 87, 38, 74, 23, 88],\n# [81, 28, 86, 89, 16, 28, 66, 67, 80, 23, 95, 98],\n# [46, 30, 18, 31, 73, 15, 90, 77, 71, 57, 61, 78],\n# [33, 58, 20, 11, 80, 25, 96, 80, 27, 40, 66, 92],\n# [13, 59, 77, 53, 91, 16, 47, 79, 33, 78, 25, 66],\n# [22, 80, 40, 24, 17, 85, 20, 70, 81, 68, 50, 80]])\n```\n\n```python\ncrop_center(img,4,6)\n#Output\n#array([[15, 44, 51, 68],\n# [43, 76, 86, 56],\n# [40, 33, 11, 87],\n# [16, 28, 66, 67],\n# [73, 15, 90, 77],\n# [80, 25, 96, 80]])\n#```\n```","best_answers_score":0.7785,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/39382412\/crop-center-portion-of-a-numpy-image","best_answers_votes":70,"tags":["python","image","numpy","image-processing","crop"],"question_length":346,"response_length":1018,"tag_count":5} {"question":"Select multiple ranges of columns in Pandas DataFrame I have to read several files some in Excel format and some in CSV format. Some of the files have hundreds of columns. Is there a way to select several ranges of columns without specifying all the column names or positions? For example something like selecting columns 1 -10, 15, 17 and 50-100: \n```\ndf = df.ix[1:10, 15, 17, 50:100]\n```\nI need to know how to do this both when creating dataframe from Excel files and CSV files and after the data framers created.","response":"use np.r_ \n```\nnp.r_[1:10, 15, 17, 50:100]\n\narray([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 15, 17, 50, 51, 52, 53, 54, 55,\n 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72,\n 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89,\n 90, 91, 92, 93, 94, 95, 96, 97, 98, 99])\n```\nso you can do \n```\ndf.iloc[:, np.r_[1:10, 15, 17, 50:100]]\n```","best_answers_score":0.7779,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/41256648\/select-multiple-ranges-of-columns-in-pandas-dataframe","best_answers_votes":92,"tags":["python","pandas","numpy","dataframe"],"question_length":515,"response_length":383,"tag_count":4} {"question":"Difference between nonzero(a), where(a) and argwhere(a). When to use which? In Numpy, nonzero(a), where(a) and argwhere(a), with a being a numpy array, all seem to return the non-zero indices of the array. What are the differences between these three calls? On argwhere the documentation says: np.argwhere(a) is the same as np.transpose(np.nonzero(a)). Why have a whole function that just transposes the output of nonzero ? When would that be so useful that it deserves a separate function? What about the difference between where(a) and nonzero(a)? Wouldn't they return the exact same result?","response":"nonzero and argwhere both give you information about where in the array the elements are True. 
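As a quick sketch of the difference in return shape (this tiny array a is my own example, not from the original post): \n```\nimport numpy as np\na = np.array([[0, 1],\n              [2, 0]])\nnp.nonzero(a)   # (array([0, 1]), array([1, 0])) - a tuple of index arrays, usable as a[np.nonzero(a)]\nnp.argwhere(a)  # array([[0, 1], [1, 0]]) - one (row, col) pair per nonzero element\n```\n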
where works the same as nonzero in the form you have posted, but it has a second form: \n```\nnp.where(mask,a,b)\n```\nwhich can be roughly thought of as a numpy \"ufunc\" version of the conditional expression: \n```\na[i] if mask[i] else b[i]\n```\n(with appropriate broadcasting of a and b). As far as having both nonzero and argwhere, they're conceptually different. nonzero is structured to return an object which can be used for indexing. This can be lighter-weight than creating an entire boolean mask if the 0's are sparse: \n```\nmask = a == 0 # entire array of bools\nmask = np.nonzero(a)\n```\nNow you can use that mask to index other arrays, etc. However, as it is, it's not very nice conceptually to figure out which indices correspond to 0 elements. That's where argwhere comes in.","best_answers_score":0.7775,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15976697\/difference-between-nonzeroa-wherea-and-argwherea-when-to-use-which","best_answers_votes":25,"tags":["python","numpy"],"question_length":593,"response_length":875,"tag_count":2} {"question":"handling zeros in pandas DataFrames column divisions in Python What's the best way to handle zero denominators when dividing pandas DataFrame columns by each other in Python? for example: \n```\ndf = pandas.DataFrame({\"a\": [1, 2, 0, 1, 5], \"b\": [0, 10, 20, 30, 50]})\ndf.a \/ df.b # yields error\n```\nI'd like the ratios where the denominator is zero to be registered as NA (numpy.nan). How can this be done efficiently in pandas? Casting to float64 does not work at level of columns: \n```\n\n```python\ndf\n#Output\n#a b\n#0 1 0\n#1 2 10\n#2 0 20\n#3 1 30\n#4 5 50\n```\n\n```python\ndf[\"a\"].astype(\"float64\") \/ df[\"b\"].astype(\"float64\")\n...\n\nFloatingPointError: divide by zero encountered in divide\n```\nHow can I do it just for particular columns and not entire df?\n```","response":"You need to work in floats, otherwise you will have integer division, prob not what you want \n```\n\n```python\ndf = pandas.DataFrame({\"a\": [1, 2, 0, 1, 5], \n \"b\": [0, 10, 20, 30, 50]}).astype('float64')\n```\n\n```python\ndf\n#Output\n#a b\n#0 1 0\n#1 2 10\n#2 0 20\n#3 1 30\n#4 5 50\n```\n\n```python\ndf.dtypes\n#Output\n#a float64\n#b float64\n#dtype: object\n#```\n#Here's one way \n#```\n```\n\n```python\nx = df.a\/df.b\n```\n\n```python\nx\n#Output\n#0 inf\n#1 0.200000\n#2 0.000000\n#3 0.033333\n#4 0.100000\n#dtype: float64\n```\n\n```python\nx[np.isinf(x)] = np.nan\n```\n\n```python\nx\n#Output\n#0 NaN\n#1 0.200000\n#2 0.000000\n#3 0.033333\n#4 0.100000\n#dtype: float64\n#```\n#Here's another way \n#```\n```\n\n```python\ndf.a\/df.b.replace({ 0 : np.nan })\n#Output\n#0 NaN\n#1 0.200000\n#2 0.000000\n#3 0.033333\n#4 0.100000\n#dtype: float64\n#```\n```","best_answers_score":0.7774,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16244180\/handling-zeros-in-pandas-dataframes-column-divisions-in-python","best_answers_votes":48,"tags":["python","numpy","pandas","dataframe"],"question_length":754,"response_length":841,"tag_count":4} {"question":"How to calculate the sum of all columns of a 2D numpy array (efficiently) Let's say I have the following 2D numpy array consisting of four rows and three columns: \n```\n\n```python\na = numpy.arange(12).reshape(4,3)\n```\n\n```python\nprint(a)\n#Output\n#[[ 0 1 2]\n# [ 3 4 5]\n# [ 6 7 8]\n# [ 9 10 11]]\n#```\n#What would be an efficient way to generate a 1D array that contains the sum of all columns (like [18, 22, 26])? 
Can this be done without having the need to loop through all columns?\n```","response":"Check out the documentation for numpy.sum, paying particular attention to the axis parameter. To sum over columns: \n```\n\n```python\nimport numpy as np\n```\n\n```python\na = np.arange(12).reshape(4,3)\n```\n\n```python\na.sum(axis=0)\n#Output\n#array([18, 22, 26])\n#```\n#Or, to sum over rows: \n#```\n```\n\n```python\na.sum(axis=1)\n#Output\n#array([ 3, 12, 21, 30])\n#```\n#Other aggregate functions, like numpy.mean, numpy.cumsum and numpy.std, e.g., also take the axis parameter. From the Tentative Numpy Tutorial: Many unary operations, such as computing the sum of all the elements in the array, are implemented as methods of the ndarray class. By default, these operations apply to the array as though it were a list of numbers, regardless of its shape. However, by specifying the axis parameter you can apply an operation along the specified axis of an array:\n```","best_answers_score":0.7771,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13567345\/how-to-calculate-the-sum-of-all-columns-of-a-2d-numpy-array-efficiently","best_answers_votes":160,"tags":["python","numpy"],"question_length":453,"response_length":784,"tag_count":2} {"question":"Efficient & pythonic check for singular matrix Working on some matrix algebra here. Sometimes I need to invert a matrix that may be singular or ill-conditioned. I understand it is pythonic to simply do this: \n```\ntry:\n    i = linalg.inv(x)\nexcept LinAlgError as err:\n    #handle it\n```\nbut am not sure how efficient that is. Wouldn't this be better? \n```\nif linalg.cond(x) < 1\/sys.float_info.epsilon:\n    i = linalg.inv(x)\nelse:\n    #handle it\n```\nDoes numpy.linalg simply perform up front the test I prescribed?","response":"So based on the inputs here, I'm marking my original code block with the explicit test as the solution: \n```\nif linalg.cond(x) < 1\/sys.float_info.epsilon:\n    i = linalg.inv(x)\nelse:\n    #handle it\n```\nSurprisingly, the numpy.linalg.inv function doesn't perform this test. I checked the code and found it goes through all its machinations, then just calls the LAPACK routine - seems quite inefficient. 
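If you want that check packaged as a helper, a minimal sketch might look like this (the name safe_inv and the None return are illustrative, not from the original post): \n```\nimport sys\nfrom numpy import linalg\n\ndef safe_inv(x):\n    # guard against singular or ill-conditioned input before inverting\n    if linalg.cond(x) < 1 \/ sys.float_info.epsilon:\n        return linalg.inv(x)\n    return None  # caller decides how to handle the singular case\n```\n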
Also, I would second a point made by DaveP: that the inverse of a matrix should not be computed unless it's explicitly needed.","best_answers_score":0.777,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13249108\/efficient-pythonic-check-for-singular-matrix","best_answers_votes":28,"tags":["python","numpy","linear-algebra"],"question_length":510,"response_length":526,"tag_count":3} {"question":"Convert list of tuples to structured numpy array I have a list of Num_tuples tuples that all have the same length Dim_tuple \n```\nxlist = [tuple_1, tuple_2, ..., tuple_Num_tuples]\n```\nFor definiteness, let's say Num_tuples=3 and Dim_tuple=2 \n```\nxlist = [(1, 1.1), (2, 1.2), (3, 1.3)]\n```\nI want to convert xlist into a structured numpy array xarr using a user-provided list of column names user_names and a user-provided list of variable types user_types \n```\nuser_names = [name_1, name_2, ..., name_Dim_tuple]\nuser_types = [type_1, type_2, ..., type_Dim_tuple]\n```\nSo in the creation of the numpy array, \n```\ndtype = [(name_1,type_1), (name_2,type_2), ..., (name_Dim_tuple, type_Dim_tuple)]\n```\nIn the case of my toy example desired end product would look something like: \n```\nxarr['name1']=np.array([1,2,3])\nxarr['name2']=np.array([1.1,1.2,1.3])\n```\nHow can I slice xlist to create xarr without any loops?","response":"A list of tuples is the correct way of providing data to a structured array: \n```\n\n```python\nxlist = [(1, 1.1), (2, 1.2), (3, 1.3)]\n```\n\n```python\ndt=np.dtype('int,float')\n```\n\n```python\nnp.array(xlist,dtype=dt)\n#Output\n#array([(1, 1.1), (2, 1.2), (3, 1.3)], \n# dtype=[('f0', '<i4'), ('f1', '<f8')])\n#```\n``` ...} {"question":"How to specify upper and lower limits when using numpy.random.normal ... \n```\n...\n        if (numb > 0 and numb < 1):\n            break\n    return numb\n```\nIt picks a value from a normal distribution, then discards it if it falls outside of the range 0 to 1, but I feel like there must be a better way of doing this.","response":"It sounds like you want a truncated normal distribution. Using scipy, you could use scipy.stats.truncnorm to generate random variates from such a distribution: \n```\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\n\nlower, upper = 3.5, 6\nmu, sigma = 5, 0.7\nX = stats.truncnorm(\n    (lower - mu) \/ sigma, (upper - mu) \/ sigma, loc=mu, scale=sigma)\nN = stats.norm(loc=mu, scale=sigma)\n\nfig, ax = plt.subplots(2, sharex=True)\nax[0].hist(X.rvs(10000), density=True)\nax[1].hist(N.rvs(10000), density=True)\nplt.show()\n```\nThe top figure shows the truncated normal distribution, the lower figure shows the normal distribution with the same mean mu and standard deviation sigma.","best_answers_score":0.7758,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18441779\/how-to-specify-upper-and-lower-limits-when-using-numpy-random-normal","best_answers_votes":72,"tags":["python","numpy","random","scipy","gaussian"],"question_length":708,"response_length":678,"tag_count":5} {"question":"How to calculate rolling \/ moving average using python + NumPy \/ SciPy? There seems to be no function that simply calculates the moving average on numpy\/scipy, leading to convoluted solutions. My question is two-fold: What's the easiest way to (correctly) implement a moving average with numpy? Since this seems non-trivial and error-prone, is there a good reason not to have the batteries included in this case?","response":"A simple way to achieve this is by using np.convolve. The idea behind this is to leverage the way the discrete convolution is computed and use it to return a rolling mean. 
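(As an aside, and separate from the np.convolve approach described next: if you already depend on pandas, it ships a ready-made rolling mean, e.g. pd.Series(x).rolling(w).mean().) 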
This can be done by convolving with a sequence of np.ones of a length equal to the sliding window length we want. In order to do so we could define the following function: \n```\ndef moving_average(x, w):\n    return np.convolve(x, np.ones(w), 'valid') \/ w\n```\nThis function will be taking the convolution of the sequence x and a sequence of ones of length w. Note that the chosen mode is valid so that the convolution product is only given for points where the sequences overlap completely. Some examples: \n```\nx = np.array([5,3,8,10,2,1,5,1,0,2])\n```\nFor a moving average with a window of length 2 we would have: \n```\nmoving_average(x, 2)\n# array([4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ])\n```\nAnd for a window of length 4: \n```\nmoving_average(x, 4)\n# array([6.5 , 5.75, 5.25, 4.5 , 2.25, 1.75, 2. ])\n```\nHow does convolve work? Let's have a more in-depth look at the way the discrete convolution is being computed. The following function aims to replicate the way np.convolve is computing the output values: \n```\ndef mov_avg(x, w):\n    for m in range(len(x)-(w-1)):\n        yield sum(np.ones(w) * x[m:m+w]) \/ w\n```\nWhich, for the same example as above, would also yield: \n```\nlist(mov_avg(x, 2))\n# [4.0, 5.5, 9.0, 6.0, 1.5, 3.0, 3.0, 0.5, 1.0]\n```\nSo what is being done at each step is to take the inner product between the array of ones and the current window. In this case the multiplication by np.ones(w) is superfluous given that we are directly taking the sum of the sequence. Below is an example of how the first outputs are computed so that it is a little clearer. Let's suppose we want a window of w=4: \n```\n[1,1,1,1]\n[5,3,8,10,2,1,5,1,0,2]\n= (1*5 + 1*3 + 1*8 + 1*10) \/ w = 6.5\n```\nAnd the following output would be computed as (the window of ones slides one step to the right): \n```\n  [1,1,1,1]\n[5,3,8,10,2,1,5,1,0,2]\n= (1*3 + 1*8 + 1*10 + 1*2) \/ w = 5.75\n```\nAnd so on, returning a moving average of the sequence once all overlaps have been performed.","best_answers_score":0.7746,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14313510\/how-to-calculate-rolling-moving-average-using-python-numpy-scipy","best_answers_votes":271,"tags":["python","numpy","time-series","moving-average","rolling-computation"],"question_length":412,"response_length":2081,"tag_count":5} {"question":"How can I serialize a numpy array while preserving matrix dimensions? numpy.array.tostring doesn't seem to preserve information about matrix dimensions (see this question), requiring the user to issue a call to numpy.array.reshape. Is there a way to serialize a numpy array to JSON format while preserving this information? Note: The arrays may contain ints, floats or bools. It's reasonable to expect a transposed array. Note 2: this is being done with the intent of passing the numpy array through a Storm topology using streamparse, in case such information ends up being relevant.","response":"pickle.dumps or numpy.save encode all the information needed to reconstruct an arbitrary NumPy array, even in the presence of endianness issues, non-contiguous arrays, or weird structured dtypes. Endianness issues are probably the most important; you don't want array([1]) to suddenly become array([16777216]) because you loaded your array on a big-endian machine. pickle is probably the more convenient option, though save has its own benefits, given in the npy format rationale. I'm giving options for serializing to JSON or a bytestring, because the original questioner needed JSON-serializable output, but most people coming here probably don't. 
The pickle way: \n```\nimport pickle\na = # some NumPy array\n\n# Bytestring option\nserialized = pickle.dumps(a)\ndeserialized_a = pickle.loads(serialized)\n\n# JSON option\n# latin-1 maps byte n to unicode code point n\nserialized_as_json = json.dumps(pickle.dumps(a).decode('latin-1'))\ndeserialized_from_json = pickle.loads(json.loads(serialized_as_json).encode('latin-1'))\n```\nnumpy.save uses a binary format, and it needs to write to a file, but you can get around that with io.BytesIO: \n```\na = # any NumPy array\nmemfile = io.BytesIO()\nnumpy.save(memfile, a)\n\nserialized = memfile.getvalue()\nserialized_as_json = json.dumps(serialized.decode('latin-1'))\n# latin-1 maps byte n to unicode code point n\n```\nAnd to deserialize: \n```\nmemfile = io.BytesIO()\n\n# If you're deserializing from a bytestring:\nmemfile.write(serialized)\n# Or if you're deserializing from JSON:\n# memfile.write(json.loads(serialized_as_json).encode('latin-1'))\nmemfile.seek(0)\na = numpy.load(memfile)\n```","best_answers_score":0.7746,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30698004\/how-can-i-serialize-a-numpy-array-while-preserving-matrix-dimensions","best_answers_votes":79,"tags":["python","json","numpy","apache-storm"],"question_length":584,"response_length":1618,"tag_count":4} {"question":"How to determine if a number is any type of int (core or numpy, signed or not)? I need to test whether a variable is of type int, or any of np.int*, np.uint*, preferably using a single condition (i.e. no or). After some tests, I guess that: isinstance(n, int) will only match int and np.int32 (or np.int64 depending on plateform), np.issubdtype(type(n), int) seems to match all int and np.int*, but doesn\u2019t match np.uint*. This leads to two questions: will np.issubdtype match any kind of signed ints? Can determine in a single check whether a number is any kind of signed or unsigned int? This is about testing for integers, the test should return False for float-likes.","response":"NumPy provides base classes that you can\/should use for subtype-checking, rather than the Python types. Use np.integer to check for any instance of either signed or unsigned integers. Use np.signedinteger and np.unsignedinteger to check for signed types or unsigned types. \n```\n\n```python\nnp.issubdtype(np.uint32, np.integer)\n#Output\n#True\n```\n\n```python\nnp.issubdtype(np.uint32, np.signedinteger)\n#Output\n#False\n```\n\n```python\nnp.issubdtype(int, np.integer)\n#Output\n#True\n```\n\n```python\nnp.issubdtype(np.array([1, 2, 3]).dtype, np.integer)\n#Output\n#True\n#```\n#All floating or complex number types will return False when tested. np.issubdtype(np.uint*, int) will always be False because the Python int is a signed type. A useful reference showing the relationship between all of these base classes is found in the documentation here.\n```","best_answers_score":0.7743,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/37726830\/how-to-determine-if-a-number-is-any-type-of-int-core-or-numpy-signed-or-not","best_answers_votes":209,"tags":["python","numpy","types"],"question_length":671,"response_length":755,"tag_count":3} {"question":"Pythonic way to create a numpy array from a list of numpy arrays I generate a list of one dimensional numpy arrays in a loop and later convert this list to a 2d numpy array. I would've preallocated a 2d numpy array if i knew the number of items ahead of time, but I don't, therefore I put everything in a list. 
The mock up is below: \n```\n\n```python\nlist_of_arrays = map(lambda x: x*ones(2), range(5))\n```\n\n```python\nlist_of_arrays\n#Output\n#[array([ 0., 0.]), array([ 1., 1.]), array([ 2., 2.]), array([ 3., 3.]), array([ 4., 4.])]\n```\n\n```python\narr = array(list_of_arrays)\n```\n\n```python\narr\n#Output\n#array([[ 0., 0.],\n# [ 1., 1.],\n# [ 2., 2.],\n# [ 3., 3.],\n# [ 4., 4.]])\n#```\n#My question is the following: Is there a better way (performancewise) to go about the task of collecting sequential numerical data (in my case numpy arrays) than putting them in a list and then making a numpy.array out of it (I am creating a new obj and copying the data)? Is there an \"expandable\" matrix data structure available in a well tested module? A typical size of my 2d matrix would be between 100x10 and 5000x10 floats EDIT: In this example i'm using map, but in my actual application I have a for loop\n```","response":"Suppose you know that the final array arr will never be larger than 5000x10. Then you could pre-allocate an array of maximum size, populate it with data as you go through the loop, and then use arr.resize to cut it down to the discovered size after exiting the loop. The tests below suggest doing so will be slightly faster than constructing intermediate python lists no matter what the ultimate size of the array is. Also, arr.resize de-allocates the unused memory, so the final (though maybe not the intermediate) memory footprint is smaller than what is used by python_lists_to_array. This shows numpy_all_the_way is faster: \n```\n% python -mtimeit -s\"import test\" \"test.numpy_all_the_way(100)\"\n100 loops, best of 3: 1.78 msec per loop\n% python -mtimeit -s\"import test\" \"test.numpy_all_the_way(1000)\"\n100 loops, best of 3: 18.1 msec per loop\n% python -mtimeit -s\"import test\" \"test.numpy_all_the_way(5000)\"\n10 loops, best of 3: 90.4 msec per loop\n\n% python -mtimeit -s\"import test\" \"test.python_lists_to_array(100)\"\n1000 loops, best of 3: 1.97 msec per loop\n% python -mtimeit -s\"import test\" \"test.python_lists_to_array(1000)\"\n10 loops, best of 3: 20.3 msec per loop\n% python -mtimeit -s\"import test\" \"test.python_lists_to_array(5000)\"\n10 loops, best of 3: 101 msec per loop\n```\nThis shows numpy_all_the_way uses less memory: \n```\n% test.py\nInitial memory usage: 19788\nAfter python_lists_to_array: 20976\nAfter numpy_all_the_way: 20348\n```\ntest.py: \n```\nimport numpy as np\nimport os\n\n\ndef memory_usage():\n pid = os.getpid()\n return next(line for line in open('\/proc\/%s\/status' % pid).read().splitlines()\n if line.startswith('VmSize')).split()[-2]\n\nN, M = 5000, 10\n\n\ndef python_lists_to_array(k):\n list_of_arrays = list(map(lambda x: x * np.ones(M), range(k)))\n arr = np.array(list_of_arrays)\n return arr\n\n\ndef numpy_all_the_way(k):\n arr = np.empty((N, M))\n for x in range(k):\n arr[x] = x * np.ones(M)\n arr.resize((k, M))\n return arr\n\nif __name__ == '__main__':\n print('Initial memory usage: %s' % memory_usage())\n arr = python_lists_to_array(5000)\n print('After python_lists_to_array: %s' % memory_usage())\n arr = numpy_all_the_way(5000)\n print('After numpy_all_the_way: %s' % memory_usage())\n```","best_answers_score":0.7727,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2106287\/pythonic-way-to-create-a-numpy-array-from-a-list-of-numpy-arrays","best_answers_votes":21,"tags":["python","performance","arrays","numpy","scipy"],"question_length":1161,"response_length":2261,"tag_count":5} {"question":"Sorting 
a 2D numpy array by multiple axes I have a 2D numpy array of shape (N,2) which is holding N points (x and y coordinates). For example: \n```\narray([[3, 2],\n [6, 2],\n [3, 6],\n [3, 4],\n [5, 3]])\n```\nI'd like to sort it such that my points are ordered by x-coordinate, and then by y in cases where the x coordinate is the same. So the array above should look like this: \n```\narray([[3, 2],\n [3, 4],\n [3, 6],\n [5, 3],\n [6, 2]])\n```\nIf this was a normal Python list, I would simply define a comparator to do what I want, but as far as I can tell, numpy's sort function doesn't accept user-defined comparators. Any ideas? EDIT: Thanks for the ideas! I set up a quick test case with 1000000 random integer points, and benchmarked the ones that I could run (sorry, can't upgrade numpy at the moment). \n```\nMine: 4.078 secs \nmtrw: 7.046 secs\nunutbu: 0.453 secs\n```","response":"Using lexsort: \n```\nimport numpy as np \na = np.array([(3, 2), (6, 2), (3, 6), (3, 4), (5, 3)])\n\nind = np.lexsort((a[:,1],a[:,0])) \n\na[ind]\n# array([[3, 2],\n#        [3, 4],\n#        [3, 6],\n#        [5, 3],\n#        [6, 2]])\n```\na.ravel() returns a view if a is C_CONTIGUOUS. If that is true, @ars's method, slightly modified by using ravel instead of flatten, yields a nice way to sort a in-place: \n```\na = np.array([(3, 2), (6, 2), (3, 6), (3, 4), (5, 3)])\ndt = [('col1', a.dtype),('col2', a.dtype)]\nassert a.flags['C_CONTIGUOUS']\nb = a.ravel().view(dt)\nb.sort(order=['col1','col2'])\n```\nSince b is a view of a, sorting b sorts a as well: \n```\nprint(a)\n# [[3 2]\n# [3 4]\n# [3 6]\n# [5 3]\n# [6 2]]\n```","best_answers_score":0.772,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2706605\/sorting-a-2d-numpy-array-by-multiple-axes","best_answers_votes":65,"tags":["python","sorting","numpy"],"question_length":914,"response_length":705,"tag_count":3} {"question":"Difference between various numpy random functions The numpy.random module defines the following 4 functions that all seem to return a float between [0, 1.0) from the continuous uniform distribution. What (if any) is the difference between these functions? random_sample([size]) Return random floats in the half-open interval [0.0, 1.0). random([size]) Return random floats in the half-open interval [0.0, 1.0). ranf([size]) Return random floats in the half-open interval [0.0, 1.0). sample([size]) Return random floats in the half-open interval [0.0, 1.0). --------------------------- Edit Follows --------------------------------------- I found the following in numpy.random source code that supports @askewchan's answer: \n```\n# Some aliases:\nranf = random = sample = random_sample\n__all__.extend(['ranf','random','sample'])\n```","response":"Nothing. They're just aliases to random_sample: \n```\n\n```python\nnp.random.random\n#Output\n#<built-in method random_sample of mtrand.RandomState object at 0x...>\n```\n\n```python\nnp.random.ranf\n#Output\n#<built-in method random_sample of mtrand.RandomState object at 0x...>\n```\n\n```python\nnp.random.sample\n#Output\n#<built-in method random_sample of mtrand.RandomState object at 0x...>\n```\n\n```python\nnp.random.random_sample is np.random.random\n#Output\n#True\n```\n\n```python\nnp.random.random_sample is np.random.ranf\n#Output\n#True\n```\n\n```python\nnp.random.random_sample is np.random.sample\n#Output\n#True\n#```\n```","best_answers_score":0.7719,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18829185\/difference-between-various-numpy-random-functions","best_answers_votes":51,"tags":["python","numpy"],"question_length":829,"response_length":450,"tag_count":2} {"question":"Saving dictionary of numpy arrays So I have a DB with a couple of years' worth of site data. 
I am now attempting to use that data for analytics - plotting and sorting of advertising costs by keyword, etc. One of the data grabs from the DB takes minutes to complete. While I could spend some time optimizing the SQL statements I use to get the data, I'd prefer to simply leave that class and its SQL alone, grab the data, and save the results to a data file for faster retrieval later. Most of this DB data isn't going to change, so I could write a separate python script to update the file every 24 hours and then use that file for this long-running task. The data is being returned as a dictionary of numpy arrays. When I use numpy.save('data', data) the file is saved just fine. When I use data2 = numpy.load('data.npy') it loads the file without error. However, the output data2 does not equal the original data. Specifically the line data == data2 returns False. Additionally, if I use the following: \n```\nfor key, key_data in data.items():\n    print(key)\n```\nit works. But when I replace data.items() with data2.items() then I get an error: \n```\nAttributeError: 'numpy.ndarray' object has no attribute 'items'\n```\nUsing type(data) I get dict. Using type(data2) I get numpy.ndarray. So how do I fix this? I want the loaded data to equal the data I passed in for saving. Is there an argument to numpy.save to fix this or do I need some form of simple reformatting function to reformat the loaded data into the proper structure? Attempts to get into the ndarray via for loops or indexing all lead to errors about indexing a 0-d array. Casting like this dict(data2) also fails for iterating over a 0-d array. However, Spyder shows the value of the array and it includes the data I saved. I just can't figure out how to get to it. If I need to reformat the loaded data I'd appreciate some example code on how to do this.","response":"Let's look at a small example: \n```\n\n```python\nN\n#Output\n#array([[ 0., 1., 2., 3.],\n# [ 4., 5., 6., 7.],\n# [ 8., 9., 10., 11.]])\n```\n\n```python\ndata={'N':N}\n```\n\n```python\nnp.save('temp.npy',data)\n```\n\n```python\ndata2=np.load('temp.npy')\n```\n\n```python\ndata2\n#Output\n#array({'N': array([[ 0., 1., 2., 3.],\n# [ 4., 5., 6., 7.],\n# [ 8., 9., 10., 11.]])}, dtype=object)\n#```\n#np.save is designed to save numpy arrays. data is a dictionary. So it wrapped it in an object array, and used pickle to save that object. Your data2 probably has the same character. You get at the array with: \n#```\n```\n\n```python\ndata2[()]['N']\n#Output\n#array([[ 0., 1., 2., 3.],\n# [ 4., 5., 6., 7.],\n# [ 8., 9., 10., 11.]])\n#```\n```","best_answers_score":0.7715,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30811918\/saving-dictionary-of-numpy-arrays","best_answers_votes":65,"tags":["python","numpy"],"question_length":1914,"response_length":768,"tag_count":2} {"question":"Confusion between numpy, scipy, matplotlib and pylab Numpy, scipy, matplotlib, and pylab are common terms among those who use Python for scientific computation. I have just learned a bit about pylab, and I got confused. Whenever I want to import numpy, I can always do: \n```\nimport numpy as np\n```\nMy understanding is that once I do \n```\nfrom pylab import *\n```\nnumpy will be imported as well (under the np alias). So basically the second one does more things compared to the first one. There are a few things I want to ask: Is it right that pylab is just a wrapper for numpy, scipy and matplotlib? As np is the numpy alias in pylab, what are the scipy and matplotlib aliases in pylab? 
(as far as I know, plt is alias of matplotlib.pyplot, but I don't know the alias for the matplotlib itself)","response":"No, pylab is part of matplotlib (in matplotlib.pylab) and tries to give you a MatLab like environment. matplotlib has a number of dependencies, among them numpy which it imports under the common alias np. scipy is not a dependency of matplotlib. If you run ipython --pylab an automatic import will put all symbols from matplotlib.pylab into global scope. Like you wrote numpy gets imported under the np alias. Symbols from matplotlib are available under the mpl alias.","best_answers_score":0.7701,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/12987624\/confusion-between-numpy-scipy-matplotlib-and-pylab","best_answers_votes":136,"tags":["python","numpy","matplotlib","scipy"],"question_length":776,"response_length":468,"tag_count":4} {"question":"How do you use the ellipsis slicing syntax in Python? This came up in Hidden features of Python, but I can't see good documentation or examples that explain how the feature works.","response":"The ellipsis is used in numpy to slice higher-dimensional data structures. It's designed to mean at this point, insert as many full slices (:) to extend the multi-dimensional slice to all dimensions. Example: \n```\n\n```python\nfrom numpy import arange\n```\n\n```python\na = arange(16).reshape(2,2,2,2)\n#Output\n#```\n#Now, you have a 4-dimensional matrix of order 2x2x2x2. To select all first elements in the 4th dimension, you can use the ellipsis notation \n#```\n```\n\n```python\na[..., 0].flatten()\n#Output\n#array([ 0, 2, 4, 6, 8, 10, 12, 14])\n#```\n#which is equivalent to \n#```\n```\n\n```python\na[:,:,:,0].flatten()\n#Output\n#array([ 0, 2, 4, 6, 8, 10, 12, 14])\n#```\n#In your own implementations, you're free to ignore the contract mentioned above and use it for whatever you see fit.\n```","best_answers_score":0.7694,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/118370\/how-do-you-use-the-ellipsis-slicing-syntax-in-python","best_answers_votes":314,"tags":["python","numpy","subclass","slice","ellipsis"],"question_length":179,"response_length":709,"tag_count":5} {"question":"Finding the nearest value and return the index of array in Python I found this post: Python: finding an element in an array and it's about returning the index of an array through matching the values. On the other hand, what I am thinking of doing is similar but different. I would like to find the nearest value for the target value. For example I am looking for 4.2 but I know in the array there is no 4.2 but I want to return the index of the value 4.1 instead of 4.4. What would be the fastest way of doing it? I am thinking of doing it the old way like how I used to do it with Matlab, which is using the array A where I want to get the index from to minus the target value and take the absolute of it, then select the min. Something like this:- \n```\n[~,idx] = min(abs(A - target))\n```\nThat is Matlab code but I am newbie in Python so I am thinking, is there a fast way of doing it in Python? 
Thank you so much for your help!","response":"This is similar to using bisect_left, but it'll allow you to pass in an array of targets \n```\ndef find_closest(A, target):\n    # A must be sorted\n    idx = A.searchsorted(target)\n    idx = np.clip(idx, 1, len(A)-1)\n    left = A[idx-1]\n    right = A[idx]\n    idx -= target - left < right - target\n    return idx\n```\nSome explanation: First the general case: idx = A.searchsorted(target) returns an index for each target such that target is between A[idx - 1] and A[idx]. I call these left and right, so we know that left < target <= right, and target - left < right - target is True exactly when target is closer to left; subtracting that boolean then moves idx back onto left's index. There is a special case if target is smaller than all the elements of A: then idx = 0 and np.clip(idx, 1, len(A)-1) replaces the 0 with 1, so left = A[0] and right = A[1]; target - left is negative, the comparison is True, and idx - True = 0, which is what we want. There is another special case if target is greater than all the elements of A: in that case idx = len(A) and np.clip(idx, 1, len(A)-1) replaces len(A) with len(A) - 1, so idx = len(A) - 1 and target - left < right - target ends up False, so idx stays at len(A) - 1. I'll let you work through the logic on your own. For example: \n```\n\n```python\nA = np.arange(0, 20.)\n```\n\n```python\ntarget = np.array([-2, 100., 2., 2.4, 2.5, 2.6])\n```\n\n```python\nfind_closest(A, target)\n#Output\n#array([ 0, 19, 2, 2, 3, 3])\n#```\n```","best_answers_score":0.7692,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8914491\/finding-the-nearest-value-and-return-the-index-of-array-in-python","best_answers_votes":37,"tags":["python","numpy","python-3.x"],"question_length":929,"response_length":1132,"tag_count":3} {"question":"Is there a multi-dimensional version of arange\/linspace in numpy? I would like a list of 2d NumPy arrays (x,y), where each x is in {-5, -4.5, -4, -3.5, ..., 3.5, 4, 4.5, 5} and the same for y. I could do \n```\nx = np.arange(-5, 5.1, 0.5)\ny = np.arange(-5, 5.1, 0.5)\n```\nand then iterate through all possible pairs, but I'm sure there's a nicer way... I would like something back that looks like: \n```\n[[-5, -5],\n [-5, -4.5],\n [-5, -4],\n ...\n [5, 5]]\n```\nbut the order does not matter.","response":"You can use np.mgrid for this, it's often more convenient than np.meshgrid because it creates the arrays in one step: \n```\nimport numpy as np\nX,Y = np.mgrid[-5:5.1:0.5, -5:5.1:0.5]\n```\nFor linspace-like functionality, replace the step (i.e. 0.5) with a complex number whose magnitude specifies the number of points you want in the series. Using this syntax, the same arrays as above are specified as: \n```\nX, Y = np.mgrid[-5:5:21j, -5:5:21j]\n```\nYou can then create your pairs as: \n```\nxy = np.vstack((X.flatten(), Y.flatten())).T\n```\nAs @ali_m suggested, this can all be done in one line: \n```\nxy = np.mgrid[-5:5.1:0.5, -5:5.1:0.5].reshape(2,-1).T\n```\nBest of luck!","best_answers_score":0.7687,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32208359\/is-there-a-multi-dimensional-version-of-arange-linspace-in-numpy","best_answers_votes":115,"tags":["python","numpy","cartesian-product"],"question_length":484,"response_length":666,"tag_count":3} {"question":"How to convert a list of tuples to a numpy array of tuples? 
I have a list like this: \n```\nl=[(1,2),(3,4)]\n```\nI want to convert it to a numpy array,and keep array item type as tuple: \n```\narray([(1,2),(3,4)])\n```\nbut numpy.array(l) will give: \n```\narray([[1,2],[3,4)]])\n```\nand item type has been changed from tuple to numpy.ndarray,then I specified item types \n```\nnumpy.array(l,numpy.dtype('float,float'))\n```\nthis gives: \n```\narray([(1,2),(3,4)])\n```\nbut item type isn't tuple but numpy.void,so question is: \n```\nhow to convert it to a numpy.array of tuple,not of numpy.void?\n```","response":"You can have an array of object dtype, letting each element of the array being a tuple, like so - \n```\nout = np.empty(len(l), dtype=object)\nout[:] = l\n```\nSample run - \n```\n\n```python\nl = [(1,2),(3,4)]\n```\n\n```python\nout = np.empty(len(l), dtype=object)\n```\n\n```python\nout[:] = l\n```\n\n```python\nout\n#Output\n#array([(1, 2), (3, 4)], dtype=object)\n```\n\n```python\nout[0]\n#Output\n#(1, 2)\n```\n\n```python\ntype(out[0])\n#Output\n#tuple\n#```\n```","best_answers_score":0.7681,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47389447\/how-convert-a-list-of-tuples-to-a-numpy-array-of-tuples","best_answers_votes":30,"tags":["python","numpy","numpy-ndarray"],"question_length":579,"response_length":412,"tag_count":3} {"question":"Flattening a list of NumPy arrays? It appears that I have data in the format of a list of NumPy arrays (type() = np.ndarray): \n```\n[array([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), \narray([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), \narray([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]), \narray([[ 0.00353654]]), array([[ 0.00353654]]), array([[ 0.00353654]]),\narray([[ 0.00353654]])]\n```\nI am trying to put this into a polyfit function: \n```\nm1 = np.polyfit(x, y, deg=2)\n```\nHowever, it returns the error: TypeError: expected 1D vector for x I assume I need to flatten my data into something like: \n```\n[0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654 ...]\n```\nI have tried a list comprehension which usually works on lists of lists, but this as expected has not worked: \n```\n[val for sublist in risks for val in sublist]\n```\nWhat would be the best way to do this?","response":"You could use numpy.concatenate, which as the name suggests, basically concatenates all the elements of such an input list into a single NumPy array, like so - \n```\nimport numpy as np\nout = np.concatenate(input_list).ravel()\n```\nIf you wish the final output to be a list, you can extend the solution, like so - \n```\nout = np.concatenate(input_list).ravel().tolist()\n```\nSample run - \n```\n\n```python\ninput_list\n#Output\n#[array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]]),\n# array([[ 0.00353654]])]\n```\n\n```python\nnp.concatenate(input_list).ravel()\n#Output\n#array([ 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654,\n# 0.00353654, 0.00353654, 0.00353654, 0.00353654, 0.00353654,\n# 0.00353654, 0.00353654, 0.00353654])\n#```\n#Convert to list - \n#```\n```\n\n```python\nnp.concatenate(input_list).ravel().tolist()\n#Output\n#[0.00353654,\n# 0.00353654,\n# 0.00353654,\n# 0.00353654,\n# 0.00353654,\n# 0.00353654,\n# 
0.00353654,\n# 0.00353654,\n# 0.00353654,\n# 0.00353654,\n# 0.00353654,\n# 0.00353654,\n# 0.00353654]\n#```\n```","best_answers_score":0.7662,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/33711985\/flattening-a-list-of-numpy-arrays","best_answers_votes":119,"tags":["python","arrays","numpy","list-comprehension"],"question_length":950,"response_length":1251,"tag_count":4} {"question":"numerically stable way to multiply log probability matrices in numpy I need to take the matrix product of two NumPy matrices (or other 2d arrays) containing log probabilities. The naive way np.log(np.dot(np.exp(a), np.exp(b))) is not preferred for obvious reasons. Using \n```\nfrom scipy.misc import logsumexp\nres = np.zeros((a.shape[0], b.shape[1]))\nfor n in range(b.shape[1]):\n # broadcast b[:,n] over rows of a, sum columns\n res[:, n] = logsumexp(a + b[:, n].T, axis=1)\n```\nworks but runs about 100 times slower than np.log(np.dot(np.exp(a), np.exp(b))) Using \n```\nlogsumexp((tile(a, (b.shape[1],1)) + repeat(b.T, a.shape[0], axis=0)).reshape(b.shape[1],a.shape[0],a.shape[1]), 2).T\n```\nor other combinations of tile and reshape also work but run even slower than the loop above due to the prohibitively large amounts of memory required for realistically sized input matrices. I am currently considering writing a NumPy extension in C to compute this, but of course I'd rather avoid that. Is there an established way to do this, or does anybody know of a less memory intensive way of performing this computation? EDIT: Thanks to larsmans for this solution (see below for derivation): \n```\ndef logdot(a, b):\n max_a, max_b = np.max(a), np.max(b)\n exp_a, exp_b = a - max_a, b - max_b\n np.exp(exp_a, out=exp_a)\n np.exp(exp_b, out=exp_b)\n c = np.dot(exp_a, exp_b)\n np.log(c, out=c)\n c += max_a + max_b\n return c\n```\nA quick comparison of this method to the method posted above (logdot_old) using iPython's magic %timeit function yields the following: \n```\nIn [1] a = np.log(np.random.rand(1000,2000))\n\nIn [2] b = np.log(np.random.rand(2000,1500))\n\nIn [3] x = logdot(a, b)\n\nIn [4] y = logdot_old(a, b) # this takes a while\n\nIn [5] np.any(np.abs(x-y) > 1e-14)\nOut [5] False\n\nIn [6] %timeit logdot_old(a, b)\n1 loops, best of 3: 1min 18s per loop\n\nIn [6] %timeit logdot(a, b)\n1 loops, best of 3: 264 ms per loop\n```\nObviously larsmans' method obliterates mine!","response":"logsumexp works by evaluating the right-hand side of the equation \n```\nlog(\u2211 exp[a]) = max(a) + log(\u2211 exp[a - max(a)])\n```\nI.e., it pulls out the max before starting to sum, to prevent overflow in exp. The same can be applied before doing vector dot products: \n```\nlog(exp[a] \u22c5 exp[b])\n = log(\u2211 exp[a] \u00d7 exp[b])\n = log(\u2211 exp[a + b])\n = max(a + b) + log(\u2211 exp[a + b - max(a + b)]) { this is logsumexp(a + b) }\n```\nbut by taking a different turn in the derivation, we obtain \n```\nlog(\u2211 exp[a] \u00d7 exp[b])\n = max(a) + max(b) + log(\u2211 exp[a - max(a)] \u00d7 exp[b - max(b)])\n = max(a) + max(b) + log(exp[a - max(a)] \u22c5 exp[b - max(b)])\n```\nThe final form has a vector dot product in its innards. 
It also extends readily to matrix multiplication, so we get the algorithm \n```\ndef logdotexp(A, B):\n max_A = np.max(A)\n max_B = np.max(B)\n C = np.dot(np.exp(A - max_A), np.exp(B - max_B))\n np.log(C, out=C)\n C += max_A + max_B\n return C\n```\nThis creates two A-sized temporaries and two B-sized ones, but one of each can be eliminated by \n```\nexp_A = A - max_A\nnp.exp(exp_A, out=exp_A)\n```\nand similarly for B. (If the input matrices may be modified by the function, all the temporaries can be eliminated.)","best_answers_score":0.766,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23630277\/numerically-stable-way-to-multiply-log-probability-matrices-in-numpy","best_answers_votes":27,"tags":["python","numpy","matrix","matrix-multiplication","logarithm"],"question_length":1990,"response_length":1209,"tag_count":5} {"question":"Creating a Pandas DataFrame from a Numpy array: How do I specify the index column and column headers? I have a Numpy array consisting of a list of lists, representing a two-dimensional array with row labels and column names as shown below: \n```py\ndata = np.array([['','Col1','Col2'],['Row1',1,2],['Row2',3,4]])\n```\nI'd like the resulting DataFrame to have Row1 and Row2 as index values, and Col1, Col2 as header values. I can specify the index as follows: \n```py\ndf = pd.DataFrame(data, index=data[:,0])\n```\nHowever, I am unsure how to best assign column headers.","response":"Specify data, index and columns to the DataFrame constructor, as follows: \n```\n\n```python\npd.DataFrame(data=data[1:,1:], # values\n index=data[1:,0], # 1st column as index\n columns=data[0,1:]) # 1st row as the column names\n#Output\n#```\n#As @joris mentions, you may need to change above to np.int_(data[1:,1:]) to have the correct data type.\n```","best_answers_score":0.7658,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20763012\/creating-a-pandas-dataframe-from-a-numpy-array-how-do-i-specify-the-index-colum","best_answers_votes":449,"tags":["python","pandas","dataframe","list","numpy"],"question_length":563,"response_length":361,"tag_count":5} {"question":"Unpacking tuples\/arrays\/lists as indices for Numpy Arrays I would love to be able to do \n```\n\n```python\nA = numpy.array(((1,2),(3,4)))\n```\n\n```python\nidx = (0,0)\n```\n\n```python\nA[*idx]\n#Output\n#```\n#and get \n#```\n#1\n#```\n#however this is not valid syntax. Is there a way of doing this without explicitly writing out \n#```\n```\n\n```python\nA[idx[0], idx[1]]\n#Output\n#```\n#? EDIT: Thanks for the replies. In my program I was indexing with a Numpy array rather than a tuple and getting strange results. Converting to a tuple as Alok suggests does the trick.\n```","response":"It's easier than you think: \n```\n\n```python\nimport numpy\n```\n\n```python\nA = numpy.array(((1,2),(3,4)))\n```\n\n```python\nidx = (0,0)\n```\n\n```python\nA[idx]\n#Output\n#1\n#```\n```","best_answers_score":0.7655,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2444923\/unpacking-tuples-arrays-lists-as-indices-for-numpy-arrays","best_answers_votes":26,"tags":["python","numpy"],"question_length":487,"response_length":117,"tag_count":2} {"question":"What does the c underscore expression `c_` do exactly? It seems to be some kind of horizontal concatenation, but I could not find any documentation online. 
Here is a minimal working example: \n```\n\n```python\nfrom numpy import c_\n```\n\n```python\na = ones(4)\n```\n\n```python\nb = zeros((4,10))\n```\n\n```python\nc_[a,b]\n#Output\n#array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n# [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n# [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],\n# [ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])\n#```\n```","response":"It took me a lot of time to understand, but it seems I finally got it. All you have to do is add along the second axis. Let's take: \n```\nnp.c_[np.array([1,2,3]), np.array([4,5,6])]\n```\nBut there isn't a second axis, so we mentally add one; the shape of both arrays becomes (3,1). So the resultant shape would be (3,1+1), which is (3,2), and that is the shape of the result: \n```\narray([[1, 4],\n [2, 5],\n [3, 6]])\n```\nAnother example: \n```\nnp.c_[np.array([[1,2,3]]), 0, 0, np.array([[4,5,6]])]\n```\nshapes: np.array([[1,2,3]]) = 1,3; np.array([[4,5,6]]) = 1,3; 0 is a scalar, so we can think of it as [[0]] = 1,1. So the result is 1,3+1+1+3 = 1,8, which is the shape of the result: array([[1, 2, 3, 0, 0, 4, 5, 6]])","best_answers_score":0.7654,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10894323\/what-does-the-c-underscore-expression-c-do-exactly","best_answers_votes":83,"tags":["python","numpy"],"question_length":557,"response_length":675,"tag_count":2} {"question":"python numpy euclidean distance calculation between matrices of row vectors I am new to Numpy and I would like to ask you how to calculate euclidean distance between points stored in a vector. Let's assume that we have a numpy.array where each row is a vector, and a single numpy.array. I would like to know if it is possible to calculate the euclidean distance between all the points and this single point and store them in one numpy.array. Here is an interface: \n```\npoints #2d list of row-vectors\nsinglePoint #one row-vector\n\nlistOfDistances= procedure( points,singlePoint)\n```\nCan we have something like this? Or is it possible to have one command to have the single point as a list of other points and at the end we get a matrix of distances? Thanks","response":"While you can use vectorize, @Karl's approach will be rather slow with numpy arrays. The easier approach is to just do np.hypot(*(points - single_point).T). (The transpose assumes that points is an Nx2 array, rather than 2xN; if it's 2xN, you don't need the .T.) However, this is a bit unreadable, so you can write it out more explicitly like this (using some canned example data...): \n```\nimport numpy as np\nsingle_point = [3, 4]\npoints = np.arange(20).reshape((10,2))\n\ndist = (points - single_point)**2\ndist = np.sum(dist, axis=1)\ndist = np.sqrt(dist)\n```","best_answers_score":0.7647,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4370975\/python-numpy-euclidean-distance-calculation-between-matrices-of-row-vectors","best_answers_votes":32,"tags":["python","vector","numpy","scipy","euclidean-distance"],"question_length":747,"response_length":551,"tag_count":5} {"question":"Find the linearly independent rows of a matrix How do I identify the linearly independent rows of a matrix? For instance, \n```\n[[0 1 0 0]\n [0 0 1 0]\n [0 1 1 0]\n [1 0 0 1]]\n```\nThe 4th row is independent.","response":"First, your 3rd row is linearly dependent with the 1st and 2nd rows. However, your 1st and 4th columns are linearly dependent. Two methods you could use: Eigenvalue If one eigenvalue of the matrix is zero, its corresponding eigenvector is linearly dependent. 
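(A related quick cross-check, not in the original answer: np.linalg.matrix_rank of the matrix above is 3, i.e. exactly 3 of the 4 rows are linearly independent.) 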
The documentation eig states the returned eigenvalues are repeated according to their multiplicity and not necessarily ordered. However, assuming the eigenvalues correspond to your row vectors, one method would be: \n```\nimport numpy as np\n\nmatrix = np.array(\n [\n [0, 1 ,0 ,0],\n [0, 0, 1, 0],\n [0, 1, 1, 0],\n [1, 0, 0, 1]\n ])\n\nlambdas, V = np.linalg.eig(matrix.T)\n# The linearly dependent row vectors \nprint matrix[lambdas == 0,:]\n```\nCauchy-Schwarz inequality To test linear dependence of vectors and figure out which ones, you could use the Cauchy-Schwarz inequality. Basically, if the inner product of the vectors is equal to the product of the norm of the vectors, the vectors are linearly dependent. Here is an example for the columns: \n```\nimport numpy as np\n\nmatrix = np.array(\n [\n [0, 1 ,0 ,0],\n [0, 0, 1, 0],\n [0, 1, 1, 0],\n [1, 0, 0, 1]\n ])\n\nprint np.linalg.det(matrix)\n\nfor i in range(matrix.shape[0]):\n for j in range(matrix.shape[0]):\n if i != j:\n inner_product = np.inner(\n matrix[:,i],\n matrix[:,j]\n )\n norm_i = np.linalg.norm(matrix[:,i])\n norm_j = np.linalg.norm(matrix[:,j])\n\n print 'I: ', matrix[:,i]\n print 'J: ', matrix[:,j]\n print 'Prod: ', inner_product\n print 'Norm i: ', norm_i\n print 'Norm j: ', norm_j\n if np.abs(inner_product - norm_j * norm_i) < 1E-5:\n print 'Dependent'\n else:\n print 'Independent'\n```\nTo test the rows is a similar approach. Then you could extend this to test all combinations of vectors, but I imagine this solution scale badly with size.","best_answers_score":0.7647,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/28816627\/find-the-linearly-independent-rows-of-a-matrix","best_answers_votes":32,"tags":["python","numpy","matrix","linear-algebra"],"question_length":203,"response_length":1997,"tag_count":4} {"question":"What's the difference of numpy.ndarray.T and numpy.ndarray.transpose() when self.ndim < 2 The document numpy.ndarray.T says ndarray.T \u2014 Same as self.transpose(), except that self is returned if self.ndim < 2. Also, ndarray.transpose(*axes) says For a 1-D array, this has no effect. Doesn't this mean the same thing? Here's a little demo snippet: \n```\n\n```python\nimport numpy as np\n```\n\n```python\nprint np.__version__\n#Output\n#1.5.1rc1\n```\n\n```python\na = np.arange(7)\n```\n\n```python\nprint a, a.T, a.transpose()\n#Output\n#[0 1 2 3 4 5 6] [0 1 2 3 4 5 6] [0 1 2 3 4 5 6]\n#```\n```","response":"Regardless of rank, the .T attribute and the .transpose() method are the same\u2014they both return the transpose of the array. In the case of a rank 1 array, the .T and .transpose() don't do anything\u2014they both return the array.","best_answers_score":0.7642,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4068177\/whats-the-difference-of-numpy-ndarray-t-and-numpy-ndarray-transpose-when-self","best_answers_votes":31,"tags":["python","numpy"],"question_length":512,"response_length":223,"tag_count":2} {"question":"how to return numpy.array from boost::python? I would like to return some data from c++ code as a numpy.array object. I had a look at boost::python::numeric, but its documentation is very terse. Can I get an example of e.g. returning a (not very large) vector to python? I don't mind doing copies of data.","response":"UPDATE: the library described in my original answer (https:\/\/github.com\/ndarray\/Boost.NumPy) has been integrated directly into Boost.Python as of Boost 1.63, and hence the standalone version is now deprecated. 
The text below now corresponds to the new, integrated version (only the namespace has changed). Boost.Python now includes a moderately complete wrapper of the NumPy C-API into a Boost.Python interface. It's pretty low-level, and mostly focused on addressing the more difficult problem of how to pass C++ data to and from NumPy without copying, but here's how you'd do a copied std::vector return with that: \n```\n#include \"boost\/python\/numpy.hpp\"\n\nnamespace bp = boost::python;\nnamespace bn = boost::python::numpy;\n\nstd::vector<double> myfunc(...);\n\nbn::ndarray mywrapper(...) {\n std::vector<double> v = myfunc(...);\n Py_intptr_t shape[1] = { v.size() };\n bn::ndarray result = bn::zeros(1, shape, bn::dtype::get_builtin<double>());\n std::copy(v.begin(), v.end(), reinterpret_cast<double*>(result.get_data()));\n return result;\n}\n\nBOOST_PYTHON_MODULE(example) {\n bn::initialize();\n bp::def(\"myfunc\", mywrapper);\n}\n```","best_answers_score":0.7632,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10701514\/how-to-return-numpy-array-from-boostpython","best_answers_votes":28,"tags":["c++","arrays","numpy","boost-python"],"question_length":305,"response_length":1149,"tag_count":4} {"question":"How to transform numpy.matrix or array to scipy sparse matrix For SciPy sparse matrix, one can use todense() or toarray() to transform to NumPy matrix or array. What are the functions to do the inverse? I searched, but had no idea which keywords would be the right hit.","response":"You can pass a numpy array or matrix as an argument when initializing a sparse matrix. For a CSR matrix, for example, you can do the following. \n```\n\n```python\nimport numpy as np\n```\n\n```python\nfrom scipy import sparse\n```\n\n```python\nA = np.array([[1,2,0],[0,0,3],[1,0,4]])\n```\n\n```python\nB = np.matrix([[1,2,0],[0,0,3],[1,0,4]])\n```\n\n```python\nA\n#Output\n#array([[1, 2, 0],\n# [0, 0, 3],\n# [1, 0, 4]])\n```\n\n```python\nsA = sparse.csr_matrix(A) # Here's the initialization of the sparse matrix.\n```\n\n```python\nsB = sparse.csr_matrix(B)\n```\n\n```python\nsA\n#Output\n#<3x3 sparse matrix of type '<type 'numpy.int64'>'\n# with 5 stored elements in Compressed Sparse Row format>\n```\n\n```python\nprint sA\n#Output\n# (0, 0) 1\n# (0, 1) 2\n# (1, 2) 3\n# (2, 0) 1\n# (2, 2) 4\n#```\n```","best_answers_score":0.763,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/7922487\/how-to-transform-numpy-matrix-or-array-to-scipy-sparse-matrix","best_answers_votes":162,"tags":["python","numpy","scipy","sparse-matrix"],"question_length":269,"response_length":694,"tag_count":4} {"question":"Unsuccessful append to an empty NumPy array I am trying to fill an empty (not np.empty!) array with values using append but I am getting an error: My code is as follows: \n```\nimport numpy as np\nresult=np.asarray([np.asarray([]),np.asarray([])])\nresult[0]=np.append([result[0]],[1,2])\n```\nAnd I am getting: \n```\nValueError: could not broadcast input array from shape (2) into shape (0)\n```","response":"I might understand the question incorrectly, but if you want to declare an array of a certain shape but with nothing inside, the following might be helpful: Initialise an empty array: \n```\n\n```python\na = np.zeros((0,3)) #or np.empty((0,3)) or np.array([]).reshape(0,3)\n```\n\n```python\na\n#Output\n#array([], shape=(0, 3), dtype=float64)\n#```\n#Now you can use this array to append rows of similar shape to it.
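For instance, a minimal hedged sketch (the row values are just illustrative): \n```\nrow = np.array([1.0, 2.0, 3.0])\na = np.vstack([a, row]) # a now has shape (1, 3)\n```\n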
Remember that the size of a numpy array is fixed, so a new array is created for each iteration: \n#```\n```\n\n```python\nfor i in range(3):\n a = np.vstack([a, [i,i,i]])\n#Output\n#...\n```\n\n```python\na\n#Output\n#array([[ 0., 0., 0.],\n# [ 1., 1., 1.],\n# [ 2., 2., 2.]])\n#```\n#np.vstack and np.hstack are the most common methods for combining numpy arrays, but coming from Matlab I prefer np.r_ and np.c_: Concatenate 1d: \n#```\n```\n\n```python\na = np.zeros(0)\n```\n\n```python\nfor i in range(3):\n a = np.r_[a, [i, i, i]]\n#Output\n#...\n```\n\n```python\na\n#Output\n#array([ 0., 0., 0., 1., 1., 1., 2., 2., 2.])\n#```\n#Concatenate rows: \n#```\n```\n\n```python\na = np.zeros((0,3))\n```\n\n```python\nfor i in range(3):\n a = np.r_[a, [[i,i,i]]]\n#Output\n#...\n```\n\n```python\na\n#Output\n#array([[ 0., 0., 0.],\n# [ 1., 1., 1.],\n# [ 2., 2., 2.]])\n#```\n#Concatenate columns: \n#```\n```\n\n```python\na = np.zeros((3,0))\n```\n\n```python\nfor i in range(3):\n a = np.c_[a, [[i],[i],[i]]]\n#Output\n#...\n```\n\n```python\na\n#Output\n#array([[ 0., 1., 2.],\n# [ 0., 1., 2.],\n# [ 0., 1., 2.]])\n#```\n```","best_answers_score":0.7625,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19646726\/unsuccessful-append-to-an-empty-numpy-array","best_answers_votes":71,"tags":["python","arrays","numpy","append"],"question_length":383,"response_length":1287,"tag_count":4} {"question":"module 'pandas' has no attribute 'read_csv \n```\nimport pandas as pd\n\ndf = pd.read_csv('FBI-CRIME11.csv')\n\nprint(df.head())\n```\nRunning this simple code gives me the error: \n```\nTraceback (most recent call last):\n File \"C:\/Users\/Dita\/Desktop\/python\/lessons\/python.data\/csv.py\", line 1, in <module>\n import pandas as pd\n File \"C:\\python\\lib\\site-packages\\pandas-0.19.1-py3.5-win-amd64.egg\\pandas\\__init__.py\", line 37, in <module>\n import pandas.core.config_init\n File \"C:\\python\\lib\\site-packages\\pandas-0.19.1-py3.5-win-amd64.egg\\pandas\\core\\config_init.py\", line 18, in <module>\n from pandas.formats.format import detect_console_encoding\n File \"C:\\python\\lib\\site-packages\\pandas-0.19.1-py3.5-win-amd64.egg\\pandas\\formats\\format.py\", line 33, in <module>\n from pandas.io.common import _get_handle, UnicodeWriter, _expand_user\n File \"C:\\python\\lib\\site-packages\\pandas-0.19.1-py3.5-win-amd64.egg\\pandas\\io\\common.py\", line 5, in <module>\n import csv\n File \"C:\\Users\\Dita\\Desktop\\python\\lessons\\python.data\\csv.py\", line 4, in <module>\n df = pd.read_csv('FBI-CRIME11.csv')\nAttributeError: module 'pandas' has no attribute 'read_csv'\n```","response":"Try renaming your csv.py to something else, like csv_test.py. Your csv.py shadows the standard library's csv module, which pandas itself imports, so Python picks up your script instead and pandas gets confused about what to import.","best_answers_score":0.7613,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40554657\/module-pandas-has-no-attribute-read-csv","best_answers_votes":80,"tags":["python","csv","numpy"],"question_length":1159,"response_length":119,"tag_count":3} {"question":"Binning a column with pandas I have a data frame column with numeric values: \n```\ndf['percentage'].head()\n46.5\n44.2\n100.0\n42.12\n```\nI want to see the column as bin counts: \n```\nbins = [0, 1, 5, 10, 25, 50, 100]\n```\nHow can I get the result as bins with their value counts? 
\n```\n[0, 1] bin amount\n[1, 5] etc\n[5, 10] etc\n...\n```","response":"You can use pandas.cut: \n```\nbins = [0, 1, 5, 10, 25, 50, 100]\ndf['binned'] = pd.cut(df['percentage'], bins)\nprint (df)\n percentage binned\n0 46.50 (25, 50]\n1 44.20 (25, 50]\n2 100.00 (50, 100]\n3 42.12 (25, 50]\n```\n```\nbins = [0, 1, 5, 10, 25, 50, 100]\nlabels = [1,2,3,4,5,6]\ndf['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels)\nprint (df)\n percentage binned\n0 46.50 5\n1 44.20 5\n2 100.00 6\n3 42.12 5\n```\nOr numpy.searchsorted: \n```\nbins = [0, 1, 5, 10, 25, 50, 100]\ndf['binned'] = np.searchsorted(bins, df['percentage'].values)\nprint (df)\n percentage binned\n0 46.50 5\n1 44.20 5\n2 100.00 6\n3 42.12 5\n```\n...and then value_counts or groupby and aggregate size: \n```\ns = pd.cut(df['percentage'], bins=bins).value_counts()\nprint (s)\n(25, 50] 3\n(50, 100] 1\n(10, 25] 0\n(5, 10] 0\n(1, 5] 0\n(0, 1] 0\nName: percentage, dtype: int64\n```\n```\ns = df.groupby(pd.cut(df['percentage'], bins=bins)).size()\nprint (s)\npercentage\n(0, 1] 0\n(1, 5] 0\n(5, 10] 0\n(10, 25] 0\n(25, 50] 3\n(50, 100] 1\ndtype: int64\n```\nBy default cut returns a Categorical, and Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data; see the pandas documentation on operations on categorical data.","best_answers_score":0.761,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45273731\/binning-a-column-with-pandas","best_answers_votes":363,"tags":["python","pandas","numpy","dataframe","binning"],"question_length":326,"response_length":1369,"tag_count":5} {"question":"Cannot get minor grid lines to appear in matplotlib figure Ok so I have the code below for real-time graphing of some data from an embedded device received over serial. It's not meant to be a production tool but rather an internal eng tool, hence it's not terribly user-friendly. The problem is that no matter what I do, I can't get minor grid lines to appear, even though here they are set to True, which=both. I can do anything I want to the major grid lines but the minors won't appear. Any ideas? 
Here is the code: \n```\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom matplotlib import animation\nimport serial\n\nSERIAL_PORT_NUM=9\n\n...a bunch of constants...\n#windows starts serial port numbers at 1, python starts at 0\nSERIAL_PORT_NUM = SERIAL_PORT_NUM - 1\n\"\"\"\nOpen the serial port\n\"\"\"\nser =serial.Serial(port=SERIAL_PORT_NUM,baudrate=115200,bytesize=8,parity='N',stopbits=1,timeout=None,xonxoff=0,rtscts=0)\n\n# First set up the figure, the axis, and the plot element we want to animate\nraw_adc_fig = plt.figure()\nraw_adc_ax = plt.axes(xlim=(0, 200), ylim=(0, 2047))\nraw_adc_ax.grid(True, which='both')\nraw_adc_fig.suptitle(\"Raw ADC data\")\nplt.ylabel(\"ADC values (hex)\")\nplt.xlabel(\"time (sec)\")\nraw_adc_line, = raw_adc_ax.plot([], [], lw=2)\n\ndef read_serial(serial_port):\n tmp = ''\n same_line = True\n while same_line:\n tmp += serial_port.read(1)\n if tmp != '':\n if tmp[-1] == '*':\n same_line = False\n tmp = tmp.rstrip()\n tmp = tmp.lstrip()\n return tmp\n\ndef process_serial(input_data):\n output_data = 0\n intermediate_data = input_data[A_TYPE_START_POS:A_TYPE_STOP_POS + 1]\n if( intermediate_data != ''):\n output_data = int(intermediate_data , 16 )\n else:\n print \"bad data\"\n output_data = -100\n\n return output_data\n\ndef get_sound_value(serial_port):\n cur_line = ''\n\n get_next_line = True\n # read in the next line until a sound packet of type A is found\n while( get_next_line ):\n cur_line = read_serial(serial_port)\n if( (cur_line != '') and (cur_line[0:3] == ROUTER_SOUND_DATA) and (len(cur_line) == D_TYPE_STOP_POS + 2) ):\n get_next_line = False\n\n sound_value = process_serial(cur_line)\n return sound_value\n\n# initialization function: plot the background of each frame\ndef raw_adc_init():\n raw_adc_line.set_data([], [])\n return raw_adc_line,\n\n# animation function. This is called sequentially\ndef raw_adc_animate(i):\n sound_data_list.append( get_sound_value(ser) )\n y = sound_data_list\n if( len(y) == 190 ):\n del y[0]\n x = np.linspace(0, len(y), len(y))\n raw_adc_line.set_data(x, y)\n return raw_adc_line,\n\n# call the animator. blit=True means only re-draw the parts that have changed.\nraw_adc_anim = animation.FuncAnimation(raw_adc_fig, raw_adc_animate, init_func=raw_adc_init, frames=200, interval=1000, blit=True)\n```\nEdit: fixed a bug in opening the serial port. Changed timeout=0 to timeout=None.","response":"Unfortunately, ax.grid is a bit confusing in this regard. (This is a design bug \/ common gotcha.) It turns the minor grid on, but the minor ticks are still turned off. What you need to do is call plt.minorticks_on or ax.minorticks_on in addition to calling ax.grid(True, which='both').","best_answers_score":0.7609,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19940518\/cannot-get-minor-grid-lines-to-appear-in-matplotlib-figure","best_answers_votes":66,"tags":["python","numpy","matplotlib","pyserial","graphing"],"question_length":2981,"response_length":285,"tag_count":5} {"question":"Converting int arrays to string arrays in numpy without truncation Trying to convert int arrays to string arrays in numpy \n```\n\n```python\na=array([0,33,4444522])\n```\n\n```python\na.astype(str)\n#Output\n#array(['0', '3', '4'], \n# dtype='|S1')\n#```\n#Not what I intended \n#```\n```\n\n```python\na.astype('S10')\n#Output\n#array(['0', '33', '4444522'], \n# dtype='|S10')\n#```\n#This works but I had to know 10 was big enough to hold my longest string. 
Is there a way of doing this easily without knowing ahead of time what size string you need? It seems a little dangerous that it just quietly truncates your string without throwing an error.\n```","response":"You can stay in numpy, doing \n```\nnp.char.mod('%d', a)\n```\nThis is twice as fast as map or list comprehensions for 10 elements, and four times as fast for 100. This and other string operations are documented here.","best_answers_score":0.7594,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9958846\/converting-int-arrays-to-string-arrays-in-numpy-without-truncation","best_answers_votes":58,"tags":["python","arrays","string","numpy"],"question_length":619,"response_length":209,"tag_count":4} {"question":"How to have logarithmic bins in a Python histogram As far as I know the option Log=True in the histogram function only refers to the y-axis. \n```\nP.hist(d,bins=50,log=True,alpha=0.5,color='b',histtype='step')\n```\nI need the bins to be equally spaced in log10. Is there something that can do this?","response":"Use logspace() to create a geometric sequence, pass it to the bins parameter, and set the scale of the x-axis to log scale. \n```\nimport pylab as pl\nimport numpy as np\n\ndata = np.random.normal(size=10000)\npl.hist(data, bins=np.logspace(np.log10(0.1),np.log10(1.0), 50))\npl.gca().set_xscale(\"log\")\npl.show()\n```","best_answers_score":0.7579,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/6855710\/how-to-have-logarithmic-bins-in-a-python-histogram","best_answers_votes":160,"tags":["python","numpy","matplotlib","histogram"],"question_length":296,"response_length":304,"tag_count":4} {"question":"How to avoid overlapping of labels & autopct in a pie chart My Python code is: \n```\nimport matplotlib.pyplot as plt\n\nvalues = [234, 64, 54,10, 0, 1, 0, 9, 2, 1, 7, 7]\nmonths = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',\n 'Jul','Aug','Sep','Oct', 'Nov','Dec']\n\ncolors = ['yellowgreen', 'red', 'gold', 'lightskyblue', \n 'white','lightcoral','blue','pink', 'darkgreen', \n 'yellow','grey','violet','magenta','cyan']\n\nplt.pie(values, labels=months, autopct='%1.1f%%', shadow=True, \n colors=colors, startangle=90, radius=1.2)\n\nplt.show()\n```\nIs it possible to show the labels \"Jan\", \"Feb\", \"Mar\", etc.
and the percentages, either: without overlapping, or using an arrow mark?","response":"Alternatively you can put the legends beside the pie graph: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.char.array(['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct', 'Nov','Dec'])\ny = np.array([234, 64, 54,10, 0, 1, 0, 9, 2, 1, 7, 7])\ncolors = ['yellowgreen','red','gold','lightskyblue','white','lightcoral','blue','pink', 'darkgreen','yellow','grey','violet','magenta','cyan']\nporcent = 100.*y\/y.sum()\n\npatches, texts = plt.pie(y, colors=colors, startangle=90, radius=1.2)\nlabels = ['{0} - {1:1.2f} %'.format(i,j) for i,j in zip(x, porcent)]\n\nsort_legend = True\nif sort_legend:\n patches, labels, dummy = zip(*sorted(zip(patches, labels, y),\n key=lambda x: x[2],\n reverse=True))\n\nplt.legend(patches, labels, loc='center left', bbox_to_anchor=(-0.1, 1.),\n fontsize=8)\n\nplt.savefig('piechart.png', bbox_inches='tight')\n```\nEDIT: if you want to keep the legend in the original order, as you mentioned in the comments, you can set sort_legend=False in the code above, giving:","best_answers_score":0.7579,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23577505\/how-to-avoid-overlapping-of-labels-autopct-in-a-pie-chart","best_answers_votes":82,"tags":["python","numpy","matplotlib","pie-chart","plot-annotations"],"question_length":669,"response_length":1098,"tag_count":5} {"question":"numpy convert categorical string arrays to an integer array I'm trying to convert a string array of categorical variables to an integer array of categorical variables. Ex. \n```\nimport numpy as np\na = np.array( ['a', 'b', 'c', 'a', 'b', 'c'])\nprint a.dtype\n\n```python\n|S1\n```\n\nb = np.unique(a)\nprint b\n\n```python\n ['a' 'b' 'c']\n```\n\nc = a.desired_function(b)\nprint c, c.dtype\n\n```python\n[1,2,3,1,2,3] int32\n#Output\n#```\n#I realize this can be done with a loop but I imagine there is an easier way. Thanks.\n```","response":"np.unique has some optional returns; return_inverse gives the integer encoding, which I use very often: \n```\n\n```python\nb, c = np.unique(a, return_inverse=True)\n```\n\n```python\nb\n#Output\n#array(['a', 'b', 'c'], \n# dtype='|S1')\n```\n\n```python\nc\n#Output\n#array([0, 1, 2, 0, 1, 2])\n```\n\n```python\nc+1\n#Output\n#array([1, 2, 3, 1, 2, 3])\n#```\n#It can be used to recreate the original array from the uniques \n#```\n```\n\n```python\nb[c]\n#Output\n#array(['a', 'b', 'c', 'a', 'b', 'c'], \n# dtype='|S1')\n```\n\n```python\n(b[c] == a).all()\n#Output\n#True\n#```\n```","best_answers_score":0.7574,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/3172509\/numpy-convert-categorical-string-arrays-to-an-integer-array","best_answers_votes":71,"tags":["python","statistics","numpy","machine-learning"],"question_length":465,"response_length":432,"tag_count":4} {"question":"Conditionally fill column values based on another columns value in pandas I have a DataFrame with a few columns. One column contains a symbol for which currency is being used, for instance a euro or a dollar sign. Another column contains a budget value. So for instance in one row it could mean a budget of 5000 in euros and in the next row it could say a budget of 2000 in dollars. In pandas I would like to add an extra column to my DataFrame, normalizing the budgets to euros.
So basically, for each row the value in the new column should be the value from the budget column * 1 if the symbol in the currency column is a euro sign, and the value in the new column should be the value of the budget column * 0.78125 if the symbol in the currency column is a dollar sign. I know how to add a column, fill it with values, copy values from another column etc. but not how to fill the new column conditionally based on the value of another column. Any suggestions?","response":"You probably want to do \n```\ndf['Normalized'] = np.where(df['Currency'] == '$', df['Budget'] * 0.78125, df['Budget'])\n```","best_answers_score":0.7572,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10715519\/conditionally-fill-column-values-based-on-another-columns-value-in-pandas","best_answers_votes":131,"tags":["python","python-3.x","pandas","dataframe","numpy"],"question_length":960,"response_length":121,"tag_count":5} {"question":"Annotate Time Series plot I have an index array (x) of dates (datetime objects) and an array of actual values (y: bond prices). Doing the following: \n```py\nplot(x,y)\n```\nproduces a perfectly fine time series graph with the x-axis labeled with the dates. No problem so far. But I want to add text on certain dates. For example, on 2009-10-31, I wish to display the text \"Event 1\" with an arrow pointing to the y value at that date. I have read through the Matplotlib documentation on text() and annotate() to no avail.","response":"Matplotlib uses an internal floating point format for dates. You just need to convert your date to that format (using matplotlib.dates.date2num or matplotlib.dates.datestr2num) and then use annotate as usual. As a somewhat excessively fancy example: \n```\nimport datetime as dt\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\n\nx = [dt.datetime(2009, 05, 01), dt.datetime(2010, 06, 01), \n dt.datetime(2011, 04, 01), dt.datetime(2012, 06, 01)]\ny = [1, 3, 2, 5]\n\nfig, ax = plt.subplots()\nax.plot_date(x, y, linestyle='--')\n\nax.annotate('Test', (mdates.date2num(x[1]), y[1]), xytext=(15, 15), \n textcoords='offset points', arrowprops=dict(arrowstyle='-|>'))\n\nfig.autofmt_xdate()\nplt.show()\n```","best_answers_score":0.7563,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11067368\/annotate-time-series-plot","best_answers_votes":89,"tags":["python","numpy","matplotlib","time-series","annotations"],"question_length":517,"response_length":721,"tag_count":5} {"question":"Why the performance difference between numpy.zeros and numpy.zeros_like? I finally found a performance bottleneck in my code but am confused as to what the reason is. To solve it I changed all my calls of numpy.zeros_like to instead use numpy.zeros. But why is zeros_like sooooo much slower? 
For example (note e-05 on the zeros call): \n```\n\n```python\ntimeit.timeit('np.zeros((12488, 7588, 3), np.uint8)', 'import numpy as np', number = 10)\n#Output\n#5.2928924560546875e-05\n```\n\n```python\ntimeit.timeit('np.zeros_like(x)', 'import numpy as np; x = np.zeros((12488, 7588, 3), np.uint8)', number = 10)\n#Output\n#1.4402990341186523\n#```\n#But then strangely writing to an array created with zeros is noticeably slower than an array created with zeros_like: \n#```\n```\n\n```python\ntimeit.timeit('x[100:-100, 100:-100] = 1', 'import numpy as np; x = np.zeros((12488, 7588, 3), np.uint8)', number = 10)\n#Output\n#0.4310588836669922\n```\n\n```python\ntimeit.timeit('x[100:-100, 100:-100] = 1', 'import numpy as np; x = np.zeros_like(np.zeros((12488, 7588, 3), np.uint8))', number = 10)\n#Output\n#0.33325695991516113\n#```\n#My guess is zeros is using some CPU trick and not actually writing to the memory to allocate it. This is done on the fly when it's written to. But that still doesn't explain the massive discrepancy in array creation times. I'm running Mac OS X Yosemite with the current numpy version: \n#```\n```\n\n```python\nnumpy.__version__\n#Output\n#'1.9.1'\n#```\n```","response":"My timings in IPython are (with a simpler timeit interface): \n```\n\n```python\ntimeit np.zeros_like(x)\n1 loops, best of 3: 420 ms per loop\n```\n\n```python\ntimeit np.zeros((12488, 7588, 3), np.uint8)\n100000 loops, best of 3: 15.1 \u00b5s per loop\n```\nWhen I look at the code with IPython (np.zeros_like??) I see: \n```\nres = empty_like(a, dtype=dtype, order=order, subok=subok)\nmultiarray.copyto(res, 0, casting='unsafe')\n```\nwhile np.zeros is a black box - pure compiled code. Timings for empty are: \n```\n```\n\n```python\ntimeit np.empty_like(x)\n100000 loops, best of 3: 13.6 \u00b5s per loop\n```\n\n```python\ntimeit np.empty((12488, 7588, 3), np.uint8)\n100000 loops, best of 3: 14.9 \u00b5s per loop\n```\nSo the extra time in zeros_like is in that copy. In my tests, the difference in assignment times (x[]=1) is negligible. My guess is that zeros, ones, empty are all early compiled creations. empty_like was added as a convenience, just drawing shape and type info from its input. zeros_like was written with more of an eye toward easy programming maintenance (reusing empty_like) than for speed. np.ones and np.full also use the np.empty ... copyto sequence, and show similar timings. https:\/\/github.com\/numpy\/numpy\/blob\/master\/numpy\/core\/src\/multiarray\/array_assign_scalar.c appears to be the file that copies a scalar (such as 0) to an array. I don't see a use of memset. https:\/\/github.com\/numpy\/numpy\/blob\/master\/numpy\/core\/src\/multiarray\/alloc.c has calls to malloc and calloc. https:\/\/github.com\/numpy\/numpy\/blob\/master\/numpy\/core\/src\/multiarray\/ctors.c - source for zeros and empty. Both call PyArray_NewFromDescr_int, but one ends up using npy_alloc_cache_zero and the other npy_alloc_cache. npy_alloc_cache in alloc.c calls alloc. npy_alloc_cache_zero calls npy_alloc_cache followed by a memset. Code in alloc.c is further confused with a THREAD option. More on the calloc v malloc+memset difference at: Why malloc+memset is slower than calloc? But with caching and garbage collection, I wonder whether the calloc\/memset distinction applies.
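A quick hedged way to see the lazy-allocation effect directly (IPython timings, machine-dependent; np.full, mentioned above, has to touch every byte): \n```\ntimeit np.zeros((12488, 7588, 3), np.uint8) # lazy allocation, microseconds\ntimeit np.full((12488, 7588, 3), 0, np.uint8) # writes the whole block, far slower\n```\n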
This simple test with the memory_profile package supports the claim that zeros and empty allocate memory 'on-the-fly', while zeros_like allocates everything up front: \n```\nN = (1000, 1000) \nM = (slice(None, 500, None), slice(500, None, None))\n\nLine # Mem usage Increment Line Contents\n================================================\n 2 17.699 MiB 0.000 MiB @profile\n 3 def test1(N, M):\n 4 17.699 MiB 0.000 MiB print(N, M)\n 5 17.699 MiB 0.000 MiB x = np.zeros(N) # no memory jump\n 6 17.699 MiB 0.000 MiB y = np.empty(N)\n 7 25.230 MiB 7.531 MiB z = np.zeros_like(x) # initial jump\n 8 29.098 MiB 3.867 MiB x[M] = 1 # jump on usage\n 9 32.965 MiB 3.867 MiB y[M] = 1\n 10 32.965 MiB 0.000 MiB z[M] = 1\n 11 32.965 MiB 0.000 MiB return x,y,z\n```\n```","best_answers_score":0.7556,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/27464039\/why-the-performance-difference-between-numpy-zeros-and-numpy-zeros-like","best_answers_votes":25,"tags":["python","numpy"],"question_length":1346,"response_length":2921,"tag_count":2} {"question":"sampling random floats on a range in numpy How can I sample random floats on an interval [a, b] in numpy? Not just integers, but any real numbers. For example, random_float(5, 10) would return random numbers between [5, 10]. thanks.","response":"The uniform distribution would probably do what you are asking. \n```\nnp.random.uniform(5,10) # A single value\nnp.random.uniform(5,10,[2,3]) # A 2x3 array\n```","best_answers_score":0.7538,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11873741\/sampling-random-floats-on-a-range-in-numpy","best_answers_votes":109,"tags":["python","numpy","scipy"],"question_length":232,"response_length":157,"tag_count":3} {"question":"What's the fastest way in Python to calculate cosine similarity given sparse matrix data? Given a sparse matrix listing, what's the best way to calculate the cosine similarity between each of the columns (or rows) in the matrix? I would rather not iterate n-choose-two times. Say the input matrix is: \n```\nA= \n[0 1 0 0 1\n 0 0 1 1 1\n 1 1 0 1 0]\n```\nThe sparse representation is: \n```\nA = \n0, 1\n0, 4\n1, 2\n1, 3\n1, 4\n2, 0\n2, 1\n2, 3\n```\nIn Python, it's straightforward to work with the matrix-input format: \n```\nimport numpy as np\nfrom sklearn.metrics import pairwise_distances\nfrom scipy.spatial.distance import cosine\n\nA = np.array(\n[[0, 1, 0, 0, 1],\n[0, 0, 1, 1, 1],\n[1, 1, 0, 1, 0]])\n\ndist_out = 1-pairwise_distances(A, metric=\"cosine\")\ndist_out\n```\nGives: \n```\narray([[ 1. , 0.40824829, 0.40824829],\n [ 0.40824829, 1. , 0.33333333],\n [ 0.40824829, 0.33333333, 1. ]])\n```\nThat's fine for a full-matrix input, but I really want to start with the sparse representation (due to the size and sparsity of my matrix). Any ideas about how this could best be accomplished?","response":"You can compute pairwise cosine similarity on the rows of a sparse matrix directly using sklearn. 
As of version 0.17 it also supports sparse output: \n```\nfrom sklearn.metrics.pairwise import cosine_similarity\nfrom scipy import sparse\n\nA = np.array([[0, 1, 0, 0, 1], [0, 0, 1, 1, 1],[1, 1, 0, 1, 0]])\nA_sparse = sparse.csr_matrix(A)\n\nsimilarities = cosine_similarity(A_sparse)\nprint('pairwise dense output:\\n {}\\n'.format(similarities))\n\n#also can output sparse matrices\nsimilarities_sparse = cosine_similarity(A_sparse,dense_output=False)\nprint('pairwise sparse output:\\n {}\\n'.format(similarities_sparse))\n```\nResults: \n```\npairwise dense output:\n[[ 1. 0.40824829 0.40824829]\n[ 0.40824829 1. 0.33333333]\n[ 0.40824829 0.33333333 1. ]]\n\npairwise sparse output:\n(0, 1) 0.408248290464\n(0, 2) 0.408248290464\n(0, 0) 1.0\n(1, 0) 0.408248290464\n(1, 2) 0.333333333333\n(1, 1) 1.0\n(2, 1) 0.333333333333\n(2, 0) 0.408248290464\n(2, 2) 1.0\n```\nIf you want column-wise cosine similarities simply transpose your input matrix beforehand: \n```\nA_sparse.transpose()\n```","best_answers_score":0.7533,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17627219\/whats-the-fastest-way-in-python-to-calculate-cosine-similarity-given-sparse-mat","best_answers_votes":101,"tags":["python","numpy","pandas","similarity","cosine-similarity"],"question_length":1102,"response_length":1088,"tag_count":5} {"question":"How to show the whole image when using OpenCV warpPerspective I have 2 test images here. My question is, how to map the square in the first image to the quadrilateral in the second image without cropping the image. Image 1: Image 2: Here is my current code using the openCV warpPerspective function. \n```\nimport cv2\nimport numpy as np\n\nimg1_square_corners = np.float32([[253,211], [563,211], [563,519],[253,519]])\nimg2_quad_corners = np.float32([[234,197], [520,169], [715,483], [81,472]])\n\nh, mask = cv2.findHomography(img1_square_corners, img2_quad_corners)\nim = cv2.imread(\"image1.png\")\nout = cv2.warpPerspective(im, h, (800,800))\ncv2.imwrite(\"result.png\", out)\n```\nResult: As you can see, because of the dsize=(800,800) parameter in the warpPerspective function, I can't get a full view of image 1. If I adjust the dsize, the square won't map properly. Is there any way to resize the output image so that I can get the whole picture of image 1?","response":"My solution is to calculate the result image size, and then do a translation.
\n```\nimport cv2\nimport numpy as np\n\ndef warpTwoImages(img1, img2, H):\n '''warp img2 to img1 with homography H'''\n h1,w1 = img1.shape[:2]\n h2,w2 = img2.shape[:2]\n pts1 = np.float32([[0,0],[0,h1],[w1,h1],[w1,0]]).reshape(-1,1,2)\n pts2 = np.float32([[0,0],[0,h2],[w2,h2],[w2,0]]).reshape(-1,1,2)\n pts2_ = cv2.perspectiveTransform(pts2, H)\n pts = np.concatenate((pts1, pts2_), axis=0)\n [xmin, ymin] = np.int32(pts.min(axis=0).ravel() - 0.5)\n [xmax, ymax] = np.int32(pts.max(axis=0).ravel() + 0.5)\n t = [-xmin,-ymin]\n Ht = np.array([[1,0,t[0]],[0,1,t[1]],[0,0,1]]) # translate\n\n result = cv2.warpPerspective(img2, Ht.dot(H), (xmax-xmin, ymax-ymin))\n result[t[1]:h1+t[1],t[0]:w1+t[0]] = img1\n return result\n\ndst_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1,1,2)\nsrc_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1,1,2)\nM, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)\n\nresult = warpTwoImages(img1_color, img2_color, M)\n```","best_answers_score":0.7527,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13063201\/how-to-show-the-whole-image-when-using-opencv-warpperspective","best_answers_votes":53,"tags":["python","image-processing","opencv","numpy","computer-vision"],"question_length":933,"response_length":1030,"tag_count":5} {"question":"Load CSV to Pandas MultiIndex DataFrame I have a 719 MB CSV file that looks like: \n```\nfrom, to, dep, freq, arr, code, mode (header row)\nRGBOXFD,RGBPADTON,127,0,27,99999,2\nRGBOXFD,RGBPADTON,127,0,33,99999,2\nRGBOXFD,RGBRDLEY,127,0,1425,99999,2\nRGBOXFD,RGBCHOLSEY,127,0,52,99999,2\nRGBOXFD,RGBMDNHEAD,127,0,91,99999,2\nRGBDIDCOTP,RGBPADTON,127,0,46,99999,2\nRGBDIDCOTP,RGBPADTON,127,0,3,99999,2\nRGBDIDCOTP,RGBCHOLSEY,127,0,61,99999,2\nRGBDIDCOTP,RGBRDLEY,127,0,1430,99999,2\nRGBDIDCOTP,RGBPADTON,127,0,115,99999,2\nand so on...\n```\nI want to load it into a pandas DataFrame. Now I know there is a load from csv method: \n```\nr = pd.DataFrame.from_csv('test_data2.csv')\n```\nBut I specifically want to load it as a 'MultiIndex' DataFrame where from and to are the indexes, so ending up with: \n```\ndep, freq, arr, code, mode\nRGBOXFD RGBPADTON 127 0 27 99999 2\n RGBRDLEY 127 0 33 99999 2\n RGBCHOLSEY 127 0 1425 99999 2\n RGBMDNHEAD 127 0 1525 99999 2\n```\netc.
I'm not sure how to do that?","response":"You could use pd.read_csv: \n```\n\n```python\ndf = pd.read_csv(\"test_data2.csv\", index_col=[0,1], skipinitialspace=True)\n```\n\n```python\ndf\n#Output\n# dep freq arr code mode\n#from to \n#RGBOXFD RGBPADTON 127 0 27 99999 2\n# RGBPADTON 127 0 33 99999 2\n# RGBRDLEY 127 0 1425 99999 2\n# RGBCHOLSEY 127 0 52 99999 2\n# RGBMDNHEAD 127 0 91 99999 2\n#RGBDIDCOTP RGBPADTON 127 0 46 99999 2\n# RGBPADTON 127 0 3 99999 2\n# RGBCHOLSEY 127 0 61 99999 2\n# RGBRDLEY 127 0 1430 99999 2\n# RGBPADTON 127 0 115 99999 2\n#```\n#where I've used skipinitialspace=True to get rid of those annoying spaces in the header row.\n```","best_answers_score":0.7526,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/19103624\/load-csv-to-pandas-multiindex-dataframe","best_answers_votes":66,"tags":["python","csv","numpy","pandas"],"question_length":1033,"response_length":838,"tag_count":4} {"question":"numpy: multiply arrays rowwise I have those arrays: \n```\na = np.array([\n [1,2],\n [3,4],\n [5,6],\n [7,8]])\n\nb = np.array([1,2,3,4])\n```\nand I want them to multiply like so: \n```\n[[1*1, 2*1],\n[3*2, 4*2],\n[5*3, 6*3],\n[7*4, 8*4]]\n```\n\n```python\nbasically out[i] = a[i] * b[i], where a[i].shape is (2,) and b[i] then is a scalar. What's the trick? np.multiply seems not to work: \n#Output\n#```\n```\n\n```python\nnp.multiply(a, b)\n#Output\n#ValueError: operands could not be broadcast together with shapes (4,2) (4)\n#```\n```","response":"add an axis to b: \n```\n\n```python\nnp.multiply(a, b[:, np.newaxis])\n#Output\n#array([[ 1, 2],\n# [ 6, 8],\n# [15, 18],\n# [28, 32]])\n#```\n```","best_answers_score":0.7525,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/22934219\/numpy-multiply-arrays-rowwise","best_answers_votes":54,"tags":["python","numpy"],"question_length":487,"response_length":132,"tag_count":2} {"question":"Rank items in an array using Python\/NumPy, without sorting array twice I have an array of numbers and I'd like to create another array that represents the rank of each item in the first array. I'm using Python and NumPy. For example: \n```\narray = [4,2,7,1]\nranks = [2,1,3,0]\n```\nHere's the best method I've come up with: \n```\narray = numpy.array([4,2,7,1])\ntemp = array.argsort()\nranks = numpy.arange(len(array))[temp.argsort()]\n```\nAre there any better\/faster methods that avoid sorting the array twice?","response":"This question is a few years old, and the accepted answer is great, but I think the following is still worth mentioning. If you don't mind the dependence on scipy, you can use scipy.stats.rankdata: \n```\n\n```python\nfrom scipy.stats import rankdata\n```\n\n```python\na = [4, 2, 7, 1]\n```\n\n```python\nrankdata(a)\n#Output\n#array([ 3., 2., 4., 1.])\n```\n\n```python\n(rankdata(a) - 1).astype(int)\n#Output\n#array([2, 1, 3, 0])\n#```\n#A nice feature of rankdata is that the method argument provides several options for handling ties. For example, there are three occurrences of 20 and two occurrences of 40 in b: \n#```\n```\n\n```python\nb = [40, 20, 70, 10, 20, 50, 30, 40, 20]\n```\nThe default assigns the average rank to the tied values: \n```\n```\n\n```python\nrankdata(b)\n#Output\n#array([ 6.5, 3. , 9. , 1. , 3. , 8. , 5. , 6.5, 3. 
])\n#```\n#method='ordinal' assigns consecutive ranks: \n#```\n```\n\n```python\nrankdata(b, method='ordinal')\n#Output\n#array([6, 2, 9, 1, 3, 8, 5, 7, 4])\n#```\n#method='min' assigns the minimum rank of the tied values to all the tied values: \n#```\n```\n\n```python\nrankdata(b, method='min')\n#Output\n#array([6, 2, 9, 1, 2, 8, 5, 6, 2])\n#```\n#See the docstring for more options.\n```","best_answers_score":0.7523,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/5284646\/rank-items-in-an-array-using-python-numpy-without-sorting-array-twice","best_answers_votes":139,"tags":["python","sorting","numpy"],"question_length":504,"response_length":1139,"tag_count":3} {"question":"Have numpy argsort return an array of 2d indices? If we have a 1d array \n```\narr = np.random.randint(7, size=(5))\n# [3 1 4 6 2]\nprint np.argsort(arr)\n# [1 4 0 2 3] <= The indices in the sorted order\n```\nIf we have a 2d array \n```\narr = np.random.randint(7, size=(3, 3))\n# [[5 2 4]\n# [3 3 3]\n# [6 1 2]]\nprint np.argsort(arr)\n# [[1 2 0]\n# [0 1 2]\n# [1 2 0]] <= It sorts each row\n```\nWhat I need is the 2d indices that sort this matrix in its entirety. Something like this: \n```\n# [[2 1] => 1\n# [0 1] => 2\n# [2 2] => 2\n# .\n# .\n# .\n# [0 2] => 4\n# [0 0] => 5\n# [2 0]] => 6\n```\nHow do I get \"2d indices\" for the sorting of a 2d array?","response":"Apply numpy.argsort on the flattened array and then unravel the indices back to the (3, 3) shape: \n```\n\n```python\narr = np.array([[5, 2, 4],\n#Output\n#[3, 3, 3],\n#[6, 1, 2]])\n```\n\n```python\nnp.dstack(np.unravel_index(np.argsort(arr.ravel()), (3, 3)))\n#Output\n#array([[[2, 1],\n# [0, 1],\n# [2, 2],\n# [1, 0],\n# [1, 1],\n# [1, 2],\n# [0, 2],\n# [0, 0],\n# [2, 0]]])\n#```\n```","best_answers_score":0.7522,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30577375\/have-numpy-argsort-return-an-array-of-2d-indices","best_answers_votes":56,"tags":["python","arrays","numpy"],"question_length":628,"response_length":363,"tag_count":3} {"question":"Getting the integer index of a Pandas DataFrame row fulfilling a condition? I have the following DataFrame: \n```\na b c\nb\n2 1 2 3\n5 4 5 6\n```\nAs you can see, column b is used as an index. I want to get the ordinal number of the row fulfilling ('b' == 5), which in this case would be 1. The column being tested can be either an index column (as with b in this case) or a regular column, e.g. I may want to find the index of the row fulfilling ('c' == 6).","response":"Use Index.get_loc instead. Reusing @unutbu's setup code, you'll achieve the same results. 
\n```\n\n```python\nimport pandas as pd\n```\n\n```python\nimport numpy as np\n```\n\n```python\ndf = pd.DataFrame(np.arange(1,7).reshape(2,3),\n#Output\n# columns = list('abc'),\n# index=pd.Series([2,5], name='b'))\n```\n\n```python\ndf\n#Output\n# a b c\n#b\n#2 1 2 3\n#5 4 5 6\n```\n\n```python\ndf.index.get_loc(5)\n#Output\n#1\n#```\n```","best_answers_score":0.7519,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/18199288\/getting-the-integer-index-of-a-pandas-dataframe-row-fulfilling-a-condition","best_answers_votes":79,"tags":["python","numpy","pandas"],"question_length":460,"response_length":360,"tag_count":3} {"question":"Moving average or running mean Is there a SciPy function or NumPy function or module for Python that calculates the running mean of a 1D array given a specific window?","response":"NOTE: More efficient solutions may include scipy.ndimage.uniform_filter1d (see this answer), or using newer libraries including talib's talib.MA. Use np.convolve: \n```\nnp.convolve(x, np.ones(N)\/N, mode='valid')\n```\nExplanation The running mean is a case of the mathematical operation of convolution. For the running mean, you slide a window along the input and compute the mean of the window's contents. For discrete 1D signals, convolution is the same thing, except instead of the mean you compute an arbitrary linear combination, i.e., multiply each element by a corresponding coefficient and add up the results. Those coefficients, one for each position in the window, are sometimes called the convolution kernel. The arithmetic mean of N values is (x_1 + x_2 + ... + x_N) \/ N, so the corresponding kernel is (1\/N, 1\/N, ..., 1\/N), and that's exactly what we get by using np.ones(N)\/N. Edges The mode argument of np.convolve specifies how to handle the edges. I chose the valid mode here because I think that's how most people expect the running mean to work, but you may have other priorities. Here is a plot that illustrates the difference between the modes: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nmodes = ['full', 'same', 'valid']\nfor m in modes:\n plt.plot(np.convolve(np.ones(200), np.ones(50)\/50, mode=m));\nplt.axis([-10, 251, -.1, 1.1]);\nplt.legend(modes, loc='lower center');\nplt.show()\n```","best_answers_score":0.7517,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/13728392\/moving-average-or-running-mean","best_answers_votes":382,"tags":["python","numpy","scipy","moving-average"],"question_length":167,"response_length":1419,"tag_count":4} {"question":"How to try-except an illegal matrix operation due to singularity in NumPy In NumPy, I'm trying to use linalg to compute matrix inverses at each step of a Newton-Raphson scheme (the problem size is small intentionally so that we can invert analytically computed Hessian matrices). However, after I get far along towards convergence, the Hessian gets close to singular. Is there any method within NumPy that lets me test whether a matrix is considered singular (computing determinant is not robust enough)? Ideally, it would be nice if there's a way to use a try except block to catch NumPy's singular array error. How would I do this? 
The NumPy error given at the terminal is: \n```\nraise LinAlgError, 'Singular matrix'\nnumpy.linalg.linalg.LinAlgError: Singular matrix\n```","response":"The syntax would be like this: \n```\nimport numpy as np\n\ntry:\n # your code that will (maybe) throw\nexcept np.linalg.LinAlgError as err:\n if 'Singular matrix' in str(err):\n # your error handling block\n else:\n raise\n```","best_answers_score":0.7517,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/9155478\/how-to-try-except-an-illegal-matrix-operation-due-to-singularity-in-numpy","best_answers_votes":58,"tags":["python","numpy","linear-algebra"],"question_length":770,"response_length":239,"tag_count":3} {"question":"how to implement tensorflow's next_batch for own data In the tensorflow MNIST tutorial the mnist.train.next_batch(100) function comes in very handy. I am now trying to implement a simple classification myself. I have my training data in a numpy array. How could I implement a similar function for my own data to give me the next batch? \n```\nsess = tf.InteractiveSession()\ntf.global_variables_initializer().run()\nXtr, Ytr = loadData()\nfor it in range(1000):\n batch_x = Xtr.next_batch(100)\n batch_y = Ytr.next_batch(100)\n```","response":"The link you posted says: \"we get a \"batch\" of one hundred random data points from our training set\". In my example I use a global function (not a method like in your example) so there will be a difference in syntax. In my function you'll need to pass the number of samples wanted and the data array. Here is the correct code, which ensures samples have correct labels: \n```\nimport numpy as np\n\ndef next_batch(num, data, labels):\n '''\n Return a total of `num` random samples and labels. \n '''\n idx = np.arange(0, len(data))\n np.random.shuffle(idx)\n idx = idx[:num]\n data_shuffle = [data[i] for i in idx]\n labels_shuffle = [labels[i] for i in idx]\n\n return np.asarray(data_shuffle), np.asarray(labels_shuffle)\n\nXtr, Ytr = np.arange(0, 10), np.arange(0, 100).reshape(10, 10)\nprint(Xtr)\nprint(Ytr)\n\nXtr, Ytr = next_batch(5, Xtr, Ytr)\nprint('\\n5 random samples')\nprint(Xtr)\nprint(Ytr)\n```\nAnd a demo run: \n```\n[0 1 2 3 4 5 6 7 8 9]\n[[ 0 1 2 3 4 5 6 7 8 9]\n [10 11 12 13 14 15 16 17 18 19]\n [20 21 22 23 24 25 26 27 28 29]\n [30 31 32 33 34 35 36 37 38 39]\n [40 41 42 43 44 45 46 47 48 49]\n [50 51 52 53 54 55 56 57 58 59]\n [60 61 62 63 64 65 66 67 68 69]\n [70 71 72 73 74 75 76 77 78 79]\n [80 81 82 83 84 85 86 87 88 89]\n [90 91 92 93 94 95 96 97 98 99]]\n\n5 random samples\n[9 1 5 6 7]\n[[90 91 92 93 94 95 96 97 98 99]\n [10 11 12 13 14 15 16 17 18 19]\n [50 51 52 53 54 55 56 57 58 59]\n [60 61 62 63 64 65 66 67 68 69]\n [70 71 72 73 74 75 76 77 78 79]]\n```","best_answers_score":0.7515,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/40994583\/how-to-implement-tensorflows-next-batch-for-own-data","best_answers_votes":34,"tags":["python","numpy","neural-network","tensorflow","classification"],"question_length":525,"response_length":1488,"tag_count":5} {"question":"Numpy how to iterate over columns of array? Suppose I have an m x n array. I want to pass each column of this array to a function to perform some operation on the entire column. How do I iterate over the columns of the array? 
For example, I have a 4 x 3 array like \n```\n1 99 2\n2 14 5\n3 12 7\n4 43 1\n\nfor column in array:\n some_function(column)\n```\nwhere column would be \"1,2,3,4\" in the first iteration, \"99,14,12,43\" in the second, and \"2,5,7,1\" in the third.","response":"Just iterate over the transpose of your array: \n```\nfor column in array.T:\n some_function(column)\n```","best_answers_score":0.7507,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10148818\/numpy-how-to-iterate-over-columns-of-array","best_answers_votes":303,"tags":["python","loops","numpy"],"question_length":465,"response_length":104,"tag_count":3} {"question":"Fitting a Weibull distribution using Scipy I am trying to recreate maximum likelihood distribution fitting, I can already do this in Matlab and R, but now I want to use scipy. In particular, I would like to estimate the Weibull distribution parameters for my data set. I have tried this: \n```\nimport scipy.stats as s\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef weib(x,n,a):\n return (a \/ n) * (x \/ n)**(a - 1) * np.exp(-(x \/ n)**a)\n\ndata = np.loadtxt(\"stack_data.csv\")\n\n(loc, scale) = s.exponweib.fit_loc_scale(data, 1, 1)\nprint loc, scale\n\nx = np.linspace(data.min(), data.max(), 1000)\nplt.plot(x, weib(x, loc, scale))\nplt.hist(data, data.max(), density=True)\nplt.show()\n```\nAnd get this: \n```\n(2.5827280639441961, 3.4955032285727947)\n```\nAnd a distribution that looks like this: I have been using the exponweib after reading this http:\/\/www.johndcook.com\/distributions_scipy.html. I have also tried the other Weibull functions in scipy (just in case!). In Matlab (using the Distribution Fitting Tool - see screenshot) and in R (using both the MASS library function fitdistr and the GAMLSS package) I get a (loc) and b (scale) parameters more like 1.58463497 5.93030013. I believe all three methods use the maximum likelihood method for distribution fitting. I have posted my data here if you would like to have a go! And for completeness I am using Python 2.7.5, Scipy 0.12.0, R 2.15.2 and Matlab 2012b. Why am I getting a different result!?","response":"My guess is that you want to estimate the shape parameter and the scale of the Weibull distribution while keeping the location fixed. Fixing loc assumes that the values of your data and of the distribution are positive with lower bound at zero. floc=0 keeps the location fixed at zero, f0=1 keeps the first shape parameter of the exponential weibull fixed at one. \n```\n\n```python\nstats.exponweib.fit(data, floc=0, f0=1)\n#Output\n#[1, 1.8553346917584836, 0, 6.8820748596850905]\n```\n\n```python\nstats.weibull_min.fit(data, floc=0)\n#Output\n#[1.8553346917584836, 0, 6.8820748596850549]\n#```\n#The fit compared to the histogram looks ok, but not very good. The parameter estimates are a bit higher than the ones you mention from R and Matlab. Update: The closest I can get to the plot that is now available is with an unrestricted fit, but using starting values. The plot is still less peaked. Note that values in fit that don't have an f in front are used as starting values. 
\n#```\n```\n\n```python\nfrom scipy import stats\n```\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n```python\nplt.plot(data, stats.exponweib.pdf(data, *stats.exponweib.fit(data, 1, 1, scale=2, loc=0)))\n```\n\n```python\n_ = plt.hist(data, bins=np.linspace(0, 16, 33), normed=True, alpha=0.5);\n```\n\n```python\nplt.show()\n#Output\n#```\n```","best_answers_score":0.7503,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17481672\/fitting-a-weibull-distribution-using-scipy","best_answers_votes":41,"tags":["python","numpy","scipy","distribution","weibull"],"question_length":1457,"response_length":1187,"tag_count":5} {"question":"How to append data to one specific dataset in a hdf5 file with h5py I am looking for a possibility to append data to an existing dataset inside a .h5 file using Python (h5py). A short intro to my project: I am trying to train a CNN using medical image data. Because of the huge amount of data and heavy memory usage during the transformation of the data to NumPy arrays, I needed to split the \"transformation\" into a few data chunks: load and preprocess the first 100 medical images and save the NumPy arrays to an hdf5 file, then load the next 100 datasets and append to the existing .h5 file, and so on. Now, I tried to store the first 100 transformed NumPy arrays as follows: \n```\nimport h5py\nfrom LoadIPV import LoadIPV\n\nX_train_data, Y_train_data, X_test_data, Y_test_data = LoadIPV()\n\nwith h5py.File('.\\PreprocessedData.h5', 'w') as hf:\n hf.create_dataset(\"X_train\", data=X_train_data, maxshape=(None, 512, 512, 9))\n hf.create_dataset(\"X_test\", data=X_test_data, maxshape=(None, 512, 512, 9))\n hf.create_dataset(\"Y_train\", data=Y_train_data, maxshape=(None, 512, 512, 1))\n hf.create_dataset(\"Y_test\", data=Y_test_data, maxshape=(None, 512, 512, 1))\n```\nAs can be seen, the transformed NumPy arrays are split into four different \"groups\" that are stored in the four hdf5 datasets [X_train, X_test, Y_train, Y_test]. The LoadIPV() function performs the preprocessing of the medical image data. My problem is that I would like to store the next 100 NumPy arrays in the same .h5 file, in the existing datasets: that means that I would like to append, for example, the next 100 NumPy arrays to the existing X_train dataset of shape [100, 512, 512, 9], such that X_train ends up with shape [200, 512, 512, 9]. The same should work for the other three datasets X_test, Y_train and Y_test.","response":"I have found a solution that seems to work! Have a look at this: incremental writes to hdf5 with h5py! In order to append data to a specific dataset it is necessary to first resize the specific dataset in the corresponding axis and subsequently append the new data at the end of the \"old\" nparray.
Thus, the solution looks like this: \n```\nwith h5py.File('.\PreprocessedData.h5', 'a') as hf:\n hf[\"X_train\"].resize((hf[\"X_train\"].shape[0] + X_train_data.shape[0]), axis = 0)\n hf[\"X_train\"][-X_train_data.shape[0]:] = X_train_data\n\n hf[\"X_test\"].resize((hf[\"X_test\"].shape[0] + X_test_data.shape[0]), axis = 0)\n hf[\"X_test\"][-X_test_data.shape[0]:] = X_test_data\n\n hf[\"Y_train\"].resize((hf[\"Y_train\"].shape[0] + Y_train_data.shape[0]), axis = 0)\n hf[\"Y_train\"][-Y_train_data.shape[0]:] = Y_train_data\n\n hf[\"Y_test\"].resize((hf[\"Y_test\"].shape[0] + Y_test_data.shape[0]), axis = 0)\n hf[\"Y_test\"][-Y_test_data.shape[0]:] = Y_test_data\n```\nHowever, note that you should create the dataset with maxshape=(None,), for example \n```\nh5f.create_dataset('X_train', data=orig_data, compression=\"gzip\", chunks=True, maxshape=(None,))\n```\notherwise the dataset cannot be extended.","best_answers_score":0.7499,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/47072859\/how-to-append-data-to-one-specific-dataset-in-a-hdf5-file-with-h5py","best_answers_votes":75,"tags":["python","numpy","deep-learning","hdf5","h5py"],"question_length":1797,"response_length":1189,"tag_count":5} {"question":"Python+OpenCV: cv2.imwrite I'm trying to detect a face and write the area with the face to a separate file. How can I do it? I think that I must use \"faces\" (you can see this var in the code). But how? \n```\nfrom ffnet import mlgraph, ffnet, tmlgraph, imlgraph\nimport pylab\nimport sys\nimport cv,cv2\nimport numpy\ncascade = cv.Load('C:\\opencv\\data\\haarcascades\\haarcascade_frontalface_alt.xml')\n\n\ndef detect(image):\n bitmap = cv.fromarray(image)\n faces = cv.HaarDetectObjects(bitmap, cascade, cv.CreateMemStorage(0))\n if faces:\n for (x,y,w,h),n in faces: \n cv2.rectangle(image,(x,y),(x+w,y+h),(255,255,255),3)\n return image\n\nif __name__ == \"__main__\":\n cam = cv2.VideoCapture(0)\n while 1:\n _,frame =cam.read()\n frame = numpy.asarray(detect(frame))\n cv2.imshow(\"features\", frame)\n if cv2.waitKey(1) == 0x1b: # ESC\n print 'ESC pressed. Exiting ...'\n break\n```","response":"The following code should extract faces from images and save them to disk: \n```\ndef detect(image):\n image_faces = []\n bitmap = cv.fromarray(image)\n faces = cv.HaarDetectObjects(bitmap, cascade, cv.CreateMemStorage(0))\n if faces:\n for (x,y,w,h),n in faces:\n image_faces.append(image[y:(y+h), x:(x+w)])\n #cv2.rectangle(image,(x,y),(x+w,y+h),(255,255,255),3)\n return image_faces\n\nif __name__ == \"__main__\":\n cam = cv2.VideoCapture(0)\n while 1:\n _,frame =cam.read()\n image_faces = []\n image_faces = detect(frame)\n for i, face in enumerate(image_faces):\n cv2.imwrite(\"face-\" + str(i) + \".jpg\", face)\n\n #cv2.imshow(\"features\", frame)\n if cv2.waitKey(1) == 0x1b: # ESC\n print 'ESC pressed. Exiting ...'\n break\n```","best_answers_score":0.7499,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20425724\/pythonopencv-cv2-imwrite","best_answers_votes":42,"tags":["python","opencv","python-2.7","numpy"],"question_length":911,"response_length":829,"tag_count":4} {"question":"Numpy concatenate 2D arrays with 1D array I am trying to concatenate 4 arrays, one 1D array of shape (78427,) and 3 2D arrays of shape (78427, 375\/81\/103). Basically these are 4 arrays with features for 78427 images, in which the 1D array only has 1 value for each image. 
I tried concatenating the arrays as follows: \n```\n\n```python\nprint X_Cscores.shape\n#Output\n#(78427, 375)\n```\n\n```python\nprint X_Mscores.shape\n#Output\n#(78427, 81)\n```\n\n```python\nprint X_Tscores.shape\n#Output\n#(78427, 103)\n```\n\n```python\nprint X_Yscores.shape\n#Output\n#(78427,)\n```\n\n```python\nnp.concatenate((X_Cscores, X_Mscores, X_Tscores, X_Yscores), axis=1)\n#Output\n#```\n#This results in the following error: Traceback (most recent call last): File \"<stdin>\", line 1, in <module> ValueError: all the input arrays must have same number of dimensions The problem seems to be the 1D array, but I can't really see why (it also has 78427 values). I tried to transpose the 1D array before concatenating it, but that also didn't work. Any help on what's the right method to concatenate these arrays would be appreciated!\n```","response":"Try concatenating X_Yscores[:, None] (or X_Yscores[:, np.newaxis] as imaluengo suggests). This creates a 2D array out of a 1D array. Example: \n```\nA = np.array([1, 2, 3])\nprint A.shape\nprint A[:, None].shape\n```\nOutput: \n```\n(3,)\n(3,1)\n```","best_answers_score":0.7482,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/30305069\/numpy-concatenate-2d-arrays-with-1d-array","best_answers_votes":38,"tags":["python","arrays","numpy","concatenation"],"question_length":973,"response_length":1287,"tag_count":4} {"question":"Find out the percentage of missing values in each column in the given dataset \n```\nimport pandas as pd\ndf = pd.read_csv('https:\/\/query.data.world\/s\/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0')\npercent= 100*(len(df.loc[:,df.isnull().sum(axis=0)>=1 ].index) \/ len(df.index))\nprint(round(percent,2))\n```\ninput is https:\/\/query.data.world\/s\/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0 and the output should be \n```\nOrd_id 0.00\nProd_id 0.00\nShip_id 0.00\nCust_id 0.00\nSales 0.24\nDiscount 0.65\nOrder_Quantity 0.65\nProfit 0.65\nShipping_Cost 0.65\nProduct_Base_Margin 1.30\ndtype: float64\n```","response":"How about this? I think I actually found something similar on here once before, but I'm not seeing it now... \n```\npercent_missing = df.isnull().sum() * 100 \/ len(df)\nmissing_value_df = pd.DataFrame({'column_name': df.columns,\n 'percent_missing': percent_missing})\n```\nAnd if you want the missing percentages sorted, follow the above with: \n```\nmissing_value_df.sort_values('percent_missing', inplace=True)\n```\nAs mentioned in the comments, you may also be able to get by with just the first line in my code above, i.e.: \n```\npercent_missing = df.isnull().sum() * 100 \/ len(df)\n```","best_answers_score":0.748,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/51070985\/find-out-the-percentage-of-missing-values-in-each-column-in-the-given-dataset","best_answers_votes":131,"tags":["python","python-3.x","pandas","numpy"],"question_length":684,"response_length":612,"tag_count":4} {"question":"How to calculate 1st and 3rd quartiles? I have a DataFrame: \n```\ntime_diff avg_trips\n0 0.450000 1.0\n1 0.483333 1.0\n2 0.500000 1.0\n3 0.516667 1.0\n4 0.533333 2.0\n```\nI want to get the 1st quartile, 3rd quartile and median for the column time_diff. To obtain the median, I use np.median(df[\"time_diff\"].values).
How can I calculate quartiles?","response":"You can use np.percentile to calculate quartiles (including the median): \n```python\nnp.percentile(df.time_diff, 25) # Q1\n#Output\n#0.48333300000000001\n```\n\n```python\nnp.percentile(df.time_diff, 50) # median\n#Output\n#0.5\n```\n\n```python\nnp.percentile(df.time_diff, 75) # Q3\n#Output\n#0.51666699999999999\n```\nOr all at once: \n```python\nnp.percentile(df.time_diff, [25, 50, 75])\n#Output\n#array([ 0.483333, 0.5 , 0.516667])\n```","best_answers_score":0.7476,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/45926230\/how-to-calculate-1st-and-3rd-quartiles","best_answers_votes":94,"tags":["python","python-2.7","pandas","numpy"],"question_length":356,"response_length":369,"tag_count":4} {"question":"Numpy: For every element in one array, find the index in another array I have two 1D arrays, x & y, one smaller than the other. I'm trying to find the index of every element of y in x. I've found two naive ways to do this; the first is slow, and the second memory-intensive. The slow way \n```\nindices= []\nfor iy in y:\n indices += np.where(x==iy)[0][0]\n```\nThe memory hog \n```\nxe = np.outer([1,]*len(x), y)\nye = np.outer(x, [1,]*len(y))\njunk, indices = np.where(np.equal(xe, ye))\n```\nIs there a faster way or less memory intensive approach? Ideally the search would take advantage of the fact that we are searching for not one thing in a list, but many things, and thus is slightly more amenable to parallelization. Bonus points if you don't assume that every element of y is actually in x.","response":"As Joe Kington said, searchsorted() can search for elements very quickly. To deal with elements that are not in x, you can check the search result against the original y and create a masked array: \n```\nimport numpy as np\nx = np.array([3,5,7,1,9,8,6,6])\ny = np.array([2,1,5,10,100,6])\n\nindex = np.argsort(x)\nsorted_x = x[index]\nsorted_index = np.searchsorted(sorted_x, y)\n\nyindex = np.take(index, sorted_index, mode=\"clip\")\nmask = x[yindex] != y\n\nresult = np.ma.array(yindex, mask=mask)\nprint result\n```\nThe result is: \n```\n[-- 3 1 -- -- 6]\n```","best_answers_score":0.7474,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/8251541\/numpy-for-every-element-in-one-array-find-the-index-in-another-array","best_answers_votes":52,"tags":["python","arrays","search","numpy","indexing"],"question_length":792,"response_length":534,"tag_count":5} {"question":"array.shape() giving error tuple not callable I have a 2D numpy array called results, which contains its own array of data, and I want to go into it and use each list: \n```py\nfor r in results:\n print \"r:\"\n print r\n y_pred = np.array(r)\n print y_pred.shape()\n```\nThis is the output I get: \n```none\nr:\n[ 25. 25. 25. 25. 25. 25. 26. 26. 26. 26. 26. 22. 27. 27. 42.\n 23. 23. 23. 28. 28. 28. 44. 29. 29. 30. 30. 30. 18. 18. 18.\n 19. 30. 17. 17. 17. 17. 2. 19. 2. 17. 17. 17. 17. 17. 17.\n 4. 17. 17. 41. 7. 17. 19. 19. 19. 10. 32. 4. 19. 34. 19.\n 34. 34. 34. 34. 34. 34. 20. 20. 20. 36. 36. 36. 4. 36. 36.\n 22. 22. 22. 22. 22. 22. 23. 23. 23. 27. 27. 27. 24. 39. 39.\n 10. 10. 10. 6. 10. 10. 11. 11. 11. 11. 11. 11. 12. 12. 12.\n 12. 12. 12. 13. 13. 13. 14. 14. 14. 15. 15. 15. 1. 17. 1.\n 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.\n 19. 19. 19. 2. 2. 4. 3. 3. 3. 4. 4. 4. 4. 4. 4.\n 4. 19. 4. 4. 4. 17. 5. 5. 5. 6. 6. 6. 6. 6. 6.\n 7. 7. 7. 7. 7. 7. 8. 8. 8. 8. 8. 8. 9. 9. 9.\n 23. 38. 38. 34. 34. 10. 17. 17. 
26. 0. 42. 0. 18. 32. 32.\n 0. 0. 21. 38. 38. 38. 27. 27. 27. 0. 0. 0. 34. 2. 2.\n 0. 26. 26. 36. 0. 36. 36. 36. 23. 0. 27. 38. 25. 25. 25.\n 26. 26. 26. 0. 15. 15. 32. 38. 38. 0. 32. 32. 32. 41. 32.\n 7. 34. 32. 42. 34. 34. 36. 36. 25. 32. 32. 32. 36. 17. 8.\n 32. 17. 38. 3. 3. 3. 18. 18. 18. 0. 1. 1. 34. 1. 1.\n 34. 17. 17. 34. 34. 34. 34. 34. 34. 17. 17. 17. 24. 2. 32.\n 2. 2. 2. 0. 2. 2. 0. 34. 34. 0. 1. 1. 38. 23. 38.]\nTraceback (most recent call last):\n File \"C:\\Users\\app\\Documents\\Python Scripts\\gbc_classifier_test.py\", line 93, in \n print y_pred.shape()\nTypeError: 'tuple' object is not callable\n```\nI don't understand why y_pred is not a regular array and why it's being considered a tuple; I've assigned it to be an array using np.array(r).","response":"shape is just an attribute, not a method. Just use y_pred.shape (no parentheses). (The error message isn't telling you that y_pred is a tuple, it's telling you that y_pred.shape is a tuple.)","best_answers_score":0.7473,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/25125168\/array-shape-giving-error-tuple-not-callable","best_answers_votes":98,"tags":["python","arrays","numpy","tuples","typeerror"],"question_length":2163,"response_length":190,"tag_count":5} {"question":"How to gauss-filter (blur) a floating point numpy array I have got a numpy array a of type float64. How can I blur this data with a Gauss filter? I have tried \n```\nfrom PIL import Image, ImageFilter\n\nimage = Image.fromarray(a)\nfiltered = image.filter(ImageFilter.GaussianBlur(radius=7))\n```\n, but this yields ValueError: 'image has wrong mode'. (It has mode F.) I could create an image of suitable mode by multiplying a with some constant, then rounding to integer. That should work, but I would like to have a more direct way. (I am using Pillow 2.7.0.)","response":"If you have a two-dimensional numpy array a, you can use a Gaussian filter on it directly without using Pillow to convert it to an image first. scipy has a function gaussian_filter that does the same. \n```\nfrom scipy.ndimage import gaussian_filter\n\nblurred = gaussian_filter(a, sigma=7)\n```","best_answers_score":0.7472,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/29920114\/how-to-gauss-filter-blur-a-floating-point-numpy-array","best_answers_votes":63,"tags":["python","numpy","opencv","python-imaging-library","filtering"],"question_length":554,"response_length":290,"tag_count":5} {"question":"numpy with python: convert 3d array to 2d Say that I have a color image, and naturally this will be represented by a 3-dimensional array in python, say of shape (n x m x 3), and call it img. I want a new 2-d array, call it \"narray\", to have a shape (3,nxm), such that each row of this array contains the \"flattened\" version of the R, G, and B channels respectively. Moreover, it should have the property that I can easily reconstruct back any of the original channels by something like \n```\nnarray[0,].reshape(img.shape[0:2]) #so this should reconstruct back the R channel.\n```\nThe question is how can I construct \"narray\" from \"img\"? The simple img.reshape(3,-1) does not work, as the order of the elements is not the one I want.","response":"You need to use np.transpose to rearrange dimensions. Now, n x m x 3 is to be converted to 3 x (n*m), so send the last axis to the front and shift the remaining axes (0,1) one position to the right. Finally, reshape to have 3 rows.
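To make the axis bookkeeping concrete before the final reshape, here is a toy shape check (purely illustrative; the array here is made up for the example): \n```python\nimport numpy as np\nimg = np.zeros((4, 2, 3)) # n=4, m=2, 3 colour channels\nprint(img.transpose(2, 0, 1).shape) # (3, 4, 2): channels first, ready to collapse to (3, 8)\n```\n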
Thus, the implementation would be - \n```\nimg.transpose(2,0,1).reshape(3,-1)\n```\nSample run - \n```python\nimg\n#Output\n#array([[[155, 33, 129],\n# [161, 218, 6]],\n#\n# [[215, 142, 235],\n# [143, 249, 164]],\n#\n# [[221, 71, 229],\n# [ 56, 91, 120]],\n#\n# [[236, 4, 177],\n# [171, 105, 40]]])\n```\n\n```python\nimg.transpose(2,0,1).reshape(3,-1)\n#Output\n#array([[155, 161, 215, 143, 221, 56, 236, 171],\n# [ 33, 218, 142, 249, 71, 91, 4, 105],\n# [129, 6, 235, 164, 229, 120, 177, 40]])\n```","best_answers_score":0.7467,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/32838802\/numpy-with-python-convert-3d-array-to-2d","best_answers_votes":54,"tags":["python","arrays","numpy","image-processing","computer-vision"],"question_length":736,"response_length":761,"tag_count":5} {"question":"Python numpy float16 datatype operations, and float8? When performing math operations on float16 Numpy numbers, the result is also a float16-type number. My question is: how exactly is the result computed? Say I'm multiplying\/adding two float16 numbers; does Python generate the result in float32 and then truncate\/round the result to float16? Or is the calculation performed in '16bit multiplexer\/adder hardware' all the way? Another question: is there a float8 type? I couldn't find this one... if not, then why? Thank you all!","response":"To the first question: there's no hardware support for float16 on a typical processor (at least outside the GPU). NumPy does exactly what you suggest: convert the float16 operands to float32, perform the scalar operation on the float32 values, then round the float32 result back to float16. It can be proved that the results are still correctly-rounded: the precision of float32 is large enough (relative to that of float16) that double rounding isn't an issue here, at least for the four basic arithmetic operations and square root. In the current NumPy source, this is what the definition of the four basic arithmetic operations looks like for float16 scalar operations: \n```\n#define half_ctype_add(a, b, outp) *(outp) = \\\n npy_float_to_half(npy_half_to_float(a) + npy_half_to_float(b))\n#define half_ctype_subtract(a, b, outp) *(outp) = \\\n npy_float_to_half(npy_half_to_float(a) - npy_half_to_float(b))\n#define half_ctype_multiply(a, b, outp) *(outp) = \\\n npy_float_to_half(npy_half_to_float(a) * npy_half_to_float(b))\n#define half_ctype_divide(a, b, outp) *(outp) = \\\n npy_float_to_half(npy_half_to_float(a) \/ npy_half_to_float(b))\n```\nThe code above is taken from scalarmath.c.src in the NumPy source. You can also take a look at loops.c.src for the corresponding code for array ufuncs. The supporting npy_half_to_float and npy_float_to_half functions are defined in halffloat.c, along with various other support functions for the float16 type. For the second question: no, there's no float8 type in NumPy. float16 is a standardized type (described in the IEEE 754 standard), that's already in wide use in some contexts (notably GPUs). There's no IEEE 754 float8 type, and there doesn't appear to be an obvious candidate for a \"standard\" float8 type.
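As a quick illustrative check (a sketch of an interactive session; only np.dtype from base NumPy is assumed here), asking NumPy for such a dtype simply fails: \n```python\nimport numpy as np\nnp.dtype('float16') # exists: dtype('float16')\nnp.dtype('float8') # raises TypeError: data type 'float8' not understood\n```\n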
I'd also guess that there just hasn't been that much demand for float8 support in NumPy.","best_answers_score":0.7456,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/38975770\/python-numpy-float16-datatype-operations-and-float8","best_answers_votes":40,"tags":["python","numpy","floating-point","precision"],"question_length":530,"response_length":1871,"tag_count":4} {"question":"Converting a NumPy array to a PIL image I want to create a PIL image from a NumPy array. Here is my attempt: \n```\n# Create a NumPy array, which has four elements. The top-left should be pure \n# red, the top-right should be pure blue, the bottom-left should be pure green, \n# and the bottom-right should be yellow.\npixels = np.array([[[255, 0, 0], [0, 255, 0]], [[0, 0, 255], [255, 255, 0]]])\n\n# Create a PIL image from the NumPy array\nimage = Image.fromarray(pixels, 'RGB')\n\n# Print out the pixel values\nprint image.getpixel((0, 0))\nprint image.getpixel((0, 1))\nprint image.getpixel((1, 0))\nprint image.getpixel((1, 1))\n\n# Save the image\nimage.save('image.png')\n```\nHowever, the print out gives the following: \n```none\n(255, 0, 0)\n(0, 0, 0)\n(0, 0, 0)\n(0, 0, 0)\n```\nAnd the saved image has pure red in the top-left, but all the other pixels are black. Why are these other pixels not retaining the colour I have assigned to them in the NumPy array?","response":"The RGB mode is expecting 8-bit values, so just casting your array should fix the problem: \n```python\nimage = Image.fromarray(pixels.astype('uint8'), 'RGB')\n\n# Print out the pixel values\nprint image.getpixel((0, 0))\nprint image.getpixel((0, 1))\nprint image.getpixel((1, 0))\nprint image.getpixel((1, 1))\n#Output\n#(255, 0, 0)\n#(0, 0, 255)\n#(0, 255, 0)\n#(255, 255, 0)\n```","best_answers_score":0.7455,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/42096141\/converting-a-numpy-array-to-a-pil-image","best_answers_votes":62,"tags":["python","image","numpy","python-imaging-library"],"question_length":946,"response_length":421,"tag_count":4} {"question":"Find nearest value in numpy array How do I find the nearest value in a numpy array? Example: \n```\nnp.find_nearest(array, value)\n```","response":"```\nimport numpy as np\ndef find_nearest(array, value):\n array = np.asarray(array)\n idx = (np.abs(array - value)).argmin()\n return array[idx]\n```\nExample usage: \n```\narray = np.random.random(10)\nprint(array)\n# [ 0.21069679 0.61290182 0.63425412 0.84635244 0.91599191 0.00213826\n# 0.17104965 0.56874386 0.57319379 0.28719469]\n\nprint(find_nearest(array, value=0.5))\n# 0.568743859261\n```","best_answers_score":0.7454,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/2566412\/find-nearest-value-in-numpy-array","best_answers_votes":749,"tags":["python","search","numpy"],"question_length":131,"response_length":402,"tag_count":3} {"question":"Numpy quirk: Apply function to all pairs of two 1D arrays, to get one 2D array Let's say I have 2 one-dimensional (1D) numpy arrays, a and b, with lengths n1 and n2 respectively. I also have a function, F(x,y), that takes two values. Now I want to apply that function to each pair of values from my two 1D arrays, so the result would be a 2D numpy array with shape n1, n2. The i, j element of the two-dimensional array would be F(a[i], b[j]). I haven't been able to find a way of doing this without a horrible amount of for-loops, and I'm sure there's a much simpler (and faster!)
way of doing this in numpy. Thanks in advance!","response":"You can use numpy broadcasting to do the calculation on the two arrays, turning a into a vertical 2D array using newaxis: \n```python\na = np.array([1, 2, 3]) # n1 = 3\nb = np.array([4, 5]) # n2 = 2\n# if function is c(i, j) = a(i) + b(j)*2:\nc = a[:, None] + b*2\n```\n\n```python\nc\n#Output\n#array([[ 9, 11],\n# [10, 12],\n# [11, 13]])\n```\nTo benchmark: \n```python\na = arange(100)\nb = arange(222)\n\ntimeit r = np.array([[f(i, j) for j in b] for i in a])\n# 10 loops, best of 3: 29.9 ms per loop\n\ntimeit c = a[:, None] + b*2\n# 10000 loops, best of 3: 71.6 us per loop\n```","best_answers_score":0.7446,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/21226610\/numpy-quirk-apply-function-to-all-pairs-of-two-1d-arrays-to-get-one-2d-array","best_answers_votes":27,"tags":["python","python-2.7","numpy"],"question_length":627,"response_length":617,"tag_count":3} {"question":"Creating a Confidence Ellipse in a scatterplot using matplotlib How do I create a confidence ellipse in a scatterplot using matplotlib? The following code works up to creating the scatter plot. Then, is anyone familiar with putting confidence ellipses over the scatter plot? \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nx = [5,7,11,15,16,17,18]\ny = [8, 5, 8, 9, 17, 18, 25]\n\nplt.scatter(x,y)\nplt.show()\n```\nFollowing is the reference for Confidence Ellipses from SAS. http:\/\/support.sas.com\/documentation\/cdl\/en\/grstatproc\/62603\/HTML\/default\/viewer.htm#a003160800.htm The code in SAS is like this: \n```\nproc sgscatter data=sashelp.iris(where=(species=\"Versicolor\"));\n title \"Versicolor Length and Width\";\n compare y=(sepalwidth petalwidth)\n x=(sepallength petallength)\n \/ reg ellipse=(type=mean) spacing=4;\nrun;\n```","response":"After giving the accepted answer a go, I found that it doesn't choose the quadrant correctly when calculating theta, as it relies on np.arccos. Taking a look at the 'possible duplicate' and Joe Kington's solution on github, I watered his code down to this: \n```\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse\n\ndef eigsorted(cov):\n vals, vecs = np.linalg.eigh(cov)\n order = vals.argsort()[::-1]\n return vals[order], vecs[:,order]\n\nx = [5,7,11,15,16,17,18]\ny = [25, 18, 17, 9, 8, 5, 8]\n\nnstd = 2\nax = plt.subplot(111)\n\ncov = np.cov(x, y)\nvals, vecs = eigsorted(cov)\ntheta = np.degrees(np.arctan2(*vecs[:,0][::-1]))\nw, h = 2 * nstd * np.sqrt(vals)\nell = Ellipse(xy=(np.mean(x), np.mean(y)),\n width=w, height=h,\n angle=theta, color='black')\nell.set_facecolor('none')\nax.add_artist(ell)\nplt.scatter(x, y)\nplt.show()\n```","best_answers_score":0.7437,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/20126061\/creating-a-confidence-ellipse-in-a-scatterplot-using-matplotlib","best_answers_votes":29,"tags":["python","numpy","matplotlib","scipy"],"question_length":842,"response_length":892,"tag_count":4} {"question":"How to count values in a certain range in a Numpy array? I have a NumPy array of values. I want to count how many of these values are in a specific range, say 25 < x < 100. I have read about the counter, but it seems to only be valid for specific values, not ranges of values. I have searched, but have not found anything regarding my specific problem.
If someone could point me towards the proper documentation I would appreciate it. Thank you. I have tried this \n```\nX = array(X)\n for X in range(25, 100):\n print(X)\n```\nBut it just gives me the numbers in between 25 and 99. EDIT The data I am using was created by another program. I then used a script to read the data and store it as a list. I then took the list and turned it into an array using array(r). Edit The result of running \n```\n\n```python\na[0:10]\n#Output\n# array(['29.63827346', '40.61488812', '25.48300065', '26.22910525',\n# '42.41172923', '20.15013315', '34.95323355', '13.03604098',\n# '29.71097606', '9.53222141'], \n# dtype='>>\n#```\n```","best_answers_score":0.7426,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/57351363\/is-there-a-difference-between-boardx-y-and-boardxy-in-python","best_answers_votes":48,"tags":["python","arrays","list","numpy","indexing"],"question_length":1231,"response_length":228,"tag_count":5} {"question":"How to filter rows of a numpy array I am looking to apply a function to each row of a numpy array. If this function evaluates to True I will keep the row, otherwise I will discard it. For example, my function might be: \n```py\ndef f(row):\n if sum(row)>10: return True\n else: return False\n```\nI was wondering if there was something similar to: \n```py\nnp.apply_over_axes()\n```\nwhich applies a function to each row of a numpy array and returns the result. I was hoping for something like: \n```py\nnp.filter_over_axes()\n```\nwhich would apply a function to each row of a numpy array and only return rows for which the function returned True. Is there anything like this? Or should I just use a for loop?","response":"Ideally, you would be able to implement a vectorized version of your function and use that to do boolean indexing. For the vast majority of problems this is the right solution. Numpy provides quite a few functions that can act over various axes as well as all the basic operations and comparisons, so most useful conditions should be vectorizable. \n```\nimport numpy as np\n\nx = np.random.randn(20, 3)\nx_new = x[np.sum(x, axis=1) > .5]\n```\nIf you are absolutely sure that you can't do the above, I would suggest using a list comprehension (or np.apply_along_axis) to create an array of bools to index with. \n```\ndef myfunc(row):\n return sum(row) > .5\n\nbool_arr = np.array([myfunc(row) for row in x])\nx_new = x[bool_arr]\n```\nThis will get the job done in a relatively clean way, but will be significantly slower than a vectorized version. An example: \n```\nx = np.random.randn(5000, 200)\n\n%timeit x[np.sum(x, axis=1) > .5]\n# 100 loops, best of 3: 5.71 ms per loop\n\n%timeit x[np.array([myfunc(row) for row in x])]\n# 1 loops, best of 3: 217 ms per loop\n```","best_answers_score":0.7423,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26154711\/how-to-filter-rows-of-a-numpy-array","best_answers_votes":41,"tags":["python","arrays","python-3.x","numpy","filter"],"question_length":702,"response_length":1053,"tag_count":5} {"question":"Why does PyCharm give unresolved reference errors on some Numpy imports? The following line in PyCharm is flagged by on-the-fly inspection with unresolved reference errors for each import. (They are underlined red.)
\n```\nfrom numpy import tan, arcsin, arccos, arctan\n```\nHowever, the following imports do not cause any error\/warning: \n```\nfrom numpy import sin, cos, arctan2, sqrt, cross, pi\n```\nThe code in which I use these imports runs fine without any errors or warnings. I generally rely on PyCharm's red errors as a warning that my code is broken and will not run, but in this case PyCharm is wrong. Why are some of numpy's functions recognized by PyCharm's introspection and others aren't? Current versions: Windows 7 64-bit, Python 2.7.5, PyCharm 3.1.2, Numpy 1.8. Thanks!","response":"The reason you are getting this is because of PyCharm's static analysis. What PyCharm does is use static skeletons (some are pre-generated and some are generated) to give you the analysis. Take a look at the pre-generated skeletons here -> https:\/\/github.com\/JetBrains\/python-skeletons This might be solved by enabling the setting shown in the original answer's screenshot (not reproduced here). However, if that does not work, you can suppress the inspection for the line (again shown as a screenshot in the original), which will block off the error; the suppression will appear as a comment above the line.","best_answers_score":0.7417,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23689183\/why-does-pycharm-give-unresolved-reference-errors-on-some-numpy-imports","best_answers_votes":22,"tags":["python","python-2.7","numpy","pycharm"],"question_length":775,"response_length":446,"tag_count":4} {"question":"How to prevent TypeError: list indices must be integers, not tuple when copying a python list to a numpy array? I am trying to create 3 numpy arrays\/lists using data from another array called mean_data as follows: \n```\n---> 39 R = np.array(mean_data[:,0])\n 40 P = np.array(mean_data[:,1])\n 41 Z = np.array(mean_data[:,2])\n```\nWhen I try to run the program I get the error TypeError: list indices must be integers, not tuple. The mean_data list looks like this sample...: \n```\n[6.0, 315.0, 4.8123788544375692e-06],\n[6.5, 0.0, 2.259217450023793e-06],\n[6.5, 45.0, 9.2823565008402673e-06],\n[6.5, 90.0, 8.309270169336028e-06],\n[6.5, 135.0, 6.4709418114245381e-05],\n[6.5, 180.0, 1.7227922423558414e-05],\n[6.5, 225.0, 1.2308522579848724e-05],\n[6.5, 270.0, 2.6905672894824344e-05],\n[6.5, 315.0, 2.2727114437176048e-05]]\n```\nI don't know how to prevent this error; I have tried creating mean_data as an np.array and using np.append to add values to it, but that doesn't solve the problem either.
Here's the traceback (was using ipython before): \n```\nTraceback (most recent call last):\n File \"polarplot.py\", line 36, in \n R = np.array(mean_data[:,0])\nTypeError: list indices must be integers, not tuple\n```\nAnd the other way I tried to create an array was: \n```\nmean_data = np.array([])\n\nfor ur, ua in it.product(uradius, uangle):\n samepoints = (data[:,0]==ur) & (data[:,1]==ua)\n if samepoints.sum() > 1: # check if there is more than one match\n np.append(mean_data[ur, ua, np.mean(data[samepoints,-1])])\n elif samepoints.sum() == 1:\n np.append(mean_data, [ur, ua, data[samepoints,-1]])\n```\nThe traceback on that is: \n```\nIndexError Traceback (most recent call last)\n in ()\n 31 samepoints = (data[:,0]==ur) & (data[:,1]==ua)\n 32 if samepoints.sum() > 1: # check if there is more than one match\n---> 33 np.append(mean_data[ur, ua, np.mean(data[samepoints,-1])])\n 34 elif samepoints.sum() == 1:\n 35 np.append(mean_data, [ur, ua, data[samepoints,-1]])\n\nIndexError: invalid index\n```","response":"The variable mean_data is a nested list; in Python, accessing a nested list cannot be done by multi-dimensional slicing, i.e. mean_data[1,2]; instead one would write mean_data[1][2]. This is because mean_data[2] is a list. Further indexing is done recursively - since mean_data[2] is a list, mean_data[2][0] is the first index of that list. Additionally, mean_data[:][0] does not work because mean_data[:] returns mean_data. The solution is to replace the array, or import the original data, as follows: \n```\nmean_data = np.array(mean_data)\n```\nnumpy arrays (like MATLAB arrays and unlike nested lists) support multi-dimensional slicing with tuples.","best_answers_score":0.741,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15884527\/how-to-prevent-typeerror-list-indices-must-be-integers-not-tuple-when-copying","best_answers_votes":101,"tags":["python","arrays","list","numpy","typeerror"],"question_length":2125,"response_length":648,"tag_count":5} {"question":"Python memory usage of numpy arrays I'm using Python to analyse some large files and I'm running into memory issues, so I've been using sys.getsizeof() to try and keep track of the usage, but its behaviour with numpy arrays is bizarre. Here's an example involving a map of albedos that I'm having to open: \n```python\nimport numpy as np\n```\n\n```python\nimport struct\n```\n\n```python\nfrom sys import getsizeof\n```\n\n```python\nf = open('Albedo_map.assoc', 'rb')\n```\n\n```python\ngetsizeof(f)\n#Output\n#144\n```\n\n```python\nalbedo = struct.unpack('%df' % (7200*3600), f.read(7200*3600*4))\n```\n\n```python\ngetsizeof(albedo)\n#Output\n#207360056\n```\n\n```python\nalbedo = np.array(albedo).reshape(3600,7200)\n```\n\n```python\ngetsizeof(albedo)\n#Output\n#80\n```\nWell the data's still there, but the size of the object, a 3600x7200 pixel map, has gone from ~200 Mb to 80 bytes. I'd like to hope that my memory issues are over and just convert everything to numpy arrays, but I feel that this behaviour, if true, would in some way violate some law of information theory or thermodynamics, or something, so I'm inclined to believe that getsizeof() doesn't work with numpy arrays. 
Any ideas?","response":"You can use array.nbytes for numpy arrays, for example: \n```\nimport numpy as np\nfrom sys import getsizeof\na = [0] * 1024\nb = np.array(a)\nprint(getsizeof(a))\nprint(b.nbytes)\n```\nOutput: \n```\n8264\n8192\n```","best_answers_score":0.7408,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11784329\/python-memory-usage-of-numpy-arrays","best_answers_votes":357,"tags":["python","numpy","sys"],"question_length":1047,"response_length":203,"tag_count":3} {"question":"Convert timedelta64[ns] column to seconds in Python Pandas DataFrame A pandas DataFrame column duration contains timedelta64[ns] as shown. How can you convert them to seconds? \n```\n0 00:20:32\n1 00:23:10\n2 00:24:55\n3 00:13:17\n4 00:18:52\nName: duration, dtype: timedelta64[ns]\n```\nI tried the following \n```\nprint df[:5]['duration'] \/ np.timedelta64(1, 's')\n```\nbut got the error \n```\nTraceback (most recent call last):\n File \"test.py\", line 16, in \n print df[0:5]['duration'] \/ np.timedelta64(1, 's')\n File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\series.py\", line 130, in wrapper\n \"addition and subtraction, but the operator [%s] was passed\" % name)\nTypeError: can only operate on a timedeltas for addition and subtraction, but the operator [__div__] was passed\n```\nAlso tried \n```\nprint df[:5]['duration'].astype('timedelta64[s]')\n```\nbut received the error \n```\nTraceback (most recent call last):\n File \"test.py\", line 17, in \n print df[:5]['duration'].astype('timedelta64[s]')\n File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\series.py\", line 934, in astype\n values = com._astype_nansafe(self.values, dtype)\n File \"C:\\Python27\\lib\\site-packages\\pandas\\core\\common.py\", line 1653, in _astype_nansafe\n raise TypeError(\"cannot astype a timedelta from [%s] to [%s]\" % (arr.dtype,dtype))\nTypeError: cannot astype a timedelta from [timedelta64[ns]] to [timedelta64[s]]\n```","response":"Use the Series dt accessor to get access to the methods and attributes of a datetime (timedelta) series. \n```python\ns\n#Output\n#0 -1 days +23:45:14.304000\n#1 -1 days +23:46:57.132000\n#2 -1 days +23:49:25.913000\n#3 -1 days +23:59:48.913000\n#4 00:00:00.820000\n#dtype: timedelta64[ns]\n```\n\n```python\ns.dt.total_seconds()\n#Output\n#0 -885.696\n#1 -782.868\n#2 -634.087\n#3 -11.087\n#4 0.820\n#dtype: float64\n```\nThere are other Pandas Series Accessors for String, Categorical, and Sparse data types.","best_answers_score":0.74,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/26456825\/convert-timedelta64ns-column-to-seconds-in-python-pandas-dataframe","best_answers_votes":97,"tags":["python","python-2.7","numpy","pandas"],"question_length":1417,"response_length":484,"tag_count":4} {"question":"Is this the best way to add an extra dimension to a numpy array in one line of code? If k is a numpy array of an arbitrary shape, so k.shape = (s1, s2, s3, ..., sn), and I want to reshape it so that k.shape becomes (s1, s2, ..., sn, 1), is this the best way to do it in one line?
\n```\nk.reshape(*(list(k.shape) + [1]))\n```","response":"It's easier like this: \n```\nk.reshape(k.shape + (1,))\n```\nBut if all you want is to add an empty dimension at the end, you should use numpy.newaxis: \n```\nimport numpy as np\nk = k[..., np.newaxis]\n```\nor \n```\nk = k[..., None]\n```\n(See the documentation on slicing).","best_answers_score":0.7399,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/17835121\/is-this-the-best-way-to-add-an-extra-dimension-to-a-numpy-array-in-one-line-of-c","best_answers_votes":44,"tags":["python","arrays","python-3.x","numpy","reshape"],"question_length":322,"response_length":264,"tag_count":5} {"question":"Function to determine if two numbers are nearly equal when rounded to n significant decimal digits I have been asked to test a library provided by a 3rd party. The library is known to be accurate to n significant figures. Any less-significant errors can safely be ignored. I want to write a function to help me compare the results: \n```\ndef nearlyequal(a, b, sigfig=5):\n```\nThe purpose of this function is to determine if two floating-point numbers (a and b) are approximately equal. The function will return True if a==b (exact match) or if a and b have the same value when rounded to sigfig significant-figures when written in decimal. Can anybody suggest a good implementation? I've written a mini unit-test. Unless you can see a bug in my tests then a good implementation should pass the following: \n```\nassert nearlyequal(1, 1, 5) \nassert nearlyequal(1.0, 1.0, 5) \nassert nearlyequal(1.0, 1.0, 5) \nassert nearlyequal(-1e-9, 1e-9, 5) \nassert nearlyequal(1e9, 1e9 + 1 , 5) \nassert not nearlyequal(1e4, 1e4 + 1, 5) \nassert nearlyequal(0.0, 1e-15, 5) \nassert not nearlyequal(0.0, 1e-4, 6)\n```\nAdditional notes: \n- Values a and b might be of type int, float or numpy.float64. Values a and b will always be of the same type. \n- It's vital that conversion does not introduce additional error into the function. Let's keep this numerical, so functions that convert to strings or use non-mathematical tricks are not ideal. \n- This program will be audited by somebody who is a mathematician who will want to be able to prove that the function does what it is supposed to do. \n- Speed... I've got to compare a lot of numbers, so the faster the better. \n- I've got numpy, scipy and the standard-library. Anything else will be hard for me to get, especially for such a small part of the project.","response":"As of Python 3.5, the standard way to do this (using the standard library) is with the math.isclose function. It has the following signature: \n```\nisclose(a, b, rel_tol=1e-9, abs_tol=0.0)\n```\nAn example of usage with absolute error tolerance: \n```\nfrom math import isclose\na = 1.0\nb = 1.00000001\nassert isclose(a, b, abs_tol=1e-8)\n```\nIf you want it with precision of n decimal places, simply replace the last line with: \n```\nassert isclose(a, b, abs_tol=10**-n)\n```","best_answers_score":0.7391,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/558216\/function-to-determine-if-two-numbers-are-nearly-equal-when-rounded-to-n-signific","best_answers_votes":165,"tags":["python","math","floating-point","numpy"],"question_length":1770,"response_length":466,"tag_count":4} {"question":"Vectorized way of calculating row-wise dot product of two matrices with Scipy I want to calculate the row-wise dot product of two matrices of the same dimension as fast as possible.
This is the way I am doing it: \n```\nimport numpy as np\na = np.array([[1,2,3], [3,4,5]])\nb = np.array([[1,2,3], [1,2,3]])\nresult = np.array([])\nfor row1, row2 in a, b:\n result = np.append(result, np.dot(row1, row2))\nprint result\n```\nand of course the output is: \n```\n[ 26. 14.]\n```","response":"A straightforward way to do that is: \n```python\nimport numpy as np\na=np.array([[1,2,3],[3,4,5]])\nb=np.array([[1,2,3],[1,2,3]])\nnp.sum(a*b, axis=1)\n```\nwhich avoids the python loop and is faster in cases like: \n```python\ndef npsumdot(x, y):\n return np.sum(x*y, axis=1)\n\ndef loopdot(x, y):\n result = np.empty((x.shape[0]))\n for i in range(x.shape[0]):\n result[i] = np.dot(x[i], y[i])\n return result\n\ntimeit npsumdot(np.random.rand(500000,50),np.random.rand(500000,50))\n# 1 loops, best of 3: 861 ms per loop\ntimeit loopdot(np.random.rand(500000,50),np.random.rand(500000,50))\n# 1 loops, best of 3: 1.58 s per loop\n```","best_answers_score":0.7374,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15616742\/vectorized-way-of-calculating-row-wise-dot-product-two-matrices-with-scipy","best_answers_votes":71,"tags":["numpy","scipy","vectorization","matrix-multiplication","dot-product"],"question_length":463,"response_length":631,"tag_count":5} {"question":"size of NumPy array Is there an equivalent to the MATLAB size() command in Numpy? In MATLAB, \n```matlab\na = zeros(2,5)\n% 0 0 0 0 0\n% 0 0 0 0 0\n\nsize(a)\n% 2 5\n```\nIn Python, \n```python\na = zeros((2,5))\n\na\n#Output\n#array([[ 0., 0., 0., 0., 0.],\n# [ 0., 0., 0., 0., 0.]])\n\n?????\n```","response":"This is called the \"shape\" in NumPy, and can be requested via the .shape attribute: \n```python\na = zeros((2, 5))\n\na.shape\n#Output\n#(2, 5)\n```\nIf you prefer a function, you could also use numpy.shape(a).","best_answers_score":0.7373,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/11125429\/size-of-numpy-array","best_answers_votes":73,"tags":["python","arrays","numpy","size"],"question_length":291,"response_length":194,"tag_count":4} {"question":"Is there a standard solution for Gauss elimination in Python? Is there somewhere in the cosmos of scipy\/numpy\/... a standard method for Gauss-elimination of a matrix? One finds many snippets via google, but I would prefer to use \"trusted\" modules if possible.","response":"I finally found that it can be done using LU decomposition. Here the U matrix represents the reduced form of the linear system. \n```\nfrom numpy import array\nfrom scipy.linalg import lu\n\na = array([[2.,4.,4.,4.],[1.,2.,3.,3.],[1.,2.,2.,2.],[1.,4.,3.,4.]])\n\npl, u = lu(a, permute_l=True)\n```\nThen u reads \n```\narray([[ 2., 4., 4., 4.],\n [ 0., 2., 1., 2.],\n [ 0., 0., 1., 1.],\n [ 0., 0., 0., 0.]])\n```\nDepending on the solvability of the system this matrix has an upper triangular or trapezoidal structure. In the above case a row of zeros arises, as the matrix has only rank 3.","best_answers_score":0.7359,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/15638650\/is-there-a-standard-solution-for-gauss-elimination-in-python","best_answers_votes":42,"tags":["python","matrix","numpy"],"question_length":259,"response_length":607,"tag_count":3} {"question":"\"Converting\" Numpy arrays to Matlab and vice versa I am looking for a way to pass NumPy arrays to Matlab.
I've managed to do this by storing the array into an image using scipy.misc.imsave and then loading it using imread, but this of course causes the matrix to contain values between 0 and 255 instead of the 'real' values. Taking the product of this matrix divided by 256, and the maximum value in the original NumPy array gives me the correct matrix, but I feel that this is a bit tedious. Is there a simpler way?","response":"Sure, just use scipy.io.savemat As an example: \n```\nimport numpy as np\nimport scipy.io\n\nx = np.linspace(0, 2 * np.pi, 100)\ny = np.cos(x)\n\nscipy.io.savemat('test.mat', dict(x=x, y=y))\n```\nSimilarly, there's scipy.io.loadmat. You then load this in matlab with load test. Alternatively, as @JAB suggested, you could just save things to an ascii tab delimited file (e.g. numpy.savetxt). However, you'll be limited to 2 dimensions if you go this route. On the other hand, ascii is the universal exchange format. Pretty much anything will handle a delimited text file.","best_answers_score":0.7353,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/10997254\/converting-numpy-arrays-to-matlab-and-vice-versa","best_answers_votes":60,"tags":["python","matlab","numpy"],"question_length":517,"response_length":562,"tag_count":3} {"question":"Manually set color of points in legend I'm making a scatter plot which looks like this: (MWE at bottom of question) As can be seen in the image above, the colors of the points in the legend are set to blue automatically by matplotlib. I need to set these points to some other color not present in the colormap (i.e. black) so they won't generate confusion with the colors associated with said colormap. I looked around but the matplotlib.legend module does not seem to accept a color keyword. Is there any way to do this? Here's the MWE: \n```\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndef rand_data():\n return np.random.uniform(low=0., high=1., size=(100,))\n\n# Generate data.\nx, y, x2, x3 = [rand_data() for i in range(4)]\n# This data defines the markers and labels used.\nx1 = np.random.random_integers(7, 9, size=(100,))\n\n# Order all lists so smaller points are on top.\norder = np.argsort(-np.array(x2))\n# Order x and y.\nx_o, y_o = np.take(x, order), np.take(y, order)\n# Order list related to markers and labels.\nz1 = np.take(x1, order)\n# Order list related to sizes.\nz2 = np.take(x2, order)\n# Order list related to colors.\nz3 = np.take(x3, order)\n\nplt.figure()\ncm = plt.cm.get_cmap('RdYlBu')\n\n# Scatter plot where each value in z1 has a different marker and label\n# assigned.\nmrk = {7: ('o', '7'), 8: ('s', '8'), 9: ('D', '9')}\nfor key, value in mrk.items():\n\n s1 = (z1 == key)\n plt.scatter(x_o[s1], y_o[s1], marker=value[0], label=value[1],\n s=z2[s1] * 100., c=z3[s1], cmap=cm, lw=0.2)\n\n# Plot colorbar\nplt.colorbar()\n\n# Plot legend.\nplt.legend(loc=\"lower left\", markerscale=0.7, scatterpoints=1, fontsize=10)\n\nplt.show()\n```","response":"You can obtain the legend handles and change their colors individually. Thanks for the comments of @OrOrg and @Reinderien that led me to update this answer. 
\n```\nax = plt.gca()\nleg = ax.get_legend()\nleg.legend_handles[0].set_facecolor('red')\nleg.legend_handles[0].set_edgecolor('red')\n\nleg.legend_handles[1].set_facecolor('yellow')\nleg.legend_handles[1].set_edgecolor('yellow')\n```","best_answers_score":0.7352,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/23698850\/manually-set-color-of-points-in-legend","best_answers_votes":82,"tags":["python","numpy","matplotlib","colors","legend"],"question_length":1650,"response_length":381,"tag_count":5} {"question":"Reshape an array in NumPy Consider an array of the following form (just an example): \n```\n[[ 0 1]\n [ 2 3]\n [ 4 5]\n [ 6 7]\n [ 8 9]\n [10 11]\n [12 13]\n [14 15]\n [16 17]]\n```\nIts shape is [9,2]. Now I want to transform the array so that each column becomes a shape [3,3], like this: \n```\n[[ 0 6 12]\n [ 2 8 14]\n [ 4 10 16]]\n[[ 1 7 13]\n [ 3 9 15]\n [ 5 11 17]]\n```\nThe most obvious (and surely \"non-pythonic\") solution is to initialise an array of zeroes with the proper dimension and run two for-loops where it will be filled with data. I'm interested in a solution that is more idiomatic...","response":"```\na = np.arange(18).reshape(9,2)\nb = a.reshape(3,3,2).swapaxes(0,2)\n\n# a: \narray([[ 0, 1],\n [ 2, 3],\n [ 4, 5],\n [ 6, 7],\n [ 8, 9],\n [10, 11],\n [12, 13],\n [14, 15],\n [16, 17]])\n\n\n# b:\narray([[[ 0, 6, 12],\n [ 2, 8, 14],\n [ 4, 10, 16]],\n\n [[ 1, 7, 13],\n [ 3, 9, 15],\n [ 5, 11, 17]]])\n```","best_answers_score":0.7344,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/14476415\/reshape-an-array-in-numpy","best_answers_votes":70,"tags":["python","arrays","numpy","reshape"],"question_length":597,"response_length":377,"tag_count":4} {"question":"Ignoring -Inf values in arrays using numpy\/scipy in Python I have an NxM array in numpy that I would like to take the log of, and ignore entries that were negative prior to taking the log. When I take the log of negative entries, it returns -Inf, so I will have a matrix with some -Inf values as a result. I then want to sum over the columns of this matrix, but ignoring the -Inf values -- how can I do this? For example, \n```\nmylogarray = log(myarray)\n# take sum, but ignore -Inf?\nsum(mylogarray, 0)\n```\nI know there's nansum and I need the equivalent, something like infsum. Thanks.","response":"The easiest way to do this is to use numpy.ma.masked_invalid(): \n```\na = numpy.log(numpy.arange(15))\na.sum()\n# -inf\nnumpy.ma.masked_invalid(a).sum()\n# 25.19122118273868\n```","best_answers_score":0.7341,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/4485779\/ignoring-inf-values-in-arrays-using-numpy-scipy-in-python","best_answers_votes":44,"tags":["python","numpy","scipy"],"question_length":584,"response_length":172,"tag_count":3} {"question":"ValueError: shape mismatch: objects cannot be broadcast to a single shape I am using SciPy's pearsonr(x,y) method and I cannot figure out why the following error is happening: \n```none\nValueError: shape mismatch: objects cannot be broadcast to a single shape\n```\nIt computes the first two (I am running several thousand of these tests in a loop) and then dies. Does anyone have any ideas about what the problem might be? 
\n```py\nr_num = n*(np.add.reduce(xm*ym))\n```\nThis is the line in the pearsonr method that the error occurs on.","response":"This particular error implies that one of the variables being used in the arithmetic on the line has a shape incompatible with another on the same line (i.e., both different and non-scalar). Since n and the output of np.add.reduce() are both scalars, this implies that the problem lies with xm and ym, the two of which are simply your x and y inputs minus their respective means. Based on this, my guess is that your x and y inputs have different shapes from one another, making them incompatible for element-wise multiplication. ** Technically, it's not that variables on the same line have incompatible shapes. The only problem is when two variables being added, multiplied, etc., have incompatible shapes, whether the variables are temporary (e.g., function output) or not. Two variables with different shapes on the same line are fine as long as something else corrects the issue before the mathematical expression is evaluated.","best_answers_score":0.7335,"library_name":"numpy","question_url":"https:\/\/stackoverflow.com\/questions\/16950074\/valueerror-shape-mismatch-objects-cannot-be-broadcast-to-a-single-shape","best_answers_votes":56,"tags":["python","arrays","python-3.x","numpy","valueerror"],"question_length":534,"response_length":932,"tag_count":5} {"question":"How to set max output width in numpy? I am using a Jupyter notebook. I have a pretty wide screen, but the displayed output (say, when I print a numpy array) is formatted as if the screen was narrow. I found a way of increasing the width of the cells, with \n```\nfrom IPython.core.display import HTML\nHTML("